
Everyday Digital Schooling – implementing tablets in Norwegian primary school

Examining outcome measures in the first cohort
Associate professor, University of Bergen
Cand.polit., Rambøll Management
Professor (dr.philos) of education, University of Bergen

Implementation of tablets in Norwegian schools has become quite common, but we still have too little research knowledge about the learning outcomes of these implementation measures. To gain more knowledge about the topic, this trailing research examines the first cohort of Bærum Municipality’s implementation of tablets in primary school. The outcome measures in the study are external to the intervention, and are recorded data from National Tests (National reading, arithmetic and English Tests, Classes 5, 8 and 9, National Mapping Tests for reading and arithmetic, Classes 1–3, and the 2014–2016 National Pupil Survey). The whole study (N=15,708) relies on an explanatory, sequential mixed-methods design (Fetters, Curry & Creswell, 2013), and in this study we examine the quantitative effects of this implementation. Regarding the general focus areas, that is, spelling, reading, comprehension and arithmetic in the National Mapping Tests for reading and arithmetic for Classes 1–3, we find no significant impact on pupil learning1 with respect to Classes 1 and 2 at the Pilot 1 schools in 2015 and 2016, compared with the results obtained for the other Bærum schools. The same applies to Class 3, with the exception of arithmetic, where we see negative results as a consequence of being a pilot school and having used tablets. This differs from what we see in the results of the National Tests for Class 5 in 2015 and 2016, where we note significant positive outcomes with respect to arithmetic and English among boys at the pilot schools, compared with Class 5 boys at the other Bærum schools. The findings are based only on the first nine months after the implementation of tablets, and a second cohort of research is therefore recommended in order to validate and elaborate on these preliminary findings.

Introduction

Lack of research-based knowledge about the effects of educational technology such as tablets in school has been identified as an urgent challenge both internationally and in Norway. One of the reasons for this is that, for example, the PISA study Students, Computers and Learning reveals that: “(…) we have not yet become good enough at the kind of pedagogies that make the most of technology” (OECD 2015, p. 5). This trailing research is positioned towards this “gap” in the research area and examines the first (of five) cohorts in the Municipality of Bærum’s “Everyday Digital Schooling” tablet project, which involves examining outcome measures regularly through our longitudinal research design. This is the first large-scale effect study of the implementation of tablets in Norwegian primary school where the outcome measures are external to the intervention, as recommended by, for example, Cheung and Slavin (2013). This means that the learning outcomes in this study are the results from the National Tests, the National Mapping Tests and the National Pupil Survey. Cheung and Slavin (2013) underline the rationale for this kind of effect analysis and research study:

If a particular combination of hardware, software, print materials, professional development for teachers, and other elements can be reliably replicated in many classrooms, then it is worth evaluating as a potential means of enhancing student outcomes (Cheung and Slavin 2013, p. 90).

Bærum Municipality’s “Everyday Digital Schooling” school development project commenced at five schools in Bærum in January 2015. The aim of introducing tablets as a primary learning aid for every single pupil at all stages at the pilot schools was to improve the academic and personal outcomes acquired by the pupils from their schooling.2 Investing in tablets was simultaneously supposed to challenge the teachers to develop and change their own teaching and working practices wherever possible, and to help provide better learning for the pupils. In other words, it was to renew their pedagogy and didactics in order to change the teacher’s role from the “sage on the stage” to the “guide on the side” (Van Dusen 2000, p. 14).

The experiences gained from the first six months were evaluated during the spring of 2015, and the results and findings were then collated and summarised in the Municipality’s own report entitled “Everyday Digital Schooling – review and analysis of the experiences acquired by five pilot schools in the Municipality of Bærum” (Bærum Municipality 2015). This report concluded that the provisional evaluations had produced favourable results with respect to pupil motivation, mastery and learning. The feedback received from the pupils, teachers and head teachers also pointed towards the fact that using tablets provides good opportunities for more individually adapted teaching and differentiation of the assessment work undertaken by the teachers. The report also pointed out challenges associated with several factors, namely that pupils can become distracted by non-academic online activities (e.g. in social media and/or games), that several pupils prefer to write by hand because this enables them to remember things better (van der Meer & van der Weel 2017), and that parents’ opportunities for gaining an insight into their own children’s school work could be improved.

Based on the favourable findings contained in the report, and in order to extend the knowledge and decision-making basis for the possible introduction of tablets in all of the Municipality’s primary and secondary schools, the Sectoral Committee for Children and Adolescents in the Municipality of Bærum decided to extend the pilot project to include a further ten schools with effect from August 2016. It was decided to examine this intervention by means of trailing research conducted by external trailing researchers. The aim of this trailing research was therefore to conduct further research on the “Everyday Digital Schooling” project and investigate the degree to which the aims of the project relating to better learning, better academic and personal outcomes, changes in pupil and teacher roles, knowledge-building and sharing of experiences were met as at 15 March 2017. In this trailing research the outcome measures are external to the intervention and are recorded data from National Tests (National reading, arithmetic and English tests, Classes 5, 8 and 9, National Mapping Tests for reading and arithmetic, Classes 1–3, and the 2014–2016 National Pupil Survey).

In this first cohort of the trailing research, we only examine the quantitative effects of this implementation. The paper first presents the conceptual framework and the methodology of the study, followed by the results and a discussion of the study’s main findings.

Conceptual framework

Literature review

What actually is a tablet? Is it a kind of mix between a laptop, PC and a mobile phone? Or is it something else? Since there are different perceptions of what a tablet is, we define it in this study as a “device with a touchscreen interface, screen sizes ranging from 5 inches to 12 inches, colour displays, Wi-Fi or 3G internet connectivity, and advanced mobile operating systems such as Apple iOS, Google Android, Windows 7 or BlackBerry” (Perrin 2011, p. 5). Since there are different types of tablets and tablet computers, we note that the hardware used in this project in Bærum consisted of iPads.

We should keep in mind that the first tablets came on the market only 6–7 years ago; it is thus a quite new kind of hardware. Because of this (and other reasons), there are still a limited number of large-scale research studies on this kind of tablet technology applied for educational purposes. More research is therefore needed in this area – especially since we know that, all around the world, there are tablet initiatives at different policy levels, from governments, counties, municipalities, schools and others, regarding the implementation of tablets in schools. Concerning such initiatives, Tamim, Borokhovski, Pickup, Bernard & El Saadi (2015a) state that “we still know too little about what their underlying philosophical and educational principles are (if they exist at all)” (p. 8), and Säljö (2017) expresses the same concerns. With this backdrop, Tamim et al. (2015a) carried out a systematic review of current government-supported tablet initiatives around the world to understand more about their educational underpinnings and underlying principles in general. This review concluded “that the majority of these initiatives have been driven by the tablet hype rather than by educational frameworks or research-based evidence” (p. 9).

To a certain degree, Escueta, Quan, Joshua and Oreopoulos (2017) find some of the same tendencies in their evidence-based review of educational technology in general:

We found that simply providing students with access to technology yields largely mixed results. At the K-12 level, much of the experimental evidence suggests that giving a child a computer may have limited impacts on learning outcomes, but generally improves computer proficiency and other cognitive outcomes. One bright spot that warrants further study is the provision of technology to students at the post-secondary level, an area with some positive RCT evidence (p. 87)

In addition, Fairlie and Robinson (2013) revealed much the same when they examined the effects of home computers on academic achievement among schoolchildren, concluding that “we find no evidence that home computers had an effect (either positive or negative) on any educational outcome, including grades, standardized test scores, or a host of other outcomes” (p. 234). From these three studies (and also from earlier meta-analyses such as Tamim et al. 2011), we can see that access to technology is not enough – there seems to be a consensus in the research community that it has to be closely attached to well-founded pedagogy and didactics. So, what do we know from recently published meta-analyses about tablets and mobile technology in pedagogical settings?

The meta-analysis “The effects of integrating mobile devices with teaching and learning on students’ learning performance: A meta-analysis and research synthesis” by Sung, Chang & Liu (2016) finds that “The overall mean effect size for learning achievement in this meta-analysis was 0.523, meaning that learning with mobiles is significantly more effective than traditional teaching methods that only use pen-and-paper or desktop computers” (p. 257). For tablet PCs they find a specific effect size of 0.615. Sung et al. also state that if we compare these effect sizes with Kulik and Kulik’s (1991) and Tamim et al.’s (2011) meta-analyses comparing education with and without computers (effect sizes between 0.30 and 0.35), some of the improvement might be attributable to the specific affordances that tablet and mobile technology provide. However, Sung et al. emphasise that more research has to examine such issues.

Tamim et al. (2015b) carried out a meta-analysis of sixty-eight studies, comprising twenty-seven quantitative and forty-one qualitative research studies, and concluded that “findings from the current meta-analysis indicate a moderate strength average effect size for the impact of tablets and smart mobile devices on student outcome measures” (p. 38).

These two meta-analyses are up to date, give some promising results, and indicate that tablets represent a type of hardware with affordances different from those of traditional computers. However, these are preliminary tendencies, and we need more research concerning the affordances tablets might (or might not) give.

Concerning literacy more specifically, Genlott and Grönlund (2016) examined the effects of the “Write to Learn” (WTL) method. The results showed that the WTL group achieved the best results, and they concluded that access to technology is not enough: ICT has to be attached to both didactical and pedagogical elements in instruction.

In their meta-analysis of the effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms, Cheung and Slavin (2013) find only a modest positive effect of d=0.15. In another meta-analysis, examining how features of educational technology applications affect student reading outcomes, they also find a modest positive effect of d=0.16 (Cheung and Slavin 2012). They explain that high-quality studies (included in their meta-analyses) within educational technology give a lower effect size than studies with methodological weaknesses (excluded from their meta-analyses).
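The effect sizes cited throughout this review (e.g. d=0.15, d=0.16, 0.523) are standardised mean differences (Cohen’s d). As a minimal sketch with invented scores (not data from any of the cited studies), the statistic can be computed as follows:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference: the raw difference in group means
    divided by the pooled standard deviation of the two groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# Invented scores: a mean difference of half a pooled SD gives d = 0.5.
print(cohens_d([2, 4, 6], [1, 3, 5]))  # 0.5
```

Because d is expressed in pooled standard deviation units, effect sizes from studies using different tests and scales can be compared directly, which is what makes the meta-analyses above possible.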

On the basis of this literature review we find that, despite some international research knowledge about tablets (and other types of educational hardware) in schools, we have very little research knowledge about how the large-scale implementation of tablets affects pupils’ learning outcomes in Norway. Our trailing research is therefore positioned towards this “gap”, and will provide empirical data related to our research questions.

Theoretical framework

The theoretical framework for the whole study underpins the research questions (and is not an analytical framework). It refers to the theories of Piaget (1967) and Vygotsky (1978), where tablets are related to both knowledge construction and collaborative learning, and linked to student-centred and group-based teaching design. Educational technology (like tablets), as it appears today in Bærum schools with its distinctive feature of digital tools, relates especially to more recent socio-cultural perspectives on learning (Wertsch, 1998; Cole, 1996; Säljö, 2005; Stahl, 1993; Lave & Wenger, 1991; Wenger, 1998) as a mediating artefact. The socio-cultural perspective emphasises that learning is constructed in interaction with other people and mediating artefacts, which is a significant focus of the basic thinking in the “Everyday Digital Schooling” school development project. James Wertsch states that such new kinds of mediation and mediating artefacts can give new possibilities and the experience of “(…) how the introduction of novel cultural tools transforms the action” (Wertsch, 1998, p. 42). The use of tablets for learning purposes also relates to Richard Mayer’s (2010) Multimedia Learning Theory, which describes learning with technology, that is, situations wherein technology is used for the purpose of promoting learning, and is concerned with the human construction of knowledge as a framework for learning.

However, tablets are a type of hardware that can be applied in numerous ways, and it is important to understand the affordances of such technology and the context of use. This is based on the fact that there are several similarities between ICT for entertainment use and educational technology for use in school, and sometimes it is hard to distinguish them from each other. However, educational technology is developed especially for educational purposes, while ICT consists of a myriad of technologies, such as social media, mobile phones, wireless broadband, PCs and so on, which are developed first of all for everyday life (and not especially for educational purposes). Tablets can be used in both contexts, but in this study we examine them as educational technology with certain affordances for teaching and learning in school contexts. Educational technology can therefore be defined as “the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources” (Januszewski & Molenda, 2008, p. 1). Thus, our theoretical underpinning for the study is also attached to digital didactics. This concept was introduced in Krumsvik’s (2008) article, “Situated learning and teachers’ digital competence”, and further examined in various studies (Krumsvik 2009a; Krumsvik 2009b; Almås & Krumsvik 2009; Krumsvik 2012). Like later digital didactic models of, for example, Jahnke, Bergström, Mårell-Olsson, Häll and Kumar (2016), this digital didactic model focuses on the most relevant elements teachers need to consider in the digitalised school with the awareness that “(…) adding 21st-century technologies to 20th-century teaching practices will just dilute the effectiveness of teaching” (OECD 2015, p. 5).

The coherence between the pupils’ knowledge construction and collaborative learning linked to student-centred teaching design in school (attached to sociocultural theory), learning with technology (tablets) attached to multimedia learning theory, and teachers’ pedagogical practice (attached to digital didactics) underpins the research questions of the study, which in the first cohort are:

  1. To what extent does the implementation of tablets affect learning outcomes at schools in Bærum Municipality (where the outcome measures are recorded data such as the National Mapping Tests, National Tests and the National Pupil Survey)?

  2. To what extent does the implementation of tablets affect social enjoyment and learning environments at schools in Bærum Municipality (based on the National Pupil Survey)?

To be able to examine these variance-oriented research questions in the first cohort, we have chosen trailing research and mixed methods research, as described below.

Methodology

This trailing research (Finne, Levin & Nilssen 1995) made use of mixed methods research (Fetters et al. 2013), which involved combining different methods and data sources. To be able to answer the research questions in this study, we have chosen an explanatory, sequential mixed-methods design (Fetters, Curry & Creswell 2013). We follow the staged approach, which means that data are reported in stages and published separately. In this article (the first cohort), we therefore only report the quantitative effect analysis, which is based on existing recorded data. The effects on the learning results are measured using the following data sources:

  1. National reading, arithmetic and English tests, Classes 5, 8 and 9.

  2. National Mapping Tests for reading and arithmetic, Classes 1–3.

  3. The 2014–2016 National Pupil Survey.

We have obtained the results of the National Tests from the Norwegian Directorate of Education and Training’s School Portal, and the results of the National Mapping Tests have been provided by the Municipality of Bærum. Our first two endpoints are based on class levels, divided according to gender and test type. The national arithmetic and English tests cover 2014 to 2016, since no comparable data are available prior to 2014. The reading test is nevertheless included in our analysis, but with the reservation that changes have been made to its scale, so that comparisons cannot be extended beyond 2016. However, this should not be a problem since the comparison is made up to 2016, but not any later. As regards the Mapping Tests, two respective tests were conducted in reading and arithmetic between 2014 and 2016.

Our third and final endpoint is social enjoyment and learning environments. This has been gathered from the National Pupil Survey, which focuses on how pupils perceive their learning environment at school, how motivated they are, their social well-being at school, whether they have experienced any bullying, how they experience the teachers, and so on. The results of the National Pupil Survey have also been obtained from the Norwegian Directorate of Education and Training’s School Portal, based on class levels and divided by gender. We have used as a basis the various indicators defined by the Directorate as being relevant for the pupils’ learning environments.3 We used data from the National Pupil Survey covering 2013 to 2015. No data for 2016 was available on the School Portal when our analysis was carried out.

Sample selection

All three endpoints are linked to school data taken from the Information System for Primary and Secondary Schools (GSI), and to the socio-economic indicators for the twenty-four primary schools in the Municipality of Bærum. The table below describes the pupils at the five pilot schools and the pupils at the other schools, where we investigate whether or not there are differences between the Pilot 1 schools and the other schools in Bærum with respect to different indicators.

Table 1

Description of the five pilot schools in relation to the other schools in 2014

                                                  Pilot schools   Non-pilot schools
Number of schools (total)                         5 schools       39 schools
Number of pupils (total)                          1,743 pupils    13,965 pupils
Percentage of secondary schools                   40%             31%
Percentage of schools with over 400 pupils        20%             41%
Average number of pupils per man-labour year      15.3 pupils     14.1 pupils
Average number of assistant hours per pupil       10 hours        19 hours
Socio-demographic variables *)
Average percentage with a low income b)           7.7%            7.3%
Average percentage with low or no qualifications  18.5%           16.8%
Average percentage of recipients of benefits      2.0%            1.5%
Average percentage of unemployed                  1.9%            1.8%
Average percentage of immigrants                  18.0%           15.4%

*) Based on living conditions statistics for Bærum Municipality within different school areas and numbers from Statistics Norway (Statistisk sentralbyrå) for 2011. The distribution of the statistics between the schools was made by the Development Unit in Bærum Municipality; for some schools there is a percentage breakdown between several areas. b) The share with a low income is defined as the proportion of the area’s population aged 16–64 with less than 50% of the median income. There are no significant differences between the pilot and non-pilot schools. Significance has been tested using a T-test for independent samples at significance levels of 10%, 5% and 1%. Source: Indicators from 2011 for 9 planning areas in Bærum, calculated by Statistics Norway (SSB).

This table shows that the Pilot 1 schools as a group are composed of a slightly higher percentage of secondary schools than other schools in the municipality when taken as a whole. Furthermore, the pilot schools are smaller than the schools which are not included in the pilot. The pilot schools also have fewer assistant hours per pupil than the other schools. All the variables have been statistically tested for significant deviations from the other schools. The tests showed that there were no significant differences between the pilot schools and other schools in Bærum in 2014.4
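The significance testing referred to above (a T-test for independent samples) can be sketched as follows. The sketch computes Welch’s t statistic, which does not assume equal variances in the two groups; the samples below are invented for illustration and are not the study’s school data.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances. |t| is then compared with a critical value at
    the chosen significance level (e.g. 10%, 5% or 1%)."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

# Invented indicator values for two groups of schools:
pilot = [49.1, 50.2, 48.7, 51.0, 49.5]
other = [50.1, 49.8, 50.5, 49.9]
t = welch_t(pilot, other)
```

With groups this small, |t| must be fairly large before a difference is declared significant, which is consistent with the table note reporting no significant differences.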

As regards the socio-economic parameters (using the Living Conditions Indicator), there are only differences with respect to the percentage of immigrants. In this context it would appear that the pilot schools have a higher percentage of children with immigrant backgrounds (proportion of immigrants and Norwegian-born children with immigrant parents), although statistically we find no differences between the two groups. Caution should normally be exercised when drawing conclusions on the basis of the socio-economic variables, since these variables date back to 2011 and not all the children in the population went to school then. The pupil base and the surrounding area are assumed to be constant since the school districts are only altered marginally each year. In our analysis the indicators are only used to test the soundness of the results from the other analyses, and not as an independent analysis.

In January 2017, Statistics Norway launched a report5 about school contribution (school added value) indicators for each school in Norway. School added value can be defined as: “The contribution of a school to student’s progress towards stated or prescribed education objectives (e.g. cognitive achievement). The contribution is net of other factors that contribute to student’s education progress” (OECD 2008, p. 12). The table below shows the Pilot 1 schools and their relative positions in relation to the average.

Table 2

Statistics Norway’s school contribution indicators

Pilot school                 School contribution indicator   Compared with the national
                             (total, 2010–2014)              average of 3.4
Bekkestua Primary School     *
Grav School                  3.5                             Above
Jong School                  3.5                             Above
Vøyenenga Secondary School   3.3                             Below
Gjettum Secondary School     3.4                             At the average

% of non-pilot secondary schools above the average in Bærum: 18%
% of non-pilot primary schools above the average in Bærum: 87%

* Bekkestua Primary School first opened its doors in August 2014 with three class 1s during the first year and then expanded by one new class per year. They are therefore not included in most of the analyses.

Most primary schools in the Municipality of Bærum are above the national average with respect to school contributions. At secondary school level there are only two schools out of the thirteen which are above the national average.

By conducting effect analyses, we investigated how the tablets affected the learning outcomes acquired by the pupils and their learning environment at school level. We made comparisons between the Pilot 1 schools in the Everyday Digital Schooling project and schools which were not part of the project, using a difference-in-difference approach where developments at the pilot schools were compared with developments at the other Bærum schools. Our effect analysis thus uses a difference-in-difference (diff-in-diff) approach, both in a simple respective average analysis and in a more advanced fixed-effect regression analysis. The simple diff-in-diff analysis looks at the average differences between the five pilot schools and all the other schools in Bærum prior to the introduction of tablets. This is then compared with the differences between the five pilot schools and all the other schools in Bærum after the introduction of tablets. The figure below illustrates the difference-in-difference approach.
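The simple diff-in-diff estimate described above amounts to a single subtraction of group gaps. The sketch below uses hypothetical average scores, not the study’s data:

```python
def diff_in_diff(pilot_before, control_before, pilot_after, control_after):
    """Change in the pilot-control gap from before to after the
    intervention: (after gap) minus (before gap)."""
    gap_before = pilot_before - control_before
    gap_after = pilot_after - control_after
    return gap_after - gap_before

# Hypothetical average test scores for the two groups of schools:
effect = diff_in_diff(49.5, 53.7, 52.0, 53.1)
print(round(effect, 1))  # 3.1
```

Subtracting the “before” gap means any pre-existing level difference between pilot and non-pilot schools does not count towards the estimated effect; only the change in the gap does.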

Figure 1

Illustration of the difference-in-difference approach. The green bubble shows the estimated effects of the introduction of tablets.

By using a diff-in-diff approach in a more advanced fixed-effect regression analysis, it is possible to control for time-constant variables at school level. This means that variables which do not change over the years, such as the size of the school, geographical location and organisation, are controlled for. In addition, this method takes into account unobservable properties which are constant over the years, such as school culture and pupil base (provided that the pupil base does not change and there is an absence of contamination effects between pilot and control schools).
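The intuition behind the fixed-effect approach can be sketched as a “within” transformation: demeaning each school’s series over the years removes anything about the school that is constant over time, before the before/after comparison is made. The panel of scores below is invented for illustration:

```python
from statistics import mean

# Hypothetical panel of average scores per school per year.
panel = {
    "pilot_school":   {2014: 49.5, 2015: 52.0, 2016: 52.5},
    "control_school": {2014: 53.7, 2015: 53.1, 2016: 53.0},
}

def within_transform(panel):
    """Subtract each school's own mean across years. A time-constant
    school trait (size, location, culture) shifts all of a school's
    values equally, so it cancels out in the demeaned series."""
    demeaned = {}
    for school, series in panel.items():
        m = mean(series.values())
        demeaned[school] = {year: round(v - m, 2) for year, v in series.items()}
    return demeaned

print(within_transform(panel))
```

After demeaning, each school’s values sum to (approximately) zero, so what remains is only the within-school development over time, which is what the diff-in-diff comparison then uses.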

Effect analyses

The purpose of conducting an effect analysis is to investigate how the introduction of tablets affects pupil learning outcomes and learning environments. The pupils’ learning outcomes and learning environments at the five pilot schools are then compared against schools where tablets have not been introduced.

Identification of effects

The figure below provides details about when the five pilot schools introduced tablets. The figure also shows when the different endpoints were collected at national level. In addition the grey areas show the years which are used as “before” and “after” measurements.

Figure 2

Details showing the introduction of tablets and the three endpoints.

Jong and Bekkestua primary schools introduced tablets as early as the autumn of 2014, while the last three pilot schools introduced tablets in January 2015. The endpoints from 2014 are used as “before” measurements. For the National Pupil Survey and the National Tests, this means that the “before” measurements are not one hundred percent actual “before” measurements for Jong School and Bekkestua Primary School. On the other hand, the National Mapping Tests in 2014 qualify as “before” measurements for all the schools since they were undertaken during the spring of the same year.

The reason why data from 2013 was not used in the National Tests was that these tests are not comparable with data from 2014 and later.6

For the National Pupil Survey, 2013 could be used for the “before” measurements. However, the “before” measurements from 2014 are used in order to keep the three analyses consistent as a whole. In order to investigate whether or not it matters that 2014 is not a one hundred percent real “before” measurement of the National Pupil Survey for all the schools, we have conducted a supplementary analysis of the National Pupil Survey where the 2013 measurements are used as “before” measurements.

The data from 2015 and 2016 are used for the “after” measurements. Both years are used since in 2015 only half a year had elapsed since tablets were introduced in some schools, and by using both years it is possible to find a more stable effect. 2015 also stands on its own as an “after” measurement in all the analyses, in order to see if the length of the period included in the analysis makes a difference.

Reservations and uncertainties in the analysis

In diff-in-diff analyses (both simple and fixed-effect analyses), it is assumed that the pilot and non-pilot schools would have developed along the same lines if the pilot schools had not introduced tablets.7 This assumption is necessary, since the diff-in-diff analysis defines the group of schools which have not received tablets as the counterfactual outcome for the schools which have introduced tablets. In other words, once the schools’ previous differences have been taken into account, they are expected to show the same development over the years in the National Tests, Mapping Tests and National Pupil Survey. This is a strict assumption, and it cannot be tested in any greater detail in the data which we have available. Consequently, when interpreting the results one should be aware that there may be some instances where the Pilot 1 schools could have developed the way they did even without introducing tablets. One way of addressing this strict assumption is to include variables about the pupils’ individual backgrounds. Unfortunately, we were not able to do this since this type of data was not available to us.

Apart from this strict assumption relating to development, there is another uncertainty, which involves the “class effect”. Class effect means that the analysis is based on a comparison of a year group of pupils in, for example, Class 5, with the subsequent year group of Class 5 pupils in the following year. This means that we are not following the same pupils. Since we are not following the same pupils, it is possible that the effects attributed to using tablets could be caused by year groups of pupils who are more or less able as a whole, rather than by the properties of the tablets themselves. The class effect can be tested by following a group of pupils over two classes (e.g. from Class 1 to Class 2) where possible, to see if this changes the results.

A third uncertainty is the number of years included in the analysis. If only before and after monitoring is included, then it is the short-term effects which are analysed. In this respect supplementary analyses involving two subsequent measurements for 2015 and 2016 respectively would enable us to see if there are any differences between the short-term effects and the effects after two years.

The final and greatest uncertainty relating to the quantitative analyses is that only two primary schools8 and two secondary schools are Pilot 1 schools. This produces very few observations for analysing the effects of tablets, and is a methodological limitation of this study, because diff-in-diff approaches are normally data intensive (this will be better taken care of in the next cohort of the trailing research). It also means that the results cannot be generalised to other schools or municipalities. Furthermore, ours is an analysis of the measurable effects, which means that it does not capture potential learning effects beyond the measurable indicators. All the results must therefore be read with these reservations.

Results

In this part of the trailing research we only present the quantitative effect analysis based on our research questions.

As previously mentioned, the results are divided up so that the results of the National Tests are described first. These are followed by the results of the Mapping Tests and then the results of the National Pupil Survey, before everything is collated in a short summary of the results.

National Test results

In the following we present the results of the National Tests for English, reading and arithmetic for Class 5, followed by the results of the National Tests for arithmetic and reading for Class 9. At the same time we need to make special reservations with respect to the results for reading and arithmetic for Class 5 during the autumn of 2016, when the Pilot 1 schools experienced problems completing some reading and arithmetic tests using tablets. This was also reported to the Office of the County Governor of Oslo and Akershus, who confirmed the situation9 and asked the schools to consider ignoring the results of these tests.

National Tests for primary schools (Class 5)

The table below shows the average test results obtained in the National Tests for arithmetic, reading and English for all pupils, boys only and girls only. The column on the right shows the extent to which the Pilot 1 schools have developed in a more positive direction than the other schools in Bærum following the introduction of tablets. A positive figure indicates that the Pilot 1 schools have developed more positively than the other schools.

Table 3

Difference-in-difference analysis of Class 5 test results.

Columns: Before tablets (2014): Pilot schools | Non-pilot schools | Difference before. After tablets (2015, 2016): Pilot schools | Non-pilot schools | Difference after. Effects: Diff-in-diff.

All – Arithmetic: 49.5 | 53.7 | –4.2 | 52.0 | 53.1 | –1.1 | 3.2
All – Reading: 50.5 | 53.4 | –2.9 | 53.5 | 52.8 | 0.7 | 3.4
All – English: 50.5 | 53.0 | –2.6 | 53.8 | 52.7 | 1.1 | 3.7
Boys – Arithmetic: 49.5 | 55.2 | –5.7** | 54.0 | 53.8 | 0.2 | 5.9*
Boys – Reading: 49.0 | 53.2 | –4.2* | 52.5 | 52.5 | 0.0 | 4.3
Boys – English: 49.5 | 53.9 | –4.3** | 53.8 | 53.4 | 0.4 | 4.7*
Girls – Arithmetic: 49.5 | 52.4 | –2.9 | 49.5 | 52.1 | –2.6 | 0.3
Girls – Reading: 52.5 | 53.8 | –1.3 | 53.5 | 53.1 | 0.4 | 1.7
Girls – English: 51.0 | 52.9 | –1.9 | 53.0 | 51.8 | 1.2 | 3.1
No. of schools a): 2 schools | 23 schools | 4 obs. b) | 46 obs. b)

Note: The significance tests are two-tailed t-tests assuming equal variances. *** means that we can say with 99% certainty that there are differences between the control and tablet groups, ** with 95% certainty and * with 90% certainty. If there are no asterisks, there are no statistically significant differences. a) Bekkestua Primary School is not included in the analysis, since it did not have a separate Class 5 when the measurements were carried out. b) There are twice as many observations in the “after” measurements since both 2015 and 2016 are included, i.e. all the schools are included twice.
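As a sketch of the significance test named in the note, a pooled two-sample t statistic can be computed as follows. The pupil scores are hypothetical, and the critical value shown is the standard two-tailed 5% threshold for 20 degrees of freedom, not a figure from the study:

```python
from statistics import mean, variance

def pooled_t(a, b):
    """Two-sample t statistic assuming equal variances in the two groups."""
    na, nb = len(a), len(b)
    # Pooled variance: each group's sample variance weighted by its df.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical test scores for pilot and non-pilot pupils:
pilot = [54, 51, 57, 49, 55, 53, 58, 50, 56, 52, 54]
non_pilot = [50, 48, 52, 47, 51, 49, 53, 46, 50, 48, 51]

t = pooled_t(pilot, non_pilot)
# |t| > 2.086 (the two-tailed 5% critical value for df = 20) would indicate
# a difference significant at the 95% level used in the tables.
```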

In the “Difference before” column, we see that the pilot schools generally had lower total points in the National Tests in 2014 in relation to the other schools in the Municipality of Bærum (the non-pilot schools). As regards boys at the pilot schools, the total points are significantly lower, at a 5% level of significance. If we look at the “Difference after” column, that is, after the introduction of tablets, there are almost no differences between Pilot 1 and the average test results of the other schools. Overall the effects of introducing tablets are positive and significant for Class 5 boys with respect to English and arithmetic. As regards the girls, we cannot say with any statistical certainty whether or not any changes have occurred. If there is a change for the girls, the results indicate that it is positive.

The following table presents the more advanced fixed-effects regression analysis of the Class 5 National Test results.

Table 4

Diff-in-diff in “Fixed effect” regression analysis for Class 5.

Columns: All pupils: Arithmetic | Reading | English. Boys: Arithmetic | Reading | English. Girls: Arithmetic | Reading | English.

Pilot school: –2.9 | –4.5* | –6.3*** | –4.4* | –3.9 | –5.4** | –1.5 | –2.9 | –3.7
After tablets: –0.8* | –0.7 | –0.8* | –1.3** | –0.5 | –0.3 | –0.4 | –0.5 | –0.9
Effect of tablets a): 2.6 | 3.5 | 5.0** | 5.3** | 3.9 | 5.9** | –0.6 | 1.2 | 3.9
Large school b): 1.1 | 0.8 | –2.2 | 0.6 | –0.3 | –2.9 | 1.3 | 1.0 | –1.8
Number of pupils per man-labour year: –0.3 | 0.3 | –0.2 | –0.3 | 0.2 | –0.2 | –0.5 | 0.3 | 0.4
FE, school level: Yes in all columns
Explanation degree (R²): 0.81 | 0.61 | 0.68 | 0.77 | 0.58 | 0.64 | 0.72 | 0.56 | 0.49
No. of obs.: 75 (all pupils) | 73 (boys) | 73 (girls)

Note: The significance tests are created using a linear regression analysis with fixed effects for school and year. *** means that we can say with 99% certainty that there are differences between the control and tablet groups, ** with 95% certainty and * with 90% certainty. a) The effect of tablets is an interaction term between the dummy variable for being a tablet school and the dummy variable for the period after the implementation of tablets. In other words, the effects are calculated using a difference-in-difference approach. The observations are at school level; the reason that the 73 observations for boys and for girls do not equal the 75 observations for all pupils is that some of the schools do not have results broken down by gender.

The fixed-effects analysis shows the same results as the simple difference-in-difference analysis in Table 3. The “Effect of tablets” variable, which is marked in grey, can be compared with the diff-in-diff effect in the previous table. In the fixed-effects analysis, time-constant characteristics at school level have been taken into account, as well as a few other characteristics: being a large school and the number of pupils per man-labour year. The analysis shows positive, significant effects with respect to English for all pupils and for the boys, as well as positive, significant effects with respect to arithmetic for the boys. On the other hand, there are no significant effects relating to arithmetic for the girls.
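As a sketch of the connection between the two approaches (not the authors' code), the coefficient on the pilot × after interaction term in a saturated regression reproduces the simple diff-in-diff of group means. The school-level observations below are hypothetical:

```python
from statistics import mean

# Hypothetical school-level observations: (pilot, after, score)
data = [
    (1, 0, 50.0), (1, 0, 49.0),   # pilot schools, before tablets
    (1, 1, 54.0), (1, 1, 53.0),   # pilot schools, after tablets
    (0, 0, 53.0), (0, 0, 54.0),   # non-pilot schools, before
    (0, 1, 54.0), (0, 1, 55.0),   # non-pilot schools, after
]

def group_mean(pilot, after):
    return mean(s for p, a, s in data if p == pilot and a == after)

# In the saturated model  score = b0 + b1*pilot + b2*after + b3*pilot*after,
# OLS reproduces the four group means, so the interaction coefficient b3
# equals the diff-in-diff of those means:
b3 = (group_mean(1, 1) - group_mean(1, 0)) - (group_mean(0, 1) - group_mean(0, 0))
# Here the pilot schools improve by 4.0 and the controls by 1.0, so b3 = 3.0.
```

The full fixed-effects model in the article additionally absorbs time-constant school characteristics and adds the school-size and staffing controls, but the interaction term plays the same role.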

National Tests for secondary schools (Class 9)

This section presents the results of the Class 9 National Tests. The table below shows the average results of the compulsory National Tests for arithmetic and reading for all pupils, boys only and girls only in Class 9. The column entitled “Diff-in-diff” shows the extent to which the pilot schools have developed in a more positive direction than the other schools in Bærum following the introduction of tablets. A positive figure indicates that the pilot schools have developed more positively than the other schools. These analyses are prepared for Class 9, since the Class 8 pupils may have gone to one of the primary schools which introduced tablets, and this would therefore create an uneven distribution in the results.

Table 5

Difference-in-difference analysis of average Class 9 test results.

Columns: Before tablets (2014): Pilot schools | Non-pilot schools | Difference before. After tablets (2015, 2016): Pilot schools | Non-pilot schools | Difference after. Effects: Diff-in-diff.

All – Arithmetic: 57.5 | 57.4 | 0.1 | 56.0 | 58.0 | –2.0* | –2.1
All – Reading: 57.0 | 57.4 | –0.4 | 55.8 | 57.8 | –2.1* | –1.7
Boys – Arithmetic: 58.0 | 58.5 | –0.5 | 56.5 | 58.9 | –2.4** | –1.9
Boys – Reading: 55.0 b) | 56.7 | –1.7 | 55.0 | 56.6 | –1.6 | 0.2
Girls – Arithmetic: 56.5 | 56.2 | 0.3 | 54.8 | 57.2 | –2.4* | –2.7
Girls – Reading: 58.5 | 58.2 | 0.3 | 56.5 | 59.2 | –2.7** | –3.0
No. of schools: 2 schools | 2 schools | 10 schools | 4 obs. a) | 10 obs. a)

Note: The significance tests are two-tailed t-tests assuming equal variances. *** means that we can say with 99% certainty that there are differences between the control and tablet groups, ** with 95% certainty and * with 90% certainty. a) There are twice as many observations in the “after” measurements since both 2015 and 2016 are included, i.e. all the schools are included twice.

In the column entitled “Difference before” it can be seen that the Pilot 1 schools generally had almost the same test results in the 2014 National Tests as the other secondary schools in the Municipality of Bærum (non-pilot schools). The effects of using tablets are seen in the “Diff-in-diff” column. The difference over time is not statistically significant, that is, we cannot say with any great degree of certainty that the difference is not coincidental.

When we prepare a similar analysis in which we follow the Class 8 pupils from before the introduction of tablets until Class 9 after the introduction, and thus take the class effects into account, we find the same results as those obtained when only looking at Class 9.

The table below shows a fixed-effect linear regression analysis of the test results of the Class 9 National Tests.

Table 6

Diff-in-diff approach in a fixed-effects regression analysis for Class 9.

Columns: All pupils: Arithmetic | Reading. Boys: Arithmetic | Reading. Girls: Arithmetic | Reading.

Pilot school: 0.8 | –0.7 | 4.3* | –1.1 | 2.0 | 3.7
After tablets: 0.8 | 0.2 | 0.6 | –0.0 | 1.1 | 0.9
Effect of tablets a): –2.1 | –1.4 | –2.0 | –1.8 | –2.6 | –3.0
Large school b): 5.6* | 1.2 | 3.2** | 1.4 | 1.6 | –0.4
Number of pupils per man-labour year: –0.7 | 0.0 | –0.5 | 1.0 | –1.1 | 0.5
FE at school level: Yes in all columns
Explanation degree (R²): 0.67 | 0.77 | 0.66 | 0.86 | 0.67 | 0.64
No. of obs.: 36 (all pupils) | 34 (boys) | 34 (girls)

Note: The significance tests are created using a linear regression analysis with fixed effects for school and year. *** means that we can say with 99% certainty that there are differences between the control and tablet groups, ** with 95% certainty and * with 90% certainty. a) The effect of tablets is an interaction term between the dummy for being a tablet school and the dummy for the period after the implementation of tablets. In other words, the effects are calculated using a difference-in-difference design.

Once again it can be seen that there are no significant effects resulting from introducing tablets in Class 9. This can be seen from the variable entitled “Effect of tablets”, which is marked in grey: the effects are not significant, and consequently it cannot be concluded with any great degree of certainty that there is a difference in development between the schools. “Effect of tablets” can be compared with the “Diff-in-diff” results in the previous table. Furthermore, it can be seen that pupils attending schools with more than 400 pupils have significantly better arithmetic test results than pupils at smaller schools.

Results of the Mapping Tests

The introduction of tablets is also expected to be important with respect to learning environments and learning results for reading and arithmetic in Classes 1–3. These classes do not sit the National Tests; National Mapping Tests are held instead. The Mapping Tests investigate whether pupils are above or below the borderline for concern with respect to the anticipated learning level, and whether extra adjustments might be required. An increase in the percentage above the critical limit at the pilot schools could indicate that the introduction of tablets has helped to improve learning in primary schools.
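The share above the critical limit that these tests report can be computed as follows (a minimal sketch with hypothetical pupil scores and a hypothetical limit):

```python
def share_above_limit(scores, critical_limit):
    """Percentage of pupils scoring above the critical limit for concern."""
    return 100.0 * sum(score > critical_limit for score in scores) / len(scores)

# Hypothetical Mapping Test scores for one school, with the limit set at 10:
share = share_above_limit([12, 15, 9, 20, 18], critical_limit=10)
# Four of the five pupils are above the limit, i.e. a share of 80.0%.
```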

The table below shows the percentage of pupils above the critical limit with respect to the Mapping Test for reading (the respective sub-tests are spelling, reading, comprehension) and the Mapping Test for arithmetic at the Pilot 1 schools and the other primary schools in Bærum. A positive value in the “Diff-in-diff” column indicates a positive effect of introducing tablets. In the table the effects are marked by asterisks if they are significant at a 10 percent (*), 5 percent (**) or 1 percent (***) significance level (see the note to the table for an explanation of the significance).

Table 7

Difference-in-difference analysis of the percentage of pupils above the critical limit, Classes 1, 2 and 3.

Columns: Before tablets (2014): Pilot schools | Control schools | Difference before b). After tablets (2015, 2016): Pilot schools | Control schools | Difference after b). Effects: Diff-in-diff b).

Class 1 – Spelling: 90.4% | 89.6% | 0.8% | 92.6% | 88.6% | 4.0% | 3.2%
Class 1 – Reading: 91.7% | 89.3% | 2.4% | 93.9% | 88.9% | 5.0% | 2.6%
Class 1 – Comprehension: 87.6% | 88.2% | –0.7% | 95.0% | 89.1% | 5.9% | 6.5%
Class 1 – Arithmetic: 80.1% | 83.8% | –3.6% | 85.7% | 82.8% | 2.8% | 6.5%
Class 2 – Spelling: 86.8% | 86.0% | 0.8% | 87.9% | 85.7% | 2.2% | 1.4%
Class 2 – Reading: 89.1% | 84.9% | 4.2% | 89.6% | 85.1% | 4.5% | 0.3%
Class 2 – Comprehension: 81.7% | 84.8% | –3.1% | 89.1% | 85.5% | 3.6% | 6.7%
Class 2 – Arithmetic: 84.3% | 87.2% | –2.9% | 85.3% | 83.9% | 1.3% | 4.2%
Class 3 – Spelling: 87.8% | 88.6% | –0.8% | 86.5% | 86.8% | –0.3% | 0.5%
Class 3 – Reading: 88.2% | 88.2% | 0.0% | 84.9% | 84.0% | 0.9% | 0.8%
Class 3 – Comprehension: 88.7% | 91.5% | –2.7% | 87.8% | 91.2% | –3.4% | –0.7%
Class 3 – Arithmetic: 94.5% | 86.3% | 8.2% | 80.9% | 88.0% | –7.1% | –15.3%**
No. of schools: 2 schools | 25 schools | 6 obs. a) | 50 obs. a)

Note: The significance tests are two-tailed t-tests assuming equal variances. *** means that we can say with 99% certainty that there are differences between the control and tablet groups, ** with 95% certainty and * with 90% certainty. a) There are twice as many observations in the “after” measurements since both 2015 and 2016 are included, i.e. all the schools are included twice. Furthermore, Bekkestua Primary School is only included in the “after” measurements, which means that there are three pilot schools in the “after” measurements, but only two in the “before” measurements. If Bekkestua Primary School is excluded from the “after” measurements, the conclusions do not, however, change.

The table above provides no clear conclusions. Generally speaking, positive values are shown for Classes 1 and 2 with respect to the estimated effects (the “Diff-in-diff” column). However, none of these effects can be said with any certainty to be significantly different from zero. The estimated effect for Class 3 in arithmetic is significantly negative.10

We cannot see any particular connection when comparing the results of the Mapping Tests and the National Tests. The Mapping Tests show negative effects for Class 3 with respect to arithmetic, while the National Tests for Class 5 are positive, especially with respect to the results for the boys. This disagreement could be attributable either to the fact that the effects of introducing tablets vary from class to class, or to the tests not being suitable for capturing the type of learning results which the introduction of tablets potentially entails. We also refer to our previously mentioned reservations.

Results of the National Pupil Survey

The third and final part of our quantitative effects evaluation with respect to introducing tablets concerns the pupils’ learning environments. The enjoyment experienced by the pupils and their learning environments are measured in the National Pupil Survey, which is compulsory for all Class 5–10 pupils in the Municipality of Bærum. In this survey the pupils indicate, on a scale of 1 to 5, the extent to which they agree with a number of statements about their everyday lives at school. In the analyses below we have chosen to base our analysis on the results for Classes 7 and 10.

The table below shows the results and estimated effects (the “Diff-in-diff” column) of introducing tablets for Class 7 at the Pilot 1 schools compared with other primary schools in Bærum. A positive value means an increase for the Pilot 1 schools compared with the other schools in Bærum.

Table 8

Difference-in-Difference analysis of the indicators in the National Pupil Survey,a) Class 7.

Columns: Before tablets (2014): Pilot schools | Non-pilot schools | Difference before. After tablets (2015) b): Pilot schools | Non-pilot schools | Difference after. Diff-in-diff.

Academic challenge: 4.15 | 4.08 | 0.08 | 4.10 | 4.08 | 0.02 | –0.05
Learning environment: 4.30 | 4.15 | 0.15 | 4.25 | 4.17 | 0.08 | –0.07
Bullying: 1.20 | 1.22 | –0.02 | 1.35 | 1.15 | 0.20** | 0.22*
Mastery: 4.20 | 4.22 | –0.02 | 4.20 | 4.24 | –0.04 | –0.03
Motivation: 4.20 | 4.05 | 0.15 | 4.15 | 4.14 | 0.01 | –0.14
Enjoyment: 4.45 | 4.44 | 0.01 | 4.40 | 4.50 | –0.10 | –0.11
Common rules: 4.40 | 4.28 | 0.12 | 4.35 | 4.32 | 0.03 | –0.09
Teacher support: 4.50 | 4.47 | 0.03 | 4.40 | 4.49 | –0.09 | –0.12
Home support: 4.65 | 4.48 | 0.18 | 4.65 | 4.47 | 0.19 | 0.01
Assessment of teaching: 4.00 | 3.97 | 0.03 | 4.05 | 4.00 | 0.05 | 0.02
No. of schools: 2 schools | 24 schools | 2 schools | 23 schools

Note: The significance tests are two-tailed t-tests assuming equal variances. *** means that we can say with 99% certainty that there are differences between the control and tablet groups, ** with 95% certainty and * with 90% certainty. If no asterisks are shown, there are no statistically significant differences. a) In the Pupil Survey the pupils respond on a scale of 1 to 5. The 10 indicators are based on a number of background questions. The design of the indicators is explained on the following website: www.skoleporten.udir.no. b) 2016 is not included in the “after” measurements since these results were not available on the school portal until 20 February 2017.

The table above shows that there are no great differences in the results of the National Pupil Survey between the Pilot 1 schools and the non-pilot schools. The only significant effect we have found in the National Pupil Survey relates to bullying in Class 7. The bullying indicator in the National Pupil Survey consists of the following question: “Have you been bullied at school during the last few months?”, and we register a significant increase in this respect at the Pilot 1 schools from 2014 to 2015. The effect is significant at the 10% level, which means that there is less than a 10% likelihood that the increase is coincidental rather than attributable to the introduction of tablets.

When extending our analysis to also include the 2016 National Pupil Survey, we still see slightly higher bullying figures at the Pilot 1 schools compared with the other primary schools in Bærum, but the difference is smaller and not significant. This means that we find a negative effect just after the introduction of tablets, but that this subsides and ceases over time. Furthermore, we cannot exclude the possibility that the bullying effects in 2015 are related to possible class effects or other circumstances which are not related to the introduction of tablets. We would therefore ask that the indicator for bullying and the results shown here be interpreted with caution and in line with the reservations made.

As regards the other indicators, the effects are also negative, but not significant. There is therefore a high degree of likelihood that it is coincidental that these effects differ from zero.

In the table below we look more closely at the results of the National Pupil Survey for Class 10 at the Pilot 1 schools and other secondary schools in Bærum.

Table 9

Difference-in-Difference analysis of the National Pupil Survey,a) Class 10.

Columns: Before tablets (2014): Pilot schools | Non-pilot schools | Difference before. After tablets (2015): Pilot schools | Non-pilot schools | Difference after. Diff-in-diff.

Academic challenge: 4.30 | 4.23 | 0.07 | 4.30 | 4.33 | –0.03 | –0.10
Learning environment: 3.90 | 3.72 | 0.18 | 4.10 | 3.80 | 0.30 | 0.12
Bullying: 1.25 | 1.21 | 0.04 | 1.15 | 1.20 | –0.05 | –0.09
Mastery: 4.10 | 4.06 | 0.05 | 4.10 | 4.05 | 0.06 | 0.01
Motivation: 3.60 | 3.59 | 0.01 | 3.60 | 3.58 | 0.02 | 0.01
Enjoyment: 4.20 | 4.19 | 0.01 | 4.35 | 4.29 | 0.06 | 0.05
Common rules: 4.00 | 3.70 | 0.26* | 4.10 | 3.87 | 0.23 | –0.04
Teacher support: 4.00 | 3.89 | 0.11 | 4.20 | 3.95 | 0.26 | 0.15
Home support: 4.10 | 4.06 | 0.04 | 4.00 | 4.11 | –0.11 | –0.15
Assessment of teaching: 3.25 | 3.21 | 0.04 | 3.30 | 3.26 | 0.05 | 0.00
No. of schools: 2 schools | 11 schools | 2 schools | 11 schools

Note: The significance tests are two-tailed t-tests assuming equal variances. *** means that we can say with 99% certainty that there are differences between the control and tablet groups, ** with 95% certainty and * with 90% certainty. If no asterisks are shown, there are no statistically significant differences. a) In the Pupil Survey the pupils respond on a scale of 1 to 5. The 10 indicators are based on a number of background questions. The design of the indicators is explained on the following website: www.skoleporten.udir.no. b) 2016 is not included in the “after” measurements since these results were not available on the school portal at the time the analysis was carried out.

In common with Class 7, most of the effects for Class 10 in the National Pupil Survey are around zero. Our findings show no effects on the pupils’ learning environment as a result of introducing tablets. If the effects are divided between boys and girls, we see the same results for these two sub-groups: no significant effects and very small percentages. As regards bullying in Class 10, we have carried out three extra analyses which also include an average of the bullying indicator for Classes 8, 9 and 10, as well as the year 2016. However, these extra analyses do not change our earlier conclusion about bullying at the secondary schools, that is, that there are no significant differences between the Pilot 1 schools and the other secondary schools in Bærum with respect to bullying.

As with the National Tests and the Mapping Tests, one should be aware that the pupils who are measured before are not the same pupils as those who are measured afterwards. This means that there may be class effects. At the same time, since class effects were not significant for the two previous outcome measures, they are not expected to be particularly significant here either.

Summary of statistical effect analyses

Our quantitative effect assessment shows small positive effects on English and arithmetic at the primary schools (analysed for Class 5) as a result of introducing tablets, based on the results of the National Tests. However, these effects are only significant for the boys. At the same time we have observed no significant effects on learning results in the National Tests for arithmetic and reading for Class 9.

The results of the Class 1–3 Mapping Tests are mixed and do not align with the results of the Class 5 National Tests. We find negative effects resulting from the introduction of tablets in the National Mapping Tests for arithmetic for Class 3. One possible explanation for this difference could be related to possible class effects, but it could also be seen in the light of the very small sample, with few observations and average figures for each class. In this respect, time series involving observations over several years might provide a more thorough analysis. Generally speaking, we have so far found few effects in the National Mapping Tests and National Tests which can be said, with any degree of statistical certainty, to apply to the Pilot 1 schools.

The results of the National Pupil Survey for Classes 7 and 10 show only one significant effect, that is, a negative effect on the bullying indicator for Class 7 in 2015. At the same time this effect is short-lived, since it appears to cease during the following year, indicating no long-term effect.

As regards our effect analyses, we would repeat for the record that only two primary schools11 and two secondary schools are included in these analyses. This produces very few observations and means that one should be careful about generalising significant findings to other schools. Our analyses focus on measurable effects and do not reveal whether or not learning effects exist outside our chosen parameters. All the results must therefore be read with the reservations mentioned above.

Discussion

Our trailing research has been based on three main issues which we believe are relevant for confirming or invalidating the positive findings reported for the Pilot 1 schools in 2015 and 2016: how does the introduction of tablets at the Pilot 1 schools affect experiences and attitudes; how do the experiences and attitudes at the Pilot 1 schools develop over time; and how does tablet usage affect the development of pupils’ learning? In the part of the trailing research presented in this article (first cohort), we only focus on the quantitative part of the study, based on the following research questions:

  1. To what extent does the implementation of tablets affect learning outcomes at schools in Bærum Municipality (based on recorded data such as National Mapping Tests, National Tests and the National Pupil Survey)?

  2. To what extent does the implementation of tablets affect social enjoyment and learning environments at schools in Bærum Municipality (based on the National Pupil Survey)?

Mixed results

In our statistical analyses of the pupils’ learning outcomes before and after the introduction of tablets, we have provisionally found few effects (attached to both of the research questions) resulting from the introduction of tablets, and little to indicate a strong connection in this respect. We have only registered two significant findings at the primary schools, and these two findings bear no relation to each other. On the one hand, the results were negative with respect to arithmetic for Class 3 pupils at our pilot schools, while on the other hand, the results were positive with respect to arithmetic and English among Class 5 boys at our pilot schools. The secondary schools produced no significant findings with respect to being pilot schools. Overall, we found no clear results showing how the introduction of tablets affected the pilot schools during the first year (the first cohort).

Our results from the trailing research differ clearly from the findings in “Everyday Digital Schooling – review and analysis of the experiences acquired by five pilot schools in the Municipality of Bærum” (Bærum Municipality 2015), which generally showed positive attitudes towards and experiences from the implementation of tablets. This might be explained by different focus areas, development work versus research, as well as the fact that the outcome measures lie outside the intervention in our trailing research. Internationally, there are similar findings: when using outcome measures which are not part of the intervention, the effects are often modest (Cheung & Slavin 2012; 2013).

However, the Municipality of Bærum appears to have been successful with the practical implementation of tablets at the pilot schools and the establishment of good, positive attitudes among the pupils, teachers and head teachers who have been using tablets. The introduction of tablets also appears to have contributed towards positive school developments in several areas which are not necessarily reflected in the pupils’ learning outcomes in the short term (presented in this article).

Building on the findings of the review by Tamim et al. (2015a), it seems that many of the pilot schools still need to adopt a greater degree of awareness and place more emphasis on how and when tablets can and should be used in order to improve the digital didactics of their lessons. Cheung and Slavin also emphasise that “(…) the question is no longer whether teachers should use educational technology or not, but rather how best to incorporate various educational technology applications into classroom settings” (p. 102). As part of these digital didactics, other Norwegian studies show that it might be necessary to develop and strengthen teachers’ digital competence and class management in technology-rich environments, so that they can be more successful at exploiting the options afforded by tablets in a better, more suitable and didactic manner (Krumsvik, Egelandsdal, Sarastuen, Jones & Eikeland 2013). Changing the teacher’s role from the “sage on the stage” to the “guide on the side” (Van Dusen 2000, 14) when using tablets in teaching includes incorporating different digital didactic methods, and clarifying them in subject descriptions, competence aims and assessment forms (deeply entrenched structures) based on the national curriculum (LK06, KD 2006).

Changes take time

According to school researcher Michael Fullan (2001), changing school activities and culture is a challenging, time-consuming process; it often takes three to five years before changes become a natural part of the practices of the school in question. This is also supported by Cuban, Kirkpatrick and Peck (2001), who show that changes in the classroom often take place slowly and in small steps, and that thorough, rapid changes are rare. This is in turn related to the fact that reform-weary teachers develop, over time, a resistance to, and scepticism about, comprehensive changes being made to their own daily teaching activities and to what they believe constitutes good learning for their pupils. Fullan (2001) also points out that it is not uncommon to experience a temporary relapse or standstill during the early implementation phase (an implementation dip) before one starts to experience positive changes in one’s own practices and improvements in the pupils’ performance.

In the light of the relatively short experience these pilot schools have had, there may therefore be grounds for being patient with respect to any anticipated effects of using tablets for teaching purposes, and it should also be borne in mind that the development of the pupils’ digital competence has two functions. In addition to “building bridges” between the other basic skills, and thus helping to improve learning outcomes in the various subjects, digital competence is also important for pupils’ ability to master the academic demands of schools and higher educational institutions and the demands of their future professional careers. The same applies to the development of other competencies, also referred to as 21st century skills12, which agree with those pointed out by the Ludvigsen Committee in its official report, entitled “Schools of the Future – Renewal of Subjects and Competences” (KD 2015). Fullan’s (2013) six principles for twenty-first century skills also underline the importance of focusing on such transferable skills, which can be related to the national framework for digital tools as a basic competence (National Education Directorate 2016) and the new general part of the national curriculum (National Education Directorate 2018) in Norway. It is therefore important to keep in mind that tablets can contribute to pupils’ development of such 21st century skills by providing quick access to more learning sources, updated information and different learning platforms, which in turn may pave the way for stimulating and developing pupil competencies with respect to critical and independent reflection. These are transferable skills which the outcome measures in this study do not focus on explicitly, but which can nevertheless contribute to improving the quality of education at Bærum schools if these opportunities are exploited to the full.

To conclude, Bærum schools seem to have made good progress with the practical implementation of their school development ambitions, but the empirical implications of this first cohort of the trailing research show that they still have some way to go to improve learning outcomes based on the implementation of tablets in the schools. The methodological implications of the study make it possible to follow up the first cohort with a second cohort, as well as to replicate the same research design in other municipalities in Norway. With some reservations, this might contribute to the enhanced use of outcome measures outside the interventions, improving the quality of research on educational technology.

Limitations

This study has a number of limitations. First, in this part of the trailing research we have presented only the quantitative data of the study. This is a limitation because the trailing research comprises several other data sources that give a broader picture of the implementation of tablets in Bærum Municipality. Second, the limited sample of schools is a clear limitation of the study. Third, implementing technology takes time, and the short time frame of this study therefore imposes certain limitations.

Acknowledgement

We would like to thank Johanne Fyhn, Ingrid P. Gulbrandsen and Øystein Lorvik Nilsen for their contribution to this study. We would also like to thank Bærum Municipality and Rambøll Management for giving us the opportunity to carry out trailing research on the “Everyday Digital Schooling” pilot project. Finally, we would like to thank the University of Bergen and the project “Digital competence in higher education” for their support.

References

Almås, A.G. & Krumsvik, R. (2008). Teaching in Technology-Rich Classrooms: is there a gap between teachers’ intentions and ICT practices? Research in Comparative and International Education 2(3), 103–121. https://doi.org/10.2304/rcie.2008.3.2.103

Bærum Municipality (2015). Everyday Digital Schooling – review and analysis of the experiences acquired by five pilot schools in the Municipality of Bærum. Oslo: Bærum Municipality.

Cheung, A.C.K. & Slavin, R. E. (2012). How features of educational technology applications affect student reading outcomes: A meta-analysis. Educational Research Review 7, 198–215. https://doi.org/10.1016/j.edurev.2012.05.002

Cheung, A.C.K. & Slavin, R. E. (2013). The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-analysis. Educational Research Review 9, 88–113. https://doi.org/10.1016/j.edurev.2013.01.001

Cole, M. (1996). Cultural psychology. A once and future discipline. Cambridge, MA: Harvard University Press.

Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High access and low use of technologies in high school classrooms: explaining an apparent paradox. American Educational Research Journal, 38(4), 813–834.

Escueta, M., Quan, V., Joshua, A. & Oreopoulos, N.P. (2017). Education Technology: An evidence-based review. NBER working paper series. National Bureau of Economic Research. Working Paper 2374, https://doi.org/10.3102/000283120380048

Fairlie, R. W. & Robinson, J. (2013). Experimental Evidence on the Effects of Home Computers on Academic Achievement among Schoolchildren. American Economic Journal: Applied Economics, 3(5), 211–240. https://doi.org/10.3386/w19060

Finne, H., Levin, M. & Nilssen, T. (1995). Trailing research. A model for useful program evaluation. Evaluation, 1, 11–31. https://doi.org/10.1177/135638909500100102

Fetters, M., Curry, L.A. & Creswell, J.W. (2013). Achieving Integration in Mixed Methods Designs—Principles and Practices. Health Services Research, 48(6), 2134–2156. https://doi.org/10.1111/1475-6773.12117

Fullan, M. (2001). Leading in a Culture of Change. San Francisco: Jossey-Bass.

Fullan, M. (2013). Great to Excellent: Launching the Next Stage of Ontario’s Education Agenda. Retrieved 20.03.2018 from: http://www.michaelfullan.ca/wp-content/uploads/2013/09/13_Fullan_Great-to-Excellent.pdf

Genlott, A.A. & Grönlund, Å. (2016). Closing the gaps – Improving literacy and mathematics by ICT-enhanced collaboration. Computers & Education, 99, 68–80. https://doi.org/10.1016/j.compedu.2016.04.004

Jahnke, I., Bergström, P. Mårell-Olsson, E., Häll, L. & Kumar, S. (2017). Digital Didactical Designs as research framework: iPad integration in Nordic schools. Computers & Education 113, 1–15. https://doi.org/10.1016/j.compedu.2017.05.006

Januszewski, A., & Molenda, M. (Eds.). (2007). Educational technology: A definition with commentary. New York: Routledge.

Krumsvik, R. (2008). Situated learning and digital competence. Education and Information Technology, 4(13), 279–290. https://doi.org/10.1007/s10639-008-9069-5

Krumsvik, R. (2009a). Ein ny digital didaktikk. I H. Otnes (red)., Å være digital i alle fag. Oslo: Universitetsforlaget.

Krumsvik, R. (2009b). Situated learning in the network society and digitized school. European Journal of Teacher Education, 2(32), 167–185. https://doi.org/10.1080/02619760802457224

Krumsvik, R. (2012). The Digital school and Teacher Education in Norway. In Jahrbuch Medienpädagogik 9 (pp. 455–480). Dortmund: VS Verlag für Sozialwissenschaften. https://doi.org/10.1007/978-3-531-94219-3_20

Krumsvik, R. & Almås, A.G. (2009). The Digital Didactic. In R. Krumsvik (Ed.), Learning in the Network Society and Digitized School. New York: Nova Science Publishers.

Krumsvik, R., Egelandsdal, K. Sarastuen, N.K., Jones, L. & Eikeland, O.J. (2013). The SMIL-study. The relationship between ICT and learning outcome in upper secondary school. UiB/KS.

Kulik, C. L., & Kulik, J. A. (1991). Effectiveness of computer-based instruction: an updated analysis. Computers in Human Behavior, 7, 75–94. http://dx.doi.org/10.1016/0747-5632(91)90030-5.

Kunnskapsdepartementet (KD) (The Ministry of Education) (2006). Læreplanverket for den 10-årige grunnopplæringen. Oslo: Kunnskapsdepartementet.

Kunnskapsdepartementet (KD) (The Ministry of Education) (2015). NOU 2015: 8. Fremtidens skole – Fornyelse av fag og kompetanser. Oslo: Kunnskapsdepartementet.

Lave, J. & Wenger, E. (1991). Situated Learning. Legitimate peripheral participation. Cambridge: Cambridge University Press.

Lave, J. & Wenger, E. (2003). Situeret læring. København: Hans Reitzels Forlag.

Mayer, R. E. (2010). Learning with technology. In H. Dumont, D. Istance & F. Benavides (Eds.), The Nature of Learning. Using Research to Inspire Practice. Paris: Center for Educational Research and Innovation, OECD.

van der Meer, A. & van der Weel, F. (2017). Only three fingers write, but the whole brain works: A high-density EEG study showing advantages of drawing over typing for learning. Frontiers in Psychology, 8, 706. https://doi.org/10.3389/fpsyg.2017.00706

National Education Directorate (2016). National framework for digital skills as a basic competence. Oslo: The National Education Directorate.

National Education Directorate, The (2018). The general part of the national curriculum. Oslo: The National Education Directorate.

OECD (2008). Measuring improvements in learning outcomes: Best practices to assess the value-added of schools. Paris: OECD.

OECD (2015). Students, Computers and Learning: Making the Connection. PISA. Brussels: OECD Publishing.

Perrin, N. (2011). US digital media use: A snapshot of 2012. Retrieved 20.09.2017 from: http://www.scribd.com/doc/79910462/eMarketer-US-Digital-Media-Usage-A-Snapshot-of-2012

Piaget, J. (1967). Biology and knowledge. Paris: Gallimard.

Säljö, R. (2005). Lärande och kulturella redskap: Om lärprocesser och det kollektiva minnet. Stockholm: Norstedts Akademiska Förlag.

Säljö, R. (2017). Apps and learning: A sociocultural perspective. In N. Kucirkova & G. Falloon (Eds.), Apps, technology and younger learners: International evidence for teaching (pp. 3–13). London: Routledge.

Stahl, G. (1993). Supporting situated interpretation. Proceedings of the Cognitive Science Society (CogSci ’93), Boulder, CO, pp. 965–970. http://www.cs.colorado.edu/gerry/publications/conferences/1990-1997/cogsci93/CogSci.html

Sung, Y.T., Chang, K.-E. & Liu, T.-C. (2016). The effects of integrating mobile devices with teaching and learning on students’ learning performance: A meta-analysis and research synthesis. Computers & Education, 94, 252–275.

Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: a second-order meta-analysis and validation study. Review of Educational Research, 81, 4–28. http://dx.doi.org/10.3102/0034654310393361

Tamim, R.M., Borokhovski, E., Pickup, D., Bernard, R.M. & El Saadi, L. (2015a). Large-Scale, Government-Supported Educational Tablet Initiatives. British Columbia: Commonwealth of Learning.

Tamim, R.M., Borokhovski, E., Pickup, D., Bernard, R.M. & El Saadi, L. (2015b). Tablets for Teaching and Learning: A Systematic Review and Meta-Analysis. British Columbia: Commonwealth of Learning.

Van Dusen, G. (2000). Digital dilemma: Issues of access, cost, and quality in media— enhanced and distance education. ASHE-ERIC Higher Education Report, 27(5), 1–120.

Vygotsky, L. (1978). Mind in Society. Harvard: Harvard University Press.

Wertsch, J. (1998). Mind as action. New York: Oxford University Press.

Wenger, E. (1998). Communities of practice: learning, meaning, and identity. New York: Cambridge University Press.

1In other words, the percentage above the limit for concern with respect to the expected level of learning in this context.
2At the same time the incentive also included equipping all teachers and employees with their own tablets.
3 https://skoleporten.udir.no/rapportvisning/grunnskole/laeringsmiljoe/elevundersoekelsen/nasjonalt/indikatorveiledning.
4The absence of significant differences could be attributable to there being no appreciable differences between the pupils and schools in the two groups, but it could also be related to the number of observations in each group.
5See the report here: https://www.ssb.no/utdanning/artikler-og-publikasjoner/er-det-forskjeller-i-skolers-og-kommuners-bidrag-til-elevenes-laering-i-grunnskolen?fane=om#content
6Source: https://www.udir.no/eksamen-og-prover/prover/administrere-nasjonale-prover/#finn-og-analyser-resultatene
7This is also referred to as a Common Trend Assumption.
8Bekkestua Primary School first opened its doors in August 2014 with three class 1s during the first year and then expanded by one new class per year. They are therefore not included in most of the analyses.
9The reason for this was that the most recent update of the iOS 10 operating system for iPads had resulted in some of the functions relating to some of the National Tests failing to work properly.
10The first test examines whether the number of years included in the “after” measurements has a significant impact on the results. This test was carried out by using only 2015 as the “after” measurement. The same results are seen here as in Table 7, that is, no significant effects for Classes 1 and 2 and a significant negative effect for Class 3 in arithmetic. The effects are somewhat greater, but the conclusions are no different. The second test examines class effects, in other words the significance of not having the same pupils in Class 1 before and after the introduction of tablets. This test was carried out by following the same pupils before and after the introduction of tablets and then comparing them with other pupils. The test shows that there are no particular class effects, since the same results and conclusions as in Table 7 appear.
11Bekkestua Primary School first opened its doors in August 2014 with three class 1s during the first year and then expanded by one new class per year. They are therefore not included in most of the analyses.
12 http://edglossary.org/21st-century-skills/
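The robustness checks described in footnote 10 rest on a standard difference-in-differences comparison: the change in pilot-school results before and after the introduction of tablets is set against the corresponding change at the other Bærum schools, under the common-trend assumption noted in footnote 7. As a purely illustrative sketch of that logic (the function and all numbers below are invented for illustration and are not the study’s data or analysis code):

```python
# Illustrative difference-in-differences (DiD) sketch. All figures are
# hypothetical; this is NOT the study's data or code.

def mean(xs):
    return sum(xs) / len(xs)

def did_estimate(pilot_before, pilot_after, control_before, control_after):
    """DiD effect: change in pilot schools minus change in control schools.

    Under the common-trend assumption (footnote 7), the control-group change
    estimates what the pilot group's change would have been without tablets.
    """
    pilot_change = mean(pilot_after) - mean(pilot_before)
    control_change = mean(control_after) - mean(control_before)
    return pilot_change - control_change

# Hypothetical shares of pupils above the "limit for concern" (footnote 1):
pilot_before = [0.80, 0.78, 0.82]
pilot_after = [0.79, 0.77, 0.78]
control_before = [0.81, 0.79, 0.80]
control_after = [0.82, 0.80, 0.81]

effect = did_estimate(pilot_before, pilot_after, control_before, control_after)
print(round(effect, 3))  # a negative value here would mirror the Class 3 arithmetic finding
```

In the actual analyses such estimates would of course be obtained from regression models with standard errors, so that significance can be assessed; the sketch only shows the before/after, pilot/control structure of the comparison.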
