How to identify and understand digital literacy among 9th grade Norwegian students: Examining the influences from school and home on students' digital literacy
- Pages: 159–175
- Published on Idunn: 2010-06-11
- Licence: Creative Commons (CC BY-NC 4.0)
Since 2003, the biennial national ITU Monitor survey has been carried out by the Network for IT Research and Competence in Education to gather information about the use of information and communication technology (ICT) in Norwegian schools, covering students, teachers and school leaders. In the ITU Monitor 2009, the students were given an assessment to measure their digital literacy, in addition to questions about when and how often they use ICT in school.
The aim of this paper is to identify and understand how digital literacy among 9th grade students is related to time spent on computers at school and outside school, family background and priorities from the schools. The findings from the study reveal a large variation in digital literacy, which can indicate a digital literacy divide among the 9th grade students. Further, the results show how digital literacy is related to both school factors (e.g. access to computers and schools with an ICT-supportive climate) and home factors (e.g. family background and grades). Overall, the findings indicate how systematic factors at the individual and school levels have an impact on digital literacy.

Keywords: digital literacy · access · performance · family background · school management · ICT-supportive climate
In most developed countries, the emergence of information and communication technology (ICT) has had an important impact on the transformation of an industrial society into an information and knowledge society (Castells, 1996; Mioduser, Nachmias & Forkosh-Baruch, 2008). ICT has changed the way we organize our working lives (the use of technology eliminated certain jobs, but provided new jobs), plan our leisure activities (use of technology for games, listening to music and watching movies) and communicate with each other (using the Internet, cell phones, personal digital assistants (PDAs) or other devices).
The emergence of ICT has influenced how educational and teaching practices are prescribed and organized – the investment in infrastructure, changing the curriculum and adaptations of educational practices towards the new curriculum. The latest reform in the Norwegian school system by the Norwegian Ministry of Education and Research in 2004 introduced five basic skills and competence goals for all subjects in the school system, from 1st grade in primary school through upper secondary education. These five basic skills are as follows:
The ability to express oneself orally
The ability to read
The ability to do arithmetic
The ability to express oneself in writing
The ability to make use of ICT
The aim of this article is to identify and understand how digital literacy relates to time spent using computers at school and outside school, family background and priorities from the schools. (This article provides additional analysis of the results for the 9th grade (lower secondary school) from the ITU Monitor 2009 survey (Berge, Hatlevik, Kløvstad, Ottestad & Skaug, 2009), among students and school leaders in the 9th grade. The ITU Monitor is a longitudinal quantitative survey of the use of and attitudes towards ICT by students, teachers and school leaders in the 7th grade, the 9th grade and the 2nd year of upper secondary school.)
Many different notions, concepts and terms, such as digital literacy, digital competence, digital skills, digital tools, 21st century skills and IT skills, are used to describe how students can take advantage of ICT and digital technologies in learning activities. The approach of this paper is to build on digital literacy in order to understand «the knowledge and mastery required to use ICT (…) to function in an information society» (Arnseth, Hatlevik, Kløvstad, Kristiansen & Ottestad, 2007, p. 36).
Digital literacy is a complex and multifaceted term (Mioduser et al., 2008). There are different approaches to the term, and the literature contains both narrower and broader definitions of digital literacy (Gentikow, 2007, p. 47). According to Erstad (2008, p. 188f), digital literacy is related to the understanding of how to use ICT in ways that go beyond reading, writing and arithmetic. It covers more than words: sounds, pictures and combinations thereof, usually denoted multimodal texts (Arnseth et al., 2007, p. 37).
In this paper, a broader understanding of the term digital literacy will be used, compared with narrower concepts such as technical, IT or digital skills, moving from mastering a simple use of ICT to exploring and solving more complex problems and challenges: «(…) it has to do with consuming, processing, applying and producing knowledge and information disseminated through digital media» (Arnseth et al., p. 36–37).
The term digital literacy provides us with a framework to describe and identify how students use information, share information and develop knowledge in order to benefit from ICT and digital media. However, a more functional approach to digital literacy includes students' need to have a critical approach to ICT and to make use of ICT in their learning activities. Arnseth et al. (2007) stated that digital literacy is more than operational skills: «We seek to understand how pupils find, understand and assess information critically and constructively» (Arnseth et al., 2007, p. 37).
Putting digital literacy into practice involves going through the steps from a general term such as digital literacy, to what we believe the term embodies (which in turn can be specified), to specific questions that shed light on the indicators (Arnseth et al., 2007, p. 41). One hallmark of this process is to formulate a more scholastic digital literacy that takes into account how the use of ICT is described in the competence goals of the curriculum, according to the Norwegian Directorate for Education and Training (2010). However, an obstacle to working with the competence goals is their rather vague formulations and descriptions of how to use ICT (Erstad & Quale, 2009; Krumsvik, 2008).
Empirically, digital literacy has been measured both with self-report questions (Arnseth et al., 2007) and with multiple-choice questions (Berge, Hatlevik, Kløvstad, Ottestad & Skaug, 2009). The aim of these studies has been to identify digital literacy as it relates to the curriculum and the conditions that ensure a sustainable development of digital literacy in the Norwegian school system.
The ITU Monitor 2007 tried to balance an understanding of digital literacy as something general, on the one hand, with an ability to function in different types of media-rich surroundings on the other. The survey used self-report items (according to Christensen & Knezek (2008), self-report measures are often used to gather information about ICT attitudes and ICT competencies) built by an international panel on an assignment for the Educational Testing Service, which operationalized digital literacy to include the concepts of access, management, integration, elaboration and creation.
In the ITU Monitor 2009, an assessment was used to reveal the students' level of digital literacy within selected areas (developed by the International Society for Technology in Education (ISTE) in the National Educational Technology Standards for Students), mainly basic knowledge of ICT and problem-solving with ICT, along with ethical considerations, communication and the use of multiple sources. A selection of areas was required both to find the areas of digital literacy most appropriate for assessment, and because an assessment covering all types of digital literacy would be too comprehensive. According to Berge et al. (2009, p. 6), the sum of the answers is not an absolute measure of digital literacy, but it provides a good empirical indication of the students' level of knowledge.
Overall, the experience from operationalizing digital literacy in the ITU Monitor surveys of 2007 and 2009 illustrates the importance of a broad understanding of the concept and of avoiding a more narrow and instrumental definition of digital literacy.
The Organisation for Economic Co-operation and Development (OECD) (2006) uses the notion of access as an indicator to identify and compare how far countries have adopted and used ICT. Access to computers can be defined as whether or not students have physical access to a computer. According to Pedró (2007, p. 7), the first digital divide is related to access to technologies; for example, the difference between students or groups of students having access to computers.
In the past decade, the Norwegian Ministry of Education and Research has invested heavily in infrastructure and technology in order to implement ICT in schools at all levels. The Ministry depends on how the local municipalities, as owners of the primary schools, and the county municipalities, as owners of the upper secondary schools, prioritize the use of available resources. The statistics indicate differences between municipalities and school types: primary school versus lower secondary school versus upper secondary school. The number of students per computer is one indicator of the implementation of ICT in schools (OECD, 2006). The number of students per computer in 2009 was lower than four or five years earlier. In 2009 the national average was 3.46 students per computer in primary schools (Directorate, 2009), but municipal averages varied from 1.0 to 8.4 students per computer.
One limitation of defining access as the number of students per computer is that this definition does not take into account the quality of the technology (e.g. great differences between old and new computers). Moreover, access can be defined in a more multidimensional way, as access to computers when needed (a problem with a question about access when needed is that even if computers are rarely used in teaching practice, the students can still experience access when needed) or as access to computers that meet the demands of the tasks to be completed. The ITU Monitor 2009 (Berge et al., 2009) revealed differences in the availability of PCs among school levels: students in upper secondary school reported slightly easier access to computers and more frequent use of ICT at school than 7th and 9th grade students. For example, 87% of the students in their second year of upper secondary school (VG2) and 80% of the students in the 9th grade (Berge et al., 2009) reported that, in whole or in part, they had access to a computer at school when required. One possible explanation for these differences could be that the county municipalities have made greater efforts than the local municipalities to follow up the national intentions and plans.
Time spent using ICT and school performance
The OECD (2006) used time spent using a computer at a specific place (such as at school, at home or with friends) or on a particular activity (using a spreadsheet, communicating, playing music, programming) as an indicator of the level of use and adoption of ICT among students.
The OECD (2006) measured how frequently students used computers to conduct a variety of tasks (e-mail, the Internet, programming) and how confident they were in carrying out these tasks. In its analysis, the OECD clustered the results from the questions about activities into two major indexes: 1) ICT Internet/entertainment use and 2) ICT program/software use. The OECD revealed a complex and non-linear relationship between ICT use and academic performance in reading and science. Moreover, the results indicated that pupils reporting moderate use of ICT had higher scholastic achievement levels in reading and mathematics than pupils reporting high or low use of ICT. However, a closer examination of the findings (OECD, 2006, Figure 4.6, p. 65) shows that performance in mathematics related differently to the index of ICT Internet/entertainment use than to the index of ICT program/software use: performance in mathematics was approximately 0.2 standard deviations better for moderate than for frequent ICT Internet/entertainment users, but only approximately 0.05 standard deviations better for moderate than for frequent ICT program/software users.
Overall, a difference of 0.05 standard deviations is rather small, and one major question is whether this difference was constant across all the items in the index of ICT program/software use.
Nevertheless, these findings (OECD, 2006) indicate that the amount of time spent using ICT cannot be taken for granted as indicators of what students are learning or if they are learning as they use ICT. It is necessary to gather more detailed information about what students are doing and how they are benefiting from using technologies as part of their learning activities (Berge et al., 2009).
Previous studies (OECD, 2006; Fækjær & Birkelund, 2007) have shown that a student's family and social background can be relevant to, and influence, his or her academic achievement and performance. Family background can also be relevant to a student's use and understanding of ICT.
The OECD (2006) has developed and used indicators of socio-economic status, including educational level and economic status. The results show how socio-economic status is related to performance in school subjects and to ICT leisure activities. According to Pedró (2007, p. 7), the second digital divide is linked to the uses given to digital technologies. He suggests that socio-economic status is important in generating different approaches to digital technologies and services (ibid.).
The ITU Monitor 2007 revealed the importance of family background, including the parents' educational level and the number of books at home, for digital literacy. Results from the survey indicated that integration and creation, as indicators of digital literacy, had a positive correlation with parents' education levels and books at home (Arnseth et al., 2007).
Schools with a supportive ICT climate
According to Law, Pelgrum & Plomp (2008), one major objective of the international SITES 2006 study, which involved more than 20 countries, was to identify how teachers and schools organized their teaching, what ICT facilities the schools provided, how teachers used ICT in their pedagogical practice, and the problems and obstacles teachers experienced in relation to technology.
Both SITES (Ottestad, 2008a) and the ITU Monitor 2007 (Arnseth et al., 2007) showed that there are differences in how school leaders and teachers prioritize and use ICT in educational practice. Ottestad (2008b) listed relevant indicators of digitally competent schools. However, the link between school leaders' choices, digitally competent schools and students' use of ICT needs to be elaborated.
The SITES study (Ottestad, 2008a) does not show any relationship between school leaders and students' performance. However, more qualitative studies, in which groups of teachers and schools are followed in a longitudinal project of learning networks (Lærende nettverk) (Berge et al., 2009), reveal the importance of how school leaders prioritize the school's work with ICT. Further, the ITU Monitor 2009 (ibid.) also highlighted differences between school levels, and the results showed variations among schools at the same level. Therefore, it seems important to determine whether there are any hallmarks that identify schools with a supportive ICT climate.
The purpose of this article is to answer the following two research questions:
What are the relationships among digital literacy, access, time students spend on the computer, grades and family background?
Do any of the following factors affect digital literacy: access to computers, time on the computers, family background, grades and how the school leaders prioritize the use of ICT in their school?
Procedures and participants
The information analysed was collected as part of the ITU Monitor 2009. The selection of schools was drawn on information provided by the Directorate. Synovate Norway (formerly MMI), a market research firm, handled the practical selection and contacted the schools. The selection was stratified according to region, school type and number of pupils. The survey was conducted using a web-based questionnaire administered by the schools.
The number of classes selected was 126. Analysis shows that 63% of the school leaders answered the questionnaire, as did students from 49% of the schools. Overall, 682 students (356 females) and 80 school leaders from the 9th grade answered the survey.
The student questionnaire was developed on the basis of the ITU Monitor 2007, PISA ICT 2003 (OECD, 2006) and PISA ICT 2006 (OECD, 2005) and questions from the digital assessment test in the municipality of Oslo. The questionnaire had 70 items, and data from some of these questions formed the basis for this article. The school leader questionnaire was developed from ITU Monitor 2007 and experiences from the SITES study (Ottestad, 2008a).
The students were asked whether they had access to a computer at school when needed. They were also asked about their use of computers: time spent on computers at school and time spent on computers at home.
The students were asked about their parents' education levels, from no education to higher education, and about the number of books at home, coded as 1 = no books, 2 = 1–100, 3 = 101–500 and 4 = 501 or more books.
They answered questions about their grades in five subjects: Norwegian, English, social science, mathematics and science, where 1 was the lowest grade and 6 the highest. The study concluded with a test in digital literacy, designed as a multiple-choice test in which each item consisted of one statement and four possible answers, one true and three false.
School leaders were asked questions about how they prioritized ICT, how they perceived the integration of digital learning resources in teaching practices and how they perceived the collaboration between students and teachers. The answers to these questions were incorporated into an indicator labelled schools with an ICT-supportive climate.
A test was developed to measure and identify digital literacy. The test focused on two relevant forms of competence, basic ICT and problem-solving with ICT, and three other themes: ethical assessments, use of multiple sources and communication. Emphasizing these themes contributed to a broader understanding of digital literacy. The sum of the answers is not an absolute measure of digital literacy, but it provides a good empirical indication of the students' level of knowledge (Berge et al., 2009).
All statistical analysis was conducted using Statistical Package for the Social Sciences (SPSS) version 16.
The questions from the assessment were analysed by studying the p-value (the proportion of correct answers, i.e. item difficulty) for each question and the correlation between each question and the total score of the test.
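This kind of classical item analysis can be sketched as follows. The sketch is an illustrative Python reconstruction (the original analysis was run in SPSS), and the `item_analysis` helper is hypothetical, not part of the study; the p-value here is the item difficulty, i.e. the proportion of students answering correctly.

```python
import numpy as np

def item_analysis(responses):
    """responses: 2-D array (students x items) of 0/1 scores.

    Returns each item's p-value (proportion of correct answers,
    i.e. item difficulty) and its corrected item-total correlation
    (correlation with the total score excluding the item itself)."""
    responses = np.asarray(responses, dtype=float)
    p_values = responses.mean(axis=0)
    totals = responses.sum(axis=1)
    item_total_r = np.array([
        np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
        for i in range(responses.shape[1])
    ])
    return p_values, item_total_r

# Toy data: 4 students, 3 items
p, r = item_analysis([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])
print(p)  # item difficulties: 0.75, 0.5, 0.25
```

Items with very high or very low p-values are then flagged as too easy or too difficult, which is the logic behind removing the three hardest items discussed below.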
Bivariate correlations were computed to examine the relationships among digital literacy, grades, family background, motivation and time spent on computers per week.
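As a sketch of the bivariate correlation step, the following uses a hypothetical mini-sample (the values are invented for illustration, not taken from the survey) to produce a Pearson correlation matrix of the kind reported in Table 2:

```python
import numpy as np

# Hypothetical mini-sample: rows are students; columns are digital
# literacy score, weekly hours on the computer at home, and average grade.
data = np.array([
    [7, 4, 4.2],
    [5, 2, 3.1],
    [9, 5, 5.0],
    [6, 3, 3.8],
    [8, 4, 4.6],
])

# Pearson correlation matrix across the three measures
corr = np.corrcoef(data, rowvar=False)
print(round(corr[0, 1], 2))  # literacy score vs. hours at home: 0.97
```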
Multilevel analysis was used to identify differences in the test results among students in the 9th grade, in order to answer the second research question.
The means and standard deviations for all measures are reported in Table 1. The first question in Table 1 is a statement about access to computers at school. Four out of five students agreed with the statement that they had access to a computer at school when needed. The next two questions in Table 1 were about time spent on computers at school and at home. The results indicated more frequent use of computers at home compared with use at school, and these findings are consistent with the previous findings in the ITU Monitor 2005 and 2007 (Arnseth et al., 2007).
Table 1: Means and standard deviations for access to computers, hours on computers, digital literacy, family background and grades

| Measure | Mean | SD |
| --- | --- | --- |
| Access to computers at school | 3.04 | 0.83 |
| Hours on computer at school (weekly) | 2.34 | 1.01 |
| Hours on computer at home (weekly) | 3.68 | 1.48 |
| Father's level of education | 3.47 | 0.76 |
| Mother's level of education | 3.57 | 0.65 |
| Books at home | 2.41 | 0.72 |
The average score on the test in digital literacy was 6.88 points on the 14 questions (approximately 49% correct). Initially, the test consisted of 17 questions, but item analysis revealed that three items were too difficult for the students, and they were therefore removed. Overall, the test has some relatively easy questions (p > 0.70) in combination with many rather difficult questions (p < 0.43). From a traditional test-theoretical perspective (Crisp, 2007), the test could be better balanced with more moderately difficult questions (p-values between 0.45 and 0.60). In retrospect, the test seems too difficult for students in Year 9, even after the three most difficult items were removed, and the results from the ITU Monitor 2009 showed the test to be more suitable for students in the second year of upper secondary school (VG2).
Bivariate correlations between all the measured concepts were computed and are presented in Table 2. Access is significantly positively related to digital literacy and to ICT-supportive schools. Time on computers at school is significantly positively related to time at home and to an ICT-supportive school. Time at home is significantly positively related to digital literacy, grades and an ICT-supportive school.
Table 2: Correlations of access, time spent on computers, digital literacy, family background, grades and ICT-supportive schools

| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1. Access | 1 | | | | | | | | |
| 2. Time school | .24** | 1 | | | | | | | |
| 3. Time home | –.02 | .19** | 1 | | | | | | |
| 4. Digital literacy | .07* | –.01 | .16** | 1 | | | | | |
| 5. Father's education | –.05 | –.08 | –.02 | .18** | 1 | | | | |
| 6. Mother's education | –.06 | –.04 | .03 | .16** | .51** | 1 | | | |
| 7. Books at home | –.09** | .02 | .06 | .18** | .15** | .22** | 1 | | |
| 8. Grades | | | | | | | | 1 | |
| 9. ICT-supportive school indicator | .15** | .27** | .11** | .15** | .05 | .08 | .10** | .07* | 1 |

Note: * p < 0.05 and ** p < 0.01
High scores on the assessment of digital literacy are significantly positively related to higher grades, higher levels of father's and mother's education, and a larger number of books at home. Digital literacy is also positively related to the use of ICT at home and to the factor labelled ICT-supportive school. Moreover, higher grades are significantly positively related to higher levels of parents' education and the number of books at home.
The results from the multilevel analysis are reported in Table 3. Multilevel analysis is essentially regression (Bickel, 2007), measuring the relationship between a dependent variable (digital literacy) and several independent variables. The benefit of using multilevel analysis is that it takes «into account that students from the same school can have a number of things in common compared with students from other schools» (Berge et al., 2009, p. 12).
Table 3: Results from multilevel analysis with score in digital literacy as the dependent variable (9th grade)

| Fixed effect estimates | Model 1 (baseline) | Model 2, estimate (SE) | Model 3, estimate (SE) |
| --- | --- | --- | --- |
| Grades (average from five subjects) | | 0.84** (0.11) | 0.81** (0.11) |
| ICT-supportive climate at school | | | 0.41* (0.18) |
| Books at home | | 0.26* (0.11) | 0.27* (0.11) |
| Time spent on computer at home | | 0.22** (0.05) | 0.22** (0.05) |
| Access to computer at school | | 0.24* (0.09) | 0.20* (0.09) |
| Predicted variance at the pupil level | | 11.59% | 15.1% |
| Predicted variance at the school level | | 100.00% | 96.55% |
| Predicted variance, total | | 16.20% | 19.55% |

Note: * p < .05 and ** p < .01. All items and indicators are centred, except the indicator of ICT-supportive climate at school.
Table 3 consists of three models. Model 1 is the baseline model, including only the scores from the assessment in digital literacy and the schools. In Model 1, approximately 5.4% of the variation can be explained by differences among schools, and approximately 94.6% by differences among students.
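The 5.4% / 94.6% split can be understood as a decomposition of the total variance in test scores into a between-school and a within-school component. The following is a simplified numpy sketch of that idea (a plain one-way ANOVA decomposition on invented toy data, not the exact estimator used in the SPSS multilevel models):

```python
import numpy as np

def variance_partition(scores, school_ids):
    """Split the total sum of squares in test scores into the share
    attributable to differences among schools (between) and the share
    attributable to differences among students within schools (within)."""
    scores = np.asarray(scores, dtype=float)
    school_ids = np.asarray(school_ids)
    grand_mean = scores.mean()
    between = within = 0.0
    for s in np.unique(school_ids):
        group = scores[school_ids == s]
        between += len(group) * (group.mean() - grand_mean) ** 2
        within += ((group - group.mean()) ** 2).sum()
    total = between + within
    return between / total, within / total

# Toy example: two schools whose students differ only between schools,
# so all variance is at the school level
b, w = variance_partition([1, 1, 3, 3], [0, 0, 1, 1])
print(b, w)  # 1.0 0.0
```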
Both Model 2 and Model 3 are promising. The main difference between the two lies in the selection of questions for the school leaders. In Model 2, the school leaders were asked whether they give priority to using external courses or lecturers (as previously presented in the ITU Monitor 2009), and in Model 3 the school leaders were asked several questions about the level of the ICT-supportive climate at their schools. Model 2 offers a higher predicted variance between schools than Model 3 (100% in Model 2, 96.55% in Model 3). However, Model 3 provides a higher predicted variance between students and a higher total predicted variance (19.55% in Model 3 vs. 16.20% in Model 2).
Model 3 shows that grades, books at home, time on the computer at home, access to a computer at school when needed, and an ICT-supportive climate at school all affect digital literacy. A change of one grade point yields a 0.81-point change in test score; a change of one unit in the books factor yields a 0.27-point change; a change of one level in computer time at home yields a 0.22-point change; a change of one unit in experienced access to a computer yields a 0.20-point change; and a change of one unit in the ICT-supportive school indicator yields a 0.41-point change in test scores.
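Reading Model 3's fixed effects as a linear predictor, the expected change in test score for given changes in the predictors is simply the coefficient-weighted sum. A small illustrative sketch (the coefficients are those reported for Model 3 in Table 3; the `predicted_change` helper and the variable names are hypothetical):

```python
# Fixed-effect estimates from Model 3 (Table 3)
coef = {
    "grades": 0.81,
    "books_at_home": 0.27,
    "time_at_home": 0.22,
    "access_at_school": 0.20,
    "ict_supportive_school": 0.41,
}

def predicted_change(deltas):
    """Expected change in the digital literacy test score for the
    given unit changes in the predictors (linear fixed effects)."""
    return sum(coef[name] * change for name, change in deltas.items())

# E.g. one grade point higher plus one more unit of computer time at home:
delta = predicted_change({"grades": 1, "time_at_home": 1})
print(round(delta, 2))  # 1.03
```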
Overall, the results from the study show that digital literacy is positively related to the academic performance of the students (grades), the home situation of the students (e.g. family background and time on the computer at home) and the school (e.g. access to computers and schools with an ICT-supportive climate).
The results show that students who meet the standards in five different subjects also meet the demands of the competence goals in digital literacy. The average grade across five subjects seems to be the most important factor influencing digital literacy. One reason could be that students with positive experiences and high self-confidence from their academic performance have an advantage in the assessment of digital literacy compared with students with negative experiences and low self-confidence from previous academic assignments. Another reason could be that the assessment of digital literacy is academically oriented, because the questions were developed to identify digital literacy as described in the curriculum.
Digital literacy has a positive relationship with time spent on computers at home, parents' education and the number of books at home. The multilevel analysis reveals that books at home and use of ICT at home have a positive impact on digital literacy. The importance of family background for digital literacy is consistent with its importance for academic performance (Directorate, 2008). One explanation could be that parents are often role models for their children: when parents emphasize the importance of education, orally or through their attitudes, this influences how the children perceive and value school, including its competence goals.
The digital divide between students
The statistics from the Directorate (2008) show that the availability of computers is rapidly improving in Norwegian schools (the number of students per computer provides quantitative information, but no data about the quality of the available infrastructure and computers in the schools), and this indicates a decrease in the first digital divide, access to technologies. However, there are still variations among the schools, and the results from the ITU Monitor 2009 revealed that approximately 20% of the students in the survey did not have access to computers when needed. These findings support the need to continue focusing on students' access to technologies. School leaders and school owners need to emphasize the implementation of ICT equipment in their schools.
The availability of equipment is not in itself enough to avoid differences in how students use ICT and benefit from using ICT in school. In order to prevent the emergence of a second digital divide (Pedró, 2007), school leaders and school owners have to emphasize the educational and administrative use of ICT in schools (Cuban, 2001).
The digital literacy divide – the differences in how students are able to make use of ICT in their learning activities – is related to differences in socio-economic status among the pupils (Pedró, 2007). The results from this study – that is, the differences in students' ability to answer correctly an assessment of digital literacy developed from the competence goals in the curriculum – show that digital literacy has a positive correlation with family background. High scores on the assessment of digital literacy correlate with highly educated parents and many books at home.
According to Brown & Duguid (2002), new technologies are often met with a very optimistic perception of the benefits of their implementation, including changed social structures and greater equality between students. However, one important finding from this study is that ICT does not seem to remove the impact of family background. It seems to be more difficult for students whose parents have lower levels of education to benefit from using ICT in school than for students whose parents have higher levels of education. Overall, the findings from this study indicate that ICT in school reinforces the impact of students' background on the performance level in digital literacy as defined by the school system and the national curriculum. Therefore, it is necessary to maintain a critical perspective on the disadvantages of implementing new technologies in schools (Brown & Duguid, 2002).
Variations in digital literacy among schools
The basic assumption of a multilevel model (Bickel, 2007) is that a parameter – here, digital literacy – varies at two or more levels (individual, class, school, region). In this study, multilevel analysis was conducted to examine to what extent variations in the digital literacy scores can be attributed to differences among students or to differences among schools (a model with two levels).
The multilevel analysis of the baseline model, consisting of the students' digital literacy scores and their schools, reveals variations in digital literacy scores. The results show that a smaller percentage (approximately 5.4%) of the variation in the digital literacy scores can be explained by differences among schools (Berge et al., 2009, p. 15); the larger part of the variation can be explained by differences at the individual level.
Further multilevel analysis of other models, including more information about the students and the schools, shows high levels of predicted explained variance among the schools and rather low levels of predicted explained variance among the students (Table 3). The schools appear more equal and similar when controlling for grades, books, time at home, access to computers and an ICT-supportive climate at school (Model 3 in Table 3).
Overall, it seems that the most important explanations of how students attain digital literacy are connected to systematic factors at the individual level and characteristics of the students, rather than to factors at the school level. However, the results from the survey indicate that factors at the school level contribute to reaching the goals of the national curriculum, depending on how school leaders prioritize the use of resources for ICT and how they support teachers in implementing technology in their teaching practices (e.g. by developing an ICT-supportive climate). Therefore, it is important to examine factors at the school level more closely (e.g. the indicator of an ICT-supportive climate) to identify how schools influence students' possibilities for developing digital literacy.
The findings from this study support the need for further research on digital literacy. It is important to continue using the term digital literacy, and to develop sustainable indicators to identify, describe and understand digitally literate students and digitally literate schools.
The results indicate that some systematic factors at the individual level have an impact on how students acquire digital literacy. However, the models from the multilevel analysis (Table 3) explain a relatively modest percentage of the variance within schools. Therefore, it is important to identify other factors at the individual level that influence digital literacy.
Furthermore, the experience of access at school and time spent on computers at home were positively related to digital literacy. This underlines the importance of how students are asked about their access to and use of ICT. The OECD (2006) found differences between the amount of time spent on the Internet/entertainment and the time spent on ICT programs/software. Therefore, it would be interesting to know whether the use of certain digital tools or resources is related to performance. It might also be necessary to carry out studies that identify how students use digital learning resources when they participate in certain learning activities at school, and whether any steps or choices in the learning process are related to, or have a positive impact on, students' academic performance and digital literacy.
The results also indicate that factors at the school level are important; however, a closer examination of these factors is required. For example, there are obstacles with Model 2 and Model 3 from the multilevel analysis (Table 3). An ICT-supportive climate at school is a factor in Model 3, whereas the single item "gives priority to using external courses or lecturers" is a factor in Model 2. One major obstacle with this item is that teachers report being most satisfied with in-house training when developing their own digital competence. One obstacle with the indicator of an ICT-supportive climate is that it combines several questions (e.g. about school leaders' priorities, collaboration and integration), so we do not know the unique contribution of each item to digital literacy. Therefore, a closer elaboration of school-level factors is required in further studies.
Overall, the analysis of the results from the 9th grade students reveals the importance of grades, the number of books at home, access to computers and an ICT-supportive climate for digital literacy. The combination of several particular themes underlines the complexity of identifying digitally literate schools and students (Mioduser et al., 2008).
In this paper, digital literacy has been measured using a multiple-choice test consisting of questions on basic ICT, ethical assessments, the use of multiple sources, communication and problem-solving with ICT. There are several important challenges in the process of identifying and describing digital literacy: 1) to adopt a broader perception of digital literacy, ranging from demonstrating digital skills, such as the use of specific software, towards production, ethical judgement, critical thinking, collaboration and creativity; 2) to prevent assessment-driven teaching practices, for example by emphasizing the assessment of digital literacy as a formative evaluation; and 3) to ensure that the identification and understanding of digital literacy is theory driven and not defined solely by what can be measured quantitatively.
1. This article provides additional analysis of the results for 9th grade (lower secondary school) from the ITU Monitor 2009 survey (Berge, Hatlevik, Kløvstad, Ottestad & Skaug, 2009). The ITU Monitor is a longitudinal quantitative survey of the use of and attitudes towards ICT among students, teachers and school leaders (7th grade, 9th grade and 2nd year of upper secondary school).
2. Among students and school leaders from lower secondary school in 9th grade.
3. According to Christensen & Knezek (2008), self-report measures are often used to collect information about ICT attitudes and ICT competencies.
4. These areas were developed by the International Society for Technology in Education (ISTE) in The National Educational Technology Standards for Students.
5. A problem with a question about access when needed is that, even if computers are rarely used in teaching practice, the students can still experience access when needed.
6. Synovate Norway (formerly MMI) is a market research firm.
7. All items and indicators are centred, except the indicator of an ICT-supportive climate at school.
8. The notion of students per computer provides quantitative information, but no data about the quality of the available infrastructure and computers in the schools.