
Mind the Gap: Divergent Objects of Assessment in Technology-rich Learning Environments



Faculty of Arts, Folk Culture, and Teacher Education, Telemark University College, Norway toril.aagaard@hit.no



Department of Teacher Education and School Research, University of Oslo, Norway andreas.lund@ils.uio.no

Computers change the conditions under which learners complete assignments. Based on interviews with teachers from upper secondary school, we examine the strategies they develop to deal with challenges emerging from this contextual change. The results indicate the emergence of divergent teacher strategies that seem intimately linked with fundamental assumptions about learning and knowledge. We interpret these differences as expressions of divergent objects of assessment developing among teachers. If we want to design assessment practices with ecological validity in the networked society, such gaps are important to examine, discuss, and act upon.

Keywords: assessment, digital technologies, tertiary contradictions, discursive manifestations

Introduction: setting the scene

In a significant editorial on the thematic issue “Assessment for the digital age,” McFarlane (2003) claims that the conventional ways of performing assessment seem to disturb and delay the development of technology-mediated learning practices. Since then, the explosion of new social network technologies can be said to have highlighted an “awkward relationship between new ‘21st century’ media practices and existing educational systems” (Hickey, Honeyford, Clinton, & McWilliams, 2010, p. 107). Eight years after McFarlane’s editorial, Selwyn (2011) questions the high hopes for technology in education. Claiming that popular and political discourses about education and technology tend to be reduced to concerns about whether technology is a “good” or a “bad” thing, he recommends that we go beyond observing what happens and focus on the contradictions, compromises, and conflicts that are the underlying mechanisms explaining educational practices. He suggests that this can be done by studying issues such as how the “lived” experience of teachers and students influences their use of technology (Selwyn, 2011, p. 37).

We believe that Norwegian upper secondary schools represent an interesting research arena for what Selwyn describes. Since the late 1980s, the Norwegian government has paid special attention to assessment in education – an interest reinforced by the PISA results published in 2000 (Nusche, Earl, Maxwell, & Shewbridge, 2011). Also, the 2006 Knowledge Promotion Curriculum (K06) listed digital competence as a basic competence on par with reading, writing, oral proficiency, and numeracy. The reform initiated the development of educational infrastructures, and today all Norwegian students in upper secondary schools have access to digital tools, even during exams. Allowing students to use computers changes the conditions under which learners show their capacity to complete assignments. This broad effort to promote the use of technology in Norwegian upper secondary schools – and in assessment – gives reason to believe that the teachers working there can provide useful insight into their lived experiences of coping with the transformed contexts for assessment. Against this backdrop, we ask:

  • What are some of the strategies teachers develop in order to meet the challenges emerging from students having access to computers while responding to tasks?

This article reports on a study of how Norwegian teachers approach issues of learning and assessment in educational environments in which all students have their own laptops. We have analyzed how 13 teachers at two Norwegian upper secondary schools attempted to come to terms with assessment issues in this transformed context.

Cultural Historical Activity Theory (CHAT) (Engeström, 1987; Engeström & Sannino, 2011) provides concepts for studying the development and transformation of activities in their institutional contexts and has therefore been chosen as the theoretical framework. Valid assessment practices have developed over historical time and have become institutionalized. However, when such practices meet challenges that emerge in current and, thus, much shorter time scales, contradictions arise. CHAT offers a conceptual framework, as well as a methodological approach to unpacking and understanding such contradictions.

Before describing theoretical concepts in more detail, we first review the emerging research on assessment and ICT, in order to relate the present study to the research field. Next, we present an empirical analysis of the discursive manifestations of contradictions (Engeström & Sannino, 2011). Finally, we discuss the findings and suggest issues for further research.

Traditional and emerging assessment practices

In 2009, Erstad wrote in this journal that “existing models of assessment are typically at odds with the high-level skills, knowledge, attitudes and characteristics of self-directed and collaborative learning that are increasingly important for our global economy and fast-changing world” (pp. 207–208). He calls for the development of new types of assessment designed to engage students using technology while responding to complex real-world tasks. However, studies of such cultural transformation should be “analyzed as historical products which themselves are subject to dynamic transformation and change as people act within and on them” (Daniels, 2012, p. 9). It is therefore essential to shed light on both traditional and emerging assessment practices.

Traditionally, assessment has been intended to capture pupils’ achievements in a delimited subject area, at a certain point in time (Assessment and Learning Research Synthesis Group, 2003). Such point-in-time assessment is a historically powerful practice linked to accountability and selection mechanisms (Labaree, 1999). Designed to prove student-learning outcomes and maintain construct validity and reliability, tests with only a few controllable variables are often favored.

Since the late 1990s (e.g., Black & Wiliam, 1998), more learning-oriented assessment practices have emerged. The focus is on where learners are in their learning trajectories, where they need to go, and how best to get there (Kreisberg, 1992). A particular example of such a formative assessment practice is Dynamic Assessment (DA) (Poehner & Lantolf, 2005), which builds on Vygotsky’s (1978) zone of proximal development. In DA, instruction and assessment are treated as a dialectic unit. Assistance may come from peers, experts, or cultural artifacts, such as digital technologies. Building assessment practices based on sociocultural learning perspectives implies the recognition of knowledge as shared and developed through collective and mediated processes of meaning-making (Daniels, 2009; Vygotsky, 1978). Griffin, Care, and McGaw (2012) claim that nothing is too hard to measure. Success merely depends on how measurement is defined and how we organize situations in which learning occurs and can be made visible. But due to a lack of experience with how to assess collaborative and interactively constructed learning (Scardamalia, Bransford, Kozma, & Quellmalz, 2012), such learning is often considered difficult to assess. However, some researchers explore, for example, how specific computer games might allow for the participatory assessment of inquiry learning (Hickey et al., 2010) or how software can be designed to trace individual, as well as collective, learning trajectories (Lund, Rasmussen, & Smørdal, 2009). The teachers we interviewed reflect some of the concerns listed above as they try to design tasks that include learners’ use of blogs, Internet discussions, and games.

In order to review research focusing on how technology influences teachers’ assessment practices, we searched the Academic Search Premier and ERIC databases for peer-reviewed articles published between 2005 and 2012. For both databases we used the following keywords to delimit the results: “assessment”, “technology”, and “teacher”. The term “secondary school teachers” was added for the ERIC database. This search left us with eleven relevant articles. One text focused on using assessment criteria as a learning enhancement tool (Loveland, 2005). Eight show how the use of specific technologies in assessment has a didactic impact (Cavanaugh & Dawson, 2010; Enriquez, 2010; Feldman & Capobianco, 2008; Krucli, 2004; Murphy & Rodriguez-Manzanares, 2009; Peake, Duncan, & Ricketts, 2007; Savage & Fautley, 2011; Savenye et al., 2003). One article (Straub, 2009) analyzes the advantages and shortcomings of research on the adoption of computers in education. Straub also questions what influences an individual’s tendency to adopt a technology. He finds that the affective aspects of technology development and teacher responses to change are often neglected. He suggests that future research on technology adoption should “examine the consequences of technology to create a holistic understanding of how technology change influences the organization and the individual” (p. 645), which is also an issue in the present study. Overbay, Patterson, Vasu, and Grable (2010) investigate the relationship between levels of teachers’ constructivist commitment and their reported use of technology. The results indicate that constructivist practices and beliefs influence teachers’ use of technology. This resonates with findings in the present study (cf. the analysis and discussion sections that follow). These two latter articles call for more in-depth studies to reveal how teachers’ perspectives and affective responses relate to the implementation or use of technology for educational purposes.

Analytical framework for studying challenges and response

For Leont’ev (1978) and Engeström (1987), there is no activity without an object. This makes the concept of the object essential in a CHAT study of assessment. The object is what gives direction to the activity – its true motive. An institutional object of activity is considered to be multi-faceted, always including multiple interpretations and voices that invite a multitude of possible actions. A teacher’s task is to evaluate the quality and value of student processes and productions. Consequently, valid assessment is the object of their activity. However, actors construct the real meaning of valid assessment as they make sense of their actions and activities. What gives our actions direction is affected during such processes, but it is also affected by history. This implies that the construction of objects does not happen on the spot, but rather that it is a longitudinal process.

From a CHAT perspective (Engeström, 1987), people’s engagement in developing and changing established institutionalized activities is understood as a response to emerging tensions. Such tensions make people question conventional and historical ways of doing things. Consequently, such tensions are seen as potential springboards for change that are important to identify and act on in institutional change efforts. Tensions are embodied in praxis and can therefore be studied. Contradictions, however, are systemic, embedded in history, and develop over time. Consequently, they cannot be studied directly. Thus, tensions indicate contradictions and can be traced by analyzing their discursive manifestations (Engeström & Sannino, 2011).

Engeström describes various types of contradictions (Engeström, 1987). Even though primary contradictions (e.g., teachers who disagree on a practice), as well as secondary contradictions (for example, between exam rules and regulations and the availability of digital resources), represent relevant approaches to analyzing our data, they can also be viewed as feeding into a tertiary contradiction, which we use as our analytical focus. Tertiary contradictions can be found between an activity system and a future, ideally more sophisticated version of that activity system. For example, if traditional ways of constructing the object (in our case, valid assessment) appear to be insufficient, new qualitative forms of activity might emerge as solutions to the contradictions of the preceding form. If that is the case, a renewed version of valid assessment develops and gives direction to the activities in a revised and hopefully improved activity system. This describes a tertiary contradiction that emerges as “invisible breakthroughs” – innovations from below (Center for Activity Theory and Developmental Work Research, n.d.). However, such innovations, initiated by, for example, individual teachers, may become so hard to introduce and integrate at an institutional level that those involved simply give up. Thus, even if contradictions are seen as springboards for change, it is important to note that need states do not necessarily trigger the expansive development of new practices: A need state “may be 'resolved' through regression or through expansion” (Engeström, 1987, p. 166).

In the empirical analysis that follows, we will show how the concept of tertiary contradiction can help examine how teachers cope with educational challenges connected with assessment when their learners use digital and networked technologies. However, we first briefly describe the context and discuss our methodological approach.

The context of the study and its methodology

Our empirical analysis is based on ten semi-structured interviews with individual teachers (30–90 minutes), two semi-structured group interviews involving six teachers (105–130 minutes), and one preparatory meeting between the interviewer (I), a counselor, and a teacher. The intent of the preparatory meeting prior to the interviews was to inform the teachers about the project and to be informed about discussions on assessment in a digital age. The teacher shared relevant experiences and reflections during this meeting; consequently, these experiences and reflections were included as data. All data were gathered in 2010 and 2011. This adds up to a total of 950 minutes of audio-recorded interviews, of which 760 minutes are transcribed. For the group interviews, the informants were asked to prepare, present, and discuss specific assessment issues with the aim of identifying what characterizes assessment practices that work well when ICT is integrated into schooling. All 13 informants teach Norwegian as a first language.

Teachers Andrew, Kim, Ingrid, and Michelle (names changed) work at Central upper secondary school. The school is located in an urban area characterized by a population with high socio-economic status. When the school opened in 2006, all students received their own laptops. On the web, the school is presented as future-oriented and adapted to modern pedagogical thinking and technology.

Teachers Therese, Henry, Tony, Ruth, Ann, Harris, Katherine, Bill, and Rose work at Sutherland upper secondary school, which is an old school located in a historically distinguished building in a smaller Norwegian city. According to its website, its vision is to be a school that provides knowledge and culture of a high academic quality in a safe and inspiring environment. Before the curriculum reform in 2006, all their students already had their own laptops. Even though this is not a comparative study, we include this contextual information, because we need to keep in mind that institutional cultures influence people’s practices and strategies (Olson, 2003; Schofield, 1995).

The selection of informants was based on their interest in acting as informants; thus, this was a case of purposive sampling (Oliver, 2006). This implies that the participants are not necessarily representative speakers of the larger teacher community. However, their reflections are hardly atypical and can be seen as empirical carriers of more general concerns.

The individual interviews started by asking the informants to describe their assessment practices, before elaborating on their reflections on and experiences with students having access to computers while responding to tasks. Through semi-structured interviews (Kvale & Brinkmann, 2009), we framed the exchanges on assessment issues but avoided pre-determined categories that might compel participants to ignore their local contexts or miss out on the discovery dimension. To prevent sending normative messages or constraining the talks, we mostly took the role of a listener (Boote & Beile, 2005) in both kinds of interviews. All interviews were characterized by informants who willingly shared their experiences and reflections.

The process of selecting, analyzing, and presenting qualitative data from studies of complex social realities is challenging, because it involves reductionism. Delimited categories with clear criteria for inclusion or exclusion rarely appear. Our analysis was done in two steps. To gain insight into the informants’ interests concerning student access to computers while responding to tasks, we first performed a content analysis, identifying the recurring issues. To understand the types of challenges they face, as well as the strategies they use to deal with these challenges, we then analyzed discursive manifestations of contradictions using the categories (Engeström & Sannino, 2011) presented below:

Table 1. Discursive manifestations of contradictions (Engeström & Sannino, 2011, p. 34)

Manifestation: Double bind
Features: Facing pressing and equally unacceptable alternatives in an activity system. Resolution: practical transformation (going beyond words).
Linguistic cues: “we”, “us”, “we must”, “we have to”; pressing rhetorical questions; expressions of helplessness; “let us do that”, “we will make it”

Manifestation: Critical conflict
Features: Facing contradictory motives in social interaction, feeling violated or guilty. Resolution: finding new personal sense and negotiating a new meaning.
Linguistic cues: personal, emotional, moral accounts; “I now realize that...”

Manifestation: Conflict
Features: Arguing, criticizing. Resolution: finding a compromise, submitting to authority or majority.
Linguistic cues: “no”, “I disagree”, “this is not true”; “yes”, “this I can accept”

Manifestation: Dilemma
Features: Expression or exchange of incompatible evaluations. Resolution: denial, reformulation.
Linguistic cues: “on the one hand... on the other hand”, “yes, but”; “I didn’t mean that”, “I actually meant”

These four types of manifestations do not come across as discrete units; overlaps and grey areas are common. Before we put the concepts to work in the analysis, the outcome of the content analysis will be presented.

Analysis and findings

Overview of the thematic issues brought up by the participants

The content analysis of the various interviews resulted in an overview of recurring issues brought up by the teachers. The number of the 13 teachers who spoke about each theme is listed in Figure 1 below:

Figure 1. Numbers of teachers speaking of the recurring themes

The thematic analysis presented in Figure 1 shows what the informants perceived to be important. However, it does not visualize the diverse and even conflicting opinions voiced on the same themes. From analyzing the informants’ reflections and articulated experiences across the themes, a pattern emerged in the shape of dichotomous views on the following four issues: concerns about and opportunities for learning, what counts as valid knowledge, teacher responsibilities in assessment, and the impact of the established exam system.

Discursive manifestations of tertiary contradictions

In this part of the analysis, we seek to present the teachers’ co-existing and dichotomous views referred to above and thus to show how the notion of discursive manifestations (Engeström & Sannino, 2011) of a tertiary contradiction materializes. Excerpts are selected and combined because they represent empirical carriers of a tertiary contradiction. Seen in combination, they make it possible to observe patterns in the discourse that give insight into the contradictions as underlying mechanisms explaining the assessment practices enacted in Norwegian classrooms today.

Concerns about and opportunities for learning

In one of the interviews, Therese (Th), a teacher from Sutherland, focuses on the learning outcomes of particularly weak students when they have access to computers during task work. Reflecting on student motives for copying and pasting material found on the Internet, she says:

Th: The weakest [learners] have problems making things on their own — they find so much great and fine and nice material — why should they make it their own and thereby reduce the quality [of the original material]? […] It is sad that they have to know so little.

Therese equates finding and using material from the Internet with copying. Phrases such as “problems making things on their own” and “It is sad that they have to know so little” indicate that she values knowledge as an attribute of the individual and, to use a Bakhtinian term, that students do not appropriate what they encounter; they merely master certain reproductive techniques. Appropriation is, indeed, a challenging process because it involves taking something from other people’s contexts and adapting it to one’s own, instilling one’s own intentions within it (Bakhtin, 2000). If students experience that it is possible to respond to assignments and “succeed” without having to invest themselves personally in the process, they would seem to enact Engeström’s observation that “The history of school is also a history of inventing tricks for beating the system, and for protesting and breaking out” (1987, p. 96). The interview with Therese is characterized by her speaking about the pros and cons of student access to computers, and even if she speaks of computers as being beneficial for many students, quotes like the ones above reflect personal and emotional involvement, which is typical of a critical conflict (Table 1).

Tony’s (T) experience from Sutherland contrasts with Therese’s. In the following excerpt, he shows how computer games can be used in innovative task designs:

T: I challenged a group on computer games: “Can you study the narrative patterns in computer games, then? Can you compare them with movies and novels?” And the three boys, [previously] all getting the grade 3 [slightly below average] all the time, they [now] produced some very good presentations. […] they had found articles on the Internet in which students at their own level had written some analyses. [...] And what I thought was good about these boys was that they found out what was interesting for their audience – students at their own age who didn’t know much about the topic […] And it resulted in a massive amount of questions from the girls. It was a lot of fun.

In many ways, Tony’s account can be read as a resolution of Therese’s critical conflict; we see a teacher “finding new sense and negotiating a new meaning” (cf. Table 1). Tony shows the possibilities of using computers to engage students in innovative, personal, meaningful learning processes that involve collectively oriented learning. His account indicates that these students benefit from tasks that let them work across contexts, on the Internet and in school, taking what interests them as a point of departure. Thus, when learners identify parallel plots in the computer games, they are challenged to do the same with a novel on their reading list. Building on learners’ interests and the social dimensions of the web, the combination of tasks expands the horizon of educational opportunities. In addition, Tony takes on responsibilities beyond that of providing knowledge:

T: (…) and then, uh… how can I relate this to the syllabus? […] ”Oh yes! It is like that!” Right? But you have to stretch out […]

Through Tony’s reflections, we see how he searches for solutions by developing assessment practices that are still aligned with the rules (the syllabus) of the established activity system. This is no easy effort. He has to “stretch out” after having a eureka moment: “Oh yes! It is like that!” Thus, Tony’s account reveals a “resolution [that] often takes the shape of personal liberation and emancipation” (Engeström & Sannino, 2011, p. 374), another indicator of a critical conflict.

During the interview, Tony points to affordances and shares experiences from tasks, in which he found technology to mediate engagement and learning, again emphasizing the liberating aspects of such experiences:

T: I use peer-based texts that they sit and read […] for example, when they interpret song lyrics. They choose their favorite song lyric, present it, present themes and translate it into Norwegian. I found several databases on the Internet where people of their own age had interpreted and analyzed and… then, one [student] that I had never believed would enjoy interpreting poems, found a database with the group “The Killers” and a favorite song, which was “The Spaceman.” And then, there was this long forum string where they [young people] had interpreted this. And then, he understood the point about metaphors and a lot of literary techniques and things like that. And then, he stood there telling his colleagues, no... classmates... they almost fell to the floor... I never thought I would hear a boy like that say that, “Now, I understand the point of metaphors and things like that.”

Both his students’ surprise (“They almost fell to the floor…”) and his own surprise (“I never thought I would hear a boy like that say…”) come across as personal and emotional expressions. As he highlights how technologies can be used to bridge a gap between the school subject and the learners’ lives, he addresses the ecological validity (Barowy & Jouper, 2004) of educational tasks and activities, i.e. to what extent they match with practices outside school. Thus, Tony is involved in negotiating a new meaning, a typical response to critical conflicts (Engeström & Sannino, 2011).

Both Therese and Tony highlight that a challenge emerging from student access to technology is motivating students to invest in learning processes. Therese expresses serious concerns about how her less resourceful students might experience additional difficulties in a networked learning environment. Tony’s strategy is stretching out, developing new kinds of reflection-oriented tasks. Their different foci echo the contradiction between an existing and an emerging variant of an activity system.

What counts as valid knowledge?

Katherine (K) is an experienced teacher who is also from Sutherland. Early in the interview she describes the problems that emerged when computers were introduced to her school:

I: But with the technology, what went wrong?

K: Yes, it was then that this view of knowledge emerged: that it is not so important what you know anymore, but how you use it [technology]. You should just learn how to find it [information], so the subject in itself becomes less important than the method. […] I believe that they need a foundation, a professional base. If not, they won’t reach the top... of the taxonomy. So, therefore, I mean it’s wrong that you... that the students are whipped around with demands that they should discuss all the time, that that the only thing that is worth something… that they are discussing something. And it becomes... it just becomes nonsense when they miss out on basic knowledge. They must have something… they must know something to be able to use it further, and I believe that this permeates all subjects. And with access to technical aids during the exam, the result will be groups of learners that don’t know anything, that haven’t learned anything, haven’t understood anything, but they think that they know it, and the teachers pretend that they know it.

Katherine makes a connection between the use of computers and deteriorating factual knowledge. In addition, she questions the value of processes that require reflection and how such processes may cover up a lack of “basic knowledge.” We see a conflict pertaining to what counts as valid knowledge and how to learn. Katherine favors a view in which knowledge is acquired in a stepwise manner, from basic knowledge to “the top… of the taxonomy.” She contrasts this with what she perceives as too much emphasis on discussion and reflection and how to use technology – what Säljö (2010) refers to as the performative nature of learning. Katherine is ruthless in her conclusion (no knowledge, no learning, no understanding) and even escalates the conflict by accusing colleagues of covering up their pupils’ lack of knowledge. The last two sentences come across as quite emotional. They give the excerpt an air of being a deep historical dilemma that involves not merely differing views of technologies, but also fundamental assumptions about knowing and learning. Her account is an expression of “incompatible evaluations” (cf. Table 1) of existing and emerging versions of the activity system of schooling. In the absence of any clear resolution, her response is to deny any potential value in emerging practices and reappraise the value of subject content and past practices.

Three of the four informants from Central upper secondary school describe concrete examples of how they use social media to connect subject content with learner life worlds, expand communicative possibilities, and make collective learning processes feasible. One of these is Ingrid (Ing) – a dedicated and experienced teacher with an interest in exploring the educational potential of digital tools:

Ing: There is something very sad about fighting against… ehh… when you, year after year, sense that students become less and less able to understand why, in school, they are not able to use the technology as they use it in all other situations in life. Thus, if we want them to be motivated and experience what we do as relevant, then we don’t have any alternative.

While Katherine is concerned and quite harsh, Ingrid does not seem worried that students will miss out on any relevant knowledge by having access to technologies. In contrast to Katherine, she promotes the value of working with content considered relevant for today’s society and reflects on how school subjects should change in order to match “other situations in life” – a case of increased ecological validity (Säljö, 2010). Speaking with her colleague Michelle (M), they reflect on how the relevance of knowledge changes. Michelle says:

M: I heard something sensible yesterday that I am sure you believe in [laughs] since you fancy the digital. […] They [she refers to presenters at a book fair] talked a lot about the text reality the students should be educated for. I am very interested in this! What text reality are they entering?

Michelle and Ingrid spent most of the group interview (130 minutes) describing and discussing the qualities and shortcomings of technology-mediated assessment practices. As they become aware of how their students work with new text types, Michelle and Ingrid seek to construct relevant and meaningful assessment practices. Like Tony, they seem to be aware of contradictory motives in current education, as well as being personally engaged in resolving them. Once again, we see how a tertiary contradiction becomes discursively manifested through the articulations of a critical conflict.

Teachers’ responsibilities in assessment

One traditional responsibility that may be challenged by the presence of computers is controlling individual learning outcomes. Ann (A) from Sutherland describes the complex and sometimes contradictory signals she must pay attention to:

A: But I think that is bad for a nation, a society, that we educate people without knowledge inside their minds […] But, at the same time, I know that I must be more conscious about what tasks I give them. […] I must give them tasks that force them to receive and absorb knowledge through the computer, through the mediating tools. And I must give them tasks [whose outcomes] I can measure...

Hesitations and her use of discursive markers, such as “But” and “But at the same time […] And I must...,” indicate a clear dilemma. Ann continues by claiming that working with problem-solving tasks is not acknowledged among her colleagues and that she has stopped giving her learners these kinds of tasks. Just like Katherine, Ann evokes a broader teacher community to support her line of action. Again, however, this leaves her with a dilemma:

I: But… you said that you know that working with problem-solving is smarter when it comes to learning because…

A: Because it activates the students […], but since I don’t trust that they do the assignments that I give them then… then I must feed them a little bit more, be more of a mom, a control freak, overprotecting, and all that stuff.

Ann’s dilemma is that she struggles to combine, on the one hand, the institutional responsibility of controlling student progress and keeping an overview of individual learning outcomes with, on the other, the meaningful learning processes that come with open-ended, problem-solving tasks. When she says during the interview that working at Sutherland is like being “placed 100 years back in time,” this indicates that the focus on developing assessment practices to control student learning is a historical and institutionalized practice. Here, the tertiary contradiction is brought into the open in the form of a dilemma. Ann wants to engage more in problem-solving activities, but she also expresses distrust in and the need to control her students, which is a case of incompatible concerns.

Impacts of the established exam system

Ruth (R), an experienced but young teacher from Sutherland, brought up the availability of digital tools during exams and how this makes it difficult to teach students factual knowledge. Like several colleagues, she suggests that assessment without access to digital tools should be reintroduced in order to correct a situation that obviously frustrates her:

R: […] on the exam there is just pen, paper, and their own head – nothing more. It’s more realistic then.

I: Ok, why?

R: Because the students in high school are immature… they don’t understand that tools… ehh… they think it means that they don’t have to read. They don’t understand that they have to know something. […] They don’t understand that a lot of knowledge must be in their heads. Then, you can opt for easier tasks on exams and realistic demands, and… it is very good to have a lot of tools and use a PC in the learning process, but [you need] not necessarily see the learning process in the exam situation.

Ruth’s hesitations can be read as a conflict, perhaps even a critical conflict, when we include her emotional engagement. The combination of immature students (she stops in the middle of the word, as if she catches herself saying something controversial) and computer support during exams is held to be responsible for the challenges she faces. Like Therese and Katherine, Ruth is quite explicit in articulating a historically rooted view of learning as an individual and cognitive effort. When she points to the value of cultural tools during the learning processes, she makes a clear distinction between learning processes and exam situations in which the learner is expected to demonstrate individual competence, isolated from social or material assistance. In Ruth’s case, the resolution is found in reintroducing historical practices. Thus, we see how history and cultural traditions that are built into objects might “bite back” (Engeström, 2011b, p.17), making objects resistant to change.

In contrast, the following quote shows how Ingrid, in her search for tasks that have more ecological validity, feels restricted by existing exam types. When speaking of how her blog projects differ from many other school blogs, she was questioned as follows:

I: Do you have the impression that this [writing blog]… kick starts other processes as compared to a situation in which the task wasn’t given as a blog task?

Ing: Yes, because they are asked, “What do you think?”, right? Not just, “What does it say here, or what is this about?”, but “What do you think? What kind of value did you get from reading this text?” I often give them these kinds of questions [...] I have got responses from several classes that those school… typical school tasks… are not very fun to adapt to blogging [...] There is something about the subjective voice a blog has. A good blog usually always has a subjective voice so that you feel... that you get an idea of the person behind the text. […] I believe they document writing competence through writing blogs, even if many Norwegian teachers probably don’t agree with that, because this is not within the more traditional genres, such as articles, essays, and novels, that we review all the time and that they [the students] will encounter during an exam.

Like Tony, Ingrid evokes a personal account of how she develops tasks in an attempt to resolve a critical conflict. She argues, against some of her colleagues, that competence can be fostered through writing blogs, “even if many Norwegian teachers probably don’t agree with that.” While Ruth considers exams with just pencil and paper to be “realistic,” we have seen that Ingrid considers the use of technologies during assignments to be relevant because students have access to such tools in their daily lives and will use them when they start to work. However, the following quote from Ingrid and Michelle’s talk confirms that even though teachers might engage in exploring the affordances of technologies that stimulate new ways of developing competence, the exam still asks students to document individual and traditional subject knowledge.

Ing: There is still a gap between group processes, collective ways of working that are possible to cultivate through, e.g., Web 2.0 tools, and traditional ways of assessing [...], and that gap is problematic. And I realize that I am dragged into exam-oriented thinking because I know that both I and the students will pay if we don’t […]

Here, the tertiary contradiction between existing and future practices is explicitly linked to the affordances of social media. Still, strong historical and institutional forces seem to prevent her from achieving expansive development. The linguistic cue, “And I realize that I am dragged into exam-oriented thinking,” points to a conflict with personal and emotional overtones. It also shows how difficult it is for teachers to be individual carriers of innovation without sufficient political and institutional support.

Discussion

Analytical categories

The interviewed teachers hold various views of both learning and knowledge, and consequently, they suggest or employ different strategies related to assessment. As we have shown through the excerpts, teachers such as Katherine, Ruth, Ann, and Therese speak of learning as acquiring factual content that should be tested on an individual basis, isolated from social or material assistance. Traditional assessment practices, as a historically developed object, seem to guide these assumptions. The advantage of such a view is that testing and controlling student learning becomes a manageable job and that the teacher role of providing knowledge seems immediately meaningful. For analytical purposes, we call informants promoting such approaches “knowledge providers.”

Alternative reflections and responses to the situation are shown in the quotes from Tony, Ingrid, and Michelle. They represent teachers whose response to the tense situation is to develop meaningful practices and tasks, treating knowledge and learning as emerging in personally engaging, social, and often collaborative processes. They are in a process of constructing a future object. For analytical purposes, informants promoting such strategies are labeled “searchers.”

What characterizes the assessment interests, challenges, proposed solutions, and strategies of these two groups is summarized in Table 2 below:

Table 2. Perspectives and strategies of knowledge providers and searchers

Assessment interest

Knowledge providers:
• Accountability: controlling individual learning outcomes

Searchers:
• Affordances: monitoring and supporting individual and collective learning processes

Challenging issues

Knowledge providers:
• Availability of digital tools at exams
• Students not being mature enough for reflection and responsible use of technology
• Unproductive copy-and-paste strategies
• Reduced learning outcomes, neglect of factual knowledge
• Colleagues giving uncritical support to changing practices

Searchers:
• Exam types that still stifle experimentation
• Opening up for students’ personal engagement and reflection in assessment practices
• Lack of common standards for responding to students’ cut-and-paste practices
• Colleagues maintaining the role of knowledge providers

Proposed solutions

Knowledge providers:
• Re-establish exams without access to technologies in order to control and account for students’ learning
• Re-acknowledge students’ fact orientation and ability to recall factual knowledge

Searchers:
• Open up for exams with unlimited access to digital resources in order to take advantage of material and social assistance
• Develop reflection-oriented tasks
• Include distributed environments

Strategies applied

Knowledge providers: A historically oriented, regressive approach. They identify dilemmas and conflicts, submit to existing and authoritative practices in order to resolve a tense situation, and place problems with educational authorities (and, to some extent, with colleagues), out of their own reach.

Searchers: A future-oriented, transformational approach, though mostly at an individual level. They identify critical conflicts and likewise place problems with educational authorities (and, to some extent, with colleagues), but mainly within their own reach, searching for solutions and for ways to make sense of the situation on a personal level.

We are aware that categorizing informants in this manner implies simplification and reductionism (Kvernbekk, 2005). The categories have been developed for analytical purposes based on the notion that patterns help us see “what goes with what” and that contrasting, as well as comparing, is a “pervasive tactic that sharpens understanding” (Kvale & Brinkmann, 2009, p. 234).

Guided by fundamental assumptions about learning and knowledge

The excerpts discussed in the previous section indicate that traditionally valid assessment practices designed for accountability (measuring individual cognitive capacity) do not seem to transfer well into currently emerging technology-rich contexts. Hence, the evolving challenges call for responses that are “in tension with the long-established social practices of the settled work settings” (Daniels, Edwards, Engeström, Gallagher, & Ludvigsen, 2009, p. 1). In other words, we observe a situation that reflects tensions between the object of activity in an existing and an emerging version of the activity system – a tertiary contradiction.

Table 2 shows how teachers respond and relate very differently to this contradiction. The characteristics of “knowledge providers” and “searchers” suggest that these groups take different cultural and historical points of departure when they engage in constructing or re-constructing the object of activity. They seem guided by different assumptions about learning and knowledge. This seems to resonate with Overbay et al. (2010), who found that constructivist commitments among teachers influenced their use of technologies (cf. review section). For this reason, different assessment interests emerge, making teachers identify challenges from different positions and propose almost contrary solutions to such challenges. By adopting more or less expansive or regressive strategies, they come to engage in developing divergent objects of activity.

The emergence of divergent objects

According to Engeström (1987), expansive learning manifests itself as changes in the object of the shared activity of learners. However, while historical and future objects continuously develop (and will, consequently, never reach a ‘final state’), more immediate and personal situational objects appear as instantiations of longitudinal processes. Such divergent, situational objects might prompt teachers to develop different practices and generate different interests, perspectives on challenges, solutions, and strategies, as summarized in Table 2. As they struggle, the teachers engage in a continuous process of constructing the object. This construction is modeled in Figure 2.

Figure 2. Divergent situational objects

The middle triangle represents the current unresolved situation, and the broken arrow represents tensions as knowledge providers and searchers engage in different strategies. The figure illustrates how some of the underlying contradictions seem to impact existing assessment practices. While one strategy of solving the problem is seeking to reverse and conserve practices that have proven to be controllable and therefore dependable over time, a contrary strategy is seeking to develop new practices that make sense in light of the digital world that students encounter.

In the words of Engeström and Sannino, “developmentally significant contradictions cannot be effectively dealt with merely by combining and balancing competing priorities” (2011, p. 371). How, then, can teachers be assisted in such situations? For example, joining networks working on “the edge of competence” and fostering a culture of questioning established practices is seen as increasingly important for institutional development (Hakkarainen, Palonen, Paavola, & Lehtinen, 2004). Without organizing collective forums for discussing and developing meaningful assessment practices, educational institutions seem to underuse the potential for expansive learning and professional advancement in the collective zone of proximal development. This resonates with Roth and Lee:

Learning occurs whenever a novel practice, artifact, tool, or division of labor at the level of the individual or group within an activity system constitutes a new possibility for others (as a resource, a form of action to be emulated), leading to an increase in generalized action possibilities and therefore to collective (organizational, societal, cultural) learning. (Roth & Lee, 2007, p.205)

Institutional culture and educational policies might impact the teachers’ more or less expansive strategies. For new objects of activity to evolve, such objects must be recognized and worked on collaboratively across several levels: classrooms (micro), institutions (meso), and educational policies (macro) (Engeström, 1987). Individuals or teacher communities cannot alone make fundamental decisions concerning the activities of learning and assessment if these are to be sustainable.

Although the considerations and concerns shared in the interviews are articulated through individual voices, there are deep institutional and cultural overtones. The voices of the teachers are saturated with institutional concerns that materialize in references to guidelines, concepts of valid knowledge, colleagues’ adaptation to technology-rich practices, and an unequivocal concern for what is best for the learners. Consequently, such utterances should not merely be read as illustrations or examples of a phenomenon under development but as empirical carriers of the possible transformation of the activity system of assessment. They are not statistically generalizable, but they are arguably analytically generalizable, because the results generated by one situation, grounded in the analysis of similarities to and differences from the others, could guide or give input for analyzing similar situations (Kvale, 1996).

Conclusion

Our study was guided by the research question: ‘What are some of the strategies teachers develop in order to meet challenges emerging from students having access to computers when responding to tasks?’ Our data and analysis show that teachers adopt very different strategies. These are aligned with fundamental assumptions about learning and knowledge. The findings corroborate recent research on how views of learning and assessment practices seem intimately intertwined (Black & Wiliam, 2006). Moreover, Engeström’s notion (1987) of tertiary contradiction has helped us understand how diverse and even contradictory strategies emerge among teachers dealing with assessment in technology-rich environments. We have sought to make such strategies and the tensions that drive them visible by using Engeström and Sannino’s (2011) analytical categories. The emergence of divergent strategies and situational objects has made us aware of the relevance of engaging teachers in discussions of their reasons for either holding on to traditional ways of assessing student learning or moving on in a search for new models of assessment.

While McFarlane (2003) claims that conventional ways of performing assessment seem to disturb and delay the development of technology-mediated learning practices, we also argue that giving access to technology during assessment is not sufficient if the intention is to bring about innovative and productive technology-enhanced learning. We need to address deeper notions of learning, along with institutional impact and policymaking, to understand how a transformation of the current activity system of schooling can be aligned with learning in the networked society.

Teacher experiences with assessment in technology-rich environments are quite new, and our knowledge is still too fragile to make broader claims and guidelines for further didactical development. Before contradictions can be resolved by practical, collective, and transformative actions, there is a need to further examine which fundamental assumptions it would make sense to rest tomorrow’s assessment practices on. A shared understanding will better prepare teachers, researchers, and policymakers to go collectively beyond current assessment practices and explore assessment practices designed for our digital age.

References

Assessment and Learning Research Synthesis Group. (2003). A systematic review of the impact on students and teachers of the use of ICT for assessment of creative and critical thinking skills. London: Social Science Research Unit, Institute of Education, University of London.

Bakhtin, M. (2000). The dialogic imagination. Four essays by M.M. Bakhtin. Austin, TX: University of Texas Press.

Barowy, W., & Jouper, C. (2004). The complex of school change: Personal and systemic codevelopment. Mind, Culture and Activity, 11(1), 9-24.

Black, P., & Wiliam, D. (1998). Inside the Black Box: Raising Standards Through Classroom Assessment. Retrieved November 19th, 2012, from http://blog.discoveryeducation.com/assessment/files/2009/02/blackbox_article.pdf

Black, P., & Wiliam, D. (2006). Assessment for learning in the classroom. In J. Gardner (Ed.), Assessment and Learning. London: Sage.

Boote, D. N., & Beile, P. (2005). Scholars before researchers: On the centrality of the dissertation literature review in research preparation. Educational Researcher, 34(6), 3-15.

Cavanaugh, C., & Dawson, K. (2010). Design of online professional development in science content and pedagogy: A pilot study in Florida. Journal of Science Education and Technology, 19(5), 438-446.

Center for Activity Theory and Developmental Work Research. (n.d.). The Activity System. Retrieved 1.5.2012 from http://www.edu.helsinki.fi/activity/pages/chatanddwr/activitysystem/

Daniels, H. (2009). Vygotsky and Pedagogy. New York: Routledge Falmer.

Daniels, H. (2012). Institutional culture, social interaction and learning. Learning, Culture and Social Interaction, 1(1). doi: 10.1016/j.lcsi.2012.02.001

Daniels, H., Edwards, A., Engeström, Y., Gallagher, T., & Ludvigsen, S. R. (2009). Activity Theory in Practice: Promoting learning across boundaries and agencies. London: Routledge.

Engeström, Y. (1987). Learning by expanding. An activity-theoretical approach to development research. Helsinki: Orienta Konsultit Oy.

Engeström, Y. (2011a). Methodology and methods in activity theory. PowerPoint presentation at the PhD Course in Cultural Historical Activity Theory, University of Oslo.

Engeström, Y. (2011b). Cultural-historical activity theory and the concept of object. PowerPoint presentation at the PhD Course in Cultural Historical Activity Theory, University of Oslo.

Engeström, Y., & Sannino, A. (2011). Discursive manifestations of contradictions in organizational change efforts: A methodological framework. Journal of Organizational Change Management, 24(3), 368-387. Retrieved 15.5.2012 from http://dx.doi.org/10.1108/09534811111132758

Enriquez, A. G. (2010). Enhancing student performance using tablet computers. College Teaching, 58(3), 77-84. doi: 10.1080/87567550903263859

Erstad, O. (2009). The assessment and teaching of 21st century skills project. Nordic Journal of Digital Literacy, (3-4), 204-211.

Feldman, A., & Capobianco, B. M. (2008). Teacher learning of technology enhanced formative assessment. Journal of Science Education and Technology, 17(1), 82-99.

Griffin, P., Care, E., & McGaw, B. (2012). The changing role of education and schools. In P. Griffin, B. McGaw & E. Care (Eds.), Assessment and teaching of 21st century skills (pp. 1-16). Dordrecht: Springer.

Hakkarainen, K., Palonen, T., Paavola, S., & Lehtinen, E. (2004). Communities of networked expertise. Professional and educational perspectives. Oxford, England: Elsevier.

Hickey, D.T., Honeyford, M.A., Clinton, K.A., & McWilliams, J. (2010). Participatory assessment of 21st century proficiencies. In V. J. Schute & B. J. Becker (Eds.), Innovative assessment for the 21st century: Supporting educational needs. New York: Springer.

Kreisberg, S. (1992). Transforming power. Domination, empowerment, and education. New York: State University of New York Press.

Krucli, T. E. (2004). Making assessment matter: Using the computer to create interactive feedback. English Journal, 94(1), 47-52.

Kvale, S. (1996). InterViews. Thousand Oaks, CA: SAGE.

Kvale, S., & Brinkmann, S. (2009). Interviews. Learning the craft of qualitative research interviewing. Thousand Oaks, CA: Sage.

Kvernbekk, T. (2005). Pedagogisk Teoridannelse. Insidere, teoriformer og praksis. Bergen: Fagbokforlaget.

Labaree, D. F. (1999). How to succeed in school without really learning: The credentials race in American education. New Haven, CT: Yale University Press.

Leont'ev, A. N. (1978). Activity, consciousness and personality. Englewood Cliffs, NJ: Prentice Hall.

Loveland, T. R. (2005). Writing standards-based rubrics for technology education classrooms. Technology Teacher, 65(2), 19-30.

Lund, A., Rasmussen, I., & Smørdal, O. (2009). Joint designs for working in wikis. In H. Daniels, A. Edwards, Y. Engeström, T. Gallagher & S. Ludvigsen (Eds.), Activity theory in practice. Promoting learning across boundaries and agencies (pp. 207-230). London: Routledge.

McFarlane, A. (2003). Editorial. Assessment for the digital age. Assessment in Education, 10(3).

Murphy, E., & Rodriguez-Manzanares, M. A. (2009). Learner centredness in high school distance learning: Teachers' perspectives and research validated principles. Australasian Journal of Educational Technology, 25(5), 597-610.

Nusche, D., Earl, L., Maxwell, W., & Shewbridge, C. (2011). OECD Reviews of Evaluation and Assessment in Education: Norway. Retrieved 13.5.2013 from http://www.oecd.org/norway/48632032.pdf

Oliver, P., & Jupp, V. (2006). Purposive sampling. The SAGE dictionary of social research methods (pp. 244-245): Sage.

Olson, D. R. (2003). Psychological theory and educational reform. How school remakes mind and society. Cambridge: Cambridge University Press.

Overbay, A., Patterson, A. S., Vasu, E. S., & Grable, L. L. (2010). Constructivism and technology use: Findings from the IMPACTing Leadership project. Educational Media International, 47(2), 103-120. doi: 10.1080/09523987.2010.492675

Peake, J. B., Duncan, D. W., & Ricketts, J. C. (2007). Identifying technical content training needs of Georgia agriculture teachers. Journal of Career and Technical Education, 23(1), 44-54.

Poehner, M. E., & Lantolf, J. P. (2005). Dynamic assessment in the classroom. Language Teaching Research, 9(3), 233-265.

Roth, W.-M., & Lee, Y.-J. (2007). “Vygotsky’s neglected legacy”: Cultural-historical activity theory. Review of Educational Research, 77(2), 186-232.

Savage, J., & Fautley, M. (2011). The organisation and assessment of composing at key stage 4 in English secondary schools. British Journal of Music Education, 28(2), 135-157.

Savenye, W., Dwyer, H., Niemczyk, M., Olina, Z., Kim, A., Nicolaou, A., & Kopp, H. (2003). Development of the digital high school project: A school-university partnership. Computers in the Schools, 20(3), 3-14.

Selwyn, N. (2011). Education and Technology: Key Issues and Debates. London, New York: Continuum International Publisher Group.

Scardamalia, M., Bransford, J., Kozma, B., & Quellmalz, E. (2012). New assessments and environments for knowledge building. In P. Griffin, B. McGaw & E. Care (Eds.), Assessment and teaching of 21st century skills (pp. 231-300). Dordrecht: Springer.

Schofield, J. W. (Ed.). (1998). Increasing the Generalizability of Qualitative Research. London: Sage.

Straub, E. T. (2009). Understanding technology adoption: Theory and future directions for informal learning. Review of educational research, 79(2), 625-649.

Säljö, R. (2010). Digital tools and challenges to institutional traditions of learning: Technologies, social memory and the performative nature of learning. Journal of Computer Assisted Learning, 26, 53-64.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press.

1 In 2012, The Norwegian Directorate for Education and Training piloted a project with full Internet access for students during all parts of the national exam. The pilot was successful, and the project will be scaled up in 2013.
