
Clicker Interventions at University Lectures and the Feedback Gap

Postdoc., Department of Education, University of Bergen
Professor, Department of Education, University of Bergen, Norway

The article presents a mixed methods study of clicker interventions conducted in collaboration with four philosophy teachers at fourteen university lectures. The aim was to examine how feedback from the interventions was received and used by teachers and students. The data material comprises a quasi-experiment based on 6,772 student responses, student logs, a student survey and semi-structured interviews with the teachers. Findings show that students experienced feedback that supported their self-monitoring and understanding of the content, and that the peer discussions enhanced student performance. The teachers also experienced an increased awareness of the students’ understanding of the topics. Yet, the findings indicate a gap between the reception and the use of the feedback.

Keywords: student response systems, university lectures, interactive learning environments, formative feedback, higher education

Introduction

Lecturing to large student groups makes it difficult to involve and engage students in situ. If the teacher poses questions during the lecture, many students will refrain from raising their hands for fear of being publicly embarrassed (Caldwell, 2007; Krumsvik, 2012). Consequently, the teacher gets answers from only a few brave students, and these answers may not be representative of the student group. This article focuses on the use of a Student Response System (SRS) as a means of making university lectures more interactive. A major advantage of using SRS is that it enables the teacher to ask questions and collect answers from the whole student group anonymously (Krumsvik, 2012). During the lecture, the teacher can pose topic-related multiple-choice questions on a big screen, permitting the students to answer using a wireless hand-held device called a “clicker”. The student answers can then be presented in a histogram for the teacher and students to see. The use of this technology is often combined with peer discussions, using the “classic” or the “peer instruction” approach (Nielsen, Hansen, & Stav, 2016). In the “classic” approach, students are asked a question which they discuss with their peers before answering individually. In the “peer instruction” approach, students are asked a question that they answer individually before they discuss their answers with their peers and re-answer the same question. In both cases, the teacher usually follows up on the student answers in a plenary discussion.

This article presents a study of such clicker interventions, seen through the lens of formative feedback, in the context of large lectures in Examen philosophicum. This is a mandatory philosophy course for first-year students at Norwegian universities, introducing the academic way of thinking, working and writing; philosophical perspectives on academic culture and formation; and key issues in the discipline the students are studying.

Studies have shown that formative assessment and feedback practices can improve student learning and instruction (Evans, 2013; Hattie & Timperley, 2007), and that clicker interventions provide students with an opportunity to reflect on their own understanding and to receive feedback from their peers and the teacher (Egelandsdal & Krumsvik, 2015; Krumsvik, 2012; Krumsvik & Ludvigsen, 2012; Ludvigsen, Krumsvik, & Furnes, 2015). Clicker interventions have also been found to have an immediate positive effect on student achievement (Crouch & Mazur, 2001; Mazur, 1997; Rao & DiCarlo, 2000; E. L. Smith, Rice, Woolforde, & Lopez-Zang, 2012; M. K. Smith et al., 2009; Chien, Chang, & Chang, 2016; M. K. Smith, Wood, Krauter, & Knight, 2011; Zingaro & Porter, 2014). Furthermore, teachers can also receive feedback on their teaching. By collecting student answers during the lecture, the teacher receives immediate feedback on the number of students who have understood the material (Lantz, 2010).

Experiencing feedback and acting on it, however, do not necessarily go together. The research literature offers several examples of students failing to make use of feedback in their coursework, often referred to as “the feedback gap” (Evans, 2013; Jonsson, 2013). Studies on clickers are typically small scale, focusing on the students’ activities in the classroom and immediate outcomes. Little attention has been paid to how clicker interventions affect the students’ coursework between lectures and to how teachers experience and use feedback from clicker interventions. To get a broad picture of how such interventions affect both students and teachers, it is important to look at the relationship between the reception and the use of feedback. Hence, the purpose of the study presented in this article is to gather data on how feedback from the clicker interventions is received and used by students and teachers, drawing on both qualitative and quantitative sources.

Using design-based research, we teamed up with four teachers, whom we followed in 14 lectures given to first-year students at the beginning of their first semester. The study was designed to answer the following questions:

  1. How do the students perceive and use feedback from the clicker interventions?

  2. Do peer discussions improve student performance on clicker questions and how do the students perceive the discussions?

  3. How do the teachers perceive feedback from the clicker intervention and for what purpose(s) do the teachers consider the feedback useful?

Previous research

Although studies have found that feedback from clicker interventions raises students’ awareness of their understanding of a subject (Egelandsdal & Krumsvik, 2015; Krumsvik & Ludvigsen, 2012; Ludvigsen et al., 2015), studies of whether and how students use feedback from clicker interventions are hard to find. One exception is Ludvigsen et al. (2015), who interviewed six students on this issue. The students claimed that they used the feedback to discuss difficult concepts with their peers, adjust and focus their reading, and identify difficult topics that they needed to explore further.

Several studies have shown that peer discussion increases the number of students answering correctly when the same clicker question is re-answered after discussion (Crouch & Mazur, 2001; Mazur, 1997; Rao & DiCarlo, 2000; E. L. Smith et al., 2012; M. K. Smith et al., 2009; Vickrey, Rosploch, Rahmanian, Pilarz, & Stains, 2015). M. K. Smith et al. (2009), Porter, Bailey Lee, Simon, and Zingaro (2011) and Egelandsdal and Krumsvik (2016) have found that this improvement also transfers to new cases, by posing a second clicker question on the same topic, with the same level of difficulty as the first, after the peer discussion. In these studies, the number of students answering the second question correctly increased notably compared with the number answering the first question correctly before discussion. Nevertheless, studies have also shown that clicker results can sometimes misrepresent students’ understanding (James & Willoughby, 2011; J. K. Knight, Wise, Rentsch, & Furtak, 2015; Wood, Galloway, Hardy, & Sinclair, 2014), which underlines the importance of the teacher following up on students’ answers and asking them to explain their choices in order to gain a deeper understanding of their perspectives.

University teachers, as experts in their fields, have a wide range of experiences to draw upon when planning a lecture. First-year students, on the other hand, who are venturing into a new academic domain for the first time, have a more limited horizon of understanding. Schwartz and Bransford (1998) found that lecturing can be an effective form of instruction when students are engaged in pre-lecture learning activities that help them develop differentiated knowledge about the topics of the lecture. Interpretations of their findings suggest that these activities made the students more receptive to the information in the lecture, helping them sort out and focus on its most relevant features. This shows that students’ prior understanding has a major impact on what they learn from a lecture; in other words, it is important that the lecture addresses questions that the students are able to ask. Hrepic, Zollman, and Rebello (2007) compared how students and experts (physics instructors) perceived information presented in a videotaped lecture. Both groups were given a pre-test and were asked to determine if and when the same questions were addressed in the lecture. Their findings show that the experts found questions addressed more frequently and thoroughly than the students did. This illustrates how the asymmetric levels of understanding of teacher and students can become a challenge in the planning of lectures. Seen through the lens of formative assessment, feedback from clicker interventions can provide the teacher with an opportunity to correct misconceptions and address difficult topics synchronously during the lecture, as well as asynchronously in the planning of future lectures (Black & Wiliam, 2009). For instance, D’Inverno, Davis, and White (2003) experienced students’ answers to clicker questions as revealing “very deep and important gaps in (their) understanding”, showing that the students’ learning needs were at a more fundamental level than had been assumed. This challenged the teachers to change their teaching based on the student responses. In a lab-based study, Anderson, Healy, Kole, and Bourne (2011) found that feedback from clicker interventions also helped save teaching time, since the teacher could focus on the topics most relevant to the students’ current level of understanding. Kolikant, Drane, and Calkins (2010) followed and interviewed three university teachers who used SRS in their classes (in mathematics, physics and engineering). The teachers experienced that SRS helped them assess the students’ understanding and address misconceptions. One of the teachers stated that she had previously weighted the various parts of the content equally, but that the use of clickers had made her devote different amounts of time and attention to the topics based on the needs of the students.

Research design and methods

The study was conducted at three faculties (the Faculty of Law, the Faculty of Psychology and the Faculty of Social Sciences) and comprised first-year students enrolled in various programmes at these faculties. In collaboration with four teachers who were using clicker interventions for the first time, we conducted clicker interventions in lectures on “history of philosophy”, “ethics”, and “language and argumentation” as part of the course Examen philosophicum.

To conduct the interventions, we used a response system from Turning Technologies. The hardware consisted of wireless handheld devices (“clickers”) with which the students answered multiple-choice questions, and a receiver connected to the lecturer’s laptop. The software allowed the lecturers to integrate clicker questions into their PowerPoint slides.

The interventions were planned using a design-based research approach. Design-based research is characterized by advancing design, research and practice concurrently to try out and study instructional designs in real educational contexts (Collins, Joseph, & Bielaczyc, 2004; Wang & Hannafin, 2005). The instructional design in the study used the peer instruction approach (described below). We provided the teachers with the instructional design, advice on how to construct the clicker questions, the student response system (including 500 clickers) and technical support. The teachers were responsible for the content of the lectures and for creating the clicker questions. The student population comprised the first-year students attending the fourteen lectures during which the clicker interventions were implemented. Clickers were distributed to all students present at each lecture; the average attendance was 193 students per lecture. In each lecture, two to three clicker interventions were conducted, giving a total of 35 interventions over the 14 lectures.

To study the interventions, a mixed methods design was used in which qualitative and quantitative data were collected both concurrently and sequentially: (QUAN + QUAL) => QUAL + quan (Johnson & Christensen, 2016). In the first phase, data were collected concurrently through the quasi-experimental interventions (N: 871¹) and through students writing logs (N: 23) about their experiences. In the second phase, after the interventions, the teachers (N: 4) were interviewed about their experiences. These semi-structured interviews were (sequentially) informed by the findings from the quasi-experiments and student logs. We were also granted access to parts of an evaluation survey (N: 403) on the students’ experience of the interventions, conducted at the end of the semester.

Table 1

Design of the study

Aim | To explore the use of an instructional design using a Student Response System (SRS) for the purpose of promoting formative feedback at several large university lectures.
Main question | How is feedback resulting from clicker interventions in ex.phil. lectures received and used by teachers and students?
Research questions | (1) How do the students perceive and use feedback from the clicker interventions? (2) Do peer discussions improve student performance on clicker questions and how do the students perceive the discussions? (3) How do the teachers perceive feedback from the clicker intervention and for what purpose(s) do the teachers consider the feedback useful?
Population | First-year students at the University of Bergen attending ex.phil. lectures on “language and argumentation”, “ethics” and “history of philosophy”, and their teachers.
Data collection | Mixed methods design: quasi-experiments (N: 871 students); student logs (N: 23); student survey (N: 403); semi-structured interviews (N: 4 teachers).

The instructional design and quasi-experiment

Structure: Short lecture – Q1 – peer discussion – Q1ad – teacher follow-up

In the interventions, (1) the teacher first posed a clicker question (Q1) after a short lecture on a topic, which the students answered individually. To avoid the majority answer affecting students’ responses, the class results were hidden from the students (Perez et al., 2010). (2) The students were then allowed to discuss their answers for two minutes before re-answering the same question (Q1ad). The purpose of the quasi-experiment was to measure the change in correct responses between Q1 and Q1ad. (3) The student answers were then displayed for the students and teacher, and the teacher followed up by asking the students to explain their reasoning and by providing his own explanations.
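As a minimal sketch, this sequence can also be expressed in code. The Python fragment below is our illustration only: the ClickerQuestion class and the response lists are hypothetical stand-ins, not the Turning Technologies software. It models steps (1)–(3) and the Q1/Q1ad comparison used in the quasi-experiment.

```python
# Illustrative sketch of one peer-instruction intervention (hypothetical
# data structures, not the actual SRS software).

from dataclasses import dataclass

@dataclass
class ClickerQuestion:
    text: str
    correct_option: str  # e.g. "B"

def percent_correct(responses: list[str], question: ClickerQuestion) -> float:
    """Share of responses matching the correct option, in percent."""
    return 100 * sum(r == question.correct_option for r in responses) / len(responses)

def run_intervention(question: ClickerQuestion,
                     q1_responses: list[str],
                     q1ad_responses: list[str]) -> tuple[float, float, float]:
    # (1) Q1: individual answers; the histogram is hidden from the students
    #     so the majority answer cannot influence them (Perez et al., 2010).
    q1 = percent_correct(q1_responses, question)
    # (2) Two minutes of peer discussion, then the same question is
    #     re-answered (Q1ad).
    q1ad = percent_correct(q1ad_responses, question)
    # (3) Results are displayed and the teacher follows up in plenary;
    #     the quasi-experiment measures the change Q1ad - Q1.
    return q1, q1ad, q1ad - q1
```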

We identified three main types of questions used in the interventions: 6 recall questions, 14 evaluative questions and 15 case questions. The recall questions required the students to recall facts. The evaluative questions required the students to use their understanding of the content to assess the correctness of various claims. The case questions required the students to use their understanding of the content to interpret a case and select the appropriate answer. See examples of the three question types in Table 2.

Table 2

Examples of the question types (translated from Norwegian)

Analysis

The data from the different sources were analysed separately. The data from the quasi-experiments were analysed by comparing the numbers of students answering Q1 and Q1ad correctly. We also calculated the average difference for each group and for all interventions. The survey was analysed descriptively and presented in a histogram. The semi-structured interviews were fully transcribed. Both the interviews and the student logs were analysed using meaning categorization and meaning condensation in NVivo. The categorization was based on statements by the teachers and students about how they perceived the interventions as a whole, how they experienced the various parts of the interventions, and their intervention-related activities between the lectures.
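As a rough illustration of this comparison, the sketch below re-computes the per-intervention change and the group and overall averages for three example rows taken from Table 7; the actual analysis covered all 35 interventions.

```python
# Illustrative re-computation of the quasi-experiment analysis: change in
# correct answers (Q1ad - Q1) per intervention, then averages per group and
# overall. Only three rows from Table 7 are shown here.

interventions = [
    # (intervention no., group, question type, Q1 % correct, Q1ad % correct)
    (1, "a", "recall", 55, 85),
    (12, "b", "recall", 62, 86),
    (23, "c", "case", 35, 35),
]

by_group: dict[str, list[int]] = {}
for _, group, _, q1, q1ad in interventions:
    by_group.setdefault(group, []).append(q1ad - q1)

for group, diffs in sorted(by_group.items()):
    print(f"Group {group} average change: {sum(diffs) / len(diffs):.1f} %")

overall = [q1ad - q1 for _, _, _, q1, q1ad in interventions]
print(f"Average change, all interventions: {sum(overall) / len(overall):.1f} %")
```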

Findings

As illustrated in Table 3, the findings are presented in conjunction with the three research questions.

Table 3

Data presented in conjunction with the three research questions

3.1 Student perception and use of feedback | Student logs, Student survey
3.2 The peer discussions | Student logs, Quasi-experiment
3.3 The teachers’ experience and the follow-up phase | Student logs, Teacher interviews

Student perception and use of feedback from the clicker interventions

Student perception of feedback from the clicker interventions

The survey (see Table 4) shows that 71 percent of the students (N: 397) experienced the number of clicker interventions as suitable, while 20 percent experienced the number as too few and 5 percent as too many. A total of 92 percent of the students (N: 400) experienced the clicker lectures as more entertaining than the lectures without clickers. Few students (4 percent) experienced the use of clickers as diverting focus away from the content of the lecture; 83 percent disagreed with this statement (N: 397). These findings show that the students in general had a positive experience of the clicker interventions. The 23 students writing logs also reported mostly positive experiences with the interventions. Only one of them wrote that she was uncertain whether the interventions were worth the interruptions to the lecture; this student was particularly unhappy with the time the interventions took and the noise they generated. Another student, who had reduced hearing, stated that she did not benefit from the peer discussions because of the noise in the auditorium when everyone was discussing at the same time. She stated, however, that she still found the interventions useful for reflecting on her own understanding, and that they made her feel more engaged and helped her remember details from the lectures better.

Table 4

Students’ experiences of the clicker lectures: survey data

«Compared to other lectures...» | Totally agree | Agree | Neither agree nor disagree | Disagree | Totally disagree | Do not know | Total (N)
«...I learned more from the clicker lectures» | 161 | 147 | 60 | 13 | 10 | 12 | 403
«...the clicker lectures made me more aware of my own content understanding» | 215 | 125 | 38 | 8 | 5 | 9 | 400
«...the clicker lectures were more entertaining» | 276 | 91 | 20 | 1 | 7 | 5 | 400
«...the use of clickers draws the focus away from the content of the lecture» | 4 | 10 | 45 | 129 | 202 | 7 | 397

In terms of feedback, the majority of students perceived the clicker interventions as useful both for self-monitoring and for enhancing their content understanding. As illustrated in Figure 1, 86 percent of the students (N: 400) experienced the clicker lectures as making them more aware of their own understanding than the lectures without clickers, and 76 percent (N: 403) experienced learning more from the clicker lectures than from the lectures without clickers. As shown in Table 5, the students reported several benefits from the clicker interventions in their logs: the interventions contributed to increased attention, participation, engagement and motivation, and several students experienced receiving feedback that supported their self-monitoring and understanding of the content.

Figure 1

Students’ experience of the clicker lectures: survey data.

Table 5

Number of students experiencing

Increased attention | 17
Feedback: self-monitoring | 15
Feedback: content understanding | 12
Increased participation | 10
Increased motivation | 9
Increased engagement | 7

Several of the students wrote that they valued the interventions because they were able to test their understanding on specific examples, as opposed to traditional lectures where the teacher simply provides them with “correct information”. Combining the questions with peer discussions creates two situations where the students need to use their understanding actively – first by answering and reflecting on the question individually and then by explaining their answer in the peer discussions.

Twelve students concluded that the interventions helped improve their understanding of the content. They related this to either (1) increased retention or (2) a better understanding of the topic. In the first case, the students wrote that the interventions made it easier to remember important information and details from the lecture; others claimed that they remembered best the parts of the lecture where the clickers were used.

In the second case, several students wrote that the interventions strengthened their content understanding beyond simple recall of information. Some students emphasized that they benefited from working so thoroughly with the topics in the interventions. One student stated that the interventions had helped her understand a chapter she had been struggling with when preparing for the lecture. Another student experienced the difference between various concepts becoming clearer. Similarly, some students found that the content of the lecture was more structured compared to other lectures because of the interventions.

Fifteen of the students wrote that the interventions made them more aware of their own understanding. These students emphasized that the interventions helped them identify concepts they did not understand, discover and correct misunderstandings, and confirm what they had understood.

Student use of the feedback from the clicker interventions

Only sixteen of the twenty-three students answered the questions about how they used the information from the interventions after the lectures. Eleven students wrote that they used the feedback, while six wrote that they did not use it. Assuming that the remaining students did not use the feedback either, this indicates a split in the student group on whether the students apply the feedback in their coursework.

The six students who explicitly wrote that they did not use the information gave different explanations for this. A couple of students wrote that they did not use the information because they had not started working actively with the material towards the exams yet (the lectures were held early in the semester). One student stated that she had not used the feedback because she had worked with the texts in other ways. Two students wrote that they found the interventions useful for focusing better, becoming more engaged and improving their content understanding, but that they did not think about, or use, the information afterwards. One student claimed that she remembered the cases from the clicker questions so well that she did not feel a need to look at them in her own coursework. This student related the question about “using information from the clicker interventions” to using the questions for repetition. Although she wrote that she considers the interventions useful for testing her understanding, she did not relate this to her own coursework. Hence, it is hard to know whether the interventions were used more or less unintentionally by the student when studying.

Table 6

Number of students using feedback from the lectures

Used the questions in their coursework | 6
Discussed the questions with peers | 5
Changed the study focus | 3

As shown in Table 6, the students who wrote that they used the feedback made use of it in three different ways: (1) they used the clicker questions in their coursework; (2) they discussed the clicker questions with their peers; and (3) they changed their study focus in light of the information from the interventions.

Three of the students who used the clicker questions in their coursework wrote that they looked up and used the questions when studying. They stated that using the clicker questions made the subject easier to understand and organize. Three other students wrote that they reflected on the questions while reading without physically going back and looking at the questions. In both cases, it seems that the students used the questions as points of reference for their understanding of the course material. Moreover, students who stated that the interventions were useful for creating discussions about the content with their friends emphasized that this made them more aware of the different concepts.

The students who changed their study focus used the feedback from the interventions to adjust how they studied. One student wrote that she worked actively with different concepts and questions from the lecture, and adjusted her focus based on this information. Another student stated that she used the information about her own understanding from the lectures to restructure her notes; she also read more about qualitative methods because the content reminded her of that topic.

Another student concluded that she did not need to read more about a topic because of the interventions. If the student has truly understood the topic, this may be a valid conclusion. On the other hand, it illustrates that answering questions correctly might mislead students into becoming overly confident in their own understanding if the questions are too narrow or too easy and fail to represent the depth and complexity of the topic.

The peer discussions and clicker questions

Table 7

Percentage of correct answers to the clicker questions pre- and post-discussion for each intervention

Intervention | Topic | Question type | Q1 correct (n) | Q1ad correct (n) | Change (Q1ad – Q1)
1 Group a* | History of philosophy | Recall | 55 % (188) | 85 % (194) | 30 %
2 Group a | History of philosophy | Evaluative | 65 % (199) | 82 % (201) | 17 %
3 Group a | History of philosophy | Evaluative | 30 % (200) | 35 % (197) | 5 %
4 Group a | History of philosophy | Recall | 41 % (171) | 49 % (171) | 8 %
5 Group a | History of philosophy | Recall | 67 % (175) | 78 % (173) | 11 %
6 Group a | Ethics | Case | 31 % (172) | 35 % (168) | 4 %
7 Group a | Ethics | Evaluative | 80 % (172) | 94 % (172) | 14 %
8 Group a | Ethics | Evaluative | 62 % (170) | 65 % (162) | 3 %
9 Group a | Ethics | Evaluative | 19 % (105) | 22 % (103) | 3 %
10 Group a | Ethics | Evaluative | 57 % (105) | 70 % (103) | 13 %
11 Group a | Ethics | Evaluative | 52 % (105) | 59 % (105) | 7 %
12 Group b* | History of philosophy | Recall | 62 % (255) | 86 % (254) | 24 %
13 Group b | History of philosophy | Evaluative | 71 % (262) | 85 % (263) | 14 %
14 Group b | History of philosophy | Evaluative | 40 % (262) | 54 % (254) | 14 %
15 Group b | History of philosophy | Recall | 36 % (256) | 45 % (252) | 9 %
16 Group b | History of philosophy | Recall | 71 % (246) | 82 % (250) | 11 %
17 Group b | Ethics | Case | 37 % (259) | 56 % (246) | 19 %
18 Group b | Ethics | Evaluative | 85 % (255) | 95 % (245) | 10 %
19 Group b | Ethics | Evaluative | 65 % (245) | 71 % (226) | 6 %
20 Group b | Ethics | Evaluative | 28 % (191) | 35 % (184) | 7 %
21 Group b | Ethics | Evaluative | 50 % (179) | 59 % (164) | 9 %
22 Group b | Ethics | Evaluative | 62 % (178) | 65 % (173) | 3 %
23 Group c** | Language and argumentation | Case | 35 % (203) | 35 % (206) | 0 %
24 Group c | Language and argumentation | Case | 66 % (206) | 81 % (205) | 15 %
25 Group c | Language and argumentation | Case | 40 % (203) | 43 % (205) | 3 %
26 Group c | Language and argumentation | Case | 74 % (204) | 90 % (206) | 16 %
27 Group c | Language and argumentation | Case | 67 % (157) | 74 % (156) | 7 %
28 Group c | Ethics | Case | 93 % (157) | 97 % (150) | 4 %
29 Group c | Ethics | Case | 82 % (148) | 87 % (143) | 5 %
30 Group d*** | Language and argumentation | Case | 47 % (183) | 69 % (179) | 22 %
31 Group d | Language and argumentation | Case | 75 % (182) | 92 % (178) | 17 %
32 Group d | Language and argumentation | Case | 97 % (191) | 97 % (180) | 0 %
33 Group d | Language and argumentation | Case | 99 % (190) | 100 % (170) | 1 %
34 Group d | Language and argumentation | Case | 78 % (197) | 89 % (199) | 11 %
35 Group d | Language and argumentation | Case | 78 % (201) | 89 % (197) | 11 %
Group a average | – | – | 51 % | 61 % | 10 %
Group b average | – | – | 55 % | 67 % | 12 %
Group c average | – | – | 65 % | 72 % | 7 %
Group d average | – | – | 79 % | 89 % | 10 %
Average for all groups | – | – | 60 % | 70 % | 10 %

* Groups a and b: four lectures for each group; the same interventions were conducted for both groups (intervention 1 corresponds to intervention 12, etc.).

** Group c: three lectures held by two different teachers.

*** Group d: three lectures.

As shown in Table 7, the peer discussions did indeed improve student performance on the clicker questions. The average improvement across all 35 interventions between the pre-discussion (Q1) and post-discussion (Q1ad) question was 10 percent. When controlling for the type of question asked, we find no difference in the average improvement between the evaluative questions (9 percent) and the case questions (9 percent), while the improvement on the recall questions (15.5 percent) is 6.5 percentage points higher; however, only six questions of this type were posed. Looking at each question pair, we also find that the improvement varies between zero and 30 percent post-discussion, showing that some questions yield higher improvement rates than others.
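These per-type averages can be re-computed directly from the change column of Table 7; the short sketch below does so (the evaluative mean of 8.9 is rounded to 9 percent in the text above).

```python
# Re-computing the per-question-type averages from the change values in
# Table 7 (6 recall, 14 evaluative and 15 case questions).

changes_by_type = {
    "recall": [30, 8, 11, 24, 9, 11],
    "evaluative": [17, 5, 14, 3, 3, 13, 7, 14, 14, 10, 6, 7, 9, 3],
    "case": [4, 19, 0, 15, 3, 16, 7, 4, 5, 22, 17, 0, 1, 11, 11],
}

for qtype, diffs in changes_by_type.items():
    print(f"{qtype}: {sum(diffs) / len(diffs):.1f} % average improvement")
# -> recall: 15.5 %, evaluative: 8.9 %, case: 9.0 %
```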

Peer discussions might also support the students’ learning process in ways that do not necessarily result in immediate effects. Several students wrote that the discussions provided an opportunity for reflection and active engagement with the content and with their peers. They stated that they learned both from listening to other students’ explanations and from explaining their own opinions. One student wrote that the peer discussions and the plenary session helped her gain a deeper understanding because the interventions made her look at the content from different angles. Both the peer discussions and the follow-up phase bring out different voices in the student group. This might create an opportunity for the students to step back and critically assess their own understanding against a variety of other perspectives. The discussions also made it clearer to some of the students, in retrospect, why they had decided on a specific answer.

One student wrote, however, that she changed from a correct answer to an incorrect answer because of the discussion. This experience is in line with studies that have found that correct answers do not automatically imply correct understanding. Although changing from a correct to an incorrect answer negatively affects the clicker statistics, this does not mean that peer discussions are damaging or that the student has no understanding of the topic. Answering clicker questions correctly is not an end in itself, but part of the student’s learning process.

Based on the log data, the students deemed the relationship between the lecture and the questions as good, but some questions were criticized for being too easy. Some of the students argued that these questions led to poorer discussions. When the questions were difficult, the students described them as “good”, “more educational”, and “more motivating”.

The teachers’ experience and the follow-up phase

Table 8

Number of teachers experiencing

Feedback on student understanding | 4
Increased student engagement | 3
Increased student attention | 2
A break in the lecture | 2
A new way of preparing for the lectures | 2

The teachers’ perception of the clicker interventions was generally positive, and all teachers stated that they would like to use response technology more in the future. As shown in Table 8, the teachers appreciated the immediate feedback on student understanding, the increased student engagement and attention, the breaks in the lecture, and the fact that the interventions offered a new way of preparing for and structuring the lectures. One of the teachers also experienced the students as easier to engage in plenary discussions later in the semester. He speculated that the interventions might have functioned as an “ice breaker” early in the semester, and that this had also helped the students to get involved in lectures without the use of clickers.

The only negative experiences mentioned by teachers were related to loss of time due to the interventions. Two teachers mentioned that the time it took to hand out the clickers was a problem. One of them also experienced the transition between the interventions and lecturing as a disturbance. Another teacher stated that the interventions took more time from the lecture than he anticipated. These objections seem to have been experienced as minor disruptions. However, one of the teachers said that even though his experience with clickers had inspired him to use online response technology in other lectures, he would hesitate to use clickers again because of the time it took to distribute them.

Although too much time spent handing out the clickers is irritating, and noise in the transition period between the interventions and the lecture is undesirable, the fact that the clicker interventions take time from the lecture is not necessarily negative in itself. One of the teachers stated that he considered the interventions as useful for slowing down the tempo and reducing the amount of material presented. He argued that it might be better to focus on a few important points rather than provide the students with lots of information where little or nothing is retained.

Since the interventions decrease the length of the lecture, the teachers need to select carefully what they want to cover. Two of the teachers mentioned that planning the interventions and creating the clicker questions challenged them to focus on what is most important to convey:

…it's a new way of preparing the lectures. You become more focused on what the main points are and what’s most important. You have another tool to push yourself to really concentrate on what is central and what is less important.

This might be an additional benefit of using clicker interventions that has been overlooked in previous studies, suggesting that planning the interventions could contribute to raising the teacher’s awareness of the content. Using a new approach to teaching also challenges the teachers’ traditional conception of a successful lecture. In a traditional lecture, a teacher is likely to evaluate the session based on his own experience of his performance. With the clicker interventions, the focus shifts from the teacher to the students. One of the teachers made an interesting reflection on this when talking about how he experienced the time spent on the interventions:

All these interventions took a lot of time as well, so it was a bit choppy and I lost a bit of my own sense of progress, but I'm just not sure that… when you are talking, I know that psychologists say that time goes much faster for you than for the audience, and you may, subjectively, experience that you have great drive and great coherence, but it is not certain that the students are experiencing it the same way, although I feel that I have a very driving lecture with a lot of punch and energy, it may well be that the students fall asleep after a quarter of an hour. But here they were more engaged. So how they felt about it and how I experience it are two different things.

In the excerpt above, the teacher addresses the difference between his experience as a teacher and the experience of the students. Even though the teacher might consider his lecture successful because he was able to convey what he considers most important to his students, this does not mean that the students learned anything. All four teachers recognized the value of the interventions as an instrument for assessing the students’ understanding and using the information in situ:

You get immediate feedback on whether what you have explained has been understood generally when you apply it on an example. If there are many incorrect answers, this indicates that it is generally not understood and that there is a need to elaborate more. So it works in the sense that you get immediate feedback on whether there is a need to say something more about a point or whether to proceed.

The students also experienced the teachers using the feedback from the clickers to explain better and clear up misunderstandings. Several students emphasized that they appreciated that the teachers took the time to follow up on the incorrect student answers as well as the correct ones. They argued that this made them more aware of “why” the different alternatives were correct or incorrect. Some students also mentioned that they found the teacher’s follow-up important for confirming their own understanding. Although the teachers seemed to value and use the feedback information in situ, only one teacher mentioned the interventions as useful for planning future lectures. He considered the interventions useful both for adjusting the focus of future lectures and for making adjustments in the reading lists.

Discussion and conclusion

As illustrated in Table 9, the findings can be organized into two main types of feedback: feedback contributing to self-monitoring (for both students and teachers) and feedback contributing to an increased content understanding (for students).

Table 9

How feedback from the clicker interventions is received and used by students and teachers

Students’ experience
– Feedback: self-monitoring
  – Feed up: increased awareness of important concepts and how they are related; a more structured lecture.
  – Feed back: increased awareness of their current understanding and misunderstandings.
  – Feed forward: adjust their focus when studying; use clicker questions when practising; discuss various concepts with peers.
– Feedback: content understanding: increased retention; new knowledge constructed through the peer discussions and the teacher’s explanations.

Student performance
– Feedback: content understanding: peer discussions enhance student performance.

Teachers’ experience
– Feedback: self-monitoring
  – Feed up: increased awareness of what is most important for the students to learn; a more structured lecture.
  – Feed back: increased awareness of the students’ understanding.
  – Feed forward: synchronous: explain key concepts and use more time on difficult parts; asynchronous: make adjustments in the reading list and adjust future lectures.

In the first case, the survey data show that most of the students experienced feedback from the clicker interventions that contributed to self-monitoring. The students wrote in their logs that the interventions supported their self-monitoring by increasing their awareness of their own understanding of various concepts (feed back). They also wrote that the interventions made the lectures better organized and made them more aware of important concepts and how they were related (feed up). With regard to the question of how to improve (feed forward), the students seemed to be most focused on the clicker interventions as an event taking place in the lecture, and less focused on how the information could be used later on. Although many of the students experienced that the interventions provided them with feedback on their own understanding, only 11 (out of 23) students wrote that they used this information in their coursework. Furthermore, only three of these wrote that they used the information to adjust their focus when studying; the others wrote that they used the clicker questions when practising and discussed various concepts with their peers. These activities seemed to help the students organize and clarify the various concepts. It is known that choosing a productive focus is challenging for first-year students (Nicol, 2009). If the clicker interventions can contribute to giving the students a better overview of what is important in the course and help them monitor their own understanding, this could be important support for these students. Our findings and the findings from previous studies (Egelandsdal & Krumsvik, 2015; Ludvigsen et al., 2015) show that most students do experience feedback from the clicker interventions that helps them monitor their own understanding. When it comes to acting on this feedback, however, there seems to be a feedback gap for many students.

The variations in whether and how students use the feedback from the interventions are likely related to individual differences in student understanding, motivation and ability to self-regulate their studying. In a review, Jonsson (2013) found that a number of students use feedback passively as an indicator of progress or to motivate themselves, but lack strategies for using the feedback actively. This also appears to be the case in our study; many of the students whose experience was that the interventions made them more aware of their understanding do not appear to use this information in their coursework. Students’ decisions to use feedback are also related to their perception of the information and the opportunity to use it in the near future (Jonsson, 2013). Although feedback from the clicker interventions can be used immediately, it is not required in the same way as, for instance, feedback on a text that is going to be revised. In other words, feedback from the clicker interventions might not be considered immediately relevant for the students’ coursework. In addition, lectures are not the only course activities, nor are they the only source of feedback. Thus, the perceived relevance of feedback from the clicker interventions for the students’ coursework depends at least on the teacher making the students aware of how she considers the clicker questions in relation to the learning intentions and the exams (some might be more central than others), and of how the feedback could be used (to practise more on topics that are identified as difficult, to use similar questions when practising, etc.). Perhaps even more importantly, the teacher could facilitate activities related to the concepts used in the clicker interventions at seminars or through assignments, to create opportunities to use the feedback in the near future.

In the second case, the survey data show that many students experienced feedback from the interventions as contributing to enhancing their content understanding. The student logs show that the students experienced increased retention and an improved understanding of the topics because of the clicker interventions. Increased retention might be a result of both increased student attention and a “testing effect”, while an enhanced understanding of the topics is likely related to increased opportunities to construct new knowledge. The quasi-experiment shows that the peer discussions might be particularly useful for immediately improving the students’ understanding of the topics. The average improvement resulting from the discussions was 10 percent. This is notable considering the shortness of the discussions (two minutes) and the sizable sample (6,772 student responses) on which the findings are based. Consequently, the results support previous findings on the effects of peer discussions in clicker interventions. As previous studies have shown, clicker results may sometimes misrepresent the students’ understanding (James & Willoughby, 2011; J. K. Knight et al., 2015; Wood et al., 2014); that is, correct answers do not always mean that the students have understood the concepts well, and incorrect answers do not always mean that the students have little or no understanding. On the other hand, the benefits of peer discussions do not necessarily depend on the students improving their answers after discussing. One student experienced the interventions as useful for reflecting on her own understanding even though she answered correctly both before and after the discussion. Another student changed from a correct to an incorrect answer because of the discussion. Although such instances do not have a positive effect on the clicker results, they might still help the students to monitor and develop their understanding. This illustrates the importance of looking at the peer discussions and clicker interventions from a variety of perspectives.

All the teachers perceived the interventions as useful for gaining insight into the students’ understanding and using this information in the lecture. The introduction of clicker interventions seemed to create a shift, for some of the teachers, from focusing solely on their own presentation of the content to focusing on the students’ understanding of the subject matter. The findings indicate that the use of clicker interventions helped the teachers assess and get a more realistic picture of how the students understood the topics of the lecture (feed back). This is important because the teachers’ understanding of a domain, as experts, differs radically from that of their students, who might be introduced to various concepts for the first time (Hrepic et al., 2007). The interventions also seemed to contribute to raising some of the teachers’ awareness of what is most important for the students to learn (feed up). Two teachers stated that constructing the clicker questions truly forced them to focus on what was essential in their lecture. The teachers followed up on the student responses by asking the students to explain their choices and by providing their own explanations for the incorrect and correct alternatives (feed forward). The students expressed their appreciation of the teacher’s follow-up and said that it contributed to developing or confirming their understanding. Some students emphasized that it was useful that the teachers also followed up on the incorrect alternatives. Like the students, the teachers seemed mostly interested in the clicker interventions as an event taking place in the lecture, and in the immediate use of the information. One of the teachers stated that he considered the information from the interventions useful for planning future lectures and for making adjustments in the students’ reading lists. The others seemed less focused on how they could use the information in future teaching activities. These findings indicate that there is a feedback gap for the teachers as well as for the students.

It might not be surprising that the teachers were less concerned with using information from the interventions after the lectures. Since the teachers used response technology for the first time, it is likely that their efforts and focus were primarily directed at making the interventions work as an integrated part of their lectures. Studies have found, however, that teachers’ perceived advantages increase as they become more experienced with using clickers (Draper & Brown, 2004; Kolikant et al., 2010). For instance, Kolikant et al. (2010) studied three teachers who had used clickers 2–4 times before. They found that even though the teachers had initially started using SRS to change their students (to make them more active in class), they experienced that using the technology also transformed their teaching. As Boscardin and Penuel (2012, p. 404) have noted, the use of clickers requires teachers to develop expertise not only in the topics of the lecture and in creating appropriate questions, but also in “pedagogical skills to adjust and modify instruction…”. In other words, using feedback from the interventions requires both additional effort and a pedagogical understanding of how to use the information. This is likely to improve with experience. Nevertheless, the teachers may also benefit from initial support in aligning the use of response technology with the course as a whole and in utilizing feedback from the interventions. Not all insights can be gained from personal experience.

Limitations and suggestions for further research

While zooming in on how students and teachers receive and use feedback from clicker interventions provides a focused view of the instructional design, a limitation is the lack of information about how the interventions relate to the course as a whole. We believe the next step would be to study clicker interventions from a broader perspective, that is, how lectures and clicker interventions can be aligned with other course activities, the overarching learning intentions and the exams. When it comes to student feedback, the focus of this study was on feedback information; we did not study the affective side of feedback. Considering clicker intervention feedback from both an informational and an affective perspective may help explain differences in the students’ use of feedback in their coursework. We also believe it would be worthwhile to study teachers’ professional development with response technology over time (with and without external support), and how this affects students’ reception and use of the feedback.

References

Amundsen, G. Y., Damen, M.-L., Haakstad, J., & Karlsen, H. J. (2017). NOKUTs utredninger og analyser: Underviserundersøkelsen 2016 (1-2017). Retrieved from: http://www.nokut.no/no/Nyheter/Nyheter-2017/Store-variasjoner-i-norske-studenters-faglige-forutsetninger-og-studieinnsats

Anderson, L. S., Healy, A. F., Kole, J. A., & Bourne, L. E. (2011). Conserving time in the classroom: The clicker technique. The Quarterly Journal of Experimental Psychology, 64(8), 1457–1462. DOI: https://doi.org/10.1080/17470218.2011.593264

Black, P., & Wiliam, D. (2009). Developing the Theory of Formative Assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.

Boscardin, C., & Penuel, W. (2012). Exploring Benefits of Audience-Response Systems on Learning: A Review of the Literature. Academic Psychiatry, 36(5), 401–407. DOI: http://dx.doi.org/10.1176/appi.ap.10080110.

Caldwell, J. E. (2007). Clickers in the Large Classroom: Current Research and Best-Practice Tips. CBE – Life Sciences Education, 6(1), 9–20.

Chien, Y.-T., Chang, Y.-H., & Chang, C.-Y. (2016). Do we click in the right way? A meta-analytic review of clicker-integrated instruction. Educational Research Review, 17, 1–18. DOI: http://dx.doi.org/10.1016/j.edurev.2015.10.003

Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design Research: Theoretical and Methodological Issues. The Journal of the Learning Sciences, 13(1), 15–42. DOI:10.2307/1466931

Crouch, C. H., & Mazur, E. (2001). Peer Instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970–977. DOI: http://dx.doi.org/10.1119/1.1374249.

D’Inverno, R., Davis, H., & White, S. (2003). Using a personal response system for promoting student interaction. Teaching Mathematics and its applications, 22(4), 163–169.

Deslauriers, L., Schelew, E., & Wieman, C. (2011). Improved Learning in a Large-Enrollment Physics Class. Science, 332(6031), 862–864. DOI: http://dx.doi.org/10.1126/science.1201783.

Draper, S. W., & Brown, M. I. (2004). Increasing Interactivity in Lectures Using an Electronic Voting System. Journal of Computer Assisted Learning, 20(2), 81–94.

Egelandsdal, K., & Krumsvik, R. J. (2015). Clickers and formative feedback at university lectures. Education and Information Technologies, 1–20. DOI: http://dx.doi.org/10.1007/s10639-015-9437-x.

Egelandsdal, K., & Krumsvik, R. J. (2016). Peer discussions and response technology: short interventions, considerable gains. Nordic Journal of Digital Literacy. Accepted for publication.

Evans, C. (2013). Making Sense of Assessment Feedback in Higher Education. Review of Educational Research, 83(1), 70–120. DOI: https://doi.org/10.3102/0034654312474350

Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66(1), 64–74. DOI: http://dx.doi.org/10.1119/1.18809.

Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81–112.

Hrepic, Z., Zollman, D. A., & Rebello, N. S. (2007). Comparing Students’ and Experts’ Understanding of the Content of a Lecture. Journal of Science Education and Technology, 16(3), 213–224. DOI: http://dx.doi.org/10.1007/s10956-007-9048-4.

James, M. C., & Willoughby, S. (2011). Listening to student conversations during clicker questions: What you have not heard might surprise you! American Journal of Physics, 79(1), 123–132. DOI: http://dx.doi.org/10.1119/1.3488097.

Johnson, R. B., & Christensen, L. (2016). Educational Research: Quantitative, Qualitative, and Mixed Approaches: SAGE Publications.

Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63–76. DOI: https://doi.org/10.1177/1469787412467125

Knight, J. K., Wise, S. B., Rentsch, J., & Furtak, E. M. (2015). Cues Matter: Learning Assistants Influence Introductory Biology Student Interactions during Clicker-Question Discussions. CBE Life Sci Educ, 14(4), ar41. DOI: https://doi.org/10.1187/cbe.15-04-0093

Knight, J. K., & Wood, W. B. (2005). Teaching more by lecturing less. Cell biology education, 4(4), 298–310. DOI: http://dx.doi.org/10.1187/05-06-0082.

Kolikant, Y. B.-D., Drane, D., & Calkins, S. (2010). “Clickers” as Catalysts for Transformation of Teachers. College Teaching, 58(4), 127–135.

Krumsvik, R. J. (2012). Feedback Clickers in Plenary Lectures: A New Tool for Formative Assessment? In L. Rowan & C. Bigum (Eds.), Transformative Approaches to New Technologies and Student Diversity in Futures Oriented Classrooms: Future Proofing Education (pp. 191–216). Dordrecht: Springer Netherlands.

Krumsvik, R. J., & Ludvigsen, K. (2012). Formative E-Assessment in Plenary Lectures. Nordic Journal of Digital Literacy, 7(01).

Lantz, M. E. (2010). The use of “Clickers” in the classroom: Teaching innovation or merely an amusing novelty? Computers in Human Behavior, 26(4), 556–561. DOI: http://dx.doi.org/10.1016/j.chb.2010.02.014.

Ludvigsen, K., Krumsvik, R. J., & Furnes, B. (2015). Creating formative feedback spaces in large lectures. Computers & Education, 88(0), 48–63. DOI: http://dx.doi.org/10.1016/j.compedu.2015.04.002

Mazur, E. (1997). Peer instruction: a user’s manual. New Jersey: Prentice Hall.

Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37(4), 375–401.

Nicol, D. (2009). Assessment for learner self-regulation: enhancing achievement in the first year using learning technologies. Assessment & Evaluation in Higher Education, 34(3), 335–352. DOI: https://doi.org/10.1080/02602930802255139

Nielsen, K. L., Hansen, G., & Stav, J. B. (2016). How the initial thinking period affects student argumentation during peer instruction: students’ experiences versus observations. Studies in Higher Education, 41(1), 124–138. DOI: https://doi.org/10.1080/03075079.2014.915300

Perez, K. E., Strauss, E. A., Downey, N., Galbraith, A., Jeanne, R., & Cooper, S. (2010). Does Displaying the Class Results Affect Student Discussion during Peer Instruction? CBE – Life Sciences Education, 9(2), 133–140.

Porter, L., Bailey Lee, C., Simon, B., & Zingaro, D. (2011). Peer instruction: do students really learn from peer discussion in computing? Paper presented at the Proceedings of the seventh international workshop on Computing education research.

Rao, S. P., & DiCarlo, S. E. (2000). Peer instruction improves performance on quizzes. Advances in Physiology Education, 24(1), 51–55.

Schwartz, D. L., & Bransford, J. D. (1998). A time for telling. Cognition and Instruction, 16(4), 475–522. DOI: https://doi.org/10.1207/s1532690xci1604_4

Smith, E. L., Rice, K. L., Woolforde, L., & Lopez-Zang, D. (2012). Transforming Engagement in Learning Through Innovative Technologies: Using an Audience Response System in Nursing Orientation. Journal of Continuing Education in Nursing, 43(3), 102–103. DOI: http://dx.doi.org/10.3928/00220124-20120223-47.

Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why Peer Discussion Improves Student Performance on In-Class Concept Questions. Science, 323(5910), 122–124. DOI: http://dx.doi.org/10.1126/science.1165919.

Smith, M. K., Wood, W. B., Krauter, K., & Knight, J. K. (2011). Combining Peer Discussion with Instructor Explanation Increases Student Learning from In-Class Concept Questions. Cbe-Life Sciences Education, 10(1), 55–63. DOI: http://dx.doi.org/10.1187/cbe.10-08-0101.

Vickrey, T., Rosploch, K., Rahmanian, R., Pilarz, M., & Stains, M. (2015). Research-Based Implementation of Peer Instruction: A Literature Review. Cbe-Life Sciences Education, 14(1). DOI: https://doi.org/10.1187/cbe.14-11-0198

Wang, F., & Hannafin, M. J. (2005). Design-based research and technology-enhanced learning environments. Etr&D-Educational Technology Research and Development, 53(4), 5–23. DOI: https://doi.org/10.1007/bf02504682

Wood, A. K., Galloway, R. K., Hardy, J., & Sinclair, C. M. (2014). Analyzing learning during Peer Instruction dialogues: A resource activation framework. Physical Review Special Topics – Physics Education Research, 10(2), 020107.

Yoder, J. D., & Hochevar, C. M. (2005). Encouraging active learning can improve students’ performance on examinations. Teaching of Psychology, 32(2), 91–95. DOI: http://dx.doi.org/10.1207/s15328023top3202_2.

Zingaro, D., & Porter, L. (2014). Peer instruction in computing: The value of instructor intervention. Computers & Education, 71, 87–96. DOI: http://dx.doi.org/10.1016/j.compedu.2013.09.015.

¹ This number is based on the largest number of students answering a clicker question for each student group. The number of students answering each question during a lecture varied.
