Video Feedback in Higher Education – A Contribution to Improving the Quality of Written Feedback
The purpose of this article is to promote the significance of feedback on students’ work with written texts in higher education and to point out how technology can improve the quality and form of teachers’ feedback. The results of studies and tests completed in eight separate subject areas demonstrate that video feedback simplifies and increases the efficiency of responding to students’ work, as it allows the opportunity to achieve increased levels of precision and quality in the feedback process. Students emphasize their learning dividend and the inspiration they experience from working with this format. They actively use their teacher’s comments and acquire a stronger emotional bond with him/her as well.
Keywords: Video feedback, screen capture, feedback, higher education.
Introduction

Students rate receiving feedback on their written assignments as particularly significant in their evaluation of academic courses and study programs (Debuse, Lawley & Shibl, 2007; Pepper & Pathak, 2008). Traditionally, students receive feedback from their teachers in the form of comments on paper or by e-mail, or they have conversations with their teacher (Brick & Holmes, 2008). Experiences from British universities show that written feedback is consistently the dominant form used in academic settings (Hyland & Hyland, 2006; Sugita, 2006), and the same holds true in a Norwegian context (Dysthe & Engelsen, 2007). However, the most common channels through which feedback is transmitted are electronic, in the form of an LMS (34%) or e-mail (20%), even if paper format maintains a strong position in this regard (38%) (ibid.). (LMS: Learning Management System. In this project Fronter was used as the Learning Management System organizing users and managing e-learning content.)
Teachers wishing to offer precise and systematic feedback on their students’ written assignments face a challenging process, as both time and resources for giving feedback are limited, while students’ need for and expectations of feedback and assessment are usually high. Additionally, the requirements made of teachers have increased. For instance, through the implementation of the Quality Reform (White Paper no. 27, 2000-2001; St.meld. nr. 27 (2000-2001)), particular focus was placed on the significance of students receiving “frequent feedback” (ibid.:15). Moreover, contact with students and the ensuing continuous dialogue have assumed a different character in that several study programs are now Internet-based, and even students on campus communicate to a great extent with their teachers via the LMS and other types of interactive programs.
In the present project we have studied the significance video feedback (understood in this context as a video recording of the action taking place on the screen combined with an audio recording of the teacher’s comments on what is being shown, usually described as video screen capture or video screencast) can have for assuring and increasing the quality of feedback given on written assignments in a university context. The project has involved using the screen capture program JING to produce video feedback for students, distributed via the learning platform Fronter. Using digital folders as a communication medium, students and teachers have asynchronously exchanged hand-in assignments and feedback in the form of screen capture.
While screen capture technology is not new (see the ensuing discussion), as a tool for giving college and university students feedback, this working form is still in its beginning stages and seems to contain unexploited potential (Stannard, 2007a, 2008a). Lumadue & Fish (2010) assert that this simple technological tool represents nothing less than a paradigm shift for the purpose of giving students high-quality feedback on their academic work.
The main purposes of this study are to describe how screen capture can be used as a feedback form for students’ written work and to investigate how video feedback can contribute to developing the quality and form of feedback given in various subjects.
Feedback for higher education students

The quality of feedback given to students appears to be highly significant for their work with the subject at hand and for the actual learning dividends they acquire. A study by Debuse et al. (2007) shows that feedback is decisive for students to understand and receive support for their own learning process and to develop the insight needed to understand their own strengths and weaknesses. If students do not receive feedback, they also find it difficult to maintain the motivation they need to make progress in their academic work (Pepper & Pathak, 2008). Feedback is especially important for new students, as demonstrated by Poulos and Mahoney (2008), whose study concludes that “these students’ feedback goes beyond providing information on how to improve assessment marks. The ‘effective feedback’ for these students is that which provides emotional support and facilitates integration into university” (ibid., p. 152).
Similarly, Lumadue and Fish (2010) stress the significance of what they describe as “quality feedback”, claiming that educational institutions that succeed in giving systematic and constructive feedback contribute to creating an effective learning situation of decisive significance for students’ learning dividends and grades (Debuse et al., 2007; Higgins, Hartley & Skelton, 2002). According to Wolsey (2008), one important aspect of feedback is that it establishes a relationship between teacher and student, a factor which in and of itself promotes learning. Feedback includes helping students understand the requirements and standards forming the basis for the grades they receive (Glover & Brown, 2006). It also provides a direction for future work within the subject. Wolsey (2008) concludes by stating that systematic, good feedback is a decisive factor in the improvement of students’ written work.
Furthermore, in a meta-analysis of factors that influence students’ actual learning dividends, Hattie concludes the following (Hattie, 2009; Hattie & Timperley, 2007):

“Of all the factors that make a difference to student outcomes, the power of feedback is paramount in any list. The overall effect-sizes of feedback from over 1000 studies based on 50,000+ students reveal that feedback is among the highest of any single factor, and it underpins the causal mechanisms of most of the factors in the top 10-20 factors that enhance achievement.” (Hattie, 2009)
However, there are several challenges facing the person giving and receiving feedback. Research shows that feedback can be vague, unclear and confusing (Crawford, 1992; Goldstein & Kohls, 2002; Zamel, 1985). One result of this situation is that students use teachers’ feedback without truly understanding what it implies and what they in reality should correct or improve (Crawford, 1992; Stannard, 2008a). Another consequence of poor quality feedback is that students simply ignore it (Bartholomae, 1980; Hyland, 2003). This may be due to the fact that the feedback is often comprehended as being inconsistent and contradictory (Fregeau, 1999; Zamel, 1995). In the worst possible case a teacher’s inability to give constructive feedback as well as his/her possible emphasis on criticism and negative responses can operate in a dysfunctional manner, hampering the overall learning process (Cohen & Cavalcanti, 1990) and halting “intellectual growth” (Summerville & Johnson, 2006).
Lumadue and Fish (2010) summarize their impression of how feedback is given to higher education students in the following manner:
“Practices must be improved (Holmes & Papageorgiou, 2009), as instructional feedback is often vague, non-specific (Debuse et al., 2007; Higgins et al., 2002), inconsistent and infrequent (Holmes & Smith, 2003). The ability for faculty within higher education to provide quality feedback in a timely manner has become a challenge due to larger class sizes and increased workloads (Debuse et al., 2007), which has resulted in many teachers reducing the frequency of assignments (Gibbs & Simpson, 2004).” (Lumadue & Fish, 2010)
Future-oriented and multimodal feedback

What then is effective and high-quality feedback? Basing his conclusions on an extensive amount of material, John Hattie (Hattie & Timperley, 2007) asserts that effective feedback must answer the following three questions (ibid.): 1) Where am I going? (What are the goals?), 2) How am I going? (What progress is being made toward the goal?), and 3) Where to next? (What activities need to be undertaken to make better progress?) This line of thought corresponds to Biggs’ (1999) description of constructive alignment and is characterized by Hattie and Timperley (2007) as effective future-oriented feedback. Furthermore, they use the terms Feed up, Feed back and Feed forward (ibid.) for the three questions listed above. The answers to these questions depend on the feedback’s format and quality and on the level at which it operates. According to Hattie and Timperley, the ideal situation is when the student both seeks the answers to these three questions and searches for the inner connection between them. Doing so makes demands not only of the student’s academic skills, but also of his/her metacognitive skills and degree of personal insight. It also requires that student and teacher practice meaningful communication, have a positive dynamic between them (Sadler, 1989) and focus on both feed back and feed forward. Stated differently: “…it is closing the gap between where the student is and where they are aiming to be which leads to the power of feedback” (Hattie & Timperley, 2007, p. 15).
In recent years alternative forms of feedback have arisen as a result of new technological possibilities. For example, Gardner (2004) has studied feedback in the format of recordings, while Ware and Warschauer (2006) have analyzed how various computer-mediated communication forms influence students’ involvement and motivation for textual development (Brick & Holmes, 2008, p. 340). These and similar studies (McLaughlin et al., 2007; Stannard, 2006, 2007b) confirm the fact that students prefer receiving feedback in the format of sound or video instead of exclusively in written form.
Russell Stannard at the University of Westminster (2006, 2007b) has further developed these ideas through his attempt to use screen capture to give feedback in language instruction. Stannard bases his study on Richard Mayer’s (2001) principles of combining oral and visual feedback in what is called “dual coding”. Mayer maintains that there is a tendency to overload the visual channel in the media and that a balanced use of picture and sound makes the two channels complement one another (see Clark & Mayer, 2003; Paivio, 1986). Moreover, Brick and Holmes (2008) claim that combining speech, picture/video and the printed word also has significance for various learning styles:
The use of speech, graphics and the written word seems to cater to the widest variety of learning styles, reaching those with a preference for auditory and visual learning who are less likely to benefit from standard single mode written feedback. (Brick & Holmes, 2008, p. 340)
Stannard (2007b) and Lumadue & Fish (2010) indicate through their findings that this multimodal form of feedback has significant potential in higher education. Stannard (2006, 2007b) is willing to call it a new direction for higher education feedback, and Lumadue & Fish (2010) go as far as describing screen capture in feedback work as “A Paradigm Shift for the 21st Century”.
However, there is reason to assume a critical and inquiring approach to this “technological optimism”, especially concerning the question of how we can best provide feedback on students’ written work. One consideration is that feedback and technology should always be contextualized (Attwell, Pachler & Pimmer, 2010), an idea that requires an understanding of unique contextual characteristics. In this particular instance the context is formed by, for example, the interaction between the technology and the particular subject’s distinctive character, existing educational and advisement traditions, and teachers’ and students’ digital literacy. The latter can be especially significant when feedback is provided in a new form requiring the user to be competent in, familiar with and confident in the technology, primarily as concerns understanding and mastering a new genre (Mathisen & Wergeland, 2010). Bakhtin (1998) claims that people who master a language perfectly in one context may experience a feeling of complete helplessness in new communication spheres because they do not master the particular genres.
Another concern, which the philosopher Hans Skjervheim (1996) warns against, is the danger of committing “the instrumentalist fallacy”: the problems that arise when technology oversteps its legitimate boundaries, so that human qualities and values become objectified and human beings are subordinated to systems, regimes and formats. Stated more simply: if we are to succeed in implementing future-oriented and multimodal feedback with the aid of a certain type of technology, there is reason to study exactly “why” this should happen before concerning ourselves with the “how”, as only in this manner can we avoid committing the instrumentalist fallacy (ibid.).
Screen capture technology

Screen capture refers to programs whose common feature is allowing users to make a video recording of all movements and changes on the screen while simultaneously capturing synchronized audio via a microphone. Another common feature is that the compressed video files may be easily distributed, either by saving the recording as a sound/picture file that can be uploaded to an LMS or by attaching it to an e-mail. Depending on the program, the user may also receive a hyperlink to the recording once it has been uploaded to an Internet server, or share the video file on Facebook, Twitter or Flickr with a single keystroke. A third feature is that these programs are usually quite user-friendly and designed with an intuitive user interface.
There currently exist several different programs (Jing, Camtasia, Wink, Snagit, AutoScreenRecorder, Matchware, etc.) that may be downloaded for free or upgraded at various costs to “pro” versions. There are also Internet-based programs (MailVu, Vocaroo, Screen Toaster, Screencast-O-Matic, etc.) in which both recording and distribution occur via a Web-solution. These programs are marketed as being user-friendly, multimedia programs for Internet publishing and communication. The fact that these programs allow users to save both time and money is emphasized, as communication becomes more efficient when it is not limited to merely exchanging text, but also includes picture, movement, color and sound.
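As a purely illustrative sketch of what such tools automate behind their graphical interfaces, the same record-and-compress workflow can be approximated with the free ffmpeg command-line tool. The device names, paths and parameters below are assumptions for a Linux setup, not part of the study’s setup (JING itself handles all of this through its GUI):

```python
# Hypothetical sketch of a screen-capture recording command built with ffmpeg.
# All device names and file names are illustrative assumptions.
import shlex

def build_capture_command(output_file: str, duration_sec: int = 300) -> list[str]:
    """Assemble an ffmpeg command that records the screen (x11grab, Linux)
    together with microphone audio (ALSA), stopping after `duration_sec`
    seconds, mirroring JING's 5-minute recording limit."""
    return [
        "ffmpeg",
        "-f", "x11grab", "-i", ":0.0",         # video input: the X11 display
        "-f", "alsa", "-i", "default",         # audio input: default microphone
        "-t", str(duration_sec),               # stop after the time limit
        "-c:v", "libx264", "-preset", "fast",  # compress video for easy upload
        "-c:a", "aac",                         # compress audio
        output_file,
    ]

cmd = build_capture_command("feedback_student42.mp4")
print(shlex.join(cmd))
```

The resulting MP4 file could then be uploaded to an LMS folder or attached to an e-mail, as described above for the screen capture programs themselves.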
In the present tests (Fig. 3) the program JING was used.

Figure 1. The screen dump shows the preliminary work done before making a screen capture recording in connection with providing comments on logs written by lower-level student groups pursuing a Bachelor’s Degree in Education.

With regard to a university setting, recording possibilities have largely been used to make instructional videos and only experimentally to make recordings of the teacher’s review of assignments handed in for correction. If used for the latter purpose, video feedback may contain individual comments and/or be directed towards groups. The starting point is written work handed in for review (Word, Excel, PDF or similar) and displayed on the screen while the teacher uses the cursor and makes spoken comments. The text to be commented on has often been either highlighted (Fig. 1) or given written comments using proofreading tools (Fig. 2).
Figure 2. The screen dump shows the preliminary work done before making a screen capture recording in connection with a project assignment in an Internet-based study program on gender equality.
The study’s unique characteristics and methods
Background of the study

The background for the testing came about as a result of the encounter between various academic subjects and technology, or, stated differently: a significant professional challenge met a possible technological solution. In connection with a teacher training course in university pedagogics (http://www.uia.no/no/div/sentre/pedagogisk_utviklingssenter/kursprogram/uniped-kurs), during which one of the topics discussed was the teaching staff’s work with assessment, feedback, advisement and follow-up of students’ written assignments, several challenges were identified: participants expressed an increasing need for feedback to reach groups and individual students (on campus and in Internet study programs), as expectations regarding feedback are growing (see the Quality Reform, St.meld. nr. 27 (2000-2001)) and workloads are increasing. Additionally, teachers expressed a desire to create more student involvement and to provide feedback characterized by quality, precision and variation. Students for their part signaled to teachers that they need follow-up, and they were also looking for more personal contact, a factor corresponding to international studies (Debuse et al., 2007; Higgins et al., 2002).
As a result of a presentation of the screen capture tool JING, various tests were performed in several subject areas (Fig. 3). Teachers’ reasons for wanting to try video feedback were varied and complex, although teachers having large student groups and heavy workloads felt that the possibility of improving their feedback efficiency was very appealing (Fig. 3, see teachers A, B, E and F).
Figure 3. Overview of the tests:

| Teacher/study | Subject | Type of feedback | Indiv./groups | Number of recordings | Data |
|---|---|---|---|---|---|
| A | Education, BA | Feedback/comments on logs from lower-level groups | Gr. | 4 feedback sessions per group | 10 gr. x 4 à |
| B | Engineering, BA | Feedback/advisement on factual individual assignments and reflection notes | Indiv. | 3 feedback sessions per individual | 30 indiv. x 3 à |
| C | German language | Feedback/advisement on individual assignment | Indiv. | 1 feedback session per individual | 1 recording x 3 à |
| D | Gender Equality, LIK-900, Internet study program | Feedback/advisement on individual project work | Gr. | 1 feedback session per group | 3 gr. x 1 à |
| E | IT and info. systems, BA | Feedback on project reports | Gr. | 3-4 feedback sessions per individual | 40 indiv. x 3-4 à |
| F | Nursing for Mentally Challenged, BA | Feedback on written assignments + user manuals | Indiv. & Gr. | 2-3 feedback sessions per individual | 40 indiv. x 2-3 à |
Method

As shown by the material, the research process has developed at the same rate as the various participants’ developmental work in their respective subject areas. There has thus been a progressive focus on individual topics, in which both teachers’ and students’ oral feedback has informed the development of the inquiries, which in turn has informed the design of the teacher interviews.
The present study is grounded in a case study paradigm. Yin (2003, p. 13) defines the case study research method “as an empirical inquiry that investigates a contemporary phenomenon within its real-life context; when the boundaries between phenomenon and context are not clearly evident; and in which multiple sources of evidence are used”. Case study research excels at bringing us to an understanding of a complex issue and can expand on experience or add strength to what is already known through previous research, as it emphasizes detailed contextual analysis of a limited number of events or conditions and their relationships. On the other hand, critics have claimed that case studies lack rigor and reliability and fail to address the issue of generalizability, in contrast to quantitative methods (Hartley, 2004).
In spite of these research-related challenges, which make comparison and generalization problematic, the breadth and composition of the material give it strong potential for gathering an extensive amount of information as well as detailed, context-dependent knowledge about this phenomenon (Flyvbjerg, 1991). The main intention is therefore to create a deeper understanding and uncover important aspects of video feedback as a phenomenon (Andersen, 1997; Yin, 2003), knowledge that may be empirically (quantitatively) tested in the next stage.
Participants and procedures

During the testing period, video feedback was given on written work completed by both individuals and groups. While response was primarily based on written assignments (logs, factual assignments, reflection notes and project work), the technology was also used for relaying messages, instructions and user manuals. The present test covers considerable breadth with respect to academic subjects: education, engineering, the German language, gender equality, IT and information systems, as well as nursing for the mentally challenged. In addition, the number of participating students and the amount of feedback given to students vary (Fig. 3).
The test data consist of: two sets of student inquiries of 10 questions each, with response rates of 100% (A) and 50% (B) respectively (see Fig. 3); a questionnaire in the form of a written interview administered to six teachers (A-F); and a project report in study programs C and D based on an inquiry (Einstabland & Letnes, 2010). Students in studies E and F provided informal feedback to their teachers (this appears in the interviews).
Results

Testing was completed in various study programs during the academic year 2009/2010. Thematically, the feedback collected in the material centered on the following five topics: 1) quality and clarity, 2) efficiency, 3) learning dividend and future-oriented feedback, 4) availability and proximity, and 5) technological threshold and co-worker support.
1. Quality and clarity

One unique challenge in working with student feedback is to avoid being vague, unclear and confusing (Crawford, 1992; Goldstein & Kohls, 2002; Zamel, 1985). Video feedback offers good opportunities to improve this situation through the multimodal interaction between picture, sound and text (Brick & Holmes, 2008). Students’ replies speak directly to this problem, as demonstrated by the following statements:
- The great advantage that screen capture has over written feedback is that screen capture gives a much clearer impression of what is being commented upon and assessed.
- We know exactly what you are commenting on… A good combination because we see the text highlighted AT THE SAME TIME while we are listening to what you are saying.
When the teacher corrects German grammar (Fig. 4), where precision and great detail are essential, this interaction becomes especially visible. The teacher’s experiences are in this case identical with those of the students (Einstabland & Letnes, 2010), and may be summarized in the following manner:
- Our experiences and feedback from students also indicate that assignment correction via JING produces a high degree of learning effect and is thus good educational practice. We believe that this is not least because the combination of sound and picture produces an effect that is more than the sum of its parts. (ibid., p. 11)
Figure 4. The screen dump shows the preliminary work done before making a screen capture recording in connection with written assignments on German grammar.
A further point brought up by students is the opportunity to repeat playing the recording and the influence this form of feedback has on their memory, especially in cases where the feedback sessions are numerous, detailed and perhaps complicated. Students can replay the comments and advice given over and over if necessary (see Neutzsky-Wulff, 2009; Stannard, 2007b):
- It’s easier for me to remember the feedback later when I listen and watch it than when I just read it.
- Haven’t used it (the recording) very much up to now, but will probably go through it very carefully before the exam.
These responses correspond with other similar tests (Stannard, 2006, 2007b). Students and teachers express that the quality and precision of feedback sessions increase, and the feedback content is regarded as meaningful, providing a distinct starting point for change and improvement.
2. Efficiency

Russell Stannard (2007b, 2008a, 2008b) asserts that a two-minute video recording contains approximately 400 words, the equivalent of a sheet of standard notepaper filled with written text. The recordings made with JING are limited to 5 minutes each, and usually the entire recording time is used. This means that students receive feedback equaling 2 ½ sheets of notepaper in each recording. Converted, this implies that the education teachers (A) in this test gave 100 sheets of notepaper worth of feedback to their student groups. Stannard (ibid.) claims that a conservative estimate shows this to be at least four times more feedback than students would usually receive in written form.
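Stannard’s estimate can be checked with a quick back-of-the-envelope calculation using only the figures quoted above:

```python
# Back-of-the-envelope check of the efficiency estimate, using only the
# figures given in the text.
WORDS_PER_TWO_MIN = 400   # Stannard: a two-minute recording ~ 400 spoken words
RECORDING_MIN = 5         # JING's per-recording time limit
WORDS_PER_SHEET = 400     # one sheet of standard notepaper ~ 400 written words

words_per_recording = WORDS_PER_TWO_MIN / 2 * RECORDING_MIN   # 1000 words
sheets_per_recording = words_per_recording / WORDS_PER_SHEET  # 2.5 sheets

# Education teachers (A): 10 groups x 4 feedback sessions = 40 recordings
recordings_a = 10 * 4
sheets_a = recordings_a * sheets_per_recording
print(sheets_a)  # -> 100.0 sheets of notepaper worth of feedback
```

This reproduces the 2 ½ sheets per recording and the 100 sheets of feedback attributed to the education teachers (A).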
Teachers confirm that the extent of feedback increases at the same time as the amount of work decreases. However, this may vary depending on the kind of written work receiving comments and the technological threshold teachers may have to overcome. The most enthusiastic comments are nonetheless in line with Lumadue and Fish (2010), who claim that this technology represents nothing less than a paradigm shift in higher education:
- We save an enormous amount of time! (…) Give good feedback using one quarter of the time usually spent doing this, and it doesn’t get misunderstood to the same degree that written feedback does. (Teacher F)
- I’m very satisfied, and the students like this a lot. It’s much easier to give feedback through a recording than having to write everything down. (Teacher E)
- Digital feedback is more time-cost efficient… It also challenges me to be clear when making precise points, something that prohibits any kind of digression or comments that are beside the point. (Teacher D)
- Students were given better and more varied feedback, but I spent more time on the feedback than if I had just given them an answer key. (Teacher B)
- It’s probably less work for the person making corrections than by writing down the answers, and this will make it possible to give better feedback.
- JING is a very good replacement for traditional advising, and if it’s not possible to complete an advisory session in real time, this is absolutely the best alternative.
3. Learning dividend and future-oriented feedback

As presented in Figure 3, video response was initially based on students’ logs, factual assignments, reflection notes and project work completed in six subject areas. It is therefore important to ask about the degree of significance students assign to this feedback form with respect to their academic work and learning dividend. For example, do they receive the necessary feed forward (Hattie & Timperley, 2007), and are there differences between the various subjects and texts that receive a response? Concerning the question of whether this feedback form has been significant for learning dividends, students’ comments are mostly positive - yet general in nature.
- … we have gotten feedback that we are on the right track. It has been easier to stimulate thought processes concerning other more appropriate methods while at the same time becoming easier to accept the praise we received.
- Difficult to describe the learning dividend, but it is educational. We understand what we have done well and what was not as successful.
- The feedback has been positive for my learning dividend. I get to study the assignment once again in a very thorough manner, which in turn helps me to relearn the material while getting tips about how I can do better on the next assignment.
- I don’t really think it’s had any influence on my academic learning dividend, but as a study method this is without doubt the best form of feedback I’ve ever tried.
- Would say that written feedback would be equally positive IF we had gotten the same amount of information as we get from you through this screen capture method.
- It works especially well in grammar. (Teacher C)
- I’ve experienced that feedback quality has improved. (Teacher A)
- Students received better and more varied feedback… (Teacher B)
4. Availability and proximity

The basis for the test was, in accordance with Mayer (2001), combining verbal and visual feedback in “dual coding”, thereby improving and increasing the efficiency of asynchronous communication with students. However, the tests revealed side effects that proved to be more significant than we could have predicted. For instance, regarding the question “Has this feedback had any significance for achieving closer proximity to your teacher?”, students’ statements were clear and unanimous, one aspect of which is the feeling of being “seen” by the teacher:
- We feel like you’re almost in the same room as us when we look at the screen.
- Yes. The feeling of being “seen”. Personal. Shows involvement. Feel that we students get to know the teacher a little better by hearing his voice.
- You learn more about how they look at logs. As mentioned, the feedback feels more personal, so it feels like we get closer contact with our teacher. You more easily gain trust in your teacher. It creates a longed-for closeness to the teacher, and you feel that the work you do isn’t in vain, that you get proper feedback on the work that we’ve put a lot of time and effort into. You aren’t any longer just one face among 1,000 others at the university.
- You want to perform better since the teacher is putting so much effort into the feedback. Involvement is contagious.
- We get to take part in your reflection on our text. It’s easy to follow, thorough and personal.
- I feel that the feedback’s quality has improved, and that it is clearly more efficient… I have also gotten closer to my students and can be more personal and encouraging than by merely giving them written feedback. (Teacher A)
5. Technology and co-worker support

In answer to the question “Have you experienced any technical problems? (If so, what kind?)”, the students’ unanimous reply has been that the playback process has been free of problems. Any problems that have arisen have been connected with the recording and the quality of sound. Teachers relate similar experiences, stating that using the program requires a low technological threshold for both making recordings and distributing them.
All teachers see the screen capture program JING as simple and intuitive, and each discovered their own way of using the medium during their first test of it (see Fig. 3). Regarding the effort required to start working with JING, the teachers’ attitudes have been similar to those of teachers C and D:
- To summarize, we assert that JING is quite a user-friendly aid having a low threshold for both sender and receiver. Even though we do not have a great deal of experience with digital tools, we did not run into any great difficulties. Naturally, the sender (teacher) must invest a certain amount of time before using a “product” for the first time so that you will be able to defend the use of it with your students. However, looking ahead, this is time well spent. (Einstabland & Letnes, 2010, p. 10)
http://www.uia.no/no/div/sentre/pedagogisk_utviklingssenter/kursprogram, in association with a course in university pedagogics10 and through information published on the Internet11. However, perhaps the instructional form utilized most has been informal demonstrations and follow-up given from co-worker to co-worker, according to the teachers. This impression corresponds with findings in Norway Opening Universities’ publication ICT Monitor (Wilhelmsen et al., 2009), a status report on ICT use in higher education confirming that 7 out of 10 teaching staff members receive help from co-workers. At the University of Umeå12, the use of co-workers has been taken one step further by hiring “ICT coaches”, individuals who give teaching staff members individual support and advice on the educational and technical questions associated with Internet-based instruction and with the use of information and communication technology in their teaching. This approach appears to be a good format, as it implies that digital (user) competency is constructed around a common understanding of the potential this digital tool represents in relation to the teacher’s own subject.
|10||http://www.uia.no/no/div/sentre/pedagogisk_utviklingssenter/kursprogram/uniped-kurs|
|11||http://www.uia.no/no/div/sentre/pedagogisk_utviklingssenter/ressursbank/skjermopptak|
|12||http://www.umu.se/om-universitetet/aktuellt/aktum/aktum-2010-nr-1/ikt-coach-stottar/|
Conclusion and further research
The purpose of both the questions posed in this article and the testing has been to acquire knowledge about the significance video feedback has for students’ written work. In summary, students signal the following quite clearly: video comments are regarded as more precise and nuanced than written feedback, and as such give students greater inspiration and motivation for future academic work. Similarly, learning dividends increase, as do opportunities for processing feedback and achieving a closer relationship with the course teacher. Teachers report the same experiences, emphasizing that this working form simplifies and increases the efficiency of feedback work while at the same time making it possible to achieve greater precision and quality.
The study’s findings are strengthened by the sample’s broad scope and variation in format, as well as by teachers’ and students’ shared perception of the learning dividend. As a feedback aid, video feedback has in the present study confirmed the results of similar studies (Brick & Holmes, 2008; Lumadue & Fish, 2010; Stannard, 2008a, 2007b). There is reason to assert that this digital working form can provide an important supplement to written comments and student-teacher meetings. The degree to which video feedback changes the time spent on giving feedback to students, as claimed by Lumadue & Fish (2010), remains to be seen. However, it appears that screen capture technology can provide a new direction in feedback work, and that a particular challenge is to create a multimodal “blend” that corresponds with each field’s unique characteristics in addition to the forms of feedback traditionally utilized.
However, certain reservations should be noted: in this type of project there may be an element of a Hawthorne effect (Franke & Kaul, 1978), which can threaten the findings’ validity. One feature of the study that highlights this problem is that students report feeling close to their teacher while being unable to define their learning dividends in specific terms. Moreover, Kluger and DeNisi (1996) claim that we can only to a moderate degree expect positive effects of feedback interventions. Drawing on comprehensive meta-analyses, they point out that, on the contrary, we should expect negative effects in 33-38% of cases. Still, no feedback system works for everyone (ibid.), and any effective feedback intervention requires a suitable combination of feedback form, task type and individual differences (ibid.).
Finally, several unanswered questions remain that future empirical research should address. For example: is it correct that the learning dividends gained through video feedback are greater than those gained through written feedback? Are there students with particular learning styles who profit from video feedback? How can one optimize the interplay between the visual and the verbal? What is a good combination of feedback forms within various academic disciplines?
References
Andersen, S. S. (1997). Case-studier og generalisering: forskningsstrategi og design. Bergen: Fagbokforlaget.
Attwell, G., Pachler, N. & Pimmer, C. (2010). Towards Work-Based Mobile Learning: What We Can Learn from the Fields of Work-Based Learning and Mobile Learning. International Journal of Mobile and Blended Learning, 2(4), 1-18.
Bakhtin, M. M. (1998). Spørsmålet om talegenrane. Bergen: Ariadne Forlag.
Bartholomae, D. (1980). The study of error. College Composition and Communication, 31, 253-269.
Biggs, J. B. (1999). Teaching for Quality Learning at University: What the Student Does. Philadelphia: Society for Research into Higher Education & Open University Press.
Brick, B. & Holmes, J. (2008). Using screen capture software for student feedback: towards a methodology. IADIS International Conference on Cognition and Exploratory Learning in the Digital Age, (CELDA).
Caracelli, V. J. & Greene, J. C. (1997). Crafting mixed-method evaluation designs. In: J. C. Greene & V. J. Caracelli (eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms. New Directions for Program Evaluation, No. 74. San Francisco: Jossey-Bass.
Clark, R. & Mayer, R. (2003). e-Learning and the science of instruction: proven guidelines for consumers and designers of multimedia learning. San Francisco: Jossey-Bass Publishers.
Cohen, A. D. & Cavalcanti, M. C. (1990). Feedback on compositions: Teacher and student verbal reports. In: Kroll (Ed.), Second Language Writing. Cambridge, UK: Cambridge University Press.
Crawford, J. (1992). Student Response to Feedback Strategies in an English for Academic Purposes Program. Australian Review of Applied Linguistics, 15(2), 45-62.
Debuse, J., Lawley, M. & Shibl, R. (2007). The implementation of an automated assessment feedback and quality assurance system for ICT courses. Journal of Information Systems Education, 18(4), 491-502.
Dysthe, O. & Engelsen, K. S. (2007). Variations in higher education portfolio assessment. A nationwide survey in Norway across institutions and disciplines. Paper presented at the Norgesuniversitetet conference on e-portfolio, 20 September 2007, Gardermoen, Norway.
Einstabland, Å. L. & Letnes, O. (2010). Om bruk av JING som verktøy for tilbakemelding på skriftlige innleveringer. (Utviklingsprosjekt, Pedagogisk Utviklingssenter) Kristiansand: Universitetet i Agder.
Flyvbjerg, B. (1991). Rationalitet og magt. Bind 1: Det konkretes videnskab. Odense: Akademisk Forlag.
Franke, R.H. & Kaul, J. D. (1978). The Hawthorne experiments: First statistical interpretation. American Sociological Review, 43(5), 623-643.
Fregeau, L. A. (1999). Preparing ESL students for college writing: Two case studies. The Internet TESL Journal, 5(10). Available at http://iteslj.org/Articles/Fregeau-CollegeWriting.html (accessed 30 November 2011)
Gardner, S. (2004). Knock-on effects of mode change on academic discourse. Journal of English for Academic Purposes, 3(1), 23-38.
Gibbs, G. & Simpson, C. (2004). Does your assessment support your students’ learning? Centre for Higher Education Practice, London: Open University Press.
Glaser, B. G. & Strauss, A. L. (1967). The discovery of grounded theory: strategies for qualitative research. Chicago: Aldine.
Glover, C. & Brown, E. (2006). Written feedback for students: too much, too detailed or too incomprehensible to be effective? Bioscience Education e-Journal, 7. Available at http://www.bioscience.heacademy.ac.uk/journal/vol7/beej-7-3.pdf (accessed 30 November 2011)
Goldstein, L. & Kohls, R. (2002). Writing, commenting and revising: The relationship between teacher feedback and student revision online. Paper presented at the American Association of Applied Linguistics Conference, 6–9 April 2002, Salt Lake City, Utah.
Hattie, J. (2009). The black box of tertiary assessment: an impending revolution. In: Meyer, L. H. et al. (eds.), Tertiary Assessment & Higher Education Student Outcomes: Policy, Practice & Research. Wellington, New Zealand: Ako Aotearoa, 259-275. Available at http://akoaotearoa.ac.nz/ako-aotearoa/ako-aotearoa/resources/pages/black-box-tertiary-assessment-impending-revolution (accessed 30 November 2011)
Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Hartley, J. (2004). Case study research. In Catherine Cassell & Gillian Symon (Eds.), Essential guide to qualitative methods in organizational research. London: Sage.
Higgins, R., Hartley, P. & Skelton, A. (2002). The conscientious consumer. Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1), 53-64.
Holmes, K. & Papageorgiou, G. (2009). Good, bad and insufficient: Students’ expectations, perceptions and uses of feedback. Journal of Hospitality, Leisure, Sport & Tourism Education, 8(1), 85-96.
Hyland, F. (2003). Focusing on form: Student engagement with teacher feedback. System, 31(2), 217–230.
Hyland, K. & Hyland, F. (2006). Interpersonal aspects of response: Constructing and interpreting teacher written feedback. In: Hyland, K. & Hyland, F. Feedback in Second Language Writing. Cambridge: CUP.
Kluger, A. N. & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
Lankshear, C. & Knobel, M. (2006). Digital Literacy and Digital Literacies: Policy, Pedagogy and Research Considerations for Education. Nordic Journal of Digital Literacy, 1(1), 12–24.
Lumadue, R. & Fish, W. (2010). A Technologically Based Approach to Providing Quality Feedback to Students: A Paradigm Shift for the 21st Century, Academic Leadership, the online journal. 8(1). Available at http://www.academicleadership.org/article/A_Technologically_Based_Approach_to_Providing_Quality_Feedback_to_Students_A_Paradigm_Shift_for_the_21st_Century (accessed 30 November 2011)
Mathisen, P. & Wergeland, B. (2009). Web-basert bilde-lyd mentoring - Pedagogiske muligheter og utfordringer. Nordic Journal of Digital Literacy, 4(3-4), 173-188.
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
McLaughlin, P., Kerr, W. & Howie, K. (2007). Fuller, richer feedback, more easily delivered, using tablet PCs. Proceedings for the 11th International Conference on Computer Aided Assessment, Loughborough University, Loughborough, 327-340.
Neutzsky-Wulff, A. C. (2009). Nærhed i fjernundervisning – om brugen af audio & video i sprogundervisning. Tidsskriftet Læring & Medier 2(2) Available at http://ojs.statsbiblioteket.dk/index.php/lom/article/view/3915/3423 (accessed 30 November 2011)
Paivio, A. (1986). Mental Representations. New York: Oxford University Press.
Pepper, M. B. & Pathak, S. (2008). Classroom contribution: What do students perceive as fair assessment. Journal of Education for Business, 83(6), 360-367.
Poulos, A. & Mahoney, M. J. (2008). Effectiveness of feedback: The students’ perspective. Assessment and Evaluation in Higher Education, 33(2), 143-154.
Raaheim, A. (2011). Læring og undervisning. Bergen: Fagbokforlaget.
Sadler, R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.
Skjervheim, H. (1996). Deltakar og tilskodar og andre essays. Oslo: Aschehoug.
Stannard, R. (2007a). Goodbye to lecture notes. The Guardian, Tuesday 18 September 2007. Available at http://www.guardian.co.uk/education/2007/sep/18/link.link24 (accessed 30 November 2011).
Stannard, R. (2007b) Using screen capture software in student feedback. HEA English Subject Centre Commissioned Case Studies. Available at http://www.english.heacademy.ac.uk/explore/publications/casestudies/technology/camtasia.php (accessed 30 November 2011).
Stannard, R. (2008a). A new direction in feedback. Humanizing Language Teaching, 10(6). Available at http://www.hltmag.co.uk/dec08/mart04.htm#C1 (accessed 30 November 2011).
Stannard, R. (2008b). Using Innovative Technology to Improve the Feedback Experience for Students. JISC, RSC - Regional Support Centre. Available at http://www.rsc-london.ac.uk/fileadmin/docs/case_studies/Innovation_and_Student_Feedback.pdf (accessed 30 November 2011).
Summerville, J. & Johnson, C. S. (2006). Rural creativity: A study of district mandated online professional development. Journal of Technology and Teacher Education, 14(2), 347-361.
Sugita, Y. (2006). The impact of teachers’ comment types on students’ revision. ELT Journal, 60(1), 34-41. Available at http://eltj.oxfordjournals.org/content/60/1/34.full (accessed 30 November 2011).
Varis, T. (2008). European and global approaches to digital literacy. Nordic Journal of Digital Literacy, 3(1), 53–60.
Ware, P. D. & Warschauer, M. (2006). Electronic feedback and second language writing. In: Hyland, K. & Hyland, F. (eds.), Feedback in Second Language Writing. Cambridge: CUP.
Wilhelmsen, J., Ørnes, H., Kristiansen, T. & Breivik, J. (2009). Digitale utfordringer i høyere utdanning. Norgesuniversitetets IKT-monitor. 1/2009. Available at http://norgesuniversitetet.no/files/NUV-rapp_1_09_Digitale_utfordringer.pdf (accessed 30 November 2011).
Wolsey, T. D. (2008). Efficacy of instructor feedback on written work in an online program. International Journal on ELearning, 7(2), 311-329.
Yin, R. K. (2003). Case study Research: Design and Methods. Thousand Oaks, California: Sage.
Zamel, V. (1985). Responding to student writing. TESOL Quarterly, 19, 79-101.
|1||LMS - Learning Management System. In this project Fronter was used as a Learning Management System organizing users and managing e-learning content.|
|2||St.meld. nr. 27 (2000-2001)|
|3||Video feedback is understood in this context to be a video recording of the action taking place on the screen combined with an audio recording of the teacher’s comments regarding what is being shown. This is usually described as video screen capture or video screencast.|
|8||St.meld. nr. 27 (2000-2001)|