
Exploring Nature through Virtual Experimentation



Postdoc, University of Gothenburg, Sweden. annika.lantz-andersson@ped.gu.se



Ph.D. Candidate, University of Gothenburg, Sweden. emma.petersson@gu.se



Professor, University of Gothenburg, Sweden. roger.saljo@ped.gu.se

In the present article, a study of the use of a virtual lab in environmental science teaching is reported. The lab was used as part of regular instruction; the idea was to provide a context for learning about experimentation as a research method. The study builds on a sample of 80 of 511 students, and uses pre- and post-test data on students’ insights into concepts and procedures relevant for designing an experiment in environmental science. The results show that students discovered some principles of how to organize an experiment. A majority of the students appropriated some of the terminology and procedures relevant for organizing experiments. However, the findings also point to limitations in how students were able to reason about experimentation. A major problem for the students was to understand the role an experiment plays in resolving an issue. Such insights do not emerge from using the virtual lab per se, but rather from realizing the role an experiment plays as part of a scientific study of a problem.

Keywords: virtual experiments, science learning, science literacy

Introduction

The background of this study is an interest in two related issues. The first concerns the problem of how to organize learning in times of rapid changes in the knowledge base. The second concerns how digital tools – virtual labs – may support learning and understanding of basic modes of scientific reasoning in environmental science. The connection between the two issues lies in the ways in which experimentation may be seen as a generative practice for producing knowledge. Recognizing what an experiment is, and how it is conducted, is key to understanding critical principles for how to generate knowledge. Given the rapid development of digital media, it seems fruitful to ask if virtual experimentation may contribute to learning such generative skills. However, our interest is in what can be achieved within normal educational practices. Thus, we are not focussing on testing sophisticated digital tools in contexts where media experts and other extra resources are available, as is commonly the case in research on digital tools in classrooms (cf. Ludvigsen & Mørch, 2010). Rather, our interest is to scrutinize signs of learning in settings where a virtual lab is offered as an optional resource for students to use when learning about a specific environmental issue: ocean acidification.

Learning in knowledge intensive societies

At present there is an unprecedented expansion of human knowledge following large investments in science and research in most countries. Already a hundred years ago, John Dewey (1966)¹ pointed to the problems of how to organize instruction so as to accommodate to the increasing production of knowledge. One of his reflections concerned the difficulties that this development posed for schools when preparing students for their future work and life in a democratic society undergoing rapid change. In 1916 he argued that “industry at the present time undergoes rapid and abrupt changes through the evolution of new inventions. New industries spring up, and old ones are revolutionized.” (1966, p. 119). As a consequence, “an attempt to train for too specific a mode of efficiency defeats its own purpose. When the occupation changes its methods, such individuals are left behind with even less ability to readjust themselves than if they had a less definite training” (loc. cit.). Thus, the increase in knowledge produced cannot be met by making education more specialized or by tailoring it too tightly to current conditions.

What Dewey observed about the difficulties of preparing young children for rapidly changing social conditions and technological shifts is a relevant premise for addressing contemporary issues in education as well. Metaphors such as the information or knowledge society have been invented to point to the dynamics characterizing “fast capitalism” (Gee, Hull, & Lankshear, 1996) in a globalized world where production of information has reached levels that were hard to imagine even a few decades ago. Immense numbers of scientific publications pour out findings, observations and claims that, in principle, would be relevant for education to take note of. New fields, such as environmental science and the life sciences, emerge. But incorporating all these developments is not possible; education cannot continuously accommodate to new inputs at the pace at which they are currently produced; rather, it has to exercise a high degree of selectivity and consider how to accommodate to new circumstances brought about by the intense knowledge production.

To deal with the problem of how to handle this situation of a rapidly increasing knowledge base, Dewey proposed that learning should take place through inquiry. Inquiry as a pedagogical concept has many sides to it, and it is a cornerstone of Dewey’s pragmatist approach to knowledge, learning and interaction with the world. The definition Dewey (1938, p. 108) provides is quite complex and implies that inquiry “is the controlled or directed transformation of an indeterminate situation” into one that is comprehensible as a “unified whole.” Thus, inquiry is an “operation activated in response to doubt” (Talisse, 2002, p. 76), which, in turn, generates an insight or solution of a problem. Inquiry, accordingly, implies that the interaction with the environment is guided by a question, something that the individual qua learner is wondering about; it is when we are in what Dewey refers to as an indeterminate situation that we engage in inquiry in order to transform the problem encountered into something that we can grasp and act on.

For Dewey inquiry is a characteristic of science as a human activity; science is the “conduct of inquiry as inquiry” (Dewey & Bentley, 1949, p. 238), i.e. the continued search for indeterminate situations that may be transformed into something that we understand. This makes science a model for how learning can be organized. If students learn how scientists formulate questions and how they study them, they will develop an insight into the nature of scientific knowledge and its character at a more general level.

Learning through virtual experimentation

Experimentation is important in many fields of research. Learning about the logic of how experiments are conducted therefore qualifies as a significant constituent in the development of science literacy. Doing experiments involves understanding a particular language with mediating terms and concepts such as sample, control group, variable and so on (Lemke, 1990). However, appropriating such terms is not enough; a deeper insight into experimentation implies that one learns how to organize such specific knowledge generating practices. One must be familiar with how experiments may be designed in order to give valid answers to specific problems. Such knowledge has conceptual, practical and performative elements, and the learner has to appropriate a range of insights necessary for structuring empirical studies in expected manners (Ault & Dodick, 2010).

In recent years, a large number of resources for performing virtual experiments have been produced. Virtual labs, in many areas and at all levels of education, are available online. There is also intense technical development, in which major players in science, such as NASA,² science museums, universities and other institutions, take an active part. On the Internet, one can perform experiments in various branches of physics,³ chemistry,⁴ life sciences⁵ and other fields.

The research on the implementation of such resources for teaching has primarily focused on the design of virtual tools (cf. Furberg, 2010). The basic aim of these studies is to test hypotheses relating to the claim that virtual tools “offer the potential to improve learning” (Ramasundaram et al., 2005, p. 22), since they familiarize students with working methods that resemble methodologies practised by scientists. Thus, virtual tools “include learning from observation, developing hypotheses to explain observations and testing of hypotheses with datasets” (ibid., p. 23). For instance, Ramasundaram and colleagues (2005) developed an environmental virtual field laboratory (EVFL) as a complement to non-virtual fieldtrips. The idea behind the EVFL is to mimic a traditional fieldtrip through 3D animations, focus questions and simulations related to the environmental properties of flatwood landscapes in Florida. The authors argue that EVFLs enhance learning by offering instructional opportunities that are not available in non-virtual labs. Similar arguments are presented by Heermann and Fuhrmann (2000), who argue that virtual labs have the potential to improve learning and increase student motivation. Claims made in the literature are that virtual tools are less time-consuming, more flexible, clean, rapid, cost-effective and safe, and that they open up types of experimentation that otherwise might not be possible for students to engage in (Dalgarno et al., 2009; Shim et al., 2003; Zacharia, 2008). Using virtual tools in educational settings is believed to provide opportunities for inquiry learning and for the learning of scientifically relevant modes of working (Bell, T. et al., 2010; Shim et al., 2003). In a recent literature review on the uses of ICT in environmental education (EE), Fauville et al. (2013) conclude that the combination of EE and ICT in the classroom challenges long-established teaching traditions and opens up a wide range of instructional opportunities. Fauville et al. argue that there is a rich variety of digital tools and applications, but far less research on what such resources imply for student learning. It is argued that such tools make it possible to overcome budget, time and safety constraints by giving students possibilities to virtually conduct experiments that otherwise cannot be run in schools (Fauville et al., 2013).

However, as is well known from attempts to introduce digital technologies in school, the world of teachers and students is very different from that of designers (e.g. Arnseth & Ludvigsen, 2006; Lantz-Andersson, Linderoth & Säljö, 2009). As argued by Krange and Ludvigsen (2009), “to improve students’ knowledge constructions, it is not enough – nor in principle possible – to perfect the design of the technology” (p. 268). In research, too, the focus is often on presenting new technologies and making claims about their advantages, rather than on studying their concrete use and the consequences for learning processes and outcomes. Only relatively few studies analyse how students reason when using a virtual laboratory environment and critically attend to the pros and cons (cf. Chen, 2010).

Following this line of reasoning, virtual experiments introduce new practices that imply a remediation in the Vygotskian sense (Wertsch, 2007; Vygotsky, 1978). Experiences are made in new ways, and the virtual context does not mimic the traditional hands-on experiment in the science lab. The virtual environment as part of school work must be understood in terms of its own set of affordances. Also, and as we have already alluded to, new technologies introduce their own problems when it comes to implementation in the classroom (Cuban, 1986).

The study

The present study is part of a project that seeks to explore how, and if, students profit from working with a virtual lab. The project also includes extensive video documentation of students while working with the virtual tool (to be reported on). The issue we address here concerns to what extent students appropriate the fundamentals of what doing experiments implies. The focus is on scrutinizing students’ written answers to identify how they formulate what characterizes an experiment and elaborate on how an experiment may be designed in order to provide information relevant for a problem. In other words, is it possible to identify signs of learning in students’ ways of picking up concepts and modes of reasoning relating to how to conduct experiments after engaging with a virtual lab?

The study has been carried out as part of a bi-national collaboration between schools in the USA and Sweden on issues of climate change and habitat preservation. In this case study we have only used empirical material from schools in the USA. The empirical material is naturalistic in the sense that the classroom activities did not include any interventions or participation from researchers; rather, the classroom activities involving the virtual lab were part of a regular instructional setting. We would like to stress that in this particular study we do not have access to data about how students used the lab in the classroom situation; rather, the analysis focuses on exploring possible outcomes. However, the strength of this material is that it contains documentation of learning outcomes of over 500 students working with a virtual lab in a naturalistic instructional setting. As far as we have found, there is no study in the literature with a similar dataset.

Setting and participants

The students worked on issues relating to ocean acidification and had access to a virtual lab called Acid Ocean Virtual Lab (AOVL). The virtual lab was integrated into the regular classroom activities, and the teachers of each class organized the teaching as they saw fit. Four teachers and altogether 511 students (aged 12 to 18) participated in the study. The age distribution must be taken into account when interpreting the results. The research data were generated through a pre-test and a post-test given after a period of one to two weeks; 469 of the 511 students took part in the post-test, where the same questions were given again. In the lessons between the pre- and post-test, the students worked independently with the AOVL during one marine science lesson. The teachers did not cover any part of ocean acidification before the activity was presented, nor did they intervene in the students’ activities. Thus, the point of the present study is to see if any traces of learning through a digital tool may be seen in this environment, where the tool is offered as a resource for learning.

The analysis builds on a sample of 80 students.⁶ These students were randomly selected from the group that took part in both the pre- and the post-test. The sample was limited in order to keep the analysis of the empirical material manageable.

Acid Ocean Virtual Lab

In order to understand the logic of the study, a brief presentation of the AOVL is necessary. The AOVL is a virtual lab where students get an opportunity to study acidification of the ocean and its impact on the growth of sea urchin larvae. It consists of three parts that the students attend to and use: (a) information regarding basic facts about ocean acidification; (b) a lab session; and (c) a measurement exercise and information about the consequences of ocean acidification for marine organisms. The lab is designed to mimic a ‘real’ lab setting with equipment such as beakers, pipettes etc. (Figure 1).

In the virtual lab students perform activities such as setting up replicate cultures, feeding the larvae, making water changes, and observing the changes in growth of the sea urchin larvae over time. In the third part, the students measure the growth of samples of larvae from water with different pH levels and compare them. The outcome of the experiment is based on statistical data from authentic scientific research.

Figure 1. Screenshot of the lab session in Acid Ocean Virtual Lab

Problem to be solved by students

The assignment given to the students before and after their use of the virtual lab is a question that serves as a target of our analysis of the impact of the AOVL activity. The problem was formulated as follows:

You are an environmental scientist who is hired to complete an environmental impact report for a proposed project. Tropical Fisheries of Hawaii plans to open a fish hatchery on the Luau River, and the river opens to a bay with a large coral reef. Biologists are concerned that water discharge from the hatchery could impact the pH of the river and the bay. What sort of an experiment could you do to see if a change in pH might have an effect on the growth of the coral?

Apart from this particular task, all other questions in the pre- and post-test are multiple-choice. The task thus implies suggesting an informative experiment that would provide relevant information about the consequences for the corals of a change in pH in the river and the bay. The task is quite demanding, and it should be noted that the students were required to do this in writing.

Analysis of data

The idea behind the study is to analyse to what extent students begin to appropriate (Rogoff, 1990, p. 193ff; Wertsch, 1998, p. 32ff) a language and modes of reasoning which emulate the ideas and practices of how to do experiments. The task is quite complicated, since it implies that students have some understanding of what an experiment is in a more precise sense. As Gyllenpalm, Wickman & Holmgren (2010) show, the term experiment is used in different ways in classrooms, and often in a quite vague sense as referring to any manipulation of objects or as a synonym to trying and testing something. Here the task is one of designing an experiment in the scientific sense.

The analysis thus has to be commensurate with the multidimensionality of the task and our analytical interests. To analyse if, and in that case how, students’ reasoning changed between the two occasions, the following data on the pre- and post-tests were attended to:

  1. The use of scientific terms

  2. The nature of the reasoning engaged in when responding to the question

We will now briefly explain these indicators.

A) Analysis of use of scientific terms

As we have already pointed out, learning how to carry out experiments implies appropriating a certain terminology. Part of the analysis therefore involved examining students’ use of experimentally relevant terms, before and after their use of the AOVL. The keywords selected were:

pH

Acid/basic/neutral

Sample

Test/measure/examine/observe

Over time/before and after

Control group

Control

Environment

Compare

Each student’s use of the selected keywords in the pre- and post-tests was counted. One keyword could appear more than once in an answer. The terms selected are considered central when it comes to understanding the nature of the particular experiment to be designed by the students. For instance, the terms pH, acid, basic and neutral are terms that are important in relation to the experiment that the students are to outline, because the problem concerns water quality and a change in pH. Other important and experimentally relevant terms are: sample, test, measure, examine, observe, over time, before and after, control group, control, environment and compare. Also, the choice of these terms was dictated by findings in previous research in the area of science literacy (e.g. Lemke, 1990) as well as by scrutinizing the terminology used in the AOVL.
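To make the counting procedure concrete, the following minimal sketch shows one way such keyword counts could be obtained from students’ written answers. It is not the instrument used in the study; the regular expressions, the grouping of variants and the case-insensitive matching are our assumptions for illustration.

```python
import re

# Keyword groups corresponding to the list above. Each group is matched by a set
# of regular expressions; every occurrence in an answer is counted (one keyword
# could appear more than once). The exact variants are assumptions.
KEYWORDS = {
    "pH": [r"\bph\b"],
    "acid/basic/neutral": [r"\bacid(ic)?\b", r"\bbasic\b", r"\bneutral\b"],
    "sample": [r"\bsamples?\b"],
    "test/measure/examine/observe": [r"\btest(s|ed|ing)?\b", r"\bmeasur\w*",
                                     r"\bexamin\w*", r"\bobserv\w*"],
    "over time/before and after": [r"\bover time\b", r"\bbefore and after\b"],
    "control group": [r"\bcontrol groups?\b"],
    "control": [r"\bcontrol\b(?!\s+group)"],   # 'control' not followed by 'group'
    "environment": [r"\benvironments?\b"],
    "compare": [r"\bcompar\w*"],
}

def count_keywords(answer: str) -> dict:
    """Count occurrences of each keyword group in one written answer."""
    text = answer.lower()
    return {group: sum(len(re.findall(pattern, text)) for pattern in patterns)
            for group, patterns in KEYWORDS.items()}

# Example with an invented answer:
example = "I would take a water sample, test the pH over time and compare the samples."
print(count_keywords(example))
```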

B) Analysis of student reasoning

The second level of our analysis concerns the manners in which students attempted to outline an experiment that could shed light on the problem of the possible impact of the fish hatchery on acidity and the growth of the coral. This is thus an issue of searching for signs of appropriation of a specific mode of reasoning that involves combining semiotic tools in a relevant manner. The analysis of the students’ answers resulted in a hierarchical description of learning outcomes, ranging from not providing any suggestion about how to proceed to giving a functional account of how an experiment could address the issue. The five-level category system was empirically derived, meaning that the students’ answers on the pre- and post-tests formed the basis for describing and delimiting the categories. The category system is designed to describe how closely students’ answers to the problem approximate a description of a relevant experiment. The five levels are as follows:

Category 1. Don’t know/no answer

Category 2. Suggests solution to the problems with the water (but does not describe a study)

Category 3. Suggests testing the water (or pH or corals)

Category 4. Suggests testing the effects of water status on corals

Category 5. Outlines study/experiment

As can be seen, categories 3, 4 and 5 imply that the students suggest some kind of investigation to deal with the problem given, while categories 1 and 2 do not outline any study at all (cf. Hakkarainen, 2004, and Bell, P. & Linn, 2000, on the use of category systems for the analysis of student inquiry and argumentation in science education). The details will be presented in the next section.

Results

At a general level, the results yield evidence of an increasing familiarity with experimentation as a way of addressing an issue of the kind described in the problem. However, at the same time it is also obvious that the activities had quite varied implications for student reasoning.

Use of language of experimentation

The empirical material documents an increase in student uses of relevant distinctions in the context of doing experiments. More of the target terms were used on the post-test as is evident from Table 1.

Table 1. Use of keywords on pre- and post-test

Keywords                          Pre-test   Post-test
pH                                    68         88
Test/Measure/Examine/Observe          40         57
Sample                                26         44
Over time/Before and after             7         14
Acid/Basic/Neutral                    15         14
Control                                6         11
Compare                                8         10
Control group                          0          3
Environment                            2          2
Total                                172        243

The terms students used most frequently on the post-test were pH, test, measure and sample. The word pH can be found in the formulation of the question, which might have induced students to use it frequently. For other, potentially relevant, distinctions the differences between occasions are small. In statistical terms, the mean number of terms used increases significantly between occasions (from 2.2 to 3.0; p < .001, Wilcoxon signed-ranks test).
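For illustration, a paired non-parametric comparison of this kind can be run as in the following sketch. The per-student counts are invented placeholders, not the study’s data; only the type of test matches the analysis reported above.

```python
# Minimal sketch of a Wilcoxon signed-ranks test on paired keyword counts,
# i.e. the type of test reported above. The counts below are invented.
from scipy.stats import wilcoxon

pre_counts  = [2, 1, 3, 0, 2, 4, 2, 1, 3, 2]   # keyword count per student, pre-test
post_counts = [3, 2, 3, 1, 4, 4, 3, 2, 5, 3]   # keyword count per student, post-test

statistic, p_value = wilcoxon(pre_counts, post_counts)
print(f"W = {statistic}, p = {p_value:.4f}")
```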

Table 2 shows an example of how students, when answering the post-test, use their experience from the laboratory session to extend their scientific language.

Table 2. Examples of use of experimentally relevant distinctions

Student 1, pre-test: «I could take a sample of the water from the Luau River, and then test the small amount of water contacting the pH with the sample water and see what happens.» (Category 3)

Student 1, post-test: «You could take samples from the water before they build the hatchery. Then you could take a sample of the water after they build the hatchery (the water it might be, but not literally after they build the hatchery). Take a pH test with both of these samples and compare the results. This will tell you how much more pH there is. Then, you think about how does pH affect the coral – for example, if there is more pH, then the coral will die/live. So with the samples’ results and and your reasoning/thinking, you can tell what will happen.» (Category 3)

The student illustrates a move from a short, and rather cryptic, answer mentioning a few of the relevant terms to a more explicit account of how one could investigate the matter. On the post-test, terms such as test, pH, before and after, compare and sample appear as elements in an outline of some kind of research study, even though the student does not describe an experiment in a more precise sense.

Student reasoning

The most interesting aspect of the results is of course how the students managed to understand the logic of experimentation. The unit of analysis here, thus, is the reasoning they engaged in as they formulated their answers. The categories mentioned above are empirically derived and give an overview of the types of accounts that were given. The category system is hierarchical, i.e. the higher up in the system an answer is classified, the more elaborated it is, and the closer it approximates the design of a valid experiment.

Below the criteria for each category are described, and there are also a few empirical examples to illustrate prototypical answers.

Category 1. Don’t know/no answer

This category includes answers that give no indication of how to deal with the issue formulated in the question. Alternatively, the answer is just a one-word reply that in no obvious way connects to the question.

«Do something»⁷ (Pre)

«I don’t know» (Post)

Category 2. Suggests solution to problem with water (but does not describe a study)

Answers in this category suggest a solution to the problem with the water, should the hatchery be built. The students, however, are not answering the question posed about what kind of experiment could be designed to investigate the possible effects of the hatchery. Instead, they predict what will happen if the hatchery is built.

«Try to move the hatchery to a place a little farther away from the water» (Post).

Category 3. Suggests testing the water (or pH or corals)

Answers in the third category imply that one should perform one or more tests to settle the issue: a test of the water, of pH or of the corals. These suggestions about testing are given without any specification of what function the test would have in relation to the question. Answers including conceptions connected to pH, such as acid and basic, are also placed in this category. An example of an answer is:

«Acidic test» (Post)

Category 4. Suggests testing the effects of water status on corals

Answers in the fourth category explicitly suggest testing the effects of water status on corals. This implies testing the growth of corals in different water conditions, for example in water with different pH levels.

«Extract coral samples from the river bay and put them in different pH solutions and see if the solutions affect the coral.» (Pre)

Again there was a certain variation in how explicit the answers were when specifying how to carry out the experiment, and whether, for instance, the students explicitly used the term experiment in their account. But the important specification here is that students argue that the functional effects of water on corals should be examined.

Category 5. Outlines study/experiment

In the fifth category the answers describe/outline an empirical study/experiment. An example of such answers is:

«To observe the natural pH of the water, I would first gain a sample of that water and put it in six separate jars. I would then get a sample of water with a more acidic pH and also have that in six different jars. Then, I would take sample of the coral and put six samples in the natural pH of the water and then put six with a more acidic pH. I would let the coral sit in the water for a week so that the water could take affect on the coral. After this process, I would take the samples of coral and make six slides for them. I would put them under microscopes and note the size of each sample and note whether or not the more acidic pH had taken affect on the coral. With the six different samples of each, it would allow me to observe and make sure that the experiment was consistent rather than just chance of luck.» (Post)

These answers are thus more precise and specify how empirical observations in an experiment-like context could be made over time. The student texts may be more or less explicit, but the reasoning implies that an experiment which could provide insights into the effects of water acidity on corals is described. A less elaborate example is the following:

«An experiment that you could do is: get a few samples of coral. Put the coral in its normal environment with normal pH, put another coral in an acidic pH, and put a coral in a basic pH. Then observe to see how the coral grows and if it grows better or worse in which environment.» (Pre)

In this case, the participant argues that one should take «a few samples of coral» and put them in different environments. Then one should observe coral growth; thus, the argument is that one arranges a situation in which the environment is systematically varied in order to find out its effects on coral growth, which is treated as the outcome measure.

Learning in the context of a virtual lab

The above results indicate the variation in responses to the problem posed among the students. At a general level, the answers given on the second occasion are more elaborate and relevant, and include more of the expected terminology. A critical issue is of course whether it is possible to connect the answers given on the second occasion to the experiences of using the AOVL. However, the AOVL is the only activity of this kind that has taken place in the classroom.

The general shifts in the nature of the answers are presented in Figure 2. Of the 80 students in the sample, 35 gave answers that were classified in a higher category on the second occasion than on the pre-test. The most common shift is from Category 1 upwards.

Figure 2. Students’ change of category position

Twenty students answered «I don’t know» (or similar) on the pre-test. Four of them did not change category position on the post-test. Consequently, 16 students changed category upwards. Table 3 exemplifies (cf. Table 2) how students come up with more elaborate answers on the post-test.

Table 3. Change of category

Student 2, pre-test: (No answer) (Category 1)

Student 2, post-test: «An experiment that includes the measuring of coral after it have been exposed to different pH levels.» (Category 4)

Seven students moved to a category with a lower rank. Table 4 gives one such illustration, where Student 3 gives an account that is vaguer than on the first occasion.

Table 4. Change of category

Student 3, pre-test: «I would check the pH of the river and bay. Then I would see how different pH’s effect the growth of the coral.» (Category 4)

Student 3, post-test: «Check the pH.» (Category 3)

For 38 of the 80 students the answers were placed in the same category on both pre- and post-test. Most of these students’ answers were placed in Category 3 on both occasions. Pre- and post-test answers in Table 5 show an example of the apparent continuity in the way many of these students answered on the two occasions.

Table 5. No change of category position

Student 4, pre-test: «I would use pH paper and test what color it becomes.» (Category 3)

Student 4, post-test: «I would take some water and test the stuff with that pH paper thingy to see if there is a rise in the acidity.» (Category 3)

At a general level, the results give some indications that students improved their ability to use scientific terminology and produce a science-like narrative when reasoning about an environmental problem after interacting with the AOVL. The analysis of the students’ reasoning shows that 43.5 per cent of the students are categorized as giving a more elaborate answer on the second occasion, while 47.5 per cent keep to their original reasoning. Nine per cent of the students are classified at a lower level after working with the lab.
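As a concrete illustration of how proportions of this kind are obtained, the sketch below tallies upward, unchanged and downward moves from paired pre- and post-test category codes (1–5). The function is our own illustration; in the study, the input would be the coded answers of the 80 sampled students, whereas the example codes here are invented.

```python
def shift_summary(pre: list, post: list) -> dict:
    """Share of students whose post-test category is higher than, the same as,
    or lower than their pre-test category (categories coded 1-5)."""
    assert len(pre) == len(post), "one pre- and one post-test code per student"
    n = len(pre)
    up = sum(1 for a, b in zip(pre, post) if b > a)
    same = sum(1 for a, b in zip(pre, post) if b == a)
    down = n - up - same
    return {"up (%)": 100 * up / n,
            "same (%)": 100 * same / n,
            "down (%)": 100 * down / n}

# Example with invented codes for six students:
print(shift_summary(pre=[1, 3, 3, 2, 4, 1], post=[3, 3, 4, 2, 3, 1]))
```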

An interesting observation is that the wide age distribution does not seem to play any decisive role. The students in the older age group did not write more elaborate answers on either occasion. The younger age group (12–14) used more scientific terms on the pre- and post-tests than the oldest age group (17–18). However, further studies are needed in order to investigate this particular issue.

Signs of a learning trajectory

As the data material covers an instructional period, it is interesting to see if there are any signs of a learning trajectory in the sense that students become increasingly familiar with what it means to conduct an experiment. At a general level, there is no clear evidence that students easily transfer their experiences of doing a virtual experiment to producing a written answer in which they are to describe how to design a similar experiment. Already on the first occasion, most students realize that the situation depicted in the problem requires some kind of “test.” Suggestions given are that one should «test the water» or «test the corals». This type of explanation is the one described as characteristic of Category 3, which is the most frequent classification in the material.

A frequent answer on the post-test is to explicitly mention that one has to study the effects of water status on the corals. This may be seen as a further specification of the reasoning in Dewey’s sense (Dewey & Bentley, 1949), since causality is included. This is the essence of Category 4, where 35 per cent of the answers end up on the post-test. Few students, however, end up in Category 5, where the procedures of a relevant experimental design are specified. The learning trajectory of this sociocultural skill thus seems to imply a series of steps where one realizes that (a) one has to perform tests, (b) tests have to be informative and say something about how water quality/acidity affects corals, and, as a next step, that one is able to (c) describe a coherent experiment where factors are isolated and samples compared in such a manner that conclusions about the relationship between the acidity of the water and the growth of the corals may be drawn.

However, there is another layer to the development that is interesting. Many students seem to respond by suggesting that one should do tests or observations when the fish hatchery has been built. Consider the following answer.

«You could test the pH before the hatchery opened and then test the ph after the hatchery opened and see the difference. If the pH has gone up and the fish and the coral are suffering or dying then the hatchery would need to be shut down or make some changes» (Pre)

This, in our view, signals something important about how a problem of this kind is understood, and how students position themselves when responding. Many students react to the problem primarily as an issue of whether the hatchery should be allowed to be built or not, or, alternatively, they argue that if it is built and the water deteriorates, the hatchery should be closed down. Thus, they respond as concerned citizens facing an environmental problem, rather than as someone who is to investigate a problem and provide information relevant to taking a decision. In our opinion this is an interesting observation, since it signals that in order to address the question as a research issue, one has to position oneself differently than when reacting to it as an environmental problem.

Discussion

At a general level, the results show that it is possible to identify signs of learning in students’ ways of picking up concepts and modes of reasoning regarding how to conduct experimentation after working with the virtual lab. More than 40 per cent of the students’ answers are more informative on the second occasion. Their use of terminology relevant for describing experiments increases somewhat in absolute (although not in relative) terms, and this could be read as a sign that some of the distinctions that concern the activity of doing experiments have been noticed. On the second occasion, more students explicitly argue that one has to systematically test the effects of different types of water on the corals in order to provide an answer (Category 4), and there are more students who describe an experiment that would provide the information asked for (Category 5).

At the same time, and as might be expected, the results show that the effects are far from uniform. Some students seem to profit, but for a substantial proportion of the students there are no or very limited visible effects on their performance on the second occasion. In fact, almost 50 per cent of the students responded in approximately the same manner on the two occasions, and a small group (9 per cent) wrote less elaborate answers. Thus, a tool of this kind will not on its own do the job of making students understand experimentation. A teacher is very clearly needed in order to frame the activity. It is not enough to add technology of this kind to reach all students; rather, it has to be embedded in a systematic pedagogical arrangement where it fulfils a specific function.

As argued in the introduction, it is not reasonable to expect dramatic effects on student understanding of the principles of experimentation from such a relatively short experience. Also, it must be remembered that outlining a study that would provide the information asked for is much more demanding than responding to questions by giving definitions or providing factual information. To describe an experiment in writing may be seen as a performative engagement with learning materials that is not always expected of students. However, being able to outline an experiment could be regarded as a step in the process of understanding the nature of science and scientific knowledge, and how research results are produced and validated; and this is, as we have argued, paraphrasing Dewey (1966), different from learning about its end products only. Furthermore, when producing a narrative outlining a research study, one has to actively produce a model in which terms and information are organized in a systematic manner so as to be relevant for answering a question within a reasonably coherent “thematic pattern” with concepts that derive their power through their interconnectedness, as Lemke (1990) argues.

The most interesting result, in our opinion, is the finding that a significant difference between the students is how their answers are co-ordinated with the question asked, i.e. how students position themselves when answering. The problem is formulated as one of finding information relevant to whether or not the hatchery would have an effect on water acidity. When looking closely at a clear majority of the responses produced by students, one finds that they, in fact, were answering slightly different questions. Some of the answers argue that the hatchery should not be built, or, alternatively, that it should be built somewhere else. Others argue that the water might damage the corals. Many of the answers deal with the general relationships between water quality and effects on corals, but without contextualizing this as a matter of finding out by means of an experiment whether the hatchery should be built or not. Put differently, a major difference between the students concerns whether they (a) suggested a solution to the problem posed by outlining a study/an experiment that could resolve the issue ahead of building the hatchery, (b) engaged in analysing the problem of what would happen to the water if the hatchery was built, or (c) engaged in a more general reflection on the nature of the relationship between water quality and coral growth. The difference reflects whether students position themselves as analysts of a problem in search of information that would serve as part of a decision-making process, or whether they position themselves as citizens concerned about the environment. Differences in how students position themselves when answering such questions have also been shown in previous research (e.g. Murphy & McCormick, 1997).

Rather few of the students answer in a way that reveals an understanding of how a scientific study could provide direct information on the issue by specifying in advance what would happen if the hatchery was built. Thus, when responding to the problem posed, it is not enough to know about scientific concepts and experimentation; one must also realize how an experiment may address a particular concern, and what the role of a scientific inquiry could be for decision-making. If we connect these observations to the wider issue of understanding experimentation as an element of science literacy, our study shows that, at group level, some improvement was clearly made in understanding the logic of experimentation. However, the important issue of how an experiment may provide an answer to a problem of the kind presented cannot be understood by reference to the experimental method per se – whether performed virtually or in the traditional manner. It is not by practising how to do virtual experiments – in the sense of manipulating symbolic information on a screen or working in a traditional school science lab – that one learns how to bridge between a problem and the information that an experiment could provide on a specific issue. Being able to bridge between a problem and the experimental method – i.e. engaging in inquiry in Dewey’s terms – requires a sequence of experiences where students encounter several examples of how such transformations of a problem into a relevant experiment take place. This is a discursive skill of thinking within a particular thematic pattern (Lemke, 1990), where one appropriates a mode of reasoning that allows one to formulate a research problem rather than merely reacting to an environmental problem.

This study forms a foundation for further investigation of which types of activities evolve when students engage in virtual lab work, and to what extent they are engaged in inquiry – transforming an indeterminate situation into something that is understood. Analysing how students engage with the virtual lab, and how this tool structures the interaction between students and between students and teachers, would add interesting information to understanding the product – student knowledge. It is still an open question in what sense students perceive virtual lab activities as lab activities, and whether they, when engaging with such tools, construe experimentation as a distinct mode of generating knowledge. Most likely, interactive support from the teacher will continue to play a critical role in fostering such understanding.

Acknowledgements

The research reported has been funded by the Marcus and Amalia Wallenberg Foundation and the Swedish Research Council and conducted within the University of Gothenburg Learning and Media Technology Studio (LETStudio), and the Linnaeus Centre for Research on Learning, Interaction and Mediated Communication in Contemporary Society (LinCS).

References

Arnseth, H.C., & Ludvigsen, S. (2006). Approaching institutional contexts: systemic versus dialogic research in CSCL. International Journal of Computer-Supported Collaborative Learning, 1(2), 167–185.

Ault, C. R., Jr., & Dodick, J. (2010). Tracking the footprints puzzle: The problematic persistence of science-as-process in teaching the nature and culture of science. Science Education, 94(6), 1092–1122.

Bell, P., & Linn, M. C. (2000). Scientific arguments as learning artifacts: designing for learning from the web with KIE. International Journal of Science Education, 22(8), 797–817.

Bell, T., Urhahne, D., Schanze, S., & Ploetzner, R. (2010). Collaborative inquiry learning: Models, tools, and challenges. International Journal of Science Education, 32(3), 349–377.

Chen, S. (2010). The view of scientific inquiry conveyed by simulation-based virtual laboratories. Computers & Education, 55(3), 1123–1130.

Cuban, L. (1986). Teachers and machines: the classroom use of technology since 1920. New York, NY: Teachers College Press.

Dalgarno, B., Bishop, A., Adlong, W., & Bedgood, D. (2009). Effectiveness of a virtual laboratory as a preparatory resource for distance education chemistry students. Computers & Education, 53(3), 853–865.

Dewey, J., & Bentley, A. F. (1949). Knowing and the known. Boston, MA: Beacon Press.

Dewey, J. (1938). Logic: The theory of inquiry. New York, NY: Henry Holt.

Dewey, J. (1966). Democracy and education. New York, NY: The Free Press.

Fauville, G., Lantz-Andersson, A., & Säljö, R. (2013). ICT tools in environmental education – reviewing two newcomers to schools. Environmental Education Research. doi: 10.1080/135046

Furberg, A. (2010). Scientific inquiry in web-based learning environments. Exploring technological, epistemic and institutional aspects of students’ meaning making. Oslo: Oslo University Press.

Gee, J. P., Hull, G., & Lankshear, C. (1996). The new work order. Behind the language of the new capitalism. St Leonards, Australia: Allen & Unwin.

Gyllenpalm, J., Wickman, P-O., & Holmgren, S-O. (2010). Teachers’ language on scientific inquiry: Methods of teaching or methods of inquiry? International Journal of Science Education, 32(9), 1151–1172.

Hakkarainen, K. (2004). Pursuit of explanation within a computer-supported classroom. International Journal of Science Education, 26(8), 979–996.

Heermann, D. W., & Fuhrmann, T. T. (2000). Teaching physics in the virtual university: the mechanics toolkit. Computer Physics Communications, 127(1), 11–15.

Krange, I., & Ludvigsen, S. (2009). The historical and situated nature of design experiments – Implications for data analysis. Journal of Computer Assisted Learning, 25(3), 268–279.

Lantz-Andersson, A., Linderoth, J., & Säljö, R. (2009). What’s the problem? Meaning making and learning to do mathematical word problems in the context of digital tools. Instructional Science, 37(4), 325–343.

Lemke, J. L. (1990). Talking science: Language, learning, and values. Norwood, NJ: Ablex.

Ludvigsen, S., & Mørch, A. (2010). Computer-supported collaborative learning: Basic concepts, multiple perspectives, and emerging trends. In E. Baker, P. Peterson, & B. McGaw (Eds.), International encyclopedia of education (3rd ed.) (pp. 290–296). Oxford, England: Elsevier Science.

Murphy, P., & McCormick, R. (1997). Problem solving in science and technology education. Research in Science Education, 27(3), 461–481.

Ramasundaram, V., Grunwald, S., Mangeot, A., Comerford, N. B., & Bliss, C. M. (2005). Development of an environmental virtual field laboratory. Computers & Education, 45(1), 21–34.

Rogoff, B. (1990). Apprenticeship in thinking: cognitive development in social context. New York, NY: Oxford University Press.

Shim, K-C., Park, J-S., Kim, H-S., Kim, J-H., Park, Y-C., & Ryu, H-I. (2003). Application of virtual reality technology in biology education. Journal of Biological Education, 37(2), 71–74.

Talisse, R. (2002). Two concepts of inquiry. Philosophical Writings, 19 & 20, 69–81.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wertsch, J. V. (1998). Mind as action. New York, NY: Oxford University Press.

Wertsch, J. V. (2007). Mediation. In H. Daniels, M. Cole & J. Wertsch (Eds.), The Cambridge companion to Vygotsky (pp. 178–192). New York, NY: Cambridge University Press.

Zacharia, C. (2008). Comparing the influence of physical and virtual manipulatives in the context of the physics by inquiry curriculum: The case of undergraduate students’ conceptual understanding of heat and temperature. American Journal of Physics, 76(4), 425–430.

1 This was pointed out by Dewey already in his first version of Democracy and education from 1916 (Dewey, J. (1916). Democracy and education. An introduction to the philosophy of education. New York: Macmillan).
2 www.nasa.gov/offices/education/about/tech_prod_e_edu_overview.html
3 http://3d2f.com/tags/virtual/experiments/physics/; http://www.school.nd.ru/en/products/show.php?pid=18
4 http://www.chm.davidson.edu/vce/
5 http://www.hhmi.org/biointeractive/vlabs/
6 The analysis of the material implies that we were interested in how the students handled the problem of designing an experiment. There were 3 students in the original sample of 80 who in the post-test simply responded “The lab that we did in class” or similar. These students were excluded from the sample and 3 new students were randomly selected. The reason for this procedure is that although the answer is in some sense correct, the students are not responding to the assignment and it is hard to know exactly how they interpreted the problem.
7 Text in « » indicates an original quote from students’ writings. The illustrations here are taken from the entire sample of 469 students.
