In this time of upheaval, with the COVID-19 pandemic and other societal crises globally, it is especially important that we can rely on, and have trust in, research. The COVID-19 pandemic has in many ways shown that we have neither the experience nor a sufficient current state of knowledge upon which to build our research, since this is an extraordinary global crisis. It is therefore more important than ever to understand the distinction between incremental and fundamental changes in teaching practice (Cuban, Kirkpatrick & Peck 2001) as a consequence of COVID-19, where home schooling, remote teaching, and other ways of trying to continue business as usual seem to have become (more or less) mainstream. This has underlined the need for both quantitative and qualitative approaches within educational technology research (ETR), for methodological pluralism, and for a continuous awareness of privacy and data protection (and the GDPR). In retrospect, we can observe how education was affected by the COVID-19 pandemic in March 2020 (school/university lockdowns, which contributed to reduced contagion), and we can see how the pandemic, the health sector and society in general were affected by education in August 2020 (the semester start, with preliminary evidence pointing towards more contagion among higher education students). This creates epistemic uncertainty around what kind of measures will best deal with this invisible enemy (1-metre social distancing, face masks, quarantine, lockdowns, washing of hands, etc.), and we can see that health technology is one of the measures being applied to help with this situation. However, experience from ETR very often shows how challenging it can be to implement educational technology (ET) in educational settings, and we can see that the health sector is experiencing some of the same problems.
For example, Tamim et al.’s (2015) systematic review found that educational frameworks or research-based evidence were deficient in some of the large-scale international implementations of ET, which therefore too often resulted in failure. One recent example of how complicated the implementation of technology in the health sector can be is the “Smittestopp” app, launched on 16 April 2020 to trace and prevent COVID-19 contagion in the Norwegian population (Norwegian Institute of Public Health (NIPH) 2020a). Some 45 million Norwegian kroner were invested in the app, but by 15 June 2020 it was stopped due to problems with its functionality and the General Data Protection Regulation (Norwegian Institute of Public Health 2020b). Even if we all wished that this app could be successful and helpful in tracing and preventing COVID-19 in the population, the lesson learned from this implementation of technology seems to be that privacy-enhancing health technology (like educational technology) is complicated to develop and implement. The conclusion so far from NIPH is that such app implementation shows that “(…) it can be demanding to get such new methods in place. They must be developed, tested and improved” (Norwegian Institute of Public Health 2020b, p. 1). This is in line with experiences from ETR, where educational implementation attempts often require a current state of knowledge as a starting point, sufficient piloting, and quality assurance with both quantitative and qualitative approaches and methodological pluralism, in order to avoid unexpected pitfalls. Despite all good intentions, we can still ask whether educational research and ETR recognize quantitative and qualitative methods as equally important, and whether the strengths of one method can compensate for the weaknesses of another in methodological pluralism such as mixed-method research. Educational history, and e.g. Gage’s (1989) article “The paradigm wars and their aftermath”, has shown that achieving methodological pluralism might be easier said than done. The question is whether the digital era, COVID-19, and other factors are gradually changing some of the “paradigm wars” of the past, or whether things remain as they always have been.

One of the reasons for this focus is that in the digital era it is reasonable to assume that we find a spectrum of approaches, methodological pluralism and methods applied within ETR – both because technology provides us with new opportunities for methodological innovation, data collection, etc., and because ETR (like other research areas) needs a variety of research designs, methodologies and data sources to strengthen the current state of knowledge within its area. This, and the fact that almost all educational bachelor, master and doctoral study programs in the Nordic countries and elsewhere have (more or less) equal ECTS in quantitative and qualitative methods, should strengthen the possibilities for both quantitative and qualitative publications and methodological pluralism (MP) within ETR (and other educational research areas). However, even if many educational technology researchers, national research councils and authorities, educational study programs, editors of scientific journals, practitioners, etc., endorse in principle a certain balance between quantitative and qualitative research in scientific journals, as well as methodological pluralism, there is reason to ask if there is a gap between the arena of formulation (e.g. national research policy priorities (and funding) of methodological pluralism), the arena of transformation (e.g. academic institutions’ priorities concerning methodological pluralism) and the arena of realization (e.g. research groups’, researchers’ and editors’ priorities towards methodological pluralism) (Lindensjö and Lundgren 2001; Linde 2016). In this editorial, I will highlight some of the aspects which might influence this gap.

Addressing the methodological imbalance within educational technology research

Internationally, we can see that both journal editors and educational technology researchers are concerned about the imbalance between the number of quantitative and qualitative articles published in research journals (Lopez, Valenzuela, Nussbaum & Tsai 2015; Twining, Heller, Nussbaum & Tsai 2017; Perez-Sanagustín, Nussbaum, Hilliger, Alario-Hoyos, Heller, Twining et al. 2017). In addition, it seems as though there is room for improvement when it comes to methodological pluralism in ETR. The concept of methodological pluralism means that one endorses and values a variety of research designs, methodologies, methods and data collection techniques in ETR, implying that no single research method is “better” or superior to any other per se – it depends on the research questions we are striving to answer (Hesse-Biber & Johnson 2015). In this way, one avoids methodological pitfalls of the type “If your only tool is a hammer, then every problem looks like a nail”. In practice, this means that methodological errors cannot be revealed by simple empirical testing, since the errors themselves govern which empirical data should apply. An awareness of the distinction between object- and subject-ontological research phenomena is therefore essential. Often methodological pluralism concerns the use of multiple methods to analyse the same research question(s) in a single study (but it can also be attached to pluralistic publication patterns in journals, pluralistic research programs and pluralistic research disciplines). One of the reasons for these concerns is that educational research and ETR often deal with the field of practice and practitioners in school settings, and methodological pluralism in such research therefore often involves a focus on context, diversity, inclusion, and empowerment.
Ontologically and epistemologically, this calls for an awareness of avoiding object-ontology being applied to obviously subject-ontological research phenomena (and vice versa), which may create a risk of measuring something other than what one actually intends to measure. Due to school lockdowns, quarantines, home schooling, loneliness, social distancing, insecurity, vulnerability, etc. among pupils and students, the COVID-19 situation has strengthened the importance of such foci and of methodological pluralism.

A quite strong indicator for examining whether a balance between quantitative and qualitative publications and methodological pluralism is being realized in practice is the publication patterns of scientific articles in scientific journals over time. In the following section I will take a glimpse into some of the publication patterns concerning these issues.

Firstly, a comparison of the Google Scholar h5-index for educational technology journals from 2014–2018 by Bond, Zawacki-Richter & Nichols (2019) found that the four top-ranked educational technology journals are Computers & Education, British Journal of Educational Technology, International Review of Research in Open and Distributed Learning, and Internet and Higher Education. Both this review and several of the latest Journal Citation Reports from Clarivate Analytics show that Computers & Education has dominated the ETR area for many years and is number 31 of the 263 leading journals within Education & Educational Research in general (Clarivate Analytics 2019). On Computers & Education’s own initiative, three essential articles (Lopez, Valenzuela, Nussbaum & Tsai 2015; Perez-Sanagustín et al. 2017; Twining, Heller, Nussbaum & Tsai 2017) address whether there is an imbalance between the number of quantitative and qualitative articles published in highly ranked research journals. In their review of Computers & Education from 2011 to 2015, Perez-Sanagustín et al. (2017) find that:

There is a preponderance of quantitative research within Computers & Education. Only 55 of the 352 papers [15%] about ICT in schools published in Computers & Education between 2011 and 2015 inclusive adopted a qualitative approach . . . We need to redress the balance between quantitative and qualitative research . . .

In addition, a systematic review by Lai & Bower (2019) of research designs within educational technology shows that among 365 papers published in Computers and Education between 2015 and 2017, only 4.6% applied “pure” qualitative methods, while 10.7% were case studies.

The review “Educational technology research trends from 2002 to 2014”, published in Scientometrics by Baydas et al. (2015), found that quantitative studies dominate in ETR journals and in the 1255 papers examined. Ivanović & Ho (2019) find that the most cited articles within educational research are published in top educational journals, which seem to have a bias towards publishing quantitative studies. Avenier & Thomas (2015) find that very few qualitative studies are published among the ten most recognized scientific journals in their research field.

Several of the abovementioned reviews, addressing whether there is an imbalance between the number of quantitative and qualitative articles published in ETR journals, seem to indicate that there is a certain imbalance concerning this issue internationally – especially in the highest ranked journals.

However, when it comes to methodological pluralism, Perez-Sanagustín et al. (2017) found that 34% of the articles from 2011–2015 employed mixed methods, while Lai and Bower (2019) found that only 5.5% of the papers in Computers and Education from 2015–2017 were mixed. Baydas et al. (2015) found that 12.7% of the 1255 articles in their review of ETR journals used mixed methods. Bond, Zawacki-Richter & Nichols (2019) do not provide any explicit information in their review about the degree of balance/imbalance between quantitative and qualitative studies, or about methodological pluralism, in BJET, and state that “future research will further explore how contributions to BJET have changed over time, by analysing the types of articles published, as well as the methodologies used” (p. 40). However, Baydas et al. (2015) found that 40.3% of the papers in their review of BJET from 2002–2014 were quantitative studies, 27.8% qualitative studies, and 12.8% mixed-method.

If we take a glimpse into some of the Nordic educational journals and ETR journals, e.g. in Idunn and those included in the evaluation of Norwegian educational research (DAMVAD Analytics 2017), the journals that are most frequently used for publishing academic educational research seem to have a qualitative profile. However, the international educational journal among these with the highest impact factor, a level 2 journal, the Scandinavian Journal of Educational Research, seems to have another profile, with a bias towards quantitative studies. In Table 1 we can see how both the balance/imbalance and methodological pluralism have been realized from 2006–2020 in the Nordic Journal of Digital Literacy (NJDL).

Table 1

An overview of papers published in Nordic Journal of Digital Literacy from 2006–2020 (Skaar & Krumsvik 2020).

Research method                         Percent of papers
Case study                                  21.35 %
Theoretical article                         17.71 %
Ethnographic study                          14.06 %
                                            10.42 %
                                             9.90 %
                                             7.29 %
Literature review                            4.69 %
Survey / Interviews                          4.17 %
Action research                              2.60 %
                                             2.08 %
Surveys / Ethnographic                       1.56 %
Survey / Interviews / Observation            1.56 %
Position paper                               1.56 %
Survey / Observation                         1.04 %
Total                                      100.00 %

Research methods (summary)              Percent of papers
Qualitative                                 78.13 %
Mixed methods                               11.46 %
Quantitative                                10.42 %
Total                                      100.00 %

Even if there are nuances in this overview, it nevertheless shows a bias towards qualitative papers (78.13%) published in NJDL since it was launched in 2006. There are of course many reasons for this publication pattern, and it might reflect the findings of the evaluations of Norwegian educational research, which show that there has been a bias towards practice-oriented and qualitative research published within the educational area over the last fifteen years (NRC 2004; NRC 2006; Gilje 2010; DAMVAD Analytics 2017; NRC 2019). This might be related to the threefold mandate educational researchers often follow, which has a practice-oriented profile, as the international committee that recently evaluated Norwegian educational research describes it: “Users in the education sector. The education sector includes both researchers and users (…) Users in central government bodies and agencies (…) Other users” (NRC 2019, p. 72). It may also be related to what the NRC found in 2006 concerning methodology in educational research: “The methodological competence in the use of quantitative methods have been particularly deficient in recent years” (NRC 2006, p. 19–20). However, this seems to have changed gradually over the last 10–15 years, if we look at the Norwegian Publication Indicator, which shows that Norwegian educational researchers increasingly publish in internationally renowned academic journals – and increasingly on the basis of quantitative methods.

Concerning NJDL more specifically, the majority of research published in NJDL since 2006 has been practice-oriented, which is natural within the educational technology field, with its close connection to educational science and teacher education research. This tendency towards practice-oriented research has probably also had a certain impact on what kinds of research designs are applied (as in other educational research). On the one hand, this is natural, since the area is a “moving target”, with continual rapid development of new technologies that often calls for explorative and ethnographic research designs (14.06%), theoretical articles (17.71%) and case studies (21.35%). In addition, the period since 2006 has seen the implementation of several national strategies concerning educational digitalization, as well as of new “deeply entrenched structures” (Cuban, Kirkpatrick & Peck 2001) in the national curriculum and the general plans for teacher education concerning digital competence and professional digital competence in Norway, which might have increased the tendency towards explorative research among ETR researchers. Such research designs are often applied when researchers seek a more thorough understanding of the distinction between incremental and fundamental changes in teaching practice in schools, and where the current state of knowledge remains limited and insufficiently captures this “new terrain”.

However, if we focus on the last five years (2016–2020), the publication pattern changes to a certain degree: 24.4% are mixed-method, 59.2% qualitative and 16.4% quantitative articles. This might indicate a narrowing of the imbalance between quantitative and qualitative papers published in NJDL. Nevertheless, as with other ETR journals, NJDL needs to monitor this development and address any potential imbalance between quantitative and qualitative research published in NJDL in the years to come. It is also important to keep in mind that we still need more primary studies of qualitative research and meta-syntheses of qualitative research, while at the same time increasing the proportion of quantitative studies. As mentioned in earlier editorials, we also need more experimental studies and RCT studies (and basic research) about educational technology, because such research has a lot to say about how to set up systematic reviews and meta-analyses (which are very often the most cited articles within the current state of knowledge in every research field). For example, if a subject discipline does not have a tradition of carrying out intervention studies, effect studies, RCT studies and so on as primary studies, it is hard to carry out meta-analyses, and also difficult to carry out systematic reviews (which are acknowledged as two of the most cited and important types of literature review within research communities and for informing the policy level). Since educational sciences, teacher education and educational technology research do not have the same long traditions as other disciplines (e.g. medicine and psychology) for such research designs, this seems to be one of the reasons why there are fewer meta-analyses and systematic reviews within these areas. In NJDL we can see that 4.69% of the papers were literature reviews (and none were meta-analyses), while Baydas et al. (2015) found in their review of 1255 ETR papers from 2002 to 2014 that 21.1% were literature reviews. However, only 0.8% were meta-analyses. Even if this seems to have been changing gradually within the ETR area over the last five years, it is important to focus on these kinds of “research on research” articles in the coming years. When it comes to methodological pluralism, we can see that 11.46% of the papers in NJDL from 2006–2020 were mixed methods, but this has increased to 24.4% for 2016–2020, which is more in line with international trends.

There can, of course, be many explanations for the gap between intention and reality when it comes to a balance between quantitative and qualitative approaches, and methodological pluralism, in some of the top international journals within ETR. However, Twining, Heller, Nussbaum and Tsai (2017) mention that internationally this can be attached to how different countries assess research quality, where metrics and numerical data are often applied, which seem to favour quantitative and ‘objectivist’ approaches. When it comes to endorsing and prioritising methodological pluralism in research, this can reflect the research traditions, the “paradigm wars” (Gage 1989) and the paradigm stances upon which different research areas mainly rely. In Jennifer Greene’s (2007) overview we find a span between different paradigm stances, where in the Purist stance it is not possible to mix methodological paradigms in the same study. This might affect the perception of endorsing and prioritising methodological pluralism in research designs. On the other hand, the Dialectic stance and the Alternative paradigm stance (Greene 2007), with dialectic and pragmatic (and partly critical realist) underpinnings, find it natural to mix methodological paradigms within the same study, creating a more fertile ground for endorsing and prioritising methodological pluralism. In addition, there are of course several other aspects which influence this area, and it is not within the scope of this editorial to examine all of these, but rather to highlight one of the approaches that attempts to narrow the gap between the arenas of formulation, transformation and realization when it comes to methodological pluralism – the mixed-method approach.

Mixed-method research – a Trojan horse for positivism or a promising methodological pluralism approach?

We find traces of dichotomies and tensions between quantitative and qualitative approaches as far back as ancient times, where e.g. Aristotle tried to bridge the gap and pioneered “the golden mean” and an early methodological pluralism. However, from a critical point of view one can ask why methodological pluralism (MP) and mixed-method research (MMR) should be considered within ETR. Pearce (2015) seems to capture the essence of this rationale:

At times we aim to explore and discover, and at other times we aim to test and confirm. Always operating from an inductive standpoint runs the risk of continually suggesting new and creative theory with no systematic assessment of when and where theory holds beyond the local setting. Using only deductive procedures severely limits the possibility that new discoveries are made or previously unconsidered explanations for social behavior are obtained (Pearce, 2015, p. 46).

Methodological pluralism’s “core message” is that the strengths of one method can thus compensate for the weaknesses of another – e.g. quantitative data can show the strength of associations, while qualitative findings show the nature of those associations (Fetters et al. 2013). Both MP and MMR bear similarities to triangulation, but triangulation is not a method per se and is first and foremost applied to enhance the validity of a study. As Greene notes: “The domain of mixed methods is richer and broader than even an expansive definition of triangulation” (Greene 2015, p. 749). MMR, as the third research paradigm or third methodological movement (Johnson & Onwuegbuzie 2004; Teddlie & Tashakkori 2009), integrates both methods as part of the research design in order to fulfil the abovementioned aims with the complementary strengths of the two methods in the same study. Today such designs are often labelled mixed-method research (Johnson et al. 2007) or mixed research (Johnson and Christensen 2017). However, based on previous experiences from multiple methods and mixed methods with a certain dominance of quantitative research, critics have asked whether such designs are still “A Trojan Horse for positivism” (Giddings & Grant 2007) or “Positivism dressed in drag” (Giddings 2006). While this might have been a problem some decades ago, an awareness of such possibilities seems to be present in the abovementioned third methodological movement, within which paradigm differences also exist. Some designs are close to a quantitative paradigm (QUAN + qual, often called explanatory designs), where the studies are dominated by quantitative data. Others are close to a qualitative paradigm, with most of their data from qualitative sources (QUAL + quan, often called exploratory designs). Yet others maintain equal status between the quantitative and qualitative strands (QUAN + QUAL).
MMR is also often marked by an awareness of the point of integration of the two strands. Fetters et al. (2013) mention that this can be achieved through 1. integration at the study design level, 2. integration at the methods level, 3. integration at the interpretation and reporting level, and 4. “fit” of data integration. Schoonenboom and Johnson (2017) mention seven primary design dimensions that complement Fetters et al. (2013): purposes, theoretical drive, timing, point of integration, typological vs. interactive design approaches, planned or emerging design, and complexity. Such design dimensions increase the possibility of coherence in MMR and the likelihood that the methodology applied aligns with the research question, the paradigm stances and the data analysis.

Another typical MMR trait is that “in mixed methods studies, research questions drive the methods used” (Onwuegbuzie and Leech 2006, p. 477). For example, in purely mixed-method designs, in which quantitative and qualitative elements are equally integrated, the research question has both qualitative and quantitative components, reflecting the research question’s alignment with the ontological and epistemological underpinnings and its coherence with the methodology. In an MMR case study (Fetters et al. 2013), the qualitative research questions can reflect a social constructivist paradigm (Stake 1995; Merriam 2009), while the quantitative research question reflects a post-positivist viewpoint (Yin 2012; Flyvbjerg 2011; Hyett et al. 2014). These research questions can also be nested together, as long as they align with the methodology and the paradigm stances (Greene 2007). Such MMR case studies are often based on a cumulative data analysis process drawing on both quantitative and qualitative data sources. In this way it is possible to identify and check for diversity versus uniformity in the data material, which makes it possible to avoid potential biases overlooked earlier in the data analysis process. Such checks for supporting as well as negative evidence aim to increase the internal generalizability across participants and methods as a whole in the MMR case study, in order to avoid the claim that data have been cherry-picked only to support interpretations (of which qualitative research is sometimes accused) (Maxwell 2010).

Two examples can illustrate the methodological pluralism of MMR: how the strengths of one method can compensate for the weaknesses of another, and how quantitative data can show the strength of associations while qualitative findings show the nature of those associations (Fetters et al. 2013). The first is the study “Digital competence and digital inequality in upper secondary school. A mixed-method study” (Krumsvik et al. 2020), with the research question “Is there a connection between pupils’ social background, grades and digital competence in upper secondary school, and how do school leaders and teachers perceive this relationship?”. In this explorative, sequential mixed-method design, “sequential” means that the different phases build on each other. In such MMR designs, “the researcher first collects and analyses qualitative data, and these findings inform subsequent quantitative data collection” (Fetters et al. 2013, p. 2137). This implies a form of integration through “building” (Fetters et al. 2013), which in this study means that the results from the qualitative interviews generated items for inclusion in the survey (N = 17 529). The quantitative data are intended to show the strength of these possible connections, while the qualitative data are intended to show their possible nature. Choosing this type of design implies interlinking the different qualitative and quantitative elements in both the design and the analyses, so that they can complement one another and provide a more holistic and pluralistic analysis of, and insight into, the research phenomenon.

The second example is mentioned by Judith Schoonenboom and Burke Johnson (2017, p. 20):

“Louise Marie Roth’s research (2006) (…), tackles gender inequality in the workplace. She was interested in understanding the gender-wage gap among highly performing Wall Street MBAs, who on the surface appeared to have the same “human capital” qualifications and were placed in high-ranking Wall Street securities firms as their first jobs. In addition, Roth wanted to understand the “structural factors” within the workplace setting that may contribute to the gender-wage gap and its persistence over time. [...] Roth conducted semi-structured interviews, nesting quantitative closed-ended questions into primarily qualitative in-depth interviews [...] In analyzing the quantitative data from her sample, she statistically considered all those factors that might legitimately account for gendered differences such as number of hours worked, any human capital differences, and so on. Her analysis of the quantitative data revealed the presence of a significant gender gap in wages that remained unexplained after controlling for any legitimate factors that might otherwise make a difference. [...] Quantitative findings showed the extent of the wage gap while providing numerical understanding of the disparity but did not provide her with an understanding of the specific processes within the workplace that might have contributed to the gender gap in wages. [...] Her respondents’ lived experiences over time [qualitative data] revealed the hidden inner structures of the workplace that consist of discriminatory organizational practices with regard to decision making in performance evaluations that are tightly tied to wage increases and promotion.”

Here we can see that analysing this from a quantitative (and object-ontological) perspective gives some insight into the research phenomenon, while the emic, qualitative (and subject-ontological) perspective contributes to a broader insight into the research phenomenon, illustrating how the strengths of one method can compensate for the weaknesses of another in methodological pluralism.
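The logic of Roth’s quantitative step, checking whether a wage gap persists once “legitimate” factors are held constant, can be caricatured as a stratified comparison. The sketch below is purely illustrative, with hypothetical data; Roth’s actual analysis applied statistical controls to her interview sample.

```python
from collections import defaultdict

def adjusted_gap(records):
    """Average male-female wage gap within strata of 'legitimate'
    factors (hours worked, qualification), weighted by stratum size.
    A gap that persists within strata is left unexplained by those
    factors -- the quantitative signal Roth then explored qualitatively."""
    strata = defaultdict(lambda: {"M": [], "F": []})
    for r in records:
        strata[(r["hours"], r["qual"])][r["gender"]].append(r["wage"])
    total, weighted = 0, 0.0
    for g in strata.values():
        if g["M"] and g["F"]:
            n = len(g["M"]) + len(g["F"])
            gap = sum(g["M"]) / len(g["M"]) - sum(g["F"]) / len(g["F"])
            weighted += n * gap
            total += n
    return weighted / total if total else 0.0

# Hypothetical sample: identical hours and qualifications, yet a gap.
sample = [
    {"gender": "M", "hours": 60, "qual": "MBA", "wage": 110},
    {"gender": "F", "hours": 60, "qual": "MBA", "wage": 95},
    {"gender": "M", "hours": 50, "qual": "MBA", "wage": 100},
    {"gender": "F", "hours": 50, "qual": "MBA", "wage": 90},
]
print(adjusted_gap(sample))  # → 12.5
```

The number quantifies the extent of the disparity; only the qualitative strand can reveal which workplace practices produce it.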


In this editorial I have focused on whether there is an imbalance between quantitative and qualitative publication patterns in ETR journals, and the importance of an awareness of this in the coming years. The editorial has also focused on whether there is a gap between the arena of formulation, the arena of transformation, and the arena of realization (Lindensjö and Lundgren 2001; Linde 2016) concerning methodological pluralism. It has not been within the scope of this editorial to discuss all the aspects which might influence this gap, but rather to focus on how the strengths of one method can compensate for the weaknesses of another. Methodological pluralism is also important in order to avoid what happened during the “paradigm wars” within educational research (Gage 1989), where object-ontology was sometimes applied to obviously subject-ontological research phenomena, and to reinforce the importance of recognizing that quantitative and qualitative methods are equally significant. An example of the opposite was experienced by the sociologist Renée Fox with her article in Science, “Medical Scientists in a Chateau” (Fox 1962). Her qualitative findings on problematic structures around the funding of medical research created an uproar among researchers in Belgium, who believed that her findings were not scientifically reliable because her article did not use statistics at all (Fox 1964). The saying that “when all you have is a hammer, all problems look like nails” (Kaplan 1964) captures some of the critique of subject-ontological research that Fox experienced from researchers with an object-ontological standpoint. Without an awareness of how the distinction between subject- and object-ontology lays the groundwork for our view of knowledge, one may, with only one single method, be in danger of measuring something other than what one actually intends to measure.
The distinction between incremental and fundamental changes in teaching practice as a consequence of COVID-19 can exemplify this, where methodological pluralism seems to be the more appropriate design to apply in order to capture both the etic and the emic perspectives. However, as mentioned in the title of this editorial, applying MMR to endorse methodological pluralism within educational technology research might be easier said than done, and one of the reasons might go back to the abovementioned “paradigm wars” (Gage 1989) of the past. If we really want to endorse methodological pluralism within ETR today and bridge the gap between the arena of formulation, the arena of transformation and the arena of realization, we need to appreciate the value of complementary methodological strengths within the academic staff and within research groups. For research groups to endorse methodological pluralism and MMR, they require members with both quantitative and qualitative competence in order to fulfil their ambitions in MMR publishing. In addition, research groups, as communities, need to dig thoroughly into what MMR actually is, critically examining the pros and cons of MMR and the requirements for a satisfactory MMR design. And in order to strengthen the possibilities for further endorsement of methodological pluralism within academia, it is important that bachelor, master and doctoral study programs also integrate ECTS within MMR. Even if quantitative and qualitative methods are highly valuable methodologies in themselves, the signs of the times also call for methodological pluralism, which seems especially valuable when seeking to understand the distinction between incremental and fundamental changes in teaching practice (Cuban, Kirkpatrick & Peck 2001) as a consequence of COVID-19, where home schooling, remote teaching, and other ways of trying to continue business as usual seem to have become (more or less) mainstream.
A recent and interesting quantitative study (N = 12 686) by Bakken, Pedersen, von Soest and Sletten (2020), published in August 2020, about remote teaching and home schooling in Oslo during COVID-19 shows that:

Almost half of the students were fairly or very satisfied with the teaching. Many believe that home schooling has worked well. But 61% thought they had learned less than they used to (…) The findings show a large spread in terms of how much and what types of teaching they have received (Bakken et al. 2020, p. 4)

In many ways this report is important and valuable for expanding the current state of knowledge about home schooling and remote teaching from a quantitative perspective. However, it also illustrates the need for more qualitative research knowledge on whether the degree of satisfaction and the large spread in types of teaching are attached to incremental or fundamental changes in teaching practices across the schools, or to other factors. Both quantitative and qualitative approaches are therefore valuable in separate studies for examining such phenomena, but this editorial has also addressed the need for methodological pluralism in times when we need both to explore and discover, and to test and confirm, within the same study.

References


Avenier, M.-J., & Thomas, C. (2015). Finding one’s way around various methods and guidelines for doing rigorous qualitative research: A comparison of four epistemological frameworks. Systemes d’Information et Management (French Journal of Management Information Systems), 20(1), 61–98.

Bakken, A., Pedersen, W., von Soest, T. & Sletten, M. A. (2020). Oslo-ungdom i koronatiden. En studie av ungdom under covid-19-pandemien. NOVA Rapport 12/20. Oslo: NOVA.

Baydas, O., Kucuk, S., Yilmaz, R. M., Aydemir, M., & Goktas, Y. (2015). Educational technology research trends from 2002 to 2014. Scientometrics, 105(1), 709–725.

Bond, M., Zawacki-Richter, O., & Nichols, M. (2019). Revisiting five decades of educational technology research: A content and authorship analysis of the British Journal of Educational Technology. British Journal of Educational Technology, 50(1), 12–63.

Clarivate Analytics (2019). Journal Citation Reports from Clarivate Analytics 2019. London: Clarivate Analytics.

Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High Access and Low Use of Technologies in High School Classrooms: Explaining an Apparent Paradox. American Educational Research Journal, 38(4), 813–834.

DAMVAD Analytics (2017). Education research in Norway. Statistical analysis of publications and research personnel. Copenhagen: DAMVAD Analytics.

Fetters, M. D., Curry, L. A., & Creswell, J. W. (2013). Achieving Integration in Mixed Methods Designs-Principles and Practices. Health Services Research, 48(6pt2), 2134–2156.

Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219–245.

Fox, R. (1962). Medical Scientists in a Château. Science, 136(3515), 476–483.

Fox, R. (1964). An American sociologist in the land of Belgian medical research. In P. E. Hammond (Ed.), Sociologists at work: The craft of social research (pp. 399–452). Garden City, NY: Doubleday.

Gage, N. L. (1989). The Paradigm Wars and Their Aftermath: A “Historical” Sketch of Research on Teaching since 1989. Educational Researcher, 18(7), 4–10.

Giddings, L. (2006). Mixed-methods research: Positivism dressed in drag? Journal of research in nursing, 11(3), 195–203.

Giddings, L. S., & Grant, B. M. (2007). A Trojan Horse for positivism? A critique of mixed methods research. Advances in Nursing Science, 30(1), 52–60.

Gilje, Ø. (2010). Praksis og poeng. Pedagogers publiseringsmønstre på 2000-tallet. Norsk Pedagogisk Tidsskrift, 6(94), 506–514.

Greene, J. (2007). Mixed Methods in Social Inquiry. San Francisco: Wiley.

Greene, J. (2015). The Emergence of Mixing Methods in the Field of Evaluation. Qualitative Health Research, 6(25), 746–750.

Hesse-Biber, S. & Johnson, B. (2015). The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (Oxford Library of Psychology). Oxford: Oxford University Press.

Hyett, N., Kenny, A., & Dickson-Swift, V. (2014). Methodology or method? A critical review of qualitative case study reports. International Journal of Qualitative Studies on Health and Well-Being, 9(1), 23606.

Ivanović, L., & Ho, Y.-S. (2019). Highly cited articles in the Education and Educational Research category in the Social Science Citation Index: A bibliometric analysis. Educational Review, 71(3), 277–286.

Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed Methods Research:  A Research Paradigm Whose Time Has Come. Educational Researcher, 33(7), 14–26.

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112–133.

Johnson, R. B., & Christensen, L. (2017). Educational Research: Quantitative, Qualitative, and Mixed Approaches. US: SAGE Publications Inc.

Kaplan, A. (1964). The Conduct of Inquiry: Methodology for Behavioral Science. San Francisco: Chandler Publishing.

Krumsvik, R. J., Jones, L., Eikeland, O.-J., Rokenes, F. M., Hoydal, K. & Solstad, S. (2020). Digital competence and digital inequality in upper secondary school: A mixed method study. In S. Doff & J. Pfingsthorn (Eds.), Media Meets Diversity @ School. Wie kann Lernen und Lehren in der digitalen Welt unter den Vorzeichen von Diversität gelingen? (pp. 215–236). Trier: WVT Wissenschaftlicher Verlag.

Lai, J. W. M. & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education, 133, 27–42.

Linde, G. (2016). Det ska ni veta! En introduktion till läroplansteori, Lund: Studentlitteratur.

Lindensjö, B., & Lundgren, U. P. (2000). Utbildningsreformer och politisk styrning. Stockholm: HLS Förlag

Lopez, X., Valenzuela, J., Nussbaum, M., & Tsai, C.-C. (2015). Some recommendations for the reporting of quantitative studies. Computers & Education, 91, 106–110.

Norwegian Institute of Public Health (2020a). Smittestopp – ny app fra Folkehelseinstituttet. Oslo: Folkehelseinstituttet. Retrieved 10.08.2020 from:

Norwegian Institute of Public Health (2020b). FHI stopper all innsamling av data i Smittestopp. Retrieved 10.08.2020 from:

Maxwell, J. (2010). Using Numbers in Qualitative Research. Qualitative Inquiry 16(6), 475 –482.

Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass

Onwuegbuzie, A. J., & Leech, N. L. (2006). Linking Research Questions to Mixed Methods Data Analysis Procedures. The Qualitative Report, 11(3), 474–498.

Pearce, L. (2015). Thinking Outside the Q Boxes: Further Motivating a Mixed Research Perspective. In S. Hesse-Biber & B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (Oxford Library of Psychology) (pp. 42–56). Oxford: Oxford University Press.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Tamim, R. M., Borokhovski, E., Pickup, D., Bernard, R. M., & El Saadi, L. (2015). Large-Scale, Government-Supported Educational Tablet Initiatives. British Columbia: Commonwealth of Learning.

The Norwegian Research Council (2004). Norsk pedagogisk forskning. En evaluering av forskningen ved utvalgte universiteter og høgskoler. Oslo: NRC

The Norwegian Research Council (2006). En nasjonal strategi for norsk pedagogisk forskning. Oppfølgingsutvalgets anbefalinger etter Forskningsrådets evaluering i 2004. Oslo: NRC.

The Norwegian Research Council (2019). Evaluation of Norwegian education research. Report from the international expert committee. Final report, 22 February 2018. Oslo: NRC.

Pérez-Sanagustín, M., Nussbaum, M., Hilliger, I., Alario-Hoyos, C., Heller, R. S., Twining, P., & Tsai, C.-C. (2017). Research on ICT in K-12 schools – A review of experimental and survey-based studies in Computers & Education 2011 to 2015. Computers & Education, 104, A1–A15.

Schoonenboom, J., & Johnson, R. B. (2017). How to Construct a Mixed Methods Research Design. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(2), 107–131.

Skaar, Ø.O. & Krumsvik, R.J. (2020). An overview of papers published in Nordic Journal of Digital Literacy from 2006–2020. Unpublished manuscript.

Teddlie, C., & Tashakkori, A. (2009). Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Approaches in the Social and Behavioral Sciences. London: Sage.

Twining, P., Heller, R. S., Nussbaum, M., & Tsai, C.-C. (2017). Some guidance on conducting and reporting qualitative studies. Computers & Education, 106, A1–A9.

Yin R. K. (2009). Case study research: Design and methods. 4th ed. Thousand Oaks, CA: Sage.