Abstract:

This study concerns an initiative for recognising ICT competence situated in a framework of formal training at upper secondary level. Specific assessment methods were used, but the classification of knowledge and the criteria for assessment were those of formal training. Thus, there is a risk of excluding individual competence that falls outside the formal framework. The study also raises the question of whether recognition of prior learning should be a separate activity or an integrated part of the curriculum.

Keywords: Recognition of prior learning · digital competence · informal learning · upper secondary level

Introduction

Using information and communication technology (ICT) is part of the daily life of many people today. This means that a lot of competence in handling ICT is developed through informal learning, without any formal or non-formal training. Informal learning concerning ICT could take place in working life as well as at home, in voluntary work or even at school. Such informal learning might mean that knowledge and competence remain invisible, or at least undocumented, which could reduce their value in, for example, the employment market. The recognition of prior (informal) learning has been identified as one of the key areas in the improvement of adult learning (see e.g. OECD, 2003). ‘Assessing and giving credit for knowledge and skills acquired at work, in the home or community settings can ensure that adults do not waste time relearning what they already know’ (ibid., p. 11), and efforts are being made in different areas. This article analyses the possibilities and problems arising when trying to close the ‘digital divide’ by giving formal recognition to prior informal learning concerning ICT. The analysis is based on an ethnographic study of an initiative for recognition of prior learning (RPL) in the area of ICT competence in Swedish adult education. It focuses on the relationship between informal learning and the school system, as well as between prior and new learning, and on what types of knowledge and competence are actually recognised.

Digital competence and digital divide

Competence (or skills) related to ICT has been identified in lifelong learning policy as one of a number of general ‘key competences’, under the label of ‘digital competence’ (EC, 2004), which makes the present study of recognition of ICT/digital competence relevant. However, the field is defined in different ways. For example, the definition of ICT as a ‘fourth basic skill’ (alongside reading, writing and mathematics) focuses on fundamental skills, while concepts such as ICT literacy, digital literacy, digital bildung and digital competence imply a broader perspective (see Søby, 2003). ‘Digital competence’ seems to be an inclusive concept that could encompass the dimension of bildung as well as the more technical fundamental skills. For example, Alexandersson and Limberg (2006) mention four dimensions when defining this digital competence: the technological (handling ICT as a tool); the didactic (developing knowledge through ICT); the critical (becoming ‘a critical member of society through ICT’, ibid., p. 3); and, finally, the dimension of information literacy (‘the ability to seek and use information effectively in various situations’, ibid., p. 4, an ability not solely related to digital competence). These dimensions are referred to when discussing digital competence in this article.

Even if access to and use of ICT are extensive, there is a ‘digital divide’ between different groups, a divide that has social consequences in terms of inclusion in/exclusion from participation in central fields of society (Van Dijk, 2005) and in terms of competence. Van Dijk analyses this divide not as two-tiered but as tripartite, consisting of an information elite, a majority group that participates in the information society to a certain extent, and a group of ‘outsiders’. Furthermore, the divide is described in terms of different aspects of access: motivational, material/physical, skills and usage access. Even if practice, according to Van Dijk (2005), is more important than formal education when it comes to developing digital skills, it is still argued that informal learning is not enough – ‘[o]perational skills will remain incomplete when they are only learned by trial and error’ (ibid., p. 92). However, there is one aspect of this divide that is not discussed in detail by Van Dijk (ibid.): the possibilities of (at least partially) closing the divide between informal learning and formal education, between ‘incomplete’ skills and the potential of adult education. Informal learning is not necessarily ‘trial and error’, but might involve other, more sophisticated learning strategies that result in more sophisticated skills and competencies. Herein lies a potential for recognising prior informal learning in ways that make it easier to achieve formally accepted competence and credentials. This potential is a main argument for developing the recognition of ICT competence among individuals who lack formal documentation of their competence.

Recognition of prior learning

There is no common concept for the recognition of prior learning – the phenomenon has different names in different parts of the world. For example, RPL is the common term/abbreviation in Australia and South Africa, AP(E)L (Accreditation of Prior [Experiential] Learning) in the UK, and PLA (Prior Learning Assessment) in the USA. In Swedish, the concept is ‘validering’ – a translation of the French word validation. This article will mainly use the abbreviation/acronym RPL.

As mentioned, this article presents a study of RPL in Sweden. A general definition of ‘validering’ is provided by the Swedish Ministry of Education (2003):

… a process including structured assessment, valuing, documentation and recognition of knowledge and competence that a person has, irrespective of how they have been acquired. (ibid., p. 19, my translation)

RPL is thus seen as a process that includes not only the assessment itself of knowledge and competence, but also the steps of valuing, documentation and recognition.

A number of initiatives have been taken to develop RPL in Sweden since it was introduced in 1996 in connection with the Adult Education Initiative (AEI), a five-year initiative for the renewal and restructuring of Swedish adult education. During the AEI period, 1997–2002, two official reports on RPL were published, focusing mainly on ‘foreign vocational competence’ (Ministry of Education, 1998, 2001). Experimental projects were launched in a number of municipalities (Andersson et al., 2004). Later on, a national commission with a four-year mandate (2004–2007) was established with the role of encouraging and supporting the development of legitimacy, quality and methods in RPL (Ministry of Education, 2003). However, the main responsibility for this development is assigned to the local municipalities, working in regional co-operation.

RPL in the field of ICT

This article presents a qualitative study of the practice of RPL in the area of ICT/digital competence. The initiative was part of the local development of RPL in one municipality and was situated in the context of municipal adult education at upper secondary level. The design of the RPL process in this specific case will be described and a number of critical issues discussed.

There does not seem to be any widespread ‘tradition’ of RPL in the field of ICT. Nevertheless, a survey of the situation in Sweden in 2000 shows that 44 municipalities (out of the 275 that responded to the survey) had provided RPL for grades in ICT subjects (Valideringsutredningen, 2001). However, there are no figures for the number of participants. Another example is the Swedish National IT Programme 1998–2000, SWIT, where over 10,000 individuals (about 75% of whom were unemployed) participated in vocational training tailored to the needs of IT enterprises. The selection of these participants from among 80,000 applicants was a kind of ‘RPL process’, with interviews and highly formalised tests to find the most qualified. The conclusion reached was that the unemployed participants were more successful, in terms of later employment, than participants in other employment market programmes for the IT sector. Yet it was found that this difference could be explained by more extensive contact with the employment market during the training programme rather than by the selection process (Bjørnåvold, 2000; Johansson & Martinson, 2001; OECD, 2001).

The initiative analysed in this article is based on ICT courses in upper secondary school and adult education, focusing on more ‘basic’ competencies than, for example, the SWIT programme. The curriculum of these courses mainly concerns the use of computers as tools for word-processing, calculation, presentations, databases and layout, in addition to hardware. It also concerns, to some extent, information and communication via the Internet, but this is not the focus of these courses. In other words, the focus of the initiative analysed here is the technological dimension of digital competence, and partly information literacy, while the didactic and critical dimensions are in the background. Consequently, I will use the concept ‘ICT competence’ rather than ‘digital competence’ in the description of the present case, as the present case does not encompass all the dimensions of digital competence as defined above. Finally, in relation to the tripartite digital divide (Van Dijk, 2005), the present study is mainly relevant to the majority group mentioned above, the group that participates in the information society to a certain extent.

Knowledge and competence

Here, RPL is defined as a process including assessment and recognition of knowledge and competence. There are a number of different definitions and descriptions of knowledge and competence. A comprehensive overview of perspectives on knowledge and competence is beyond the scope of the present article; nevertheless, a few ideas will be presented.

Focusing on knowledge, Carlgren (1992) takes as her starting point an extensive review of perspectives, and discusses four aspects – facts, skills, understanding and familiarity. These aspects should not be seen as different types of knowledge but, rather, as different aspects of the same knowledge as a whole. Facts (information), particularly of a scientific and technological nature, are important parts of our knowledge. Skills could be practical as well as intellectual, and consequently aspects of different types of knowledge (or competencies). Understanding encompasses scientific understanding, but also technological understanding. Familiarity, which has a ‘tacit’ dimension, is mainly a matter of ‘practical wisdom’, a dimension of bildung that is relevant in relation to different types of knowledge. Carlgren’s (ibid.) four categories are particularly interesting in the context of this study, as they are part of the perspective on knowledge underlying the Swedish national curriculum. These categories will therefore be used later in the article in the discussion of the empirical results.

When it comes to competence, one general definition (of many) of this concept is ‘an individual’s potential to act in relation to a certain task, situation or context’ (Ellström, 1992, p. 21, my translation). In other words, competence is about the potential of putting knowledge into action (ibid.). This general idea of competence includes not only manual, social and intellectual skills, but also aspects such as attitude and personality. It should be noted that this definition focuses on the ‘actual’ competence of the individual. Another aspect of competence is ‘formal’ competence, as expressed in terms of grades, certificates, etc. Formal competence is also an important part of the context of the present study, which concerns a process of recognition and grading of actual but informal ICT competence.

If we relate these ideas to the present study, it could be expected that the participants in the initiative have some sort of actual, informal ICT competence. They volunteer because they are experienced in using ICT – this experience means that they have knowledge they can put into action. An interesting question, therefore, is what type of knowledge they have. On the one hand, it is likely that they have certain, albeit varying, ICT skills, and that they are more or less familiar with different types of computer use. On the other hand, they do not necessarily have an understanding of why they do things in a certain way or of how computers and software work. However, the focus of the analysis will not be what the participants know, but the RPL process and how it is designed to assess their informal knowledge and competence.

Research on RPL

There does not seem to have been any prior research on the micro-level processes of recognition of ICT or digital competence. Critical analyses of RPL are, generally speaking, comparatively rare, but the research is expanding and a number of studies have been carried out. Here, two themes in the research on RPL will be highlighted: firstly, the question of whether RPL should be a separate activity or a part of the curriculum and the educational process; and secondly, how different models of RPL could be adapted to, or contribute to changing, the ‘system’.

The introduction of a term like ‘validation’, or ‘RPL’, might result in a sort of reification: it becomes a separate ‘thing’ or activity. Wheelahan (2006), for example, promotes RPL as an integrated part of all qualifications, but not as a technique for validating a complete exam. This integration of RPL and education is seen as necessary in the development of ‘graduateness’ – the point being that you cannot become a graduate merely through informal learning; the formal educational process contributes something more that is essential. Breier (2005) discusses ‘rpl’ as opposed to ‘RPL’, using the abbreviation RPL for Recognition of Prior Learning as a separate process taking place before/outside the educational process, and rpl for recognition of prior learning as an integrated part within the educational process. The key question here is to what extent RPL should be seen and organised as a separate activity, or as an integrated part of the curriculum and the educational process.

Numerous RPL models have been developed and described in different countries. One way of categorising these models is to distinguish between RPL adapted to the system and RPL changing the system (Andersson et al., 2003). The broad notion of ‘system’ could represent, for example, the curriculum of an educational programme, the school system in general or the system of the employment market. RPL adapted to the system is to a great extent a matter of convergent assessment (Torrance & Pryor, 1998) of knowledge and competence; for example, a competency-based assessment where the focus is on whether the individual knows or can do certain things, is able to satisfy certain criteria, etc. This can be compared to divergent assessment (ibid.), where the knowledge and competence the individual has are explored in a more open-ended manner. RPL changing the system uses divergent models of assessment to a greater extent, which also means that what the individuals already know becomes the basis for changing the system in a workplace or school.

Peters (2005) shows, for example, how control and exclusion can be the result when the RPL discourse is something candidates have to learn rather than something that acknowledges what they already know. Instead of focusing solely on the convergence of candidates’ knowledge with academic networks, we could also focus on the heterogeneity of their knowledge networks (Pokorny, 2006). Therefore, making (informal) learning visible has the potential to change a system, but the process of recognition might involve an unintended adaptation.

Aim of the study

The aim of this study is to describe and analyse a process of recognition of ICT competence, with a particular focus on how the process is organised, how knowledge and competence are valued, and how recognition and learning are related in this process.

Design of the study

The empirical study has a qualitative, ethnographic approach (Hammersley & Atkinson, 1995), where participant observation has been combined with interviews and documents (e.g. tests) to provide a thick description of the RPL initiative and process studied. In this approach, data from different sources are combined and integrated in the presentation and analysis of results.

The data originate from two groups participating in the initiative, which was organised by a private provider of upper secondary and adult education. The initiative consisted of a four-week RPL process, which was carried out for the first time in May and the second time in October. The first group consisted of ten individuals and the second of eight. Two individuals were in both groups, i.e. there were 16 participants in all. The first group was observed more frequently; the participants in both groups were interviewed. Interviews were also conducted with the teacher responsible for both groups, and with the test constructor. Furthermore, informal conversations were also held during the observation periods. The ‘electronic tests’ (described below) were only available on the computers, and here I was allowed to take the tests informally to see how they worked. For ethical reasons, I did not ‘stand behind’ the participants when they took the different tests since it might have influenced their test performance and results.

The description of the RPL process that follows is an integrated interpretation of the different data. That is, data from observations, interviews, conversations and documents are complementary and have all contributed to the picture, and this ‘triangulation’ strengthens the quality of the results. For example, the description of the testing and grading process is based not only on observations of test-taking and assessment conversations, but also on the contents of test documents and grading criteria, on informal conversations and formal interviews with the teacher, the test constructor and the participants, and on my own test-taking (electronic tests). All these sources provide complementary data that corroborate what is presented as the results. In the final discussion, a number of key themes in these results are analysed in relation to prior research.

The RPL process

The RPL process studied included a number of steps. This process is described in the part of the article that follows. It covers the aspects of invitation, information, participants, contents, organisation and schedule of the activities, testing and, finally, the grading. It should be noted that the whole process is included here, even if some steps are more central – it is in the testing and grading that the ‘actual’ and formal recognition takes place.

Invitation and information

Initially, interested persons were invited (through information sheets, advertisements in newspapers, etc.) to information meetings at the local information centre for adult education. The participants at the meetings were invited to take part in the RPL process, starting with an individual guidance conversation with an adult education counsellor. Individuals who had not attended the meetings were also invited to a guidance conversation.

The subject and course contents

One important part of the information concerned the ‘contents’ of the process. The RPL process was organised based on six courses taken from three subjects at upper secondary school level, i.e. the goal of the process was to obtain grades in these courses. The subjects and courses in question were Use of Computers: Computing (50 credits), Program Management (100 credits); Administration: Information & Layout A (50 credits), Information & Layout B (100 credits); Computer Technology: Database Management (100 credits) and Personal Computers (100 credits). In total, 500 credits were offered in the four-week RPL process, which can be compared to a normal study pace of 20 credits per week when you do not have relevant prior competence.
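
To make the compression explicit: at the normal pace of 20 credits per week, the 500 credits on offer correspond to 500 / 20 = 25 weeks of full-time study, compared to the four weeks of the RPL process.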

Self-assessment and admission

The guidance conversation included a self-assessment, where the individuals assessed whether they had competence in a number of areas. The self-assessment document was based on goals and grading criteria for the courses in question. For example, in relation to the most basic course ‘Computing’, the individual should agree with statements such as (translated from Swedish): ‘Knowledge of personal computers, networks and giving examples of computer usage in working life.’ ‘Being able to use commands for handling registers in Excel 2000 so as to be able to work with lists, e.g. a list of members.’

The individuals who seemed to have a reasonable level of competence were admitted to the RPL programme. There were thus no formal admission tests, only an informal assessment made by the counsellor – who was not an expert on ICT competence – based on the conversation and the self-assessment.

Participants

The participants admitted to this initiative were mainly unemployed. The framing of the initiative actually made the unemployed the main target group, even if this was not explicitly expressed. The group-based organisation, with the process taking place in the daytime, limited the initiative to the individuals who were able to take part under those circumstances. In addition to this, there was also one employee whose employer allowed him to participate during working hours, and two persons who occasionally ‘switched’ from their participation (as unemployed) in a non-formal adult education initiative. The reasons for participating were mainly related to the employment market, but also partly the desire to obtain grades with an exchange value in admission to higher education and to acquire self-knowledge.

Organisation and schedule – with integrated learning opportunities

As mentioned, the data in this study originate from two groups following similar programmes – both programmes covered the same subjects and courses, but the assessment process was somewhat different in the case of the second group. The RPL initiative was organised by a private adult education provider in the municipality. The organisation of the process was group-based: a number of participants met in the computer room and followed a four-week schedule. On the first day, there was an introduction for all the participants, and they were instructed to start with the same basic course, Computing, to familiarise themselves with the assessment instruments. Subsequently, the group-based process was mainly individualised.

During the RPL period for the first group, the computer room was available every morning, five days a week. No formal teaching was included in the process, as RPL should be a matter of assessing prior learning. However, the participants were allowed to study individually, using books and computers, to prepare themselves for the actual assessment. During the first two hours, the participants were allowed to revise, study and practise in the computer room. During the next two hours, the teacher was available as a resource and the participants were able to take the tests described below. The second group had a slightly different schedule. The computer room and the teacher were available on Tuesday afternoon (4 hours), Wednesday morning (4 hours) and Friday afternoon (2 hours). This group thus also had 10 hours a week for testing and teacher support, but there was no guaranteed additional time when the computers were available for practising. It should be noted that even if there was some time for revision and study, and the teacher was available, there are still significant differences in this approach compared to ‘traditional’ education. The time was much more limited than the time it would take to participate in the corresponding courses, and the teacher did not provide any group-based teaching, as would have been the case in a course where all the participants studied the same subjects.

The testing process

With the exception of the first computing course, recommended as an introduction, the participants could decide on the order of the courses. When they felt they were ready, they took the tests. The assessment was based on two different types of tests. Firstly, there were one or more ‘electronic tests’ – computer-based, multiple-choice tests – in each course. The questions partly covered facts about computing, with multiple-choice answers, and partly what could be called computing skills, such as being asked to ‘click’ on the correct place on (an image of) the screen to perform a certain command in a program. Secondly, there was a ‘practical test’, which was constructed specifically for this initiative. This test consisted of a task to be solved to the best of the participant’s ability. For example, in the computing course, the task was to prepare for a holiday. The participant was asked to obtain information about the destination on the Internet. The information was to be presented in a text document with a particular layout, described on a sheet of paper. Furthermore, a budget for the trip was to be drawn up in a spreadsheet, and a presentation of the destination was to be made in a presentation program. The task was designed to select the competencies required by the course goals and to combine them in one coherent task. The test constructor noted that there was a dilemma in that it was not possible to construct a single test that covered all aspects of the course – it was, rather, a matter of drawing up a focused and representative sample.

It should be noted that it was possible to resit the test if the participant did not pass or wanted to improve the result. In the electronic tests, there was an integrated ‘question bank’, which meant that the test changed to some extent each time it was taken. However, the similarities were fairly significant, and it was possible to learn quite a lot of facts the first time around and improve one’s result considerably by resitting the test. No information was provided on the theory behind the creation of the test. Allowing repeated tests might have been a ‘silent policy’ to avoid treating novice test-takers unfairly. The tests did not take much time – it was therefore easy to take the ‘same’ test a number of times. An explicit aim was to test ICT competence rather than the ability to take tests, which was a rationale for allowing the participants both to practise for the test and to take resits. In the practical tests, repeated test-taking was not as common, since the tasks were more extensive. However, the test constructor was aware of a possible dilemma in the assessment – how should the result of a second test be graded, if the individual taking the test takes more time and performs the task more carefully, when one of the criteria for a higher grade is working quickly (see below concerning different criteria for different grades)?
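
The degree of overlap between resits follows from the kind of sampling a question bank implies. The source does not document the actual bank size, test length or sampling rule, so the following sketch (in Python, with hypothetical values of N and K) is purely illustrative: assuming a sitting draws K questions uniformly at random from a bank of N, two sittings share about K * K / N questions on average.

import random

# Illustrative sketch only: bank size (N) and test length (K) are
# hypothetical; the source does not document the actual values.
N = 60   # assumed number of questions in the bank
K = 20   # assumed number of questions drawn per sitting

def draw_test():
    """Draw one sitting as a uniform random sample of question ids."""
    return set(random.sample(range(N), K))

def mean_overlap(trials=10_000):
    """Average number of questions shared by two independent sittings."""
    return sum(len(draw_test() & draw_test()) for _ in range(trials)) / trials

print(f"theoretical expected overlap: {K * K / N:.1f} questions")
print(f"simulated expected overlap: {mean_overlap():.1f} questions")

With these assumed values, roughly a third of a resit (K / N of the questions) would consist of previously seen items, which illustrates how a modest question bank could produce the ‘fairly significant’ similarities noted above.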

This assessment of ICT competence was arranged in essentially the same way in the second group. However, the second time, the electronic tests were not included in the formal assessment, but used as an optional diagnostic assessment. Additionally, the practical tests were changed slightly in order to provide a better assessment.

The grading process

When the participant had taken the practical test, the teacher checked the answers and then conducted a short assessment conversation with the participant about the answers. The conversation helped the teacher understand why the participant had solved the problem in a particular way. The electronic test (to some extent, in the first group), the practical test and the conversation were used by the teacher as the basis for assigning the participant a particular grade. The grading scale used was the same as in upper secondary school and upper secondary education for adults, i.e. the four steps Fail, Pass, Pass with distinction and Pass with special distinction.

The teacher’s aim was to assess what the participants ‘really knew’ – and the conversation was therefore an important part of the assessment. It helped the teacher understand what the test-takers were thinking. The teacher noted that a person could obtain a Pass with special distinction even if there were ‘three or four mistakes’ in the test if, during the course of the conversation, they showed an understanding of why something was wrong and arrived at the correct solution through reasoning.

Generally, the participants obtained high grades. The context of RPL, with the pre-assumption that the participants already had (most of) the competence required in the course(s), made it unlikely that they would fail. Furthermore, since the individuals were allowed to study and prepare for the assessment, they were even more likely to pass the tests. Moreover, the general outline of the grading criteria made it rather unlikely that the participants would end up with only the basic Pass grade. The criteria were drawn up for students taking the actual courses, and one of the requirements for the Pass grade was the ability to solve problems with some assistance – assistance that was not available in this assessment. (The corresponding criterion for a Pass with distinction was to solve problems using ‘judgement and accuracy’, and for a Pass with special distinction to work ‘independently and to quickly reach the intended result’.) However, there were also a few participants who did not pass any courses, and who did not seem to have that ambition either. They were mainly interested in acquiring knowledge about their own competence, and the broad criteria for admission made this possible. Another factor behind this broad admission was that there were more ‘RPL places’ than applicants, which made it reasonable to allow ‘anyone’ to participate.

After the grading, the participant could start on a new course – depending on the initial self-assessment, individual choice and, finally, the time available.

Discussion

In this part of the article, the empirical results concerning the relationship of RPL to the school framework and to different ideas of knowledge, competence and learning will be addressed. The discussion thus returns to the key issues in the research on RPL, and to the perspective on knowledge and competence, presented above.

The school framework – adaptation to a ‘system’

This RPL initiative mainly concerned assessing ICT competencies developed informally, to a large extent at home or at work – that is, outside formal educational settings. However, the initiative was still situated in a certain ‘system’, namely the school framework. The opportunities for assessing competence were limited to a number of pre-defined courses. It was therefore not possible to validate those parts of ICT competence that fell outside the framework of these courses. For example, some of the participants mentioned that they had specific competencies in graphical processing and programming. In addition to this, there was also the framework consisting of the assessment criteria for grades in these courses.

One problem with this is, of course, the convergent approach (cf. Torrance & Pryor, 1998), which makes some competence invisible and does not close the divide between informal and formal learning. Even if specific assessment methods were used, the classification of knowledge and the criteria for assessment were those used in formal training. Consequently, there is a risk of excluding competence not included in the formal framework. For example, the focus in the present case is what I have called ICT competence, while digital competence in a broader sense is not assessed. In addition to this, there is also the problem that the goals and criteria are (to some extent) based on participation in a course – such as the possibility of problem-solving with some assistance, and criteria for ‘planning your work’ that more or less take a longer work process for granted. When the individual, as in this case, is not a course participant, it is not possible to apply all the criteria. The requirement for the basic Pass grade, to be able to do things with some assistance, means that you cannot pass in this type of RPL if you need assistance, even though needing assistance is acceptable according to the criteria.

It could be questioned whether these limitations of the school context are problematic. It is not possible to assess and document all aspects of a person’s knowledge and competence. However, what complicates matters here is the aim in RPL to recognise knowledge in a divergent way, irrespective of how and where it has been acquired. The school framework makes it difficult to fulfil this ambition, as only certain knowledge and competencies are valued in this system. This is particularly evident when the framework consists of only a few ICT courses, covering primarily the technological dimension of digital competence. The didactic and critical dimensions, as well as information literacy, are present in other parts of the school curriculum, such as social sciences, but they are not focused upon in the ICT courses.

It should be noted that there are also advantages with the school framework, in that it is possible for the individual to obtain a grade, a formal competence that has a legitimate value nationally. Nevertheless, if RPL is to be situated within this framework, the implication of this study is that there is a requirement for some development, a minor change within the system to avoid subordinating informal learning to formal schooling. Alternatively, a major change is required, a change that gives a legitimate value to divergent learning experiences from (educationally) informal contexts. The ‘curriculum’ of informal learning is not classified like a formal curriculum, which calls for a re-classification of knowledge when recognising informal ICT/digital competence – a broad competence area with dimensions that could be present in social sciences, language and other subjects as well as in ICT courses.

Assessing knowledge and competence

What ideas about knowledge and competence are visible in this process for recognition of ICT competence? Furthermore, what types of knowledge and competence are valued? As mentioned, two types of tests were used. The multiple-choice ‘electronic’ tests, used mainly in the first group, assessed the type of knowledge that, to a large extent, could be memorised – facts and ‘simple’ skills concerning how computers and software work. The practical tests were of another type. Here, it was a matter of solving ‘practical’ tasks, preferably in a ‘smart’ way. The focus was on competence, in the sense of being able to do things: knowing what, how and why things should be done. Here, the requirements of the school framework were partly set aside, owing to the limitations, mentioned above, that follow from not participating in a course, and the focus was instead on what the test constructor and the teacher saw as the key aspects of ICT competence.

This could be related to the perspective on knowledge ‘behind’ the Swedish national curriculum (particularly as expressed by Carlgren, 1992), of which these courses are part. As mentioned, four key aspects of knowledge are discussed – facts, skills, understanding and familiarity. In this case, the focus of the assessment is on skills: the practical tests concern performing tasks. In addition to this, there are the electronic tests, with a focus on facts. However, there are also aspects of the assessment that consider understanding. The tasks are not ‘simple tasks’, but are constructed to give a representative assessment of an entire course. A certain understanding is therefore probably necessary to be able to accomplish them. Furthermore, there is also the grading conversation, where the teacher/assessor ‘looks for’ understanding – especially if there have been mistakes or less successful solutions when performing the task. Finally, some familiarity seems to be more or less necessary to be able to satisfy the requirements of the assessment. The course contents are extensive – the courses included in the initiative are equivalent to 25 weeks of studying – and a successful RPL process in only one or two of these courses means that the participant has accomplished this in a shorter time than participation in the corresponding courses would have taken. As a result, the participants are expected to know most of the course content before they start the process, and there is not much time for complementary studies.

If we reflect on the process from the perspective of ‘competence’, it does seem to be a matter of assessing competence, at least to some extent. The participants are supposed to perform certain tasks, in a certain context, using manual and intellectual skills, which means that their potential of putting knowledge into action (Ellström, 1992) is tested. However, it is not a matter of assessing everything that could be included in the concept of competence (cf. e.g. Ellström, 1992), or even in the concept of ICT/digital competence. The use of ‘ICT competence’ in this article reflects the fact that the focus of the RPL initiative was the technological dimension of digital competence, while other dimensions were in the background. Furthermore, it is an assessment that takes place in a specific context, and the fact that competence is assessed in one context does not necessarily mean that the individual has the potential to do what is expected in another context of practice. In addition to this, social skills, such as working in a group or making oral presentations, are not made visible in a short RPL process, although this would be possible when participating in a longer course.

RPL and learning

A key issue when discussing RPL is, finally, the relationship between prior learning, recognition and new learning. Should RPL be a separate activity or part of the curriculum? In the official Swedish policy (Ministry of Education, 2003), RPL is mainly described as a ‘link’ in a chain of guidance, RPL and adult education. That is, the adult learner should be given guidance, prior learning should be assessed or validated (before or as a part of participating in education, but still as a separate activity), and complementary further education should be provided afterwards.

The initiative analysed here points to an alternative approach. The process was not based on the idea that the participants already had all the competence required, but rather that they had the potential to pass the tests after some preparation. The metaphor of the link in a chain could therefore be replaced by the metaphor of the intertwined strands of a rope to be followed in the learning process. This second metaphor makes it easier to think about the combination of recognition of prior learning and new learning in a more integrated way, as part of the curriculum. Nevertheless, here too there is the risk of subordinating informal learning to formal education, depending on whether the formal learning ‘strand’ is seen as necessary or complementary. To avoid this possible subordination, a different curriculum might be needed, a curriculum where the recognition of prior informal learning is an integrated part to a larger extent than is the case today – where recognition of prior learning is a matter of ‘rpl’ rather than ‘RPL’ (cf. Breier, 2005). The empirical case presents one idea of such an integrated curriculum: the teacher is available for individual support, but no traditional teaching is included. There is time and space for individual revision and some complementary studies before the testing and grading process, which gives the participant time to learn about the formal demands and to become aware of his or her (formerly) tacit knowledge and competence.

Conclusion

It is possible to at least partly close the digital divide through RPL, which makes informal competence visible and recognised. Nevertheless, there is also the group of ‘outsiders’, who lack competence to be recognised. For this group, a strategy based on RPL is not enough to close the divide. Yet, in an integrated strategy – combining access, informal learning, recognition processes and organised training – RPL could have a role in developing digital competence or literacy among ‘outsiders’ as well as in the majority ‘middle group’ that was the target group of the initiative studied in this article.

Acknowledgements

A grant from the Swedish Research Council has made this study possible.

References

Alexandersson, M. & Limberg, L. (2006). To be lost and to be a loser through the Web. Paper presented at the 34th NERA Congress, Örebro, 9–11 March 2006.

Andersson, P., Fejes, A. & Ahn, S-e. (2004). Recognition of Prior Vocational Learning in Sweden. Studies in the Education of Adults, 36(1), 57-71.

Andersson, P., Sjösten, N-Å. & Ahn, S-e. (2003). Att värdera kunskap, erfarenhet och kompetens. Perspektiv på validering. Stockholm: Myndigheten för skolutveckling.

Bjørnåvold, J. (2000). Making Learning Visible. Identification, Assessment and Recognition of Non-formal Learning in Europe. Thessaloniki: Cedefop.

Breier, M. (2005). A disciplinary-specific approach to the recognition of prior informal experience in adult pedagogy: ‘rpl’ as opposed to ‘RPL’. Studies in Continuing Education, 27(1), 51-65.

Carlgren, I. (1992). Kunskap och lärande. In: SOU 1992:94, Läroplanskommitténs betänkande Skola för bildning. Stockholm: Utbildningsdepartementet.

Ellström, P-E. (1992). Kompetens, utbildning och lärande i arbetslivet. Problem, begrepp och teoretiska perspektiv. Stockholm: Publica/Norstedts Juridik AB.

EC (2004). Implementation of “Education and Training 2010” Work Programme. Working Group B “Key Competences”. Key Competences for Lifelong Learning. A European Framework. November 2004. Brussels: European Commission, Directorate-General for Education and Culture.

Hammersley, M. & Atkinson, P. (1995). Ethnography: Principles in practice, 2nd ed. London and New York: Routledge.

Johansson, P., & Martinson, S. (2001). Varför lyckades det nationella IT-programmet, Swit? – en jämförelse mellan två arbetssätt. Ekonomisk Debatt, 29(4), 293-302.

Ministry of Education (1998). Validering av utländsk yrkeskompetens. Stockholm: Utbildningsdepartementet, SOU 1998:165.

Ministry of Education (2001). Validering av vuxnas kunskap och kompetens. Stockholm: Utbildningsdepartementet, SOU 2001:78.

Ministry of Education (2003). Validering m.m. – fortsatt utveckling av vuxnas lärande. Stockholm: Utbildningsdepartementet, Ds 2003:23.

OECD (2001). Thematic review on adult learning, Sweden, Country note. Paris: OECD.

OECD (2003). Beyond Rhetoric: Adult Learning Policies and Practices. Paris: OECD.

Peters, H. (2005). Contested discourses: assessing the outcomes of learning from experience for the award of credit in higher education. Assessment and Evaluation in Higher Education, 30(3), 273-285.

Pokorny, H. (2006). Recognising prior learning: what do we know? In: P. Andersson & J. Harris (Eds.), Re-theorising the Recognition of Prior Learning. Leicester: NIACE.

Søby, M. (2003). Digital Competence: from ICT skills to digital “bildung”. Oslo: ITU.

Torrance, H. & Pryor, J. (1998). Investigating formative assessment. Buckingham: Open University Press.

Valideringsutredningen (2001). Validering i kommunerna år 2000 – svaren på Valideringsutredningens enkät. Stockholm: Utredningen om validering av vuxnas kunskap och kompetens.

van Dijk, J.A.G.M. (2005). The Deepening Divide: Inequalities in the Information Society. Thousand Oaks, California: Sage Publications.

Wheelahan, L. (2006). Vocations, ‘graduateness’ and the recognition of prior learning. In: P. Andersson & J. Harris (Eds.), Re-theorising the Recognition of Prior Learning. Leicester: NIACE.