Abstract
This study, conducted between 2006 and 2011, enquired into student perceptions of Information and Communications Technology (ICT) and its assessment at age 16. The prevailing orthodoxy amongst writers, commentators and educationalists is that the subject does not reflect the learning and use that young people make of technology. The voice of the learner, so often lauded in aspects of school democracy and in formative assessment, has not been heard in respect of the high-stakes assessment at the end of Key Stage (KS) 4 in schools in England. This research was a step towards filling that void.
Taking an interpretive phenomenological approach, three phases of empirical data collection were used, each building on the previous one. To bring the student perception and voice to the fore, a repertory grid analysis was initially used to elicit constructs of learning and assessment directly from the students. This was followed by a questionnaire and semi-structured interviews across a sample of state-funded schools in England. The multiple-phase data collection allowed phenomena to be distilled in successively greater depth at each phase.
Three phenomena emerged as central to the students’ views. Firstly, students identified ICT as a subject that was predominantly about their future lives. They equated what they were doing in school with its perceived value for future education, for employment and as a tool for life. Secondly, in common with many commentators, they saw creativity and ICT as being intrinsically linked. Thirdly, their views were dominated by the culture of the school in which they were studying. The institutional habitus gave an enculturation to their perceptions which coloured everything else. Thus they valued creative and open-ended activity in the use of technology, but only where it contributed to formal, in-school learning.
Author:
Pete Bradshaw
Publication Date: 2011
Prologue
This research, and the resultant thesis, was carried out in schools, with 16-year-old students as respondents. It enquired into, and reports on, the views of those who were in the final year of Information and Communication Technology (ICT) courses in Key Stage (KS) 4 [1] at English secondary schools. Specifically, it reports on their perceptions of the subject and its assessment.
I had been proactive in moving, as a teacher, from school to university and in taking on aspects of the role that were particular to the higher education sector. I became involved in the design of new programmes of teaching and assessment. Universities, unlike schools, have the authority to set their own examinations and confer their own awards; qualifications taken by school students, in contrast, are administered by external awarding bodies rather than by the schools themselves. I had been a chief examiner in ICT for many years and had experience of assessment design at that level.
In 2000 I had joined a university research and development project at Ultralab [2], specifying, implementing and facilitating online learning communities for the National College for School Leadership. Three years later I moved within Ultralab to work on projects on assessment and creativity and to lead the online cohorts of the MA in Education. This gave me insight into, and practical experience of, teacher education on a national scale, which facilitated a move to work on the Applied ICT strand of the PGCE [3] at Nottingham Trent University (NTU), leading it from 2006. Through this I had contact [4] with a range of schools in the East Midlands of England. While at NTU I worked on education programmes at undergraduate and postgraduate level and had a particular responsibility for quality assurance (QA). This is especially pertinent to the discussions of validity of assessment here. The two aspects of initial teacher education – working with local schools and QA of assessment – led to the initial ideas for this thesis.
The initial stimuli for the research came from students’ and teachers’ comments about ICT, supplemented by an unease I had picked up while visiting schools and talking to trainee teachers about the OCR [5] National qualification in ICT, an increasingly popular option for ICT in schools [6] (Vidal Rodeiro, 2010) and reportedly the fourth most popular qualification at 16 in England in 2009 (Paton, 2010b). There was a concern, however, that passing it “look[ed] like a screenshot hoop-jumping exercise [with] endless amounts of ‘evidence’ seem[ing to be] the order of the day for all [ICT] qualifications” (Teacher, 2006).
A further issue was raised in a discussion on the Naace [7] online community’s mailing list, where it was reported that conversations with students revealed that they did not appear to learn anything new in ICT at school. This discussion was summarised by Heppell (2007a), who contrasted the curriculum being experienced by students in their everyday lives with that handed down through the formal education system. The difference, he claimed, was partially caused by the rate of change of technology. What might be considered essential for inclusion in school curricula today would be obsolete tomorrow and, worse, was very quickly seen as irrelevant by students, whose voice was not considered in the design of such curricula and their assessment (ibid.).
Three vignettes are provided here to illustrate what would appear to be a mismatch between the experience of learning about, and with, ICT in schools and students’ experience of learning, and use, of ICT outside school. The first is from a national newspaper in England and the final two come directly from students.
Naughton (2007:12), writing in The Observer, provided the first vignette:
There’s a surreal quality to it, conjuring up images of kids trudging into ICT classes and being taught how to use a mouse and click on hyperlinks; receiving instructions in the creation of documents using Microsoft Word and of spreadsheets using Excel; being taught how to create a toy database using Access and a cod Powerpoint presentation; and generally being bored out of their minds. Then the kids go home and log on to Bebo or MySpace to update their profiles, run half a dozen simultaneous instant messaging conversations, use Skype to make free phone calls, rip music from CDs they’ve borrowed from friends, twiddle their thumbs to send incomprehensible text messages, view silly videos on YouTube and use BitTorrent to download episodes of Lost. When you ask them what they did at school, they grimace and say: “We made a Powerpoint presentation, Dad”. Yuck!
It was interesting to note the plethora of software names in this account. Even just a year later many of them seemed outdated. By 2008 students would, for example, probably have been using iPlayer or 4oD to watch missed television programmes, updating profiles on Facebook, and posting images on Flickr. This was evidence of the changing technological landscape, leading to a disparity between school curricula and assessment on the one hand and students’ exposure to, and experience with, technology on the other (McFarlane, 2001; Threlfall and Nelson, 2006; Heppell, 2007b). While the use of different software does not imply different underlying learning, knowledge or skills, many of the things that Naughton (op. cit.) describes would have been impossible only a few years earlier: the software to make these tasks accessible to anyone but a few technological experts did not exist.
A second account was heard directly from a student. Tellingly, for this thesis, he put assessment at the heart of ICT education:
I find our education is based around assessment and therefore we are given what is required to pass these exams at the highest possible ability. We might even be given the syllabus of what is expected. Would it not be better to be given a greater depth of knowledge and a more true knowledge than just given what is required to pass exams?
Student recorded by Millwood (2008)
The final vignette, also from a student, addressed this mismatch between assessment and what is done beyond school from another angle – that of the inadequacy of the examinations. Writing on a gaming forum [8], a 16-year-old said:
… just did AQA [9] GCSE [10] a few days ago and I am sure anyone else who did will agree it is shamefully and embarrassingly easy for GCSE.
(‘addonai’, 2007)
This was from someone who had just taken the examination. The view that ICT assessment is too easy was echoed in the popular press (see, for example, Daily Mail, 2007).
It was these vignettes, and other comments like them, that inspired me to undertake the research. I wanted to find out how representative they were of students in schools who were approaching their ICT examinations and undertaking coursework, and to discover their perceptions of the subject and of its assessment at 16. It was with these issues in mind that I set out on the journey towards a thesis: a journey that was not without surprises and changes of direction, but one that maintained from the outset the primacy of hearing the student voice.
Conclusions
Three phenomena emerged from the research. Student perceptions of ICT were largely focused on its utility and relevance for later life, for further education or for employment. This end justified almost any means of obtaining a qualification in ICT. Students saw that creative aspects of ICT use could be assessed although, when asked what should be added to a course, they did not value things done solely at home. Their perceptions were dominated by the school and the course they were following.
The prevailing orthodoxy, as expressed anecdotally in the vignettes that initiated the study, is that the ICT curriculum and its assessment are not fit for purpose in that they do not take account of the impact of technologies on young people’s lives and learning. This is especially true of the informal contexts in which a significant amount of technological use, and learning, takes place (Crook, 2008; Logicalis, 2009; Ofcom, 2011). Further, it is argued that the assessment process is too conservative to take into account this wide-ranging and often creative understanding of ICT (Heppell, 2007b; Selwyn, 2011). There is a relationship between structural, institutional, social and personal factors and assessment systems, which affects motivation and autonomy, and it is in motivation and autonomy that perception may be most visibly manifested (after Ecclestone and Pryor, 2003). The assessment system itself is subject to concerns of validity and reliability (Wiliam, 2001), and its relationship to the agendas of learner voice (Ruddock et al., 2006; Walker and Logan, 2008) and personalisation is unclear (Underwood et al., 2008). In respect of this underlying knowledge landscape, this research has added to the field in three areas.
Firstly, and relating back to the vignettes in the Prologue, students taking ICT qualifications at 16 do not share the orthodox view that the assessment systems are unfit for purpose. They have high regard for their utility and for the skills they learn. They accept that what is in the specifications is of value and, in particular, cite its relevance for future life, employment or study. They do not talk explicitly about the underlying knowledge and understanding, however, focusing instead on the production of artefacts or the solving of problems. That is not to say that these are without cognitive endeavour, simply that students do not articulate this in anything other than the vaguest terms. Tapscott’s model (1998) of a system in which the learner is at the centre and the teacher acts as a facilitator of learning, supported by technology, is not one that is seen in KS4 ICT classes. The demands of the qualification are paramount, leading to ‘working from a list’. This demand stems from the multiple high-stakes ways in which the education system uses performance measures: success in qualifications at 16 is the prime indicator of secondary school success. This overrides any needs of students who, nevertheless, are accepting of what is and cannot see what might be. Their perceptions are heavily influenced by the school (as Reay et al., 2001 found in looking at choice at 18) and they devalue ICT learnt outside the course they are following.
Secondly, students see technology very much as it is now, especially in relation to the content of an ICT course. Some technologies, such as games and mobile devices, are central to their lives outside school but have not been adopted by the education system. Students cannot articulate how these technologies might be included in assessment systems. They see little value, so far as a qualification in ICT is concerned, in the learning they do with, and about, technology outside school. This may be compounded by policies which restrict their use in schools. Johnson et al. (2010) predict that this will change in the next two to three years but, when one considers the lack of use of Web 2.0 tools reported by Crook (2008), this seems unlikely. Such participative and collaborative tools have been available to schools and students for at least six years but have yet to be widely adopted for learning, let alone for assessment. Technological changes should provide opportunities and imperatives for ICT curricula to change (Balanskat et al., 2006). Assessment needs to follow suit, but students in this study are not cognisant of this need.
Thirdly, learner voice is a key issue in education but has not entered the realm of engagement of students in high-stakes assessments. Learners are involved and consulted at many stages in the learning process and in the life of the institution. They are not, however, involved in the design of assessment processes and qualifications at 16. While they see that such assessment is germane to future education and employment, they do not see any scope for changes to the curriculum, except for the desirability of more open-ended tasks. Projects have shown that students are able to judge the work of others (Mcguire et al., 2004; Mitra and Dangwal, 2010) and this process of peer assessment was embedded in policy (DCSF, 2008a; 2009), but it has not been applied to summative assessment. Mitra’s self-regulating learning systems (2003) are entering the mainstream, but the analogous self-regulating assessment systems, if they exist, are not. Such a system would have activity, and not the specification, as its starting point. Churches’ digital taxonomy (2008) could be a tool for developing rigour in such a system, with activities being judged against such a framework. This would go some way towards applying responses to socio-technological needs to the context of assessment (Facer, 2009), meeting calls for learner-centric assessment (Johnson et al., 2010) and promoting internal motivations for success in students (Greenberg, cited in Gatto, 2005). It would also allow informal and non-formal learning to be considered alongside formal learning, addressing the debate outlined by the OECD (undated). Such an approach is seen in the CoPE awards (ASDAN, 2008) but is not part of the mainstream. With the increased focus on ‘tradition’ and ‘rigour’ in GCSEs (Gove, 2011; Paige, 2011), however, this seems unlikely to happen under current government policies.
References & Contacts
‘addonai’ (2007), Comment in the thread Just got back from ICT GCSE, 25 May 2007, 10.08 GMT [online] available at http://uk.gamespot.com/pages/forums/show_msgs.php?topic_id=25648227&page=1 accessed 02/10/08
ASDAN (2008), CoPE: The Certificate of Personal Effectiveness: Standards 2008 for levels 1-3. Bristol: ASDAN.
Balanskat, A., Blamire, R. and Kefala, S. (2006), A review of studies of ICT impact on schools in Europe. Brussels: EU Schoolnet.
Churches, A. (2008), Digital Taxonomy [online] available at http://edorigami.wikispaces.com/Bloom’s+Digital+Taxonomy accessed 18/6/10.
Crook, C. (2008), Web 2.0 technologies for learning at KS3 and KS4: The current landscape, Coventry: Becta.
Daily Mail (2007), Qualification that ‘an 11-year-old could pass’ is worth four GCSEs in school league tables. Daily Mail, 19 May 2007, available online at http://www.dailymail.co.uk/news/article-455813/Qualification-11-year-old-pass-worth-GCSEs-school-league-tables.html accessed 02/10/08.
DCSF (2008a), The assessment for learning strategy. London: Department for Children, Schools and Families.
DCSF (2009), Peer assessment. London: Department for Children, Schools and Families.
Ecclestone, K. and Pryor, J. (2003), ‘Learning careers’ or ‘Assessment careers’? The Impact of Assessment Systems on Learning. British Educational Research Journal, 29(4), 471-488.
Facer, K. (2009), Educational, social and technological futures: A report from the Beyond Current Horizons Programme, Bristol: Futurelab.
Gatto, J. (2005), Dumbing us down: The hidden curriculum of compulsory schooling, (2nd edition). Gabriola Island, BC: New Society.
Gove, M. (2011), Education for economic success, speech to the Education World Forum, 11 January 2011, available online at http://dfe.gov.uk/inthenews/speeches/a0072274/michael-gove-to-the-education-world-forum accessed 26/1/11.
Heppell, S. (2007a), ICT as a tool for creativity, e-mail on Naace mailing list 16 July 2007, used with permission.
Heppell, S. (2007b), Assessment and new technology: New straightjackets or new opportunities? [online] available at http://www.heppell.net/weblog/stephen/2007/01/29/Assessmentandnewtechnologyne.html accessed 17/6/10.
Johnson, L., Smith, R., Levine, A., and Haywood, K. (2010), 2010 Horizon report: K-12 edition. Austin, Texas: The New Media Consortium.
Logicalis (2009), Realtime Generation Survey. Slough: Logicalis.
McFarlane, A. (2001), Perspectives on the relationships between ICT and assessment. Journal of Computer Assisted Learning, 17, 227-234.
Mcguire, L., Roberts, G. and Moss, M. (2004), Final Report to QCA on the eVIVa Project 2002-2004. Chelmsford: Ultralab.
Millwood, R. (2008), Simon, 15, England [online] available at http://www.futureknowledge.org/youth-voice/simon-15-england accessed 14/10/08.
Mitra, S. (2003), Minimally invasive education: a progress report on the “hole-in-the-wall” experiments. British Journal of Educational Technology, 34(3), pp.367-371.
Mitra, S. and Dangwal, R. (2010), Limits to self-organising systems of learning – the Kalikuppam experiment. British Journal of Educational Technology, 41(5), 672-688.
Naughton, J. (2007), Welcome to IT class, children; log on and be bored stiff. The Observer, 7 January 2007, Business section, p12.
OECD (undated), Recognition of Non-formal and Informal Learning. Paris: Organisation for Economic Co-operation and Development [online] available at http://www.oecd.org/document/25/0,3343,en_2649_39263238_37136921_1_1_1_37455,00.html accessed 18/12/06.
Ofcom (2011), UK Children’s media literacy. London: Ofcom.
Paige, J. (2011), Michael Gove pushes for return to more rigorous GCSE and A level exams. The Guardian, 18 June 2011, p.18.
Paton, G. (2010b), Pupils flock to ‘less demanding’ ICT course. Daily Telegraph, 15 January 2010 available online at http://www.telegraph.co.uk/education/educationnews/6998312/Pupils-flock-to-less-demanding-ICT-course.html accessed 17/6/10.
Reay, D., David, M. and Ball, S. (2001), Making a Difference? Institutional Habituses and Higher Education Choice. Sociological Research Online, 5(4).
Ruddock, J., Brown, N. and Hendy, L. (2006), Personalised learning and pupil voice: The East Sussex project. London: Department for Education and Skills.
Selwyn, N. (2011), Education and technology: Key issues and debates. London: Continuum.
Tapscott, D. (1998), Growing up digital: The rise of the Net Generation. New York: McGraw-Hill.
Teacher (2006), OCR Nationals. Message posted on OCR ICT-GCSE message board, 5 December 2006, anonymised.
Threlfall, J. and Nelson, N. (2006), A taxonomy of sources of difficulty in the assessment of ICT. Paper presented at the Association for Educational Assessment – Europe (AEA-E) Conference, 9-11 November 2006, Naples, available online at http://www.aea-europe.net/userfiles/C3%20John%20Threlfall%20&%20Nick%20Nelson.pdf accessed 12/4/07.
Underwood, J., Baguley, T., Banyard, P., Coyne, E., Farrington-Flint, L. and Selwood, I. (2008), Impact 2007: Personalising Learning with Technology: Final Report. Coventry: Becta.
Vidal Rodeiro, C. (2010), Uptake of ICT and computing qualifications in schools in England 2007-2009 (Statistics Report Series No. 25), Cambridge: Cambridge Assessment.
Walker, L. and Logan, A. (2008), Learner engagement: A review of learner voice initiatives across the UK’s education sectors. Bristol: National Endowment for Science, Technology and the Arts (NESTA) Futurelab.
Wiliam, D. (2001), Reliability, validity, and all that jazz. Education 3-13, 29(3), 9-13.
[1] The English school system is divided into four Key Stages: KS1 for pupils aged 5 to 7, KS2 ages 7-11, KS3 ages 11-14 and KS4 ages 14-16.
[2] Formerly a research unit at Anglia Ruskin University, Chelmsford, England.
[3] PGCE = Postgraduate Certificate of Education, a programme of initial teacher education in England.
[4] In 2009 I moved to the Open University to work on a professional development project for teachers of ICT and, since 2011, to lead the MA in Education. These roles continued to provide some access to schools.
[5] OCR = Oxford, Cambridge and RSA Examinations.
[6] 60,648 students were entered for OCR Nationals in 2008, representing 17.5% of the entries for ICT qualifications for the age group being considered. This rose to 118,081 in 2009 (34.8%) – see also Table 2.8.
[7] Naace is the ICT subject association in the United Kingdom.
[8] UK Gamespot at http://uk.gamespot.com.
[9] AQA is a UK awarding body: the Assessment and Qualifications Alliance.
[10] GCSE = General Certificate of Secondary Education, the predominant qualification taken by 16-year-olds in England, Wales and Northern Ireland.