Technology-enhanced learning has the potential to develop and deliver innovative learning opportunities that improve the student learning experience (Conole, 2013; Sharples et al., 2015). There is now a wide range of learning trajectories from which curricula are being developed, and for many this raises the question of how higher education educators can ensure that they choose appropriate, robust yet innovative learning designs. This matters because a good learning design needs to support the delivery of course materials, learning support, and appropriate assessment strategies that will meet the learning outcomes demanded by educational institutions and employers.

Learning Design is more important today than ever before: new virtual learning environments and technological tools demand a new set of affordances to support learning. The origins of the term ‘learning design’ can be traced back to the instructional design research of the 1940s (Reiser, 2001). Things have certainly moved apace since then, but one important development that has moved the field forward is the opening up of the design process by making it “more explicit and shareable” (Conole, 2013). In a recent special issue on learning design in the British Journal of Educational Technology, Mor, Ferguson, and Wasson (2015, p. 222) suggest that ‘teachers have the advantage of an intimate knowledge of the context of the learning and the characteristics of the learners, ensuring that they produce a design that is fit for purpose’.

In our technology-rich environments it is not surprising that substantial progress has been made in the last 10 years in conceptualising learning design (e.g., Armellini & Aiyegbayo, 2010; MacLean & Scott, 2011). However, relatively few studies have investigated how educators in practice actually plan, design, implement and evaluate their learning design decisions. Evaluating the success of a learning activity, for instance ‘by analysing the activity logs of students watching videos in online courses’ (Mor et al., 2015, p. 222), is more informative when set against the overall pedagogy and design of the course. For example, preliminary work presented at the #Design4Learning conference on learning design across 157 Open University UK (OU) courses does seem to indicate that the way teachers design and implement their blended and online courses structurally influences how students learn (Toetenel & Rienties, 2016).

This special collection of the best papers presented at the #Design4Learning conference in 2014, supported by the Higher Education Academy and the OU, provides some advice in this area by mapping a number of facets of learning design that should be reviewed before embarking on the production of a new curriculum or the revision of an existing module. These facets include: the use of new technologies; flipped classrooms; live proctoring of electronic tests; online staff development; and a strong theoretical framework for the evaluation of an intervention, while also not forgetting to assess the ‘status function’ of claims made about differing Learning Designs. Adopting this premise, the first paper in this issue shares the findings from a flipped classroom experiment.

Hernandez-Nanclares and Perez-Rodriguez (2016, this issue) applied a ‘Flipped Classroom’ approach to the teaching, in English, of a topic entitled ‘World Economy’ at a Spanish university. The approach adopted was that of Content and Language Integrated Learning (CLIL), and, as Coyle et al. (2010) state, this type of pedagogy creates an “innovative fusion of non-language subject with and through a foreign language”.

This particular flipped classroom learning design also emphasised the role of classroom activities and increased the class time available for student-centred active teaching (Bowen, 2012). Hernandez-Nanclares and Perez-Rodriguez found that this design increased student motivation and satisfaction with the course. Not all the students, however, liked the self-pacing required while following the subject material, but they did indicate that they were able to pace themselves successfully, which is an indicator of a good learning design. This example illustrates a move away from a teacher-centred to a more learner-centred, activity-based approach, which has its theoretical basis in constructivism and the work of Piaget (1976) and Bruner (1996). Nonetheless, learning designs should also recognise the role of assessment, since assessment is a major influence on learning (Rowntree, 1987). The next paper addresses this issue in the form of live proctoring.

Lilley, Meere and Barker’s paper (2016, this issue) tackles the issue of user authentication when students are undertaking online assessments. This issue will become increasingly important as students become more avid consumers of accredited worldwide courses, especially as MOOC accreditation becomes more prevalent (Sharples et al., 2015). They devised a pilot study with a group of computer science students from seven different countries, namely Egypt, Kenya, Saudi Arabia, Slovakia, Trinidad and Tobago, the UK and Zambia, in order to test the process of remote live invigilation. The technical issues associated with this type of invigilation were not being tested, since a commercial supplier was used in this study. The research question addressed whether live proctoring hindered or enhanced online assessment. Participants in this study agreed that remote live invigilation should be used more widely and that there were some advantages to live proctoring. For example, a live proctor could assist students if they were having technical problems. However, one student felt more stressed before the remotely proctored exam than before any other type of invigilated examination because of the time needed to set up the software prior to the online test. This student would rather have spent the time revising than downloading the software and getting it to work.

The adoption of new learning designs and/or the employment of new technologies also requires staff development (Whitelock, 2011). Campbell (2016, this issue) documents the use of ‘Talking Point’, which she describes as a flexible, targeted online staff development approach that works. Campbell designed a day of online professional development for a distributed workforce at the OU. The design provided space for informal situated learning, peer interaction and community building. She evaluated three types of events and concluded that the online design encouraged attendance and opportunities for peer engagement, reflection and social learning. This strategy resonates with Lave and Wenger’s (2002) theory of a ‘community of practice’ and builds on current work by Rienties et al. (2013), who, in a longitudinal study across five universities, found that online professional development can be an effective medium for sharing diverse practice.

Johnson’s paper (2016, this issue) focuses on Learning Design and its social and unintended frictions in education, and draws attention to the influence that the ‘status function’ of technological projects can have on project outcomes. The term status function is derived from Searle (2010) and describes a particular type of speech act uttered within any given community to sustain its ‘collective intentionality’. In other words, the status function maintains the power balance and hence the status quo within a project. Johnson argues that the status function of any research contract undertaken with external funders will not necessarily deliver sustainable technology-enhanced learning, since all the constraints or boundaries for any change are declared in the status functions. This can mean that when a project is not progressing well, it is more difficult to negotiate radical changes with the funder of the original contract. Instead, technologies are tweaked and the predicted benefits fall short of the original suppositions and expectations. Johnson draws upon two case studies that involve Learning Design to illustrate his thesis. One example has a more positive outcome than the other. However, a good lesson from this work is that theoretical clarity can assist with understanding empirical findings and with raising the appropriate type of evaluation questions. Without theoretical clarity, project evaluation is difficult, and suggesting that a piece of software need only undergo minor changes may not deliver sufficiently positive outcomes.

The final paper in this series, by Rienties et al. (2016, this issue), argues for an evidence-based framework for learning analytics so that stakeholders can design, manage, implement and evaluate learning design interventions. The authors have developed an Analytics4Action Evaluation Framework (A4AEF) that is currently being tested at the OU with eighteen modules across five disciplines. The example pedagogical interventions explain how both static and dynamic learning analytics data can provide insights for action. This is where learning analytics meets learning design: bottlenecks in the learning process are identified and immediate action can be taken. The next step in the framework is to evaluate the intervention. Rienties et al.’s plea for the use of a common framework is crucial at this stage because it allows comparisons to be made between the positive and negative impacts of a series of interventions across all the modules.

This final paper illustrates that Learning Designs are not static entities. They are complex, multifaceted, student-centred activity schedules that evolve in tandem with the growth in knowledge in any given subject domain (Armellini & Aiyegbayo, 2010; Toetenel & Rienties, 2016). The nature of academic work is indeed changing, but teaching still remains its major component. Making our teaching more explicit through sharing our learning designs not only nurtures a community of practice, but also acknowledges that teaching is indeed a challenging but exciting profession.

Competing Interests

The authors declare that they have no competing interests.