Introduction

Across the globe, many institutions and organisations have high hopes that learning analytics can play a major role in helping them remain fit-for-purpose, flexible, and innovative. According to Tempelaar, Rienties, and Giesbers (2015, p. 158), “a broad goal of learning analytics is to apply the outcomes of analysing data gathered by monitoring and measuring the learning process”. Learning analytics applications in education are expected to provide institutions with opportunities to support learner progression and, more importantly, in the near future to provide personalised, rich learning on a large scale (Rienties, Cross, & Zdrahal, 2016; Tempelaar et al., 2015; Tobarra, Robles-Gómez, Ros, Hernández, & Caminero, 2014).

Increased availability of large datasets (Arbaugh, 2014; Rienties, Toetenel, & Bryan, 2015), powerful analytics engines (Tobarra et al., 2014), and skilfully designed visualisations of analytics results (González-Torres, García-Peñalvo, & Therón, 2013) mean that institutions may now be able to use the experience of the past to create supportive, insightful models of primary (and even real-time) learning processes (Arnold & Pistilli, 2012; Ferguson & Buckingham Shum, 2012; Papamitsiou & Economides, 2014).

Substantial progress in learning analytics research relating to identifying at-risk students has been made in the last few years. Researchers in learning analytics use a range of advanced computational techniques (e.g., Bayesian modelling, cluster analysis, natural language processing, machine learning, predictive modelling, social network analysis) to predict learning progression (Agudo-Peregrina, Iglesias-Pradas, Conde-González, & Hernández-García, 2014; Calvert, 2014; Gasevic, Zouaq, & Janzen, 2013; Tempelaar et al., 2015; Tobarra et al., 2014; Wolff, Zdrahal, Nikolov, & Pantucek, 2013). What these and other learning analytics studies have in common is a combination of (often longitudinal) data about the learners and their learning from a range of sources which will gradually improve the accuracy of predicting which learners are likely to fail.

In this article, we argue that one of the largest challenges for learning analytics research and practice still lies ahead of us, and that one substantial and immediate challenge is how to put the power of learning analytics into the hands of teachers and administrators. While an increasing body of literature has become available regarding how researchers and institutions have experimented with small-scale interventions (Clow, Cross, Ferguson, & Rienties, 2014; Papamitsiou & Economides, 2014), to the best of our knowledge no comprehensive conceptual model, nested within a strong evidence-base, is available that describes how teachers and administrators can use learning analytics to make successful interventions in their own practice.

There is an urgent need to develop an evidence-based framework for learning analytics with which students, researchers, educators, and policy makers can manage, evaluate, and make decisions about which types of interventions work well, under which conditions, and which do not. If institutions are going to adopt learning analytics approaches, the research community has to provide a clear conceptual model embedded into an evidence-based results approach that can

  1. accurately and reliably identify learners at-risk;
  2. identify learning design improvements;
  3. deliver (personalised) intervention suggestions that work for both student and teacher;
  4. operate within the existing teaching and learning culture; and
  5. be cost-effective.

In this article, we will work towards the development of a foundation for an Analytics4Action Evaluation Framework (A4AEF) that is currently being tested and validated at the largest university in Europe (in terms of enrolled learners), namely the UK Open University (OU; Calvert, 2014; Richardson, 2012). First, we will provide a short literature review of contemporary learning analytics studies, focussed specifically on the context of the OU, as this institution is taking a leading role in implementing learning analytics at scale. This will be followed by an argument that identifies the need for more robust evidence-based research. Second, we will build the foundations of the A4AEF model. Finally, we will provide one exemplar of how the A4AEF model has been used in practice by teachers, administrators and researchers.

The power of learning analytics

As learning analytics is a relatively new research field, it is not surprising that most of the research efforts have thus far focussed on seeking, developing and raising awareness of the conceptualisations, boundaries, and generic approaches of learning analytics (Arnold & Pistilli, 2012; Ferguson, 2012; Papamitsiou & Economides, 2014; Rienties et al., 2016). As indicated by Tempelaar et al. (2015), many learning analytics applications and dashboards available in virtual learning environments (VLEs) like Blackboard and Moodle generate VLE usage data, such as the number of clicks (Rienties et al., 2015; Wolff et al., 2013), the number of messages posted in discussion forums (Agudo-Peregrina et al., 2014), or the number of (continuous) computer-assisted formative assessments attempted (Papamitsiou & Economides, 2014; Tempelaar et al., 2015; Whitelock, Richardson, Field, Van Labeke, & Pulman, 2014; Wolff et al., 2013). User behaviour data are frequently supplemented by individual learner characteristics, such as prior educational attainment, socio-economic data, or motivation (Arbaugh, 2014; Calvert, 2014; Richardson, 2012).

However, a special issue on learning analytics in Computers in Human Behavior (Conde & Hernández-García, 2015) indicated that simple learning analytics metrics (e.g., number of clicks, number of downloads) may actually hamper the advancement of learning analytics research. For example, using a longitudinal analysis of over 120 variables from three different VLE systems and a range of motivational, emotional and learning-style indicators, Tempelaar et al. (2015) found that most of the 40 proxies of “simple” VLE learning analytics metrics provided limited insight into the complexity of learning dynamics over time. On average, these clicking-behaviour proxies were able to explain only around 10% of the variation in academic performance. In contrast, learning motivations and emotions (attitudes) and learners’ activities during continuous assessments (behaviour) significantly improved explained variance (up to 50%) and could provide an opportunity for teachers to help at-risk learners at a relatively early stage of their university studies. Although a large number of institutions are currently experimenting with learning analytics approaches, few have done so in a structured way or at the scale of the OU, to which we now turn our attention.

Learning analytics studies at the OU

Lecturers and researchers at the OU have access to a substantial range of data pertaining to teaching and learning. Over the last decade several systems have been developed (Ashby, 2004; Inkelaar & Simpson, 2015; Richardson, 2012) for managing data tasks, such as logging assessment grades (Calvert, 2014; Tingle & Cross, 2010), handling tutor feedback to students, monitoring VLE activity (Rienties et al., 2015), surveying students (Ashby, 2004) and capturing the pedagogic balances within a module (Cross, Galley, Brasher, & Weller, 2012). The nature of distance learning means that teaching at the OU is manifested primarily in two forms: in the teaching and learning design embedded in module materials, learning activities and assessment (Conole, 2012); and in the direct distance teaching delivered by associate lecturers to their tutor groups (normally consisting of 15–20 students) in (increasingly online) group tutorials (Wolff et al., 2013).

Demand for actionable insights to help support module and qualification design is currently strong, and so web analytics, academic analytics and business intelligence are now often taken into account when designing, writing and revising modules and when evaluating specific teaching approaches and technologies (Rienties et al., 2016). A range of data interrogation and visualisation tools developed by the OU supports this (Calvert, 2014; Cross et al., 2012; Rienties & Alden Rivers, 2014). The use of such data can range from investigating the student experience of assessment, and contrasting the effectiveness of ‘revision designs’ with respect to observable activity on the VLE, to investigating retention and learning issues associated with concurrent study (studying more than one distance learning module at once), collaborative learning, online tuition, and wikis. Work that is more avowedly ‘learning analytics’ is taking place at the institutional level, for example on predictive analytics for student success (Calvert, 2014; Wolff et al., 2013), ethical considerations of data use (Slade & Prinsloo, 2013) and involvement with external projects such as the LACE Project (Clow et al., 2014), as well as at the module design level (Cross et al., 2012; Rienties et al., 2015).

For example, in a comparison of 40 learning designs at the OU with learner behaviour in the VLE, learning satisfaction and academic performance, Rienties et al. (2015) found that the way teachers designed online modules significantly influenced how learners engaged in the VLE over time. Furthermore, and particularly important for this special issue, the learning design of online modules significantly affected learner satisfaction: online modules with a strong content focus were rated significantly more highly by learners than online modules with a strong learner-centred focus, in particular those with activities requiring communication between peers and interactivity.

Real-time capture of learner data has to date focused predominantly on VLE activity, such as the use of learning tools and assessments. Computer-marked assessment (Whitelock et al., 2014) is a case in point, as it allows for the recording of both learning activity (the use of the tool) and student performance (the scores achieved). These assessments are capable of providing immediate feedback to students based on their answers and can therefore perform both a summative and a formative function (Tempelaar et al., 2015; Whitelock et al., 2014). Jordan (2014) has recently spoken of computer-marked assessment as ‘learning analytics’, and, indeed, the technology was itself an interventional response to issues identified by analysis of learning data (e.g., Isherwood, 2009). For example, the Centre for Open Learning in Mathematics, Science, Computing and Technology (COLMSCT) projects undertaken at the OU between 2005 and 2010 illustrate how the evolution of a learning technology takes place in tandem with the methods to measure its use.

Meanwhile, Tingle and Cross (2010) used individual anonymised weblogs of over 650,000 visits to online quizzes and podcasts to understand patterns of use for a first-year undergraduate module. Changes were made to the module in order to encourage greater participation, particularly by those in lower socio-economic groups. At present the OU is considering further opportunities for augmenting real-time monitoring including logging activity on specific key webpages, content or activities, and asking students questions about their module when they submit an assessment (Calvert, 2014; Rienties et al., 2016).

The OU has also been developing systems to help teachers identify at-risk students through the development of two predictive analytics models. A statistical model has been built by the university’s Information Office using logistic regression, with the primary purpose of improving aggregate student number forecasts (Calvert, 2014). The results of this model are aggregated from the calculated probabilities of individual students being registered at key milestones during a module presentation. These individual probabilities are being re-purposed to guide targeted interventions at key points. During the development of the model, a set of 30 variables was identified as the most effective explanatory variables from a list of around 200. According to Calvert (2014), these 30 variables can be broadly categorised into five groups: characteristics of the student; the student’s study prior to the OU and their reasons for studying with the OU; the student’s progress with previous OU study; the student’s module registrations and progress; and finally the characteristics of the module and qualification being studied.
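To make the shape of such a model concrete, the sketch below fits a logistic regression on synthetic data, with hypothetical stand-in features for the five variable groups; it illustrates the general technique Calvert (2014) describes, not the OU’s actual model or variable set.

```python
# Illustrative only: a logistic regression of the kind described by Calvert
# (2014), fitted on synthetic data. Feature names are hypothetical stand-ins
# for the five broad variable groups, not the OU's actual variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
students = pd.DataFrame({
    "age": rng.integers(18, 60, n),                       # student characteristics
    "prior_he_qualification": rng.integers(0, 2, n),      # study prior to the OU
    "previous_ou_modules_passed": rng.poisson(1.5, n),    # previous OU progress
    "credits_registered": rng.choice([30, 60, 90, 120], n),  # current registrations
    "module_level": rng.choice([1, 2, 3], n),             # module/qualification
})
# Synthetic outcome: still registered at a key milestone (1) or withdrawn (0).
logit = (-0.5 + 0.8 * students["prior_he_qualification"]
         + 0.4 * students["previous_ou_modules_passed"]
         - 0.2 * students["credits_registered"] / 30)
students["registered_at_milestone"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = students.drop(columns="registered_at_milestone")
y = students["registered_at_milestone"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Individual probabilities can be aggregated into forecasts, or used to flag
# students for targeted intervention at key milestones.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, probs), 3))
```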

The OU Analyse project specifically aims to predict learners at risk (i.e., lack of engagement, potential to withdraw) in a module presentation as early as possible so that cost-effective interventions can be made. In OU Analyse, predictions are calculated as follows:

The predictive modelling uses two types of data: demographic/static data and learner interactions with the VLE system. Demographic/static data include, among others: age, previous education, gender, geographic region, Index of Multiple Deprivation score, OU study motivation category, the number of credits the learner is registered for, and the number of previous attempts on the module. VLE data represent a learner’s interactions with online study material; these interactions are classified into activity types and actions, where each activity type corresponds to an interaction with a specific kind of study material (Rienties et al., 2016; Wolff et al., 2014; Wolff et al., 2013). For a detailed description of the technical specifications of OU Analyse, see http://analyse.kmi.open.ac.uk/.
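As a rough illustration of how these two data types can be combined, the sketch below (on invented records) pivots VLE interactions into per-student, per-week counts by activity type and joins them to static attributes; all column names and activity types are hypothetical, and the real OU Analyse pipeline is documented at the link above.

```python
# A minimal sketch, on made-up data, of the feature shape described above:
# static/demographic attributes joined with weekly VLE interactions grouped
# by activity type. Column and activity-type names are hypothetical.
import pandas as pd

vle_log = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2],
    "week":       [3, 3, 4, 3, 4],
    "activity_type": ["forum", "quiz", "resource", "resource", "forum"],
    "clicks":     [5, 2, 7, 1, 3],
})
static = pd.DataFrame({
    "student_id": [1, 2],
    "age_band": ["35-55", "0-35"],
    "previous_attempts": [0, 1],
    "credits_registered": [60, 30],
})

# One row per student per week, one column per activity type.
weekly = (vle_log
          .pivot_table(index=["student_id", "week"], columns="activity_type",
                       values="clicks", aggfunc="sum", fill_value=0)
          .reset_index())

features = weekly.merge(static, on="student_id", how="left")
print(features)
# `features` is the kind of table a weekly model could score to estimate
# each student's likelihood of submitting the next assignment.
```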

Analytics4Action Evaluation Framework

Much of the current literature seems to be focussed on testing and applying learning analytics approaches using convenience sampling. It often lacks a robust design-based research (Collins, Joseph, & Bielaczyc, 2004; Rienties & Townsend, 2012) or evidence-based approach (e.g., A/B testing, randomised controlled trials, pre/post retention modelling) to testing and validating claims and arguments (e.g., Hess & Saxberg, 2013; McMillan & Schumacher, 2014; Rienties et al., 2016; Slavin, 2008). Indeed, according to Collins et al. (2004, p. 21), design experiments should “bring together two critical pieces in order to guide us to better educational refinement: a design focus and assessment of critical design elements”. Furthermore, current applications of learning analytics focus on single studies, or a limited combination of case studies in a single discipline (Arbaugh, 2014; Rienties et al., 2015).

Building on the leading learning analytics work described in the previous section, OU strategic leadership has provided substantial support to implement a university-wide approach to learning analytics. One key project strand has been to develop an academically sound institutional learning analytics framework that will enable practitioners to evidence improvements to the student experience more effectively. Analytics4Action has taken a bottom-up approach, working for a period of two years with 18 large-scale introductory modules drawn from the seven faculties at the OU, representing disparate disciplines, and engaging key stakeholders within their respective contexts.

During the 2014/15 period, fifteen of the modules were presented twice (the first presentation began in autumn 2014 and the second in winter/spring 2015) and three modules were presented once. The total number of registered students for these 18 modules over the 2014/15 period was 42,848. Registrations on the 18 modules are expected to be similar in the 2015/16 phase of the project. As illustrated in Figure 1, a holistic A4AEF has been developed to unpack, understand and map the six key steps in the evidence-based intervention process. This process has been used with each module in order to frame the evaluation of the interventions in terms of potential success criteria. Specific activity and/or decision making takes place at each step.

Figure 1 

Analytics4Action Evaluation Framework.

Key metrics and drill-downs

The first step is to bring together the key stakeholders in the module, such as teachers, learning analysts and administrators, for the purpose of presenting, unpacking and understanding the learning data available from various VLE and related systems. This is termed a data touch point meeting, and the project held four of these with each module over a one-year period. These data touch points featured a review of weekly real-time data (indicated by the double line) and annually collected data (blue single line) about the students’ progression and usage of specific VLE tools, as indicated in Figure 2. The long-term intention is to integrate data sources so as to create a more coherent picture of the complex dynamics of students’ journeys, yet at present the data sources used are held on several institutional systems. The mix of data wranglers, data interpreters, multi-media designers, learning design experts, student-support staff, tutors and representatives of the two predictive analytics models mentioned previously who attended the data touch point meetings provided a broad range of expertise with which to unpack and translate the raw data and visualisations presented.

Figure 2 

Data sources used during data touch points.

Menu of response actions

As a second step in A4AEF, we provide a menu of potential response actions that teachers can take, based upon learning analytics data and visualisations. Based upon the discussions during the data touch point meetings, teachers have a range of options at their disposal to further improve the learning design of their module and the learner support provided. They would also be expected to make use of the institutional evidence hub of previous case studies (discussed later) and insight from strategic analysis. Whilst the range of options available to fine-tune their learning design is potentially infinite (Conole, 2012; Cross et al., 2012; Rienties et al., 2015), not all of these will be feasible within the confines set by cost, practicality, available staff time, etc.

The A4A project has categorised this menu of response actions based upon the Community of Inquiry (CoI), initially developed by Garrison and colleagues (2000; 2007). This categorisation is intended to help guide the choice of intervention for teachers. In the CoI framework, a distinction is made between three types of presence: cognitive presence, social presence and teaching presence. Cognitive presence is defined as “the extent to which the participants in any particular configuration of a community of inquiry are able to construct meaning through sustained communication” (Garrison et al., 2000, p. 89). In other words, cognitive presence is the extent to which learners use and apply critical inquiry. Social presence is defined as the ability of people to project their personal characteristics into the community, thereby presenting themselves to the other participants as “real people”. A large body of research has found that for learners to critically engage in discourse in blended and online settings, they need to create and establish a social learning space (Cleveland-Innes & Campbell, 2012; Ferguson & Buckingham Shum, 2012; Rienties et al., 2015).

The third component of the Community of Inquiry framework is teaching presence. Anderson, Rourke, Garrison, and Archer (2001) distinguished three key roles of teachers that impact upon teaching presence in blended and online environments, namely: 1) instructional design and organisation; 2) facilitating discourse; and 3) direct instruction. By designing, structuring and planning (e.g., establishing learning goals, process and interaction activities, netiquette, learning outcomes, and assessment and evaluation strategies) before an online module starts (Anderson et al., 2001), a teacher can create a powerful learning design in which their voice and their teaching intentions are embedded in the design, materials and environment.

Whilst a module is in presentation, a teacher can either facilitate discourse or provide direct instruction to encourage critical inquiry. According to Anderson et al. (2001, p. 7), “facilitating discourse during the course is critical to maintaining the interest, motivation and engagement of students in active learning”. Direct instruction allows teachers to achieve teaching presence by providing intellectual and scholarly leadership and sharing their domain-specific expertise with their learners, for example, by posting in discussion forums or contributing to online videoconferences.

Alongside these three types of presence, recent research has indicated that a fourth, separate category may be needed to complement the CoI, namely emotional presence (Cleveland-Innes & Campbell, 2012; Stenbom, Cleveland-Innes, & Hrastinski, 2014). In a study consisting of 217 students from 19 modules, Cleveland-Innes and Campbell (2012) found a distinct, separate factor for emotional presence (e.g., “I was able to form distinct impressions of some course participants”; “The instructor acknowledged emotion expressed by students”). In a follow-up study in a mathematics after-school tutorial in Sweden, Stenbom et al. (2014) found that emotional presence was a clearly distinct category in online chats that encouraged social interactions between pupils and tutors. In a recent literature review of more than 100 studies, Rienties and Alden Rivers (2014) identified approximately 100 different emotions that may have a positive, negative or neutral impact on learners in online environments.

Cleveland-Innes and Campbell (2012, p. 283) defined emotional presence as “the outward expression of emotion, affect, and feeling by individuals and among individuals in a Community of Inquiry, as they relate to and interact with the learning technology, module content, students, and the instructor”. As argued by Rienties and Alden Rivers (2014), it is important for both students and teachers to generate an inclusive learning climate, where students feel safe and empowered to contribute and participate. In particular when linking teaching presence with emotional presence, it is essential that institutions and teachers consider how to provide emotional support. In line with Rienties and Alden Rivers (2014), we adjusted the Community of Inquiry model by adding emotional presence, as shown in Figure 3.

Figure 3 

Community of Inquiry including emotional presence (Rienties & Alden Rivers, 2014).

In Table 1, we provide several examples of potential interventions based upon the experiences of the A4A project. We distinguish between learning design interventions, made before a (subsequent) module presentation based on learning analytics insights from the (previous) presentation, and in-action interventions, made during a module presentation based upon insights from real-time learning analytics data. While the former is common practice at the OU and fits well with principles of design-based research to continuously update learning designs after evaluating quantitative and qualitative data, active intervention strategies within a module presentation using real-time learning analytics are currently less common at the OU.

Cognitive Presence
  Learning design (before start):
  • Redesign learning materials
  • Redesign assignments
  In-action interventions (during module):
  • Audio feedback on assignments
  • Bootcamp before exam
Social Presence
  Learning design (before start):
  • Introduce graded discussion forum activities
  • Group-based wiki assignment
  • Assign groups based upon learning analytics metrics
  In-action interventions (during module):
  • Organise additional videoconference sessions
  • One-to-one conversations
  • Cafe forum contributions
Teaching Presence
  Learning design (before start):
  • Introduce bi-weekly online videoconference sessions
  • Podcasts of key learning elements in the module
  • Screencasts of “how to survive the first two weeks”
  In-action interventions (during module):
  • Organise additional videoconference sessions
  • Call/text/skype students-at-risk
  • Organise catch-up sessions on specific topics that students struggle with
Emotional Presence
  Learning design (before start):
  • Emotional questionnaire to gauge students’ emotions
  • Introduce buddy system
  In-action interventions (during module):
  • One-to-one conversations
  • Support emails when making progress

Table 1

Potential intervention options (learning design vs. in-action interventions).

Menu of protocols

After a teacher has selected a particular learning design or active-intervention strategy, the third step is to determine which research protocol will be used to understand, unpack and evaluate the impact of that strategy. While in many fields such as medicine, agriculture, transportation, or technology, the process of development, rigorous evaluation, and sharing of results with practitioners using randomised experiments (Torgerson & Torgerson, 2008) and A/B testing (Siroker & Koomen, 2013) has led to unprecedented innovation over the last 50 years (Slavin, 2002), educational research, and learning analytics in particular, have yet to adopt evidence-based research principles en masse (Hess & Saxberg, 2013; McMillan & Schumacher, 2014; Rienties et al., 2016; Torgerson & Torgerson, 2008).

A major potential problem of descriptive or correlational studies is the limited ability to generalise findings beyond the specific context in which a particular learning analytics study has been conducted (Arbaugh, 2014; Hattie, 2009; Rienties et al., 2015). The issue of selection bias has been a particular concern in early learning analytics studies. A recent meta-analysis of 35 empirical studies (Papamitsiou & Economides, 2014) indicated that most learning analytics studies have focussed on the analysis of a single module or discipline, using the contexts of teachers or learners who are keen to share their practice for research. Although these studies provide relevant initial insights into the principles of data analysis techniques, as well as testing proofs of concept for data visualisation tools, without random assignment of learners or teachers into two or more conditions it is rather difficult to provide evidence of impact (McMillan & Schumacher, 2014; Rienties et al., 2016; Slavin, 2008; Torgerson & Torgerson, 2008).

At the OU, teachers will have the option to select between five protocols to implement their intervention strategies: 1) apply the intervention to all students in the cohort; 2) quasi-experimental design; 3) pilot study with a sub-sample; 4) A/B testing; and 5) randomised controlled trials (RCTs). The first protocol is rather straightforward, whereby all students will potentially be able to benefit from the chosen intervention strategy. The second option, quasi-experimental design, has a range of variations, starting from the simple design of comparing the results of an intervention with a previous (or future) presentation of the same module. Whether or not a particular intervention has worked depends on the kinds of relations we are looking for and how these are measured. Furthermore, whether the two cohorts are similar in terms of individual characteristics at the start needs to be verified. An alternative quasi-experimental option is a switching replications design, whereby the first half of the cohort (Group A) receives a planned intervention in, say, week 3, and the other half of the cohort (Group B) receives the same intervention in week 4. In this way, Group A forms the intervention group during week 3 and can be compared and contrasted with the control Group B in terms of their behaviour, as sketched below.
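A minimal sketch of the week-3 comparison in such a switching replications design, using fabricated engagement counts; the measure (weekly clicks), group sizes and test are illustrative assumptions rather than an A4A prescription.

```python
# Illustrative sketch of the switching-replications comparison described
# above, using fabricated engagement data. In week 3, Group A has received
# the intervention and Group B has not yet, so B serves as the comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a_week3_clicks = rng.poisson(lam=22, size=200)  # intervention group
group_b_week3_clicks = rng.poisson(lam=18, size=200)  # not-yet-treated group

t, p = stats.ttest_ind(group_a_week3_clicks, group_b_week3_clicks)
print(f"t = {t:.2f}, p = {p:.4f}")
# In week 4 the comparison is repeated after Group B receives the same
# intervention; similar effects in both replications strengthen the evidence.
```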

The third option, a pilot study within a module presentation, might be an attractive option for testing a proof of concept amongst a small number of groups or members of staff. By starting with a small sub-sample, crucial first hands-on experience with a new approach can be gained and analysed. If the proof of concept is successful, a wider-scale implementation can bring further evidence of the feasibility of the concept. If the pilot is unsuccessful, it will give teachers and researchers an indication of where to look in order to further improve their approach. At the same time, given that in most pilot studies the assignment of groups or members of staff is not always randomised, one has to remain cautious in terms of causation and generalisation of findings across a wider context.

In the fourth option, A/B testing (Hess & Saxberg, 2013; Siroker & Koomen, 2013), both groups receive a similar treatment at the same point in time but, for example, the content, look-and-feel or navigation of a learning unit is slightly altered. To illustrate this, group A gets exactly the same content as group B, with the only difference being that a video on “self-assessment and reflection” in the learning unit is positioned directly after a first reflection task, while for group B this video is placed before the self-assessment task two pages later. In this way, we provide exactly the same content to both groups of students, but we can start to track what the optimum position is for the respective video in the learning unit, and for which types of learners, as illustrated in the sketch below. Ideally, both A/B interventions should be educationally valuable and should not adversely disadvantage the educational experience of the “other” group.
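The sketch below illustrates, with invented counts, how engagement with the repositioned video might be compared between the two variants; the contingency table and chi-square test are one plausible analysis, not a prescribed A4A procedure.

```python
# A sketch, on invented counts, of how engagement with the repositioned
# video could be compared between the A and B variants of the learning unit.
from scipy.stats import chi2_contingency

# rows: variant A, variant B; columns: watched video, did not watch
contingency = [[310, 190],   # A: video directly after the reflection task
               [255, 245]]   # B: video before the self-assessment task

chi2, p, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# Randomly assigning students to A or B (and logging who watched, and for
# how long) lets the optimal position be estimated per learner group.
```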

A final scenario could be to run a full RCT (Rienties, Giesbers, Lygo-Baker, Ma, & Rees, 2014; Slavin, 2008; Torgerson & Torgerson, 2008), whereby, for example, we randomly give a third of the cohort the learning unit previously described with two reflective videos on peer- and self-assessment, with integrated questions and feedback in the videos; a third of the cohort the same two videos without the feedback mechanisms; and a third of the cohort the same single video as before. By tracking the behaviours of learners in the two experimental conditions in comparison to the learners in the control condition, we should be able to determine the causal effects of the type and intensity of the intervention. Note that Rienties et al. (2016) have provided two worked-out examples of how institutions could implement these scenarios in their own practice.
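As a sketch of how such a three-arm trial could be assigned and analysed, the following uses fabricated scores and a one-way ANOVA; condition means, cohort size and the outcome measure are purely illustrative assumptions.

```python
# A minimal sketch of the three-arm RCT outlined above, with fabricated
# outcome scores: condition labels and effect sizes are purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cohort = np.arange(3000)
rng.shuffle(cohort)
arms = np.array_split(cohort, 3)   # random thirds of the cohort

# Fabricated end-of-unit scores per condition
videos_with_feedback = rng.normal(68, 12, len(arms[0]))
videos_without_feedback = rng.normal(65, 12, len(arms[1]))
single_video_control = rng.normal(63, 12, len(arms[2]))

f, p = stats.f_oneway(videos_with_feedback,
                      videos_without_feedback,
                      single_video_control)
print(f"F = {f:.2f}, p = {p:.4f}")
```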

Outcome analysis and evaluation

The fourth step of the A4AEF is to determine the impact of the respective intervention(s) on specific learning activities, learning processes, and/or learning outcomes. Data will have been collected before, during and after the intervention, as defined by the evaluation plan (see Figure 1). Depending on the scope of and underlying reasons for the interventions, the appropriate focus of the outcome analysis will be at a fine-grained level (e.g., in the A/B example, comparing how many times a video was watched, for how long, and by whom) or at a higher, more outcome-driven level (e.g., the number of students who completed the learning unit and passed the module in the RCT example). In this phase, it is crucial for teachers to determine beforehand the key variables that the intervention(s) are expected to influence. Common statistical techniques need to be applied in order to determine whether the interventions had a statistically significant impact on the target variables, while controlling for common confounding factors (e.g., prior education, motivation, VLE engagement) that might also influence the target variables. Finally, given that the population of most modules in Analytics4Action is rather large, it is important that effect sizes (e.g., Cohen’s d, eta squared) are reported, as changes to modules might have a statistically significant effect but only a small effect size (Hattie, 2009; Slavin, 2008).
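To illustrate the effect-size reporting referred to above, the sketch below computes Cohen’s d and eta squared from fabricated outcome data; with cohorts of this size, statistical significance alone can overstate practical impact.

```python
# A small sketch of effect-size reporting: Cohen's d for a two-group
# comparison and eta squared for a multi-group comparison. Data are fabricated.
import numpy as np

def cohens_d(a, b):
    """Standardised mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled_sd

def eta_squared(*groups):
    """Proportion of total variance explained by group membership."""
    grand_mean = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = sum(((g - grand_mean) ** 2).sum() for g in groups)
    return ss_between / ss_total

rng = np.random.default_rng(3)
treated, control = rng.normal(66, 12, 800), rng.normal(64, 12, 800)
print("Cohen's d:", round(cohens_d(treated, control), 3))
print("eta squared:", round(eta_squared(treated, control), 3))
```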

Institutional Sharing of Evidence

A range of repositories is available to OU staff through which they can share evidence and case studies. The OU Evidence Exchange hub is a searchable repository of information, open only to OU staff, which is being used to record institutional knowledge relating to teaching and learning. The exchange seeks to give teachers who come afterwards access to previous interventions, information as to whether they were successful or not, and any learning points. The Evidence Exchange hub is structured on the basis of the Community of Inquiry model (see Figure 3) and is designed to collate evidence in one place in a common format so as to inform the development of a self-sustaining community of interest. The inclusion of a step specifically for evidence sharing is an attempt to prompt teachers to add their studies, and recognises that evidence hubs such as the Evidence Exchange rely principally on voluntary contributions from staff. Staff are also encouraged, as a matter of good practice, to share their evidence with a wider audience, such as the Open Education Resources Hub (http://oerresearchhub.org/), the OU’s Scholarship platform and the LACE Evidence-Hub (Clow et al., 2014).

Deep dive analysis and strategic insight

In the longer term, by comparing and contrasting these different interventions over time across a range of modules from various disciplines, strategic insight will become available about which types of interventions, under which conditions and contexts, have a positive effect on retention and learner satisfaction. Through periodically drawing together these insights, improvements can be made to the key metrics used to monitor performance, and the menu of response actions can be refined around those actions that have the most value in positively affecting the learning experience and outcomes. In addition to the collation of evidence from these practice-based studies, institutional-level deep-dive analyses are undertaken, resulting in recommendations for changes to both the key metrics and the response actions.

Exemplar: Use of predictive analytics to inform tutor support

This section presents an exemplar of the intervention framework in use. It illustrates the way in which data about potentially struggling or at-risk students received from OU Analyse is being used in combination with tutors’ experiential knowledge and how this can inform supportive interventions within a module implementation. Drawing on data based on online behaviour, weekly predictive reports were generated by OU Analyse for a first year undergraduate health and social care module (N = 3000+) to predict the likelihood of each student submitting the next assignment. A small group of 5 tutors volunteered to pilot the use of weekly predictive reports to inform their tuition, which involved around 100 students. The pilot aimed to develop an approach to tuition in which any student who appeared to be struggling would be contacted by the tutor. This was an attempt to project teaching presence so as to help support the learner.
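As a purely hypothetical illustration of how such weekly predictions might be turned into a tutor-facing flag list, the sketch below filters a small invented report on a probability threshold; the column names, threshold and format are assumptions, not the actual OU Analyse report.

```python
# Hypothetical sketch of a weekly flagging rule over OU Analyse-style
# predictions for one tutor group; names and threshold are assumptions.
import pandas as pd

weekly_predictions = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "p_submit_next_assignment": [0.92, 0.41, 0.78, 0.23],
    "vle_clicks_last_week": [54, 3, 27, 0],
})

at_risk = weekly_predictions[
    weekly_predictions["p_submit_next_assignment"] < 0.5]
print(at_risk[["student_id", "p_submit_next_assignment"]])
# The tutor then interprets these flags against their knowledge of each
# student before deciding whether, when, and how to make contact.
```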

Tutors received weekly predictive reports from OU Analyse throughout the module. It was then left to the tutor to interpret the results from the predictive model in relation to their knowledge of the student and their practice experience, and to choose when, or whether, to act on the information by contacting the student. This mediation of the data was important because, as highly experienced tutors, they already possessed an understanding of the student behaviours and characteristics that suggest a student might be struggling with their studies. Tutors knew, for example, that assignment quality, level of online tutor group activity, employment circumstances or discussions with (or inability to contact) the student may indicate possible difficulties. In addition to this generalised knowledge of typical patterns, the predictive data were interpreted in light of tutor knowledge of each student’s specific circumstances. While the predictive data may suggest a disengaged student experiencing difficulties, tutors may know, for example, that the student is not struggling but is on holiday, has taken a strategic decision to drop an assignment, or is choosing not to visit online forums, preferring instead to work alone. As a consequence, whilst the predictive analytics often confirmed tutor predictions, they were regarded by tutors as a crude measure compared with the detail of experiential knowledge.

Predictive analytics influenced tutor support in two ways. First, while tutors reported that they were already providing additional support to many students who were highlighted in the data as ‘high risk’, predictive analytics helped them to decide when to intervene; for example, the analytics helped identify sudden or unexpected changes in behaviour that the tutor recognised as indicating they needed to contact the student to check that ‘everything was all right’. Second, the weekly delivery of predictive analytics encouraged tutors to engage in a regular cycle of systematically considering each student’s progress.

From the outset, predictive data and the initiation of further support were treated cautiously by tutors. Informing a student that it was predicted that they would not complete, or providing overbearing support, was seen as potentially undermining student retention. If an intervention appeared warranted, tutors would send an open-ended e-mail enquiring “how are you getting on?” but not discussing the predictive data. If this was ignored, tutors would step up their efforts using other channels of communication (text message, for example). However, they were reluctant to push enquiries beyond three attempts, as this could be seen as bullying the student; instead, they would refer the matter to the OU Student Support Team.

Tutors reported that the resulting contact with students was often positive or at least, productive. Once data presentation issues were addressed, tutors reported that the use of the reports did not add significantly to their workload.

Discussion

In this article, we have argued that one of the greatest challenges for learning analytics research and practice still lies ahead of us, namely how teachers and administrators can use learning analytics to make successful interventions in their own practice. Demand for actionable insights to help support module and qualification design is currently strong (Conole, 2012; Cross et al., 2012; Rienties et al., 2015; Tobarra et al., 2014). The use of VLE data can range from investigating the student experience of assessment to contrasting the effectiveness of ‘revision designs’.

At the OU, strategic leadership has provided substantial support to implement a university-wide approach to learning analytics. By working with 18 large-scale introductory modules for a period of two years across five of the faculties at the OU, Analytics4Action provides a bottom-up approach for working together with key stakeholders within their respective contexts. In total, more than 45,000 students in these 18 modules were included in the academic year 2014/15. The A4AEF distinguishes six key phases that teachers and institutions will need to go through in order to translate the insights from learning analytics into actionable interventions that can then be effectively evaluated for their impact:

  1. Reviewing key learning analytics metrics;
  2. Implementing response actions;
  3. Determining protocols;
  4. Outcome analysis and evaluation;
  5. Sharing evidence;
  6. Building strategic insight.

In the first step, it is essential to bring together the key stakeholders in the module who can unpack and understand the key trends from the various VLE and related systems. Given that institutions collect vast amounts of data, it is important that stakeholders work together to transform these data into information and knowledge about where, when, and for whom (potential) interventions can have the most impact.

Once the key bottlenecks in learning design, processes and outcomes are identified, in the second step in A4AEF it is important to provide teachers with a list of potential response actions that they can take to further improve the learning design of their module and support for their students. While the range of options available to fine-tune their learning design is potentially infinite (Conole, 2012; Cross et al., 2012; Rienties et al., 2015), not all of these will be feasible within the confines set by cost, practicality, available staff time, etc. We have argued that the Community of Inquiry (CoI) developed by Garrison and colleagues (2000; 2007) can provide a solid theoretical framework to help teachers make informed learning design interventions.

As a third step, a teacher has to decide which protocol is going to be used to test the impact of the chosen intervention. At the OU, teachers will have the option to choose between five pre-defined protocols to implement their intervention strategies: 1) apply the intervention to all; 2) quasi-experimental design; 3) pilot study with a sub-sample; 4) A/B testing; and 5) RCTs. While from a research perspective choosing options 4 or 5 may make the most scientific sense, in reality it is often difficult for teachers to implement these designs for both practical and potentially ethical reasons. Our exemplar case study used a pilot study to test a new learning analytics approach amongst a small group of teachers, which provided crucial, fine-grained results that can help teachers further improve their student support in the next presentation of their module.

The fourth step of the A4AEF is to determine the impact of the respective intervention(s) on specific learning activities, learning processes, and/or learning outcomes. When choosing Options 4 or 5 in the third step of A4AEF, this is a relatively straightforward exercise, while for Options 1–3 more effort and caution are needed when interpreting the data. The final two steps involve maximising the use of what has been learnt during Stage 4 by first contributing to institutional knowledge using repositories such as a searchable evidence exchange hub that enables teachers to see what has been done before, how successful it was, and what was learnt; and second, by providing data to develop strategic insight.

A major advantage of analysing the interventions using a common evaluation framework is that the potential positive and negative impacts of the interventions across modules can be contrasted, and possibly compared when sufficiently strong protocols are used. For example, when six modules have primarily focussed on interventions relating to cognitive presence, six on teaching presence, and six on emotional presence, it may become possible to compare the relative merits of these interventions. However, one has to remain vigilant against oversimplifying the potential merits of a particular intervention approach.

By working together in interdisciplinary teams consisting of teachers, learning designers, learning analytics specialists, educational psychologists, data interpreters, IT specialists and multi-media designers, we aim to continuously refine the learning experiences of our large cohorts of learners to meet their specific learning needs in an evidence-based manner. In the next 3–5 years, we hope that a rich, robust evidence-base will be available which will demonstrate how learning analytics approaches can help teachers and administrators around the globe to make informed, timely and successful interventions that will help each learner achieve their learning outcomes. We warmly welcome feedback from the readers of JIME on this article, and hope that our conceptual framework will lead to a rich discussion of how institutions, researchers and teachers can “measure” and unpack the impacts of interventions in real-world educational settings.

Competing Interests

The authors declare that they have no competing interests.