Citation Details: Conole, G. and Oliver, M. (2002). Embedding Theory into Learning Technology Practice with Toolkits. Journal of Interactive Media in Education, 2002 (8). ISSN: 1365-893X

Published: 25 July 2002

Editor: Simon Buckingham Shum (Open U.)

Reviewers: Benay Dara-Abrams (BrainJolt), Martyn Cooper (Open U.), Simon Buckingham Shum (Open U.)

Embedding Theory into Learning Technology Practice with Toolkits

Grainne Conole1 and Martin Oliver2

1Research and Graduate School of Education
University of Southampton
Southampton, SO17 1BJ

2 Department of Education and Professional Development
University College London
1-19 Torrington Place
London, WC1E 6BT

Abstract: Expert and theoretical knowledge about the use of learning technology is not always available to practitioners. This paper illustrates one way in which practitioners can be supported in the process of engaging with theory in order to underpin practical applications in the use of learning technologies. This approach involves the design of decision-making resources, defined here as 'toolkits'. This concept is then illustrated with three practical examples. The ways in which this approach embeds specific theoretical assumptions is discussed, and a model for toolkit specification, design and evaluation is described.

Keywords: Frameworks, toolkits, learning technology, practitioners, theory and practice

Interactive Demonstration: This article describes the Media Adviser toolkit, which is available to download, and the Evaluation Toolkit.


Learning technology is an inherently multidisciplinary field, and its stakeholders include researchers from different fields (educational research, cognitive psychology, instructional design, computer science, etc.) as well as teaching subject-experts who engage with it as 'end users' or 'consumers'. This multi-disciplinarity is a common feature of emergent research areas and, in one sense, is a strength. However, if we are to capitalise on this richness of expertise, it is necessary to work towards a clear theoretical underpinning that allows these diverse cultures to engage with and develop the use of learning technology.

An important starting point for any discussion of this type is the realisation that learning technology use is shaped by contextual factors. Beetham argues, for example, that "Learning technologists have always started from the 'practical concerns of the classroom' and we tend to claim validity for our activities according to their impact in the classroom," and goes on to contend that within this area, "the majority of researchers are concerned to find relationships among the inputs to and outcomes of a learning process which is very poorly theorised, and that this has serious consequences for the future of learning technology research and practice" (Beetham, 2000a). Similarly, Oliver concludes that "an appropriation model of theory use implies that purpose will be determined (at least in part) by situationally specific issues such as the personal background and current needs of its user" (Oliver, 2000).

However, if the use of theory in learning technology is strongly shaped by context, how can academics who are new to this area be supported as they start to engage with it? Is it possible to provide general support and guidance whilst remaining sensitive to the situational influences described above? This paper will explore these issues through one pragmatic approach to applying theory to practice: the development of 'toolkits' that support decision making and which are derived from specific theoretical perspectives. The paper will begin with a definition of the term toolkit, along with related concepts (frameworks, wizards and models). Three toolkits will then be described, specifying the theoretical perspective that they draw upon, the methodology behind their development and use, and extracts from evaluation studies of their use with practitioners.

Resources for supporting decision-making

A range of aids and resources to facilitate decision-making processes has been developed to support the use and integration of learning technologies. As a consequence, the terms 'tools', 'toolkits', 'frameworks', 'good practice' and 'model' abound, but are rarely used with any consistency. Indeed, there is considerable confusion and overlap within the literature on the precise nature of these types of resources. Therefore, this section attempts to provide some definitions for these terms, along with illustrative examples of the different types of decision-making resource.


Tools

Any attempt to define 'tools' is necessarily broad, so for the purposes of this paper the term is used in the sense of mediating artefacts, drawing on the tradition of Activity Theory (Kuutti, 1997). Within this tradition, tools are artefacts located in a socio-historical context that form an integral part of human action. Such tools may be conceptual or embodied.

Good practice

The notion of good practice (or "best practice", in some uses) is ubiquitous yet ill-defined. Whilst it is usually used to denote guidelines that practitioners are exhorted to follow, this disguises the fact that the term also carries a moral message. Practice can only be judged to be "good" (or otherwise) in relation to a framework of values; thus for this particular paper, we will take "good practice" to denote practice that closely follows the tenets of a given theoretical perspective.


Models

Models are representations, usually of systems. These are frequently visual representations, although formal models are more likely to be syntactic (or derived from an underlying syntactic representation), often being defined mathematically. Models may be tools, in that they can be used to carry out analyses or may permit certain assumptions to be expressed. Equally, however, they may be the object (i.e. purpose) of an activity, in that it may be necessary to construct a model of a system in order to develop an explicit understanding of how it works.


Aids to decision-making range from highly restrictive 'templates' or 'wizards', which provide high levels of support and step-by-step guidance but little possibility of user-adaptation, through to 'theoretical frameworks', which provide a context and scope for the work but leave the user to devise their own strategy for implementation.

Templates and Wizards

By way of a contrast to theoretical frameworks, another approach to supporting the use of learning technology involves the use of highly structured decision-making systems: templates and wizards. Generic templates are found in most software packages. They can provide structured, pre-defined layouts or structures for the user to base their document or presentation on. A wizard is a software tool that makes decisions on behalf of the user, based on solicited information and drawing on pre-defined templates. In most cases, the way in which these outputs are generated is hidden from the user. As a result, wizards and templates are relatively easy to use, but are restrictive in the range of outputs that can be achieved, and allow very little engagement with issues or response to the values and assumptions built into the system. There are many examples of templates and wizards that provide a generic structure that guides users through a set of options. Online shopping sites, book stores and travel centres often have 'wizards' which guide the user through a series of options or interests, helping them to focus in on topics of particular interest. It is evident that these types of semi-structured forms of support and guidance are becoming increasingly important as a way of guiding users through the plethora of information available online.


Frameworks

In contrast to templates and wizards stand frameworks, which provide a theoretical context and scope for work but leave the user to devise their own strategy for its implementation.

Within this context, a number of pedagogic frameworks have been developed to support learning technology. All develop from a particular theoretical viewpoint (whether explicitly or implicitly), aiming to encourage the application of good practice according to a specific pedagogical approach. For example, Conole and Oliver (1998) have developed a framework for integrating learning technologies that builds on Laurillard's conversational framework. This provides a structured approach to integrating learning materials into courses. The framework is designed to support the process of 're-engineering' a course (Nikolova & Collis, 1998). It provides a framework in which various features of an existing course can be described and evaluated, allowing an analysis of strengths and weaknesses, the suitability of different media types (in particular the different educational interactions they support) and limiting factors, including resource issues and local constraints. The framework can be applied as a series of stages, starting with the review of existing provision, working through a process of shortlisting and selection of alternative teaching techniques, and concluding with a mapping of the new course.

The relationship between Frameworks and Wizards

Frameworks and wizards, as outlined above, share a common aim of supporting a user's engagement with an area. Clearly, however, they work at very different levels and make different assumptions about the type of support the user might need. Theoretical frameworks provide a structure and vocabulary that support the exploration of concepts and issues. Wizards provide automated processes that support the production or selection of resources, and are predicated on the assumption that the user is primarily concerned with efficiency rather than critical engagement. Fundamental concerns (for example, about the suitability of using a particular type of resource) are either ignored or assumed by the wizard.

These two positions can be characterised as extremes of one continuum. At one extreme there are frameworks, which are flexible and versatile, but which offer relatively little support for practitioners attempting to engage with them. At the other there are wizards and templates, which are highly restrictive, but (by virtue of the constraints that they impose) are able to offer much closer support and guidance to users.

Between these extremes lie a range of resources, including checklists, guidelines and step-by-step tutorials. Toolkits can be viewed as a mid-point on this continuum: they are decision-making systems based on expert models (Oliver and Conole, 1999). (In this context, a model is taken to be a simplified account of reality that can be used to explain or predict certain features. In toolkits, models tend to be of design processes.) Toolkits are more structured than frameworks; they use a model of a design or decision-making process, together with tools provided at key decision-making points, to help the user engage with a theoretical framework and apply it in the context of their own practice. Each of the tools that is drawn upon as the user works through the process model is designed to help the user to access a knowledge base in order to make informed decisions. The format of toolkits means that they can be used in a standard, linear fashion, or can be "dipped into" by users whose level of expertise is stronger in some areas of the design process than others.

In summary, toolkits represent a mid-point between facilitated, uncritical development of resources and a deep engagement with fundamental issues and theories. They are not intended to replace expertise, although they are intended to reduce the need for prior expertise before practitioners are able to engage with fundamental issues in a meaningful way. As such, they can be viewed as a stepping-stone between uncritical and autonomously critical engagement with an area.

Are toolkits expert systems?

From the discussion of earlier drafts of this paper, it became clear just how closely toolkits are related to expert systems. The close relationship arises from the shared aim of both types of resource: to encapsulate methods and theory within a software tool. In addition, both types of resource embody "claims" about the world that can be reverse-engineered through claims analysis. What, then, makes it appropriate to differentiate between these types of intelligent design aid?

Toolkits and expert systems differ in four important respects. Firstly, unlike most expert systems, the emphasis with a toolkit is not on providing answers or knowledge in response to a query. Instead, the focus is on modelling the user's practice. The creation and analysis of these models then forms the basis for the creation of plans, knowledge and understanding that is directly related to the user's context and cultural practices. (Importantly, the modelling of practice here is carried out by the user; it is not a model of the user created by the system.)

Toolkits are designed to address multiple distinct areas of knowledge, and in many ways the "tools" (in the sense given above, but here also alluding to the specific elements that are combined to make up a toolkit) designed for some of these areas could be described as being like expert systems, to the extent that some of them make recommendations based on a representation of an underlying knowledge base. However, we argue that one important defining feature of toolkits is the relationship between these modular elements: the focus is on sustained engagement, by the user, in order to draw together a string of such advice, rather than on one output generated by the system and presented to the user.

Another important distinction is that toolkits do not embody a claim to "expertise". Instead, their claims are far more tentative. Rather than attempting to be authoritative or definitive, toolkits are predicated on the basis of utility. Specifically, they are judged on how useful the system of classification used to represent the underlying knowledge base is in terms of supporting decision making. This represents an important move away from legitimation in terms of meta-narratives and towards a performative definition of value (Lyotard, 1979). However, unlike many other forms of commodified knowledge, it is the user, not the designer, who decides on the legitimacy of the representation. The descriptive systems, the frameworks drawn upon in the toolkit, simply act as a starting point that can be debated, adapted, revised and so on. Thus instead of being expert systems, they might more accurately be called "amateurs'" systems.

The plurality of that last point leads on to the fourth important distinction between expert systems and toolkits. Toolkits are about each user's domain, rather than one idealised domain. The framework and knowledge base provide a way of describing what users do in some area of practice (hence the link made to modelling, above). The knowledge base described at each step can, and we argue should, be altered by the user to reflect their own (rather than some idealised general) practice. Moreover, toolkits are designed so that each knowledge domain described can be extended, as well as amended, by the users. The implication of this is that toolkits cannot be expert systems as traditionally understood, although with sustained use, an individual user's adapted instance of a toolkit might conceivably be described as an expert model of their own practice.

Essentially, then, we see toolkits as having a new niche in terms of modelling end user expertise. Unlike expert systems, which present themselves as authoritative and definitive, toolkits adopt a more postmodern position on the problems of practice, celebrating difference, contextuality and a democratic form of interaction that allows the user to create and direct instead of being directed. In this sense, they are perhaps best located as a means of representing and sharing practice, rather than a way of privately receiving advice on one's own practice (cf. Beetham, 2002).

Importantly, this modelling of experience is an iterative process, which may (but does not always) start from an initial, general given model (a "starter for 10") of the domain. This gives rise to a fifth distinguishing feature of toolkits, which arises as a corollary of the above points: unlike systems which take a static snapshot of expertise, toolkits evolve (and reflect the users' evolving understanding of the domain as they do so).

The epistemology of toolkits

As the above discussion has begun to demonstrate, the distinction between toolkits and other kinds of decision support tools has its roots in a distinctive epistemological position. Whereas expert systems present an authoritative model of a particular realm of knowledge, toolkits are concerned with personal, contextual and often fragmented representations.

From discussions on an earlier draft of this article, it became clear that we needed to specify the 'classes' of knowledge toolkits are concerned with. When working with toolkits, there are two types of knowledge we are interested in, and more specifically, are interested in transforming:

Theoretical knowledge, such as models, frameworks, and so on. Here, we are concerned with the transformation of these forms of knowledge as 'given', legitimated by their acceptance within expert communities, into performative kinds of knowledge (such as a system) that can help guide the user towards particular things (methods, tools, etc.) that would be of use.

The tacit knowledge essential to professional practice (McMahon, 2000). Here, we are interested in making this knowledge explicit (or, more accurately, helping practitioners to make their own knowledge explicit). This is achieved by the elicitation of users' own practices as part of their use of the tool. This elicitation may be achieved syntactically; however, at other times, it involves the construction of visual or multimodal representations (see the Appendix for an example of this), echoing the new forms of scholarly discourse and rhetoric discussed by Ingraham (2000).

The latter in particular is novel in the context of decision support tools. Traditionally, most expert systems have been restricted to formalised, well-defined areas of knowledge that can be represented in relatively stable, conventional ways. However, it is in the context of the situated, ill-defined, poorly understood areas of tacit knowledge about professional practice that toolkits demonstrate their value, and this is perhaps their key benefit. Toolkits are interesting because they extend the range of existing decision support tools beyond individual instances of stable knowledge in well-defined areas and out into the murky waters of professional practice. This is achieved by putting the provisional nature of models centre-stage, allowing them to be adapted and refined in the light of users' beliefs and local situations, and also by emphasising that users should describe their own (situated) practice rather than attempting to work with an externally-imposed 'ideal' model. It also emphasises the importance of viewing toolkits as representations to be contested, of drawing the users' attention to their rhetorical and constituting effect, so that they may debate, refine or replace the models that are used to reflect their practices.

There are, inevitably, limits on the kinds of knowledge that it is appropriate for toolkits to address. The kind of codification envisaged thus far concentrates on design and decision making, and we suspect that this may well represent the appropriate scope for tools of this type.

A rationale for toolkits

In recent years, national policy has placed considerable emphasis on the embedding of new technology into the learning and teaching process. However, the embedding process is not trivial, and uptake has traditionally been patchy (Laurillard, Swift and Darby, 1993). One reason for this is the considerable range of skills that need to be acquired if embedding is to be carried out in a professional way (Phelps, Oliver, Bailey and Jenkins, 1999; Beetham, 2000b).

Despite current policies advocating the adaptation and re-use of existing learning materials and a more extensive use of learning technologies to support learning and teaching (e.g. the Dearing report, National Committee of Inquiry into Higher Education, 1997), examples of this are few and far between. This can be traced to a number of factors. In particular, the 'not invented here' syndrome (HEFCE, 1996) is no doubt still present. More important, perhaps, is the time and skill required to evaluate and then adapt materials, or to gain expertise in understanding and utilising learning technologies. This is compounded by the fact that finding these materials in the first place is a non-trivial exercise (although the growth of subject-specific information gateways, portals and guidelines to resources will go some way towards alleviating this problem). Barriers to uptake which have been identified include (Conole, Oliver and Harvey, 2000):

the problem of finding relevant materials in the first place

the difficulty of adapting other people's materials

issues of ownership and copyright

integration with other materials, including issues of style, definition and level

whether staff have the educational and/or technical skills required to evaluate, adapt and integrate materials

concern about the currency of materials, particularly their accuracy and whether they are up-to-date

Equally significant is the problem that large amounts of research and theory on the use of such resources remains unfamiliar to practitioners. Whilst it would be unreasonable to expect subject specialists to dedicate themselves to the study of and engagement with this area, problems can arise when these people begin to adopt learning technology. In such situations, it is important to consider how these practitioners can best be supported, and whether that support should focus on addressing basic needs (such as efficient production, suggesting the need for tools such as wizards) or should encourage them to engage at a deeper level. The continuum of support outlined earlier suggests that one solution to this problem may be to provide a series of steps, through which support is steadily phased out whilst flexibility and versatility is introduced. Toolkits are intended to act as a mid-point in this transitional process, allowing greater engagement than wizards and templates can, but providing more structure and support than a theoretical framework.

Another important feature of toolkits concerns the decision-making activities that the user can engage with whilst following the expert model. As noted, these are intended to support the user in the process of making informed decisions; they rely on a structured description of a relevant knowledge base that can be automatically searched (based on information elicited from the user) in order to suggest approaches that might be relevant. By layering progressively more detailed information on the options available, the user is able to follow up when and if this is required. For example, if presented with a shortlist of suggested methods, a user might immediately be able to reject familiar and unsuitable options simply from their title. Unfamiliar options might require short explanatory or illustrative information before they could be accepted or rejected, and some might warrant detailed discussions (perhaps including illustrative case studies). Because this simple layered structure presents information 'on demand', rather than expecting the user to wade through tracts of material that may or may not be relevant, toolkits are quicker and easier to use than handbooks, cookbooks or other traditional sources of advice and guidance. The implication of this is that toolkits should help to reduce the time required in planning work of this type. The aspiration is that toolkits can be used iteratively, with progressively more detailed analysis occurring once initial feedback about the viability of the initial design has been received.
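The layered, 'on demand' structure described above can be sketched in code. The following Python fragment is purely illustrative; the entries, field names and matching rule are invented for this sketch and are not taken from any of the toolkits. Each knowledge-base entry carries progressively more detailed layers (title, summary, detail), and a simple matcher returns only the layer the user asks for.

```python
# A hypothetical layered knowledge base of evaluation methods.
# Each entry offers three layers of detail, accessed on demand.
KNOWLEDGE_BASE = [
    {"title": "Interviews",
     "summary": "One-to-one discussion eliciting rich qualitative data.",
     "detail": "Suited to small groups; time-consuming to transcribe.",
     "suits": {"qualitative", "small-group"}},
    {"title": "Questionnaires",
     "summary": "Structured instrument for gathering data at scale.",
     "detail": "Quick to administer; needs careful question design.",
     "suits": {"quantitative", "large-group"}},
]

def suggest(knowledge_base, needs, layer="title"):
    """Return the requested layer of detail for each entry whose
    'suits' tags overlap the needs elicited from the user."""
    return [entry[layer] for entry in knowledge_base
            if needs & entry["suits"]]

# A user planning a small qualitative study first sees only titles,
# then requests the next layer of detail 'on demand'.
shortlist = suggest(KNOWLEDGE_BASE, {"qualitative"})
summaries = suggest(KNOWLEDGE_BASE, {"qualitative"}, layer="summary")
```

The point of the sketch is the shape of the interaction, not the content: the user sees a shortlist first and drills down only where needed, rather than wading through the whole knowledge base.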

In summary, toolkits are predicated on the assumptions that they will be:

derived from an explicit theoretical framework

easy-to-use for practitioners

able to provide demonstrable benefit

able to provide guidance, without being prescriptive

adaptable to reflect the user's practice and beliefs

able to produce outputs that reflect the local context

The development and evaluation of toolkits

The process of developing each toolkit consists of a number of steps, which can be couched as framing questions.

Assessment of need: Is there a need for a toolkit in this particular area to support practitioners?

All three of the toolkits outlined below fit this criterion. Learning technologies have had a significant impact in the areas of curriculum design, evaluation and information handling. Their introduction has added complexity and demanded new forms of expertise, to the extent that practitioners need the support and guidance that toolkits are designed to provide. With respect to curriculum design, for example, key issues for practitioners include making sense of the range of learning technologies which can be used to support learning and teaching, and receiving guidance on how to integrate these effectively with traditional tools and techniques. Understanding the precise impact of learning technologies, their strengths and weaknesses, and their potential costs and benefits are some of the evaluation topics which practitioners now need to address. In terms of information handling, the sheer scale of the resources and information now available, along with increasingly sophisticated online information tools, gateways and portals, means that practitioners also need guidance on and understanding of how to utilise these.

Theoretical underpinning: What theory and models are relevant to the toolkit?

This step makes explicit the underlying theory and expert model to be used by the toolkit, providing a frame of reference and an initial structure for decision making.

Toolkit specification: How can the range of options available at each stage be translated into a practical but flexible form of guidance for non-experts?

A rough outline of the toolkit is drawn up, based on the framework, which will include the description and structuring of the options (knowledge base) open to users at each decision-making step. The toolkits are designed to be easy to use, with information structured in layers of increasingly detailed material so as to support flexible use, allowing users to bypass sections with little or no relevance to them or engage deeply with content that they find important.

Toolkit refinement: How useful and flexible is the toolkit?

At this stage, a prototype of the toolkit is tested with end users and evaluated to assess its suitability, ease of use, flexibility and relevance. In particular, feedback from this formative evaluation stage is used to highlight which aspects of the toolkit the users find most useful, and whether there are any important steps or resources omitted. This information is used to iteratively improve the toolkit and provide a more detailed specification. Considerable attention is given to the users' needs, particularly how easy the toolkit is to use and how valuable and useful it is perceived to be. User trials also aim to demonstrate that the toolkit is usable without the support of expert guidance, which represents an important part of the initial assumptions behind the development of toolkits.

Inclusion of user defined features: Is the toolkit sufficiently flexible that it can be adapted by end users to take account of local factors?

User trials also assess the flexibility of the toolkit to take account of adaptation by end users. Common adaptations that are likely to be of wider value can be identified and incorporated into the core functionality of the toolkit at this stage.

Building shared resources: Are the completed toolkit plans produced by practitioners of any value as case studies or templates for other users?

Use of each toolkit produces some form of 'plan' for a particular task: for example, an evaluation strategy to assess a range of web resources, an outline for a new curriculum with a map of learning and teaching tools and techniques, or an information plan for resources for a research project. The user can keep this private, perhaps iteratively refining it over time based on experience or developing understandings. However, there are potentially a number of additional uses for the plan. It could be used as one of a suite of shared resources, for example a set of curriculum plans for a whole course, evaluation strategies for commonly encountered evaluation studies, or information resources to support a consortium-based research project. This is a valuable additional aspect of the toolkits and provides a means of addressing some of the issues about re-purposing materials and sharing skill sets outlined by Phelps et al. (1999) and Beetham (2002).

Illustrative toolkits

This section will illustrate three examples of toolkits that have been developed and tested using the process outlined above. Key findings from the formative evaluations of these toolkits will be summarised and perceived benefits identified.

A toolkit for curriculum design

Media Adviser is a toolkit that supports practitioners in redesigning curricula and, in particular, helps them to consider how to appropriately integrate learning technologies alongside more traditional learning and teaching methods. It derives from a broader conceptual pedagogical framework for integrating learning technologies into courses that maps out and compares traditional modes of delivery with learning technologies (Conole and Oliver, 1998; Oliver and Conole, 2000). It considers different learning and teaching methods in terms of their relevance and value against a set of four types of teaching activity, namely delivery, discussion, activity and feedback. These four activities were adapted from the more descriptive set of twelve interactions described in Laurillard's conversational model (Laurillard, 1993), and are intended to reflect the values of the socio-constructivist tradition that she draws upon. In addition to its role in providing support for curriculum re-design, the toolkit has been used successfully to engage practitioners in discussions about their own context and practice, for example by encouraging a group of practitioners to debate the relevance and meaning of the four 'types' of teaching activity, whether or not refinements or adaptations of this set are warranted, and how these activities might be perceived by learners (Oliver & Conole, 2002).

This was the first toolkit developed using the methodology outlined above. As the explanation below illustrates, the formative evaluation of an initial prototype combined with a process of iterative tailoring to meet users' needs is an important feature of this approach. The evaluation also generated a number of unexpected results. In particular, mapping teaching techniques (both traditional and new) in terms of their support for the four 'types' described above was originally intended as a way of identifying aspects of learning that were systematically emphasised or neglected in a course. However, evaluation showed this was at least as important as a way for individuals to express their own approach to teaching, or to develop ideas about new ways in which traditional resources (such as videos) could be incorporated into the course. Moreover, the simplicity of the description meant that it could be used as the starting point for comparisons or discussions between practitioners about the differences in their approach to teaching.

What was important about this toolkit was that it raised awareness of the potential uses of learning technologies, particularly in terms of their educational value, and therefore formed a good starting point for considering the role of these resources within a particular course. The toolkit was then used to aggregate these individual descriptions of techniques into a more holistic course structure. Media Adviser provides a mechanism to help lecturers think about the different tools and techniques they could use in their teaching and, most importantly, how these could be considered under the four broad educational activities.
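The mapping and aggregation process described above can be sketched in a few lines of code. In this hypothetical illustration (the technique names, the 0-10 rating scale and the hour allocations are all invented for the example, not taken from Media Adviser itself), each teaching technique is rated against the four activity types, and a course-level profile is produced by weighting those ratings by the hours allocated to each technique:

```python
# A minimal sketch of the Media Adviser idea: rate each teaching technique
# against the four activity types (delivery, discussion, activity, feedback),
# then build a course profile by weighting those ratings by contact hours.
# All names, ratings and hours below are hypothetical illustrations.

ACTIVITIES = ("delivery", "discussion", "activity", "feedback")

# Ratings on a 0-10 scale: how strongly each technique supports each activity.
techniques = {
    "lecture":   {"delivery": 9, "discussion": 1, "activity": 1, "feedback": 2},
    "tutorial":  {"delivery": 3, "discussion": 8, "activity": 5, "feedback": 7},
    "web_forum": {"delivery": 2, "discussion": 7, "activity": 4, "feedback": 5},
}

def course_profile(allocation):
    """Aggregate technique ratings, weighted by hours, into a course profile."""
    profile = {a: 0.0 for a in ACTIVITIES}
    total_hours = sum(allocation.values())
    for technique, hours in allocation.items():
        for a in ACTIVITIES:
            profile[a] += techniques[technique][a] * hours
    # Normalise by total hours so profiles of different courses are comparable.
    return {a: round(v / total_hours, 2) for a, v in profile.items()}

# Hours per technique for a hypothetical course.
plan = {"lecture": 20, "tutorial": 10, "web_forum": 5}
print(course_profile(plan))
```

A profile dominated by 'delivery', as here, is exactly the kind of systematic emphasis (or neglect) that the mapping exercise was intended to expose, and that participants went on to debate.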

A toolkit for evaluation

The Evaluation Toolkit provides a structured resource to help practitioners evaluate a range of learning resources and activities (Conole, Crewe, Oliver and Harvey, 2001; Oliver, MacBean, Conole and Harvey, 2002). It guides them through the scoping, planning, implementation, analysis and reporting of an evaluation. It assists the practitioner in designing progressively more detailed evaluations, and allows users to access and share evaluation case studies. It consists of three sections, Planner, Adviser and Presenter, which guide the user through the evaluation process: from the initial scoping of the evaluation question(s) and associated stakeholders, through the selection of data capture and analysis methods, to the presentation of the findings. One of the emerging benefits of this toolkit is that the plan the user produces can be made available to others as a case study or example for particular types of evaluation (e.g. assessing the usability of a web site, researching a teaching innovation, selecting a set of resources).
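The Adviser stage's method-selection step can be pictured as a filter over an editable knowledge base. The sketch below is a hypothetical illustration of that mechanism only: the entries, tags and matching rule are invented for the example and are not the Evaluation Toolkit's actual data or interface.

```python
# A minimal sketch of recommending evaluation methods by filtering a
# knowledge base against the user's question type. All entries and tags
# are hypothetical illustrations, not the toolkit's actual content.

# Each method is tagged with the kinds of evaluation question it suits
# and the form of data it yields.
knowledge_base = [
    {"method": "usability observation", "questions": {"usability"}, "data": "qualitative"},
    {"method": "pre/post test", "questions": {"learning gain"}, "data": "quantitative"},
    {"method": "focus group", "questions": {"usability", "attitudes"}, "data": "qualitative"},
    {"method": "questionnaire", "questions": {"attitudes", "usability"}, "data": "quantitative"},
]

def recommend(question_type, preferred_data=None):
    """Return methods matching the question type, optionally filtered by data form."""
    matches = [m for m in knowledge_base if question_type in m["questions"]]
    if preferred_data:
        matches = [m for m in matches if m["data"] == preferred_data]
    return [m["method"] for m in matches]

# Scoping the question first narrows the methods offered...
print(recommend("usability"))
# ...but because the knowledge base itself is editable and inspectable,
# users can always call up the full list rather than accept the filter.
print(recommend("usability", preferred_data="qualitative"))
```

Because the knowledge base is an explicit, editable data structure rather than a hard-coded rule, users can inspect or override the suggestions, which matters for the objections reported later in this paper.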

A toolkit for information processing

The information toolkit adopts a similar approach to Media Adviser and the Evaluation Toolkit (Conole, 2002). It provides a means of mapping information resources against types of information activity (defined in the toolkit as gathering, processing, communicating and evaluating). This helps users to gain an understanding of the resources they are using and what they are using them for, so that they can produce their own tailored information plan. The toolkit guides the user through the process of articulating their information needs and results in the production of an information plan for a particular task. Scoping the task is one of the first stages of working through the toolkit; a task could involve considering all the information needs of a course team developing a course module, of the managers of a research programme, or of a development project team. As with both Media Adviser and the Evaluation Toolkit, this can be used to build shared information plans in the form of case studies or templates. It is suitable for use across a range of levels of expertise and is valuable for both learners and researchers.

Evaluation of the toolkits

The evaluation of the toolkits was designed to provide feedback on their usability and to assess their potential impact on practice. The methodology includes observational studies and a follow-up workshop with new users (Oliver, MacBean, Conole & Harvey, 2002). In the initial stage, usability trials in the form of cognitive walkthroughs are carried out. These are used to improve the design and layout of a toolkit and to help the content developers identify areas that require further work. In each case a researcher observes and records the user's activities. At the end of a session, users also provide feedback on the overall use and value of a toolkit, areas for improvement, and whether the format of a toolkit influenced the way in which they approached their planning process. Overall, this process of evaluation addresses two fundamental questions: does the toolkit support the design of appropriate plans ("appropriate" as determined by the user), and does it help the user to engage with some of the more fundamental issues and concerns in this area?

Feedback from these trials is used to improve the toolkits, and in particular the overall structure and navigation of the resources. Once an updated version of a toolkit has been produced, taking into account the feedback from the initial usability trials, the second phase of the evaluation is carried out. This takes the form of one or more workshops comprising a range of potential users (e.g. lecturers, managers, researchers, and staff in university support services or national centres). During these one-day workshops, the participants work through a toolkit section by section. They are asked to keep a record of their activities and, in particular, to note any reflections as well as any benefits or problems encountered. At appropriate points the group is drawn together to discuss progress and, in particular, good and bad features of the toolkit. The workshops conclude with a general discussion of the participants' experiences of using the toolkit and its potential value and use.

Early feedback on the toolkits in these evaluation stages tends to concentrate on usability issues and navigation. Users are often frustrated by the early navigational structure and layout of the toolkits, which can fundamentally impede their progress and hamper their understanding of the issues and concepts being described. In each case, the toolkit is refined in light of such comments, with the result that the usability is markedly improved by the end of the project.

In order to demonstrate the evaluation approach, and also to illustrate the kinds of ways in which users were able to engage with theory, two studies will be summarised: one involving the Evaluation Toolkit (Oliver, MacBean, Conole & Harvey, 2002), the other Media Adviser (Oliver & Conole, 2002).

Evaluation of the Evaluation Toolkit

Feedback on the content and potential usefulness of the toolkits has generally been positive. Users clearly benefit from working through them and recognise that they are rich resources of material. However, there was some concern that the Evaluation Toolkit was deceptive in terms of its size and ease of use. (The toolkit comprises three sections, each of which took around an hour and a half to complete.) In general, it was recommended that users should be made more clearly aware of the time required to complete each toolkit, and of the level of detail and concentration required to gain optimal value from the resources. It was encouraging to note that the improvements made to the toolkit as a result of initial feedback were noticeable in the way that the workshop users worked through the resources much more easily.

Reassuringly, however, the toolkit was able to support complete novices.

Well, to be honest, I haven't had to produce an evaluation plan ever before ? so in that sense it was extremely helpful as it guided me through the process, explained some background ideas and suggested other sources of help.

It also proved to be of value to evaluators who had substantial previous experience.

Did it get me thinking about other things? Yeah it did actually because I was looking at a project we've already done and I ended up looking at analysis tools I wasn't familiar with, so that was useful.

Perhaps most encouraging was evidence that the toolkits encouraged reflective practice:

I really liked [it] making me think about the purposes of evaluation. I've completely changed my view of the evaluation by working through this.

Participants also engaged with particular theoretical ideas. For example, Patton's utilization-focused approach (1997) conceives of evaluation as an essentially rhetorical process, in which studies (and the way they are reported) serve political, use-based ends; by working with the toolkit, users' traditional practices were challenged by this theoretical perspective.

The emphasis on 'contexts' and reasons for evaluation throughout the toolkit is very helpful, because there is perhaps a tendency to report what was done on a particular project (as a factual account of what happened) rather than actually evaluating that project and how useful its outcomes are to the stakeholders. The toolkit seems to bring the user back to this point all the time ? which would be very helpful, particularly in a big project, or one with lots of interested parties.

Further related to this is the issue of choosing the evaluation question, which theorists such as Kvale (1996) link explicitly to questions of epistemology.

I tried to fill in every box but I actually found that quite useful because it forced me to think about things that I wouldn't have otherwise.

There were, however, some limits on willingness to engage with such theoretical positions. For example, the evaluation toolkit is designed to recommend particular methods for data collection and analysis based on the question asked. This is achieved by filtering the methods using the (editable) representations of the knowledge base on data collection and analysis methods. In this particular case, the rationale for selection was not apparent to the users, some of whom failed to see why particular empirical approaches followed from their stated perspectives on the world:

To be honest I just filled in my favourites. I didn't understand how it had come to those conclusions and I didn't agree with them. So I ended up just calling up the entire list and just selecting the ones I was going to use anyway.

The reasons for these objections were also discussed by the participants.

You have a preconceived idea and you're expecting something. You get frustrated when you didn't expect what you get.

I think that some people don't like things being hidden from them. They can see your suggestions, but they also want to see the rest.

Whilst the toolkit supports engagement with theory, then, it cannot require users to engage. Nonetheless, it is able to prompt the kind of reflection and engagement predicted earlier in this article.

What's really important is that it gets you to think about all this.

Clearly it's something we're all having to do more of. We're all having to reflect on what we're doing.

If it doesn't quite match what you were looking for at least it makes you think.

Evaluation of Media Adviser

A small-scale study of Media Adviser was carried out involving three users, two drawn from one course team and one from a different faculty. Each had experience of teaching on a number of courses, but had chosen one course that they felt needed changing. The jointly-taught course was a first-year unit in a medical programme, taken by around 120 students, roughly three quarters of whom were taking the course as part of a subsidiary subject. The other course was also a first-year course, in economics, involving a mix of historical review and case studies. It was taken by around twenty students, half from within the economics department and half from the faculty of social sciences. This mix allowed differences in practice both within and between faculties to be illustrated. The workshop was run by one facilitator, with an additional observer taking notes and helping out when required.

Even from the first activity, which simply involved listing the teaching methods used during the course, participants started to reflect on fundamental issues of course design, such as the difference between a teacher-centred and a student-centred description of the course.

Are these teaching or learning media?

Similarly, there was reflection on the fact that "different groups of students might have different experiences of our lectures", showing the influence of theoretical perspectives such as phenomenography. They also distinguished between their intentions and the reality of what might actually happen, showing sensitivity to the limitations of a modelling exercise such as this:

Not so easy to determine - was there any discussion? - but can't force this from students; [it] may happen or may not (i.e. no guarantee that will get discussion or how much).

As with the Evaluation Toolkit, engagement with the software (particularly in this group context) led the lecturers to challenge their previously taken-for-granted course design practices, which they came to realise reflected tradition rather than any explicit theory of learning. This discussion drew on the earlier consideration of student-centred rather than teacher-centred models of course design, emphasising the values that they felt were central to their practice.

It makes you reflect on what … are possible for the student. Often one feels the number of lectures, tutorials etc. is given and immutable. It's useful to see how the course breakdown looks.

The second activity, which involved using the rater tool to describe their teaching, led to further questioning of the role of different educational techniques. On discovering that he had characterised lectures and handouts in an identical way, one participant began to wonder, "Why not replace lectures with handouts?" This led to a rich discussion of student expectations and institutional policies, raising participants' awareness of the marketing and political aspects of course design. It also highlighted the way in which familiar formats can appear to be engaging without actually involving the student in anything more than a passive, receptive role.

Students seem to want to feel that they have participated, and somehow, by sitting through a lecture, they think they have done.

The descriptive process also highlighted differences in teaching style. For example, one participant characterised their lectures as involving a high degree of activity and discussion for students; this contrasted with the two lecturers who both taught as part of one course team, for whom lectures were primarily a means of disseminating information to students. Similarly, all three had differing views about what constituted a tutorial. Importantly, there were differences between the two members of the course team that had not previously been recognised. This led to a discussion of how each participant ran their tutorials, and an exchange of suggestions about how they could be made more interactive and engaging. As one participant noted, such discussions provided obvious opportunities to extend lecturers' repertoire of techniques by learning about "the different ways in which people use handouts, tutorials, lectures, etc." The result of this was the early sharing of plans for change that were grounded in participants' own experiences.

As part of these discussions, the participants began to identify reasons why teaching techniques such as lectures differed. 'Disciplinary differences' were initially cited as one possible reason, but the existence of differences within the course team led to a more critical discussion of what this phrase might actually mean. Eventually, a number of influences were identified, each of which contributed to the process of determining the format of teaching, including:

Whether the teacher has a teacher-centred or student-centred view of learning.

Current trends in learning and teaching. ("If we'd done these ten years ago, the differences between us might have been much narrower.")

The status of knowledge and the type of discourse within a discipline. ("In arts, if a department came out where delivery [of information] was high there would be something very wrong"; "in science, there is a mass of basic information you need to have, whereas in history, it is different; it doesn't matter if you know nothing about the 19th century.")

The content covered in the course.

The level of students being taught. ("For first years, the emphasis is on the delivery of information. Further on, they are expected to discuss rather than receive, so most lectures will change.")

The size of group being taught.

Student expectations and requirements. (For example, are they intrinsically or extrinsically motivated by the course?)

What other teaching techniques are used in the course.

In recognition of these influences, the participants recognised that there would never be agreement as to the 'right' way to describe a lecture, tutorial, etc. (This illustrates the points made about toolkits and multiple perspectives in earlier sections.) However, they felt that descriptions of techniques would "start off differently, but might converge" as users of the toolkit debated their understanding of the descriptive language and reached consensus over the meanings of terms.

In a similar vein, the participants discussed whether or not to introduce less familiar techniques, such as web-based teaching, computer-mediated communication, and so on. Importantly, there was valuable discussion about what these terms meant to the participants, and what role they might have in teaching and learning. One participant, for example, decided that what he meant by 'web pages' conflated at least two distinct activities: the use of the web to deliver lecture notes, and the use of on-line bulletin boards to supplement class discussions. This clarification enabled him to plan changes to his course in greater detail, concentrating on pedagogic requirements rather than the technical systems available to him.

Given the comments made earlier about the potential value of toolkits for sharing representations of practice, it is interesting to note that participants in this study saw this as being valuable not only within a network of peers, but also as a way of communicating conceptions of learning and teaching to students.

The other area, and perhaps the more useful one, is to display it to students... Giving them this information will enable them to make judgements about what you as a lecturer are doing.


Although there is broad consensus that practitioners value some kind of support when starting to use learning technology, the most appropriate way of providing this support remains unclear. One factor that will influence this is the relative importance that specific practitioners place on engaging with theory, as opposed to simply producing resources in an efficient way. At present, the resources that have been produced tend to be polarised between flexible but unsupportive frameworks on the one hand, and supportive but constrained wizards and templates on the other. The work described in this paper has investigated the design and implementation of a form of expert system to support decision-making processes, defined here as toolkits, which can be viewed as a mid-point between these extremes.

This work has provided a definition of 'toolkits', which incorporate an expert model of a process and a structured knowledge base. The toolkits described in this paper illustrate the ways in which support and guidance on theory and expert knowledge can be provided to practitioners in a way that can be interpreted in light of their disciplinary context and individual practices.

Feedback has been positive, with many users reporting that the toolkits helped them to reflect upon and structure their thought process. Other benefits identified include:

The ability to build up case studies covering common types of curriculum design, evaluation, or information maps.

Provision of structured expertise and a resource base, built on an explicit theoretical basis, which guides the user through the planning process and uses this experience to help them engage with the theory and related knowledge themselves.

The potential to carry out additional studies with a more diverse group of users, in order to analyse the different ways in which these resources can be used and to gain a clearer understanding of their benefits. Feedback from this type of study could be used to improve the value and relevance of the toolkit itself, and could also help to define the key factors for success in producing toolkits, and hence specifications for future resources of this kind.

The potential to extend this model of developing generic toolkits to cover other areas of learning and teaching such as curriculum development, media selection, assessment or quality assurance.

Whether the mid-point represented by toolkits should be viewed as an end in itself, or as a step between facilitated production and critical engagement, remains open to debate; early use of these resources to support both of these ends is encouraging, however. Extending this approach to new areas will require the identification or development of theories for learning technology, coupled with the application of rigorous research methods; without this, toolkits will not be able to offer anything more than platitudes and surface guidance.


Beetham, H. (2000a), On the significance of 'theory' in learning technology research and practice, Positional Paper at the Learning Technology Theory Workshop, ALT-C 2000, Manchester. [cited]

Beetham, H. (2000b), Career Development of Learning Technology Staff: Scoping Study
Final Report, JISC Committee for Awareness, Liaison and Training, Available online at <> [cited]

Beetham, H. (2002), Developing learning technology networks through shared representations of practice, Proceedings of the 9th International Improving Student Learning Symposium, 421-434, Oxford: OCSLD. [cited] [cited]

Conole, G. (2002). Systematising Learning and Research Information. Journal of Interactive Media in Education, 2002 (7). ISSN:1365-893X [] [cited]

Conole, G. and Oliver, M. (1998), A pedagogical framework for embedding C&IT into the curriculum. ALT-J, 6 (2), 4-16. [cited] [cited]

Conole, G., Crewe, E., Oliver, M. & Harvey, J., (2001), A toolkit for supporting evaluation, ALT-J, 9 (1), 38-49. [cited]

Conole, G., Oliver, M. & Harvey, J. (2000), An Integrated Approach to Evaluating Learning Technologies, International Workshop on Advanced Learning Technologies, Proceedings of the IWALT 2000 conference, Palmerston North, IEEE Computer Society Press, 117-120. [cited]

HEFCE (1996), Evaluation of the teaching and learning technology programme, Coopers and Lybrand report, HEFCE M21/96, [cited]

Ingraham, B. (2000) Scholarly Rhetoric in Digital Media, PrePrint Under Revision: Journal of Interactive Media in Education, <> [cited]

Kewell, B., Oliver, M. & Conole, G. (1999), Assessing the Organisational Capabilities of Embedding Learning Technologies into the Undergraduate Curriculum. The Learning Technologies Studio Programme: A Case Study, BP ELT report no. 10, University of North London. [cited]

Kuutti, K. (1997), Activity Theory as a Potential Framework for Human-Computer Interaction Research, in Nardi, B. (Ed), Context and Consciousness: Activity Theory and Human-Computer Interaction, 17-44, Cambridge, Massachusetts: MIT Press. [cited]

Kvale, S. (1996) InterViews: An Introduction to Qualitative Research Interviewing, London: Sage. [cited]

Laurillard, D., Swift, B. and Darby, J. (1993), Academics' use of courseware materials: a survey, ALT-J, 1 (1), 4-14. [cited]

Laurillard, D. (1993), Rethinking university teaching: a framework for the effective use of educational technology, London: Routledge. [cited]

Liber, O., Olivier, B. and Britain, S. (2000), The TOOMOL project: supporting a personalised and conversational approach to learning, Computers and Education, 34, 327-333.

Lyotard, J-F. (1979), The Postmodern Condition: A Report on Knowledge, Manchester: Manchester University Press. [cited]

McMahon, A. (2000), The development of professional intuition, In Atkinson, T. & Claxton, G. (Eds), The Intuitive Practitioner: On the Value of Not Always Knowing What One is Doing, 137-148, Buckingham: Open University Press. [cited]

National Committee of Inquiry into Higher Education (1997) Higher Education in the Learning Society. DFEE: London. [cited]

Nikolova, I. & Collis, B. (1998), Flexible learning and design of instruction, British Journal of Educational Technology, 29 (1), 59-72. [cited]

Oliver, M. & Conole, G. (1998), Evaluating Communication and Information Technologies: a toolkit for practitioners, Active Learning, 8, 3-8.

Oliver, M. & Conole, G. (2002), Supporting Structured Change: Toolkits for Design and Evaluation, in Macdonald, R. (Ed), Academic and Educational Development: Research, Evaluation and Changing Practice in Higher Education, 62-75, SEDA Research Series, London: Kogan Page. [cited] [cited]

Oliver, M. & Conole, G., (2000), Assessing and enhancing quality using toolkits, Journal of Quality Assurance in Education, 8 (1), 32-37. [cited]

Oliver, M. (2000), What's the Purpose of Theory in Learning Technology?, Positional Paper at the Learning Technology Theory Workshop, ALT-C 2000, Manchester. [cited]

Oliver, M., MacBean, J., Conole, G. & Harvey, J. (2002), Using a Toolkit to Support the Evaluation of Learning, Journal of Computer Assisted Learning, 18 (2), 199-208. [cited] [cited]

Patton, M. (1997), Utilization-focused evaluation, London: Sage. [cited]

Phelps, J., Oliver, M., Bailey, P. and Jenkins, A. (1999), The development of a generic framework for accrediting professional development in C&IT, EFFECTS report No. 2, University of North London. [cited] [cited]

Appendix: A use case to illustrate Media Adviser

This appendix is intended to illustrate one of the toolkits described in this paper: Media Adviser, which is an instantiation of the pedagogic toolkit. (The toolkit is available to download.) One application of Media Adviser is in support of quality assurance procedures, many of which are founded on the principle of making tacit practices explicit. In the context of quality assurance, Media Adviser serves a particular role in relation to the introduction of new approaches to learning and teaching, such as the use of web-based materials or discussion areas. It has been shown, for example (Kewell et al., 1999), that practitioners find it hard to:

articulate the suitability or relative merits of, say, web pages over lectures;

assess the suitability of unfamiliar teaching techniques (for most practitioners, learning technologies provide a vivid illustration of this);

agree on the meaning of common terms such as "lecture"; or

gain an overview of the suitability of a mix of teaching techniques.

The process of describing and modelling required by Media Adviser allows implicit assumptions and tacit knowledge to be represented and used as the basis for decision making. It also provides a shared form of representation that enables practitioners with differing assumptions to identify and discuss the variations in their practice.

The first step towards this consists simply of entering basic information about the course, such as its title and learning objectives. This elicitation is mainly for record purposes, and is not shown here. The next step (Figure 1) involves the use of a tool that requires practitioners to describe and compare their teaching strategies in terms of a recognised educational model (Laurillard's conversational framework; Laurillard, 1993).

Figure 1: Media Rater

After this descriptive process, a linked tool, the Course Modeller (Figure 2), allows models of courses to be created by specifying how many hours students are expected to spend experiencing each teaching technique. These models can then be compared, and the suitability of different combinations of teaching techniques can be judged in terms of their impact on the way time is used within the course.

Figure 2: Course Modeller

On the basis of these models, a third tool can be used to allow practitioners to assess the likely cost (in terms of cash, time to prepare and time to sustain the approach). This tool is called Media Selector (Figure 3); it consists simply of a customisable card file system describing features of various learning and teaching approaches.

Figure 3: Media Selector
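The card-file idea behind Media Selector can be sketched as a simple lookup over cost features. In this hypothetical illustration (the approaches, field names and cost figures are all invented for the example, not taken from the tool itself), a plan's likely cost is estimated by summing the per-hour cash and staff-time features of each approach:

```python
# A minimal sketch of a Media Selector-style card file: each card records
# rough cost features of a teaching approach, and a plan's likely cost is
# estimated from the hours allocated to each approach. All figures and
# field names below are hypothetical illustrations.

cards = {
    "lecture":   {"cash_per_hour": 0,  "prep_hours_per_hour": 2, "support_hours_per_hour": 0.5},
    "web_pages": {"cash_per_hour": 50, "prep_hours_per_hour": 6, "support_hours_per_hour": 1},
}

def estimate_cost(plan):
    """Sum cash and staff-time costs for a plan mapping approaches to contact hours."""
    cash = sum(cards[a]["cash_per_hour"] * h for a, h in plan.items())
    prep = sum(cards[a]["prep_hours_per_hour"] * h for a, h in plan.items())
    support = sum(cards[a]["support_hours_per_hour"] * h for a, h in plan.items())
    return {"cash": cash, "prep_hours": prep, "support_hours": support}

# Compare current practice with a proposed redesign that shifts hours to the web.
current = {"lecture": 20}
proposed = {"lecture": 12, "web_pages": 8}
print(estimate_cost(current))
print(estimate_cost(proposed))
```

Comparing the two estimates makes the trade-off concrete: the redesign carries higher up-front preparation and cash costs, which is exactly the judgement the Media Selector cards are intended to support.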

A typical use-case for this tool might be for an individual lecturer considering the redesign of their course, perhaps to include the use of the web. Starting from a description of their current practice, and a model of their current course, they might then introduce new forms of learning and teaching and experiment with the range of models that could result. These models could be discussed with colleagues, who might challenge the way existing practices are described or offer new ways of conceiving of the use of the web. Once consensus has been reached about one or two desirable models, the media selector tool could be used to assess the likely cost of these changes in terms of time and money. This might include a simple move from the current situation to some 'ideal' model, or it might involve working in the short term to some mid-point between the two due to limitations of time, expertise or resources.