Citation Details: Rourke, L. and Anderson, T. (2002). Using Peer Teams to Lead Online Discussions. Journal of Interactive Media in Education, 2002, (1). ISSN:1365-893X [www-jime.open.ac.uk/2002/1]

Published: 14 March, 2002

Editor: Xiufeng Liu (U. Prince Edward Island, CA) [xliu@upei.ca]

Reviewers: Martha Gabriel (U. Prince Edward Island, CA), William Hunter (U. Calgary, CA), Gilly Salmon (Open U., UK)

Using Peer Teams to Lead Online Discussions

Liam Rourke

University of Alberta
Department of Educational Psychology,
Instructional Technology Area
lrourke@ualberta.ca

Terry Anderson

Athabasca University
320 10030 107 St.
Edmonton, AB Canada, T5J 3E4
terrya@athabascau.ca

Abstract: This study investigated an online course in which groups of four students were used to lead online discussions. The teams were examined for their ability to bring instructional design, discourse facilitation, and direct instruction to the discussions. The setting was a graduate-level communications networks course delivered asynchronously to a cohort group of 17 adults enrolled for professional development education. Interviews, questionnaires, and content analyses of the discussion transcripts indicate that the peer teams fulfilled each of the three roles and valued the experience. Students preferred the peer teams to the instructor as discussion leaders and reported that the discussions were helpful in achieving higher order learning objectives but could have been more challenging and critical.

Commentaries: All JIME articles are published with links to a commentaries area, which includes part of the article’s original review debate. Readers are invited to make use of this resource, and to add their own commentaries. The authors, reviewers, and anyone else who has ‘subscribed’ to this article via the website will receive e-mail copies of your postings.

1 Using Peer Teams to Lead Online Discussions

Computer conferencing has become a popular component of online learning. Asynchronous communication is one feature of these systems that researchers extol because of its ability to facilitate ‘anytime anywhere’ communication among peers and the instructor. Instructors, however, may not be as enthusiastic about this possibility. Hiltz (1988), for instance, has described teaching online as a bit like parenthood: “You are on duty all the time, and there seems to be no end to the demands on your time and energy” (p. 441). In Berge and Muilenburg’s (2000) survey of 1100 distance education instructors, a concern with increased time requirements was identified as the largest barrier to the adoption of networked forms of distance teaching. One possible solution to this problem was offered by Tagg (1994) who shared conference-moderating duties with his students. Not only did this alleviate demands on the instructor’s time, it produced unanticipated pedagogical benefits. Despite these encouraging results, few researchers have followed up on this approach. Those who have report contradictory results (Murphy, Cifuentes, Yakimovicz, Segur, Mahoney, and Kadali, 1996; Harrington & Hathaway, 1998). The purpose of this study was to explore the effectiveness of peer teams as online discussion leaders. Two questions guided the design of the investigation: Could the peer teams fulfill all the responsibilities of an effective online discussion leader? And, was the experience of being part of a discussion-leading team rewarding?

2 Literature Review

Computer conferencing is a web-based communication system that supports asynchronous, textual interaction between two or more persons. Messages are composed in the conferencing software and sent to a central location for retrieval from the World Wide Web (WWW). At this location, the messages are organized or ‘threaded’ to reflect some relevant feature of their overall structure, usually temporal, topical, or both.

One way to understand the educational purpose of these systems is to consider the root of the ‘conferencing’ metaphor, ‘confer:’ “To meet in order to deliberate together or compare views” (OED, 2000). In an educational context, this type of activity is conventionally referred to as ‘discussion’ (Bridges, 1979; Dillon, 1994; Gall & Gall, 1990; Hill, 1994; Wilen, 1990).

The pedagogical rationale for discussion is best understood from a constructivist perspective. Constructivists argue that knowledge is not so much discovered, or transmitted intact from one person to another, as it is created or ‘constructed’ by individuals attempting to bring meaning and coherence to new information and to integrate this knowledge with their prior experience. Discussion can be an excellent activity for supporting these efforts. Oliver and Naidu (1996) assert that explaining, elaborating, and defending one’s position to others (as well as to oneself) forces learners to integrate and elaborate knowledge in ways that facilitate higher-order learning. Research in face-to-face settings by Arbes and Kitchener (1974), Azmitia and Montgomery (1993), Berkowitz and Gibbs (1983), Gall and Gall (1990), and Maitland and Goldman (1973) has provided empirical support for these notions.

The mode of communication afforded by computer conferencing prompted some authors to speculate that it would be an ideal medium to support substantive discussion. Asynchronous communication allows students to deliberate over others’ contributions and to articulate coherent and logical responses. The act of encoding ideas in textual format and communicating them to others forces cognitive processing and a resulting clarity that is strongly associated with scholarly practice and effective communication (Feenberg, 1989; Logan, 1995). There is some evidence to support these predictions (Beals, 1991; Hara, Bonk, & Angeli, 2000; Hillman, 1999; Newman, Webb, & Cochrane, 1996; Zhu, 1996); however, the results are not entirely positive (Bullen, 1998; Kanuka & Anderson, 1998; Garrison, Anderson, & Archer, 2001; Harrington & Hathaway, 1998; McLoughlin & Luca, 2000). McLoughlin and Luca’s (2000) lament summarizes the problems many instructors experience with the technology:

Analysis shows that most messages are in the category of comparing and sharing information. There is little evidence of the construction of new knowledge, critical analysis of peer ideas, or instances of negotiation. The discussions do not appear to foster testing and revision of ideas and negotiation of meaning which are processes fundamental to higher order thinking. Only a small percentage of contributions can be categorized as higher order cognition and awareness of knowledge building. (p. 5)

As many researchers have noted, the technology itself has less impact than the application or instructional design to which the communication tool is applied (Clark, 1983). Several authors have pointed out that certain qualities must be present if the technique is to be pedagogically effective (Garrison, Anderson, & Archer, in press; Kitchener and Arbes, 1974; Pomerantz, 1998). Kitchener and Arbes caution designers about the inadequacy of having students discuss course content without the benefit of a specific procedure or the guidance of a trained facilitator. Garrison et al. arrived at similar conclusions after observing online discussions that contained little evidence of higher levels of cognitive activity. Their diagnosis was that often the goals of lessons do not lend themselves to advanced inquiry, and there are deficiencies in guiding and shaping discourse toward higher cognitive activities.

Building on these observations, Anderson, Rourke, Garrison, and Archer (in press) have identified three roles or sets of responsibilities that must be addressed if online discussion is to be a valuable component of students’ learning. Referred to collectively as ‘teaching presence,’ the three roles are: instructional design and organization; discourse facilitation; and direct instruction (see table 1). The first role includes responsibilities such as selecting topics from the course content that are suitable for discussion, implementing a specific discussion strategy, and establishing participation expectations. The second role, facilitating discourse, includes responsibilities such as drawing participants into the discussion; identifying areas of agreement and disagreement; and establishing a supportive climate for learning. The final role, direct instruction, includes responsibilities such as presenting content, diagnosing misconceptions, and providing assessment and feedback. Attending to each of these responsibilities is a complex and time-consuming task. Yet, each is necessary to ensure that the discussions contribute to the students' learning experience.

In industrial models of distance education, or 'big D. E.' as Garrison and Anderson (1999) have called it, these responsibilities can be distributed among specialists--instructional designers, discussion moderators, subject matter experts. In "little D. E.," on the other hand, and in on-campus education, they typically fall on the shoulders of a single instructor. In one of the few reports to present data on this subject, Harapniuk, Montgomerie, and Torgerson (1998) calculated that interacting with students in the computer conference component of a 13-week course took an average of 7.5 hours per week. Thus, implementing an effective discussion is a time-consuming task for instructors, who may not be able to fulfill all the responsibilities outlined by Anderson et al. (in press).

Aside from this issue, researchers have identified other problems with the instructor exclusively taking the role of discussion leader. One consistently cited issue is the authoritarian presence that the instructor brings to the discussion. Beach (1960, 1968), Bloxom, Caul, Fristoe, and Thomson (1975), Goldschmidt and Goldschmidt (1976), and Kremer and McGuiness (1988) each warn that this type of presence can inhibit the free exchange of ideas. As Kremer and McGuiness (1988) explain: “Where there is an obvious imbalance of power and expertise among those present, it is unlikely that an atmosphere conducive to openness, to debate, and to a free, frank, and fair exchange of opinion will ever be fostered” (p. 46). Ultimately, the concern is that instructor-led discussions can easily revert to the recitation structure, or initiate-respond-evaluate structure, of a traditional lecture in which the student is often a passive and unreflective audience member.

Coinciding with these disadvantages are the advantages of using peers in the role of discussion leader. Aside from the obvious economic advantages reported by several authors (Bloxom et al., 1975; De Volder, De Grave, & Gijselaers, 1985; Goldschmidt & Goldschmidt, 1976), there have been reports of affective and cognitive benefits. Beach (1974) provides a description of the environment in a peer-led discussion that is opposite to Kremer and McGuiness’ (1988) description of the instructor-led discussion: “Student-led discussions provide a free and relaxed atmosphere for discussion, which makes students feel uninhibited in asking questions and challenging the statements of others” (p. 192). This type of environment supports the beneficial processes associated with discussion and leads to positive evaluations from the students (Bloxom et al., 1975; Tagg, 1994; Kremer & McGuiness, 1988; Murphy et al., 1996). A final benefit is the increased depth of understanding that comes from leading the discussion. Bloxom et al. note that “the person who leads the group can acquire an increased mastery of the subject matter by learning it well enough to deal with it effectively in the group discussion context” (p. 224).

Some authors suggest that these results accrue with little cost (De Volder, De Grave, & Gijselaers, 1985; Murphy et al., 1996; Rabe, 1973); however, not all are in agreement. The students that Schermerhorn, Goldschmid, and Shore (1976) studied commented that they would “rather learn from the instructor than from peers because their peers do not know any more than they do, and therefore might provide them with erroneous information” (p. 29). De Volder (1982) points out that discussion leaders who are subject matter experts function more effectively not only in the direct instruction role but also in the facilitating discourse role because they know when the discussion is going off-track; they can ask better questions; and they are better at stimulating discussion.

The research in the previous section was conducted primarily in face-to-face settings. One of the first researchers to experiment with the use of student moderators of online discussion was Tagg (1994). He evaluated a situation in which computer conferencing was used with a cohort group of graduate psychology students. Tagg reasoned that sharing moderating duties with students might ease the students’ apprehension about posting messages, transform the ‘unstructured melees’ into coherent discussions, and reverse the general opinion of his students that the conferences were not helping them to understand the course material. Tagg enlisted the services of two students selected from the class. One acted as a ‘topic leader’ whose role was to set the agenda for the discussions and offer some initial contributions. The other acted as a ‘topic reviewer’ whose function was to weave and summarize the discussions. He found that this improved the structure and coherence of the discussions, increased participation, and resulted in a much higher proportion of students reporting that the conference helped them understand the content.

Murphy et al. (1996) implemented a similar system in one of their computer conferences. The context was an undergraduate education course composed of students who had no previous experience with computer conferencing. Whereas Tagg (1994) had selected students from the class, Murphy et al. organized graduate students with some content expertise into teams of two to three to moderate. Aside from the objectives outlined by Tagg, the researchers also expected the graduate students to learn something about moderating online discussions. They report that the graduate students filled all of the roles required of moderators-- organizational, social, intellectual, and technical--and did so in a manner that “was more effective than a single instructor would have been and far outweighed any negative aspects” (p. 34).

Harrington and Hathaway (1998) reasoned that peer facilitators would remove any power imbalances in the discussions, encourage freedom of expression, and give students the feeling that they owned the discussions. In their implementation, two or three students and a teaching assistant led the others through a discussion that focused on dilemmas familiar to practitioners in the field. The goals of the discussion were to identify and reflect on taken-for-granted assumptions. Harrington and Hathaway’s results were not as positive as Tagg’s (1994) or Murphy et al.’s (1996). They found that unsupported opinions dominated the discussion, the homogeneity of group members prevented the production of multiple perspectives, and taken-for-granted assumptions were rarely questioned. It appears that some of the tasks that instructors would address naturally as discussion leaders were not addressed by the student moderators.

The theoretical rationale for using students to lead online discussions is sound, and the preliminary results sufficiently encouraging to warrant further study. The purpose of this study is to examine this strategy more thoroughly by evaluating the performance of the peer teams against a specific set of responsibilities that are required of effective discussion leaders. We compare the performance of peer teams to the performance of the instructor on their ability to satisfy the three teaching presence roles of instructional design and organization; discourse facilitation; and direct instruction. The results will shed light on the contradictory results presented in the literature (Tagg, 1994; Murphy et al., 1996; Harrington and Hathaway, 1998) and further our understanding of how computer conferencing can be used in a manner that proves satisfying and valuable to both students and instructors.

3 Method

3.1 Case

The case we examined was the “Using and Managing Communications Networks” course offered through the University of Alberta’s Faculty of Extension as part of its Master of Arts in Communications and Technology program. The course was populated by a cohort group of 17 adult students, working full time while they completed their program. The cohort’s history included a three-week face-to-face session at the beginning of their program and one previous online course that included computer conferencing as part of the delivery.

In this 13-week course, the computer conference was used to support content-based discussion that corresponded to course readings and assignments. The instructor moderated the first five weeks of discussion to provide a model for the peer teams. Peer teams of four, configured randomly by the instructor, moderated the balance of the conference, which was divided into one-week sessions.

3.2 Data Collection and Analysis

Three methods of data collection and analysis were employed in this exploratory study: quantitative content analysis, closed-ended questionnaires, and semi-structured interviews. Each is described in detail below.

3.3 Quantitative Content Analysis

One research technique that has been revived by computer-mediated-communication (CMC) researchers is the observational method known as content analysis. Berelson’s (1952) early definition of content analysis is one of the simplest and most direct-- “a research technique for the objective, systematic, and quantitative description of the manifest content of communication” (p. 18). It is systematic in the sense that a theoretical, a priori set of categories is constructed into which communication content is classified. It is objective in the sense that classification is rule-based and the reliability of classification is tested by having multiple coders classify the same content. Its quantitative character is evident in the process of converting communication content into discrete units and calculating the frequency of occurrence of each unit.

This description projects a wholly positivist enterprise; however, subsequent authors have questioned each of the characteristics presented in Berelson's (1952) definition, particularly when applied to different types of content. Potter and Levine-Donnerstein (1999) identify three types of content. With 'manifest content,' the features of communication that are salient for categorization reside on the surface of communication. This includes phenomena such as word counts or sex of participants. The classification of ‘latent pattern’ content relies on the identification of constellations of manifest content configured in particular ways. Potter offers the example of coding attire into two categories: formal and informal. Although the presence of a tie is objective and manifest, coders need more evidence (a suit perhaps) before they can make a decision, which ultimately contains elements of subjectivity. 'Latent projective content' is the most subjective of the three. Categorizing the presence of humour in transcripts, in our experience, is a thoroughly subjective enterprise that defies even the most elaborate rule system. The content in this study--the three roles of teaching presence--is considered to be latent pattern. For instance, to identify a message as containing ‘direct instruction’ requires a few instances or a crescendo of instances of “summarizing the discussion.”

The coding scheme that we used to categorize the discussion leaders' messages was developed by Anderson et al. (in press) and is presented in table 1.

Table 1: Roles and responsibilities of teaching presence

Roles

Responsibilities

Instructional design and organization

Setting curriculum

Designing methods

Establishing time parameters

Utilizing medium effectively

Establishing 'netiquette'

Making macro-level comments about course content

Facilitating discourse

Identifying areas of agreement/disagreement

Seeking to reach consensus/understanding

Encouraging, acknowledging, or reinforcing student contributions

Setting climate for learning

Drawing in participants, prompting discussion

Assessing the efficacy of the process

Direct instruction

Presenting content

Focusing the discussion on specific issues

Summarizing the discussion

Confirming understanding through assessment and explanatory feedback

Diagnosing misconceptions

Injecting knowledge from diverse sources

The message was selected as the unit of analysis, i.e., the segments of transcript that would be categorized and represented numerically. Although we have experimented with other units for this type of analysis, we find the message to be the most practical while simultaneously providing sufficient context to make a decision. (For commentary on the use of the content analysis technique in computer mediated communication research see Rourke, Anderson, Garrison, and Archer, in press; for empirical application of the technique, see Anderson, Rourke, Garrison, and Archer, in press; Garrison, Anderson, and Archer, 2001; Garrison, Anderson, Rourke, and Archer, 2000; Rourke, Anderson, Garrison, and Archer, 1999).

The quantitative content analysis yields data describing the number of messages contributed by the discussion leaders and the proportion of these messages that contained elements of teaching presence. Comparisons are made between the performance of the five peer teams and the instructor.

3.4 Questionnaires

A closed-ended questionnaire, presented weekly, was used to gather information about the students’ perceptions of how well the discussion leaders were performing the teaching presence roles. The questionnaire consisted of ten items that corresponded to the roles outlined in table 1. For instance, the students’ perceptions of a discussion leader’s ability to perform the direct instruction role were investigated with items such as: “This week’s discussion leader provided knowledge from diverse sources,” and “This week’s discussion leader focused the discussion on specific issues.” Each of the ten items was followed by a five-point Likert scale anchored at one end by the response “Strongly Agree” and at the other by “Strongly Disagree”. The questionnaire was presented online, and a message was placed in the conferences at the end of each weekly discussion encouraging the students to complete the questionnaire. The responses were not anonymous; however, only the principal researcher was aware of the respondents’ identities. Students were exempt from filling out the questionnaire during the week that their team led discussion.

Because we used only one non-random case, the questionnaire data was analyzed using the logic of quantitative single-case designs. The entire class was regarded as the ‘case,’ and the changes in their responses across weeks were observed and described. Of particular interest were the differences between the students' responses during the weeks in which the instructor led discussions versus the weeks in which the peer teams led discussions.
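A minimal sketch of this single-case logic is given below. The weekly mean ratings and week labels are invented for illustration only (they are not the study's data); the point is simply that each week's mean Likert rating is tagged by who led the discussion, and the weekly means are then averaged within each condition.

```python
# Hypothetical weekly mean Likert ratings (1 = Strongly Disagree,
# 5 = Strongly Agree) for a single questionnaire item. The values
# below are invented; only the averaging procedure is illustrated.
weekly_means = {
    "week6": ("instructor", 3.4),
    "week7": ("instructor", 3.5),
    "week8": ("peer", 4.1),
    "week9": ("peer", 4.3),
    "week10": ("peer", 4.0),
    "week11": ("peer", 4.2),
    "week12": ("peer", 4.4),
}

def condition_mean(data, condition):
    """Average the weekly means for one type of discussion leader."""
    vals = [mean for leader, mean in data.values() if leader == condition]
    return sum(vals) / len(vals)

print(condition_mean(weekly_means, "instructor"))  # ≈ 3.45
print(condition_mean(weekly_means, "peer"))        # ≈ 4.2
```

Comparing the two condition means, rather than testing for statistical significance, reflects the descriptive stance the single-case design imposes on a single non-random cohort.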

3.5 Semi-structured Interviews

To probe more broadly, the questionnaires and content analysis were supplemented with semi-structured interviews. In the interviews, the students were asked to talk about their experience as part of a peer team of discussion leaders; to compare the peer teams to the instructor on their ability to fulfill the teaching presence roles; and to discuss whether or not the discussions had contributed to their learning.

4 Results

4.1 Content Analysis

Two of the thirteen weeks of discussion were used to train the coders. Training was concluded after interrater agreement reached .90 (k = .82) [1]. Two weeks of instructor-led discussion and all five weeks of peer team-led discussion were analyzed by two coders for the purposes of the study.

The total number of messages contributed by the discussion leaders across the seven weeks was 219. For each message, coders were required to make three binary decisions: whether the message contained instructional design and organization, discourse facilitation, and direct instruction. Each message could contain all three elements, none of the elements, or a combination of some elements. For the 219 messages, 657 decisions were required of the coders. Interrater agreement for these decisions was .75 (k = .50). This figure is above the .70 cut-off point for interpretable and replicable research proposed by Riffe, Lacy, and Fico (1998).
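The two agreement statistics used here, percent agreement and Cohen's kappa, can be sketched as follows. The coder labels in the example are invented for illustration (1 = "message contains the role," 0 = it does not); only the formulas, not the data, correspond to the study.

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of decisions on which the two coders agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two coders' categorical decisions."""
    n = len(a)
    po = percent_agreement(a, b)  # observed agreement
    # Expected chance agreement, from each coder's marginal rates.
    ca, cb = Counter(a), Counter(b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

# Hypothetical binary codings of ten messages for one role:
coder1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(percent_agreement(coder1, coder2))  # 0.8
print(cohens_kappa(coder1, coder2))       # ≈ 0.583
```

Because kappa discounts the agreement two coders would reach by chance given their marginal coding rates, it is always lower than raw percent agreement, which is why the study reports both figures.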

The average number of messages posted by the instructor per week was 7.5; for the peer teams, this figure was 40. Table 2 shows the proportion of messages that addressed each of the teaching presence roles.

Table 2: Proportion of messages that addressed the three teaching presence roles by instructor versus peer teams

Teaching presence role

Instructor

Peer teams

Instructional design

.11

.19

Facilitating discourse

.23

.61

Direct instruction

.67

.52

Omnibus messages [1]

.41

.26

Empty messages [2]

.00

.14

1. Messages that address all three teaching presence roles.

2. Messages that do not address any of the teaching presence roles.

4.2 Questionnaire

Ten of the seventeen students filled out the surveys during all seven weeks of data collection. Scores for the five weeks in which the peer teams led discussions were averaged as were the scores for the two weeks in which the instructor did so. Table 3 shows that the students rated the peer moderating teams slightly higher than the instructor on their ability to fulfill the three teaching presence roles.

Table 3: Student ratings of peer teams vs. instructor on fulfilment of teaching presence roles.

Teaching presence role

Peer teams

Instructor

Instructional design

4.05

3.24

Facilitating discourse

4.30

3.44

Direct instruction

4.10

3.56

4.3 Semi-structured Interviews

Seven students were selected for interviews; the selection sought a balance of male and female students who had participated actively in the weekly discussions and who represented each of the discussion-leading groups. The telephone interviews lasted approximately 30 minutes and were recorded and transcribed to facilitate analysis. Interview transcripts were coded by both authors into five themes: the contribution of the discussions to learning; positive and negative aspects of both the peer teams’ and the instructor’s ability to lead discussion; and reflections on the experience of being part of a peer team of discussion leaders. After the data from the interviews were analyzed, we presented our interpretations to the participants, hoping for some feedback or further insight. None of the participants responded.

The first issue addressed in the interviews was whether the discussion contributed to the learning process. Students were divided on this point. Those who responded positively valued the variety of viewpoints and personal experiences that were presented: "The online discussion helped me to learn because it provided much more breadth and diversity of opinion. Sharing experiences and providing analogies is what makes the discussion a valuable part of learning". Another student continued on this theme: “Listening to somebody who can talk about the content as a practitioner, somebody who can talk about the content as a pissed-off person who had to pay too much money the last time they had to have a consultant come in; those are valuable things". It seems that the multiple perspectives revealed different elements and nuances of the content that individual students may have overlooked and made the abstract material more concrete.

It was not only the additional perspectives that contributed to the students’ learning, but also the contradictory perspectives. As one student explained: "I went through the readings and then I'd go to the online discussion and somebody would say something and then I'd have to go back and look at the reading again because I didn’t see it that way. This sharpened up what I’d learned, what I’d read". Another student confirmed this opinion:

It is in the process of defending my position that I really start to think: Why do I feel that way? Why do I think that way? And, two things can happen: Either I become even more convinced of my position, or I go: “Maybe I haven’t thought this through as deeply as I could have or should have.” For me, that’s what’s valuable about the online discussions.

Those who said that they did not learn from the discussions raised a complaint common in computer conferencing research, which is that the discussions are not sufficiently critical or challenging. One student summarized this view:

The discussions helped me to learn only very superficially. A lot of our discussion was mutual stroking. I think as a group we were very gentle with one another. I think we could have been less kissy and more challenging. It was just so kissy it was actually kind of sickening. I think we would have gotten a lot more out of it if we had been more critical, in a constructive way.

This student was looking to her peers for critical comment, which would prompt her to reflect and construct knowledge, but was unable to find it in discussion that was “too kissy,” or, as Archer has referred to them, “pathologically polite” (personal communication, 2001).

When asked to describe exactly what they had learned from the online discussions, the students were careful to distinguish between lower-order and higher-order cognitive objectives. "The first part of the course," one student explained, "was very technical. I think the online discussion was probably useless in terms of helping me to learn. The only way to learn that kind of stuff is to memorize it". On higher order objectives however, their opinion was quite different: "When you’re dealing with knowledge, with real learning, that’s about applying a concept; it's about applying an idea to a situation. When that's the case, the online discussion becomes very valuable".

The second issue that arose in the interviews was a comparison between the peer teams and the instructor on their respective ability to fulfil the three teaching presence roles. A majority of students expressed a preference for the peer teams, explaining that their discussions were more responsive, more interesting, and more structured. The fact that the peer teams were more responsive is supported by evidence from the content analysis, which shows that the teams posted on average 40 messages per week (approximately five or six per day) compared to the instructor’s seven (one per day). Here, the salient term is probably “teams” rather than “peers,” with the instructor being outnumbered four-to-one.

The fact that the peer-led discussions were more interesting could possibly be explained by the amount of preparation that occurred. One student recounted the process:

We brainstormed a lot of options looking for more creative ways of leading the discussion. A few of the ideas we toyed with were to have fictitious characters, to use anonymous posting, and to use dramatic elements. Our thought was ‘what can we do that’s different to get people’s attention?’ We wanted to shake things up a bit and create a bit of interest.

This level of preparation was common among the peer teams, each of which expressed some competitiveness with the other groups and each of which was being assessed on its performance as discussion leaders.

Another responsibility at which the peer teams excelled was structuring the discussions. The following comparison was offered by one of the students:

The peer teams tended to guide in an on-going way more than the instructor did. They posed very specific questions, and they continually came back into the conversation: to probe for other things, to stimulate discussion along a track that might have been opened up, to bring the discussion back on track, and to take the discussion to the next topic area.

As an explanation, one student said that inexperience caused the peer teams to adhere to the prescriptive suggestions for moderating provided by the instructor:

I think the student-led discussions were more structured, and it might have been the case of not having as much experience doing it whereas someone who’s been doing it for a long time would be more comfortable to sit back and let the conversation develop naturally.

A common concern about asking peers to assume the instructor role is their lack of content knowledge. When prompted on this subject, the students were dismissive. They said that they were not looking to the online discussions for an authoritative presentation of content, but rather for an exchange of opinions and a sharing of experiences. Instead, the students focused on the positive qualities the peer teams brought to the discussion: "The peer teams' lack of subject matter expertise didn't bother me at all. Most of the teams supplied additional reference materials, and they asked specific questions, and provided prompt replies." Another student added: "I don't miss having the instructor leading the discussion because sometimes they're too authoritative and that kind of thing can shut down discussion. The reading material can provide subject matter."

The final issue that we asked the students to talk about was their experience as part of a peer team of discussion leaders. Their uniformly positive comments focused on two issues. First, they enjoyed the experience, both leading the discussion and the teamwork; and second, they learned from the experience, both about leading online discussion and about the course content.

Unanimously, the experience was described as enjoyable. One student expressed it this way: “It took a lot of time but it was so enjoyable that the time factor was irrelevant. I don’t think there was any negative aspects”. When asked whether they had learned anything about leading online discussions, the responses were consistently of this nature: “My skills at engaging people in electronic conversations have benefited from doing the online moderating. There’s definitely a practical and valuable component to it".

Several of the students also indicated that they learned the content better during the week that they led discussion. They explained that they felt a responsibility to lead an interesting discussion, and to do so they felt they had to thoroughly understand the material. To accomplish this they read and processed the assigned readings more thoroughly than usual, and they searched for supplementary material. One student provided the following description:

I certainly delved more deeply into the supplementary readings for my topic than I did the other topics. I learned something, certainly. I took ownership over others’ learning. I felt I had to bring a richness to it, expand on the content, and make it stimulating. This gave me a better understanding of the material. I think you can’t help but learn the subject matter more when you’re getting more involved in it. You’re more actively engaged with the content than you would be otherwise in just reading through it and responding, but not leading the discussion.

There were also beneficial effects on the social processes of online learning. One of the students explained that this came mainly from working in a team:

It was valuable getting to know people. You can never underestimate the importance of working with someone and getting to know them on a professional and personal basis. I got to know my group members very well, and I feel extremely comfortable with them, whereas before this experience I didn't really know [group member] [group member] but now if I see them we've got this instant bond 'cause we've had to, you know, struggle together. It's formed a relationship between us, so I think that's a real valuable part.

5 Discussion

The purpose of this study was to examine the effectiveness of using peer teams to lead online discussions. The amorphous process of ‘leading online discussion’ was operationalized using Anderson et al.’s (in press) construct teaching presence, which requires discussion leaders to assume three roles: instructional design and administration; discourse facilitation; and direct instruction. The quantitative and qualitative data indicate that teams of four students, selected from the class, were able to fulfill each of the three roles to the extent that students preferred them to the instructor. Working in teams to lead discussion was an enjoyable experience for the students, and it contributed to their learning.

Consistent with Tagg's (1994) findings, students in the current study reported that the discussions led by peers were more structured and, at the same time, more fluid. Our students added that the peer teams were more responsive and more interesting. Our results are also consistent with those of Murphy et al. (1996), who found that students gained a better understanding of how to lead online discussion. The results are somewhat consistent with those reported by Harrington and Hathaway (1998), who questioned the value of student-led online discussions that produced little challenging or critical discussion among the students. This is a common finding in CMC research (Bullen, 1998; Kanuka and Anderson, 1998; Garrison, Anderson, and Archer, 2001; McLoughlin and Luca, 2000).

When the students that we interviewed described the processes that make online discussion a valuable contribution to their learning, they appeared to be invoking a form of social cognitive conflict theory (Clements and Nastasi, 1988; Piaget, 1977). The underlying assumption of this theory is that knowledge is motivated, organized, and communicated in the context of social interaction. Doise and Mugny (1984) argued that when individuals operate on each other's reasoning, they become aware of contradictions between their logic and that of their partner. The struggle to resolve these contradictions propels them to new and higher levels of understanding. Research by Bearison (1982), Doise and Mugny (1984), and Perret-Clairmont, Perret, and Bell (1989) supports the assertion that the conflict embedded in a social situation may be more significant in facilitating cognitive development than the conflict of individual centrations alone. As Perret-Clairmont et al. explain: "The more direct the conflict that takes place in a social interaction the more likely the interaction will trigger a cognitive restructuring" (pp. 45-46).

Two important issues qualify the students' preference for the peer teams over the instructor. First, the discussions the instructor led focused exclusively on the technical content of the course, whereas the discussions that the peer teams led focused on the social implications of that content. The students informed us that some types of content do not lend themselves to an exchange of opinions or perspectives, and that comparing interpretations is not the most efficient method of achieving lower-level knowledge objectives. The presentation of this type of content also restricted the instructor mainly to the teaching presence role of direct instruction.

Second, the instructor continued to participate in the discussions during the weeks in which the peer teams acted as leaders. Therefore, any of the teaching presence responsibilities that peer teams might have overlooked or struggled with, such as ‘diagnosing misconceptions’ or ‘making macro-level comments about the course content’ were still assumed by the instructor.

The overwhelmingly positive results also need to be qualified. It must be recognized that this setting provided a best-case scenario in which to achieve success with online discussion led by teams of peers. The characteristics of the group (a graduate-level, professional development cohort, most of whom worked in the field of communications) are ideal for discussion, teamwork, and peer teaching. Epistemic development models (e.g. King and Kitchener, 1994; Baxter Magolda, 1992; Perry, 1970) suggest that graduate students are much more likely to view knowledge as contextual and socially-constructed than their undergraduate counterparts. This attitude can be essential if students are to view discussion with peers as a worthwhile component of learning. As professionals, these students also have experience working in teams, and bring some experience to the role of discussion leader, often acquired through chairing meetings or other similarly relevant experiences. Their concurrent employment in the communications field also endowed them with valuable experiences and perspectives to bring to the discussion, and these were received as such by the other students.

It is encouraging to see that the results support the content analysis scheme developed by Anderson et al. (in press). The peer teams had higher frequencies and proportions of teaching presence, and they were rated higher on the questionnaires than the instructor. Students' conceptions of what should happen and what is valuable behavior from discussion leaders, as reported in the interviews, are consistent with the underlying model of teaching presence.
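The frequencies and proportions referred to above come from coding each transcript message into one of the three teaching presence categories and tallying the results per discussion leader. As a minimal sketch of that tally (the coded messages below are invented for illustration and are not the study's data):

```python
from collections import Counter

# Hypothetical coded transcript: one (leader, teaching presence role) pair
# per message. The role labels follow the three categories named in the
# text; the data themselves are invented for illustration.
coded_messages = [
    ("peer team", "instructional design"),
    ("peer team", "discourse facilitation"),
    ("peer team", "discourse facilitation"),
    ("peer team", "direct instruction"),
    ("instructor", "direct instruction"),
    ("instructor", "discourse facilitation"),
]

def teaching_presence_profile(messages, leader):
    """Frequency and proportion of each teaching presence role for one leader."""
    roles = [role for who, role in messages if who == leader]
    freq = Counter(roles)
    total = len(roles)
    return {role: (count, count / total) for role, count in freq.items()}

print(teaching_presence_profile(coded_messages, "peer team"))
```

Comparing the resulting profiles across leaders gives both the raw frequencies and the proportions of each role, which is the kind of evidence the questionnaire ratings were checked against.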

The results from this study point to two important issues for practitioners to consider when including online discussion in their instructional design, and when using peer teams to lead these discussions. First, the students' experiences with the discussion technique confirm what is axiomatic in the literature on this topic: discussions are useful in achieving higher-order, but not lower-order, learning objectives. Efforts to use discussion to facilitate the latter are inefficient, particularly in the time-consuming asynchronous, text-based format, and are often met with frustration and dissatisfaction.

Second, the students have provided some insight into how exactly the online discussions help them achieve higher-order objectives. The additional perspectives offered by others in the form of opinion, personal experience, and analogy add to their understanding of the content, and make it more concrete. Contradictory perspectives disturb their initial impressions of the content and prompt them to process it more thoroughly. This latter process, however, can only be precipitated by challenging and critical interaction. As Brown and Palincsar (1989) note: "Change does not occur when pseudo-consensus, conciliation, or juxtaposed centrations are tolerated" (p. 409).

Finally, when asked why they preferred one discussion to another, the students identified three characteristics: the discussions were responsive, interesting, and structured. In this case, these characteristics were more commonly associated with the peer teams than with the instructor. However, there is no reason to regard these qualities as either intrinsic to peer-led discussion, or extrinsic to instructor-led discussion. The advantage that the peer discussion leaders had was that they worked in teams of four; therefore, they possessed sufficient resources to fulfill all of the teaching presence responsibilities. The implementation described in this case, in which the instructor maintained a presence even during the weeks in which the peer teams moderated, may be the best method for alleviating the demands on the instructor while simultaneously providing all the elements required for valuable discussion.

Acknowledgments. We wish to acknowledge the financial support of the Social Science and Humanities Research Council of Canada provided for this study, and the time and insights provided to us by the graduate students enrolled in the course that is the subject of this case study.

6 References

Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (in press). Assessing Teaching Presence in Asynchronous, Text-Based Computer Conferencing. Journal of Asynchronous Learning Networks, 5, (2). <http://www.aln.org/alnweb/journal/Vol5_issue2/Anderson/5-2%20JALN%20Anderson%20Assessing.htm>

Arbes, W., & Kitchener, K. (1974). Faculty Consultation: A Study in Support of Education Through Student Interaction. Journal of Counseling Psychology, 21, 121-126.

Baxter Magolda, M. B. (1992). Knowing and Reasoning in College: Gender-Related Patterns in Students' Intellectual Development. San Francisco: Jossey-Bass.

Beach, L. R. (1974). Self-Directed Study Groups and College Learning. Higher Education, 3, 187-200.

Beals, D. E. (1991). Computer-Mediated Communication Among Beginning Teachers. T.H.E. Journal, 71-77.

Berelson, B. (1952). Content Analysis in Communication Research. Glencoe, Ill.: Free Press.

Berge, Z., & Muilenburg, L. (2000). Barriers to Distance Education as Perceived by Managers and Administrators: Results of a Survey. In M. Grey (Ed.), Distance Learning Administration Annual 2000. <http://www.gl.umbc.edu/~berge/man_admin.html>

Berkowitz, M., & Gibbs, J. (1983). Measuring the Developmental Features of Moral Discussion. Merrill-Palmer Quarterly, 29, (4), 399-410.

Bloxom, M., Caul, W., Fristoe, M., & Thomson, W. (1975). On the Use of Student Led Discussion Groups. Educational Forum, 39, 223-230.

Bridges, D. (1979). Education, Democracy, and Discussion. Oxford: NFER Publishing Company.

Brown, A., & Palincsar, A. (1989). Guided Cooperative Learning and Individual Knowledge Acquisition. In L. B. Resnick (Ed.), Knowing, Learning, and Instruction: Essays in Honor of Robert Glaser (pp. 393-451). Hillsdale, New Jersey: Erlbaum.

Bullen, M. (1998). Participation and Critical Thinking in Online University Distance Education. Journal of Distance Education, 13, (2), 1-32.

Clark, R. E. (1983). Reconsidering Research on Learning from Media. Review of Educational Research, 53, (4), 445-459.

Clements, D., & Nastasi, B. (1988). Social and Cognitive Interactions in Educational Computer Environments. American Educational Research Journal, 25, (1), 87-106.

"confer, n." Oxford English Dictionary. Ed. J. A. Simpson and E. S. C. Weiner. 2nd ed. Oxford: Clarendon Press, 1989. OED Online. Oxford University Press. 4 Apr. 2000. <http://oed.com/cgi/entry/00181778>

De Volder, M., Grave, W., & Gijselaers, W. (1985). Peer Teaching: Academic Achievement of Teacher-Led Versus Student-Led Discussion Groups. Higher Education, 14, 643-650.

Dillon, J. (1994). Using Discussion in Classrooms. Buckingham: Open University Press.

Doise, W., & Mugny, G. (1984). A Social Definition of Intelligence. Toronto: Pergamon Press.

Feenberg, A. (1989). The Written World: On the Theory and Practice of Computer Conferencing. In R. Mason & A. Kaye (Eds.), Mindweave: Communication, Computers and Distance Education. Pergamon Press. <http://www.emoderators.com/moderators/feenberg.html>

Garrison, D. R., & Anderson, T. (1999). Avoiding the Industrialization of Research Universities: Big and Little Distance Education. American Journal of Distance Education, 13, (2), 48-63.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical Thinking in a Text-Based Environment: Computer Conferencing in Higher Education. Internet and Higher Education, 11, (2), 1-14.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical Thinking, Cognitive Presence, and Computer Conferencing in Distance Education. American Journal of Distance Education, 15, (1), 3-21.

Hara, N., Bonk, C., & Angeli, C. (2000). Content Analysis of Online Discussion in an Applied Educational Psychology Course. Instructional Science, 28, 115-152.

Harapnuik, D., Montgomerie, T. C., & Torgerson, C. (1998). Costs of Developing and Delivering a Web-Based Instruction Course. In Proceedings of WebNet 98: World Conference of the WWW, Internet, and Intranet. Charlottesville: Association for the Advancement of Computing in Education.

Harrington, H., & Hathaway, R. (1994). Computer Conferencing, Critical Reflection, and Teacher Development. Teaching and Teacher Education, 10, (5), 543-554.

Hill, W. (1994). Learning Through Discussion (3rd ed.). London: Sage Publications.

Hillman, D. C. A. (1999). A New Method for Analyzing Patterns of Interaction. The American Journal of Distance Education, 13, (2), 37-47.

Hiltz, S. R. (1988). Learning in a Virtual Classroom. Final Evaluation Report 25. Newark, N.J.: Computerized Conferencing and Communications Centre.

Kanuka, H., & Anderson, T. (1998). Online Social Interchange, Discord, and Knowledge Construction. Journal of Distance Education, 13, (1), 57-74.

King, P. M., & Kitchener, K. S. (1994). Developing Reflective Judgment: Understanding and Promoting Intellectual Growth and Critical Thinking in Adolescents and Adults. San Francisco: Jossey-Bass.

Kremer, J., & McGuinness, C. (1998). Cutting the Cord: Student-Led Discussion Groups in Higher Education. Education and Training, 40, (2), 44-49.

Logan, R. (1995). The Fifth Language: Learning a Living in the Computer Age. Toronto: Stoddart.

Maitland, K., & Goldman, J. (1973). Moral Judgment as a Function of Peer Group Interaction. Journal of Personality and Social Psychology, 30, (5), 699-705.

McLoughlin, C., & Luca, J. (2000). Cognitive Engagement and Higher Order Thinking Through Computer Conferencing: We Know Why but Do We Know How? Teaching & Learning Forum 2000, Curtin University of Technology, Australia. <http://cea.curtin.edu.au/tlf2000/abstracts/mcloughlinc2.html>

Murphy, K., Cifuentes, L., Yakimovicz, A., Segur, R., Mahoney, S., & Kodali, S. (1996). Students Assume the Mantle of Moderating Computer Conferences: A Case Study. American Journal of Distance Education, 10, (3), 20-35.

Newman, D., Webb, B., & Cochrane, C. (1996). A Content Analysis Method to Measure Critical Thinking in Face-To-Face and Computer Supported Group Learning. Interpersonal Computing and Technology: An Electronic Journal for the 21st Century, 3, (2), 56-77.

Oliver, M., & Naidu, S. (1996). Building a Computer Supported Co-operative Learning Environment in Medical-Surgical Practice for Undergraduate RNs from Rural and Remote Areas: Working Together to Enhance Health Care. University of Kansas, Missouri, USA.

Perret-Clairmont, A., Perret, J., & Bell, N. (1989). The Social Construction of Meaning and Cognitive Activity of Elementary School Children. In L. Resnick, J. Levine, & S. Teasley (Eds.), Perspectives on Socially-Shared Cognition. Washington: American Psychological Association.

Perry, W. (1970). Forms of Intellectual and Ethical Development in the College Years. New York: Holt, Rinehart and Winston.

Piaget, J. (1977). The Development of Thought: Equilibration of Cognitive Structures. New York: Viking.

Potter, J., & Levine-Donnerstein, D. (1999). Rethinking Validity and Reliability in Content Analysis. Journal of Applied Communication Research, 27, 258-284.

Riffe, D., Lacy, S., & Fico, F. (1998). Analyzing Media Messages. New Jersey: Lawrence Erlbaum.

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (1999). Assessing Social Presence in Asynchronous, Text-Based Computer Conferencing. Journal of Distance Education, 14, (3), 51-70. <http://cade.athabascau.ca/vol14.2/rourke_et_al.html>

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (in press). Methodological Issues in the Content Analysis of Computer Conference Transcripts. International Journal of Artificial Intelligence in Education, 12, 8-22. <http://cbl.leeds.ac.uk/ijaied/abstracts/Vol_12/rourke.html>

Schermerhorn, S. M., Goldschmid, M. L., & Shore, B. M. (1976). Peer Teaching in the Classroom: Rationale and Feasibility. Improving Human Performance Quarterly, 5, (1), 27-34.

Tagg, A. (1994). Leadership from Within: Student Moderation of Computer Conferences. American Journal of Distance Education, 8, (3), 40-50.

Wilen, W. (1990). Teaching and Learning Through Discussion: The Theory, Research, and Practice of the Discussion Method. Springfield: Charles C. Thomas Publisher.

7 Endnote

[1] Cohen’s kappa (k) is a chance-corrected measure of agreement between two or more raters. For further discussion see Capozzoli, McSweeny, and Sinha (1999), Cohen (1960), and Rourke et al. (in press).
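As a concrete illustration of the statistic, here is a minimal sketch of Cohen's kappa for two raters. The category labels and the coded data are invented for illustration; only the formula, kappa = (p_o - p_e) / (1 - p_e), comes from the standard definition.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' category labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items the raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal proportions.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented example: two raters coding ten messages into the three
# teaching presence roles (labels abbreviated).
a = ["design", "facilitation", "facilitation", "instruction", "design",
     "design", "facilitation", "instruction", "instruction", "design"]
b = ["design", "facilitation", "instruction", "instruction", "design",
     "design", "facilitation", "facilitation", "instruction", "design"]
print(round(cohens_kappa(a, b), 2))  # prints 0.7
```

Because kappa subtracts the agreement expected by chance, it is a more conservative index than simple percent agreement (here 0.8) whenever the raters' category distributions overlap.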
