A look at the Interaction Analysis Model (IAM), a tool developed by Gunawardena, Lowe and Anderson in 1997 to analyse student activity on asynchronous online learning tools such as blogs and discussion boards. The article assesses the IAM and its validity as a way to measure students’ learning experiences, and considers how it might be improved.
The issue of assessing the quality of interaction, as opposed to just the quantity, comes up again here as it did with the Community of Enquiry Framework. To tackle this, Peters and Slotta developed a method of analysing the changes students made to a wiki, with a deeper consideration of the type of changes made, the particular types of files added, and whether the changes were made to a student’s own work or to another’s. By differentiating between “peer” and “self”, a picture emerges of the levels of collaboration and contribution to the group, and some light is shed on aspects which may inhibit an individual’s activity, such as being overwhelmed by the amount of content on a board. (“Coding” the data keeps coming up in these studies, and seems to refer to how particular bits of information are logged under certain categories and how they are dealt with.)
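As a rough illustration (not drawn from Peters and Slotta’s actual tooling, and using made-up data), the “self” vs “peer” distinction boils down to comparing the editor of a change against the owner of the page being changed. A minimal sketch in Python:

```python
def classify_edit(editor: str, page_owner: str) -> str:
    """Label a wiki change as editing one's own work ("self") or a peer's ("peer")."""
    return "self" if editor == page_owner else "peer"

# Hypothetical edit log: (who made the change, whose page was changed)
edits = [
    ("alice", "alice"),  # Alice revises her own page
    ("alice", "bob"),    # Alice contributes to Bob's page
    ("bob", "bob"),
]

labels = [classify_edit(editor, owner) for editor, owner in edits]
print(labels)  # → ['self', 'peer', 'self']
```

A high proportion of “peer” edits would suggest genuine collaboration, while an all-“self” log might flag a student working in isolation.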
To analyse knowledge construction, the IAM sets out five distinct phases, each with its own learning processes:
- Sharing and comparing of information – statements, observations, comparisons of knowledge, etc.
- The discovery and exploration of dissonance or inconsistency among ideas, concepts or statements – where students identify areas of disagreement and are forced to properly back up and reassert their positions.
- Negotiation of meaning/co-construction of knowledge – where common ground is sought, differences of opinion are negotiated, and compromises to accommodate others’ views are found.
- Testing and modification of proposed synthesis or co-construction – where proposed ideas are tested against existing knowledge, or personal experience, to see what holds up and what doesn’t.
- Agreement statement(s)/applications of newly constructed meaning – what has been agreed upon and adapted, and what has been rejected? What has the group learned from the process that they didn’t know before?
When using the IAM to assess a given discussion or blog, the information is broken down and each unit is assigned to one of the five phases above. Ideally, you want to see a lot of students’ contributions reaching the higher phases. This didn’t happen, though – in several studies of groups of teachers and students, the vast majority of posts were in Phase 1, a few in Phases 2 and 3, and little or none at all in Phases 4 and 5. This was explained by participants feeling inhibited by the need to be polite to people they perhaps didn’t know, by a reluctance to seem controversial, or by self-consciousness at being assessed and monitored. In the case of students who met regularly face to face in other classes, it was thought they simply didn’t need to interact online. The big exception was a group of women enrolled in an educational technology course in Korea (preaching to the converted..?). A postgraduate course on Multimedia in Education in Portugal also scored much higher in the later phases.
The lack of engagement in the higher phases was discussed; a lack of experience on the part of the moderators/instructors was highlighted, for example in setting tasks and questions the students would be likely to willingly contribute to. Cultural factors were also at play – studies in Taiwan and Singapore suggested that students were not comfortable with the dissonance aspect of the interaction, but were able to reach forms of resolution without it. This suggests that the (Western) notion that robust debate and challenges to one’s deeply held beliefs are needed in order to learn may not be so valid.