Connectivism Glossary

January 20th, 2011

Stephen Downes and I have kicked off our third iteration of our open course Connectivism and Connective Knowledge…if interested, you can register here. The course is again being offered as part of the Certificate in Emerging Technologies program at University of Manitoba (i.e. for-credit students). In our course orientation yesterday, someone requested a connectivism glossary. A reasonable question – and one to which we replied with our usual “if it’s missing in the course, it’s an opportunity for you to create something”. However, today, via Google Alerts, I came across this glossary from participants in the 2009 course: Connectivism Glossary.

It captures some of the more common terms used in discussing social networked learning. After a quick skim of the items listed, I was left with this sense of “great resource. But we’ve somewhat moved on”. Many of the terms listed were quite helpful in the “early days” of 2004/5 when we were trying to grasp onto language that would help describe the phenomenon that we viewed as important. Terms like “half-life of knowledge”, the “pipe” of content, and “informal learning” I could do away with now. They were transitional terms that don’t quite seem as relevant now as they did at the time. Essentially, these words were used to try to create a sense of what was happening with knowledge and in society that warranted reflection and reconsideration. They don’t speak directly to what connectivism is, but rather to the context that raises the importance of social networked learning.

I’m now more interested in terms that address not only what connectivism is, but the ways in which networks are shaped and impact learning (at the neural, conceptual, and external-social network levels). A few of these include:

  • Amplification: the connection of one concept or skill set with another complementary concept or skill set that produces a greater impact than each element could produce on its own.
  • Resonance: when concepts are available for connection with other concepts based on some element of similarity or capacity for connection. For example, a psychologist is in a better position to understand a new theory of motivation than a farmer would be. And a farmer in turn will likely find greater resonance with a new approach to land management than a psychologist would. Resonance is the capacity for connections to form based on the attributes of connectable nodes. Nodes that are too unlike each other will not form a meaningful connection.
  • Synchronization: nodes/concepts aligning themselves to other agents/concepts (fireflies are a common example).
  • Information diffusion: how does information flow through a network? Which nodes slow down information flow? Which test the accuracy or trust-ability of information?
  • Influence: Which concepts or nodes have the capacity to impact others? Which nodes can be trusted? Why? Are single nodes as influential as nodal structures that are in a state of resonance and/or synchronization? (the answer is obviously no). What role do individual nodes play in producing resonance across multiple nodes? Which attributes or actions on the part of nodes contribute most to trust formation and influence generation?
  • Enacting new domains of knowledge: The virus that causes SARS was discovered through a distributed research network, aided by reasonably simple communication technology. We all possess some levels of knowledge. When that knowledge is connected with the knowledge of other people, we are able to access more complex domains of knowledge. For example, the iPad is the combination of innovations and technological advances that spans decades and centuries. The iPad – and its aesthetic and appeal – can only be realized when the knowledge required for its creation is networked and connected.
  • Connected specialization: In complex systems, individual agents/nodes become increasingly specialized. In order to enact new domains of knowledge (see above), we need to connect specialized nodes. Understanding how and why nodes form and connect may help us to understand why we have an iPad but not a Windows tablet (as promised by Ballmer in 2010). Connections have an impact – but we don’t want random connections for connection’s sake. We need connections that increase the capacity of a network of individuals to create and grow knowledge.
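Terms like information diffusion and influence lend themselves to a computational sketch. The toy simulation below is not from the original post – the network, node names, and hop-counting rule are all invented for illustration. It models information spreading one hop per step through a small network, which is one crude way to see how a node’s position (hub versus periphery) shapes how quickly information flows from it.

```python
from collections import deque

# A toy social network as an adjacency list (all names are illustrative).
NETWORK = {
    "alice": ["bob", "carol", "dave"],   # well-connected hub
    "bob": ["alice", "carol"],
    "carol": ["alice", "bob", "eve"],
    "dave": ["alice"],
    "eve": ["carol", "frank"],
    "frank": ["eve"],                    # peripheral node
}

def diffusion_steps(network, seed):
    """Return {node: hops} for information starting at `seed`.

    A crude stand-in for information diffusion: each node passes the
    message to all of its neighbours, one hop per step (breadth-first).
    """
    reached = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for neighbour in network[node]:
            if neighbour not in reached:
                reached[neighbour] = reached[node] + 1
                queue.append(neighbour)
    return reached

# Seeding at the hub reaches the whole network in fewer hops than
# seeding at the periphery -- influence as a property of position.
print(max(diffusion_steps(NETWORK, "alice").values()))  # 3
print(max(diffusion_steps(NETWORK, "frank").values()))  # 4
```

A real diffusion model would add probabilities of transmission and node-level trust filters (the “which nodes slow down information flow?” question above), but even this sketch makes the structural point: the same message takes longer to saturate the network from a peripheral node.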

The stuff is all connected

December 23rd, 2010

I frequently emphasize the substrates at which learning and knowledge are connected or networked. As more attention is paid to learning networks and personal learning, it’s important to highlight that most of the discussion is focused on the social/external substrate, ignoring other dimensions of networkedness (I’m sure that’s a word).

By quick review:

1. Neuronal – brains don’t hold knowledge in chunks – it’s networked. A simple task, such as picking up a pencil, requires numerous areas of the brain to harmonize their distributed activity (sometimes referred to as the “binding problem”) in order to produce the intended action. Recognizing a human face is an astonishingly complex distributed neural activity – an image of a face doesn’t exist in our brains. Instead, different regions of the brain contribute to producing recognition. Olaf Sporns has explored the similarity between some network attributes of the neocortex and other scale-free networks.

2. Conceptual – connections generate meanings. When two or more concepts are brought into some type of relationship, they produce something different than their individual attributes would suggest. Conceptual blending attempts to describe what’s involved in the process of bringing concepts in relation to each other. Burke’s Knowledge Web is similarly based on trying to find how knowledge is connected/related, as is Danny Hillis’ article Aristotle: The Knowledge Web. Or consider a tool like Brainscanr that attempts to detail relationships between concepts in psychology. We are constantly forming and blending concepts. When we are involved in formal learning, we are more conscious of the process as we’re bringing together our life experiences and current understanding of a topic with new information provided by a course or program of study.

3. Social and technological networks – we live these daily and tools like Facebook and Twitter have made these more explicit. Publications from mathematicians and physicists over the last decade have increased attention on networks (Barabási, for example). However, sociologists have been playing in the domain of networks long before the current hype drove networks into popular society. Barry Wellman, Mark Granovetter, and Paul Lazarsfeld laid much of the foundation for what is now being “discovered” about social networks. Researchers are beginning to take a multi-disciplinary approach to networks, realizing that network attributes exist in food chains, transportation systems, etc. Basically, networks underpin life and human existence. The internet, web, and now social media raise the profile of networks because we now experience them daily. When directed toward learning, networks (web, citations, social, etc) are inescapable. As human knowledge becomes more explicit – i.e. stored in a database, awaiting analysis – analytics becomes increasingly important in order to understand complexity. The discovery of the coronavirus that causes SARS was accomplished in a period of a month – an extremely short period of time considering the complexity involved. This was enabled by researchers connecting to each other and sharing information. Understanding how and why people and information connect is a key task of analytics (have I mentioned TEKRI is organizing a conference on Learning and Knowledge Analytics?). Knowledge in any moderately complex task or activity is networked (building a plane, designing a road system, printing a book).
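To make the analytics point concrete: even a very simple measure begins to reveal structure once connections are made explicit. The sketch below is illustrative only – the graph and node labels are invented, not data from the post – and computes degree centrality (the fraction of other nodes each node is directly connected to), one of the most basic questions analytics asks of a network.

```python
# Invented interaction graph: each edge links two nodes that connect
# (people, papers, concepts -- the measure is agnostic).
EDGES = [
    ("hub", "a"),
    ("hub", "b"),
    ("hub", "c"),
    ("a", "b"),
]

def degree_centrality(edges):
    """Fraction of other nodes each node is directly connected to."""
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    n = len(neighbours)
    return {node: len(links) / (n - 1) for node, links in neighbours.items()}

centrality = degree_centrality(EDGES)
# The most-connected node surfaces as the most "central".
print(max(centrality, key=centrality.get))  # hub
```

Real learning analytics would layer richer measures (betweenness, clustering, temporal dynamics) on top of this, but the principle is the same: once connections are stored explicitly, the structure of who and what connects becomes computable.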

Thought experiment on social networked learning (Connectivism)

December 14th, 2010

I’m working on an article on the discussion around connectivism over the last six years. A key problem that arises with criticism about connectivism, and I think with the efforts of proponents to explain it, centres on dramatically different views of ontology, epistemology, and language. In some areas – such as when people ask “how is this different from social constructivism?” – it appears that some view differences as trivial. In other areas – such as when people begin to contrast distributed knowledge and social learning networks in relation to the existing education system – it appears that differences are enormous.

I’ve been grappling with a thought experiment that might help to clarify differences and provide a platform with which to think about learning and knowledge. Zombies and other planets are well explored thought-experiment models, likely because they allow the thinker to jettison some of the assumptions that are inherent when thinking about entities that have real world presence. By moving to other planets, or stripping the cognitive capacity of zombies, we are better able to isolate the phenomenon that we want to consider. Here is my current version of a connectivist thought experiment – I appreciate any feedback, questions, disagreements, withering critiques:

We travel to a different planet (Planet Connecton). The ecosystem is similar to what we experience on earth, so we are free to move about and explore. During our exploration, we encounter a human-like species. As we observe their interactions with others, we quickly notice a distinct difference: as each “person” communicates, a cloud appears above their heads. In this cloud we see explicitly their knowledge. The knowledge we observe is networked, so we see real-time changes to their knowledge patterns as they read, learn, and as they interact with others. When they express an idea to a fellow Connectite, we can observe how their thoughts begin to form, which areas of expertise they draw from, which contrary ideas they briefly entertain in an attempt to communicate…but then decide to dismiss. Even more fascinating, we are able to see how a new concept that they learn is broken down into a neural (biological) network. Different conceptual and neural networks are constantly activated and suppressed depending on the context or situation of learning/interaction. We also see how clouds between individuals connect. For example, if one Connectite tries to solve a problem, we observe cross-cloud connections as different levels of skill and knowledge are required for different tasks. Even the simplest task requires the activation of connections to the clouds of others and even objects. These objects can be seen as equivalent to our cognitive objects – books, papers, Google.

We observe one individual reading a book on biology (at roughly the equivalent of an earth-based undergraduate degree). As she (sure, gender still exists here) reads about DNA, we begin to see isolated nodes – the fragments of knowledge – appear in the network within her thought cloud. Some of these nodes are quickly connected to existing conceptual patterns. Some nodes, those that don’t readily “cohere” or “resonate” with the existing knowledge of the learner, remain isolated or at best have simple, weak connections. Were we to observe this learner for a period of time, we could see various nodes cohering and strengthening in prominence and, in other cases, weakening and fading, as the learner moves between different subject areas or connects with other learners. Knowledge growth is constant in all domains of her personal and professional life.

As visitors to this planet, we are able to observe every aspect of knowledge and learning through the formation of connections – at the neural, social/object/external, and conceptual substrates. We see the interplay of a social interaction that influences new neural connections, which in turn update and readjust the conceptual understanding of individuals. Surprisingly, the rich, advanced, and varied knowledge of this species can be thoroughly explained through the connection clouds.

We’re a bit surprised because what we’ve learned about, well, learning and knowledge, on earth is so much more complex – theories of intricate details about motivation, images, emotions, and so on. On seeing the Connectites interact, the simplicity of connection-based learning and knowledge pushes many of our earth-based theories to the outer edges of relevance. Instead of starting with learning at different stages (institution, individual, organization) or seeing numerous views of learning (social, constructivist, cognitivist, situative), we break from our insistence on complicated explanations of complex phenomena and collapse down to connections as the basic unit for understanding knowledge and the process of learning. The elements that impact connection-forming in the process of learning – such as emotions, previous experience, and motivation – are not nodes within the connection clouds. Instead, they are enablers or influencing elements that impact whether or not a connection will form or the way in which that connection will resonate with the rest of the network.


What more do we need for a theory of learning and knowledge than what we observed in our interactions with people on Connecton? What can’t we explain with this model?

Secondly, what questions and reservations do you have about this model?

Reflections on open courses

August 19th, 2010

UPDATE: The final research report is available: The MOOC Model for Digital Practice

As part of a SSHRC grant, we (Dave Cormier, Sandy McAuley, Bonnie Stewart, and I) are researching open courses such as Connectivism & Connective Knowledge, Edfutures, and the upcoming Personal Learning Environments & Networks.

Before I dive into reflections on my experiences with open courses, I want to focus on how we got here. I’ve been reasonably active in sharing ideas around openness in education for about ten years (as a blogger and through articles I’ve posted on elearnspace). In the process, I’ve been able to collaborate and learn with a large network of peers/colleagues from around the world.

In early 2006, my work on openness – more specifically, networked learning – took on another dimension when I announced an open online conference on connectivism. I was with the Learning Technologies Centre at University of Manitoba at the time. My simple blog post ended up generating about 750 email registrants for the conference (we eventually cut off registrations). The event – Online Connectivism Conference – came together a bit haphazardly. We (LTC) set up an email list to collect subscriptions, convinced Elluminate to provide a license for the event, set up a Moodle site, set a conference tag (which we tracked in PageFlakes at the time), etc.

The format I used for OCC2007 served as the base for another online conference a few months later on the Future of Education. With this event, we extended our speaker list, experimented with different aggregation features (PageFlakes again, but also using Google Alerts to track comments on the event), Second Life (my first introduction into cross-media learning around events and into the value of letting others add to the course in spaces that they were passionate about), iTunes/podcast feed, Twitter, etc. The conference resulted in a special issue of Innovate on the Future of Education. In the same year, I ran an open conference on the corporate sector (LearnTrends) with Tony Karrer.

In these conferences and open events, as well as courses taught by David Wiley and Alec Couros, we (participants and hosts of these events) were trying to give structure to open courses – technologically, socially, pedagogically. I address this topic in a bit more detail in this post on spiralling innovation.

In spring of 2008, I sent an email to Stephen Downes asking if he’d be interested in teaching a course with me on connectivism and connective knowledge. That year, we met up in Memphis at a Desire2Learn conference and hashed out the general format of the course and how we would communicate with learners. We decided to use the software that Stephen uses for OLDaily. Working with a programmer like Stephen quickly added other dimensions: he added gRSShopper to the Daily so course participants could add their blog posts to the Daily, and later included Tweets that included the course hashtag, etc.

The Daily was one of the most successful additions to CCK08. By the time we offered CCK08, Stephen and I had formed marginally compatible views of the role of technology and pedagogy in open courses. We both felt, humbly, of course, that we could do for teaching what MIT had done for content (with OCW). Dave Cormier joined us in the course as well, proving to be an effective irritant and moderator of discussions.

As registration for CCK08 increased (we capped for-credit learners at 25) to over 2300, the term MOOC (massive open online course) was coined. We built on the model and tools of previous open conferences to create a course defined by diversity of technologies, speakers, tools, and opinions (Antonio Fini has published an analysis of the technological dimensions of CCK08).

And that was the state (and brief personal history of) open courses by the time we rolled out CCK08.

Early lessons:

- There is value in blending traditional with emergent knowledge spaces (online conferences and traditional journals)
- Learners will create and innovate if they can express ideas and concepts in their own spaces and through their own expertise (i.e. hosting events in Second Life)
- Courses are platforms for innovation. Too rigid a structure puts the educator in full control. Using a course as a platform fosters creativity…and creativity generates a bit of chaos and can be unsettling to individuals who prefer a structure with which they are familiar.
- (cliche) Letting go of control is a bit stressful, but surprisingly rewarding in the new doors it opens and liberating in how it brings others in to assist in running a course and advancing the discussion.
- People want to participate…but they will only do so once they have “permission” and a forum in which to utilize existing communication/technological skills.

Question Strand 1

How do MOOCs reflect effective practices within the digital economy?

First a clarification on “digital economy”. I interpret economy to refer not only to monetary exchange, but to the growth and development of knowledge. All economic activity is at its core a knowledge activity. Economic systems seek to provide valuation of an entity (physical or otherwise) and then to provide a mechanism for ongoing value negotiation and exchange. Historically, different eras have had different entities that underlie the valuation process: gold, wheat, coal, oil, and so on. In all instances, however, knowledge is the central entity even when it’s obscured by a focus on commodities or physical objects. Expertise and skill play a role in adding value to the underlying commodities: a jeweller takes gold and fashions it into a necklace. The economy is grown by value addition. An argument could be made (and has been made by Taichi Sakaiya in The Knowledge-Value Revolution) that all work is and has always been knowledge work.

A coffee mug, or tractor, is valuable not for the material used to create it, but rather for the knowledge and skill on the part of those who design and build it. And the value of knowledge goes back through the entire chain of production – from extraction of metals from the ground to heating/melting/forming them into something that can be used later to make a mug or a tractor. Value addition at each stage is a function of some application of knowledge. In the past, systems have attempted to raise barriers to participation in order to protect knowledge or preserve the reputation (integrity) of those already privileged to be part of the system. Guilds in Europe are a perfect illustration. At their peak, guilds were wonderful systems for those on the inside, but very limiting to those on the outside. Information systems – news and media in particular – operate as guild-like barriers to newcomers by leveraging high capital costs.

The internet is a barrier-reducing system. In theory, everyone has a voice online (the reality of technology ownership, digital skills, and internet access adds an unpleasant dimension). Costs of duplication are reduced. Technology (technique) is primarily a duplicative process, as evidenced by the printing press, assembly line, and now the content duplication ability of digital technologies.

As a result, MOOCs embody, rather than reflect, practices within the digital economy. MOOCs reduce barriers to information access and to the dialogue that permits individuals (and society) to grow knowledge. Much of the technical innovation in the last several centuries has permitted humanity to extend itself physically (cars, planes, trains, telescopes). The internet, especially in recent developments of connective and collaborative applications, is a cognitive extension for humanity. Put another way, the internet offers a model where the reproduction of knowledge is not confined to the production of physical objects.

Creating a second tractor has the same input costs as creating the first one (it’s not the knowledge that is the barrier here. The restriction of duplication rests in the physical embodiment of knowledge). Creating a second copy of a video has a fraction of the costs of the first. Digital information is frictionless. MOOCs are a means whereby universities can bring the practices and activities that were formed to serve a physical classroom into the digital realm. Put another way: MOOCs offer educational institutions an onramp to the reality of learning in a digital economy, placing knowledge-based activities like curriculum and knowledge building on a value proposition that is not tied to physically embodied knowledge.

What are their implications for knowledge-making and what it means to know today?

If you accept my argument above – that work has always been knowledge work and that physical objects embody knowledge and that the internet is reducing the costs of knowledge replication, thereby serving as a cognitive extension for humanity – then MOOCs are an instantiation of what knowledge-making looks like in a digital world. Knowledge is a mashup. Many people contribute. Many different forums are used. Multiple media permit varied and nuanced expressions of knowledge. And, because the information base (which is required for knowledge formation) changes so rapidly, being properly connected to the right people and information is vitally important. The need for proper connectedness to the right people and information is readily evident in intelligence communities. Consider the Christmas day bomber. Or 9/11. The information was being collected. But not connected.

Knowledge-making activities are amplified because technology makes knowledge production explicit. In 2003 MIT announced OCW. OCW shares the artifacts of knowledge work (a course, a lecture, a syllabus). MOOCs share the process of knowledge work – facilitators model and display sensemaking and wayfinding in their discipline. They respond to critics, to challenges from participants in the course. Instead of sharing only their knowledge (as is done in a university course) they share their sensemaking habits and their thinking processes with participants. Epistemology is augmented with ontology.

What economic opportunities and challenges does the open model of participation bring into focus?

The open model of participation calls into question where value is created in the education system. Gutenberg created a means to duplicate content. The social web creates the opportunity for many-to-many interactions and to add a global social layer on content creation and knowledge growth. The SARS scare of 2003 exemplifies how scientists from around the world can work together and collaborate in order to solve a complex problem. Connectedness fosters knowledge growth.

Whatever can be easily duplicated cannot serve as the foundation for economic value. Integration and connectedness are economic value points.


Look at Silicon Valley. The knowledge growth of this region is fuelled by the integration of diverse elements: scientists/researchers, entrepreneurs, and funders. Separately, these elements provide only a fraction of the power they provide as an integrated system. Connectedness amplifies knowledge and knowledge’s potential.

In education, content can easily be produced (it’s important but has limited economic value). Lectures also have limited value (easy to record and to duplicate). Teaching – as done in most universities – can be duplicated. Learning, on the other hand, can’t be duplicated. Learning is personal, it has to occur one learner at a time. The support needed for learners to learn is a critical value point.

In theory, we will be building on the right foundation if we shift our financial investment in education from creating content, and turn it to the learning process (fostering, guiding, directing, interacting).

But didn’t you just say that content (information) changes so quickly that we need a way to stay on top of it? How can a lecture recorded last year be used again this year? Wouldn’t we have to continually deliver new lectures to reflect knowledge growth??

Yes, we would need to continually redo lectures. But we shouldn’t do those in isolation from other universities. How many introductory psychology courses does a field need? Educators should collaborate and share around the content needs of their discipline. Learning, however, requires a human, social element: both peer-based and through interaction with subject area experts (again, both epistemological and ontological).

What is the role of content? Of teachers? Of evaluation/accreditation?

I think this was answered above.

Short version:

- Content is readily duplicated, reducing its value economically. It is still critical for learning – all fields have core elements that learners must master before they can advance (research in expertise supports this notion).
- Teaching can be duplicated (lectures can be recorded; Elluminate or similar web conferencing systems can bring people from around the world into a class). Assisting learners in the learning process, correcting misconceptions (see Private Universe), and providing social support and brokering introductions to other people and ideas in the discipline is critical.
- Accreditation is a value statement – it is required when people don’t know each other. Content was the first area of focus in open education. Teaching (i.e. MOOCs) is the second. Accreditation will be next, but, before progress can be made, profile, identity, and peer-rating systems will need to improve dramatically. The underlying trust mechanism on which accreditation is based cannot yet be duplicated in open spaces (at least, it can’t be duplicated to such a degree that people who do not know each other will trust the mediating agent of open accreditation).

Question Strand 2

In terms of discourses, literacies, and prior knowledge, what digital skills are privileged and rewarded within the MOOC environment?

The skills that are privileged and rewarded in a MOOC are similar to those that are needed to be effective in communicating with others and interacting with information online (specifically, social media and information sources like journals, databases, videos, lectures, etc.). Creative skills are the most critical. Facilitators and learners need something to “point to”. When a participant creates an insightful blog post, a video, a concept map, or other resource/artifact it generally gets attention.

A MOOC requires production of resources from participants as the facilitators operate from a stance of participative pedagogy. Facilitators need participants who create resources and share their opinions. Each act of creation is a potential node for connection. Technical skills that form a foundation for creativity include: writing, downloading and installing software (like Audacity, Jing), creating a podcast (which has its own set of skills including recording, editing, and uploading the file), creating and sharing a video, creating and sharing a mindmap/concept map, and posting discussions into a forum like Moodle – all of which are basic skills with computers and the internet.

Other skills include:

- Tracking conversations in an LMS like Moodle, Google Reader, Google Alerts
- Capturing important resources using software that utilizes social functionality such as: Delicious, Zotero, Diigo, Evernote
- Developing a coherent view of information (i.e. growing personal knowledge) – personal reflection through blog posts, concept maps, creating artifacts that communicate personal knowledge to others (see Wendy Drexler’s video from CCK08), or systems that enable individuals to form connections between concepts and resources (such as PersonalBrain)
- Engaging with others through Twitter, Facebook, Posterous, Skype, Elluminate, Second Life
- Intentional diversity – not necessarily a digital skill, but the ability to self-evaluate one’s network and ensure diversity of ideologies is critical when information is fragmented and is at risk of being sorted by single perspectives/ideologies.

What factors limit participation?

MOOCs are global events, not regional ones such as courses in a university. This distinction injects four factors that can limit participation.

The volume of information is very disorienting in a MOOC. For example, in CCK08, the initial flow of postings in Moodle, three weekly live sessions, Daily newsletter, and weekly readings and assignments proved to be overwhelming for many participants. Stephen and I somewhat intentionally structured the course for this disorienting experience. Deciding who to follow, which course concepts are important, and how to form sub-networks and sub-systems to assist in sensemaking are required to respond to information abundance. The process of coping and wayfinding (ontology) is as much a lesson in the learning process as mastering the content (epistemology). Learners often find it difficult to let go of the urge to master all content, read all the comments and blog posts.

Social dimensions of a MOOC present another challenge. Learning is a social, trust-based process. The tone of discussions – sometimes intentionally negative and at other times simply a misunderstanding – produced friction in the synchronous and asynchronous interactions of CCK08 and CCK09. Strong views and opinions can create flare-ups that participants may find intimidating. Differences in cultural norms and language barriers also contribute to misunderstandings. Patience, tolerance, suspension of judgment, and openness to other cultures and ideas are required to form social connections and negotiate misunderstandings.

Technology ownership and bandwidth present additional barriers – especially for participants from developing countries. Streaming video and Second Life require reasonable quality of bandwidth (and a reasonably new computer with good quality video/graphics card). Second Life sessions produced difficulty for many participants (especially in the Future of Education conference when SL was still less stable). When Dave Cormier and I taught an open course on Emerging Technologies to a group of educators from Africa, bandwidth was so poor that live audio sessions in Elluminate weren’t possible. More mundane concerns relate to individuals not having microphones, web cams, or headsets.

Time zones can also be concerns in MOOCs, especially if regular live sessions are planned. In CCK08, we ran live sessions at varying times to accommodate needs of international participants, but even then, we were unable to satisfy the time needs of all participants. We recorded all live sessions and made the recordings available shortly after the session. However, participants stated that the recordings still produced a feeling of isolation from others in the course.

How can the MOOC model help engage and develop an effective digital citizenry?

MOOCs reduce barriers to learning and increase the autonomy of learners as they develop skills to create, engage, and share in global interactions. An effective digital citizenry needs the skills to participate in important conversations. The growth of digital content and social networks raises the need for citizens to have the technical and conceptual skills to express their ideas and engage with others in those spaces. MOOCs are a first-generation testing ground for knowledge growth in a distributed, global, digital world. Their role in developing a digital citizenry is still unclear, but democratic societies require a populace with the skills to participate in growing a society’s knowledge. As such, MOOCs – or similar open, transparent learning experiences that foster citizens’ confidence to engage and create collaboratively – are important for the future of society.


July 19th, 2010

Naming things is important. It’s easier to say “web 2.0” than “participative, fragmented-content, conversation-driven web”. Unfortunately, names give shape to concepts that are often imprecise. And, once named, marketers, consultants, and buzzwords galore come running to “monetize the synergistic affordances of web 2.0 [or whatever]”. Earlier today I caught a Twitter post about “crowdsourcing the long tail of training content”. Ugh. Sometimes words hurt more than they help.

Still, naming things can help to mark a turning point. Or a good name can draw attention to changes and give them a defined form that can be used to capture significant trends. Web 2.0 was one such turning point. In the field of learning, Stephen Downes’ elearning 2.0 article was another.

We are now at a period where technological advancements are beginning to coalesce into something more definitive than a random collection of innovations like FourSquare, semantic web, and augmented reality.

Steve Wheeler kicked off a conversation last week with his presentation on web 3.0. Downes replied, suggesting Web X (for web eXtended) would be a good title. A great term – but unfortunately, it sounds like WebEx, the online meeting vendor. We need another term. I’ve been thinking about xWeb. But I was not reaching for clever words in isolation. Today Rita Kopp posted on the eXtended Web. Like the terms PLE, connectivism, elearning 2.0, and even web 2.0, xWeb doesn’t represent novel insights. Instead, it gives form to a topic that many people are grappling to define.

What is the xWeb?

xWeb is the utilization of smart, structured data drawn from our physical and virtual interactions and identities to extend our capacity to be known by others and by systems.

This is an imprecise definition, but it’s a start. Many elements are involved, as xWeb builds on previous iterations of the web/web 2.0. What is unique with xWeb is the way in which it will transform how we work, learn, and interact with each other and with information. At one level, it is a maturation of the web – a natural extension of current trends with technology and the internet. At another level, it involves a negotiation of two key questions that I continue to grapple with:
1. What does technology do better than people?
2. What do people do better than technology?

With xWeb, we are rethinking what we have to do as people and starting to rely on what technology does better than we possibly could.

Over the last few years, I’ve been trying to capture the nature of the change around technology. I’ve blogged some of those thoughts here (and on elearnspace), included others in presentations and papers, and captured others on delicious.

Some of the recurring themes:
semantic web
location-based services (geoweb)
data overlay
smart information
social media
open data (and data in general)
internet of things
cloud computing
mobile technologies
analytics and monitoring

And, to that list, we could add filtering, recommender systems, distributed “like this” tools, annotation tools (diigo), wearable computing, and so on.

These comprise the key themes at the centre of the xWeb:

1. The physical and virtual worlds are blurring – as evidenced by augmented reality browsers (Layar) and services like Yelp and Foursquare
2. Data is being laid on top of physical objects (digital graffiti and contextual/historical overlays, as well as the 3D web)
3. Data is becoming more intelligent – rather than simply pointing to other sources (as with URLs), data is now beginning to quantify the nature of that connection
4. Physical objects are projecting their presence into the digital (the internet of things)
5. Data is increasingly stored in the cloud, permitting better access across a range of devices
6. Data is increasingly open, permitting new and novel combinations by end users…Google Maps was one of the first examples of the power of openness; many examples have followed (including OpenStreetMap)
7. The abundance of open data, new data sources (social media, sensors), and numerous data uses (overlay, digital graffiti, and social networks) sets the stage for advanced analytics about end users or the current state of mind in a society (such as Twitter trends). Connections mean things. As connections between people, between people and data, and between data and data become more abundant and explicit, we can gain new insights into what people are thinking and how/why they are acting.
8. Smarter data with better analysis sets the stage for personalization and adaptation of content/socialization/product provision.
9. Data + analysis + personalization enables predictive computation: “because you are in this demographic, like these types of movies, and are friends with these people, you will like this particular coffee maker”. Instead of us searching for data, data finds us. In a sense, data knows us.
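Point 9 can be sketched as a toy computation. The following is a minimal, hypothetical illustration – every function name and data value is invented for this post – of the “data finds us” idea: score items by how many attributes they share with what is already known about a person, then surface the best matches unasked.

```python
# Toy predictive step: rank catalog items by overlap between a user's
# known attributes/likes and each item's tags. Purely illustrative.

def recommend(user_tags, catalog, top_n=2):
    """Return up to top_n items sharing at least one tag with the user."""
    scored = []
    for item, tags in catalog.items():
        score = len(user_tags & tags)  # naive similarity: count of shared tags
        scored.append((score, item))
    scored.sort(reverse=True)  # highest overlap first
    return [item for score, item in scored[:top_n] if score > 0]

# Invented profile and catalog for demonstration
user = {"coffee", "design", "film"}
catalog = {
    "espresso maker": {"coffee", "kitchen"},
    "lens kit": {"film", "photography"},
    "garden hose": {"garden"},
}
print(recommend(user, catalog))  # the hose never surfaces: no shared tags
```

Real predictive systems draw on far richer signals (collaborative filtering, social graphs, location), but the kernel is the same: explicit connections between people and data drive the prediction.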

Complexifying Dave Snowden, Cognitive Edge, SenseMaker

May 7th, 2010

Toward the end of April, TEKRI hosted a conference in Edmonton on Making Sense of Social Media. Dave Snowden keynoted the event. I’ve “known” Dave for about eight years. First through his Cynefin model, then the ACT-KM listserv, and more recently, through his blog. He also spoke at several online conferences I organized while at University of Manitoba. It was a pleasure to meet him in person. Unfortunately, opportunities for dialogue were somewhat limited.

Dave delivered a great keynote – slides and podcast are here. TEKRI will post video soon. He combines deep knowledge on a fairly wide range of subjects (more on that soon) with great wit and an engaging presentation style. Most importantly, he presents his ideas in a manner that resonates with the audience. Great ideas need to be presented in a manner that sparks new connections and a desire for creativity in an audience. Dave delivered on both counts.

I agreed with much that Dave had to say – I’ve been addressing similar topics under the umbrella of connectivism: distributed cognition, coherence, social learning, pattern recognition and expertise, and decentralized narratives.

After the conference, Michael Cheveldave (from Cognitive Edge – the company Dave founded to advance his theories and methods) very ably ran a three day accreditation workshop at the TEKRI office. On Tuesday night, Dave stopped in for a two hour informal presentation.

And that is what I’d like to address.

First, information is not power. And neither is money, or any of the other terms that get equated with power. Quite simply, integration is power. How an individual or organization forms a coherent view (integrates elements) internally, and how it relates to the entities (venture capital firms, government officials, vendors, clients) that either enable or constrain its actions, ultimately determines success.

What, for example, gives Goldman Sachs their “power”? Is it their wealth? No – other firms and countries have significant wealth but lack the capacity for influence of GS. Is it the location of their headquarters – i.e. New York? No – many top banks are headquartered in London, Hong Kong, or other major cities. The real power of GS is how they have managed to integrate their company with business and government. The bailout of AIG benefited GS more than almost any other firm. The fact that former GS leaders hold influential government positions reinforces the company’s integration with government. Power and influence, then, are not single points but rather the capacity of an organization (or individual) to construct an integrated network that not only frames a certain reality or addresses certain problems or situations in society, but also creates the very situations that only it can solve.

Goldman Sachs is a great example. When GS created financial instruments of growing complexity, the government needed to hire their employees in order to make sense of the new financial climate. This in turn created a structure that reinforced the power structure of GS, ensuring “too big to fail” status.

What does this have to do with Dave Snowden?

I’m going to make an imperfect leap from power as an integrated network in corporate and government settings to power as integrated knowledge in conversations, education, and society in general. Dave has a wealth of knowledge, drawing effortlessly from poetry, philosophy, organizational theory, and historical events. However, after a few minutes of listening to Dave weave Hegel’s work with complexity science and neuroscience, and throw a shot or two at Peter Senge and others, you end up with an entity that is conceptually challenging to interrogate. After Dave had held the floor for about two hours (with periodic questions from the audience), he had created a context of discussion that gave him full control to direct and redirect the conversation according to principles and terms he had established during his presentation. If someone builds a house, you are left with only the option of arranging furniture once they let you in.

I’ll probably insult both people by saying this, but Dave Snowden shares some attributes of certainty in his reasoning with Stephen Downes. They know what they think. They say it clearly and forcefully. Doubt, vagueness, and uncertainty, if they are part of the process of formulating their views, are well-disguised in dialogue. I, in contrast, (as Stephen has noted in his post the vagueness of George Siemens) do not possess this certainty. I’m somewhat at peace with ambiguity, vagueness, and uncertainty. As philosophers, both Stephen and Dave have been trained for precision in word use and thought.

Dave’s ability to bring a broad knowledge base to bear on knowledge, complexity, and organizational change (with an air of knowingness) results in many nodding heads as he speaks and very little debate when he is done. Essentially, his mode of dialogue creates an integrated cognitive structure (i.e. power base) that is largely unassailable without attempting to interrogate and dismantle each element that he has already connected. This is, I’m sure, why he is a sought-after speaker and consultant.

About SenseMaker

During a Cognitive Edge accreditation workshop, I encountered SenseMaker. SenseMaker is an important tool. Grad students conducting research that involves narrative analysis will find this to be an exceptionally useful piece of software. SM takes qualitative data (narratives) and adds a quantitative overlay through a process of self-signification. There is much to be excited about here.

I signed some sort of NDA, so I haven’t a clue how much detail I can go into about SM. Basically, as a narrative-driven tool, SM enables researchers, business people, politicians, policy makers, and others to make sense of complex situations. But is narrative capture and self-signification sufficient to “make sense” of complex subjects? In the edfuture course, we’re exploring trends and patterns. These will be used as a basis for considering long-term implications in society and education. The value of tracking trends – drawing on reliable data sources (World Bank, UNESCO, UN, US gov’t) as well as narratives – rests in challenging our existing views, thereby loosening our rigid existing frame of reference and increasing our capacity for adaptivity.

External, non-narrative data sources are not part of SenseMaker. Perhaps I’m looking for a tool that does too much, but I can’t separate narrative from the tremendous amounts of data now being created and captured by organizations (and by our constant externalizing of our activities and thoughts through social media and mobile devices). As Stephen Wolfram has stated, the future of science, and the biggest innovation of our era, is computation. I’ve been playing with the concept of learning analytics for several years, but I see analytics as part of a larger integrated information structure. It’s nice to know what learners are doing, but I want the ability to situate this information in a larger context of economics, societal trends, and other influencing factors. I’ll tackle this in more detail in a subsequent post. For now, I want to emphasize the value of SenseMaker for research and express my desire for a complementary tool that offers a more integrated, data-driven approach to sensemaking.

Call for Papers: IRRODL special edition on Connectivism

April 1st, 2010

Two things cause me random moments of joy about this call for papers on connectivism and social networked learning:

1. I’ll be able to collaborate with Gráinne Conole. I met Gráinne a few years ago when we were both presenting at a conference in Lisbon. She continues to make enormous contributions to the educational technology field.
2. IRRODL is one of the (if not the) most widely cited journals in educational technology/distance/online education.

Special Edition: Connectivism: Design and delivery of social networked learning

Edited by George Siemens (Athabasca University) and Gráinne Conole (Open University)

The special issue will have its main focus on Connectivism and social networked learning in distance and open education.

Particular emphasis will be placed on emerging technologies, innovative design and evaluation approaches to the design and delivery of social networked learning, learning theory frameworks for digital learning, faculty development through distributed models, innovative pedagogical approaches, research on the effectiveness and applicability of connectivism in various contexts, historical roots of social networked learning, and comparison studies between major learning theories in relation to connectivism.

We particularly welcome papers on:

  • Actor Network Theory in relation to social networked learning
  • Activity Theory
  • Critique of Connectivism as a learning theory
  • Design methodologies for social networked learning
  • Personal learning environments and learning management systems
  • Research agenda around Connectivism
  • Distributed learning in fragmented information environments
  • Open learning and transparent teaching
  • New theoretical insights into understanding new technologies
  • Models and frameworks for social networking
  • Innovative approaches to the design and delivery of social networked learning
  • Case studies and empirical studies on social networked learning
  • Epistemological foundations for networked knowledge

Authors are cautioned that the International Review of Open and Distance Learning is not soliciting manuscripts dealing with technology use in traditional classrooms.

More information is available here.


March 30 – Call for Papers
May 30 – Call closed
July 30 – Peer review completed, revisions requested
August 30 – final copy due
October 30 – Issue released

Changing the System at a National Level

March 14th, 2010

This past week, I participated in a conference hosted by the Technology Plan for Education Observatory (where I serve as an external expert on the Scientific Committee) in Lisbon.

Portugal has initiated an unprecedented roll out of computers in a device called the Magellan. Magellan is a small computer based on Intel’s Classmate – dual boot Linux/Windows XP – that costs each student about 50 Euro (~$65 USD). Parents who want an extra computer have to pay something closer to 300 Euro. Having distributed 470 000 Magellan laptops to grade one students over the last two years, the Observatory is tasked with researching the impact of these initiatives and suggesting ways forward with the Technology Plan for Education (TPE). (Portugal will also be providing 1 million Magellans to Venezuela).

Portugal is approaching a 2:1 student-to-computer ratio, though at younger levels it’s closer to 1:1. Early research results aren’t surprising:
- Students are heavy users of computers, but not for education
- Teachers make limited use of computers and other technologies in class
- Parents are limited computer users
- Teacher training in using computers effectively in classrooms is lacking

I presented the following concluding thoughts to the Observatory at the close of the conference:

At the core of the discussion surrounding the future of education is a concern of how to navigate shifting power and control. What is the role of the student? The teacher? The school? The parents? If learners have the ability to do what educators have done in the past (access information directly), what role should the educator play?

Part of the discussion this week has been on the lack of computer use in classrooms. I’ve been thinking about this argument for several years. I’ve concluded that class time is not wisely used. It’s expensive to get educators and students together in a physical space. Perhaps classrooms are not the place to emphasize computer use. Perhaps face-to-face time should take on a different model than we currently utilize. We should do what we can with technology outside of classrooms. Then we wouldn’t need to meet in classrooms as often.

I mean, if I’m at a face-to-face conference and all of the sessions are online, why bother attending in the first place? It’s the classroom model that needs rethinking, not computer use in classrooms. Stop trying to bend and twist the technology medium to serve f2f needs. Sure, there are instances where searching or tweeting about a subject may help extend the conversation. But, depending on the age level of learners, I think we’re often further ahead to extend the learning process with technology (i.e. out of classroom) and focus our valuable f2f time to do things that we can’t do online.

The Portuguese Secretary of Education made an interesting opening remark during the conference opening: schools are the primary vehicle for addressing societal inequality. I agree. We need the function schools currently perform. I’m not convinced, however, that we need schools as we know them today in order to meet this vital obligation.

When we start crafting models that have a future focus, we need to find some premise for making our decisions. How will we decide if our choices are the correct ones when we don’t yet know of the impact? I suggest that a good choice today is the one that gives us the greatest range of future choices tomorrow. When we don’t know where the future is trending, we need to adopt a many-small-experiments model. We can’t bet everything on one approach. When we cannot anticipate, we must investigate. Small experiments are key.

Most of us in education agree on our needs today:

1. We want good teachers
2. We want good educational content
3. We want to give our learners a bright and hopeful future
4. We want school systems that are relevant to learners and to society
5. We want schools to remedy the social and cultural inequalities that other institutions of society generate

While we agree on the purpose, role, and need of education, we don’t agree on the way to fulfill these needs. We have a sense of the future we desire, but are adrift in conflicting views on how to achieve it.

Five key areas are worth considering:

Technology

I often hear, as I have this week, that technology is neutral, that it is a tool that we select and use. I strongly disagree. Technology is not neutral. Each tool reflects certain philosophies and beliefs that are designed (or coded) into it. Software is a mix of constraining and controlling choices reflective of corporation or programmer goals and intentions. Technology is also actively promoted by a host of corporations and individuals who seek personal gain through this promotion. This isn’t necessarily a bad thing – it just is.

Why is it that we are so certain about technology? Does anyone think we will be using less technology in a year? In five years? At what point do we pause and ask “Am I using too much technology?” or “How am I being changed by my reliance on technology?” At what stage do we say “Enough. Here is my limit.”? Or is technology a limitless landscape that is inextricably bound to humanist ideals of progress?

I’m a huge supporter of technology. But I’m more and more interested in the boundaries – if any – we are prepared to place on its role and influence in our daily lives. Is there any other concept in our lives where we permit such limitless future influence?

And this is the irony of technology: Technology creates problems that can only be solved by more technology. Others have said this before. But it is quickly becoming an inescapable reality in our daily lives. The technology and innovations in healthcare that have extended human life and created modern cities have also contributed to population explosion. The only way to feed a world with 6+ billion people (a number only made possible by technology) is to rely on more technology: fish farms, GMOs, etc.

Technology is philosophy. Technology is ideology.

Many of the battles that humanity has fought in the past about human rights, societal organization, democracy, and the role of government, are now being renegotiated in the digital realm. A programmer is today’s policy maker: you can do this, but not that. Software companies are today’s property owners: this is my content, but I’ll let you farm it on my land (or site).

When I hear people talk about the neutrality of technology, I get worried. This ideology-blindness is disconcerting. We are controlled by what we’ve created as much as we control it. Technology is now more than an extension or augmentation of humanity. It is increasingly becoming humanity. Today, I view my iPhone less as a device than as a part of my cognition. We need to surface technology’s hidden ideologies and philosophies. If we don’t, we dance blindly to a tune that we refuse to acknowledge but that still shapes our moves.

Teaching and Learning

Teaching and learning are the most important aspects of the TPE. Teachers will continue to play a vital role in the lives of students in the foreseeable future. Investments in building educator capacity are important. Children are whole beings; their understanding, their learning, and their knowledge do not segment the way our society is structured: home, school, play. TPE’s emphasis on evaluating family technology use and the out of class contributions of technology to learner development are valuable.

Alternative pedagogy – one that abandons the ideological tethering from previous eras – requires that we answer several questions: Which classroom practices does technology render obsolete? What changed roles do learners and teachers play in this game? What systemic inefficiencies need to be addressed? Which policies hinder, rather than enable, systemic adaptation? These questions are at the heart of educational reform.

We need to know what we are changing to, not what we are changing from.

Practical concerns exist. Preliminary research by the Observatory shows many students are helping teachers with setting up computers, using the whiteboards, and other technical tasks. Teachers’ use of technology is, I suspect, heavily influenced by confidence. Other concerns arise as to the physical set-up of classrooms. A point was made during the conference about classrooms now requiring curtains or blinds to reduce screen glare. Most classrooms are not equipped with sufficient power outlets for recharging laptops. Practical concerns of this nature cannot be overlooked in a successful national laptop roll-out.

Content

What we will learn in the future is largely irrelevant from the standpoint of today. How we will learn in the future is critical. In this sense, content is closely tied to innovation in teaching and learning practices.

Content has taken a beating over the last decade. First with web 2.0 and now with social media, focus has been on interaction and engagement. Obviously content has a role to play. The key question for me is whether we need content in order to start learning or whether content is the by-product of an effective learning experience. I’m somewhat partial to the latter view: engaged learners tackling complex subjects under the direction of a talented teacher will learn more than those who consume content. MIT’s decision to discontinue first year physics class lectures attests to this.

Content providers to education, after a long period of drubbing, are beginning to find their niche and to push their agenda: high value content, interactive content, well-organized and structured content. During the conference, we heard that publishers feel that we need them and that without their contributions, we are somewhat lost. Quality, structured content was presented as the means to solve education’s dilemmas.

While context is the primary determinant of how we balance content and interaction, I have a different view of content from what publishers promote. I’m not convinced that nicely packaged and structured content is what we need. Yes, I can understand how well-structured content can lead to content personalization. But beautiful structures are of limited value when they fail to serve the needs of society. Properly tagged content, tied to learning objectives and learning profiles, means nothing if it doesn’t assist in developing the learner’s ability to produce personal content (rather than being fed personalized content).

We can organize our content in two primary ways: technologically or socially. These methods have some overlap. Technology enables the social (folksonomies) and the social drives the technological (Facebook). There seems to be a drive to organize the world’s content in a type of digital Library of Alexandria. I think that’s a reasonable idea. But we have to ask ourselves how digital content should be organized based on what it is, rather than on our assumptions about content organization.

If we were to build a library today, what would it look like? What would we include? How would we make sense of it? Do we worry about having too much? Or do we take a Google-like approach and dump everything, wherever, and apply intelligence at the point of search. Do we need organization applied at the point of content creation or do we need it applied at the point of use or search?

Quality of content is a genuine concern. A pure dichotomy doesn’t exist, but we can see points of tension: Apple’s App Store vs Android apps, Britannica vs Wikipedia. How much curation do we need? How will we determine quality? How will end-user feedback inform our actions?

The availability of open educational resources also changes the teacher’s role in relation to content. Teachers should use freely available resources wherever possible. If resources don’t exist on a subject, they should be developed collaboratively across school systems. In terms of content, learners should create, teachers should curate.

Leadership

Technology is, possibly in a positive sense, a lever for change. The systemic innovation that many desire may not be possible through policy decisions alone. Large scale changes – globalization, warming, population growth, economics – provide fertile soil for change. Technology can be seen as the fertilizer that aids growth of the seeds we plant in this soil. Regrettably, many people have only a vague sense of the change desired.

Education is largely vision-less.

We adopt catch phrases from popular media pundits. What we need is substance – a vision and a means to discover the suitability of that vision. What we have, instead, is mental pablum, ill-informed anti-school rants, and generally poor-quality thinking. As Dan Meyer recently stated, the further a person is removed from the school system, the less encumbered they feel by the reality of schooling in society.

Leadership can be somewhat attended to by the contributions of many. When we distribute control, we distribute responsibility. As I commented on NETP, grand schemes and plans benefit from the contributions of individuals. Ideas of reform should be shaped by the voices of those who are impacted. Leadership in education should concern itself with creating spaces for vibrant discussion and use these spaces as a means to test ideas of change. Ultimately, school leaders are accountable to funding agencies. While I’d like to rant against this structure, for now I’ll reserve my comments to the need for leaders to solicit input from diverse voices and to engage in ongoing networked (connected) discussions with systems around the world. Swanson has stated (.pdf) that undiscovered public knowledge can help to foster innovations and novel connections.

Leadership also faces basic tasks of managing supplies of technology, repairs, ensuring vendors (hardware and software) are held to established procedures and standards. It is difficult to establish the proper mix of pursuing innovation while addressing practical day-to-day details. Once Magellans are in the hands of students, the inevitable question of maintenance arises. What happens if hardware fails? What about new versions of the hardware or software? What about in-class technologies such as interactive whiteboards and LCD projectors? Initiating a project is often easier than sustaining it.

And then there is the difficulty of the social and organizational dimensions of change. Change management and incentive strategies can help move an agenda forward. However, leaders don’t need people who do what has been planned. Today, leaders need co-leaders – people who are active in experimenting and exploring future directions.

Leaders face a large scale rebalancing of education. They need to find new points of balance: between teacher/learner, planning/emergence, organized/complex, top-down/grassroots. The entities that will shape our future are already in play. It’s about new and novel combinations, finding new states of relatedness.

Research

Portugal is in a unique position. What is being done with technology in schools is what many countries will do in the future. It is important for Portugal to share and publish work on this front. Many are watching and many will turn to the system as a model for consideration as they develop their own digital learning structures.

Research on the impact of technology can be tackled in four ways:

1. Good description: As Latour states, writing good descriptions of what’s happening is hard work, but very informative. I’m somewhat reluctant to use surveys, as their value is limited and they often provide little more than confirmation of what an active practitioner already knows. Writing excellent, thorough descriptions of what is happening can be very valuable in coming to understand the nuances of a phenomenon. This is especially true when multiple narratives are included in the final assessment.
2. Patents/innovation/entrepreneurs: How does a technologically literate populace impact society? Long-term trends include a rise in intellectual property through increased patents, new inventions, and new organizations or startups. Unfortunately, measuring this impact requires clear vision and patience – an increasingly rare mix in an electorate accustomed to sound bites. However, Portugal will know this initiative has been successful if, twenty years forward, new companies and new innovations drive its economy.
3. Sustained, long term evaluation – determine not only trends and actions, but also changes in actions. If a group of learners use laptops for certain tasks, how does their use change over time? When does change itself change? This is where it gets interesting. Long term observational and use studies can provide insight into new patterns of use.
4. Because it’s the way of progress – I have not seen any studies that evaluate the effectiveness of the iPod in listening to music. For end-users, it’s not an issue. They use it because it works. Perhaps research in educational technology should have a similar focus: use it because it exists, because it is a part of society, because it is used in other aspects of learners’ lives. By this metric, simply having computers available and using them for learning is success enough.

I’m reminded of a statement: the easiest way to lead is to get in front of a parade. As such, I’m quite confident making the statement that national level technology and pedagogy changes – such as Portugal has initiated – will be common proclamations over the next several years.

Education systems have to start to change somewhere: if a technological basis of education is not developed now, it will have to be developed in the future. Countries collapse future opportunities to choices made today. The need, therefore, is to create national systems that have the greatest flexibility and options for future connections/choices. For all its shortcomings and failings, no approach offers the large (potential) array of future connections that technology offers. To embrace it at a systemic level is no longer a matter of choice. It is a matter of societal need.

Learning or Management Systems?

March 12th, 2010

Jon Mott recently published an article in EDUCAUSE Quarterly on Envisioning the Post-LMS Era. Jim Groom captures the reactions of individuals who have been exploring the link between learning management systems and personal learning environments. There is a sense – and I’ll admit I felt it as well in reading the article – that many long-time contributors to the discussion were not referenced in the article. In theory, the review process should draw attention to important omissions of literature. However, most reviewers would likely not see the spaces (blogs) where much of the conversation happens before it jumps into mainstream as good sources.

I’ve posted below a report that I wrote while at the University of Manitoba addressing the LMS/PLE issue. I’m not sure how long an archive of their copy will exist, so posting it here might give it a bit more of an existence.

A Review of Learning Management System Reviews

October 6, 2006

Learning Technologies Centre, University of Manitoba

George Siemens


Learning management systems (WebCT, BlackBoard, Desire2Learn, Angel, Moodle) hold a position of first choice in learning technology adoption within higher education. Selecting a traditional Learning Management System (LMS) requires balancing learning and management. The initial intent of an LMS was to enable administrators and educators to manage the learning process. This mindset is reflected in the features typically promoted by vendors: ability to track student progress, manage content, roster students, and such. The learning experience takes a back seat to the management functions. Numerous reports (citing administrators, IT departments, and educators) laud the management functions of an LMS. To date, student experiences and efficacy of the tools have been subjected to limited research. The position offered in this report encourages an organizational definition of learning as the starting point for selecting a technology platform for creating and delivering learning content. A clear definition of learning vision and desired future states, created through input from stakeholders (administrators, faculty, students, and information services), should provide the foundation for decision making and the boundaries of platform selection. This report covers the typical decision-making criteria utilized by various organizations in selecting an enterprise LMS—most often with the intention of settling on a single, system-wide platform.

Introduction and Background

Virtual learning environments have been available in some capacity since 1960: “the PLATO system featured multiple roles, including students who could study assigned lessons and communicate with teachers through on-line notes, instructors, who could examine student progress data, as well as communicate and take lessons themselves, and authors, who could do all of the above, plus create new lessons” (Wikipedia, 2006a, 1960s section, ¶ 1). Learning management systems, however, have only been available, in roughly their present form, since the 1990s (Vollmer, 2003), with Blackboard and WebCT being broadly adopted in universities and colleges by early 2000 (Online, 2006). Initial versions of an LMS focused on organizing and managing course content and learners. As with many organizations, higher education was unsure about the role of technology in the educational process.

Aggressive sales and state or province-wide licenses resulted in WebCT and Blackboard (now merged as one company; Blackboard, 2006a) cornering over 75% of the market (Mullin, 2005). The rapid penetration of learning management systems as key tools for learning occurs in a vacuum of solid research as to their effectiveness in increasing learning—or even indication of best practices for technology implementation. Pedagogy is generally a secondary consideration to student management; some researchers attempted to bridge research from face-to-face environments to technology spaces (Chickering & Ehrmann, 1996)—a practice that may be convenient, but errs in assuming that the online space is an extension of physical instruction, not an alternative medium with unique affordances. Learning management systems became the default starting point of technology enabled learning in an environment largely omitting faculty and learner needs.

Learning Circuits’ (n.d.) publication, A Field Guide to Learning Management Systems, revealed the nature of most LMS decisions at committee levels (an experience paralleled in academic environments): “an LMS should integrate with other enterprise application solutions used by HR and accounting, enabling management to measure the impact, effectiveness, and over all cost of training initiatives” (p. 1). The value of an LMS is ensconced in language of management and control—notions that most academics would perceive as antagonistic to the process of learning. Most LMS options, features, and comparisons (LMS Options, 2006) focus on tools included in a suite, not on how to foster and encourage learning in relation to an organization’s definition of “what it means to learn.” Discussions of features are divorced from emphasis on learning opportunities.

Current LMS Trends and Needs

After almost a decade of LMS experience, educators and administrators are beginning to question the prominence of an LMS. In a recent LMS governance report, Wise and Quealy (2006) stated “the educational significance of LMS is largely overemphasized and misunderstood …[suggesting it is critical for a university to] … understand itself—what it values, what it does well and how it does it, what it would like to do, and how it might do this” (p. 4).

In a previous publication (Siemens, 2004b), this report author has suggested that LMS in general are the wrong starting point for learning:

Learning Management Systems (LMS) are often viewed as being the starting point (or critical component) of any elearning or blended learning program. This perspective is valid from a management and control standpoint, but antithetical to the way in which most people learn today.

Learning management systems like WebCT, Blackboard, and Desire2Learn offer their greatest value to the organization by providing a means to sequence content and create a manageable structure for instructors/administration staff. The “management” aspect of a learning management system creates another problem: much like we used to measure “bums in seats” for program success, we now see statistics of “students enrolled in our LMS” and “number of page views by students” as an indication of success/progress. The underlying assumption is that if we just expose students to the content, learning will happen. (¶ 1-2)

Two broad approaches exist for learning technology implementation:

  1. The adoption of a centralized learning management approach. This may include development of a central learning support lab where new courses are developed in a team-based approach—consisting of subject matter expert, graphic designers, instructional designer, and programmers. This model can be effective for creation of new courses and programs receiving large sources of funding. Most likely, however, enterprise-wide adoption (standardizing on a single LMS) requires individual departments and faculty members to move courses online by themselves. Support may be provided for learning how to use the LMS, but moving content online is largely the responsibility of faculty. This model works well for environments where faculty have a high degree of autonomy, though it does cause varying levels of quality in online courses.
  2. Personal learning environments (PLEs) are a recent trend addressing the limitations of an LMS. Instead of a centralized model of design and deployment, individual departments select from a collage of tools—each intending to serve a particular function in the learning process. Instead of limited functionality, with highly centralized control and sequential delivery of learning, a PLE provides a more contextually appropriate toolset. The greater adaptability to differing learning approaches and environments afforded by PLEs is offset by the challenge of reduced structure in management and implementation of learning. This can present a significant challenge when organizations value traditional lecture learning models.

The two dramatically opposing approaches to elearning deployment require consideration of what learning means within an institutional context.

Selection Criteria

Reviews of LMS selection criteria varied considerably across the cases reviewed, often reflecting a lack of clear focus on the intent of an LMS as a learning support tool. These criteria were generally considered important:

  1. Ease of use by faculty and students
  2. Integration with a learning object repository
  3. Functionality and tools available
  4. Transition ease and cost from existing tool
  5. Integration with other enterprise-wide tools
  6. Extendibility—configuration to the university or college
  7. Cost

Cases Considered

Learning Management Systems: A Review

In LMS: A Review, Hultin (In press) analyzed key criteria to consider when adopting an LMS and surveyed various common platforms. LMS purchasing mistakes include:

  1. Skirting senior management
  2. Failing to spell out your needs
  3. Comparing apples and oranges
  4. Excluding IT from the process
  5. Focusing more on price than on value
  6. Overlooking scalability
  7. Ignoring LMS interoperability
  8. Overlooking vendor track records
  9. Selecting customization instead of configurability (pp. 4-5)

The report attended to divergent needs of different users (administrators, faculty, course developers, learners), context of use (internet connections), usability, and time required to learn the LMS. To meet the needs of various users, a learning environment was offered as a valuable aspect of LMS implementation. While learning environments in this context were linked to an LMS, they will be presented later as an alternative to an LMS:

An important aspect of the learning environments is that they don’t realize any pedagogical models or create learning for the individuals itself. It demands a context based on a pedagogical idea. The pedagogical idea can be realized and strengthened with appropriate learning environments. It is therefore important to integrate the possibilities with Internet based learning already in the idea—and production phase when developing course content. (p. 8)

Learning environments were categorized as: (a) communication (asynchronous and synchronous), (b) distribution, (c) test and assessment, and (d) interaction (p. 9).

Over the last several years, specialized service providers (like Questionmark, CourseGenie, and Articulate) have offered enhanced testing and content development tools—replacing the tools included in many LMS. This trend is resulting in LMS vendors providing “partners” (Blackboard, 2006d) with priority status in developing and integrating third-party tools.

Melbourne-Monash Collaboration in Educational Technologies

Input from diverse stakeholders within the university environment was solicited during this report. Informal conversations—with individuals directly involved in LMS implementation, support, and administration—were combined with internal reports, meeting minutes, a literature review, and project management reports (Wise & Quealy, 2006). The report presented two broad approaches for LMS governance:

  1. Top-down, command-and-control: Adopt a system, mandate its use, provide support, identify needs and support through new tools as needed (p. 18), and
  2. Bottom-up, emergent: “moves governance into the unordered, ambiguous realm of social complexity” (p. 19) by offering support based on elements of use that emerge.

Governance styles must be aligned with the nature of intended learning. The adoption of technology for learning will differ based on faculty learning models and needs. Medical faculty will require different tools and approaches than Engineering or Arts faculties. To impose an enterprise-wide model of LMS implementation and governance is to overwrite and obscure the multi-faceted nature of learning and knowledge acquisition (and creation).

The governance model utilized in the Melbourne-Monash report (Wise & Quealy, 2006) relied on ten key principles:

  1. Lay solid foundations for management and oversight
  2. Structure the board to add value
  3. Promote ethical and responsible decision making
  4. Safeguard integrity in financial reporting
  5. Make timely and balanced disclosure
  6. Respect the rights of shareholders
  7. Recognize and manage risk
  8. Encourage enhanced performance
  9. Remunerate fairly and responsibly
  10. Recognize the legitimate interests of stakeholders. (p. 24)

The inclusion of a structured process for LMS review, selection, and governance provides value to all stakeholders. A clear process of selection, preferably tied into the larger university vision of “what it means to learn, dialogue, reflect, and inquire,” ensures the selection process is not vendor-driven or focused on only one aspect of university operation (i.e., needs of the IT department, enrolment and registration, etc.). The needs and interests of learners, however, were not directly addressed in the Melbourne-Monash Report.

EDUCAUSE Center for Applied Research (ECAR)

Beyond merely defining a suite of tools, LMS evaluations should “focus on the processes that underlie creating, preparing, teaching, and taking a course” (Hanson & Robson, 2003, p. 2). Most selection reviews “have typically focused on comparisons of feature checklists and on costs, often narrowly defined as license fees” (p. 2). Additional consideration should also be given to the university’s definition of effective learning, pedagogical models, and larger visions for a changed society—contrast fostering critical thinking with developing learners for the workforce.

ECAR (Hanson & Robson, 2003) presented several guidelines, or steps, for selecting course management systems:

  1. Determine process benefits (p. 3). This step involves determining critical processes, benefits, and features. For example, synchronous communication tools may be deemed as critical for extended education departments, while collaborative spaces (like wikis) may be important to on-campus only departments.
  2. Assigning value to products and features (p. 4). Once learning processes have been defined, products and features are explored. Synchronous learning—in the above example—can be supported through a variety of tools—whiteboard, instant messaging, Skype (or other external voice over IP applications), or integrated tools such as Elluminate and Horizon Wimba.
  3. Assigning costs (p. 6). Cost determination is complex. Due to established technology investments (for example, an existing LMS), costs involve more than determining license fees. Integration, support, and faculty training costs will comprise a significant part of the total investment.
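The ECAR steps above amount to a weighted decision matrix: weight each critical process, score each candidate against it, and rank by weighted total. A minimal sketch follows; the product names, criteria, and weights are purely illustrative assumptions, not figures from the report.

```python
# Hypothetical weighted scoring matrix for comparing LMS candidates,
# following the ECAR steps: define critical processes, weight them by
# importance, score each product, and rank by weighted total.
# All names, weights, and scores below are invented for illustration.
WEIGHTS = {
    "synchronous_communication": 0.30,
    "collaborative_spaces": 0.25,
    "ease_of_use": 0.25,
    "total_cost": 0.20,  # lower cost normalized to a higher 0-1 score
}

candidates = {
    "Product A": {"synchronous_communication": 0.9, "collaborative_spaces": 0.4,
                  "ease_of_use": 0.7, "total_cost": 0.5},
    "Product B": {"synchronous_communication": 0.6, "collaborative_spaces": 0.8,
                  "ease_of_use": 0.8, "total_cost": 0.7},
}

def weighted_score(scores):
    """Sum of each criterion score multiplied by its weight."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

ranking = sorted(candidates, key=lambda p: weighted_score(candidates[p]),
                 reverse=True)
for product in ranking:
    print(product, round(weighted_score(candidates[product]), 3))
```

The value of such a matrix is less the arithmetic than the forcing function: the institution must first articulate which learning processes matter, and how much, before any vendor feature list enters the discussion.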

Learning Management System Strategic Review

California State University (Adams et al., 2005) conducted an LMS review of Blackboard, WebCT Campus Edition, WebCT Vista, Desire2Learn, and open source systems Moodle and Sakai. After an initial review, all LMS were disqualified, except for WebCT Vista and Blackboard. WebCT Vista was ultimately selected.

Systems were disqualified for a variety of reasons, including: previous scale of integration, incompatibility with “campus data center standards” (Adams et al., 2005, p. 5), limited feature sets, limited ease of use, an open source movement still in its infancy, and lack of confidence in product support by an LMS vendor. Learner and faculty concerns were largely ignored in the report. Brief mention was made of “ease of use,” eportfolios, and pedagogical flexibility, which was not defined (p. 7). Migration, training, history with vendor, and technical concerns formed the bulk of decision-making criteria.

Course Management System

University of Oklahoma (CMS Task Force, 2000b) expanded its search for an LMS by including a series of surveys from faculty and students. The survey questions focused on individuals selecting needed features to support learning. As with other surveys and assessments, learning remained vague, poorly defined, and disconnected from how the organization viewed teaching and learning. Faculty responses were particularly revealing of the emphasis on “what works for me” versus “how does this align with larger organizational learning objectives”:

  1. Please keep WebCT! I have hundreds of hours invested in WebCT.
  2. Most other universities in the Great Plains Consortium use WebCt so I have a preference for remaining with that system.
  3. I have been using the blackboard system for past three years. I really enjoyed this system which meets all my needs. I hope this system can be kept.
  4. Switching to new CMS is a time-consuming (and for some faculty) an overwhelming endeavour—so, please, please make this decision with the unconfident computer user in mind – not the power users.
  5. I only use WebCT because I have no choice. (Faculty Overall Comments section)

Evaluation of Learning Management System Software

This report focused on “the issues or consideration for online pedagogy that impact on the selection of an e-learning platform” (Wyles, 2004, p. 4). The focus on pedagogy raised important questions:

  1. What pedagogy will be used?
  2. Will the pedagogy work over the internet?

Emphasis for the evaluation of these questions is based on Chickering and Ehrmann’s (1996) paper Technology as Lever. As mentioned previously, this report assumed that many of the tasks and goals of classroom activity can simply be transferred online. The growth of alternative models of online engagement, as well as parallel conversations found through use of blogs and RSS feeds—such as social bookmarking, tagging, social networks—reveals a dynamic where end-user control grows in prominence. Transferring principles or practices from face-to-face settings to online ones does not account for the transformative elements of online learning.

Laying aside the criticism presented, Evaluation of Learning Management System Software (Wyles, 2004) was particularly effective in matching tools (email, bulletin boards, chat, quizzes, tutorials, wikis, etc.) with the work of Chickering, Ehrmann, and Gamson. A critical concept was expressed in the report summary: “Educational institutions need more flexibility and control over their e-learning environments to enable different schools, programmes, course, or instructors to select and deploy the most appropriate e-learning tools suited to the pedagogy” (p. 6). Any LMS selection process should involve a similar match of functionality with the organization’s definition of teaching and learning.

Commonwealth of Learning: LMS Open Source

Open source tools like Moodle and Sakai continue to attract broad interest. The prospects of cost savings in license fees (though fee savings at this level may result in additional investment in maintenance and support) and potential for customization are attractive to organizations.

Commonwealth of Learning (2003) reviewed two open source platforms: ATutor and ILIAS. The methodology used was similar to other reviews listed previously (though focused only on open source options):

  1. Develop evaluation criteria
  2. Identify open source candidates
  3. Filter candidates to produce a short list
  4. Systemic evaluation of features
  5. Systems evaluation of general criteria
  6. Recommendation. (p. 3)

Criteria for selection included:

  1. Features and functionality
  2. Cost of ownership
  3. Maintainability and ease of maintenance
  4. Usability and ease of use and user documentation
  5. Current user community
  6. Openness
  7. Standards compliancy
  8. Integration capacity
  9. LOM integration
  10. Reliability
  11. Scalability
  12. Intellectual property security
  13. Hardware and software considerations
  14. Multilingual support. (pp. 4-6)

The selection list invites the same complaint levelled at other reviews: the act and process of teaching and learning are largely ignored in the pursuit of functions, features, integration, and a myriad of other organizational concerns. The very purpose for which an LMS should be selected seems to be a secondary concern in most evaluations of technology solutions. Obviously an LMS needs to be stable, effective (however that is defined), supported, and integrated with other tools. Yet the failure to first define organizational views of learning results in an unanchored and misplaced model of LMS selection.

Change Challenges

Vendor Lock-in

Vendor lock-in is prominent in the LMS space. Lock-in is described as: “a situation in which a customer is so dependent on a vendor for products and services that he or she cannot move to another vendor without substantial switching costs, real and/or perceived” (Wikipedia, 2006, ¶ 1). Due to a combination of proprietary software, weak standards-adherence, and lack of foresight by colleges and universities, organizations are placed in a position where existing tools are weighted more highly due to financial and procedural constraints, rather than an evaluation of tool effectiveness for teaching and learning. For education institutions focused on innovating course design and delivery to align with rapid societal changes, lock-in is a significant barrier to the diverse options required to “seed, select, and amplify” (Johnson, 2001, p. 42) approaches to innovation.

Faculty Comfort

Learning management systems are still developing in functionality. The last several years have seen existing providers extend their toolsets to include tools currently growing in popularity with many online learners: blogs, wikis, podcasts, and social networking. Blackboard (2006c) recently announced the Blackboard Beyond Initiative to integrate Web 2.0 functionality into the system.

For many faculty members, the challenges of learning a new tool require a significant investment in time. Departments face challenges with the nature of content, often created to work within a certain LMS—standards are generally loosely followed, and even where compliance exists, fine tuning is often required.

A Word of Caution

Educational institutions seeking to adopt an LMS should be wary of Blackboard and WebCT (which Blackboard recently acquired). Blackboard (2006b) recently received patent approval for key components of an LMS and initiated a lawsuit against Desire2Learn. This anti-open-competition stance has a potentially chilling effect on learning platforms and the development of the industry as a whole. The patent comes at a time when provosts (Jaschik, 2006) are increasingly acknowledging the value of open source and collaboration. The preservation of intellectual property is a cornerstone of academic advancement; claiming the work of other researchers as one’s own is unacceptable in academic environments. Decision makers should reflect on the values and commitment to the health of a discipline shown by an organization seeking to close down innovations that have been publicly documented as collaborative in nature (Wikipedia, 2006).

Limitations of LMS Selection Models

The most prominent difficulty, or limitation, of the review models explored was the lack of focus on, or connection to, broader organizational views of learning. Instead of learning driving the tool selected, the process of reviewing and selecting an LMS often resulted in a tool that served other organizational needs (student management, content creation, etc.) in advance of learning itself.

Numerous factors impact successful LMS implementation. Key stakeholders include: (a) administrators, (b) faculty, (c) IT and technical support, (d) learners, and (e) curriculum developers.

LMS reviews considered in this paper generally erred in selecting or attending to the needs of one stakeholder at the expense of others. In selecting an LMS, an argument could be made for the supremacy of learning, and the quality of learning, as the most significant element in technology-enabled education.

Within the span of a decade, the LMS has moved from being a support tool for the learning process to being the guardrails of what is possible. For many institutions, management, not learning, has become the most prominent criterion in e-learning.

The enterprise-wide, controlled, centralized learning model serves a particular type of learning (often entry-level or foundational). As learners move beyond content consumption and into stages of critical thinking, collaboration, and content creation, LMS weaknesses become apparent. For this reason, the definition of a university’s learning philosophy is critical in guiding LMS activities.

Seeking Alternative Directions

Educator frustration with LMS views of learning is driving alternative views of learning. Instead of having the software define learning, organizations are beginning to first define learning, and then seek tools (and tool suites) to meet desired needs.

All learning management systems are not alike, and they can be used in different ways. However, a common idea behind an LMS is that e-learning is organized and managed within an integrated system. Different tools are integrated in a single system which offers all necessary tools to run and manage an e-learning course. All learning activities and materials in a course are organized and managed by and within the system. Learning management systems typically offer discussion forums, file sharing, management of assignments, lesson plans, syllabus, chat, etc.

Recently, the emergence of social software has questioned the use of an integrated LMS. Today, only few social software tools are employed within existing learning management systems. The question is: Is the next step to integrate social software tools in LMS? Social software has initiated discussions about the extent to which tools should be separated or integrated in systems. (Dalsgaard, 2006, Integrating section, ¶ 1-2)

Koper (2004) described the allure and promise of alternative learning models not based on management, but based on increased learner control:

Self-organised learning networks provide a base for the establishment of a form of education that goes beyond course and curriculum centric models, and envisions a learner-centred and learner controlled model of lifelong learning. In such learning contexts learners have the same possibilities to act that teachers and other staff members have in regular, less learner-centred educational approaches. In addition these networks are designed to operate without increasing the workload for learners or staff members.

This model does not exclusively replace traditional learning approaches, but does provide greater alignment with the emerging work-life-learning triad. Instead of learning housed in content management systems, learning is embedded in rich networks and conversational spaces. The onus, again, falls on the university to define its views of learning.

Social Software and PLEs

Two key areas are gaining substantial attention as alternatives to the structured model of an LMS: (a) social software, and (b) personal learning environments (PLEs). PLEs are defined as: “systems that help learners take control of and manage their own learning” (van Harmelen, 2006, ¶ 1).
PLEs “are about articulating a conceptual shift that acknowledges the reality of distributed learning practices and the range of learner preference” (Fraser, 2006, ¶ 9). A variety of informal, socially-based tools comprise this space:

(a) blogs,
(b) wikis,
(c) social bookmarking sites,
(d) social networking sites (may be pure networking, or directed around an activity, 43 Things or flickr are examples),
(e) content aggregation through RSS or Atom,
(f) integrated tools, like,
(g) podcast and video cast tools,
(h) search engines,
(i) email, and
(j) Voice over IP.
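One of the items above, content aggregation through RSS or Atom, is the glue that lets these loosely coupled tools behave like a learning environment. A minimal sketch of RSS aggregation, using only the Python standard library, is shown below; the inline feed XML stands in for a document that would normally be fetched from a blog or bookmarking site.

```python
# Minimal sketch of "content aggregation through RSS": extract item
# titles and links from an RSS 2.0 feed using the standard library.
# The feed string below is an illustrative stand-in for fetched XML.
import xml.etree.ElementTree as ET

rss = """<rss version="2.0"><channel><title>Example blog</title>
<item><title>Post one</title><link>http://example.org/1</link></item>
<item><title>Post two</title><link>http://example.org/2</link></item>
</channel></rss>"""

def aggregate(feed_xml):
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in aggregate(rss):
    print(title, link)
```

In a PLE, the same loop would run over many feeds (peers’ blogs, tag searches, podcast channels), merging them into one reading stream the learner controls, rather than content sequenced inside an LMS.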

The shortcomings of these approaches rest in their lack of integration and the control required by many universities. The experience of many educators parallels my own—learners are very active with technology, but once in an LMS space, they seldom do more than the minimum required (a particular concern in courses where dialogue and theory are important to explore). This may be a function of students taking on “the student role”—defaulting to passive behaviour—once in an academic environment. It may also be due to the change in behaviour expected by educators—where learners must leave their tools behind and adopt tools with limited functionality. For an individual used to Skyping, blogging, tagging, creating podcasts, or collaboratively writing an online document, the transition to a learning management system is a step back in time (by several years).

Recommended Process Forward

Different types of learning require different approaches. As educators, our selection of tools is determined by how we answer the question: “What types of technologies best suits a particular learning context?” (Sessums, 2006, Abstract, ¶ 1). Tool selection in advance of context determination eviscerates subsequent use and adoption. Learning management systems have been effective in eliminating the challenges faced by educators in selecting and aligning particular tools with particular tasks. Unfortunately, these systems have begun to determine options available for faculty and an institution.

Bates and Poole (as cited in Sessums, 2006) listed six characteristics for determining appropriate selection of technology:

  1. Will selected technologies work in a variety of learning contexts?
  2. How does it impact strategic, institutional level and tactical, instructional level decisions?
  3. Do the selected technologies provide equal attention to educational and operational issues?
  4. Will it take into consideration the effect of different media and technologies, enabling an appropriate mix for a given context?
  5. Are the selected technologies user-friendly, practical, and cost-effective?
  6. Will the selected technologies be quickly out-dated, or will they be flexible and accommodate new developments? (Conclusions section, ¶ 2)

Universities and colleges need to explore broad applications of technology—beyond simple LMS implementations. The LMS may well continue to play an important role in education—but not as the critical centre. Diverse tools, serving different functions, adhering to open guidelines, and in line with the tools learners currently use, may be the best option forward.

The challenges of LMS utilization are compounded by ongoing changes in technology. E-portfolios continue to grow in prominence (Siemens, 2004a). Informal, life-long learning—validated or certified by educational institutions in the form of prior learning assessment and recognition—is developing in tandem with a greater societal shift. The rapid growth of information (Lyman & Varian, 2003) requires a model that sees learning less as a product (filling a learner with knowledge) and more as a process of continually staying current and connected (learning as a process of exploration, dialogue, and interaction).

While desirable, it is unrealistic to expect universities to shift significantly from an LMS to a PLE. Yet the trends occurring online (in relation to social software and Web 2.0 technologies—resources that are single-focus, connected, and two-way) are beginning to shape learner expectations. Many educators in the K-12 sector are adopting learner content-creation tools like blogs, wikis, YouTube, podcasts, and tagging. As these learners enter higher education, they may not be content to sit and click through a series of online content pages with periodic contributions to a discussion forum.

The following steps are recommended for moving forward with a broad review of learning technologies:

  1. Involve all stakeholders (beyond simple surveys).
  2. Define the university’s view of learning.
  3. Critically evaluate the role of an LMS in relation to university views of learning and needs of all stakeholders.
  4. Promote an understanding that different learning needs and contexts require different approaches.
  5. Perform small-scale research projects utilizing alternative methods of learning.
  6. Foster communities where faculty can dialogue about personal experiences teaching with technology.
  7. Actively promote different learning technologies to faculty, so their unique needs—not technology—drives tools selected.

  8. Create an ongoing university teaching and learning technologies council to evaluate trends, successes, challenges, and needed adjustments to the current path.

Creating a vision for online learning requires sustained evaluation and monitoring to ensure the approaches to fulfilling the vision change as the context of implementation changes.


The complex process of teaching and learning requires complex, multi-faceted models of implementation. One tool will not meet all needs in all contexts. Change impacts existing models—rendering yesterday’s solutions obsolete. In the field of learning, an adaptive model of technology selection and governance is required to ensure that all stakeholders’ needs are met. Today’s solution may not be appropriate tomorrow. A sustained process needs to be enacted to align changes in context with changes in learning methods and the technologies available.

The university “must adapt, using technologies and models of understanding, in this case to reconcile teaching, research, IT, a changing environment, financial accountability and managerial models” (Wise & Quealy, 2006, p. 4). Learning management systems have a place in higher education (certain types of undergraduate-level learning are more structured and focused on memorization or content exploration). To meet the needs of all learners at various stages of their education, a multi-faceted (holistic) view of learning must be considered. Increasingly, personal learning environments provide the tools and model to attend to the diverse learning needs of individuals.


References

Adams, S., Banks, B., Evans, B., Gardiner, L., Geunter, C., Irving, J., et al. (2005, April). Learning management system (LMS) strategic review: A next generation learning management system for CSU, Chico. Retrieved October 11, 2006, from California State University, Chico Web site:

Blackboard. (2006a). Blackboard and WebCT merge. Retrieved October 1, 2006, from

Blackboard. (2006b). Blackboard recently awarded patent on elearning technology. Retrieved October 12, 2006, from

Blackboard. (2006c, March 1). Blackboard unveils Blackboard beyond initiative: Four bold inaugural projects will advance e-learning 2.0 vision. Retrieved October 12, 2006, from

Blackboard. (2006d). Why work with Blackboard? Retrieved October 12, 2006, from

Chickering, A. W., & Ehrmann, S. C. (1996, October). Implementing the seven principles: Technology as lever. AAHE Bulletin, 3-6. Retrieved October 12, 2006, from

CMS Task Force. (2006). Course management system. Retrieved October 12, 2006, from Oklahoma State University, Faculty Development Link Page Web site:

Commonwealth of Learning. (2003, July). COL LMS open source report. Retrieved October 12, 2006, from

Dalsgaard, C. (2006). Social software: E-learning beyond learning management systems. European Journal of Open Distance and E-Learning. Retrieved October 12, 2006, from

Fraser, J. (2006). More PLE questions. EdTechUK. Retrieved October 12, 2006, from

Hanson, P., & Robson, R. (2003). An evaluation framework for course management technology [Electronic version]. EDUCAUSE Center for Applied Research, Research Bulletin, 14. Retrieved October 12, 2006, from

van Harmelen, M. (2006). Personal learning environments. Retrieved October 12, 2006, from University of Manchester, School of Computer Sciences Web site:

Hultin, J. (In press). Learning management systems (LMS): A review. Retrieved October 12, 2006, from

Jaschik, S. (2006, July 28). Rallying behind open access. Retrieved October 12, 2006, from

Johnson, S. (2001). Emergence. New York: Scribner.

Koper, R. (2004). Increasing learner retention in a simulated learning network using indirect social interaction. Retrieved October 12, 2006, from

Learning Circuits. (n.d.). Field guide to learning management systems. Retrieved October 12, 2006, from

LMS options and comparisons. (2006). WebCT instructor community. Retrieved October 12, 2006, from

Lyman, P., & Varian, H. R. (2003). How much information? Retrieved October 12, 2006, from

Mullin, S. (2005, December 7). On line learning market may be monopolized. Retrieved October 12, 2006, from
- Note: some differing views exist on how market share is defined; for example, a CMS (course management system) definition would exclude enterprise-wide vendors like SAP (who offer their own learning platform), resulting in a WebCT/Blackboard market share in excess of 80%.

Online learning history. (2006). Moodledocs. Retrieved October 1, 2006,

Sessums, C. D. (2006, April 11). Revisioning the LMS: an examination of formal learning management systems and component-based learning environments.

Siemens, G. (2004a). ePortfolios. Elearnspace. Retrieved October 12, 2006, from

Siemens, G. (2004b). Learning management systems: The wrong place to start learning. Elearnspace. Retrieved October 12, 2006, from

Vollmer, J. (2003). Debunking the LCMS myth. Retrieved October 1, 2006, from

Wikipedia. (2006a). History of virtual learning environments. Retrieved October 11, 2006,

Wikipedia. (2006b). Vendor lock-in. Retrieved October 12, 2006, from

Wikipedia. (2006c). Virtual learning environment. Retrieved October 12, 2006, from

Wise, L., & Quealy, J. (2006). LMS governance project. Retrieved October 12, 2006, from University of Melbourne, Information Services Web site:

Wyles, R. (2004). Evaluation of learning management system software. Retrieved October 12, 2006, from

Collapsing to Connections

March 9th, 2010

This past week, I participated in TEDxNYED. I was fortunate to be among a great list of presenters. The conference videos should be available soon.

The highlight of the day, for me, was the opportunity to be a learner in a room of incredibly passionate and bright people. The typical rhetoric of educational reform was largely missing, with only the occasional references to softball items like NCLB, standardized testing, and industrial models of education. As a whole, I found the day to be a refreshing affirmation of the ideals of education, the value of committed and passionate educators, and the opportunities and affordances new technologies enable.

A rough summary of my talk:

[I took a slight detour at the start to respond to Jeff Jarvis' call for education to mimic corporate models and respond to corporate needs. At least one person found this to be inappropriate, though as I read his reaction, I find he's talking about a different talk and a different person. Wish I could have been there to hear that talk :) . My primary assertion was that education requires the greatest opportunity for connection-forming and connectedness. Corporations have a sharp focus on revenue generation and profit-making, which, by the nature of this focus, constrains the array of potential connections in learning, thereby reducing the effectiveness of the corporate model in education.]

Collapsing to Connections: reducing learning and knowledge to a unit of change

…a small world of confined information connections

Space influences permissible connections.

I was born in Mexico, in a small Amish-like community south of El Paso/Juárez. While not geographically distant, the community represents a time shift of several centuries. I spent the first six years of my life in a society very different from what I have known since. Today, I live in Canada – proud home of the gold-medal-winning men’s and women’s hockey teams.

I grew up in a small clustered community, largely devoid of external connections. Our community was without paved roads, electricity, and many associated benefits. News and information traveled primarily through social systems.

I recall evenings sitting around an oil-lamp, listening to the conversations of adults. Even though I had only an anemic cognitive awareness of what was being discussed, I could share emotions. Joy. Fear. Anxiety. Laughter. Belongingness.

It was a good feeling to be sitting in the peripheral world of adults – a small social system for sharing ideas, feelings, and world events. The memory of the oil lamp is to this day revived in certain settings and by certain smells. The flickering shadows cast on walls, moving almost rhythmically with the tone and energy of the conversation. It provided a sense of the world as knowable, as predictable, and as structured.

This safe, structured reality shaped the types of connections that were possible to individuals. This environment was a fabrication of, and for that matter a poor introduction to, the larger world. The social system provided safety but, simultaneously, fostered erroneous views of how the external world worked.

The flow of information was wonderful – a tightly clustered social network. The validation of the accuracy of that information, however, was somewhat lacking.

Defined by connections

A community or group is defined by its connections – how people are connected to each other and to the world outside. Relationships are tight-knit. Everyone knows everyone. Social circles, church, and school are all part of our social networks, providing a shaping influence on the possible connections we draw between concepts, information sources, world views, and even other people.

But the question arises as to who is able to define the suitability of connections. In my youth, who determined that we could connect certain religious concepts to our use of agricultural equipment? Who decided that certain physical diseases were worthy of medical treatment, while mental illnesses were not medical issues at all, viewed instead as the work of spiritual agents?

When connections calcify and become dogma and rigid structure, they fail to represent the chaotic and continually shifting world outside.

To map at least partly to reality – the rapidly shifting world of education, commerce, and science – we require innovation and creativity, both of which are fundamentally about drawing novel connections. While I was growing up, a false boundary was drawn around what was knowable. As a result, all aspects of life were shaped by the known connections: cause/effect, identity/government, etc. The network – tightly knit and highly exclusionary – was the measure of our society. We could grow no more than the freedom of connectedness we permitted through our social systems and norms. The soft, comforting appeal of safety and security, to the exclusion of progress and accurate interpretations of the world around us, was too strong a pull to ignore. The connections we form are, for us, reality.

Increasing information accuracy, decreasing social spaces

When my family moved to Canada in the late ’70s, the cognitive network established in Mexico prevailed. Yes, the setting had changed – sand and cactus were replaced by farmland and snow. The oil lamp no longer attended animated conversations. Instead, a chandelier above the dining room table provided a uniformity of light. Social conversations, though not accentuated with dancing shadows and the scent of kerosene, still formed the basis for coming to know and understand the world.

But the school system started to disrupt my notion of information accuracy. Unfortunately, in order to access more accurate information and exposure to scientific thinking, I had to sacrifice the soft social structure that shaped who I was as a person, not only what I knew. The education system started to serve the role of filtering and shaping ideas that had previously occurred through conversation in a trusted small group around a table.

Social and information systems in conflict

The primary information network for most people is tightly integrated with their social network. Cognitive engagement can be intellectually invigorating when information and social systems are aligned.

The solutions we need to address society’s biggest problems – global warming, population growth, poverty – will be found through serendipity, through chaotic and unexpected connections. Complex networks with mesh-like, cross-disciplinary interactions provide the cognitive capacity needed to address these problems.

Delicious, MySpace, Facebook, Ustream, Ning, blogs, podcasts, and Twitter represent an acceleration of information and an integration with social systems. These tools permit socialization at a scale that matches traditional small groups and communities. Emerging technology offers a “binding back” to our social, networked, small-group past: a past centered on the social sharing of information and making sense of the world together.

In an odd twist, technology has become social. Technology – the dehumanizing agent of technique that Ellul warned about – is the nexus point for quality information flow (fast networks) and socialization (humanness).

The confinement of connections – which influences social cohesion and knowledge growth – is also a core problem in classrooms and education.

The beauty of chaos, of serendipitous encounters, of information clashing with information, is too often subverted to rule and structure so that it can be better controlled.

We are our networks

The connections we participate in form our identities. We – you, I – know what our networks know.

Every expression is a point of connection

Every moment of transparent learning is a moment of teaching others

When we make our learning transparent, we become teachers.

Connections are all…

Fragmented information is woven and remade through global social interactions.

The breakdown of distance and the growth in the speed at which information flows in our networks are, fortunately, balanced by the rise of tools enabling social connectedness.

We don’t, after all, make sense of our complex world as individuals. We make sense through connections…and these connections create our identity and help us to find our sense of belonging and our sense of humanity.

Systematized normalcy

Unfortunately, the return to sociality has not yet made its impact in education. Classrooms have become micro-communes – closed, clustered, and controlled.

Who permits which questions? Who controls the permissible space in which connections can be formed?

Fragmentation shatters traditional structure. It’s easy to fragment information and conversations. The difficulty arises when we try and weave it into a coherent narrative.

Our society talks too much about networks – the key point of focus should really be on connections. Networks, after all, are only a pattern of connections. What we most need is a unit of change that is under the control of individuals. A social network analysis reveals gaps, network structure, and information flow. This is valuable information for management and policy makers. It is weak as a system of personal control and contribution.
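The point that a network is only a pattern of connections can be made concrete. Below is a minimal, illustrative sketch (not from the original post; the names and graph structure are invented) showing how network-level properties – who can reach whom – fall entirely out of individual, person-controlled connections:

```python
# Illustrative sketch: a network represented purely as a set of
# individual connections (undirected edges between people).

def degrees(connections):
    """Count connections per node -- a simple network-level measure."""
    counts = {}
    for a, b in connections:
        counts[a] = counts.get(a, 0) + 1
        counts[b] = counts.get(b, 0) + 1
    return counts

def reachable(connections, start):
    """Nodes reachable from `start`, via a simple iterative search."""
    neighbours = {}
    for a, b in connections:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for n in neighbours.get(node, set()):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return seen

# Two clustered communities with no link between them.
edges = {("ana", "ben"), ("ben", "cam"), ("dee", "eli")}
print(reachable(edges, "ana"))  # dee and eli are invisible to ana

# A single individual connection changes the structure of the whole network.
edges.add(("cam", "dee"))
print(reachable(edges, "ana"))  # now all five nodes are reachable
```

The network analysis (`degrees`, `reachable`) is derived data; the only thing anyone in the sketch actually controls is the addition or removal of a single edge – the unit of change.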

When we collapse learning and knowledge to connections, we affirm individual agency. In discussions of educational reform, it’s time to start thinking about appropriate points of focus and units of change. This is why I find much of the discussion of networks misleading. We can’t influence network development without paying attention to individual connections. And yet, surprisingly, very few conversations in educational reform are focusing on connections.

The very lessons of connection forming that we want our kids to learn also serve us in our exploration of the future of education. For example, which pieces of the future-of-education puzzle will we put together? How will we connect them? How will we weigh evidence? How will we weigh social elements?

A failure to connect

The Christmas Day bomber, terrorist activities, and the financial meltdown share a common problem: the information was there, but it wasn’t connected. Undiscovered public knowledge (Swanson) emphasizes the cost of information that is available but isn’t connected.

Private Universe – a video documentary in which Harvard graduates, alumni, and faculty are largely unable to explain why we have seasons. Their views and assumptions were not shared, and therefore not shaped or guided by social discourse and expert knowledge. The issue is one of conceptual failure – the inability of individuals to share and shape their understanding of a subject through discourse with others. Erroneous or errant connections are pruned through social discourse.

The scientific method offers a response to faulty connections: a long history of creating a transparent structure whereby connections are validated and evaluated. What is permissible to be connected? Why? What are other views? At its core, the scientific method is a structured mode of analyzing the validity of connections between entities, correlation, and cause and effect.

Educators have the obligation to stitch together social and information systems, based on the smallest unit of change – namely, connections.

What does this look like in practice?

The connectivist model of learning is one that Stephen Downes and I have utilized since 2008 in CCK08 and CCK09, and will be using in CCK10 later this year. Dave Cormier and I will be offering a similar open version of the Future Trends in Education course starting in April 2010.

In these courses, our focus has been (will be) to disrupt the traditional concept of a course and the relationship between educator, learner, and content. Rather than the educator creating a narrative of coherence through a discipline, learners do this as part of peer participative pedagogical practice (Peter Piper picked…). The experience of wayfinding and sensemaking is shaped by social and technological connections. The educator still has a role, but one that is altered by the corresponding control shift to learners.


When we distribute control, we also distribute responsibility. We can no longer blame others for systems that are not functioning well. We can’t blame schools. We can’t blame government. Or even corporations. We need to take up the trail of responsibility created by the distribution of control.

Finding the smallest unit of change on which to build is important. Richard Feynman identified “everything is made of atoms” as the single most important piece of scientific knowledge we possess. While atoms have since been reduced to smaller and smaller entities, the concept of individual units of construction for the physical world is still consequential. I propose a similar collapsing to connections in education. We will only understand what we need to do with education and reform if we recognize the basic element from which the entire system is constructed.

What would a world of learning look like if it were based on a granular unit of change – like connections – instead of large, impenetrable concepts like “accountability” or “school reform”? How can we structure educational reform in such a manner that anyone can participate?

The big battles of history around democracy, individual rights, fairness, and equality are now being fought in the digital world. Technology is philosophy. Technology is ideology. The choices programmers make in software, or legislators make in copyright, give boundaries to permissible connection. Clustered, isolated information systems – such as I experienced in Mexico – are incapable of adapting and reacting to the external world. To collapse education, knowledge, teaching, and learning to connections is to give individuals the control and freedom needed to effectively change education.

And to change education is to change society.

A quick note of thanks to TEDxNYED organizers (Dave Bill lists the organizers in the Thank You section of this post) – it was a wonderful learning experience.