Wikipedia and Google: Control vs Emergence

While the debate surrounding Wikipedia often centres on authority, questioning whether a group of amateurs can create a trustworthy resource, the real issue is about access. We use Wikipedia not because it is authoritative, though that argument can be made. We use it instead because we can access it for “quick and dirty” knowledge. How is beer made? What’s elearning? Wikipedia provides “gap filling” information, not necessarily foundation information on which we base world views. For foundational world view information, we don’t rely on a singular resource. We blend many – experts, our own experiences, our own thinking, influences of colleagues, articles, books, and so on. People are sometimes misled in this discussion when they fail to acknowledge that we require different types of information for different purposes. And, for most of my daily quick and dirty information needs, Wikipedia suffices. I am therefore drawn to it because it is at my fingertips. The information source is in line with my information needs. I use the web for the same reason. Do complete books exist on the history of Greek philosophy? Of course. But if they are not in my home library, then I must trudge to the library. I need to be highly motivated for this trek. Instead, I can access an online resource within seconds. Access barrier: library, 30 minutes; internet, 1 minute. Repeat as required for Britannica and Wikipedia.
But all is not well. Wikipedia has a fatal flaw, evidenced by frequent criticism about deletions of articles or persons not deemed to be of note (Peter set up a Wikipedia page for me and connectivism – an experiment to see how long until I’m classified as not notable :) ). Wikipedia, at its core, is an extension of how we do things in the physical world: a group of people, for whatever reason (position, reputation, authority), make decisions for the vast majority about what should be permitted to be viewed. This is necessary for Wikipedia, or any centralized resource aspiring to authority/impact status, to work. Wikipedia filters for readers. News programs do the same. So do academic journals. And newspapers. The underlying assumption is that some can make decisions for others. The vast majority of people prefer this. But not all. If you’re on the fringe, Wikipedia serves a silencing, gate-keeping role. By its very nature, it is intended to do this. In order to be more effective, it applies democratic processes such as voting and discussion. In the end, however, someone still makes a choice on behalf of others.
This flaw of making decisions for others is handled in an entirely different way by Google. Wikipedia assumes a target, sets metrics, and holds discussions against those standards. When someone is deemed “not notable”, their biography is eliminated…and for subsequent searchers, ceases to exist. The flaw arises from its structure – centralized and controlled. Google, on the other hand, adopts a more decentralized model. Instead of centralizing information and determining what can exist (let’s briefly lay aside Google’s activities in relation to China), Google makes its decision after something exists, not before.
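To make the contrast concrete, here is a toy sketch, in Python, of the two models as I read them – filtering at the point of entry versus judging at the point of need. Every name in it is my own illustration; no real API from either system is implied.

```python
# A toy contrast, not a description of either system's real machinery:
# Wikipedia-style filtering happens before an item exists; Google-style
# judgment happens per query, after everything already exists.

def gatekeeper_model(submissions, editors):
    """A few decide up front what may exist; rejected items never enter."""
    corpus = []
    for item in submissions:
        votes = sum(1 for approves in editors if approves(item))
        if votes > len(editors) / 2:  # majority of gatekeepers required
            corpus.append(item)
    return corpus

def emergence_model(submissions, query, score):
    """Everything enters; judgment is applied per search, after the fact."""
    matches = [item for item in submissions if query in item]
    return sorted(matches, key=score, reverse=True)

# The same submissions produce two different information spaces.
submissions = ["connectivism overview", "connectivism critique", "beer brewing"]
editors = [lambda item: "critique" not in item]  # one editor's bias
print(gatekeeper_model(submissions, editors))    # the critique never exists
print(emergence_model(submissions, "connectivism", score=len))  # both surface
```

In the first model, a rejected item ceases to exist for every subsequent searcher; in the second, it simply ranks wherever a given query takes it.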
Yes, a search engine’s algorithm expresses an ideology and determines what is weighted. Universities, established media outlets, and government sites carry greater authority. But search engines (especially blog search sites like icerocket and technorati) seek to reflect what is occurring online. They attempt to reflect the patterns produced by many interactions. Are search sites like Google neutral? No, not entirely. But they impose less bias onto the information space than Wikipedia does. Search engines express the emergent structure of information, instead of applying mechanics of inclusion up front.
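PageRank, the published idea underneath Google’s ranking, is the clearest example of this emergent weighting. A minimal sketch of that idea (simplified from the original paper, not Google’s actual production system) might look like this:

```python
# A minimal sketch of emergent ranking in the spirit of PageRank.
# The link graph, damping factor, and iteration count are illustrative;
# the point is that weight emerges from many independent linking
# decisions rather than from an up-front editorial judgment.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy web: 'c' becomes authoritative because others choose to link to it.
toy_web = {"a": ["c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))
```

No editor decides that ‘c’ matters; its weight falls out of the structure that many small choices produce.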
Why is this important? I think Wikipedia harbors a structured content mindset that is reflective of its physical (and now online) competitors (Britannica). Most people find value in the centralized nature of this information. It’s easier to search, the coherence of content requires less cognitive effort to make sense of a subject, and it has a growing degree of name recognition (and thereby, trust). But it is a model that I don’t think is sustainable in the long run.
We will need to outgrow our digital manifestations of physical assumptions. We have the same struggle with online learning content: “Hey, let’s move this content online”. We transfer instead of transform. It works in the short term because we are familiar with the approach and process. In the long run, it impairs innovation. Once access is not a barrier, the model of “a few selecting for many” produces information with inherent bias.
To this end, Google is a better foundation for information’s future. The less biased our initial source of information, the more options we have for repurposing it. If we apply intelligence at the level of need (search) instead of at the level of entry into a system (the evaluation/editor model of Wikipedia and other centralized services), we have greater options for future use. Keep the initial source pure. I can’t see why an effective search engine, in the near future, can’t create an “on the fly” representation of a Wikipedia article. We type in a term; it generates an article complete with references and differing viewpoints. I’m not sure we’ll need Wikipedia in the future. I think it’s a transition tool, a temporary crutch, as we align ourselves to the new context and characteristics of information.
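Roughly, I imagine something like the following hypothetical sketch. The search_index() helper and the sources it returns are invented for illustration – no real search API sits behind them – but the shape of the idea is there: gather, group by viewpoint, cite, assemble at query time.

```python
# A hypothetical sketch of an "on the fly" article generator: assemble a
# reference page at search time instead of storing a pre-edited one.
# search_index() is an invented stand-in for a real search backend.

from collections import defaultdict

def search_index(term):
    """Invented placeholder; returns (source, viewpoint, snippet) tuples."""
    return [
        ("example-journal.org", "supporting", f"{term} extends network learning theory."),
        ("critique-blog.example", "dissenting", f"{term} restates existing theories."),
    ]

def build_article(term):
    """Group snippets by viewpoint and cite each source."""
    sections = defaultdict(list)
    references = []
    for source, viewpoint, snippet in search_index(term):
        sections[viewpoint].append(snippet)
        references.append(source)
    article = [term.title()]
    for viewpoint, snippets in sections.items():
        article.append(f"\n{viewpoint.title()} views:")
        article.extend(f"- {s}" for s in snippets)
    article.append("\nReferences:")
    article.extend(f"- {r}" for r in references)
    return "\n".join(article)

print(build_article("connectivism"))
```

Nothing here is deleted ahead of time; differing viewpoints sit side by side, and the “article” is disposable – regenerated fresh against whatever exists at the moment of the search.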

5 Responses to “Wikipedia and Google: Control vs Emergence”

  1. David Gerard says:

    Biographies of living people are Wikipedia’s biggest problem, and way too often a curse to their subject. So we’re actually getting harsher on entries on people for good reason.

  2. Dean says:

    George,
    I wonder if Wikipedia will evolve to move beyond its all-too-familiar format and text-based structure. As you state, it works for now but will need to change. What will not change is the need for the “quick and dirty”. As Weinberger states, some knowledge is simply good enough. Not every bit of information needs the scrutiny and rigorous examination that much of the academic world expects. For me, this is why Wikipedia works. We need quick and dirty and always will.

  3. Arnold Mühren says:

    “Keep the initial source pure”. This is key! The didactic, ‘summarizing’ approach to teaching (set textbook, selected readings, teacher PPT notes) will need to be supplemented (or even replaced?) by an approach that turns both teacher and student into explorers in search of meaning and knowledge in an ever vaster and more kaleidoscopic information land. Summarizing that land is a futile endeavour. It needs to be approached in other ways. What other (new?) cognitive skills will be emerging in the future?

  4. George says:

    Hi Dean – thanks for the comment. Yes, the need for “quick and dirty” will always exist. The bigger issue – regardless of whether the knowledge is “good enough” or has been expert-validated – is one of access. That is, I believe, still Wikipedia’s strength. Its weakness is that it’s not designed for a network view of information, as it seeks centralization.

  5. George says:

    Hi Arnold – in terms of the “summarizing approach” you mention, I’ve been thinking about this for quite a while – i.e. purely unstructured information is stressful, particularly to newcomers to a field. We need frameworks (not sure if that’s the right word) in order to filter, so that sense-making can begin. I’ll be posting more on this in my Knowing Knowledge blog. So, while I agree with you that summarizing can be a futile endeavor, for many learners it is temporarily required so that some elements (nodes of a network?) of a discipline can be established, to which future information can be connected. The role of the educator is to present soft frameworks for managing information…so learners do not become fixated on filtering new information through the established framework, forcing it to bend to what already exists.