Tuesday, October 30, 2007

Assessment: Reliability

Reliability is one of the standards Alvin Goldman has promoted for evaluating how well social practices lead to true beliefs. He defined reliability as "the ratio of truths [or results] to the total number of beliefs fostered by the practice" (cf. Thagard, 1997). Goldman asserts that "[r]eceivers have an option of believing, disbelieving, or assigning some intermediate level of belief to any message received" (1999). How, then, can the givers of information make that information more credible to its receivers? One way is to ensure that the information comes from a knowledgeable source. Another is to give assurance that it comes from an honest source. Competence is normally demonstrated by signals such as certification or proof of experience. Honesty is often conveyed by presentation style and a "prior pattern of (verifiable) truth telling" (Goldman, 1999). Both are more difficult to demonstrate in a web-based community, where interaction is rarely long-term or face-to-face.

P.D. Magnus (2006) raised some of these concerns in his criticism of Wikipedia. He argues that because volunteer editors are continuously cleaning up poor grammar, bad spelling, and other surface signifiers of questionable reliability, it is more difficult to use these traditional signifiers to assess the reliability of a given Wikipedia article (cf. Fallis). Content that is "dugg" by Digg users, by contrast, can range from personal blogs to YouTube videos to corporate news sites, and (unlike on Wikipedia) users can often rely on traditional signifiers to help determine the reliability of a resource. For example, suppose a user arrives via Digg at a blog post claiming that the popular music artist Prince is suing his fans for copyright infringement. If the article is riddled with conjecture, typos, or other signposts not typically found in a professionally written news piece, the user may choose to seek out other resources to confirm or debunk the claim. Thorough research should always involve seeking out supporting evidence, but doing so becomes even more critical when one doubts the validity of a particular claim.

Additionally, Digg, like many of its seemingly endless list of Web 2.0 sibling projects, provides a number of tools that can help researchers (whether casual or professional) avoid acquiring false beliefs. There are numerous opportunities for feedback and correction, so mistakes can be remedied quickly (Thagard, 1997). The feature that dominates Digg is that it is socially based: users of the site can learn, by experience, which posters tend to submit reliable content and comments.

Information gains and loses authority, or reliability, largely on the basis of two questions: "Who said it?" and "Under whose auspices?" Researchers have put their trust in these two questions for centuries (Ovadia, 2007). Authority, in the sense used in library and information science, is defined as:
The knowledge and experience qualifying a person to write or speak as an expert on a given subject. In the academic community, authority is based on credentials, previously published works on the subject, institutional affiliation, awards, imprints, reviews, patterns of citations, etc. (Reitz, 53).

How, then, can researchers decipher authority in the online world, where reliability is often determined by popularity rather than by traditional standards? The sociability of a site like Digg.com is one of the elements that gives it reliability. In 1968, J.M. Ziman stated:
Far from being the sum of independent, individual researchers, the continuous compilation of innumerable disconnected facts, observations and theories, scientific knowledge is the joint social product of the members of these 'Invisible Colleges' whose intercourse is through the citations that they award one another, however seldom they meet face to face (cf. Ovadia, 2007).

The same could presumably be said of non-scientific studies as well. It is the constant interchange of ideas that leads to the correction of mistakes and the production of new knowledge. Although some of the information Digg contains may not be completely reliable all of the time, it does have a characteristic that some web-based information sources lack: much of what is posted is not original material but linked material. When a piece's title is clicked, the researcher is taken to the site where the piece was originally published. At this point, it is possible to do some background research to determine the reliability of the piece: the researcher can begin to learn who wrote it and what his or her credentials are, for instance by searching the author's name in an academic database or an engine like Yahoo! or Google (Ovadia, 2007). A researcher can then know more about who the author is and why that author is qualified to speak on the subject.


In other words, like so many encyclopedic materials, Digg is a place to begin research. Some of the materials it contains might, indeed, be scholarship-worthy, but one piece of information can lead to another, and another, and each of these can solidify the authority of the one before. In doing this type of background searching, researchers learn not only to knowledgeably assess what they initially read on Digg but also to "create their own authority concept" (Ovadia, 2007). In this way, content is being judged as well as authors. Eventually, researchers learn to avoid information that would lead them away from justified beliefs. Such research methods are the basic foundation of any beneficial information seeking, and they bolster the reliability of the items Digg houses as a whole. Like Wikipedia, Digg's reliability as an information service rests not on the individual reliability of each item but on the aggregate objectives and execution of the service as a whole.