P.D. Magnus (2006) raised some of these concerns in his criticism of Wikipedia. He argues that because volunteer editors continuously clean up poor grammar, bad spelling, and other surface signifiers of a resource's questionable reliability, it is more difficult to use these traditional signifiers to assess the reliability of a given resource (cf. Fallis). Content that is "dugg" by Digg users can vary from personal blogs to YouTube videos to corporate news sites, and (unlike on Wikipedia) users can often rely on traditional signifiers to help determine the reliability of a resource. For example, suppose a user comes, via Digg, to a blog article about the popular music artist Prince suing his fans over copyright infringement. If the article is riddled with conjecture, typos, or other signposts not typically found in a professionally written news piece, the user may choose to seek out other resources to confirm or debunk the claim. Thorough research practices should always involve seeking out supporting evidence, of course, but doing so becomes even more critical when one doubts the validity of a particular claim.
Additionally, Digg, like many of its seemingly endless list of Web 2.0 sibling projects, provides a number of tools that can help researchers (whether casual or professional) avoid acquiring false beliefs. There are numerous opportunities for feedback and correction, so mistakes can be remedied quickly (Thagard, 1997). One feature that dominates Digg is its social basis: users of the site can learn, by experience, which posters tend to submit reliable content and commentary.
Information gains and loses authority or reliability largely based on two questions: "Who said it?" and "Under whose auspices?" Researchers have put their trust in these two questions for centuries (Ovadia, 2007). Authority, in the sense used in library and information science, is defined as:
How, then, can researchers decipher authority in an online world where reliability is often determined by popularity more than by traditional standards? The sociability of a site like Digg.com is one of the elements that gives it reliability. In 1968, J.M. Ziman stated:
The same could presumably be said of non-scientific studies as well. It is the constant interchange of ideas that leads to the correction of mistakes and the production of new knowledge. Although some of the information Digg contains may not be completely reliable all of the time, it does have a characteristic that some web-based information sources lack: much of what is posted is not original material but linked material. When a piece's title is clicked, the researcher is taken to the site where the piece was originally published. At this point, it is possible to do some background research to determine the reliability of the piece; the researcher can begin to learn who wrote it and what his or her credentials are. This can be done by searching for the author's name in an academic database or in an engine like Yahoo! or Google (Ovadia, 2007). A researcher can then know more about who the author is and why that author is qualified to speak on the subject.
In other words, like so many encyclopedic materials, Digg is a place to begin research. Some of the materials it contains might indeed be scholarship-worthy, but one piece of information can lead to another, and another, and each of these can solidify the authority of the one before. In doing this type of background searching, researchers learn not only to knowledgeably assess what they initially read on Digg but also to "create their own authority concept" (Ovadia, 2007). In this way, content is judged alongside authors. Eventually, researchers learn to avoid information that would lead them away from justified beliefs. Such research methods are the basic foundation of any beneficial information seeking, and they only bolster the reliability of the items Digg houses as a whole. Like Wikipedia, Digg's reliability as an information service rests not on the individual reliability of each item but on the aggregate objectives and execution of the service as a whole.