From the September and October installments of First Monday, a peer-reviewed journal on the Internet, come four research papers with tangential or direct relevance to participatory media. Hopefully, folks like Ross, Clay, Howard, Joi, Dan, Larry, Doc and the rest of the collective braintrust will help us make complete sense of all this - in plain (read: non-academic) English. Meanwhile, here are some key quotes we were able to glean from the papers:
The economics of open source hijacking and the declining quality of digital information resources: A case for copyleft
by Andrea Ciffolilli
“The principle of copyleft constitutes a powerful tool available for digital content creators and policy–makers, implying that information arrangements built upon freely accessible resources should be distributed under licensing terms similar to those covering the original resources. In cases where copyleft appears so restrictive that participation in a collective project may be discouraged, further customisation is always a feasible strategy. The case of Creative Commons illustrates this point and represents a critical learning exercise.”
Is copyright necessary?
by Terrence A. Maxwell
“This article provides the results of a dynamic simulation of the publishing industry in the United States from 1800 to 2100, and tests the impact of different protection schemes on the development of authorship, the publishing industry, and reader access.”
Asynchronous discussion groups as Small World and Scale Free Networks
by Gilad Ravid and Sheizaf Rafaeli
“When we investigated the 10 most influential people in the network we found that only two (20 percent) of them are instructors. The large majority of hubs were “regular” students. They “earned” their designation as hubs through participation, not through holding a formal position.”
Update: Michael Feldstein does a great job of explaining this paper (in plain English) in It’s a Small Campus After All and has some excellent ideas about its implications and future research.
Internet time and the reliability of search engines
by Paul Wouters, Iina Hellsten, and Loet Leydesdorff
“In short, search engines are unreliable tools for data collection for research that aims to reconstruct the historical record or for research that aims to analyze the structure of information at a particular moment in history. Only those Web pages that contain the date of the publishing document in question (for example, in various Web archives and citation index databases), can be used for this purpose (Hellsten, 2003). This unreliability is not caused by sudden instabilities of search engines, but precisely by their operational stability in systematically updating the Internet. For many types of social science research, it is therefore necessary to build ‘tailor made’ archiving tools that are not based on the available commercial search engines.”