Web Peer Production Timeline

With 0 comments

A brief timeline of some important events in the history of peer production on the web (sort of, really the larger 21st century web), just so I can keep the chronology straight for myself. I’ve assembled this as part of prep for an article on the history of Wikipedia, so events I think of as connected to Wikipedia’s emergence are privileged.

My thoughts on Media Manipulation and Disinformation Online

With 0 comments

I was interested to see this report on Media Manipulation and Disinformation Online circulating through my networks last week, especially once I saw that one of the co-authors was Alice Marwick, whose work on political/emotional economy of social media I’ve found really valuable in the past. I got a chance to sit down and spend some time with it over the course of the last few days, and here are a few quick thoughts about the piece.

Artists and Spotify: Another Datapoint

With 4 comments

A recent article from TechDirt provides an interesting data point in the discussion of how streaming services like Spotify compensate artists. According to the article, a recent study by a French record-industry trade group finds that a large percentage of the revenue generated by these services is paid to record labels, rather than artists. To be precise, the study finds that about 45% of the revenue goes to labels, 6.8% goes to recording artists, and 10% goes to songwriters and publishers. About 20% of revenue goes to the services themselves. (Click through to the TechDirt article for charts and more discussion of the results.)

I've been wondering for a while now what Spotify's reportedly meager pay-outs for artists mean for the future of digital music. Does the low revenue from streaming services mean that the "music as service" model is, in and of itself, not a sustainable way to pay artists? Or does it mean that artists need to re-negotiate their relationships with labels to capture more of the revenue from streams?

This study seems to provide evidence in favor of the second conclusion, but I'm not sure it's the whole story. For one thing, the article mentions that the streaming services are not yet "anywhere close to profitable." One wonders, then, what the path to profitability for these platforms looks like. Do they need to charge more for subscriptions? Will they want to pay less for content? I'm sure they are telling their investors that they will be able to spin user data into gold, but I'm increasingly dubious about the value of consumer data in an economy where consumer purchasing power is, at best, barely growing.

I'm also not sure we can dismiss all label income as rent-extraction. Yes, production expenses aren't what they once were, and distribution expenses must be lower than they were when fleets of trucks brought plastic discs to physical stores, but the "publication is a button now" rhetoric doesn't tell the whole story about the work and expense that might be involved in digital publication. I need to dig into record company financial data to try to round out the story here.

On Punching

With 2 comments

What follows is a badly belated, fairly incomplete response to the Charlie Hebdo affair, leading to a slightly less belated but much more incomplete response to the Wikipedia Gamergate scandal.

I know that #jesuischarlie is now roughly 4 or 5 centuries out of date in Internet-years, but I got to thinking about it after an interesting panel on the topic here, which my colleague Mike Lyons (@jmikelyons) participated in. The discussion focused, in part, on the contradictions raised by the Charlie Hebdo case in the classic western legal and political formulation of "free speech." What does it mean, for example, when the French government imprisons a comic for expressing opinions that could be read as sympathetic with those who killed the Charlie Hebdo staff?

I was struck by how this discussion of the contradictions inherent in our juridical notions of "free speech" pointed to the way talk of #jesuischarlie often revolved around an instability in a much less formal, but still important, concept: namely, the popular notion that rude or offensive humor is justified only if it is "punching up" against the powerful, rather than "punching down" against the powerless. As in many other cases, it was not immediately clear to all which way Charlie Hebdo was "punching." Were they punching down against immigrants and outsiders, or up against powerful state and religious institutions?

This confusion about "punching up" versus "punching down" seems endemic to our discussions of offensive speech. Theory suggests we should expect this to be the case; after all, intersectionality means power is never really distributed along neat vertical lines. It can be much fuzzier, with different subjects simultaneously holding positions of vulnerability and privilege.

I wonder if, perhaps, this suggests we ought to ask less about the direction of punches, and instead try to think carefully and critically about why so many punches are being thrown.

More specifically, I can't help but wonder if, just maybe, the throwing of punches has less to do with an effort to achieve a particular political agenda, and more to do with a performance of a certain sort of masculinity. This suspicion is only reinforced by the fact that, of the 21 editors, journalists, and cartoonists listed as Charlie Hebdo staff on Wikipedia, only 3 are women.

What would it mean for us to acknowledge the possibility that we throw punches either because we identify as men and were taught that this is what that identity means, or because we exist in a culture that privileges male violence and we wish to draw on that power? How would that change our debate about offensive speech? Might it be time to consider punching less?

This predilection for verbal violence might also play a role in the recent conflict surrounding the Wikipedia article documenting the Gamergate phenomenon, which recently made news when the Wikipedia Arbitration Committee moved to ban several anti-Gamergate editors. This decision has been widely, and justly, criticized for removing activists from the project in the name of "neutrality." I'm still digesting what happened in that case, and I want to write up something more complete about it soon. For now, however, let me just note one of the banned editors' comments in defense of himself:

"As Chris Kyle (or the movie version thereof) said, "I'm willing to meet my Creator and answer for every shot that I took." I entered a contentious topic area to protect living people from abuse at the hands of the identified "pro-Gamergate" editors named in this Arbitration, as well as the hordes of SPAs on the periphery."

The quote seems to show this editor, at least, understanding his action as distinctly "macho." He is engaging in righteous combat against bad people. I am very sympathetic to him, as the Gamergate crowd was indeed a nasty bunch with a vile agenda. The Arbitration Committee, for their part, relied on Wikipedia rules that attempt to discourage verbal combat in favor of constructive content building. Wikipedia, they say, is not a battleground.

It is not unfair, I think, to point out that this call for "peace" may tend to favor the status quo. What might it mean, however, for us to consider that calls to combat may, in part, be driven by a desire to project and maintain a certain sort of masculine identity? I'm honestly not sure, but it seems like a question we might need to consider.

Why I didn't go to the MLA Contingency Panel

With 0 comments

I saw it in the program, thought, "that looks really important, I should go to that."

But then I noticed it was scheduled near dinner, and we were supposed to go to that Mexican place everyone was talking about, with the hot chocolate.

And I thought, "well someone else will go."

"I'm not very important, I won't be missed."

"Someone will tweet it and there will be a storify and I can catch up."

So I didn't go.

This is just a note to express my regret.

And to try to shame myself into learning for next time.

 

3D Printing

With 0 comments

I've really enjoyed messing with our new MakerBot here at Bucknell ITEC. Here's what I've printed to date!


My favorite flight yet

With 0 comments

Having fun, droning on...


Migrations

With 0 comments

[Image: road scene]

I am happy to announce that I'll be starting a new position as Bucknell's Digital Scholarship Coordinator in June!

The life of the contemporary academic is a migratory one; I haven't lived in the same city for more than a few years since I completed my Master's degree. However, this move is a migration in more ways than one. For the first time in my adult life I'll be leaving the college classroom, taking on a role as an organizer and developer for Bucknell's growing involvement in Digital Scholarship. This has been a difficult decision to make, as teaching has been a rewarding job. Ultimately, however, I decided it was time to flex some new muscles, and work on refining and building another set of skills.

So, I'm excited to have the opportunity to work with Bucknell to build and shape their new Digital Scholarship Center, which promises to emerge as a fascinating new hub for digital work in the humanities and social sciences. That said, I'm also of course a bit sad to be leaving Dallas. For one thing, Central Pennsylvania lacks options for high-quality tacos. More seriously, the students of UT Dallas are some of the most creative and engaged I've ever worked with. Even as I look forward to my new opportunity, I'm sad to be leaving them behind.

Still, onward and upward! I expect there will be many opportunities for new collaborations and projects in the coming weeks and months! Stay tuned!

Information, Embodiment, Articulation, Authority: An Epiphany

With 0 comments

So I was reviewing this Elinor Ostrom, trying to figure out how to fit my ideas into the larger conversation about the commons, when I had this kind of epiphany about information, embodiment, articulation, and authority. It's a half-formed idea, but it feels powerful and I want to share it here right away as a way to make a note for myself, and to see if others immediately recognize it as something well established (or perhaps refuted) elsewhere.

There is a tendency, when dealing with digital information, to treat it as sort of fungible: an arbitrarily embodied mass of ones and zeroes that is, by virtue of its near-infinite replicability, both non-scarce and non-rival. Digital information "wants to be free" in the sense that it wants to spread, it wants everyone to have their own copy.

This tends to lead to a discussion of digital information as a unique realm of abundance. Authority, which is seen as a sort of arbitrary move to limit the abundance of information, is seen as critically weakened in this environment.

The above has already been criticized, as I'm sure we're all well aware. Authorities, especially institutions like search engines, do in fact exist in the contemporary digital information environment. This is well established. Wikipedia, too, serves as a sort of authority. Wikipedians are well aware of this, sometimes even discussing the need for Wikipedia to do right by "posterity" or record information accurately for "history."

And yet, the formation of these authorities has not been without significant resistance. Wherever centralized authorities have emerged, decentralized schemes to combat them have emerged as well. Take, for example, the many attempts to decentralize Wikipedia by "federating" it across multiple platforms/wikis. Or the attempt to escape the singular Facebook via the decentralized Diaspora.

The conundrum, for me, is this: the attempts always seem well-grounded in the theory of digital information, and the experience of the skilled coders who propose them. After all, copying and transferring data really is fairly easy and cheap. And yet, they almost always fail. Why?

Maybe part of the answer has to do with how we treat information differently depending on what we intend to articulate it with and our embodied experience with the topic at hand.

Imagine a coder writing a project solely for his or her own personal use. Coders, of course, have a great deal of experience with code. Thus, when they encounter new code-type information, they are able to readily evaluate it for themselves, with very little need for authority. In the case of personal use, only the non-human actor of the machine will be articulated with the code, so authority is also not needed to establish trust with other human actors.

Another example of the same basic scenario would be a skilled cook making a meal for him- or herself. Who cares what the source is? Evaluation is easy.

Since coders are the ones who propose new digital systems, the decentralized, anti-authoritarian approach that works for code seems like it might just work for everything else. Set the information free, let everyone get it from everywhere and let them evaluate it for themselves. Why not?

However, this overlooks cases in which either embodied experience is lacking, or information must be articulated with human as well as non-human actors.

Imagine the case of the novice chef (or coder!): how can they evaluate the information they encounter for themselves? In the absence of embodied experience, we may well want to rely on authority (or at least reputation) on the part of information sources.

Imagine the case of the bar bet (which, I posit, is the most frequent use for Wikipedia :) ). In this case, we must articulate information with another human actor who also lacks embodied experience with the question at hand. The only answer with any value here is one that comes from a source that both parties trust. A source with authority.

And in this case especially, authority is rivalrous: not everyone can have it at the same time. A trusted source builds an audience, a shared reputation, at the expense of others attempting to become trusted sources as well. (This is perhaps not perfectly true; there can be communities of shared trust and reliability, but these would seem to grow at the expense of other possible networks.)

Thus, the need for scarce, rivalrous authority hems in the possibilities of non-scarce, non-rivalrous information. But perhaps this tendency is overlooked by those closest to the heart of our new digital systems, the coder elite, because the form of information they are most familiar with (code) is one they have deep embodied experience with, and one that powerfully articulates with non-human actors that don't care whether the source is trusted, so long as the code works (hence the IETF slogan about "rough consensus and running code," and the old adage that the ultimate test of code in an open source community is "whether it compiles").

My two cents on #Twittergate

With 0 comments

So, over the course of the last few days a dispute over live-tweeting conference presentations has boiled over on, of all places, twitter.

Just a few hastily composed thoughts here. Setting the default state for conference presentations to "do not share on twitter" is both absurd (given that the whole goal of academic knowledge production is, you know, sharing and discussing knowledge) and impractical (given the rather limited affordances twitter has built in for users to censor one another).

That said, I think it might make sense to take a step back and ask ourselves: what on earth would motivate someone to make the ridiculous and seemingly self-limiting attempt to stop others from discussing their work? Framing this as another confrontation between those who do and do not "get the internet" seems limited, in my opinion.

Instead, I can't help but wonder if this desperate attempt to lay claim to knowledge as property is driven by fear. The act of an increasingly precarious academic population desperate to hang on to any asset, however intangible, that might give them a leg up in the dog-eat-dog academic job market. Of course people resent what looks like the "theft" of their reputation. They experience reputation as a scarce asset. One to be guarded jealously.

This attempt to hoard reputation, to hoard the fruits of our intellectual labors, makes us all poorer. In a sense, this could even be seen as analogous to the "paradox of thrift" familiar to Keynesian economists: if everyone fears for their future and everyone over-saves, the economy collapses.

But Keynes did not suggest we simply shame savers into spending. Instead he emphasized the need for collective action to drive investment. I am a strong supporter of openness, and I think it must be first among the values of our profession. If we truly want to create an academic culture of open ideas and shared information, we must take steps to create the material conditions under which people feel secure being open! I think this interpretation of openness is somewhat in contradiction to the crypto-libertarian language that often infiltrates our discussions of free information. It is not enough to "set ideas free." We must build safety nets (with all the slogging bureaucracy that will entail) if we want people to feel genuinely free to share and to fail.

Weighing Consensus - Building Truth on Wikipedia

With 0 comments

In a recent piece in the Chronicle of Higher Education's "Chronicle Review" section, Timothy Messer-Kruse criticizes the editing practices of the Wikipedia community. He describes his attempt to correct what he understood to be a factual error in Wikipedia's article on the Haymarket affair, and argues that his experience demonstrates that Wikipedia limits the ability of expert editors, such as himself, to correct factual errors on the site. While Dr. Messer-Kruse believes his experience demonstrates Wikipedia's lack of respect for scholars, I believe it actually demonstrates that Wikipedia holds a deep respect for a collaborative scholarly process that is collectively more capable of producing "truth" than any individual scholar. 

Wikipedia's privileging of the collaborative scholarly process has practical implications for how scholars should, and should not, interact with Wikipedia. Academic Wikipedia editors might have more satisfying Wikipedia editing experiences in the future if they respect this fact.

To understand how and why Wikipedia functions the way it does, we must first understand the day-to-day realities of Wikipedia's editing process. Because they have the responsibility of securing the free encyclopedia against vandals and other bad actors, editors are always on the lookout for certain patterns that, for them, indicate likely vandalism or mischief. For academics, the best analogy might be the way that we scan student papers for patterns that indicate likely plagiarism. This sort of rough pattern recognition is deeply imperfect, and in the case of Messer-Kruse's edits, it produced a false positive. Nevertheless, just as teachers use rough patterns when scanning giant stacks of student assignments, so too Wikipedia editors have a clear need to be able to quickly detect likely bad actors.

If we look at Messer-Kruse's first interaction with the Wikipedia community, we can see some of the patterns that likely flagged him (incorrectly) as what Wikipedians sometimes call a "POV pusher," that is to say, a person with an ax to grind, looking to utilize Wikipedia as a free publishing platform for their own particular pet theories. He starts his engagement with a post to the article's talk page (a special Wikipedia page that permits editors to discuss the process of creating and revising a particular article), writing:

The line in the entry that reads: "The prosecution, led by Julius Grinnell, did not offer evidence connecting any of the defendants with the bombing..." is inaccurate. The prosecution introduced much evidence linking several of the defendants to the manufacture of the bomb, the distribution of the bombs, and an alleged plot to attack the police on the evening of Tuesday, May 4. An eye-witness was put on the stand who claimed to have seen Spies light the fuse of the bomb. Police officers testified that Fielden returned their fire with his revolver. Now these witnesses and this evidence may be disputed, but it is historically wrong to claim it was not introduced. For more specific information, see http://blogs.bgsu.edu/haymarket/myth-2-no-evidence/ (http://en.wikipedia.org/w/index.php?title=Talk:Haymarket_affair&diff=prev&oldid=265725190)

By starting with a post to the talk page, Messer-Kruse follows good Wikipedia etiquette, which encourages new editors to discuss substantial changes they wish to make to pages before making them. (Those who wish to review the full record of Dr. Messer-Kruse's Wikipedia activity may do so, here.)

However, in crafting this talk message, Messer-Kruse has unintentionally engaged in rhetorical patterns that flag him as a potential bad actor in the eyes of experienced Wikipedians. His most significant error is citing a self-published source, his Bowling Green State University blog, in support of his desired changes to the article. This is, as several Wikipedia editors quickly point out, in violation of Wikipedia's reliable source guidelines.

In Messer-Kruse's defense, the tone taken by some editors in these early exchanges represents a serious misstep on their part. They tended to engage in what is known as "wiki-lawyering," simply spitting the wiki-shorthand code for the policy he had violated (WP:RS) at him, with little attempt to explain why he had made an error, and no attempt to offer constructive ways in which a compromise solution might be reached. These editors have since been called on the carpet on the article's talk page for being unnecessarily hostile to newcomers, or "Biting the Newbies" in Wikipedia-speak.

Messer-Kruse, for his part, does not seem to absorb the reason why his blog is unacceptable as a source. After being directed to the reliable source policy by one editor, he retorts, "I have provided reliable sources. See my discussion of the McCormick's strike above in which I cite the primary sources for this information. By what standard are you claiming that http://blogs.bgsu.edu/haymarket/myth-2-no-evidence/ is not a 'reliable source.' It clearly cites primary sources in its rebutal of this myth. Perhaps its [sic] not 'reliable' sources you want but ideologically comfortable ones" (http://en.wikipedia.org/w/index.php?title=Talk:Haymarket_affair&diff=prev&oldid=265740457).

What Messer-Kruse is missing is how the reliable source policy allows Wikipedia to use the larger scholarly process of peer review for its own benefit. By preventing the use of self-published sources, and preferring secondary sources to primary sources, Wikipedia attempts to ensure that information has been subjected to the most vigorous review possible by scholars before being included in the encyclopedia. Does Messer-Kruse really believe that we should abandon this process, and simply allow any individual scholar to make novel claims about truth, regardless of their ability to convince scholarly peers? It is not some faceless herd of editors that Wikipedia defers to when evaluating truth-claims, it is the scholarly process itself. Even now, discussion is unfolding on the Haymarket affair article talk page concerning the larger scholarly response to Messer-Kruse's book. At issue is whether or not, in the eyes of the experts, this very recent book has indeed significantly revised our understanding of the Haymarket affair.

Wikipedia's policies here seem to have frustrated an attempt to add well-researched points to the encyclopedia, which is unfortunate. However, it is important to understand that Wikipedia editors are, every day, confronted by vast numbers of self-styled experts, many claiming academic credentials, referring to a blog or other self-published source that purports to upend this field or that based on a novel review of primary evidence. Climate science, evolutionary biology, and "western medicine," are all particularly common targets, though I have also witnessed claims to such unlikely discoveries as a grand unified field theory. While Messer-Kruse's claims are not outrageous, his use of a self-published source, and claims to a unique interpretation of historical events flag him in the eyes of Wikipedia editors as a potentially disruptive editor. They thus use the reliable source policy to defer the responsibility for deciding whether or not his claims are true to the larger process of scholarly peer review.

This deference may indeed, as Messer-Kruse points out, render Wikipedia resistant to change. It can also, as I have argued in my previous case study of Wikipedia's coverage of the Gaza conflict (see chapter 6 here for more detail), privilege points of view with greater access to the means of producing "reliable sources." This is an important potential problem for Wikipedia. It is an even more critical problem for a web-using public that too often allows Wikipedia to serve as their primary, or only, source of information on a given topic. More must be done to ensure the greater visibility of minority opinions on the web, and to prevent so-called "filter bubble" effects that may prevent web users from consuming a diverse set of information sources.

However, I don't think that Messer-Kruse's critique of the "undue weight" policy of Wikipedia, which holds that Wikipedia should base its coverage on academic consensus on a given topic, is the best way of correcting for this potential problem. It is interesting to note that Messer-Kruse himself, in discussing a related edit to the Haymarket affair article, makes a sort of "undue weight" argument of his own. He argues that the article's casualty count for the McCormick riot (an event that would help set the stage for the later events at the Haymarket) should be changed because, "The claim that six men were killed at the McCormick riot is inaccurate. This claim comes from the report written for the anarchist newspaper by August Spies. Chicago cornorer's records and all the other daily newspapers finally settled on two deaths as the correct number" (http://en.wikipedia.org/w/index.php?title=Talk:Haymarket_affair&diff=prev&oldid=265729292). Here, Messer-Kruse is effectively arguing for the exclusion of a fringe opinion, in deference to the weight of consensus found in other sources.

Consensus, then, is an important mechanism by which we judge the validity of certain truth-claims. I believe that one reason academics like Messer-Kruse and Wikipedia editors may not see eye to eye is that they have been trained to evaluate consensus in very different ways. Academic training, especially in fields like history that stress the production of monographs, tends to privilege the scholar's own individual judgement of the consensus of evidence. Wikipedians, by necessity of the situation Wikipedia finds itself in, understand consensus to be an ongoing process involving a vast number of both scholarly and non-scholarly actors. Rather than asking Wikipedia to hew closer to any one academic's evaluation of "truth," I would posit that we can more readily improve Wikipedia's accuracy and respect for evidence by engaging with and respecting this ongoing process. By offering our scholarly findings to the Wikipedia community as peers in a larger process of negotiating the truth, we have the best chance of helping to build a Wikipedia that truly reflects the fullest and best picture possible of the always fraught and diverse process of establishing what we know.

 

(Meta)Aggregating Occupy Wall Street

With 0 comments

So, a few weeks ago, as the Occupy Wall Street movement started picking up steam and spreading beyond the initial occupation site at Zuccotti Park, I noticed that news about the various occupations, which was predominantly being spread via social media channels, often seemed fragmentary and hard to get a hold of in any sort of holistic way. This, it occurred to me, was basically an aggregation and meta-data problem to be solved, so I suggested as much to a group of fellow academics with an interest in the digital humanities. Sadly, we're all busy teachers and academic professionals, and only one of us was an experienced coder, so we didn't produce the grand aggregation of public data on OWS I had imagined. We did, however, start to collect a database of tweets that will hopefully become a fruitful source for future research.

In the meantime, however, others have done what I suggested. This is the Web 2.0 version of the "procrastination principle": if you have a good idea, just wait; someone else will implement it. In this blog post, I attempt to make my own (very) small contribution to this process by providing an annotated list of the available aggregation projects: a sort of meta-aggregation, if you will.

OWS Aggregation Sites:

  • OccupationAlist is an attempt to be a single-page portal to the entirety of the media coming out of the occupy movements. It includes recent updates from the "We Are The 99 Percent" tumblr, arranged by date in a horizontal format that seems to have been inspired by something like iTunes' Cover Flow. They use foursquare check-ins to provide a visual representation of activity at occupation sites, and a map of occupation-related meetups. Recent video posts are on the right-hand side, and recent results of twitter searches of relevant hashtags sprawl across the bottom of the page. The attempt to be everything to everyone is ambitious, and I'm curious to see how they refine the site.
  • Occupy Together, an early hub site for online organization of the movement, provides a hand-edited daily news blog of events they believe to be significant to the movement, as well as organizing information and a directory of actions including action websites.
  • OccupyStream provides a handy way to access dozens of occupation LiveStream channels, which have often been the source of important citizen-documentation of events as they unfold. Sadly, the site does not currently give the user any way of knowing whether a given channel is broadcasting, or even active, without clicking through to the channel. I'm not sure if the LiveStream site provides any way of doing this, but being able to see who is broadcasting live at a glance would be great.
  • Researchers may be interested in participating in occupyresearch, an interdisciplinary hub wiki for research projects investigating the movement.
The aggregation sites above are interesting, but I have found that the best way to keep abreast of news about the Occupy movement is via social media, especially twitter. Cultivating a good list of twitter sources is, of course, essential to using the medium effectively. Here are some useful twitter sources I have discovered:
  • R. Kevin Nelson's Occupy Wall Street list is New York-centric, but very good.
  • Andrew Katz is a Columbia J-school student who has been a great first-person source of Occupy Wall Street news, and a strong curator of messages from other occupations as well.
  • David Graeber is an anthropologist and anarchist theorist who played an important role in fomenting the initial Wall Street occupation.
  • Xeni Jardin is a boingboing contributor, and tweets prolifically on a wide variety of topics. I have, however, found her a useful retweet relay of occupation news, as well as other nerdy news items.
I'm sure I don't have a comprehensive list here. What am I missing? Let me know in the comments!

Netflix, Strategy, and the First Sale Doctrine

With 0 comments

When I was a younger man, I used to fancy strategy wargames. I thought I was pretty good at them too, until I played Stea. Stea was a hacker's hacker, the man who first taught me Unix, a person for whom logical forms of abstraction and analysis were as natural as breathing. By the time I started my second turn of our game, I had already lost. There were better moves I could make, and worse moves, but all the moves I could make led to me losing. That, I learned, was what the art of strategy was: the practice of giving your opponent only losing moves.

To me, that's what Netflix announcing it will be spinning off its DVD-by-mail service looks like: a losing move made by a desperate player. The best analysis I've seen as to why Netflix would take this seemingly counter-intuitive step argues that Netflix is intentionally throwing DVD-by-mail distribution overboard, ridding itself of the expensive baggage of distribution centers, warehouses, and (paging Nicholas Carr) workers, to move forward into a future dominated by digital streaming. Discs are dead. Burn the boats.

This logic makes sense, but can Netflix survive on the ground it is moving forward onto? As a distributor of physical discs, Netflix enjoyed the protection of the first sale doctrine, which holds that purchasers of books, video cassettes, DVDs, Blu-ray discs, or other tangible copies of media, have the right to do as they please with that particular copy. The first sale doctrine meant that Netflix was free to rent the same discs sold to consumers, and that publishers couldn't easily stop them from running a rental business without withholding content from the general public. In a sense, Netflix got its start by being a bit of a clever hack, leveraging the first sale doctrine and business reply mail rules to build an innovative and inexpensive way for consumers to access a vast library of video recordings.

In the streaming environment, things are different. Netflix must obtain permission from publishers to stream movies to consumers. If it wants, say, access to NBC Universal content, it has to deal with Comcast. Why should a vertically-integrated entity like Comcast allow Netflix to take a piece of the action for streaming content it owns across a network it also owns large pieces of (and which it has already attempted to limit Netflix's access to)? I don't see how that equation works. All Netflix has to bring to the table here is the good will of its customers, good will it hasn't exactly been cultivating of late.

That said, they may retain me, at least, as a customer for a little while longer. The reason? They are keeping the envelopes red. The other thing I learned, all those years ago, watching Stea march his armies toward me across the game board in impeccable order, is that I am a hopeless romantic. I was too busy building beautiful bomber formations to bother with actually winning the game. As long as I can get red envelopes in the mail, I'll probably stick with Netflix (or Qwikster, or whatever) until the end of their losing game.

Computer Generated News

With 0 comments

An article in yesterday's New York Times reports on recent advances in using software to automatically generate sports reporting. The software, created by a firm called Narrative Science, reportedly generates human-like text, and has already had one big operational success:

"Last fall, the Big Ten Network began using Narrative Science for updates of football and basketball games. Those reports helped drive a surge in referrals to the Web site from Google’s search algorithm, which highly ranks new content on popular subjects, Mr. Calderon says."

The role of Google here cannot be stressed enough. Once again, the preferences of the search engine giant are shaping our contemporary media environment in profound ways, perhaps without much conscious reflection on our part.

My biggest anxiety in cases like this is always the one expressed by Norbert Wiener at the close of his 1948 volume Cybernetics:

"The modern industrial revolution is similarly bound to devalue the human brain at least in its simpler and more routine decisions. Of course, just as the skilled carpenter, the skilled mechanic, the skilled dressmaker have in some degree survived the first industrial revolution, so the skilled scientist and the skilled administrator may survive the second. However, taking the second revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that is worth anyone’s money to buy."

What should the humanities be?

With 0 comments

A while ago I wrote a blog post expressing my frustrations with the available definitions for the collection of disciplines known as "the humanities." You can read it on TechStyle, the group blog for Georgia Tech's Brittain Fellows, here. I explained how I didn't think defining the humanities in terms of the canon of "literature," the method of "reading," or the advancement of "values" could adequately provide a framework for an academic discipline. Today, briefly and humbly, I would like to propose a definition I think could serve as a framework for the humanities.

The definition I propose is quite simple: the humanities are the disciplines concerned with the production, distribution, and interpretation of human readable texts.

I'm borrowing my use of the term human readable from the Creative Commons project. Creative Commons builds on the distinction, likely familiar to digital humanists and computer scientists alike, between machine readable codes, which are designed to be interpreted by a computer, and human readable codes, which are designed to be interpreted by a person. Creative Commons, for example, creates a machine readable version of their licenses, designed to permit search engines to automatically discover works that have been released with particular re-use rights, and a human readable version of their license, designed to permit "ordinary people" to understand the terms of a particular license and what these terms mean. However, Creative Commons further distinguishes the human readable version of the license from the technical legal code of the license itself. This legal code has sometimes been dubbed the "lawyer readable version." To fully appreciate the difference between human readable and lawyer readable, you can compare the human readable version of the Creative Commons Attribution license to the full legal code of the same license.
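To make the distinction concrete, here is a minimal sketch (in Python, with a toy page of my own invention) of what the machine readable layer looks like from the machine's side. Creative Commons marks licensed works with rel="license" links, which software can harvest without understanding a word of the page's prose:

    from html.parser import HTMLParser

    class LicenseFinder(HTMLParser):
        """Collect the targets of rel="license" links, the machine readable
        marker Creative Commons uses to label a work's license."""
        def __init__(self):
            super().__init__()
            self.licenses = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "a" and attrs.get("rel") == "license":
                self.licenses.append(attrs.get("href"))

    page = '<a rel="license" href="http://creativecommons.org/licenses/by/3.0/">CC BY 3.0</a>'
    finder = LicenseFinder()
    finder.feed(page)
    print(finder.licenses)  # ['http://creativecommons.org/licenses/by/3.0/']

A search engine can act on that link without any interpretation at all; the human readable deed, by contrast, only does its work when a person reads and understands it.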

My suggestion, then, is that the humanities should focus on texts that are human readable in the sense that Creative Commons human readable licenses are intended to be. That is to say, texts that are written to be read by a varied audience, rather than a narrow group of professionals with intensive and explicit training in interpreting these texts. Texts that are meant to serve as contact zones, where a variety of constituencies might negotiate common understandings of shared issues.

I propose that we focus on the human readable, but not that we limit ourselves to it. Clearly the human readable is always deeply interlinked with a wide variety of other actors: legal and machine codes, media technologies, economic entities, human biology. I only suggest that we make the human readable our point of entry. I believe it is an important point of entry. After all, for all of the specialized knowledge produced by our highly technical and segmented culture, we still rely on human readable texts to build political and economic coalitions that span these specialized forms of knowledge. The science of climate change, for example, cannot impact the political and economic processes that shape the human influence on the climate without the production of human readable texts that explain the significance of the science. Furthermore, these texts do not operate in a vacuum; rather, their reception is shaped by earlier texts.

So, that is my modest proposal. The humanities as the study of human readable texts. What do people think?

Friends Don't Let Friends CC-BY

With 0 comments

In the last day or so it seems the old Creative Commons license debate has flared back to bright and vibrant life. Perhaps it is a rite of spring, since a bit of googling reveals that LAST spring Nina Paley and Cory Doctorow embarked on a long debate on this very issue. From where I'm sitting (on my twitter feed) there seems to be an emerging consensus in this spring's license campaign that the simple CC-BY license is the best choice for scholars and artists interested in contributing their work to a vibrant intellectual commons. This point of view seems best summarized by Bethany Nowviskie in her blog post "why, oh why, CC-BY." In her post, Nowviskie notes that the CC-BY license allows for the broadest possible redistribution of work by clearing all possible restrictions on re-use. This openness, she argues, has several benefits: it gives her work the potential to be "bundled in some form that can support its own production by charging a fee, [and help] humanities publishers to experiment with new ways forward," it removes the potential that her material could someday become part of the growing body of "orphaned work," and ensures that she isn't left behind by commercial textbooks (which, she points out "will go on with out me"). She argues that restrictive CC clauses, like the NC clause, represent misguided attempts to retain pride-of-place by original creators unwilling to give over their work to the commons freely. She concludes that "CC-BY is more in line with the practical and ideological goals of the Commons, and the little contribution I want to make to it."

I understand where Nowviskie is coming from, and I think the generous impulse she is following to make her work free to everyone, without restrictions, is an admirable one. Like her, I believe that individual authors should set their work free, as it were, and not try to exercise control over what becomes of the words, images, or sounds they produce. She is correct in asserting that CC-BY is the license that most perfectly negates the dubious property rights that individual authors are granted by copyright. However, ultimately I think she is wrong about the collective effect of artists and scholars releasing their work under the CC-BY license. I do not believe the CC-BY license does enough to protect and maintain a vibrant intellectual commons.

Here's why. The CC-BY license is, as Nowviskie points out, the Creative Commons license most similar to releasing one's work into the public domain. The problem is, we know what happens to an unprotected public domain in the presence of large, rapacious, commercial interests with a vested interest in the production of "intellectual property": it is sucked up, propertized, and spit back out in a form the commons can't use. Lawrence Lessig tells this story forcefully and eloquently in Free Culture, as does James Boyle in his "Second Enclosure Movement." The example of Disney's transformation of folk and fairy tales is perhaps the clearest. The old stories that Disney based many of its early movies on were free for anyone to re-imagine; the versions Disney made (which are, for our culture, the definitive versions, thanks to Disney's ubiquitous publishing reach) are strictly controlled property. The revision of copyright law (shaped in part by Disney's lobbyists) threatens to remove these versions from the public domain forever. There is nothing stopping a textbook publisher from scooping up Nowviskie's work (or the work of any other scholar publishing under CC-BY) and performing the same trick, producing a modified version that would be lost to the commons (and which might be in thousands of classrooms, becoming the definitive version for students). Without protection, the commons becomes fodder for commercial intellectual property producers, who take from it but give nothing back. This exploitation of the commons harms it in several ways: it prevents the re-use of what are often the best-known versions of works, it reinforces a system of production that insists on propertizing or otherwise monetizing content to support producers, and it may alienate creators who want to give their work to the commons but feel taken advantage of by commercial uses of their work.

For this reason, I strongly recommend that everyone use either the Share Alike (SA) clause, which forces re-users to release derivative works under Creative Commons, or the Non-Commercial (NC) clause on their CC-licensed work. I use both, just to be sure. While some might argue that these clauses should be adopted by those who prefer them and abandoned by those who don't, depending on their personal feelings about the re-use of their work, I hold that the building of the commons is a collective endeavor, and that we must all collectively choose to prevent the enclosure of the new commons we are building together. My work is not very valuable on its own, but combined with the work of all the other contributors to the commons, it forms a body of work worth protecting from those who would take from our community without giving anything back.

PS: This blog is not clearly labelled with the CC-SA-NC license because I am in the middle of a site redesign (I had to push this post out while the debate was hot)... this blog is, however, under CC-SA-NC

PPS: The redesign is also why everything is such a mess! Come back soon for a nicely designed site!

 


Watson + Capitalism = ???

With 0 comments

Earlier this week, a piece of natural language processing software dubbed Watson, developed by IBM, successfully and decisively defeated two human opponents on the game show Jeopardy. The potential implications of this technology seem immense. Wired reports that IBM "sees a future in which fields like medical diagnosis, business analytics, and tech support are automated by question-answering software like Watson." One of the humans Watson trounced, former Jeopardy champion Ken Jennings, mused in the same article, "'Quiz show contestant' may be the first job made redundant by Watson, but I'm sure it won't be the last."

The question, for me, is what are the larger implications of this emerging automation of intellectual work for our political economy. What happens when we automate vast numbers of service sector jobs? The same jobs that had absorbed (some of) the workers automation had already eliminated from the manufacturing sector? Are we on the cusp of a moment, predicted long ago by cybernetics pioneer Norbert Wiener, when "the average human being [...] has nothing to sell that's worth anyone's money to buy?"

I find the notion all too plausible. Blame my time spent reading Peter Watts. Emerging media scholar David Parry, always a bigger optimist than me, suggested a skill that may remain the unique domain of human beings during a discussion of Watson's victory on twitter. In response to a half-joking tweet in which a fellow academic questioned her own employability in a post-Watson world, Dave wrote, "well yeah, that's why we need academics who can do critical thinking, computers aren't so good at that yet."

Critical thinking is a good thing, and indeed something computers still struggle with. However, under capitalism, meaningful critical thinking, the ability to evaluate arguments, reflect on the big-picture situation, and enact alternatives to the status quo, is exactly what has been denied the working class. Critical thinking is for capital; the cognitive resources of the working class have been employed in quite a different mode, and one that machines like Watson will likely find all too easy to replicate. This is not to say, of course, that working class people are incapable of critical thought, or that they don't employ critical thinking in their daily lives, only that this thought has not been granted economic value under capitalism.

The question we must ask, then, is what sort of shifts could be made in our political economy to accommodate technologies like Watson, and what sort are we likely to make? Could we shift our productive mode to value critical thinking by ordinary people? Will we devalue the labor of a vast cross-section of humanity, further destroying the middle class? What tactics or moves might make one shift more likely than the other?

Clearly, I don't know. What do you think?

My diss in one sentence

With 0 comments

"This suggests that those interested in intervening in Wikipedia, or other peer-production based projects, might be better served by focusing on changing the terms of negotiation between interested parties, rather than technologically empowering individuals."

The Rhetorics of the Information Society - Michael Jones at Future Media Fest

With 0 comments

24 hours of video per minute

That's the rate at which digital footage is being uploaded to YouTube, according to Michael Jones' opening keynote presentation at Future Media Fest. Jones, who is Chief Technology Advocate at Google, cited the number as part of his argument that digital communication technology is becoming ever more ubiquitous. Understandably, he saw Google as playing an important role in this ubiquitous information environment.

This image, of thousands of camera-phone eyes feeding days of video into Google as minutes tick by, may, for some media theorists, call to mind the image of the Panopticon, the model prison made famous by French philosopher Michel Foucault, in which prisoners arranged in transparent cells at the perimeter had their every move watched by a concealed figure in a central tower. The Panopticon, Foucault explained, was designed to teach prisoners to internalize the values of their guards: because they never knew if a guard was watching, they began to watch themselves.

Is Google a modern day Panopticon, watching over us all, invisibly guiding, Foucault would say "disciplining," our behavior? Jones didn't think so. He took great pains to describe Google as a passive entity. "We are your servant," he said at one point. At another, he claimed, "we don't make decisions, we take what humans do and we amplify it." As examples, he cited the ways in which Google tried to reflect the needs of its customers. He described how users of Google Maps were active participants in the process of drawing the maps that Google served, especially in developing countries. Explaining the motivations of contributors to the Google Maps project, Jones said, "they didn't want me to have their map, they wanted their friends to have their map." Finally, in response to a questioner who asked how Google could claim that they were a reflection of already existing behavior when values are always embedded in technology, Jones replied that data harvested from users was used to develop the technology itself. For example, he explained that the size of buttons in Google's Gmail webmail service had not been designed by some top-down process of expertise; rather, different button sizes had been provided to different users, and the ideal button size had been determined based on data collected on the users' reaction times when using the various buttons.

All this should, of course, be taken with a grain of salt. Anytime an executive officer of a major corporation argues that his company is basically powerless, it suggests the company has become aware of popular anxieties about its power. Certainly, this is true for Google. Jones' claims that Google is passive and reflective also seem to overlook an observation he made earlier in his presentation, when he noted that "Henry Ford changed the way cities were designed." Just as the automobile transformed the American urban landscape, leading to, among other things, the rise of the suburbs, so too it is difficult to imagine that a technology as powerful as search could fail to transform our patterns of behavior.

That said, however, I think that Jones' apology for Google makes clear important differences between the 19th century technology of the Panopticon and the 21st century technology of search. Unlike the Panopticon, where a human agent stood in the tower and imposed rational, intentional values on the confined prisoners, encouraging them to adopt regimented work habits and abandon dangerous transgressions, nothing human could possibly process the surveillance performed by Google. Just to watch a single day's worth of YouTube uploads would take nearly four years (1,440 minutes per day times 24 hours of video per minute comes to some 34,560 hours)! Instead, what seems to stand in the center of Google's apparatus of search (to the extent that there is such a thing) is something else entirely, something lashed together out of computer algorithms and pre-conscious thought. Something that adjusts buttons without us noticing and sums together collective contributions to make a map.

This should not be, in and of itself, frightening. The mastery of human consciousness was always a bit of an illusion. However, I do think we may need to do some reflection about who the mechanisms of search benefit, and what larger transformations this shift from intention to algorithm may entail.

More Wikipedia Clouds

With 0 comments

I put together two more Wikipedia word clouds, in part because I wanted an excuse to work on my Python coding skills, and in part because I enjoy word clouds as an interesting visualization. For these word clouds, I used a Python script to organize the information I scraped from the Wikipedia Zeitgeist page (see prior post for link). The resulting file listed the titles of articles and the number of times each article had been edited for the month(s) it had made the list. By running this file through the Wordle software, I was able to produce a word cloud that displays the titles with their relative sizes determined by the number of edits they had received in a single month.
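For the curious, the heart of that script is simple enough. Here's a minimal sketch of the approach (the file names and the exact column layout of my scraped data are stand-ins; Wordle's advanced input accepts one weighted word:count pair per line):

    import csv

    # The scraped Zeitgeist data, assumed here to be a CSV of (month, title, edits).
    # Emit one weighted entry per article-month, so an article that made the
    # most-edited list in many different months shows up as many entries.
    with open("zeitgeist_scrape.csv", newline="") as scrape, \
         open("wordle_input.txt", "w") as out:
        for month, title, edits in csv.reader(scrape):
            # Wordle splits on whitespace, so tildes hold multi-word titles
            # together as a single token.
            out.write("%s:%s\n" % (title.replace(" ", "~"), edits))

Swapping in a collections.Counter to sum the edits per title would give the all-time cloud I mention at the end of this post.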

Most edits in one month

The image above shows that the Wikipedia article on the Virginia Tech Massacre probably has the largest number of edits in a single month for any one English Wikipedia article, though if you look closely (click through to the larger size on Flickr) you can see some articles, like the one on George W. Bush, appearing as many smaller entries in the word cloud. This reflects the many months that the George W. Bush article was one of the most edited articles on the English Wikipedia, even though it was never edited nearly as many times in a single month as the Virginia Tech Massacre article.

Here is the same data, with some of the less-edited articles left out. The result is less visually impressive, but a little more legible.

Most edited articles 2

Next, I'll modify my script to count up all the edits and display a cloud showing which titles are the most edited articles on the English Wikipedia ever!

Wikipedia Zeitgeist

With 0 comments

Just for fun, here's a quick and dirty word cloud built by running the data from the most edited articles on the English Wikipedia (found here: http://stats.wikimedia.org/EN/TablesWikipediaEN.htm#zeitgeist) through the IBM software that powers the website Wordle.

wikipedia zeitgeist

Fun Fact: Wikipedia's Most Edited Pages

With 0 comments

Wikipedia maintains a massive archive of statistical data on the project. Among this data is a list of the 50 most edited pages on the English Wikipedia. Of these 50 most edited pages, all but two are pages having to do with project maintenance, such as the page that is used to notify administrators of vandalism.

Only one actual article is listed, the article for George W. Bush. The other non-project maintenance page? The talk page for the article on Barack Obama.

Bitcoin and the Libertarian Individual

With 0 comments

Ellen Ullman, in her excellent memoir Close to the Machine, describes an odd young man she briefly took up with as an on-again, off-again lover. Among his many obsessions was the notion of creating a cryptographic currency, a wholly anonymous and independent banking system. Well, it looks like someone has gone and implemented this idea. The Bitcoin project "is a peer-to-peer network based digital currency." It apparently derives its backing from CPU processor cycles. I'm not exactly sure how that works, but the Ron Paul-esque libertarian dreams of the creators are quite clear in their description of the project's advantages: "Be safe from the instability caused by fractional reserve banking and bad policies of central banks."
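As best I understand it, the "CPU cycles" bit refers to Bitcoin's hashcash-style proof of work: participants burn processor time brute-forcing a nonce that gives a block's hash an improbably rare form, and new coins are issued as a reward for that work. Here is a toy sketch of the search (in the real protocol the block header is hashed twice with SHA-256 and compared against a numeric target, but the brute-force loop is the same idea):

    import hashlib
    from itertools import count

    def mine(block_data: bytes, difficulty: int) -> int:
        """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
        target = "0" * difficulty
        for nonce in count():
            digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
            if digest.startswith(target):
                return nonce

    # Finding the nonce costs real CPU work; each extra digit of difficulty
    # multiplies the expected work by sixteen.
    print(mine(b"example block", difficulty=4))

The asymmetry is the point: the winning nonce is expensive to find but takes a single hash to verify, so peers who don't trust one another can still cheaply confirm that the work was done.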

I'm pretty sure projects like this get something deeply wrong about the social relationships that money relies on, but I'm not sure exactly what. The individualist mindset that backs all this is suspect: money relies on shared social relationships. However, the Bitcoin folks clearly imagine a set of relationships among individuals, in the form of the peer-to-peer network. It is easy to explain why peer-to-peer networks do not describe the world as it currently exists. It is more difficult to explain why attempts to build them seem to inevitably fail.

Nietzsche and the NPOV

With 0 comments

Wikipedia's Neutral Point of View is, understandably, a controversial thing in some circles. I'm still investigating what the actual impact of the NPOV is, and I think its effects are complicated. However, I think some people are getting hung up on the word "Neutral," which may imply a sort of "objectivity" that they find distasteful: namely, a vision of objective knowledge that is "neutral" because it is "pure," somehow detached from the messy world of politics and subjectivity we live in. This critique of the NPOV, I think, is a mistake. I'm not the only one to make this argument; Wikipedians themselves make it in their FAQ for the NPOV, and Joe Reagle has made it as well.

Nietzsche, in The Genealogy of Morals, makes a critique of the notion of "pure reason" or "knowledge in itself" that is not dissimilar, I think, to the criticism sometimes leveled at the NPOV. He writes:

There is only a perspective seeing, only a perspective "knowing"; and the more affects we allow to speak about one thing, the more eyes, different eyes, we can use to observe one thing, the more complete will our "concept" of this thing, our "objectivity," be.

I couldn't help but be struck by how similar Nietzsche's call for an "objectivity" based on a multitude of perspectives is to the NPOV itself! One section of the policy reads:

[The NPOV] is not a lack of viewpoint, but is rather an editorially neutral point of view. An article and its sub-articles should clearly describe, represent, and characterize all the disputes within a topic, but should not endorse any particular point of view. It should explain who believes what, and why, and which points of view are most common.

Of course, this proves nothing about how the NPOV is actually employed in practice, which is considerably more complicated. Still, I think it makes for an interesting comparison.

About Copyvillain

With 0 comments

Hello! And welcome to the blog!

Here on Copyvillain, I hope to work toward a critical examination of what's been called Free Culture, Peer-to-Peer culture, or Peer Production. Basically, all of these terms refer to the mode of producing culture that we find on Wikipedia, in which many loosely organized collaborators work together to produce a larger text without a strictly hierarchical organization.

I think it is important to explain what I mean when I say I intend to take a "critical" approach to this method of production on this blog. I do not come here to bury Wikipedia (or YouTube, or hacker spaces, or what have you). These are some of my favorite things. I think that these projects, and the people involved with them, are often animated by a tremendous idealism, and a wonderful sense that they are working together to build a better future for all of humanity.

I think that idealism is real, in fact I'm banking on it. What I'd like to do here is to argue that some of the cultural assumptions Peer Production brings with it from capitalism may serve to undercut the very idealistic goals that its practitioners embrace. My hope is that their genuine commitment to a better, fairer, human future will motivate them to move away from these assumptions and towards a new vision for Peer Production based on a broader understanding of human equality and shared responsibility.

Of course, I'll also be posting some short write-ups on current news, just to keep abreast of things.

Stay tuned!