Aden Evens
Assistant Professor of English, Dartmouth College
Early on with the first Apples, we had these dreams that the computer would let you know what you wanted to do.
— Steve Wozniak
Digital and Medial
Wozniak’s dream endures; still we dream of the computer that already knows what one wants. If only we could eliminate the clumsy interface, all that clicking and typing, the computer would at last become equal to the will of the user. Fully adequate to Marshall McLuhan’s description of technology as an ‘extension of ourselves’, a transparent interface would bypass the senses to transcend medium altogether.
Problematic at best, the desire for a transparent interface nevertheless drives much of digital culture and technology. But not the Web; or at least, not Web 1.0. Thoroughly commercialized, comfortably parsed into genres, serving billions of pages of predigested content to passive consumers, the World Wide Web as developed in the ’90s unabashedly embraces its role as medium. While so many digital technologies work to hide their mediacy–drawing in the user with a total simulated sensorium, dematerializing the resistances of size and weight, untangling the knots of cables tying user to machine and machine to cubicle, minimizing the interface–Web 1.0 proudly clings to the browser as a glaring reminder of its medial character.
While Web 2.0 has not forsaken the browser altogether, it nevertheless seems to offer a different sort of mediation. Arising alongside the atomization of browser functions, the ubiquitization of connectivity, and the coincidence of producer and user, Web 2.0 retains the form of a medium while reaching for the experiential logic of immediacy. This is not the immediacy of the transparent interface; rather, Web 2.0 effects an immediate relationship between the individual and culture. The interface does not disappear, but its mediacy is subsumed under the general form of cultural participation. Focusing on the “version upgrade” from Web 1.0 to 2.0, this essay will explore the implications for mediacy of this transition, noting that the fantasy of immediacy which drives Web 2.0 is layered and complex. The typical account of immediacy proposes to eliminate the interface and so construct a virtual reality (VR). But Web 2.0 mostly sidesteps the virtual, propelled instead by a fantasy of intuition in which the Web already knows what you want because it is you. Crucially, fantasies about the digital are effective: the computer’s futurity inhabits our world, finding its expression in politics, advertising, budgeting, strategic planning, fiction, philosophy, and in the hopes and fears that infuse and define our culture. Conceptions of today’s future are inevitably shaped by the digital, which appears in forward-looking images and texts from patent applications to novels and film. A fantastic promise, often utopian, drives the development and adoption of digital technologies. But we do not fantasize without an attendant anxiety, a worry about what becomes of us in the digital future.
No one is sure whether Web 2.0 has arrived. When it carried the aura of futurity its fanfare was brash, but like much technological evolution, its actual adoption has been more subtle. Sighting along the line that connects Web 1.0 to 2.0 and 3.0, we can discern a telling fantasy about the internet and the place it makes for the user. Whereas the loudest discourse about the digital continues to trumpet VR as the ne plus ultra of the digital future, the fantasy surrounding the internet looks down a different path of digital progress. Instead of imagining a predefined virtual world in which a consumer can explore and play without risk, we anticipate a world made by the user’s desire, a domain equal to the will of the user. This is the fantasy of intuition, the computer that knows what one wants, supplanting the tired fantasy of virtuality. These two fantasies are easily confused and frequently intertwined, for they both answer to a more general desire for immediacy, the desire to erase the gap that separates user and computer. Immediacy includes both virtuality and intuition, but the discourse of VR tends to eclipse the more subtle (and less representational) issue of the intuitive interface.
In Remediation, Jay Bolter and Richard Grusin note that new media move in two different directions, toward a hypermediacy that underlines the medial nature of new media, and toward a countervailing immediacy that works to erode new media’s mediacy. Bolter and Grusin claim that a dialectic of hypermediacy and immediacy characterizes all media to some extent, though new media realize this bipolarity with particular intensity. Media are traditionally understood according to a tripartite, unidirectional schema: sender → channel → receiver, or author → content → audience. This structure is fundamental to media but inadequate, and the interactivity of new media mounts an especially fierce challenge to this traditional image. Interactivity ties together the ends of the simple schema, coinciding audience and author; ‘for the individual participant, the traditional value chain of producer-distributor-consumer has condensed to a singular point’ (Bruns, 2008). There are no longer distinct actors separated by a transmission channel; without opposed boundaries, the channel, the middle, does not hold. Below I argue that this scrambling of the medial position engenders a risk, a dissolution of the subject, when the audience can no longer be separated from the content with which it interacts. Exciting and powerful, interactivity is thus also dangerous, for it threatens to dissolve actors and medium into an information soup.
When the digital collapses the two ends of the medial schema, it also renders indistinct the middle term, the medium, by denying its specificity. Television and books, though each includes many genres, impart a character to their contents. No matter what is on television, it is televisual, and if this character is hard to define it is nevertheless distinctive and recognizable. The digital by contrast resists essential character, for it excels at a broad range of simulation; it can behave as almost any medium, a ‘universal media machine’ (Manovich, 2001: 69). In light of its apparent lack of medium specificity, some commentators propose to label the digital a panmedium, but this goes too far.[1] While the digital can effectively simulate a great many things (TV, newspaper, books, film, radio, compact disc, telephone, etc.), it never submits entirely to the simulated, stamping its products and processes with its own ontology. Recognizing the need for some additional complexity, Alan Kay coins the term metamedium to describe the medial character of the computer. ‘The protean nature of the computer is such that it can act like a machine or like a language to be shaped and exploited. It is a medium that can dynamically simulate the details of any other medium, including media that cannot exist physically. It is not a tool, although it can act like many tools’ (1999: 136). According to Kay, the metamedium is a construction kit, a medium for the invention of media as much as for their capture. This awards the computer a creative (and not only mimetic) role, to exercise ‘degrees of freedom for representation and expression never before encountered’. The computer is not just a simulation machine, but a vastly malleable canvas.
Emphasizing the computer’s ‘protean nature’, Kay does not mean that the computer is a nascent technology. Rather, he maintains that the digital is essentially and eternally protean, its nature always to be determined. Which technology insists more forcefully on its own futurity? The high-tech industry sells a promise with every product, a promise about what the digital will do tomorrow. With a dynamic of constant renewal (hence also built-in obsolescence), the digital is the technology of the future. This is the age of the imminent arrival of the next big thing: the next version number, the next chip, the next OS, the next standard. The ontology of the computer includes an indeterminacy that might be turned to whatever purpose, protean despite the determinism of the binary code.
For example, consider the introduction of new operating systems. When Microsoft or Apple introduces a new operating system or a new version of an operating system, many of its features are trumpeted on the basis of what it will eventually be capable of when certain future conditions are met. A 64-bit OS will really work well once applications are recompiled to take advantage of this wide-bus architecture; new hardware standards will shine as soon as third-party manufacturers re-engineer their products to operate according to the new integrated standard. Advances in hardware and software are often predicated on the imminent arrival of complementary advances, and the typical situation is one of constant flux, where standards are altered before they have been entirely implemented. The very concept of a standard has a different meaning in the digital realm: digital standards tend to serve both as norms governing current behavior to ensure interoperability or broad consistency and predictability, but also as virtual carrots, dangling forever in front of the industry to prompt its progress.
The Web as Medium
If the fundamental proteanism of the digital perplexes its ready categorization as a simple medium, the World Wide Web offers a complex counterpoint to the computer’s recalcitrance. For no digital technology more than the Web can claim to be a (new) medium. We obtain news and information on the Web. We watch video and listen to music. We publish and distribute fiction and analysis on the Web. We shop, make art, and pass the time on the Web. Understood in terms of these and its many other functions, the Web is a medium.
The Web’s claim to mediacy is not generated spontaneously. Driven by a confluence of commercial and popular interests the Web sloughs off the dynamic open-endedness of the digital and tends increasingly toward the staid and static. We visit the same sites each day, settle into familiar and comfortable habits of browsing. But it was not always so. Twelve years ago you did not browse but surfed the Web. Surfing is active, thrilling and risky. The waves carry you where they will, and the water may even overwhelm you. The experience of surfing the Web used to be an unpredictable and exciting one; the next link led who-knows-where. Following the currents whose dynamic topology was inscribed in the hyperlinks leading from page to page, the user generated not just a new path, a new text, but a whole new set of meanings, a new activity. (Jerome McGann refers to those early days of the Web as the ‘World Wild West’ [2001: 5].) In the mid-’90s, surfing the Web was surrounded by an aura of excitement and invention, and there was a shared sense that the mass-scale availability of hyperlinked documents and hypermedia formats was producing a new way of reading, previously unknown. No one knew quite what direction this challenging network might go, and its instability and unpredictability resisted the ascription medium. (In the early ’90s, the frenzy of academic research into hypertext was driven by the belief that we were teetering on the cusp of a whole new paradigm of textuality. Somewhat abashed, we’re still waiting for the new electronic text fully to arrive.)
A dozen years on and the risky thrill of surfing has given way to the bourgeois fantasy of browsing. Idling one’s way along the aisles, one peruses goods which for their part offer no resistance, no threat, and very little surprise. In 2009, one knows just where one will go today. Hyperlinks do not startle or jar, and they may no longer even define the activity of browsing, which is now more often a matter of selecting a bookmark. Bookmarks at the ready, users visit the same few sites each day, and if Google or a blog entry points to an unfamiliar site there is little chance that clicking this link will open up new vistas of experience.
Cementing its mediacy, Web 1.0 adheres readily to the semiotic structure that pervades media studies: content-creator → content → audience. And unlike the digital in general, the breadth of the Web, while imposing, can be comfortably parsed into genres (blog, shopping, news, wiki, reference, search, chat, forum, etc.). (Of course there are outliers.) Having lost its threat and its thrill, the Web does not push against the boundaries of media, but situates itself as the new medium.
Web 2.0 purports to reverse all of that, reinjecting a lost dynamism into the ossified medial structure of the World Wide Web. By making the terminal positions of the medial chain overlap, the circuit of production and consumption collapses into itself, and this disruption leads to unpredictable and exciting reconfigurations of subjects and contents. ‘The endpoints are starting to inform the center’ (Markoff, 2005). By definition, Web 2.0 content remains to be determined, for the audience, using Web 2.0, generates content, and no one can decide in advance what that audience will come up with. Neither surfing nor browsing, the user of Web 2.0 contributes (or withholds contribution by lurking), creating the very content she peruses.[2] Announced to the public in 2005 (see Markoff), in the hushed tones reserved for developments that we oldsters will never fully grasp, Web 2.0 defies existing categories, like medium, and is so forward-looking that it already risks being leapfrogged. Those who embrace the insistent futurity of the digital are keen to usher in the next increment even before we have realized the imminent upgrade. Web 3.0 hovers in the middle distance, obscured by the singularity that must precede it, in all of its eschatological finality.
The Web and Immediacy
Befitting its claim to mediacy, Web 1.0 eschews the headlong rush toward a transparent interface so characteristic of other digital phenomena; rather than attempting to eliminate the interface as in virtual reality, the Web embraces its interface, the browser. Browsers have changed quite a bit over time, to accommodate alterations in standards for content encoding on the Web, to allow greater customization of the appearance and experience of browsing, and to render more efficient the most common browsing maneuvers; but the basic interface of the browser, including especially the strictly windowed display of information, shows no sign of erosion. As analyzed above, the stable and familiar activity of browsing characterizes the Web as medium, so it is no surprise that the browser persists as its significant medial symptom.[3]
Despite the persistence of the browser, the Web—even 1.0—is not immune to the fantasy of immediacy that propels so much of digital culture and technology. Immediacy seizes the Web not via a frontal attack that would simply eliminate the interface (the browser), but via an end-run that recapitulates the interface in a diversity of devices and channels. The Web’s particular brand of immediacy is most manifest in the increasing availability of access. No longer restricted to the personal computer, the Web is now available on cell phones, PDAs, handheld gaming systems, electronic readers, and other “personal” devices. Further, even while the browser remains dominant, its functions have been cannibalized and regurgitated piecemeal, to render key information available more immediately and effortlessly: specialized software constantly displays certain time-sensitive data from the Web, such as stock quotes, weather reports, or sports scores, while other software selectively browses and packages specific material like film reviews and search results, or sends a text message when your plane is late. The dream underlying this diffusion of the Web is a dream of unmediated and fully intuitive access: anytime, anyplace one should be able to retrieve any information. In fact, ideally desire should be adequate to its own fulfillment, so that one should as soon have the information as seek it. (Vannevar Bush might have forecast even this development, though to his credit he restricted his description of an associative universal database of information, the Memex, to technologies that were practically realizable even in his pre-transistor era.) Wearable computing, universal wireless coverage, a world encrusted with ID chips, ubiquitous computing, all aim toward the goal of immersion not in a simulated sensorium (VR) but in a sea of data.
Thus at least one popular account of the topology of the World Wide Web must be amended. The Web has been heralded as a flat space; lacking a center, growing rhizomatically and without imposed architecture, the Web refuses hierarchy so that each site is equally central, each page can lay claim to a perspective as valid as any other. The Web has no starting point and no inherent order, though there are plenty of local organizations and ad hoc impositions of structure. The lack of hierarchy has, it is claimed, significant economic and political consequence. For example, the availability of news from the broadest variety of sources with no official sanction given to one or another source means that audiences can choose whom to trust and can consider perspectives that would never show up in mainstream media. Because the flat organization of the Web implies a sort of populist economics of information, commercial interests that lack the resources to compete against the largest corporations can still find a healthy number of buyers. Moreover, the viral model of marketing, bolstered by the anti-hierarchical topology of the internet, means that the little guy might even come out on top: may the best product win! The flat Web, it is said, implies a level playing field.
Though these claims about the social and political implications of Web topology may have real merit, it is mistaken to describe the organization of the Web as an ‘infinite flat surface’ (Manovich, 2001: 76).[4] Lacking an architectonic structure, the Web’s particular fantasy of immediacy implies a topology best described not as flat but as a black hole: each point next to each other point, any page on the Web but a single click away, the maximum density of information. The image of a flat surface encourages the imposition of a metric on the space, a measure of distance that describes some points as closer and others as farther away. By contrast, the black hole distorts space and defies an overarching metric, allowing local and provisional measures of distance but ultimately insisting that any point might be proximal to any other.
With most points equally distant, the problem is not how to get to a given page but how to find the right ones. At its extreme, the dream of the intuitive interface implies an immediate access to information, such that the desire to know something would without effort call to mind its desideratum. Though this dream is driven by immediacy, it is not the typical fantasy of the virtual, for the black hole collapses the simulated space of VR. Rather, on the Web we dream of incorporating the machine into ourselves, making it part of consciousness. To swallow the machine not only keeps it always close but also eliminates the interface. (As we will see below, the dream of integrating the machine such that it becomes equal to one’s will is the driving force behind Web 2.0.) The transition from the fantasy of the virtual to that of intuition alters the corollary threat: taken to its fantastic limit in fiction, VR usually threatens to overwhelm its users. By contrast, at the limit of the fantasy of intuition on the Web, desire equals its fulfillment, so that the user herself becomes part of the flow of information: information as knowledge, information as memory. The black hole crushes into indistinction anyone caught in its orbit, flattening the division between user and data. The interface is thus replaced by intellection, at least according to this dream of internet immediacy. (Marie-Laure Ryan [1994] too calls forth this equivocation of desire and its fulfillment, describing the future [absence of] interface as a ‘language of the angels’, in which it is enough to think something for it to become real.) In this fantasy we see the result of the collapse of the medial chain, where a total interactivity not only dissolves the distinction between author and audience but even between audience and content. Floating in a sea of data, the user has simply become those data, their availability part of her consciousness or memory.[5]
This endpoint should seem farfetched, even improbable, but the Web beckons to this fantasy of intuition.[6] The unification of all information, the elimination of division among dataspaces, the database of databases, the desire to index everything, the increasing inclusion and proposed standardization of metadata, the heads-up display of contextually appropriate information, even the banality of the push-delivery of sports scores through cell phones, these are the policies and products of the age of universal availability. ‘The dream is an old one: to have in one place all knowledge, past and present. All books, all documents, all conceptual works, in all languages’ (Kelly, 2006). If the dream of the digital is to eliminate the interface, to make the user’s desire equivalent to its realization, then on the Web this dream amounts to the elimination of the subject, who simply has the data, without having to wrestle an interface in order to get it. One must sacrifice only one’s self to become all-seeing and all-knowing.
The diminutive single click turns out thus to be crucial, for it sustains the user as subject and so also the Web as medium. Choosing not to click, the subject staves off the collapse of the Web into itself and the collapse of the subject into the Web. A minimal resistance, the quantum unit (one bit) of information, the single click (or rather, the option to withhold that click) holds at bay the entirety of the World Wide Web, placing just this page before the user and not the others. Absent the click, the Web would effectively collapse into itself, the user would no longer be separate from the data and would drown in those data, user and data dissolving into the “formless” digital that makes no distinction without a subject.[7] Choosing not to click, or choosing to click here and not here, the user defines her unique perspective on the World Wide Web, establishes her own experience as limited and tied to a place and time.
The single click therefore defines the subject as a subject; it represents her sole power not only to choose which page appears before her, but to separate herself from the mass of data that is the black hole of the World Wide Web. Maintaining her subjectivity—as the one who stands above and apart from the Web and also as the one who chooses which particular page will be viewed—the single click renders the Web a medium. For only as what stands between the subject and the data is the Web medial. The browser, even as it breaks apart and reconstitutes itself within Web 2.0, remains the symptom of the Web’s mediacy, the software that accepts and directs the single click. (While push delivery of information to computers and personal devices circumvents the single click, users must be very choosy about which feeds to display. Push delivery only works for a few choice bits of information before it becomes overwhelming; relinquishing the power of the click, we teeter on the brink of the black hole.)
The space of information has previously been compared to a black hole, though with a very different resonance. Martin Rosenberg (1994) describes hyperlinks as black holes, passages from one page to another. Even while employing the figure of the black hole, Rosenberg cautions against the uncritical appropriation of new physics as a revolutionary discourse, noting that such borrowings from science fail to recognize the ways in which science, even when tied to non-linear or discontinuous functions, still imposes a stable geometry on its objects of study, denying them the truly dynamic character that a revolutionary theory would seem to require. In this regard, he privileges the moment of passage through the hyperlink (what I have called the single click) as the sole possibility of a disorientation severe enough to count as a revolutionary destabilization, but he warns that this moment is too fleeting, and that the reader soon reorients herself in the new space.[8] Readers are momentarily uprooted from normal consciousness when they click on a link, but soon reestablish a stable perspective in relation to the new page that comes up. Though provocative, this view of the Web is dated, reflecting the excited sense of potential that surrounded the notion of hyperlinks at the end of the last century. Linking, which was only a marginally revolutionary possibility even in Rosenberg’s account, has now become simply a normal experience and no longer destabilizes the reader. The black hole does not represent the possibility of escape from the banality of linear text but only the promise of a total availability of information which is also the threat of a total absorption into the machine.
The Web as Metamedium
Though the hyperlink may no longer destabilize the reading process on the World Wide Web, it does both promote and defer the fantasy of the Web as black hole. As the clickable passage from page to page, the hyperlink represents the immediate proximity of every linked page, the tie that binds. In this spirit, Manovich (2001: 172) cites Paul Virilio: ‘at least in principle, every point on earth is now instantly accessible from any other point on earth’. But inasmuch as each page contains only certain links and not others, the reality of the hyperlink (as opposed to its fantasy) is to render only a small subset of the Web immediately available. The logical space of the World Wide Web consists of numerous organizations or perspectives, wherein each page brings near some part of the Web and places more distant the rest.[9] Just as the resistance of the single click makes the user a subject and the Web a medium, so each page constitutes a unique perspective on the Web, an organizational overview of the information space, (literally) underlining some pages and leaving most of the space of the Web to hover implicitly in the background.
This role of the hyperlink as structuring element gives new meaning to the claim that the Web is a metamedium. Certainly the Web mediates other media, and certainly the Web has brought about the creation of new genres and even new media. But its most essential function, its most consistent purpose across its many genres is to organize information, to present other media and other Web pages as more or less distant from the subject. To construct a page on the Web is to create a perspective, to privilege selected content and encourage ready access to particular information, to thrust forth and even forge certain relationships. This suggests a new reason for applying Kay’s neologism metamedium to the World Wide Web: the Web is a metamedium in the sense that it is a medium for the organization or structuring of information, a medium for providing diverse perspectives on a vast and unwieldy mass of information. ‘This may imply that new digital rhetoric may have less to do with arranging information in a particular order and more to do simply with selecting what is included and what is not included in the total corpus presented’ (Manovich, 2001: 78, fn13). Meta- here indicates the shift from an emphasis on content proper to an emphasis on its structure or organization. Inasmuch as each page offers a perspective on the entire Web, the Web pushes content into the background, privileging instead the presentation of structure or form. A Web page organizes information into a hierarchy of near and distant, and its metamedial function is precisely the presentation of this hierarchy. Each page is effectively an index to the entirety of the Web, but it is a critical index whose meaning derives from the choices of what to frontload and what to leave in the background.
This metamedial charge of the World Wide Web prompts its past and current development. To promote the role of the Web as organizer of information, technical standards increasingly mandate the separation of form from content, effectively allowing the independent analysis and manipulation of the perspective itself, regardless of what content it provides a perspective on. Cascading style sheets (CSS) explicitly divorce the appearance of information from its content, and more fundamentally treat appearance in terms of structural characteristics. That is, a given object on a Web page (paragraph, image, sound, etc.) is labeled in terms of its structural properties, placed within a hierarchy that records its functional relationships independently of its actual content. By analyzing CSS, one can read something about the structure of a Web page without attending to the specific content of the text or other elements on the page.
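A minimal, invented fragment can make this separation concrete: the markup below names only structural roles, while every fact about appearance lives apart in the style sheet. Nothing here is drawn from any particular site; the class names are illustrative assumptions.

```
<!-- Invented example: markup records structure; CSS records appearance. -->
<style>
  .headline  { font-size: 2em; font-weight: bold; }
  .pullquote { font-style: italic; margin-left: 2em; }
</style>

<p class="headline">Any text at all</p>
<p class="pullquote">The rules above apply whatever the words happen to say.</p>
```

Swapping in a different style sheet restyles the whole page without touching a word of its content, which is precisely the independence of perspective from content described above.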
CSS, which has been widely adopted across the Web and incorporated into all recent Web design software, is only one prominent example of the push toward foregrounding structure as the Web’s modus operandi. More broadly, HTML is slowly being superseded by XHTML, which latter standard is based on the greater flexibility and syntactical rigidity of XML, and this too means increasing facility in treating structure without regard to content. (One could gloss XML as a system for formalizing content so that it becomes a feature of structure.) The incorporation of metadata into the underlying code for Web pages similarly provides information that describes the structural relations of that page to its outside. Metadata are keywords and other information that do not appear when a page is viewed normally, but that are incorporated into the code for the page to help identify the nature of its content. The Semantic Web project is one extreme example of this practice, where metadata provide information, for example, about the relationships among terms on the page and not just the appearance of those terms. Even the Document Object Model instituted early in the life of HTML furthers this goal, encapsulating each element on a Web page (as well as the page itself) under a single name, so that its relationships to other elements come into sharper focus.
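The treatment of structure without regard to content can be sketched in a few lines: parsing an XHTML-like fragment and recovering its tag hierarchy while discarding every word it contains. The markup and the function are invented for illustration, not part of any standard.

```python
# Minimal sketch: read a page's structural hierarchy while ignoring
# its content entirely. The markup below is an invented example.
import xml.etree.ElementTree as ET

page = """<html>
  <body>
    <div class="post">
      <h1>Any title at all</h1>
      <p>The text here is irrelevant to the analysis.</p>
    </div>
  </body>
</html>"""

def outline(element, depth=0):
    """Return the tag hierarchy as indented lines, discarding all text."""
    lines = ["  " * depth + element.tag]
    for child in element:
        lines.extend(outline(child, depth + 1))
    return lines

structure = outline(ET.fromstring(page))
print("\n".join(structure))
```

The resulting outline names only elements and their nesting; a machine (or a reader) learns how the page is organized without learning what it says.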
Alan Liu (2004) identifies this push to separate form from content as the driving force behind the progress of networking technologies. Though he acknowledges the extraordinary power of this approach–which allows a reader (or, as Liu underlines, a machine) to gain a perspective on information independently of the information itself–he also notes a sinister dimension to this trend. Given the Web’s power to manipulate perspective, content is increasingly deemphasized to the point ‘where an author in effect surrenders the act of writing to that of parameterization’ (59, my emphasis); content comes to fall ‘outside the normal play or (as we now say) networking of discourse’ (60). The desire to rein in an unruly overload of information (the World Wide Web) through technical measures that organize that information according to structural categories leads to the abdication of authorship. The Web as metamedium is a tool for creating perspectives on information, but this tends to hypostatize the very idea of a perspective to the detriment of meaningful information itself. The user, focused on establishing perspectives, no longer engages critically with content, entrusting the generation of content to the vastness of hyperspace. Manovich (2001: 127): ‘The World Wide Web […] encourages the creation of texts that consist entirely of pointers to other texts that are already on the Web. One does not have to add any original writing; it is enough to select from what already exists’. Once again, the fantasy of intuition–already intruding upon reality in this case–cuts both ways, providing a more efficient access to just those data the user desires while in the same stroke eliminating the singularity of authorship by yielding authorial responsibility to the network. To cast the Web as a metamedium is to void the medium in favor of the meta-; for all its massive gravity, the black hole, at its limit, turns out to contain nothing.
Two brief examples: search spaces and blogs. Search engines use algorithms to filter information. While some of this filtering is based on content, what usually distinguishes one engine from another is how it evaluates structure. That is, a search engine chooses to filter and order its results largely on the basis of non-contentual aspects of the pages being searched, such as how many other pages link to a given page or how frequently it has been accessed. Search engines thus provide perspectives on information by analyzing perspectives on information, doubly earning the prefix meta-. Blogs are as often as not about the act of blogging, a reflexivity that demonstrates explicitly the way in which content disappears in favor of the analysis of form. Moreover, the genre of the blog nearly mandates that each blog refer to other blogs, so that the dynamics of the blogosphere are based largely on the interlinking or cross-referencing of various blogs. The skeletal schema of a blog entry is just a link to another blog entry.
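The structural filtering that distinguishes search engines can be caricatured in a few lines: rank pages not by what they say but by how many other pages point at them. A toy sketch in Python (the link graph and page names are invented):

```python
# A toy link graph: each page maps to the pages it links to.
# The ranking consults only this structure, never the pages' content.
graph = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
    "d.html": ["c.html"],
}

# Count inbound links for every page in the graph.
inbound = {page: 0 for page in graph}
for targets in graph.values():
    for target in targets:
        inbound[target] = inbound.get(target, 0) + 1

# Order "search results" by structural popularity alone.
ranked = sorted(inbound, key=inbound.get, reverse=True)
print(ranked)  # c.html ranks first: three other pages point at it
```

Production ranking algorithms are of course far more elaborate, but the principle is the same: the result order is a perspective computed from other perspectives.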
Immediacy 2.0
Web 2.0 coincides authors and readers, producers and users. The collapse of the medial schema makes for a particularly strange form of mediacy, as the equation of author and audience generates a kind of immediacy by default. Self-expression and thus also self-recognition become the defining experiences of Web 2.0, such that the Web is both mediate and immediate at the same time. How far does this paradox extend?
Alex Galloway might credit standards, or protocol in his terms, for deemphasizing content and promoting instead the Web page as placeholder, a canvas on which the user can paint what she will. Standards like TCP, XML, and CSS are neutral with respect to content. Because these standards formally separate content from the package that surrounds it, our computers can work with those packages regardless of their content: a protocological agnosticism. Since protocols remain ‘indifferent to the content of information contained within’ them (2004: 52), one can “author” a Web page without actually creating any content, just by leveraging available standards. Web 2.0 then appears to be a natural consequence of the ubiquity of standards; mature standards help to vacate content on the Web, so that Web pages start blank and their contents appear eventually to spring up as if from culture at large.
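Protocological agnosticism can be mimicked in code: a delivery routine that reads only the standardized envelope and never the payload. A minimal sketch (the packet format and routing table are invented for illustration, not drawn from any actual protocol):

```python
# A toy "packet": a header that follows the standard, and an opaque payload.
# The router inspects only the header; the payload could be anything at all.
def route(packet, tables):
    """Return the next hop for a packet, consulting only its header."""
    destination = packet["header"]["to"]
    return tables.get(destination, "drop")

tables = {"dartmouth.edu": "gateway-1", "example.org": "gateway-2"}

packet = {
    "header": {"to": "example.org", "from": "dartmouth.edu"},
    "payload": b"any bytes at all: text, video, noise",
}

print(route(packet, tables))  # decided without reading the payload
```

The payload never figures in the decision; swap in poetry or static and the routing is identical, which is exactly the indifference Galloway describes.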
The mathematical theory of communication supports this development. The more universal and specific are the standards that regulate the passage of information, the less information any given message contains. Mathematically, information is a measure of how surprising or unanticipated are the data that arrive, while a standard makes the possible data more anticipated. A rigorous standard allows the receiver to predict much about the message to be received. It is as though more and more of the information is itself built into the infrastructure of computing, so that we no longer pass information but just conduct ourselves according to the standard. Inasmuch as Web 2.0 establishes standards on top of standards, templates awaiting completion by users who fill in the blanks, the range of information narrows. To contribute to Web 2.0 is not so much to originate an idea as to adopt an available position.
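The mathematical point can be made numerically: Shannon's measure assigns fewer bits, on average, to messages the more predictable a standard makes them. A small sketch in Python, with two invented message distributions standing in for a loose and a rigorous standard:

```python
import math

def entropy(probs):
    """Average information (bits per message) of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Loose standard: eight equally likely messages, maximal surprise.
loose = [1 / 8] * 8

# Rigorous standard: one template dominates, the rest are rare.
rigorous = [0.93] + [0.01] * 7

print(entropy(loose))     # 3.0 bits per message
print(entropy(rigorous))  # far less: the standard has absorbed the information
```

Under the rigorous "standard" the receiver can all but predict the message in advance, so each message carries less information: the narrowing of the range of contribution, expressed as arithmetic.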
No wonder that the fantasy of Web 2.0 recasts the nature of authorship. First, the mantle of authorship moves from authorized content-creators to users. Those who formerly provided content now generate forms awaiting contributions, portals that fill with data from other sites, blank slates on which users may inscribe their desires. But even those users mostly defer full-fledged authority, choosing instead to circulate or affirm existing information. Bloggers link compulsively to other blogs, reproducing content or just redirecting readers with any number of gestures that boil down to ‘Check this out!’. YouTube videos bury the author and assert his insignificance as an individual; his role in creating the video is relatively minor compared to the many who comment on it (‘Sick!’, or ‘Adorable!’, or of course ‘LOL’), and who, simply by viewing, promote the video to the point of broad recognition, voting with their clicks. These masses supply Web 2.0 with its content, not only by uploading but by transmitting, by referring, by linking, and by starring in all their banality as the subjects of the videos. The content of the video only confirms its collective authorship, as most YouTube clips are remarkable for their mundanity. YouTube documents ordinary lives, the silly things that anyone might do, the footage that could have been shot in your backyard. A significant portion of the meaning of a YouTube video derives from its claim to everydayness, its status as generic cultural expression. The subjects of these videos are us, just as the directors and the viewers are also us. Collectively we must be the authors of these videos, for they claim to represent and define us.
YouTube videos seem to become popular by already being popular, an information-age version of the adage about the rich getting richer. There is a strange kind of circularity or bootstrapping underlying Web 2.0, visible in countless examples but best seen in that hallmark of Web culture, the meme. (YouTube is home to many memes.) Memes come from nowhere in particular. They succeed not because they are exceptional but because they are everyday. We view and enjoy them with a wink, marveling as much at their clumsiness or amateurishness as at their humor or pathos. What we most enjoy in a meme is the bizarre realization that ‘this is culture’, that the meme represents ourselves. Their provenance rarely matters, for they thrive, regardless of where they come from, by virtue of being passed around or referenced. Transmission eclipses content in a meme; a meme is what it is because it gets passed around, affirmed by a collective viewership. Neither the content nor the medium but participation itself is the message. Deprecating its banal content, a meme foregrounds its status as meme, inviting viewers to click as the formal acknowledgement of participation. To view a meme is not to appraise its content but to assert its status as representative of culture and to assert one’s own status as purveyor of that culture. We pass along a meme because that is what it means to belong to this new Web culture: Web 2.0 is undoubtedly a fetish.
The fantasy is that, with enough participation, content arises from out of culture at large, reflecting our collective beliefs, opinions, and ideas. Each user gets to assent to those expressions that suit her, authorship having disappeared in favor of selection, a menu-driven collective creativity. Individuals are represented just to the extent that they subjugate their individuality to the zeitgeist, proclaiming in many voices nothing more than their willing complicity, a common message whose content is its commonality. (‘I love Big Brother‘?) Each contributes to Web 2.0 but only as a generic representative, not an individual but a subject position that rightly claims authorship of the collective expression that is Web 2.0.
From online shoe stores and book stores to recipe sites and film review databases, users chime in with perspectives that are their own but also enunciations of culture. That is, Web users feel obliged to speak for themselves precisely because their ideas and opinions are representative, how others will also likely feel. (This is partly why so much discourse on Web 2.0 is me-too-ism. One writes not in order to say something but in order to have one’s say.) Writing under generic pseudonyms that express roles more than individuals (‘frequentbookbuyer1001’ or ‘iluvvideocameras’), users shoulder the burden they have welcomed, to generate the future by weaving the Web. Each capsule review, each editorial response, each partisan contribution is an assertion of belonging to a collective that owns the products of its self-fulfilling enterprise. The opinion aggregator, from tag clouds to last.fm to Netflix to StumbleUpon, captures the essence of Web 2.0, masking its users’ identities behind the collective weight of their approval. One freely offers one’s valuable opinions, knowing that they inevitably become the views of the collective. This is not altruistic as much as it is a means of constructing culture and simultaneously constructing oneself as its representative. The Digg is the minimal representative gesture of Web 2.0, the simple thumbs-up that, combined with everyone else’s thumbs, constitutes content. If the single click as described above maintains the Web as a medium, the Digg, expressed in a single click, dissolves the individual into the participating collective, the owners of culture. ‘You won’t find editors at Digg—we’re here to provide a place where people can collectively determine the value of content and we’re changing the way people consume information online’ (Digg.com, 2008).
The pretzel logic of Web 2.0 finally realizes the immediacy that has been the dream of the digital since its inception, driving its relentless progress. Abdicating content and recasting authorship, Web 2.0 presents itself as a blank page on which each user selects what to foreground, representing her individuality as a matter not of creativity or desire but of perspective. The Web becomes a reflexive nest of perspectives on itself, a simulacrum that severs its ties to the ground of authorship and declares itself equal to culture, from which it arises autochthonously. No gap exists between culture and its expression; no individual can fail to recognize herself in this new Web of perspective since she is, after all, invited to generate there her own perspective, representing her “unique” relationship of collective belonging. Alienation on Web 2.0 is impossible, for there is no individual who could be alienated. The fantasy of intuition returns full force: of course Web 2.0 knows what you want, for Web 2.0 is you. Emptied of content, Web 2.0 expresses only the fact of belonging to culture, the formal declaration of participation that is the collective auto-constitution of culture. When users are producers, when the audience becomes the actors, all the world’s a stage and performance equals reality. Web 2.0 achieves immediacy by default, not a reflection of culture but the total of culture, the zeitgeist materialized. A strange paradox then that this immediacy of Web 2.0 should be so highly mediated. The new Web expresses you just to the extent that you affirm your belonging by embracing a mediated existence. The immediacy of Web 2.0, its claim to be culture, operates only as long as culture itself contains the structure of technological mediation. You are not a member of our culture unless you express yourself and discover yourself on the Web, which is to say, via the computer.
The persistence of the browser evinces a residual mediacy, but this mediacy is now a mere formality, an element subsumed under the immediacy of Web 2.0. Finally the computer knows what you want.
Post Script
Wozniak vindicated? ‘Early on with the first Apples, we had these dreams that the computer would let you know what you wanted to do’ (Stern, 2007).
When Eve eats the first apple, man discovers his humanity: alienation from nature and from God is the human condition, but the Fall is also the condition of freedom and responsibility. Wozniak dreams the undoing of the Fall; this time the first Apples, rather than leaving you to your own devices, let you know what you want, resubmit humans to a nature and thus to a (digital) God who rules that nature. After all, the computer that tells you what you want provides not only your desire but also its satisfaction: the black whole.
Author’s Biography
Aden Evens is Assistant Professor of English at Dartmouth College. Having misspent his youth as a computer programmer, Aden completed a doctorate on the ethics of Gilles Deleuze, published a book, Sound Ideas, on the phenomenology of music technology, and is currently researching the creative possibilities and limitations of the digital.
Notes
[1] Marie-Laure Ryan: ‘VR is not so much a medium in itself, as a technology for the synthesis of all media’ (1994: §6).
[back]
[2] Even lurking is a form of contribution, as one’s patterns of browsing generate value to be mined.
[back]
[3] MIT Media Lab scientist Michael Bove (2000) conducts research aimed at eliminating the browser by extracting it from the personal computer and incorporating its functions into television and other household appliances. Notably, he describes this process as one in which the Web becomes ‘just a medium’, presumably because abandoning the computing interface allays the threats to mediacy characteristic of the computer: ‘What has to happen before the Internet becomes just a medium is the disassociation of the services from the Web browser’. Though Bove claims that eliminating the browser is the key to establishing the Web as a medium, his motive is the normalization and stabilization of the Web’s function for information retrieval, which he believes is threatened by the cumbersome interface of the personal computer.
[back]
[4] Though the image of the Web as flat is widely repeated, it is not commonly held among those who actually study the Web’s topology. Foremost among such theorists is Alex Galloway, whose principal argument in Protocol is that the lack of hierarchy is a myth and a dangerous one at that. While some of the standards that allow communication over the Web are egalitarian and decentralized, others are utterly hierarchical. This is an important and perceptive correction to the popular image of a flat Web topology, but it applies to the structure of power on the Internet and not primarily to the structure of information. Power exerts a real influence over the organization of information, so that, for example, the Web has a disproportionate number of pages written in English, but there is still no center, no index, no hierarchy of information, nor anything even approaching one.
[back]
[5] The fantasy of intuition is not the only force driving digital development but it remains most compelling. Witness the film Æon Flux, where the interface is inside the subject: computer graphics show an animated view of the character Æon’s brain chemistry sparking into action, while she abandons her external milieu and finds herself, phenomenally at least, in a space of immediate communication and information. In this space, her interactions (conversations, queries) are only a matter of willing, the interface is a pure cognition.
[back]
[6] The current fervor regarding search space on the Web (Google, Yahoo, etc.) attests to the dire need for a perspective on a set of data that is otherwise simply overwhelming. Since the beginnings of the Web’s popularity, it has been said that the killer app is the algorithm that will find just those data one was searching for. This evinces again the ambiguity between the computer as tool and the computer as intuition: the correct algorithm will produce not only the page for which the user was searching but will also generate results that the user did not know to look for but will find eminently helpful.
[back]
[7] Compare Mark Hansen (2004: 11), who describes the digital as without its own form, waiting for the human perceptive body to ‘enframe’ it and give it a form. Or Alex Galloway (2004: 52): ‘digital information is nothing but an undifferentiated soup of ones and zeros’.
[back]
[8] Rosenberg (1994: 289-290): ‘Yet, except for the dislocations made possible by the leaps through the nodes of a hypertext, what really occurs is that the attention of an observer simply becomes shifted from one geometry to another. Differences may exist between one geometry and another, but from the perspective of avant-garde polemic, no fundamental change can occur in the framing of human awareness by regulated space and time, as exemplified by the reading process, except in that brief moment of nodal dislocation’. Rosenberg’s article focuses on debunking rhetoric that draws revolutionary conclusions in the humanities from non-linear physics, and he spends too little time (at least given my interests) examining the exceptional situation of the hyperlink.
[back]
[9] A game demonstrating the principle of distance and proximity on the Web was discussed in various forums around the turn of the century: ‘Web that Smut’ (also called ‘Six Degrees of Pornography’) placed each player in front of his computer, with his browser pointed at an agreed-upon ‘innocent’ starting page on the Web. The aim was to travel to a page of pornography in the fewest clicks. The claim of this game is not only that the Web approaches an ideal of total proximity, but that this same ideal includes carnal desire, the crush of Web pages incorporating into its midst a mass of erogenous flesh.
[back]
References
Æon Flux. Dir. Karyn Kusama, perf. Charlize Theron, screenplay by Matt Manfredi and Phil Hay, characters created by Peter Chung, MTV (2005).
Baldwin, Sandy. ‘Purple Dotted Underlines: Microsoft Word and the End of Writing’, AfterImage 30:1 (July/Aug. 2002): 6-7.
Bolter, Jay David and Richard Grusin. Remediation. (Cambridge, Mass.: MIT Press, 2000).
Bove, V. Michael, Jr. ‘Will Anyone Really Need a Web Browser in Five Years’, unpublished research statement available at https://web.media.mit.edu/~vmb/papers/Bove-Montreux2000.pdf (2000).
Bruns, Axel. ‘The Future Is User-Led: The Path towards Widespread Produsage’, fibreculture 11 (2008), https://journal.fibreculture.org/issue11/issue11_bruns.html.
Bush, Vannevar. ‘As We May Think’, originally published in The Atlantic Monthly (July 1945), HTML version by Denys Duchier, University of Ottawa (April 1994).
Digg.com. ‘Digg-Overview’, https://digg.com/about/ (2008).
Galloway, Alex. Protocol: How Control Exists After Decentralization, (Cambridge, Mass.: MIT Press, 2004).
Hansen, Mark. New Philosophy for New Media, (Cambridge, Mass.: MIT Press, 2004).
Kay, Alan. ‘Computer Software’, in Paul Mayer (ed.) Computer Media and Communication: A Reader, (Oxford: Oxford University Press, 1999), 129-137. Originally from Scientific American (September 1984).
Kelly, Kevin. ‘Scan This Book!’, The New York Times (13 May 2006).
Liu, Alan. ‘Transcendental Data: Toward a Cultural History of Aesthetics of the New Encoded Discourse’, Critical Inquiry 31:1 (Autumn 2004): 49-84.
Manovich, Lev. The Language of New Media (Cambridge, Mass.: MIT Press, 2001).
Markoff, John. ‘Web Content by and for the Masses’, The New York Times (29 June 2005).
McGann, Jerome. Radiant Textuality: Literature After the World Wide Web, (New York: Palgrave, 2001).
Rosenberg, Martin. ‘Physics and Hypertext: Liberation and Complicity in Art and Pedagogy’, in George Landow (ed.) Hyper/Text/Theory (Baltimore: Johns Hopkins U. Press, 1994), 268-298.
Ryan, Marie-Laure. ‘Immersion vs. Interactivity: Virtual Reality and Literary Theory’, Post-Modern Culture 5:1 (1994), https://muse.jhu.edu/journals/postmodern_culture/v005/5.1ryan.html.
Stern, Joanna. ‘The Way It Woz: Steve Wozniak on All Things Apple’ (interview), Laptop: Mobile Solutions for Business & Life (26 October, 2007), https://www.laptopmag.com/news/interviews/the-way-it-woz.aspx.