
FCJ-095 Mapping Commercial Web 2.0 Worlds: Towards a New Critical Ontogenesis

Ganaele Langlois, Fenwick McKelvey, Greg Elmer, and Kenneth Werbin
Infoscape Research Lab, Ryerson University

1. Web 2.0 and its Critical Contradictions

At the 2007 International Communication Association Conference, Web 2.0 was highlighted as an emergent topic of research with a keynote panel entitled ‘What’s so Significant about Social Networking? Web 2.0 and its Critical Potentials’. One of the thought-provoking moments during the panel was the juxtaposition of two very different and, at first, contradictory theoretical approaches to the relationships between Web 2.0 and user-generated content. While Henry Jenkins focused on the democratic potential of online participatory culture as enabling new modes of knowledge production, Tiziana Terranova argued for a post-Marxist perspective on Web 2.0 as a site of cultural colonization and expansion of new forms of capitalization on culture, affect and knowledge. The juxtaposition of these two very different critical approaches did not simply rehash the old divide between cultural theory, particularly active audience theory, and post-Marxist critical theory; rather, this debate over Web 2.0 suggested new possibilities for the synthesis and continued development of both sets of critiques. In other words, the event reinforced our belief that corporate colonization arguments do not provide an entirely adequate model for understanding Web 2.0. After all, commercial Web 2.0 spaces such as Facebook, YouTube and MySpace are important sites of cultural exchange and political discussion, in part because they almost entirely rely on user-generated content to exist.

Internet users are producing videos, blogging, organizing themselves, and making their voices heard across the globe. From the global mobilization of a network of support for monks in Burma to videos scrutinizing the actions of public figures, the political and cultural importance of Web 2.0 has radically changed the ways in which culture is produced and lived. At the same time, the surveillance and commercialisation of information produced by users cannot be ignored. Indeed, apart from the notable exception of Wikipedia, the Web 2.0 sphere is dominated by aggressive entrepreneurial models and goals: YouTube for example was bought by Google for $1.65 billion in 2006, while Facebook rejected a $1.62 billion offer from Yahoo! in the same year. The for-profit imperative of Web 2.0, though, has not gone unchallenged. Facebook users have been extremely vocal over the years about privacy settings and policies (Boyd, 2008), and Facebook has made concessions and changed some of its features, managing to maintain a dataveillant business model while quelling public outrage.[1] The process of negotiating privacy within a space of personal publicity thus raises interesting questions about creativity and empowerment within an emerging cyber-capitalist infrastructure.

Major commercial Web 2.0 sites thus present us with a paradox that unfortunately neither position can fully resolve. On the one hand, such popular platforms allow users to express themselves to new audiences in ways that were not possible before. On the other hand, even though they are freely accessible and have come to act as seemingly quasi-public spaces, such platforms are designed to produce profits, mostly through the tracking of user behaviors, interests, and patterns of use to create new forms of customized advertising. Zimmer (2008) notes the Faustian trade-off that takes place with user-generated sites, and social network sites in particular: the ease of communication, connection and exploration of one’s interests can only take place through agreeing to terms of service and terms of use that allow for dataveillance and the commercialisation of user-generated content through advertising.

The popularity and rich cultural experiences witnessed on these spaces cannot be simply dismissed as yet another form of corporate control over culture, or as an Orwellian dataveillant machine. It would thus appear that current analytical frameworks and tools have failed to fully comprehend the ontology of commercial Web 2.0. If we are to identify critical alternatives to commercial Web 2.0 and, more generally, if we are to intervene in the ontogenesis of Web 2.0, we need to reconstruct a critical approach that deals with these contradictions. The analysis of the ontogenesis[2] of Web 2.0 – of its constant production and reproduction through the deployment of material, technical, semiotic, commercial, political and cultural flows – makes it possible to identify critical points of intervention. In so doing, we can understand the productive power of Web 2.0 through an analysis of the technocultural underpinnings of its structure, functions, and dysfunctions. This paper argues for the following change of perspective: commercial Web 2.0 platforms are not simply about facilitating user-produced content and carrying content across networks to large audiences or ‘end-users’; rather, they are primarily concerned with establishing the technocultural conditions within which users can produce content and within which content and users can be re-channelled through techno-commercial networks and channels. Consequently, new areas of concern with user-generated spaces include not only questions of censorship, but increasingly issues related to the regimes through which information circulates online.

Such a shift in perspective has epistemological consequences in terms of developing a critical understanding of Web 2.0 and furthering our understanding of the political economy of commercial Web 2.0 spaces and spheres. Focusing on the conditions within which users and their content can exist on Web 2.0 spaces invites us to see the ontogenesis of commercial Web 2.0 in terms of active production and constant stabilization of disparate elements: systems, software, human users, commercial forces, political actors, etc. Consequently, the challenge lies in understanding and identifying these processes of articulation in a grounded manner, that is, in a manner that identifies the unfolding of commercial Web 2.0 ontogenesis and that does not simply impose a set framework on a shifting object of study. How can we understand, map and otherwise critique emergent forms of connectivity and articulation among Web 2.0 sites, users and content, especially when the architecture and technical processes shaping communicational dynamics are black-boxed, opaque and secretive? This is not only a technological or methodological challenge, but also an invitation to reassess the shaping of power dynamics in online spaces.

2. From Protocols to Platforms

As a concept, Web 2.0 feels a bit like a black hole: everything gets trapped within its porous boundaries, from commercial and private social networks to open source initiatives such as Wikipedia, from the latest online craze such as Twitter to some of the first successful online business models, such as Amazon (O’Reilly, 2005). Mainstream discourse about Web 2.0 refers to a projected perception of the contemporary state of the World Wide Web as correcting the shortcomings of the previous Web 1.0 era and fostering the articulation of better democracy with new business opportunities. YouTube, Facebook, Wikipedia and Google have entered public consciousness to define Web 2.0 and exemplify the larger trends of social media, online video publishing and sharing, user-generated content, blogging, social networking and collaborative platforms like wikis. It is at the level of functionality that Web 2.0 becomes more definable. Two characteristics emerge in that respect: first, Web 2.0 relies on users to produce content. Social network sites such as Facebook rely almost entirely on users posting personal information that is then shared with a network of “friends” through newsfeed stories. The popularity of YouTube is not simply linked to its capacity to act as a repository of mainstream videos, but also to its capacity to offer a venue for people to express themselves through video-making.

The second characteristic logically follows from the first: user-friendly design through complex technical processes. Web 2.0 websites feature rich interactivity, dynamic content and complex interfaces (Vossen and Hagemann, 2007). That is, Web 2.0 makes the Web behave more like desktop software, albeit in a networked environment. The popularity of Web 2.0 spaces such as YouTube, Facebook, MySpace, Flickr or Twitter corresponds to the adoption of new techniques to facilitate content production and publishing for users. Tasks that used to require knowledge of HTML, CSS and other Web publishing languages can be done on Web 2.0 spaces through pushing a button or hitting a return key, just like in the user-friendly environment of the desktop interface. Following Florian Cramer and Matthew Fuller’s (2008: 149) typology of interfaces, with Web 2.0, the processes of content production and publication are simplified at the level of the user interface, which comprises the ‘symbolic handles’ such as buttons, text boxes, scrolling devices, etc. that are available to users in order to connect with software programs. The simplification of technical processes from a user point of view and the greater user-friendliness offered by these spaces is accompanied by comparatively more complex and invisible processes that take place via other types of interfaces that connect software to software, software to hardware, and hardware to hardware. Of particular interest in the Web 2.0 environment is the interface that Cramer and Fuller describe as ‘specifications and protocols that determine relations between software and software, that is, application programming interfaces (APIs)’ (Cramer and Fuller, 2008: 149), as they enable the circulation of user-generated content within and across Web 2.0 spaces.

Our approach focuses on understanding the code politics of user-generated content within commercial Web 2.0 spaces. We rely on software studies (Fuller, 2003, 2008) in order to understand the mutual shaping of software, hardware and cultural practices. Software studies invites us to focus on the cultural and communicational changes brought by software, and to pay attention to the ways in which online communication is not simply a human activity, but a set of practices negotiated through complex dynamics between software architectures and different categories of users (i.e. software engineers, citizens, activists, etc.). As a topic stemming from the broader field of software studies, code politics seeks to understand the conditions of code and software in relation to power, capital, and control. Studying code politics means studying how actors have ‘literally encoded the Web for their political purposes’ (Elmer et al., 2007). For example, Grusin (2000) explores the code politics of the computer when he likens an operating system’s desktop to physical real estate that corporations compete to control. He argues that ‘whenever you boot up your computer, you are engaging in a commercial transaction in a mediated public space which is being increasingly contested by Microsoft, the USA Government, and inevitably other governments and corporations as well’ (Grusin, 2000: 59). Even the matter of your default web browser has tremendous value for corporations. With user-generated content and commercial Web 2.0, a code politics approach requires moving beyond analyzing the content of the user interface to locating how that content articulates visible and hidden processes. A discussion group on Facebook, for instance, raises questions not only about the content of a discussion, but also about how information about content and users is captured and recirculated by software through data-mining and marketing, about how the presence of commercial forces is facilitated both at the software level and at the economic level of commercial partnerships, and about how users’ participation is rechanneled as marketable data. In short, a code politics approach to user-generated content requires paying attention to the articulations between the user, the software and the interface: to explore how the cultural experiences and practices available at the user-interface level (Cramer & Fuller, 2008; Fuller, 2003; Grusin, 2000; S. Johnson, 1997; Jørgensen & Udsen, 2005; Turkle, 1997) are articulated and coexist with the processes through which software encodes user input according to material (Hayles, 2004; Manovich, 2002), ideological (Chun, 2005), and legal (Grimmelmann, 2005; Lessig, 2006) logics. In so doing, a code politics approach seeks to understand the connections that enable and shape the traffic and trafficking of information, data, immaterial labour and subjectivities online.

A code politics approach is hardly new, and actually has been a concern not only in the field of software studies, but also in the development of Web methodologies writ large. Understanding the relationship between the circulation and production of content and the shaping of the communicative possibilities of the Web has been a constant in developing methodologies that take into account the specificities of the Web environment. The problem does not lie in justifying the need for such an approach, but in adapting it to the specific dynamics of commercial Web 2.0. This requires a shift away from the protocological approach that has dominated Web studies. Protocol refers to the ‘language that regulates flow, directs netspace, codes relationships, and connects life-forms’ on the Internet (Galloway, 2004: 74). Protocol points to the technical conventions that enable computers to communicate in a decentralized network such as the Internet. By extension, protocols are the sites through which possibilities for control and resistance on the Web and the Internet are defined. Studying the protocols that regulate the formation of computer networks thus offers a way to examine the power relations at stake on the Internet in general, and on the Web in particular. While Galloway (2004) focuses on TCP/IP as the protocols that enable the Internet, other Web studies approaches have focused on HTML/HTTP as the protocols that enable communication on the World Wide Web. While HTTP is about the transfer of information, the HTML language is a protocol that encodes content within a specific hyperlinked context. Web studies first started focusing on the hyperlink as a unit of analysis to understand, for instance, how the flow of information through hyperlinks can yield clues as to the communicational and social dynamics of the actors involved in a hyperlinked network (Garrido and Halavais, 2003). Other Web methodologies seek to examine how discussions on issues of common interest can be studied through looking at the evolution of hyperlink networks (Rogers and Marres, 2005). Still other approaches seek to understand the relationship between the protocological aspects of the Web and the circulation of content. Web sphere analysis (Schneider and Foot, 2005), for instance, examines not only hyperlink networks, but also textual content and website features (e.g. email, message posting) in the case of election campaigns. More recently, efforts have been made to analyze the shaping of informational dynamics through not only hyperlinks, but also other Web protocols such as metatags, cookies and robots.txt (Elmer, 2006). Common to all these approaches is a focus on the informational dynamics of single protocols.

The transition from Web 1.0 to Web 2.0, however, has changed the language and protocols of the Web. HTML is no longer the dominant language on the Web, but rather one among a multiplicity of co-existing and competing languages and protocols. The situation has changed from one dominant markup language on the Web to multiple code languages. As a consequence, hyperlinking is now but one feature of Web 2.0 spaces. Increasingly, buttons replace linking, and the logic of embedding Web objects on different Web 2.0 spaces is becoming more and more popular: if I like a video on YouTube and want to show it to my friends, I do not send a link via e-mail anymore, I push the Facebook share button on YouTube and the video is immediately embedded in my Facebook profile and visible to all my friends. Furthermore, whereas hyperlinks in the Web 1.0, HTML-dominated environment were created by human users, hyperlinks in Web 2.0 are increasingly produced by software as tailored recommendations for videos or items of interest, suggested friends, etc. The technocultural articulations that regulate the production and circulation of hyperlinks are thus different in the Web 2.0 environment from the Web 1.0 environment, particularly with regards to the re-articulation of hyperlink protocols within other software and protocological processes. The production of an HTML tag linking to a personalized recommendation is the result of the algorithmic processing of a user’s profile correlated with other profiles and potentially commercial interests. While these processes are invisible from a user perspective of instantaneous communication, they nevertheless represent a central site of analysis in order to understand the new communicational and cultural regimes of user-generated environments.
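
To make the logic of software-produced hyperlinks concrete, the following is a deliberately simplified sketch, not the actual recommendation algorithm of YouTube, Facebook or any other platform: all user names, video IDs and scores are hypothetical. It only illustrates the general principle described above, namely that an HTML tag offered to one user can result from correlating that user’s profile with other profiles.

```python
# Toy illustration (hypothetical data and names) of a software-generated hyperlink:
# item co-occurrence across viewing histories is counted, and the most strongly
# correlated unseen video is returned to the user as an <a> tag.

from collections import Counter

profiles = {
    "user_a": {"vid_01", "vid_02", "vid_03"},
    "user_b": {"vid_02", "vid_03", "vid_04"},
    "user_c": {"vid_03", "vid_04", "vid_05"},
}

def recommend_link(target_user: str) -> str:
    """Return an HTML link to the video most often co-viewed with the target user's videos."""
    seen = profiles[target_user]
    scores = Counter()
    for other, videos in profiles.items():
        if other == target_user:
            continue
        overlap = len(seen & videos)      # how similar the two profiles are
        for video in videos - seen:       # only recommend videos the user has not seen
            scores[video] += overlap
    video_id, _ = scores.most_common(1)[0]
    return f'<a href="https://www.example.com/watch?v={video_id}">Recommended for you</a>'

print(recommend_link("user_a"))  # e.g. a link to vid_04, produced by software rather than a human author
```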

The folding of the once dominant HTML language into a range of other protocols requires changing our assumptions about the Web as an object of study. The dramatic increase of code, operating systems, programs, languages and browsers in the Web 2.0 environment reduces the applicability of protocol to fully encapsulate the conditions and possibilities of Web 2.0. How then might one conceptualize the relationships between different protocols? The model put forward by Galloway starts with the assertion that ‘the content of every protocol is always another protocol’: HTML is encapsulated in HTTP, HTTP is dependent on TCP, TCP on IP, and all these protocols are encapsulated within ‘physical media’ (Galloway, 2004: 10-11). Galloway’s model is ambivalent in that the ‘encapsulating’ of protocols could be understood either as a reduction of all online activity to one protocol or as pointing out that different protocols co-exist, and that they have a hierarchical relationship based on the material requirements needed for them to exist. When the list of protocols was relatively simple – HTML, then HTTP, then TCP/IP – the articulation of protocols was quite straightforward and unproblematic. However, if, as Galloway suggests, protocol determines the limits of possibility, and the number of protocols increases dramatically, we need an answer as to how different protocols interact. One of the central technical characteristics of Web 2.0 is the reliance on APIs, on customized software programs that rearticulate protocols in different ways. Common examples involve Web services, such as Amazon Web Services, which allow third parties to access the Amazon catalogue, mashups of different Web 2.0 spaces (e.g. Google Maps and Flickr images) and internal applications, for instance Facebook applications. APIs process and represent data in different ways, yet they are based on a common set of protocols. Therefore, we need an approach that interrogates the Web as an assemblage of protocols rather than the nesting of one protocol into another. In response, we suggest that protocols act as modular elements, assembled as part of different Web 2.0 platforms.

The modularity of protocols, as elements that serve to create different assemblages, necessitates a platform-based perspective. While the previous protocological approach conceptualized the Web as a carrier of information, the concept of the platform as ‘hardware and/or software that serves as a foundation or base’[3] points out that the articulation of modular protocols enables Web 2.0 to run complex software similar to a desktop operating system. Web 2.0 actualizes the universal platform, a constructive space independent of hardware, imagined by the Java project (Mackenzie, 2006). Where Java failed to network enough of its actors to stabilize the platform, Web 2.0 has created a platform by drawing a variety of standards and actors into its network. Web 2.0 involves HTML, XML, JavaScript, AJAX, PHP, databases, browsers, developers, and users that together behave as a platform capable of being the grounds for a new class of websites (Vossen and Hagemann, 2007: 38-48) offering a multiplicity of search and personalization features. While Web 2.0 can be conceptualized as a meta-platform that enables communication across Web 2.0 spaces – e.g. embedding a blog post or a YouTube video in a Facebook page – Web 2.0 spaces should also be considered as platforms in their own right, because they articulate protocols in different ways to operationalize different logics, for instance, open-source or private. A Web 2.0 platform is a convergence of different systems, protocols, and networks that connect people in different and particular ways and thus offer specific conditions of possibility. It becomes central, in turn, to figure out not only the articulation of protocols with other protocols, but also the articulation of protocol with other technocultural dynamics. That is, the question is not simply one of understanding how the imbrication of protocols helps create the personalized and sometimes enclosed, portalized and black-boxed spaces that typify commercial Web 2.0 spaces, but rather how the stabilization of these Web 2.0 platforms is dependent on a host of other commercial, discursive, cultural and legal processes. In that sense, platforms as protocological assemblages have a complex status in that they are at the same time authorized by and enact the specific articulations of legal, economic, social and cultural processes.

3. From Platforms to Worlds

Conceptualizing Web 2.0 spaces as platforms helps highlight the need to examine the modularity of Web 2.0 spaces in order to see how they arise as sites of articulations between a diverse range of processes and actors. Furthermore, the concept of platform as the convergence of technocultural processes that connect people, protocols, and processes in different ways offers a radical break from previous conceptions of the Web as a carrier of content. That is, Web 2.0 spaces do not simply transmit content according to specific communicative formats, even though this is still one of their roles. Rather, Web 2.0 spaces serve to establish the conditions within which content can be produced and shared and where the sphere of agency of users can be defined. With regards to understanding commercial Web 2.0 platforms, this distinction parallels Maurizio Lazzarato’s argument that ‘in reversal of the Marxist definition, we could say that capitalism is not a world of production, but the production of worlds’ (2004: 96). Lazzarato’s exploration of contemporary forms of capitalism draws a distinction between the factory and the corporation. While the factory is about fabricating and producing objects, the corporation is about the creation of the world within which the process of manufacturing, distributing and consuming can take place (2004: 95). As Lazzarato further argues: ‘the corporation does not create objects (merchandises), but the world within which such objects can exist’. Furthermore, the corporation ‘does not create subjects (workers or consumers), but the world within which such subjects can exist’ (2004: 94). That is, the corporation creates flows of discourses, practices and materials that delineate a horizon of possibility and therefore work to create subjective norms that can be interiorized by individuals. The transition from Web 1.0 to Web 2.0 becomes more understandable if we follow the model provided by Lazzarato, especially when it comes to the commercialization of and capitalization on users and their content. While the types of commercializing processes at stake with Web 1.0 were primarily about transforming users and their content into commodities, Web 2.0 dynamics establish the conditions within which such processes of commercialization can occur through the promotion and harnessing of user-generated content.

User-generated content, which we define as digitized objects shared across the Web 2.0 network, is said to have produced a computerized ‘gift economy’ (Barbrook, 1998) through the maximization of ‘free labor’ (Terranova, 2000). The concept of gift economy refers to the open-source inspired models of collaboration that depart from proprietary models of software (e.g. Mozilla and its open source browser Firefox), or content (e.g. Wikipedia). While volunteer online collaboration has often been viewed as revolutionizing cultural and economic production (Jenkins, 2006; Benkler, 2006) and therefore as challenging a proprietary system of digital ownership and cyber-capital, Terranova’s critique of 2.0 labour also underlines the ways in which the open-source movement has been incorporated into a capitalist digital economy. As Terranova points out: ‘you do not need to be proprietary about source codes to make a profit: the code might be free, but tech support, packaging, installation software, regular upgrades, office applications and hardware are not’ (Terranova, 2000: 51). As Terranova further demonstrates, free labor has been a central component in the development of the Web:

Simultaneously voluntarily given and unwaged, enjoyed and exploited, free labor on the Net includes the activity of building Web sites, modifying software packages, reading and participating in mailing lists, and building virtual spaces on MUDs and MOOs. Far from being an ‘unreal’ empty space, the Internet is animated by cultural and technical labor through and through, a continuous production of value that is completely immanent to the flows of the network society at large (Terranova, 2000: 33).

Unpaid content production, then, is one aspect of a capitalist process focused on the ‘creation of monetary value out of knowledge/culture/affect’ (Terranova, 2000: 38). This new form of capitalism as expressed on the Internet and the Web is about nurturing, exploiting and exhausting the ‘cultural and affective production’ of the labor force (Terranova, 2000: 51). Terranova’s analysis is easily applicable to commercial Web 2.0 models, where processes of commercialization can take place at the level of users and their content. Establishing a new – and free – Facebook account lays the groundwork for the growth of one’s social network. In exchange for this service, Facebook reserves the right to use information provided about users – who they are friends with, what their preferences are, what they read or consult – in order to share such information with third parties. YouTube likewise relies on freely and voluntarily produced videos to attract people and sell audiences to the advertising industry, among other marketing techniques, such as promoting partners’ videos.

Commercial Web 2.0 spaces are, however, not simply technologies of content commercialization. There is clearly a qualitative difference between the America Online (AOL) chat hosts working for free that Terranova describes (Terranova, 2000: 33) and Facebook or YouTube users. The main difference lies in how the process of alienation inserts itself within a variety of online spaces. Terranova’s investigation into what can now be considered Web 1.0 describes a process where a capitalist machine ‘nurtures, exploits and exhausts’ user-generated cultural production. In that sense, there is a process of alienation of the user from their cultural production as the content produced serves to maintain and further an economic system. Such processes are far from being absent from Web 2.0 platforms, but the ways in which the platforms re-articulate processes of alienation to users are more complex. The alienation of users from their information first takes place through a technico-legal system whereby the implementation of surveillance software is accompanied by terms of service that authorize the platform to re-use user information. For instance, while users on Facebook retain intellectual property of their content, the Facebook terms of use stipulate that:

By posting User Content to any part of the Site, you automatically grant, and you represent and warrant that you have the right to grant, to the Company an irrevocable, perpetual, non-exclusive, transferable, fully paid, worldwide license (with the right to sublicense) to use, copy, publicly perform, publicly display, reformat, translate, excerpt (in whole or in part) and distribute such User Content for any purpose, commercial, advertising, or otherwise, on or in connection with the Site or the promotion thereof, to prepare derivative works of, or incorporate into other works, such User Content, and to grant and authorize sublicenses of the foregoing.

A second characteristic of commercial Web 2.0 platforms is that processes of alienation are kept invisible to users. A case in point is Facebook’s Beacon. The Beacon was designed to enable more targeted advertising on the Facebook site through the sharing of users’ information with commercial third parties. The Facebook Beacon was quickly met with resistance because of privacy concerns and was changed to an opt-in system rather than a set feature of the website.[4] This software change could be interpreted as a victory against the alienation of users from the information they provide, yet this is far from being the case. The Beacon, after all, was a visual representation of processes of commercialization that are still taking place on the Facebook platform. These processes, however, increasingly take place at the back-end level and, because they are invisible to users, they meet with less resistance.

A third characteristic of commercial Web 2.0 concerns the re-articulation of the dynamics of alienation to make alienation disappear altogether. What Zimmer (2008) describes as a Faustian trade-off between augmented personalized exploration and surveillance and commercialization gives way to a dynamic whereby the process of commercialization is part of providing users with augmented cultural knowledge, affect and desire, to borrow from Terranova (2000). YouTube’s recommendations are designed to assist users in their quest for knowledge that corresponds to their interests. Facebook is about enhancing the personal socialized world of users and multiplying social exchanges through joining groups and sharing stories. Commercial Web 2.0 is about us – it is about re-presenting ourselves through the mediation of the platform. This is where Web 2.0 platforms echo Lazzarato’s point that contemporary forms of capitalism are about the creation of worlds, which means the setting up of a horizon of possibilities. This also means that specific processes of subjectivation can be formulated as the crystallization of psychological, social, economic dynamics and factors that favour the formation of specific subject positions. These processes are present on Web 2.0 platforms and present us with the paradox of narrowing down the field of possibilities while creating, producing and enriching our experience of being on the Web. Commercial Web 2.0 platforms are attractive because they allow us, as users, to explore and build knowledge and social relations in an intimate, personalized way. In this dynamic, the commercialization of users and information is one of the central factors through which this enrichment takes place. As a consequence, alienation disappears, as in the Web 2.0 worlds there is no contradiction anymore between the marketing of user information and the subjective enrichment of users: what used to be two separate processes are now one in the augmentation of social and cultural factors. Third-party advertising is reinscribed as cultural capital produced by the platform for the user through personalized recommendations. The role of the platform, in that sense, is to set up the context, or world, within which such re-articulations can take place.

There is a need to further examine this third process of re-articulation of alienation within commercial Web 2.0 platforms, and this requires further analysis into the ways in which users come to exist as subjects online. The concept of subjectivation as explored by Guattari (1989), Guattari and Rolnik (2007) and Lazzarato (2004) points out that what is unproblematically considered as an individual subject in online spaces is actually the ‘momentary stabilization and enclosure’ (Lazzarato, 2004: 57) of a specific position articulated through a range of heterogeneous forces. Subjectivity, as Guattari declares, is fabricated and modeled under a social regime (Guattari and Rolnik, 2007: 46). The concept of the platform can enrich our understanding of subjectivation as it brings in a technocultural dimension that helps us to acknowledge that the user cannot be equated with a human actor – it is actually a site of articulation between the technocultural dynamics present through the platform and human actors. In turn, the hybridity of the user points out how processes of subjectivation on Web 2.0 worlds are both highly personalized and standardized. That is, the representation of ourselves takes place through a platform’s universal algorithmic logic. As users, we input personal information into the platform, and in turn, the platform represents us on the user-interface as the aggregation of bits and pieces of images, texts, sounds, videos, and links. The user-interface becomes the site where the exploration and extension of ourselves, our knowledge, culture and affect is negotiated through a technocultural mediation. In that regard, a major change between Web 1.0 and Web 2.0 is the rise of the first-person perspective. In the Web 1.0 environment, the drive was towards providing users with an overall, universalist view of Web spaces. The site map, for instance, emerged as an essential component of websites. With Web 2.0, the site map, the universalist perspective, has entirely disappeared as information is constantly re-classified through personalization and customization. Our relation to others and to what populates commercial Web 2.0 spaces is always realized from an individualized perspective. While Wikipedia is about the sum of individual knowledge to get as close to a universalist perspective as possible, commercial Web 2.0 worlds are about creating processes of socialized individualization whereby the sum of all knowledge is parsed out through customization and individuation. In that sense, the process of subjectivation on commercial Web 2.0 worlds is locative: it is the double logic and the inseparability of finding and being found, of locating ourselves and our personalized network. Locative, in that sense, does not refer to the technical feature of handheld devices, but to a process of creating experiences for users not only in terms of their geographic locale, but in terms of defining and refining their ‘level of existential, inhabited, experienced and lived place’ (Bleecker, 2006). Commercial Web 2.0 platforms help construct worlds and set up the subjectivation processes through which users can inhabit and explore these worlds.

4. World Mapping

The commercialization of users and their content on commercial Web 2.0 worlds is reintegrated as part of a productive dynamic that enables specific modes of subjectivation and defines the individualized and localized relationships between users and the technocultural world around them. The question, in turn, is about understanding how different platforms enable the production of worlds. How can we map commercial Web 2.0 worlds, and therefore build possibilities of critical intervention? We suggest that one entry point towards mapping Web 2.0 worlds is through platforms and through the visualization of the many connections operated by the platforms between users, content and protocols. Understanding these connections at the technical level helps to identify the many autonomous and sometimes heterogeneous processes that authorize these very connections. In that sense, Guattari’s exploration of the constitution of contemporary forms of capitalism as deploying a range of semiotic regimes is important in developing a methodological framework for Web 2.0. Guattari argues that the instruments through which ‘integrated world capitalism’ comes to stabilize and expand itself involve four semiotic regimes: economic, legal, techno-scientific and semiotics of subjectivation[5] (Guattari, 1989: 41). Such a typology of processes and articulations of heterogeneous elements can be easily adapted to commercial Web 2.0: the economic logic of attracting large audiences within a delineated online space is supported by both a legal system that allows for the circulation of information from users to commercial entities and a technico-scientific assemblage of platforms and protocols that render such circulation of information feasible. Furthermore, semiotics of subjectivation in the case of commercial Web 2.0 platforms work towards the stabilization of the production of subjectivities within specific user modes, representational interfaces and algorithms used to create personalized and socialized recommendations that seamlessly articulate economic and cultural processes.

The unpacking of the combination of Guattari’s four semiotic systems as they traverse and shape commercial Web 2.0 platforms can be conducted through a range of methodologies and approaches. Our specific interests lie in developing a critical methodological approach that starts with the techno-scientific, that is, the platform, and that focuses in particular on the circulation of information online. A platform-focused methodology allows for a grounded approach to the formation of Web 2.0 worlds. In particular, our intent is to avoid the trap of trying to define Web 2.0 worlds as broad structures that encapsulate specific issues and moments. Rather, there is a need to develop an approach that pays attention to the ways in which Web 2.0 worlds are constantly re-articulating and stabilizing disparate elements. A platform is the embodiment of a Web 2.0 world, as specific articulations of protocols are allowed by, enact and ground economic, legal and subjectivation processes in specific technocultural contexts. The platform, as such, realizes possibilities through the deployment of protocological modules. In turn, these modules are about the mining, transformation, production and exchange of pieces of information according to the regimes of localization, customization and socialized individualization.

A platform-based methodology facilitates a process of making visible the ways in which protocols are articulated so as to channel information in specific ways and thus enact specific economic, legal, and cultural dynamics. In other words, a methodology that would witness the unfolding of the production of worlds via commercial Web 2.0 platforms would be of considerable benefit in terms of identifying specific sites of stabilization. The first step in developing such a methodology to re-envision the Web requires a move beyond, and below, the user interface. That is, we need to challenge our perception of the Web as rooted within the visual aesthetics of the user interface. This is all the more crucial and challenging because, on proprietary and closed websites such as Facebook, the interface becomes a limiting factor: our only point of entry is through the first-person perspective of our own network. We can never have access to the totality of the information available on Facebook via the interface, and this limits our analysis. Indeed, the user perspective creates a specific worldview of Web 2.0 – one that is localized, grounded in and delineated by a specific visual regime. Adopting a platform perspective helps overcome the limitations of the user worldview so as to understand broader subjectivation processes, and re-localize users within the visible and invisible elements that compose Web 2.0 worlds.

As the role of the platform is to enact and realize Web 2.0 worlds through the articulation of protocols, it becomes necessary to dissect these protocological articulations. Platforms perform connections via protocols among “objects”: users, pieces of content, pieces of information produced through recommendation and personalization software. These connections enable the formation of the relationships that populate Web 2.0 worlds. In other words, Web 2.0 platforms establish the channels through which information can circulate and the challenge lies in developing tools to track, map and visualize such channels, from the protocols that enable them to their effects on stabilizing specific modes of being online. Such an approach has roots in the critical aesthetics of software studies – for instance, Fuller’s Webstalker (2003) as an alternative Web browser was an important step in understanding how user perceptions of what the Web is are constructed through specific visual regimes at the level of the user interface. Such an approach also stems from info-visualization as the production of tools to render visible previously invisible processes.
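
As a minimal sketch of what tracking, mapping and visualizing such channels might look like in practice, the following assumes the networkx library and a set of entirely hypothetical observations gathered by a crawler; it is an illustration of the mapping logic rather than a finished research tool. Each observation records that an object, identified by a traffic tag, was found on a given platform, and the resulting bipartite graph is a rudimentary map of connectivities between platforms.

```python
# Sketch: mapping the circulation of Web objects (identified by traffic tags)
# across platforms as a bipartite graph. Observations are hypothetical.

import networkx as nx

# (traffic_tag, platform) pairs as they might be collected by a crawler
observations = [
    ("yt:abc123XYZ00", "youtube.com"),
    ("yt:abc123XYZ00", "facebook.com"),     # the same video embedded on Facebook
    ("yt:abc123XYZ00", "blog.example.org"),
    ("fb:group/1234", "facebook.com"),
]

graph = nx.Graph()
for tag, platform in observations:
    graph.add_node(tag, kind="object")
    graph.add_node(platform, kind="platform")
    graph.add_edge(tag, platform)

# Objects that circulate beyond a single platform are candidates for closer analysis
for node, data in graph.nodes(data=True):
    if data["kind"] == "object" and graph.degree(node) > 1:
        print(node, "circulates on:", sorted(graph.neighbors(node)))
```

From such a graph, standard network visualization (e.g. drawing the graph with matplotlib) can produce the kind of map of flows, migrations and re-circulations evoked above.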

This critical approach to Web 2.0 platforms is based on disaggregation. Disaggregation as a method through which to strip, parse and rip the platform into its components is a useful approach, albeit one that needs to be adapted to the Web 2.0 environment. Indeed, most disaggregation tools rely on Web 1.0 protocols accessible to users, such as hyperlinks, domain names, or metatags. There are, however, also user-accessible protocols in Web 2.0, and these can be understood as traffic tags. Traffic tags are pieces of identification that are attached to a Web object: a user, a video, a picture each have, for instance, a distinct ID number on Web 2.0 platforms. Traffic tags can be human-generated, such as the title of a video, or the real name of a user as they appear on the user-interface, or the user tags that describe how an object belongs to a class of object (e.g. “X’s wedding” or “election 2008”). Traffic tags are also computer-generated: unique identification numbers are assigned to a YouTube video, as well as to users on Facebook. Traffic tags allow not only for the recognition of objects within Web 2.0 platforms, but are also used by protocols to allow objects to circulate across platforms. For instance, when a user presses the “Share on Facebook” button after watching a video on YouTube, the ID number of the video will reappear in the Facebook source code of the user’s page. The current challenge thus lies in identifying and following traffic tags associated with Web objects so as to see how information circulates within and across Web 2.0 platforms. This will give clues as to how cultural processes that are traditionally only visible at the level of the user-interface are dependent on the software interfaces. In turn, this disaggregation of the articulation of Web 2.0 protocols will serve to identify techno-scientific semiotics and the ways in which they are associated and articulated with legal and economic semiotics and semiotics of subjectivation.
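
A minimal sketch of following one such traffic tag, the YouTube video ID, is given below. It fetches the HTML source of any page the researcher can access and extracts the 11-character IDs that appear in common YouTube URL patterns (watch?v=, /embed/, youtu.be/). The regular expression reflects assumptions about typical embed markup, not an exhaustive account of how every platform re-inscribes the tag, and blog.example.org is a placeholder URL.

```python
# Sketch: extracting YouTube video IDs (one kind of traffic tag) from a page's source.
# Finding the same ID on two different sites is evidence that one object circulates
# across two platforms.

import re
import urllib.request

YOUTUBE_ID = re.compile(
    r"(?:youtube\.com/(?:watch\?v=|embed/)|youtu\.be/)([A-Za-z0-9_-]{11})"
)

def extract_video_ids(url: str) -> set[str]:
    """Return the set of YouTube video IDs found in the page's HTML source."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    return set(YOUTUBE_ID.findall(html))

# Example usage (placeholder URL):
# ids = extract_video_ids("https://blog.example.org/some-post")
```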

Such an approach is a challenge for researchers accustomed to working at the level of the user interface. Yet, moving from the user interface to the software interface is promising in terms of not only analyzing the technocultural economy of commercial Web 2.0 worlds through the mapping of the unfolding of protocological assemblages, but also with regards to using commercial Web 2.0 in non-commercial ways. Of particular interest are the Application Programming Interfaces (APIs), which have become a central feature of Web 2.0 spaces. APIs allow software programs to connect to Web 2.0 platforms and databases and undertake specific tasks. APIs offer a way to access information and tags while bypassing the limits of the user interface. Therefore, their potential as ways to develop critical methodological tools should be explored. The goal would be to create new visualizations, new geographies – a map of where objects flow, where they migrate, where they are reshaped and re-circulated. The mapping of the connectivities and disconnectivities of Web 2.0 platforms could thus make use of techno-scientific semiotics as an entry point for understanding the production of Web 2.0 worlds.
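
As a minimal sketch of this kind of API-based access, the following queries a platform API for the metadata attached to a traffic tag rather than scraping the user interface. It assumes the current YouTube Data API v3 (which postdates this article) and a developer API key; other platforms expose analogous, though differently structured, endpoints.

```python
# Sketch: retrieving information attached to a traffic tag via a platform API
# instead of the user interface. Requires a YouTube Data API v3 key.

import json
import urllib.parse
import urllib.request

API_ENDPOINT = "https://www.googleapis.com/youtube/v3/videos"

def fetch_video_metadata(video_id: str, api_key: str) -> dict:
    """Return title, tags and view count for a video, as exposed by the API."""
    query = urllib.parse.urlencode(
        {"part": "snippet,statistics", "id": video_id, "key": api_key}
    )
    with urllib.request.urlopen(f"{API_ENDPOINT}?{query}") as response:
        item = json.load(response)["items"][0]
    return {
        "title": item["snippet"]["title"],
        "tags": item["snippet"].get("tags", []),   # human-generated traffic tags
        "views": item["statistics"]["viewCount"],  # aggregate data not visible at this scale in the UI
    }

# Example usage (hypothetical video ID and key):
# metadata = fetch_video_metadata("abc123XYZ00", api_key="YOUR_KEY")
```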

Critical interventions into commercial Web 2.0 platforms are needed if we are to recognize the cultural importance and critical potentials of Web 2.0. Many instances in code studies and software studies show that such interventions can take different forms, from radical ruptures to aesthetic experiments and methodological tools. It is crucial, however, not to be too quick in formulating critical judgments, and to understand the power dynamics in commercial Web 2.0 as both repressive and productive. That is, the ontogenesis of Web 2.0 is about the creation of inhabitable worlds within which users can exist and extend themselves according to specific technocultural logics. The challenge, then, lies in formulating alternatives that make use of specific protocological articulations and divert them so that they are not about stabilizing a system, but rather about creating other possibilities. We hope that the above framework provides a step towards enabling the mapping of protocological formations on Web 2.0 platforms.

Authors’ Biographies

Ganaele Langlois is Associate Director at the Infoscape Research Lab at Ryerson University (Toronto, Canada, www.infoscapelab.ca) and Assistant Professor of Communication in the Faculty of Criminology, Justice and Policy Studies at the University of Ontario Institute of Technology (Oshawa, Canada). Her research interests build on software studies, Actor-network theory, and Guattari’s mixed semiotics to study the critical, cultural and political dimensions of online participatory cultures. Her publications have appeared in the Canadian Journal of Communication and New Media & Society.

Fenwick McKelvey is a PhD student in the Communication and Culture program at Ryerson and York Universities (Toronto, Canada) and a Research Associate at the Infoscape Research Lab. His research explores the intersection of network management with regimes of human governance. He holds a Joseph-Armand Bombardier Canada Graduate Scholarship.

Greg Elmer is Bell Globemedia Research Chair and Director of the Infoscape Research Lab at Ryerson University, Toronto. Greg’s research and teaching focus on new media and politics, information and communication technologies, computer networks, and media globalization. Greg’s scholarly publications have appeared in the peer reviewed journals First Monday, New Media & Society, Screen, Convergence, and Scan. Greg has published a number of books including, most recently with Andy Opel, Preempting Dissent: The Politics of an Inevitable Future (ARP Press, 2008), https://www.arbeiterring.com/new/preempting.html.

Dr. Kenneth Werbin is Assistant Professor of Contemporary Studies and Journalism at Wilfrid Laurier University-Brantford Campus in Ontario, Canada. His research focuses on the security and surveillance dynamics surrounding the automated aggregation and circulation of personal biographic content across social media platforms.

Notes

[1] See https://en.wikipedia.org/wiki/Criticism_of_Facebook for a timeline and resources on privacy issues and Facebook.

[2] Our working definition of the Deleuzo-Guattarian approach to ontogenesis is best summarized by Ansell-Pearson: “it is the process itself that is to be regarded as primary. This means that ontogenesis is no longer treated as dealing with the genesis of the individual but rather designates the becoming of being” (1999, p. 90).

[3] https://www.eetimes.com/encyclopedia/defineterm.jhtml?term=platform

[4] https://en.wikipedia.org/wiki/Facebook_Beacon

[5] Semiotics of subjectivation do not particularly belong to one category, as opposed to legal, economic and techno-scientific semiotics. Rather, they include a wide range of processes and flows that, in the case of the capitalist machine, work to create norms which individuals can interiorize (Guattari, 1989: 24).

References

Ansell-Pearson, K. Germinal Life: The Difference and Repetition of Deleuze (London: Routledge, 1999).

Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven: Yale University Press, 2006).

Bleecker, Julian. ‘Locative Media: A Brief Bibliography and Taxonomy of GPS-Enabled Locative Media’, Leonardo Electronic Almanac 14.3 (2006). https://www.leoalmanac.org/journal/vol_14/lea_v14_n03-04/jbleecker.html

Boyd, Danah. ‘Facebook’s Privacy Trainwreck: Exposure, Invasion, and Social Convergence’, Convergence 14 (2008): 13-20.

Chun, Wendy. ‘On Software, or the Persistence of Visual Knowledge’. Grey Room 18 (2005): 26-51.

Cramer, Florian and Matthew Fuller. ‘Interface’, in Matthew Fuller (ed.) Software Studies: A Lexicon (Cambridge: MIT Press, 2008), 149-152.

Elmer, Greg. ‘The Vertical (Layered) Net’ in David Silver (Ed.) Critical Cyberculture Studies: New Directions (New York: NYU Press, 2006): 159-167.

Elmer, G., Ryan, P. M., Devereaux, Z., Langlois, G., Redden, J., & McKelvey, F. ‘Election Bloggers: Methods for Determining Political Influence’ First Monday 12.4 (2007). https://www.firstmonday.org/issues/issue12_4/elmer/index.html.

Fuller, M. (Ed.). Software Studies: A Lexicon (Cambridge: MIT Press, 2008).

Fuller, M. Behind the Blip: Essays on the Culture of Software (Brooklyn, NY, USA: Autonomedia, 2003).

Galloway, A. Protocol: How Control Exists after Decentralization (Cambridge: MIT Press, 2004).

Garrido, M. & Halavais, A. ‘Mapping networks of support for the Zapatista movement’ in M. McCaughey and M. Ayers (Eds.) Cyberactivism: Online Activism in Theory and Practice. (London: Routledge, 2003).

Grimmelmann, J. ‘Regulation by Software’, Yale Law Journal 114 (2005): 1719-1758.

Grusin, R. ‘Location, Location, Location: Desktop Real Estate and the Cultural Economy of the World Wide Web’, Convergence 6.1 (2000): 48-61.

Guattari, Felix. Les Trois Écologies (Paris: Galilée, 1989).

Guattari, Felix and Suely Rolnik. Micropolitiques (Paris: Les Empêcheurs de penser en rond, 2007).

Hayles, N. K. ‘Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis’, Poetics Today, 25.1 (2004): 67-89.

Jenkins, H. Convergence Culture: Where Old and New Media Collide (New York: New York University Press, 2006).

Jenkins, Henry and Tiziana Terranova. ‘What’s so Significant about Social Networking? Web 2.0 and its Critical Potentials’, International Communication Association Conference (May 2007).

Johnson, S. Interface Culture: How New Technology Transforms the Way We Create and Communicate (San Francisco: HarperEdge, 1997).

Jørgensen, A. H., & Udsen, L. E. ‘From Calculation to Culture: A Brief History of the Computer Interface’ in K. B. Jensen (Ed.), Interface://Culture – The World Wide Web as Political Resource and Aesthetic Form (Frederiksberg: Samfundsletteratur Press/NORDICOM, 2005): 39-64.

Lazzarato, Maurizio. Les Révolutions du Capitalisme (Paris: Empêcheurs de penser en rond, 2004).

Lessig, L. Code: Version 2.0 (New York: Basic Books, 2006).

Mackenzie, A. Cutting Code: Software and Sociality (New York: Peter Lang, 2006).

Manovich, L. The Language of New Media (Cambridge: MIT Press, 2002).

Marres, Noortje and Richard Rogers. ‘Recipe for Tracing the Fate of Issues and Their Publics on the Web’ in Bruno Latour and Peter Weibel (Eds.), Making Things Public (Cambridge: MIT Press, 2005): 922-935.

O’Reilly, T. ‘What is Web 2.0’, (2005), https://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html.

Schneider, S. & K. Foot. ‘Web sphere analysis: An approach to studying online action’ in C. Hines (Ed.) Virtual Methods: Issues in Social Research on the Internet (Oxford: Berg Publishers, 2005): 157-170.

Terranova, Tiziana. ‘Free Labor: Producing Culture for the Digital Economy’, Social Text 18.2 (2000): 33-58.

Turkle, S. Life on the Screen: Identity in the Age of the Internet (New York: Touchstone, 1997).

Vossen, G. and Hagemann, S. Unleashing Web 2.0: From Concepts to Creativity (Amsterdam: Elsevier/Morgan Kaufmann, 2007).

Zimmer, Michael. ‘The Externalities of Search 2.0: The Emerging Privacy Threats when the Drive for the Perfect Search Engine meets Web 2.0’, First Monday 13.3 (2008). https://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2136/1944.
