February 2010

Despite the technical difficulties of the video conference this Saturday, I feel there was nevertheless some great discussion.  In the interludes during which the video software cooperated with us, we heard from Katherine Hayles, Jerome McGann, Matthew Kirschenbaum, and Arthur Kroker.  I know for a fact that Michael Joyce and Rita Raley made it into the program, but we couldn’t hear the first, and severe lag prevented us from listening to the second.

Hayles began the event with an intriguing story that served as a representation (Concetta aptly called it a parable) of the intersection of humanities and technology in the form of a war between “DHs” and “PHs,” the D standing for “digital” and the P for “physical” or “paper,” I assumed.  Martin said he was recording the event, and listening to that would make for a much better rendition than my shoddy recap, I’m sure.  To me, her point was that departments seeking integration with digital humanities will be better off than those that don’t.  Either way, it was good to put a voice and face to the words we’ve been reading.

McGann envisioned what he called the “New World Library,” or “a machine that will stabilize the cultural record, both print and digital.”  This discussion led to McGann’s commenting on the Google Books project, a topic which dominated the discussion for most of the remainder of the conference.  McGann commented that, when the Google Books settlement occurred, scholars were not invited to the conversation, causing them to be left out of a very important discussion regarding the digital future of books.

The second part of the conference became more of a local discussion, as by that time the server’s ability to handle the connections had for the most part faltered.  We listened to Dr. Kamrath talk about his visit last week to the University of Virginia, where McGann had also been, and discussed some of the issues brought up earlier in the event.  Of particular interest to me (at least partially because I just started working with Dr. Kamrath on his Charles Brockden Brown project) was Dr. Kamrath’s talk about digital standards.  He said that a mere 5-6 years ago, it was deemed acceptable to scan a document as a JPEG file at 300 dpi.  Anyone familiar with the technology will know that a 300-dpi JPEG won’t tell you the difference between the dot on an “i” and a piece of dust on the page, a distinction that at times matters to scholars looking at old documents.  This brought the conversation once more to libraries and the Google Books topic, which personally I was glad to hear about since I am woefully under-informed on the issue.  Furthermore, it’s good to hear opinions from the scholarly crowd that was apparently neglected when the decision to allow the service was made.  Most of the controversy seems to have centered on copyright and the opinions of authors and publishers, completely leaving out the perspective of academics.
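To make the resolution point concrete: dots per inch fixes how much physical detail a scan can resolve. A minimal sketch in Python (the feature sizes below are my own illustrative assumptions, not archival figures):

```python
# Rough sketch: how many pixels a physical feature occupies at a given scan resolution.
# The ink-dot and dust-speck sizes are illustrative assumptions, not measured values.

MM_PER_INCH = 25.4

def pixels_across(feature_mm: float, dpi: int) -> float:
    """Number of pixels a feature of the given physical width spans at the given dpi."""
    return feature_mm / MM_PER_INCH * dpi

for dpi in (300, 600, 1200):
    dot = pixels_across(0.3, dpi)   # assumed ~0.3 mm ink dot on an "i"
    dust = pixels_across(0.1, dpi)  # assumed ~0.1 mm dust speck
    print(f"{dpi:>5} dpi: ink dot ~ {dot:.1f} px, dust ~ {dust:.1f} px")
```

At 300 dpi both marks span only a handful of pixels, so their shapes blur together; each doubling of resolution doubles the pixels available to tell them apart.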

I’ll leave off here for now, since I’m sure others who attended will have their own content to contribute, and those who didn’t should have their own insight and opinions on the topics above.

Hayles and the Literary

In her book Electronic Literature: New Horizons for the Literary, Hayles discusses the shift in literature from the era of the book as a “precious artifact” to a more modern “electronic literature,” looking at a range of first-generation digital objects (1-3). These digital objects range from the initial “[lexia] with limited graphics, animation, colors, and sound” to objects making “a fuller use of the multimodal capabilities of the Web” (6). Among these objects she lists a range of interactive fiction, such as “Bad Machine” or “Carrier (Becoming Symborg),” which she then interprets within her book.

Her basic premise for interpreting electronic literature and digitally born literature focuses on computer-mediated text as layered (163), multimodal, separately stored and performed, and temporally fractured (164). “These characteristics provide focal points for the recursive dynamic between imitation and intensification” (165). Specifically, the print novel imitates electronic textuality while intensifying the specific traditions of print (162). Ironically, print authors who fear electronic literature will make print obsolete may not recognize that the same technology can “be seen as print in the making: we have met the enemy and he is us.” What some may see as a battle others may see as a symbiotic dance.

In the recent CNN Money article “10 Luminaries Look ahead to the Business of Reading” various authorities discuss this very dynamic:

(Fortune Magazine) — 1. Kurt Andersen, novelist and public radio host
“Anything remotely resembling news media is going to continue to migrate online until very little or none of it is produced on dead trees. But what remains to be figured out is how it’s paid for, and whether this whole system of enormous magazine and newspaper staffs can be reconfigured to be sustainable in this new age.”
“I think we’ll see content that’s a deeper, better hybrid of audio, video, and print emerge, and that will become the default expectation of people.”
3. Jimmy Wales, founder, Wikipedia, the collaborative online encyclopedia
“I think we’re going to continue to see more or less what we see today — a mix of free and paid, advertising-supported content, and a lot more community-generated content. But I don’t think we’re really going to see any radical changes to that mix because it is working well for consumers.”
5. Marc Andreessen, co-founder, Netscape and Andreessen Horowitz, a venture fund
“The good news is that reading is alive and well and flowering in a way we’ve never seen before. Text is the primary format of the Internet. More and more text is flowing over Facebook every day. The written word is alive and well and thriving.”
9. Kevin Rose, founder, Digg
“I want a lot of social features to be built into the tablet reading experience. If I’m reading a book I want to see where my friend left off, or I want to be able to leave a voice annotation around a chapter so if a friend stumbles upon that chapter they can listen to what my thoughts were around that area. I want rich media incorporated into my books. I want the ability to go out and look up instantly on Wikipedia what something means or see pictures or video around that. That doesn’t exist today.”
10. Matt Mullenweg, founding developer, WordPress
“I think in the future we’ll see more content produced by smaller organizations.”
“I think for the written word, the elements of it that make it successful — the basics of typography, the quality of writing — haven’t changed very much in hundreds of years. And those fundamentals don’t change when you’re on the screen, whether you’re looking at a tablet or a Kindle or anything like that.”

The biggest concerns seem to be how to pay for this content, who new authors will be, and what the content will look like or how it will function. Many of the people in the full article discuss the iPad as the preferred alternative to the Kindle. (The iPad has not been released yet; it is scheduled for a late March or early April 2010 release, depending on the model. Steve Jobs debuted it in January.)

This is certainly an exciting time to study the literary. I am most surprised by the resistance to the literary and re-mixed media. I accept that it can be a frightening prospect to have everything change, but change is a part of life. You either embrace it or get left behind. Also, I was doing a book review search for Electronic Literature, and I couldn’t find a single article on it other than a brief library review. Yet this book is seminal to the field. I must be missing something. Does anyone know where to look?

When thinking of the book as the end-all be-all of communication in a literate society, the equations used to describe the exchange of ideas and information are quite plain, simple, and clear: The writer is the creator, the book is the medium, and the reader is the audience. Critical analysis of this scenario is blessedly limited by its simplicity. Academics, theorists, critics, and everyone in between could understand and appreciate the relations among the elements involved. Typesetters worried about legible and aesthetic print; bookbinders worried about beautiful, sturdy, and functional physical materials; authors worried about predictable linear story arcs; and the reader was along for the ride. It’s a scenario that is rather glorious in its simplicity, and it’s easy to find a starting point for theory or criticism.

With electronic literature, the interactions among participants become exponentially more complex. The author is now both a writer of words and story arcs but also a typesetter, programmer, game designer, flash author, webmaster, etc. The reader is now a reader, a contributor, a player, a participant, and often a storyteller, as well. The medium, previously just “words on paper”, now includes the physical system used to view the material, the software used to decode and/or run the material, the interface used to access the material, etc. Academics in the field of literature often talk of the essential need to know the context in which a text was written. I think that need is far less severe, though equally relevant, compared to the need to recreate or have on-hand the proper context to experience a new-media work in the first place.

What Hayles drew my attention to is something we briefly discussed toward the end of last night’s class: There are as yet no accepted standards for assessing the quality of a piece of electronic literature. I’m now beginning to question whether there ever will be. With so many variables that are completely in flux with every work in consideration, I’m not sure that any very direct or specific critical approach can be presented.

While reading Hayles’ views on the merits of elit, I found myself thinking that we should use standards of analysis appropriate to poetry, since the end goal seems the same: to influence the reader and to create a specific emotional reaction by presenting the material in a specific way. But as elit has matured beyond its initial experimental phase, we may find that it’s mostly reverting to familiar structures of narrative and expression. Video games are becoming increasingly narrative-driven. Movies rely more and more on their surrounding backstories to enrich the experience. Even websites are becoming more immersive, guiding the user’s visit along a rather predictable pattern.

This is why I suggested using rhetoric and aesthetic as the standards by which elit should be judged and the perspectives through which it should be evaluated. No matter the platform, language, console, resolution, device, duration, or method of distribution, we should always consider how effectively a work of elit achieved its rhetorical goals and how effectively it adhered to aesthetic standards. Hayles argues for this need, I think, very early on in her book, when discussing the development of elit systems: “Like the boundary between computer games and electronic literature, the demarcation between digital art and electronic literature is shifty at best” (12).

My main concern is related to the warning Hayles gives on 119, that “the criterion already dictates the outcome.” If we limit our views of elit to a rhetorical lens, we risk “leading to the predetermined conclusion that electronic literature is inferior to print literature” (118-19).  While Hayles leaves me longing for a concrete answer, she poses a very compelling question of how we can analyze these works.

I had some thoughts about McGann and Hayles as I read the article assigned for this week along with the Hayles reading… Nevertheless, it all culminates in a Hayles moment…


“The cultural history of our world is wrapped up in digital worlds, and in the future, if people want to understand our culture, they’re going to need documents and information,” says Henry Lowood, who leads the preservation effort at Stanford. “We’re in a position to do something about that for these synthetic worlds.”

by Clay Risen

This is crying out for the universal creation of the databases that McGann spoke of in Radiant Textuality… In other words, the documents and information that will be provided are synthetic… AKA, NOT REAL! It is therefore foolish to think that we would ever be able to FULLY capture the essence of the video game. We can simulate it, but not FULLY replace it. But I think that is ok. The beauty of evolving technologies is continually finding ways to remix and reuse the old in new ways…

Video-game preservation is tricky. First, a definitional question: Is a video game just lines of code, or does it include the disk, box, and console? “To preserve an Atari 2600, do you need a piece of shag carpet?” asks Kirschenbaum. He’s only half joking: this year a team at Georgia Tech made an emulator that lets old games be played on today’s computers, but makes them look fuzzy, as if they were on a TV circa 1977.

Of course a video game is much more than just code: the true experience goes beyond it… This may lead to a situation where we end up with a Simulation of the Simulated. Many video games actually simulate real-life situations, so if we need or intend to archive these games, we would end up simulating the [already… formerly] simulated…

This simulating of the simulated is a very interesting concept; I wonder if it parallels Hayles’ thoughts about interactive fiction (IF). She herself says that it is sometimes very difficult to differentiate between what is narrative and what is a game when one begins to add electronic elements to narrative (8). She goes on to say that the difference rests in the configuration that happens in a game versus the interpretation that happens in a narrative. To clarify, I believe she is saying that a game involves more definitive, structural, and objective elements, whereas a narrative should foster a less linear and more subjective perspective based on the reader. In other words, there may be several outcomes in a game, but any two individuals who play it are limited by those preset outcomes, while a “true” electronic narrative could produce multiple interpretations that are not preset or prescribed.

Now, to totally turn this argument on its backside: the Ivanhoe Game developed by McGann and his colleagues seems to press against the Hayles distinctions mentioned above.

The Ivanhoe Game invites participants to use textual evidence from a given literary text to imagine creative interpolations and extrapolations……(36)

So, if the above mentioned is actually occurring, we are forever shifted to a new place where narrative will never be the same. That delineation will forever be muffled and confused as those of us who care to push against the grain do so fearlessly.

In her book Electronic Literature, N. Katherine Hayles works toward addressing the challenges and the potential that electronically based writing brings to critical understanding. While the genres of electronic literature that Hayles addresses, particularly those named for the software by which they are produced, might have relatively limited application as technology changes, it is possible that more concrete terms may begin to form around the general trends that Hayles identifies. The challenges that electronic literature poses to the concept of narrative, and the possibility it provides to engage in radical experiments with spatiality and temporality, are evidently areas that Hayles believes to be vital to the exploration of electronic literature as a critically rigorous genre.

The concepts of dynamic hierarchies and fluid analogies are important to take from this book and, according to Hayles, vital to understanding electronic literature. Computer systems, as they grow more complex, create multiple sites of contact between digital compositions and critical interpretations through mutually determining interactions. These influencing relations are heightened by increasingly ubiquitous computing and brought forth through the composition of electronic literature.

Hayles’ positioning of the subject as neither body nor machine, but rather the intertwining of these two elements to create another subjectivity (88), is one of the strongest appeals of the book. It is from this positioning that Hayles makes her strongest claims about the reflexivity of media, as electronic literature is influenced by and in turn influences the more traditional media from which it was derived. Her resistance to seeing body and technology as opponents leads her to a point at which she critiques Mark B. N. Hansen, who built his arguments largely on Hayles’ earlier How We Became Posthuman, suggesting that in trying to prove the vital role of the body he has dismissed its potential for transformation through mechanical means.

Hayles’ prediction at the beginning of her final chapter, that “digital literature will be a significant component of the twenty-first century canon” (159), is by no means a small claim, as Hayles herself seems to suggest. The fact that almost all literature spends most of its life as a digital file is perhaps an important fact of production, but the consumptive efforts of readers, even critically minded academic ones, have been largely limited to paper in bindings. Even if the medium is largely digital, the output is what holds the focus. With increased access to digital book readers, popular, even commercially successful digital literature may not be far off. While Electronic Literature is a good book today, one of great interest to media studies scholars, I predict that it will be of importance to nearly all scholars of literature in the not-too-distant future. New texts are composed with increasing digital finesse, and old texts are being remixed into electronic media formats.

What is theory?  It is, at least in part, an effort to make explicit what is implicit and to expose what is assumed but unspoken. To paraphrase Niklas Luhmann, theory seeks to perceive the reality which one does not perceive when one perceives it.  If so, then Katherine Hayles is offering us a theory of electronic literature that renders electronic literature an embodiment of theory.  Not, however, in the sense early theorists of hypertext imagined, but in a way that is more consonant with the anti-theoretical perspective advocated by Jerome McGann in Radiant Textuality.

Early hypertext theorists such as George Landow enthusiastically read electronic literature as an embodiment of poststructuralist theory.  It appeared that electronic literature manifested on the surface everything Roland Barthes painstakingly sought to reveal about traditional literature and the author.  This reading of electronic literature, however, appears now to have been stillborn because of its identification of the hyperlink as electronic literature’s “distinguishing characteristic,” a move which Hayles shows was beset by serious problems.  (EL, 31)  Likewise, McGann seems intent on moving past this mode of theorizing.  As I read him, it is not so much theory itself which McGann repudiates, but a certain way of constructing and applying theory.  As I’ve noted in response to Radiant Textuality, theory for McGann is best aligned with the kinds of knowledge that arise from performance/deformance because McGann envisions theory as poiesis rather than gnosis. The kind of embodied knowledge or theory that arises from acts of making or performance “makes possible the imagination of what you don’t know” because it elicits knowledge from failure and also from serendipity. (RT, 83)

McGann’s notion of imagining what you don’t know recalls Hayles’ clever appropriation of Rumsfeld’s Zen-like categorizations of knowledge.  As she puts it, “I propose that (some of) the purposes of literature are to reveal what we know but don’t know that we know, and to transform what we know we know into what we don’t yet know.”  Further resonating with McGann, she sees literature achieving this knowledge by “activating a recursive feedback loop between knowledge realized in the body through gesture, ritual, performance, posture, and enactment, and knowledge realized in the neocortex as conscious and explicit articulations” in much the same way that McGann sees theory arising through “the kinds of knowledge involved in performative operations.”  (EL, 132; RT, 106)  In one sense I would argue that as Hayles describes the effects of electronic literature it functions in a way reminiscent of McGann’s practices of deformance.

For both Hayles and McGann, theory is bound up with the body, with the material, and with action.  For McGann, deformance is a practice which leads readers to tap, through performance, the kind of embodied knowledge that helps them reckon with the materiality of the text.  For Hayles, electronic literature already requires or is intended to force the kinds of interactions that deformance attempts to artificially elicit in the context of print.  Put otherwise, the productive disruptions code introduces into narrative awaken us, according to Hayles, to the reality of the human life-world’s integration with intelligent machines, in much the same way that the disruptions of deformance awaken us to the material realities of the text according to McGann.

Electronic literature as Hayles theorizes it draws into the open features of human existence that previously lay below the level of awareness.  At its best, then, electronic literature, like good theory, reveals what is not always perceived but always present.  And this, according to Hayles, it accomplishes by “creating recursive feedback loops between explicit articulation, conscious thought, and embodied sensorimotor knowledge.”  (EL, 135)  Or, as McGann might put it, through poiesis and not merely gnosis.

Today my research interests include the integration and application of software applications and servlets for the purposes of literature, but just a couple of years ago I was one of the people Hayles describes as possessing the characteristics of “deep attention.” My research was mainly concerned with the 18th-century English novel, particularly the works of Henry Fielding, which influenced the long, drawn-out Victorian novel that Brown thought a waste of reading time. It was not until I started to spend more time studying object-oriented programming theory that I began to think about how the process of writing fiction is very similar to the process of writing code (so similar in my mind that I began toying with the idea of reading texts through OOP).
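The parallel I was toying with can be sketched in miniature: narrative elements modeled as objects that compose one another, the way code composes state. A purely hypothetical toy in Python (the classes and the novelistic details are my own illustration, not an actual method from Hayles or anyone else):

```python
# Toy sketch: reading a novel's structure through object-oriented terms.
# All classes, attributes, and examples here are hypothetical illustrations.

class Character:
    def __init__(self, name: str, traits: list[str]):
        self.name = name
        self.traits = traits

class Chapter:
    """A chapter 'composes' characters, much as an object composes its state."""
    def __init__(self, title: str, characters: list[Character]):
        self.title = title
        self.characters = characters

    def cast(self) -> list[str]:
        # The chapter exposes its cast the way an object exposes an interface.
        return [c.name for c in self.characters]

tom = Character("Tom Jones", ["foundling", "good-natured"])
ch1 = Chapter("An introduction to the work", [tom])
print(ch1.cast())
```

The design choice the analogy highlights: a novelist, like a programmer, decides what each unit reveals and what it keeps internal.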

Keeping my background and my recent interests in mind, I was surprised at my initial reaction to Hayles’ Electronic Literature. The first time I read it, my only reaction was that electronic literature is an insult to literature. I found the works on the sample CD to be examples not of literature but of people learning to use software and sharing what they had learned. It was not until I read the text a second time that chapter 3 started to make more sense. Hayles states that computer code is a “double-edged sword. On the one hand, code is essential for the computer-mediated communication of contemporary narratives; on the other, code is an infectious agent transforming, mutating, and perhaps even fatally distorting narrative so that it can no longer be read and recognized as such” (137). My first impression of this statement was that code was mutilating literature, and that when software was used to create texts in the way that “Cruising” (a poem that uses sound, images, and text to simulate the experience of “cruising” around in a car) does, the end product was no longer “literature.” Since my ideas of “literature” were bound to the conventions of the printed text, there was no way to include a piece such as “Cruising.” At first I was much more accepting of the Storyspace texts because they more closely followed the conventions of print. Although they were somewhat interactive with their hyperlinks, they were not a threat to print because they were print: electronic print, but print all the same (no movement, barely any integration of images, and for the most part no sound). I was neglecting the idea that “technologies are embodied because they have their own material specificities as central to understanding how they work as human physiology, psychology, and cognition are to understanding how (human) bodies work” (112).
My initial reaction to the literature on our companion CD being a collection of “people sharing what they learned to do with software” became a narrative itself. While I still do not possess the “hyper attention” needed to appreciate much of the work on that CD, I do think such works are worth preserving and archiving because, as both Hayles and Risen state in their pieces, the work is part of our cultural history.

To speak to Anne’s post, I want to say that I believe the way to determine which e-lit texts are “worthless drivel” and which are worth preserving is to see how they speak to the software that was used to create them. Code Movie 1, for instance, is deceptively simple and can be seen as a “neat” way to use Flash. But if we really start to think about what it is doing in Flash, it becomes a little more complex. Not only does Code Movie 1 display as its narrative “the code” that is normally hidden from the screen (breaking the rule of encapsulation in OOP), it also shows its workings by manipulating the hex code through scene effects and timeline effects.
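The encapsulation point is worth unpacking: in OOP, an object’s internal representation is conventionally hidden behind an interface, and a work that puts the raw code on screen inverts that convention. A minimal hypothetical sketch in Python (my own illustration of the convention, not the work’s actual implementation):

```python
# Minimal illustration of encapsulation: callers see an interface, not raw bytes.
# The class and data are hypothetical, purely to show the convention being 'broken'.

class Image:
    def __init__(self, raw: bytes):
        self._raw = raw  # leading underscore: internal state, hidden by convention

    def describe(self) -> str:
        """The normal, encapsulated view: a summary, not the bytes themselves."""
        return f"image, {len(self._raw)} bytes"

    def expose_hex(self) -> str:
        """The inverted view: put the normally hidden hex representation on display."""
        return self._raw.hex()

img = Image(b"\x89PNG")
print(img.describe())    # prints "image, 4 bytes" -- the encapsulated interface
print(img.expose_hex())  # prints "89504e47"      -- the normally hidden hex code
```

In Python the hiding is only a naming convention, which makes the point nicely: the boundary between interface and internals is social as much as technical.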
