January 2010


Check it out! 🙂

Viddler is a useful video hosting site we might want to consider.

What profound opportunities the creation of the “database” has provided! As Peter Stallybrass (Against Thinking, 1580) notes in commenting on Ed Folsom and Kenneth Price’s Walt Whitman Archive, it helps to “liberate (Whitman, or another author’s name) from the economic and social constraints that govern archival research.” With such liberation, the arguments begin: classification, categorizing, “ownership of knowledge,” use of the technology, and classification of the users of the technology.

However, this liberation comes at a price. For every new “toy” or technological advancement of the computer age, the positive and negative “forces” must come to terms. If, as Lev Manovich claims (The Language of New Media, qtd. in Folsom 1574), database and narrative are natural enemies, then Folsom’s turn to Katherine Hayles’s new terminology, the “dance” of narrative and database, wins us over: in humanistic terms, we prefer a dance to a battle.

The crux of the issue appears in Folsom’s response to Jerome McGann’s comments: “How do we design and build digital simulations that meet our needs for studying works like Walt Whitman’s (or any scholar’s)?” The answer, it seems, is to build the simulations correctly and make them user-friendly, so that users can test and challenge embedded hierarchies and interpretive decisions (Folsom 1609).

As scholars, is it not essential that we question and analyze the development of emerging technologies as best we can, gaining the best possible outcome from their use? In other words, as Meredith McGill states (Remediating Whitman, 1595), “If we misconstrue media shift as liberation, we are likely to settle for less than the technologies can offer us.”

We are here to understand and explain, to make meaning out of the world (Manovich 225), but also to properly assess and evaluate the limitations and possibilities of our new “liberating” technologies. As we surrender to the classifications (i.e., database as work), we begin to realize the scope and magnitude of the task ahead. However, the only limit should be our imaginations; as Freedman (1601) so aptly states: “no less than Whitman, we are compelled to make imaginative response.”

I am trying hard to keep a positive mindset with this book :). There is a lot of technical writing, and many words I am unfamiliar with. Here is what I have managed to squeak out of him so far (suggestions and comments are super welcome!):

Ironically, texts tend to be something of a database in and of themselves. The words in a text are used to categorize, systematize, and organize texts within databases. Words can conjure a meaning, which in turn conjures a picture, which in turn conjures a setting, and so on (semiotics at work, right?). But are we limited by how our long-term memory is indexed? It would be interesting to reflect on how the storage and retrieval of information interacts with McGann’s argument.

This draws me back to the experimental narrative _House of Leaves_, by Danielewski: it is a visual and textual indexing of plot, information, documents, images, and words as a dialogue (more thoughts on this, hopefully, after class when I have a fuller understanding). Taking the indexes, bibliographies, and possibly even footnotes into consideration, all of these point to what I still feel is the human nature (maybe desire is a better term) to fit items into categories/slots/spots/etc. Is this performative…a categorical organization based on what something looks like, feels like, reads like, etc.? Judith Butler might say so. I feel McGann is arguing that even the words in texts can’t escape this determination.

Deformance breaks this down for us (or so I hope). I call it “the onion peel from aesthetics to systemics.” We can superficially interpret a text as just a combination of words on a piece of paper, but we can also peel the layers away to see how it “transforms” (he gives some very technical HTML/markup examples in the text) from words with literal meaning (1s and 0s, possibly) to a performative meaning (philosophical, academic, etc.). From what I am told about 0s and 1s, we could even go as far as to discuss what they “really” stand for. Hmm.

Sorry if this is off topic or rambly – I am working on it :).

What struck me the most about McGann’s thinking is that this man who devotes so much time and energy to building the Rossetti Archive also spends so much effort arguing (successfully, I would say) that bound, printed books are themselves a form of database. He consistently refers to the varying layers of information that can be gleaned from a printed work, from the content itself, to the arrangement of the material in print, to the way printed data is indexed, cataloged, and stored. With examples spanning from the very small through the massive, McGann shows that a text shouldn’t be viewed as any different from a very detailed and robust database. Indeed, I believe he is arguing that modern databases have a hard time reproducing the depth and complexity of a bound version of a database.

The “microscopic” detail of language itself provides challenges for textual analysis for machines, but humans often don’t consider it. When speaking of alphabetic and diacritic forms (what I typically think a computer considers simply bytes of data), McGann says that

“they are the rules for character formation, character arrangement, and textual space, as well as for the structural forms of words, phrases, and higher morphemic and phonemic units—that readers tend to treat them as preinterpretive and pre-critical. In truth, however, they comprise the operating system of language, the basis that drives and supports the front-end software” (115).

His argument is that we, too, process language in a very machine-like way. I see this as the essential root of all his work with databases: trying to reduce the complexities of language, and its interpretation, to things machines can read and understand. Still, I see a large gap between thorough encoding and genuine comprehension.

Where I do wholeheartedly agree with him is on the nature and importance of layout: “A page of printed or scripted text should thus be understood as a certain kind of graphic interface” (199) and on print-based reference systems:

“Grotesque systems of notation are developed in order to facilitate negotiation through labyrinthine textual scenes. To say that such editions are difficult to use is to speak in vast understatement. But their intellectual intensity is so apparent and so great that they bring new levels of attention to their scholarly objects” (79).

There is more useful metadata on a page than most people recognize. For instance, nearly every time I remember having read or annotated something, I recall precisely where on the page it was located. The interconnectedness of hypertext vastly reduces the complexities McGann identifies in notation systems; however, we have yet to sufficiently transfer the physical/spatial aspect of print into the digital arena.

For researchers, database is a very useful “genre,” if it can be called that. As Ed Folsom (co-creator of the Walt Whitman Archive) mentions, it provides a valuable source of information; in theory, the more databases that exist, the more educated we as a public can become. According to him, databases should be taken more seriously as part of both scholarly and popular culture, serving both pedagogy and casual readers. Folsom believes that database is its own emerging genre, separate from “archives,” which are physical and reachable only by the few who have time to gather information there. Databases put information together in simple formats, making everything available to everyone, everywhere.

Stallybrass’s response to Folsom was one of my favorite articles to read, as he essentially used the database argument to suggest that plagiarism is a misunderstood issue. According to him, there is no such thing as originality as we think of it. In fact, we are all just repeating each other’s ideas and words in our own ways, as Shakespeare did deliberately. Stallybrass explained that databases have pros and cons: one of the pros is that they make information available and useful to all, while the cons include issues of plagiarism, monoculture, and information overload. I agree partially with this. He added that databases are everywhere; thinking of them as just a single genre is incorrect, just as dissuading students from plagiarism is incorrect. Everyone borrows and learns from others. As teachers, we have to ask questions that are not likely to be plagiarized completely. Otherwise, we must accept that students will, and should, pull from other sources.

Jerome McGann (The Rossetti Archive) added to the discussion by saying that we have to understand our paper-based inheritance in order to understand older texts. Databases are therefore lacking in this area, whereas archives are a better resource. He also covered this idea in Radiant Textuality, when he described poets’ original works, such as those of Emily Dickinson, who pasted a stamp onto one of her pages. Without a physical representation of the way the page actually looked, the reader or researcher is not getting the full picture.

Meredith McGill (an associate professor who uses the Walt Whitman Archives in her courses) explained that she does not believe database can be considered its own genre. In her view, it is simply a reconfiguring, or a “remediation of archives.”

Overall, I agree that there are some semantic issues regarding what databases are, and there are some places where “overinformation” may occur in database research. It is difficult at times to navigate databases. However, as more than one writer suggested, there are excellent archives in London and other major cities, where researchers can find incredible resources. Sadly, I do not often find myself in London and other locations where there are fantastic amounts of original documents. Sometimes I want information at my fingertips. This is not because I am lazy necessarily; it is because I am constantly seeking more. Databases feed this interest. Whether the movement to database can be considered its own genre is in a way a semantic debate, and one that I am still considering.

Ed Folsom, Jerome McGann, and Jonathan Freedman are three researchers whose work involves databases.

Folsom is the lead scholar of the Walt Whitman Archive, and he believes that the database is the genre of the 21st century.  He lauds the possibilities of databases and believes that they can provide new ways to access and view information.

While McGann, like Folsom, appreciates the opportunities that databases offer, he does not agree with Folsom about what constitutes a database.  He argues that database requires a user interface, and that Folsom’s Whitman “database” is, in fact, a markup interface built on top of a database.

Similarly, Freedman worries that Folsom’s claims about the Whitman Archive are utopian and too optimistic, deeming them overzealous and “self-valorizing” (1600).  Nevertheless, it seems like he believes the Whitman Archive itself is a praiseworthy and invaluable resource.

I believe databases are and will continue to be invaluable, but like Hayles and McGann, I think they are only part of an equation.  They require interpretation, or narrative, to put the raw data together coherently.  McGann might also say that user interface is part of that crucial, necessary interpretation.
