Week of April 15 — Electronic Literature and New Media

This week’s readings center on electronic literature and video games as avenues for DH inquiry. I was interested to see some examples of electronic literature, so I went over to Twine (twinery.org) to get a firsthand look at the kinds of things writers have been producing. At the bottom of the page they link to sample works, and I opted to give Aleks Samoylov’s “Composition in a Minor Key” a chance (I had first been tempted by “Ashes” by Glass Rat Media, but there was a warning for sexual violence–hard pass). “Composition in a Minor Key” was created in 2014 and bills itself as a “surreal interactive story about love, community, loneliness” (Samoylov, 2014). With little prior experience of electronic lit, I read it as a hybrid of a 90s-style choose-your-own-adventure game and a printed Choose Your Own Adventure book. The illustrations are rather primitive, digitally speaking, which adds to the retro feel of the work.
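
For a sense of the mechanics underneath a work like this, here is a toy sketch I put together of Twine-style hypertext branching. The passages are my own inventions, not Samoylov’s, and Twine itself uses hyperlinked passages and its own markup rather than code; this just models the structure:

```python
# Toy model of Twine-style hypertext branching: each passage pairs some
# text with named links to other passages. Invented example passages.
passages = {
    "start": ("You stand at a crossroads in a grey city.",
              {"go left": "cafe", "go right": "alley"}),
    "cafe": ("The cafe is warm but empty. A dead end.", {}),
    "alley": ("A door glows faintly at the alley's end.",
              {"open it": "ending"}),
    "ending": ("You step through. The story ends.", {}),
}

def play(key="start"):
    while True:
        text, links = passages[key]
        print(text)
        if not links:          # no outgoing links: an ending or a dead end
            break
        choice = input(f"Choose one of {sorted(links)}: ")
        key = links.get(choice, key)   # unrecognized input re-shows the passage

play()
```

Seen this way, a work of interactive fiction is just a directed graph of passages, and a “dead end” is simply a node with no outgoing links.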

After a certain number of turns the ending unlocks, but I didn’t get that far. The story has several dead ends that require backtracking to find alternate routes, which I found frustrating.

Something that did intrigue me was the set of tags associated with the work on the Interactive Fiction Database (ifdb.tads.org). I clicked through to look at the most popular tags (sorted into a sort of word cloud) and the list of all the tags used (helpfully alphabetized). Without creating an account it was hard to tell whether that list was exhaustive and controlled (a taxonomy) or whether users could coin new tags (a folksonomy). One of the tags for “Composition” is CYOA — Choose Your Own Adventure — and another is retro, both very appropriate.
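
The taxonomy/folksonomy distinction is easy to state in code. A minimal sketch, with invented tag lists (this is not IFDB’s actual data model):

```python
# Taxonomy: tags must come from a fixed, controlled vocabulary.
CONTROLLED_VOCAB = {"cyoa", "retro", "surreal", "horror"}

def tag_taxonomy(work_tags, tag):
    """Reject any tag outside the controlled vocabulary."""
    if tag.lower() not in CONTROLLED_VOCAB:
        raise ValueError(f"'{tag}' is not in the controlled vocabulary")
    work_tags.add(tag.lower())

def tag_folksonomy(work_tags, tag):
    """Accept any user-coined tag; the vocabulary grows organically,
    synonyms and typos included."""
    work_tags.add(tag.lower())

tags = set()
tag_taxonomy(tags, "CYOA")                          # accepted
tag_folksonomy(tags, "choose-your-own-adventure")   # same concept, new tag
print(tags)
```

The trade-off is visible even in a sketch this small: the taxonomy keeps the vocabulary clean at the cost of gatekeeping, while the folksonomy welcomes everyone at the cost of synonyms piling up.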

I was also interested in learning how media like electronic books and video games are preserved by archives, so I went to the Library of Congress page on digital preservation for the Preserving Virtual Worlds project. Rather than read the 195-page report, I thought I could get more information from the linked project website (http://pvw.illinois.edu/pvw/), but, ironically, the site no longer seems to be supported: I couldn’t get the page to open. This is perhaps because PVWII began shortly after the publication of the PVWI report (https://ischool.illinois.edu/research/projects/preserving-virtual-worlds-ii). From the limited information available, the objectives of PVWII seem similar to those of PVWI, though PVWI set out to develop basic metadata standards while PVWII intends to explore how existing metadata standards can be used for long-term preservation. Sadly, the PVWII website is also no longer supported (http://pvw.illinois.edu/pvw2).
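
The irony of link rot in a preservation project at least has a partial remedy: the Internet Archive. Here is a small sketch that asks the Wayback Machine’s availability API for the closest archived snapshot of a dead URL (the endpoint is real, though the exact response fields could change):

```python
import json
from urllib.request import urlopen

def wayback_snapshot(url):
    """Ask the Internet Archive's availability API for the closest
    archived snapshot of a (possibly dead) URL."""
    api = "https://archive.org/wayback/available?url=" + url
    with urlopen(api) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

# The PVW project page that would no longer open for me:
print(wayback_snapshot("http://pvw.illinois.edu/pvw/"))
```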

Week of April 9 — Digitization and the Print Record

This week’s reading and discussion have me thinking about the book as Book. The bookiness of books. The platform of the printed book. I had assumed that, unlike manuscripts, printed books had nothing to offer over their digital surrogates (or avatars, if you like–thanks, Scott), but our reading, “My Old Sweethearts: On Digitization and the Future of the Print Record” by Andrew Stauffer (Debates, 2016), brought me back to my English undergrad roots and reminded me of the necessity of studying print culture, the history of the book, and the significance of marginalia. But my library science brain is still left wondering how to strike a balance between keeping all the books and the trend toward deaccessioning large swaths of a collection once it has been digitized, since once the books are gone, they’re gone.

Stauffer sources ten volumes via interlibrary loan for his comparison and later comes across an eleventh at a book sale. I’m not sure how large a sample would be preferable for research purposes (as many as one can find?). Stauffer does justice to the pressures facing libraries by acknowledging, and even sympathizing with, the demands to reduce stacks to make way for collaborative spaces and new materials, and to spend funds on digital databases instead of preserving old and rarely used volumes. Off-site storage seems a good compromise, but one he would likely object to, since it prevents the library from fulfilling its role as a “built space for historical encounters” (Stauffer, 2016). Stauffer’s own academic library has been undergoing a renovation that has ruffled some feathers, even prompting the Dean of Libraries, John Unsworth, to step in and defend it on Twitter.

So what to do? (Most) libraries are not getting bigger, or, if they are, it is not with the goal of expanding the stacks to house a larger print collection. There should, at the least, be more transparency when introducing undergraduates to digitized collections, to dismantle the aura of authority and to make clear that the digital surrogate they’ve located (1) is a reproduction of a tangible, paper thing that exists out in the world, (2) is just one version of such a thing, and (3) coexists with other versions that may have varying degrees of value depending on how the scholar is using them. In a perfect world we would keep building bigger and more beautiful libraries, but, in this world, off-site storage will have to do, though allowing students and researchers access to these facilities (within reason: by appointment, during certain hours, etc.) may help alleviate some of the discomfort felt when paper books are moved out of the library.

There is also the touchy subject of what happens to books when they are deaccessioned — sometimes they are put into the dumpster, which seems to light a fire inside people otherwise uninterested in the mundane goings-on of library spaces. Deaccessioning, or weeding, is a necessary part of maintaining a healthy collection. Recently the subject caused a stir in a large Facebook group of librarians and library workers. Librarians and staff go to great lengths to ensure that books don’t end up in the dumpster, but sometimes all other avenues have been exhausted.

All this is to say that of course librarians work hard, but of course we can always do better to meet the needs of our patrons. Universities and other institutions can help by preserving existing funding, or creating new funding, to expand the stacks or pay for off-site storage; by encouraging partnerships between scholars and digital scholarship services to digitize multiple versions of printed works; and by encouraging faculty to embed librarians within their courses or invite teaching and outreach librarians to speak to their classes about the limitations and opportunities of print and digitized materials.

Week of April 2 — Scholarly Editing and Markup Languages

Markup languages are something I have had very limited experience with. Most recently I created a Wikipedia page and was horrified at first to realize that the WYSIWYG editor was missing (spoiler: it wasn’t missing; my settings had been changed), and panic and dread rushed over me as I faced the challenge of creating something from nothing. Thankfully I toggled back to the visual editor rather quickly and was able to create my page without further distress, but the episode signaled to me a dependence on built-in tools for creating digital objects, and the ways in which we are constrained by choices made for us if we don’t know how to use HTML and XML to display and describe our content.

One place where I had not previously considered the significance of controlling the markup is scholarly publishing and editing. Professor Jones mentioned in class that “digital textuality undermines a single dominant narrative,” and I love this concept — that by having control over the display, an editor can include multiple variants alongside the text, or put alternate and even conflicting versions of a text in proximity to one another. Digital texts can allow for greater transparency in the scholarly editing process, revealing that often there is not a singular text but an assemblage of expressions and manifestations of a text (FRBR connects these conceptually in its relationship model).
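
To make this concrete, here is a sketch of how a TEI-style critical apparatus encodes variant readings side by side. The apparatus entry is my own invented example (not from any edition we read), parsed with Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A TEI-style critical apparatus entry (invented example): <app> groups
# the editor's preferred reading (<lem>) with variant readings (<rdg>)
# attested in different witnesses (the wit attribute).
sample = """
<app>
  <lem wit="#MS-A">bright star</lem>
  <rdg wit="#MS-B">bright starre</rdg>
  <rdg wit="#print-1820">Bright star</rdg>
</app>
"""

app = ET.fromstring(sample)
for reading in app:
    print(f"{reading.tag} ({reading.get('wit')}): {reading.text}")
```

The point is that no reading is silently discarded: the markup keeps the lemma and its rivals together, and the display layer decides which to foreground.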

Week of March 26 — Text Mining

Text mining as it functions presently allows research and analysis to be conducted using distant reading techniques previously unavailable to scholars, because the enormous volume of data exceeds what one person, or a team of people, could reasonably study, even given a lifetime. Optical character recognition (OCR) has enabled the digitization of huge swaths of heritage texts quickly and relatively inexpensively, though not without flaws. The transformation of printed text to digital has been backed by organizations like HathiTrust, which currently boasts more than 140 members and almost 17 million total volumes, over 6 million of them in the public domain. Emerging research projects can use software to comb through these volumes to detect large-scale trends previously imperceptible to researchers.
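
As a toy illustration of what “combing through” volumes means in practice, here is a minimal distant-reading sketch. The file names are hypothetical, and a real project would run over millions of OCR’d volumes rather than two:

```python
import re
from collections import Counter

def word_frequencies(paths):
    """Tally word frequencies across a corpus of plain-text volumes."""
    counts = Counter()
    for path in paths:
        with open(path, encoding="utf-8") as f:
            # Lowercase and keep alphabetic runs; OCR errors such as
            # "tlie" for "the" survive this step and skew the counts.
            counts.update(re.findall(r"[a-z]+", f.read().lower()))
    return counts

# Hypothetical file names; a HathiTrust-scale corpus has millions of volumes.
corpus = ["volume1.txt", "volume2.txt"]
print(word_frequencies(corpus).most_common(20))
```

Even this crude counter shows where OCR’s flaws bite: every misrecognized character becomes a phantom “word” in the tallies.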

The tools used to navigate this ocean of information are being improved upon, but, just as with OCR, there are drawbacks. Notable in this week’s readings is a blind spot in Underwood, Bamman, and Lee’s study, “The Transformation of Gender in English-Language Fiction.” The authors studied the trends and significance of gender in 104,000 works of fiction over the last 170 years, focusing on how the attributes of characters are more or less sorted into gendered categories. Their software, BookNLP, identifies characters and the words connected to each character. But the software and the researchers aren’t able to parse out the gender of first-person narrators, so those characters are excluded from consideration in the study. This seems problematic because the narrator is often the main character of the story. It raises lots of questions: are there significant trends in the frequency of first-person versus third-person narration over time? How often do texts in the sample employ first-person narration? Is the gender of the first-person narrator often different from that of the other characters, and would including the narrator change the findings of the study? Sometimes the methods used in text mining raise more questions than they answer. This is not necessarily a bad thing, as it pushes for a greater understanding of the texts and provides scholars with avenues of inquiry worth following up on.
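
A crude sketch shows why the blind spot exists: detecting that a text is narrated in the first person is easy, but the pronoun “I” carries no gender signal at all. This heuristic is my own, not BookNLP’s actual method:

```python
import re

def is_first_person(text, threshold=0.01):
    """Guess first-person narration from the rate of first-person
    singular pronouns among all tokens (a rough heuristic)."""
    tokens = re.findall(r"[A-Za-z']+", text)
    hits = sum(1 for t in tokens
               if t.lower() in {"i", "me", "my", "mine", "myself"})
    return bool(tokens) and hits / len(tokens) > threshold

sample = "I walked to the shore while the wind pulled at my coat."
print(is_first_person(sample))  # True — but "I" reveals nothing about gender
```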

Week of March 19 — The First Humanities Computing Center

This week’s reading and RECAAL project panel featuring Julianne Nyhan, Geoffrey Rockwell, and Marco Passarotti (in addition to our professor, Steven Jones) highlighted the work being done to reverse-engineer the Center for the Automation of Literary Analysis–arguably the first humanities computing center–founded by Father Roberto Busa in Gallarate, Italy. Recovering the physicality of the center for the purposes of creating an interactive 3D model will take much work, and I found the contribution of Dr. Nyhan particularly interesting. Her work involves tracking down the women who worked in the center as punchcard operators and interviewing them about their work under Busa in the mid-twentieth century. Their stories have been largely neglected in the historical record of Busa’s work.

Nyhan noted that Busa does not mention the women by name, with few exceptions, and that this may have to do with his need to be perceived as a solitary visionary rather than as the leader of a network of people doing skilled labor. The women were also denied formal channels of advancement, though some acted as supervisors. In the reading for this week, one woman lamented the lack of a certificate or diploma for the women engaged in computing work–a reflection of their standing in the organization and an impediment to finding similar work once the project was complete.

In her work to recover and amplify the stories of the women punchcard operators, Nyhan is enriching the historical record of humanities computing and enabling women to see themselves in that history.

Week of March 5 — Artificial Intelligence and Dystopia/Utopia

Recently Facebook presented me with a Microsoft ad whose hashtag caught my eye: #AIforGood. Artificial intelligence for good. It implies a binary and a default: AI for good means there is AI for bad, and calling attention to its goodness suggests that the default is not good. It is an odd way to acknowledge that AI is not neutral in the public’s imagination, conjuring all the dystopian visions of computers overtaking humans, of epic human-robot wars, and of all the ways in which robots, cyborgs, and androids threaten the status quo. But Microsoft promises that *this* AI is *good*.

It got me thinking about another controversial application of AI — the ambitious work of entrepreneur and CEO Martine Rothblatt to transcend the limits of human biology by fusing artificially intelligent robots with the essence of human beings. The driving ambition behind her project is immortality through cyberconsciousness — she has already created a robotic facsimile of her wife, complete with synthetic emotional intelligence.

All this is to say that the future of AI is coming, shepherded by corporations and tech enthusiasts, and the difference between utopia and dystopia is a matter of perception. Many people would recoil at the thought of a wife living on after death as a machine, but to Rothblatt that is the ideal. People and entities with the financial and technological power to stay on top of, or ahead of, AI are not fearful that it will replace their jobs or become sentient, because they maintain (the illusion of?) control. People who have already been outsmarted, outpaced, or seen their positions outsourced to computers are rightfully concerned that they will continue to be marginalized. How will they participate in a world that doesn’t need them anymore? To sell the public on AI, corporations will need to find a way for the public to participate and will need to rebrand robots to eliminate the scare factor. And that, perhaps, is how #AIforGood came to be. As for cyberconscious robots, there is still a long way to go.

Week of February 26 — Automation and Machine Learning

This week’s topic and readings are reflected in the headlines as Pepsi lays off hundreds of thousands of workers in favor of automating production and machine learning ascends to new heights as computers train each other to become gaming champions through reinforcement learning. From reading Greenfield we can only expect the volume and frequency of such news to increase as more jobs are automated and machines become smarter. Greenfield raises important questions about what the future will look like and whether we should prepare for a Utopian Leisure Economy or something more sinister. Experimentation with a guaranteed minimum income/universal basic income (UBI) is already underway. Interestingly, the fiction we’ve been reading from the Institute for the Future maintains engagement between humans and the formal economy within the confines of capitalism.
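
“Computers train each other” refers to self-play reinforcement learning. As a toy illustration of the underlying idea (nothing like the scale of the systems in the headlines), here is a minimal sketch of tabular Q-learning on an invented five-cell corridor:

```python
import random

# Tabular Q-learning on a five-cell corridor: the agent starts at cell 0
# and earns a reward of 1 for reaching cell 4. Game-playing systems use
# the same core idea (learning action values by trial and error), often
# via self-play and at a vastly larger scale.
N_STATES = 5
ACTIONS = [-1, +1]                        # step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Nudge Q(s, a) toward reward plus the discounted best future value.
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# The learned policy should step right (+1) from every non-goal cell.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```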

One of the two quotes Greenfield chooses to open his essay with is from science-fiction writer Ursula K. Le Guin: “We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings.” It’s hard to imagine what comes after the late capitalism we find ourselves in presently, but Greenfield hints that the conditions for resistance and revolution are mounting. Disruptive technology and technological unemployment are going to leave millions idle and destitute more quickly than a safety net of UBI can be successfully implemented. This makes for some interesting speculation: how will this continue to shape a political landscape where dying industries cling to relevancy (e.g., coal), corporations make more money than ever (Pepsi) and exert unparalleled political control (Citizens United), and convenient scapegoats (immigrants) are blamed for the economy’s failure to train people for jobs in a post-human world? It would be interesting to learn more about the ways in which technology and xenophobia are linked in the developed world. Will wealth be redistributed more equitably once the economy collapses? What will humans do with their time when they aren’t compelled to work, and what will be the incentive to draw workers into fields where humans are still necessary? A dystopian caste system comes to mind.