The Contemporary Novel

(Introduction to “The Contemporary Novel” colloquy at Arcade.)

Any colloquy on the contemporary novel faces two immediate challenges.

We must deal first with our adjective. What do we mean by "contemporary"? The primary sense of the word, according to the trusty OED, is "[b]elonging to the same time, age, or period; living, existing, or occurring together in time." This sense brings to mind the calendrical fetish so deeply ingrained in the DNA of literary study. We have long presumed that synchrony conceals a cultural logic–an episteme, a Zeitgeist, a generational affiliation, whatever collective term we wish to employ to describe a moment–in need of analysis or exposure by the astute critic. Everything is connected, many of us imagine, and our job is to show just how.

Our faith in the significance of synchrony, in the sanctified integrity of the period, links up to the OED’s fourth sense of "contemporary": "Modern; of or characteristic of the present period; esp. up-to-date, ultra-modern; spec. designating art of a markedly avant-garde quality, or furniture, building, decoration, etc., having modern characteristics." After all, if each period has a unique character, what might the character of the present be? When did our period begin? How is it changing? Even historical scholarship opposed to histoire événementielle (event-driven history) doesn’t bypass these questions, but merely folds longer durations of time into the "moment" of the modern. From the perspective of the longue durée, after all, might not authors such as James Joyce and Virginia Woolf seem contemporary with Toni Morrison and Don DeLillo?

The second challenge, the challenge of our noun, is both simpler and more vexing. The term "novel" already betrays a relationship to time, such that the adjective might seem redundant. In a sense, inasmuch as they have since their eighteenth-century rise always advertised their newness, all novels are contemporary novels, at least with the moment in which they are written. Without delving here into novel theory–this is the job of our bloggers–we might begin questioning the noun from another perspective, the perspective of genre. After all, what is a novel? And whatever it is, isn’t the contemporary moment defined by its failure, exhaustion, waning, or death? Don’t newer technologies, media, and genres more effectively give us a taste of the Zeitgeist than the stale conventions of realism and metafiction? In an era when, it seems, television, the graphic novel, and nonfiction have stolen the novel’s proverbial thunder, who wants to give Jonathan Franzen the time of day?

That we don’t have good answers to these questions, or at least answers that form anything resembling a critical consensus, speaks to the vital need for this colloquy. My hope is that Arcade will evolve into an important, if admittedly informal, zone where we question the key terms of this multifaceted field, work to form a consensus or at least understand our differences, and organize the early stages of diverse research programmes that will in time find their way into our scholarship and public writing.

—Lee Konstantinou, August 2011

How to Squeeze the Humanities 101: The Case of Mark Bauerlein

(Crossposted at Arcade.)

Mark Bauerlein–author of The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future (Or, Don’t Trust Anyone Under 30) and Literary Criticism: An Autopsy–recently released a widely discussed study called "Literary Research: Cost and Impacts" for the Center for College Affordability and Productivity. This short study concludes that the impact of literary scholarship "does not justify the labor that went into their making," and that "[a] university’s resources and human capital is thereby squandered as highly-trained and intelligent professionals toil over projects that have little consequence."

Before looking at the substance of Bauerlein’s claims, a word on the CCAP, the "independent, nonprofit research center" that published this study. The CCAP defines its mission this way:

We define our mission rather broadly. “Affordability” means not only rising tuition and other costs to the consumer of education services, but more broadly the burden that colleges impose on society. “Productivity” refers not only to the costs and resources needed to educate students and perform research, but also to the measurement and quality of educational outcomes. CCAP is concerned about finding new ways to do things better–to improve affordability and productivity. In particular, we are interested [in] assessing how the use of the forces of the market could make higher education more affordable and qualitatively better.

Very openly, the CCAP regards "affordability" and "productivity" in broadly libertarian terms. On November 18, 2011, the CCAP co-hosted a conference with the Cato Institute called "Squeezing the Tower: Are We Getting All We Can From Higher Education?" It seems fair to me to imagine that Bauerlein’s report was gratefully added to the program of this conference in the hope of bolstering arguments meant to convince University administrators–and, in the case of public schools, state legislatures–that their schools are unproductive, in need of the loving and gentle invisible hand of the free market to correct the unproductiveness that–no surprise here–Bauerlein argues is endemic to literary scholarship. By "the burden that colleges impose on society" it is not hard to imagine that what the CCAP has in mind is the burden that publicly financed colleges–or even college educations financed by government grants or loans–impose on taxpayers who might more fruitfully pump their money into hedge funds or private charities. It has lately become popular to claim that there is a "bubble" in higher education, by which free-market lovers usually mean that a higher education is only worthwhile to the degree that it contributes to getting economically remunerative post-college jobs. What’s the point of getting a low-interest government loan to educate yourself if your fate–determined by the iron laws of economics!–is to be little more than a barista?

Given this provenance, it is unsurprising that there are serious problems with this report, which I will address below. But despite the uses to which the CCAP might want to put Bauerlein’s study, the report also raises serious issues, albeit in a highly unsystematic way. The centrally important question this study asks — but fails to answer — can be put in the form of a riddle: When is hyperproductivity unproductive?

Studying the scholarly output of literature departments across a range of schools (the University of Georgia, SUNY-Buffalo, the University of Vermont, and the University of Illinois), Bauerlein finds that literary scholars are highly productive, even after they gain tenure, crafting high-quality scholarship and criticism across the span of their careers. This is a highly inconvenient finding, of course, if you are a school administrator whose goal is to find arguments for the abolition of tenure–and for the increased casualization and adjunctification of humanities faculty. Such arguments typically rely on the myth that the University literature department is crammed with tenured radicals who do nothing but sit on their collectivized asses all day. That the tenure system is ultimately what is at stake in this debate should not be in doubt. In another article, "Is Tenure Doomed?," written three years before he released his CCAP study, Bauerlein concludes:

To fend off adjunctization, then, individuals and professional organizations need to craft and defend a different model. They need to develop employment schemes less absolute but still protective and meritocratic. One possibility might be to grant teachers some form of tenure, but on the basis of teaching duties, not research expertise. That is, they would be hired to handle undergraduate student demands more than to fulfill a disciplinary field. So, as the burdens shift in the undergraduate student body–for instance, fewer students in Romance languages, more in freshman composition–professors would shift as well, in this case, with Romance language professors reducing their language courses and assuming freshman comp duties (after some re-training). That would require, of course, that professors lighten their research identities and raise their teaching profiles–a welcome adjustment in all humanities and "softer" social science fields.

To create a convincing case for an emphasis on teaching over research, a change of emphasis that eviscerates the reason for having tenure in the first place–the whole point of tenure isn’t job security per se, but the securing of academic freedom from the market and the state–Bauerlein must devise some other way of discovering unproductivity in this curious situation of–if anything–hyperproductivity. What he points to, via a methodology of citation-counting that many have criticized but which for this blog post I am happy enough to accept, is that though scholars are highly productive, their research has very little "impact," or an impact that should not be regarded as "significant." On average, articles and books receive very little attention in the years immediately following their publication. What is the justification for paying scholars so much money, Bauerlein asks, if what they write receives so little attention from other scholars in their own field? It’s a good question.

Bauerlein concludes, very much in keeping with his long-established preferences, that there isn’t a justification. Institutions of higher learning should reduce the demand for scholarly production in favor of service and teaching. Whether or not the individual scholar finds her research personally enriching, contemporary literary scholarship is a waste of valuable time when viewed from a systemic perspective. At times, Bauerlein even tries to suggest that "[c]ampus leaders may, in fact, find a grateful constituency among the faculty" when they change the balance of teaching, service, and research!

The problem with this argument should hopefully be apparent: the norms guiding Bauerlein’s study, especially around the definition of "impact," are so thinly defended as to be almost meaningless. What is the purpose of scholarship? Why should citations count more than, dare we say it, truth-value? What is the mission of a University if not to produce excellent scholarship? This is, it must be emphasized, different from the problem of overproduction in particular subfields. The relative lack of impact of every subsequent essay on Dickinson–compared to every essay on David Foster Wallace–is a separate question from the norms that should systematically guide the balance of scholarship, teaching, and service. Moreover, without comparative data, it’s hard to know what sort of comparative impact English departments are having. How do they compare to mathematics departments? Classics departments? If a physicist devises a successful unification of gravity and the other forces that is only legible to three other theoretical physicists in the world–and is therefore infrequently cited–was the money that supported that research a waste of University resources?

If the purpose of the University is to produce knowledge–and by Bauerlein’s account the English department is successful at producing knowledge–then the cost of producing knowledge is whatever it costs to produce. If anything, the cost of producing literary monographs and articles is neither more nor less, in our current system of higher education, than the market rate of hiring faculty to do the work that is expected of them. (And given that literary study doesn’t require particle accelerators, the costs are actually comparatively pretty low, though one shouldn’t doubt that the Department of Defense would quickly step up should the new quantitative turn in literary study require a couple billion dollars in taxpayer support.) It is, moreover, the conditions of the market that propel an arms-race-like escalation of demands on scholarly productivity. To the degree that universities arrive at collective agreements that modify disciplinary norms–whether in literature departments or any department–they are making the environment within which literary scholarship is produced less market-like. Making conditions less market-like might be a good thing, if the de-escalation of scholarly hyperproductivity weren’t also taken as an excuse to dismantle the already fraying system of tenure. The negative response to Bauerlein’s study is grounded, I suspect, in the fear that it was commissioned and will be used as an excuse to increasingly adjunctify the humanities. Far from being an alternative to adjunctification, an emphasis on teaching and service often facilitates its acceleration.

What Bauerlein doesn’t really consider is the possibility that a less manically competitive set of tenure requirements might be devised as a way of allowing, on the one hand, more time to be devoted to each essay or book and as a way of facilitating, on the other hand, a raw increase in the number of citations, if that’s his preferred metric of impact (rather than truth-value). After all, if fewer essays and books were published each year on Dickinson relative to the total number of Dickinson scholars, might it not be increasingly likely that each of those essays and books would get a higher number of citations? That that scholarship might even be widely read? The change I am suggesting would entail not the substitution of teaching in place of scholarship–but rather a new emphasis on the article or essay as the coin of the disciplinary realm. Of course, no such academic arms control agreement is likely to be adopted unless the broader trend of adjunctification is addressed.

This process of adjunctification, which is often misunderstood as an "overproduction" of Ph.D.s, is the ground upon which the "insignificant" scholarship we are asked to produce currently stands. It is the ground that must change if we hope to make more of an impact on our respective fields.

Alan Jacobs and the Rise of the Reading Class

(Crossposted at Arcade.)

In a recent article in The Chronicle of Higher Education called "Why We Can’t Teach Students to Love Reading," Alan Jacobs argues that "’deep attention’ reading has always been and will always be a minority pursuit." The inevitable minority status of deep reading "has been obscured in the past half-century, especially in the United States, by the dramatic increase in the percentage of the population attending college, and by the idea […] that modern literature in vernacular languages should be taught at the university level."

Mass higher education has artificially propped up reading, in Jacobs’s view, leading many to falsely believe that engaged, long-form reading is something everyone should love. Drawing on a 2005 sociological survey of reading practices, which I discuss at greater length below, Jacobs calls the population of deep readers "the reading class." Our "anxiety about American reading habits, and those in other developed nations to a lesser degree," he concludes, "arises from frustration at not being able to sustain a permanent expansion of ‘the reading class’ beyond what may be its natural limits."

In fact, "the idea that many teachers hold today, that one of the purposes of education is to teach students to love reading–or at least to appreciate and enjoy whole books–is largely alien to the history of education." We are deceiving ourselves if we think we can teach students to love reading or for that matter to read more deeply than they would "naturally" do.

Because focusing on print is a cognitively alien (and alienating) activity for children, because an appreciation for long-form reading must be, in the words of Steven Pinker, "bolted on" the student, and because that bolting-on process is so very "painstaking," we should, in Jacobs’s view, "extricate reading from academic expectations." Instead of teaching "[s]low and patient reading[…]"–a pursuit which "properly belongs to our leisure hours"–we would do well to teach high schoolers and undergraduates how to "skim[…] well, and read[…] carefully for information in order to upload content."

In short, mass literary culture is an artificial construction produced in part by an unnatural and inauthentic university system. The real or authentic form of reading happens–definitionally–outside academia, among autodidacts and amateurs. Though he couches it in a breezy and easygoing tone, Jacobs is making an extraordinarily destructive argument, not only from the perspective of someone who is invested in the flourishing of the academy but also from the perspective of someone who wants to enlarge literary culture. I count myself among both groups.

In an era when universities are seeking new ways to justify slashing and burning the humanities, Jacobs provides fresh ammunition to administrators. Real reading can’t, apparently by definition, happen in the classroom. Real reading happens in the marketplace, among individuals or small private groups of enthusiasts. Why fund literature departments if they, at best, have no effect on literary appreciation or, at worst, actively inculcate shame and fear in potential readers by making reading a pill?[1]

To support his arguments, Jacobs cites a great 2005 Annual Review of Sociology article, "Reading and the Reading Class in the Twenty-First Century," by Wendy Griswold, Terry McDonnell, and Nathan Wright. In this review of recent approaches to the sociology of reading–and investigations of multiple literacies–Griswold, McDonnell, and Wright show that reading is always the product of collective determinants and institutional mediation, and suggest that reading might indeed become a minority taste in the future.

Contra Jacobs’s claims, however, their discussion of the development of a "reading class" has nothing to do with the "natural" boundaries of the reading public, but is rather about the way different institutional arrangements lead to different reading levels and practices.

They do argue that "historically the era of mass reading, which lasted from the mid-nineteenth through the mid-twentieth century in northwestern Europe and North America, was the anomaly" and that "[w]e are now seeing such reading return to its former social base: a self-perpetuating minority that we shall call the reading class."

But far from being the practice of a tribe of "natural readers," as Jacobs wants to argue, reading always happens in terms of a "social base." Whether a majority or a minority taste, there is little that is "natural" or "unnatural" about what they describe. Formal education is "the main determinant of literary proficiency," but even the isolated reader (or the autodidact Jacobs celebrates) is enmeshed in large and complex social systems of literary framing and pre-digested interpretation. Whatever their motivations or virtues, the self-perpetuating minority of the "reading class" relies as much on this "social base" as mass readers do. The anomalous nature of mass reading is not an argument against it–or for it. It is merely a social fact.

What this means is that Jacobs misunderstands the real implications of his own claims. From a social fact (the unnaturalness of mass long-form reading) he derives what seems to me a non-sequitur conclusion (the desirability of this decline). As I have argued in a previous post, the unsuitability of our biology to a certain practice is not an argument against that practice. Likewise, the universality of a biological aspect of the human organism is no argument for it. The artificiality, difficulty, and education-dependence of deep reading are not an argument against the humanities but could well be an argument for the humanities. After all, if we value long-form reading–and long-form reading requires intensive training to perform well–we had best invest in institutions whose goal is the inculcation of this skill.

Finally, in a literary-historical register, Jacobs’s arguments seem to call for the development of a research program that could empirically elaborate upon the conclusions of Griswold, McDonnell, and Wright. If, as I suspect, the demand for literature is anything but "natural," but is itself produced institutionally, literary scholars should dedicate themselves to investigating the historical, social, political, and economic production of demand. Post-WWII U.S. literature would be an especially ripe case study for anyone interested in this research program, not only because the institutional forces producing demand are so well documented but because for many of us these forces have had very powerful personal effects on who we are and our relationship to literary art.

Notes

1. Some might argue that literature departments ought to be justified without referring to their salubrious effects on reading habits and practices. This is something like Stanley Fish’s argument on the uselessness of the humanities. I won’t address this argument here, but I should say that it’s problematic and probably leaves the humanities on even weaker footing, even if only in the purely cynical terms of the administrative fight to secure funding.

William S. Burroughs’ Wild Ride with Scientology

Over at io9 you can read a short article I wrote on William S. Burroughs’ relationship to Scientology.

In 1959, the same year Olympia Press published his most famous novel Naked Lunch, the writer William S. Burroughs visited the restaurant of his friend and collaborator, Brion Gysin, in Tangiers. There, Burroughs met John and Mary Cooke, a wealthy American hippie couple who were interested in mysticism. Burroughs recalled, “There was something portentous about it, as though I was seeing them in another medium, like they were sitting there as holograms.”

Who were these portentous holograms? Scientologists. Indeed, John Cooke is reported to have been the very first person to receive a status of “Clear” within Scientology, and was deeply involved in its founding. Cooke had been trying to recruit Gysin into the Church, declaring that the artist was a natural “Clear” and “Operating Thetan.” Ultimately, it was Burroughs, not Gysin, who explored the Church that L. Ron Hubbard built. Burroughs took Scientology so seriously that he became a “Clear” and almost became an “Operating Thetan.”

Read the rest here.

Biological Universals as Authenticity, or, What’s the Matter with Steven Pinker?

(Crossposted at Arcade.)

In a fascinating parable, “A Story In Two Parts, With An Ending Yet To Be Written,” posted on the National Humanities Center’s On the Human Web site, Paula Moya tells the tale of a researcher named Kitayama who travels from the land of Interdependence to the land of Independence, conducts research into the way that culture shapes perception, and finds his results grossly misinterpreted by journalists (as reinforcing racist narratives of essential ethnic differences). Kitayama’s basic finding is that those of an Independent cultural disposition tend to commit the “fundamental attribution error” when judging actions, overvaluing the importance of personality as an explanation of action, whereas Interdependent folk are likely to consider situational factors when judging human action and agency. [1]

The mistranslation of Kitayama’s work from culture to race in Moya’s story is a not-so-disguised allegory of the journalistic framing of forthcoming research by Jinkyung Na and Shinobu Kitayama, specifically their article “Spontaneous Trait Inference is Culture Specific: Behavioral and Neural Evidence.” This mistranslation (from culture to race; from Those Reared in an “Asian” Cultural Context to simply Asians) is presented as an example of what Moya and her collaborator Hazel Markus call “doing race,” “creating ethnic groups based on perceived physical and behavioral characteristics, associating differential power and privilege with these characteristics, and then justifying the resulting inequalities.” The comments following Moya’s article are well worth reading in their entirety, as is Andrew Goldstone’s great Arcade reply, “Race, Ethnicity, Brains: Some Marginalia.”

There is much to say about Moya’s post, but I want to point to a reference she makes to the pop science journalism of Steven Pinker. In Moya’s allegory, Kitayama achieves a measure of success, getting together with Recognition (Connie), only to come home to the following scene:

All was going well, that is, until one day Kitayama came home in the middle of the afternoon and found Connie in the bedroom, looking flushed and breathing heavily as she shoved a book under the pillow. “What are you doing?” demanded Kitayama. “Since when do you hide your reading material from me?” Connie avoided his gaze as she handed him the book she’d been reading. Kitayama felt an arrow pierce his heart as he gazed at the title: The Blank Slate: The Modern Denial of Human Nature. “How could you?” he cried, “Don’t you know that Pinker believes that human behavior is generated by the deeper mechanisms of mental computation that may be universal and innate? He claims that culture is epiphenomenal to more basic psychological processes! It’s everything I’ve worked so hard to overturn!”

“I’m sorry, dear,” replied Connie, looking genuinely apologetic. “It’s just so scientific,” she offered. “There’s something so wonderfully hard about cognitive neuroscience,” she added with an appreciative shiver.

I am no fan of Steven Pinker, least of all his attempts to write about the arts, but I would not characterize his views on culture in the way the fictional Kitayama does. In The Blank Slate, Pinker does not argue that culture is epiphenomenal so much as claim that there is a list of human universals that transcend cultural difference. He writes, “My goal in this book is not to argue that genes are everything and culture is nothing–no one believes that–but to explore why the extreme position (that culture is everything) is so often seen as moderate, and the moderate position is seen as extreme.” That is, in his view certain aspects of human existence are culturally variable–though no less biological for their variation–and other aspects of humanity can be found among all cultures. The fictional nature of Moya’s story might suggest that delving into “Kitayama’s” error is beside the point, but I think looking at what Pinker is really arguing will yield some interesting insights into the significance of Na and Kitayama’s real research.

At this level of abstraction, it seems to me that Pinker’s claim is hard to dispute, but the problem is that it is also hardly very interesting from the perspective of the human sciences. What Pinker fails to tell us with any level of precision is where we can find the boundary between difference and identity and what the significance of that boundary is. Pinker’s appendix listing human universals is so free of relevant discussion and context as to leave the reader scratching his head–though it seems perfectly plausible that human universals, like the language faculty, exist and might tell us something about the arts. His discussion of evolutionary psychology, for this head-scratching reader at least, fails to convince, though this is more the fault of contemporary evolutionary psychology than Pinker, whose own area of expertise is linguistics.

As many reviewers have pointed out, The Blank Slate’s discussion of artistic production (in genetic or evolutionary or biological terms) borders on the ridiculous, moving quickly and problematically from fact to norm, abandoning science for poorly-thought-through moralizing. From arguments about universal human capacities to appreciate symmetry or tonality, Pinker claims priority for artworks that make use of symmetry and tonality.

After embarrassingly misquoting Virginia Woolf, and fundamentally misunderstanding her views on human nature, Pinker disparages “the [then] new philosophy of modernism that would dominate the elite arts and criticism for much of the twentieth century, and whose denial of human nature was carried over with a vengeance to postmodernism, which seized control in its later decades.” Modernism’s problem is that it allegedly denies human nature, which is a mistake because “[a]rt is in our nature–in the blood and in the bone… in the brain and in the genes… In all societies people dance, sing, decorate surfaces, and tell and act out stories.”

Of course, Pinker is aware enough of how problematic his argument is to feel the need to explain the prestige of artworks (elite artworks, as I’m sure Sarah Palin would not hesitate to note) that fail to meet his Fact-Backed-Norm, and so he whips out his shopworn Bourdieu. “The conviction that artists and connoisseurs are morally advanced is a cognitive illusion, arising from the fact that our circuitry for morality is cross-wired with our circuitry for status…” We are also informed that the avant-garde tendency to “sneer[] at the bourgeoisie” is

a sophomoric grab at status with no claim to moral or political value. The fact is that the values of the middle class–personal responsibility, devotion to family and neighborhood, avoidance of macho violence, respect for liberal democracy–are good things, not bad things [as presumably postmodernists think]. Most of the world wants to join the bourgeoisie, and most artists are members in good standing who adopt a few bohemian affectations.

Humans who appreciate modernist or avant-garde artworks only pretend to do so because of an ultimately (in an evolutionarily psychological sense) cynical desire to gain acclaim and prestige (and fitter sexual partners, which is what the game often boils down to) or because we are “cross-wired” in weird ways:

As Bourdieu points out, only a special elite of initiates could get the point of the new works of art. And with beautiful things spewing out of printing presses and record plants, distinctive works need not be beautiful. Indeed, they had better not be, because now any schmo could have beautiful things.

We can all be grateful that Pinker doesn’t have his moral-circuitry cross-wired with his status-circuitry. Certainly, none of us could imagine that there is any advantage Pinker might gain (in either a proximate or ultimate sense) in condemning the menace of Sneering Sophomoric Status-Grabbing Bohemian Modernist/Postmodernist Beauty-Haters in these terms, especially since those of us who enjoy ugly artworks (how can I deny that I am a hater of schmos?) are so powerfully dominant.

It almost goes without saying, especially for anyone with even a remote understanding of the history of the arts, that there is reason to be skeptical of Pinker’s account of how and why we appreciate difficult and avant-garde artworks–let’s break out the brain scanners, people, and prove him wrong!–but even more troubling is Pinker’s not-so-tacit claim that we should appreciate art along lines he approves of.

Even for the sake of argument granting his claims, who is to say that our evolutionarily psychological status-seeking response to art is invalid or a complicated form of cynicism? As I noted in my previous post, where I discuss the attempt of neuromarketers to use brain scanners as a means of breaking through social dissemblance to understand what we really want from our advertisements, our films, and (I’m sure some day soon) our literature, Pinker’s invocation of alleged aesthetic universals assumes what it needs to argue for: the superiority or desirability of the universals he celebrates.

After all, rage is a human universal, as is sickness, as is the genetic programming that leads us all inexorably toward death [2], but the fact of their universality is in no way an argument for their desirability. Indeed, given that we’re all biology all the way down–our universals and our differences, our aesthetic sense and our social sense, our fated deaths and our desire to transcend death are all by Pinker’s account proximately or ultimately expressions of or bound by biology [3]–we are very quickly back to square one even if we grant the validity of his argument. Pinker’s rhetoric honors a certain element of our biology (universals) as authentic while granting other aspects (cultural differences, social motivations, a distaste for the popular) an almost unnatural or diabolical power, but why should we?

I would tentatively contribute to the discussion Moya has provocatively begun by suggesting that, in a sense, humans are cultural all the way down precisely because we’re biological all the way down, as Pinker’s errors help us see more clearly. Returning to the research that prompted Moya’s parable, Kitayama argues that cultural differences are “deep,” that they go “much deeper” than we previously thought, engaging in a move that might be viewed as the reverse of Pinker’s (biological difference or variability seems now to be the authentic or valorized term). But what if these differences turned out to be “shallow”? What if cultural differences were actually “skin deep”? What, if anything, would follow? After all, our shallowness too would be no less biological than our depths.

(Note: This post has been slightly modified.)

[1] I will blithely ignore the degree to which the “fundamental attribution error” should be considered an error, though I should note that Kitayama doesn’t use the term.

[2] Mysteriously, in the appendix of The Blank Slate, which reproduces Donald E. Brown’s list of human universals, death is not listed as a human universal, though there is an entry for “death rituals.”

[3] Also, chemistry and physics and many other physical processes.

The End of Ideology (Critique)?

(Crossposted at Arcade.)

I.

In The Sublime Object of Ideology, Slavoj Žižek famously lays out his analysis of claims that we* find ourselves in a postideological age. Žižek doesn’t exactly mean “postideological” in the sense of Daniel Bell or Francis Fukuyama. For Bell or Fukuyama, postideology is characterized by the rise of technocracy, the transformation of great political debates into parochial, microideological questions: what tax rate to set, how to regulate this or that industry, what zoning ordinances to pass in a city. For Žižek, by contrast, postideology refers to the failure or collapse of ideology critique as such.

We used to think that by exposing frauds, lies, and the subtle ideological lacework of high cultural artefacts, we liberated ourselves from self-deception and false consciousness. Now, Žižek admits, everyone practices ideology critique. We have achieved a reflexive cynicism, what Peter Sloterdijk calls “cynical reason.” Recounting Sloterdijk’s argument in Critique of Cynical Reason, Žižek writes: “Cynicism is the answer of the ruling culture to this kynical subversion: it recognizes, it takes into account… the distance between the ideological mask and the reality, but it still finds reason to retain the mask.” Under such conditions, “the traditional critique of ideology no longer works. We can no longer subject the ideological text to ‘symptomatic reading’, confronting it with its blank spots, with what it must repress to organize itself, to preserve its consistency–cynical reason takes this distance into account in advance.”

In this post, I’d like to question whether traditional ideology critique is as obsolete as Žižek suggests, and eventually question the efficacy of his end run around its alleged collapse. I haven’t arrived at strong conclusions yet, but I’d love to get a conversation started in the comments section that might help me figure out whether or not Žižek is right.

II.

The Space Merchants, a small masterpiece of science fictional satire, will serve as my model of traditional ideology critique.

Frederik Pohl and C.M. Kornbluth’s 1952 novel depicts a dystopian future in which the free market has colonized all governmental functions and public space. The House of Representatives and Senate represent not American states but corporate firms, in proportion to those firms’ financial might. The social world is divided between two great classes: immiserated consumers (the overwhelming majority of the population, many of whom rent individual stairs in skyscrapers to sleep upon every night) and wealthy executives (a tiny but powerful minority who enjoy slightly more space in tiny studio apartments).

Government-engineered overpopulation (meant to increase the consumer base) threatens to consume all of Planet Earth’s resources, inspiring the rise of the “Consies,” radical conservationists who engage in sabotage and other acts of dissent against the monolithic consumerist order of the day. Let it not be said that Pohl and Kornbluth’s satire is subtle.  Nonetheless, it has aged surprisingly well for Golden Age science fiction. The novel’s plot hinges on an effort by the “Star Class” copysmith, Mitch Courtenay of Fowler Schocken Associates, to successfully sell American consumers on the prospect of colonizing Venus, which is by all accounts a hellhole–scalding hot, wracked by 500 mph winds, chemically toxic to biological life–more or less uninhabitable. The novel is thus not only about the social dynamics of its dystopian world, but a commentary on the contemporary function and dangers of advertising, a popular topic at the time (and ever since).

There is much one can say about the novel, but for the purposes of this post I’d like to present one especially interesting scene. Early in the novel, Mitch is trying to convince Jack O’Shea, the first man to land on Venus and return to Earth alive, that marketers can indeed shape consumer preferences using only language, that in fact O’Shea’s various consumer choices have been successfully, subconsciously manipulated by Fowler Schocken Associates.

O’Shea laughed uncertainly. “And you did it with words?”

“Words and pictures. Sights and sound and smell and taste and touch. And the greatest of these is words. Do you read poetry?”

“My God, of course not! Who can?”

“I don’t mean the contemporary stuff; you’re quite right about that. I mean Keats, Swinburne, Wylie—the great lyricists.”

“I used to,” he cautiously admitted. “What about it?”

“I’m going to ask you to spend the morning and afternoon with one of the world’s great lyric poets: A girl named Tildy Mathis. She doesn’t know that she’s a poet; she thinks she’s a boss copywriter. Don’t enlighten her. It might make her unhappy.

‘Thou still unravish’d bride of quietness,

Thou Foster-child of Silence and slow Time—’

That’s the sort of thing she would have written before the rise of advertising. The correlation is perfectly clear. Advertising up, lyric poetry down. There are only so many people capable of putting together words that stir and move and sing. When it became possible to earn a very good living in advertising by exercising this capability, lyric poetry was left to untalented screwballs who had to shriek for attention and compete by eccentricity.”

“Why are you telling me all this?” he asked.

“I said you’re on the inside, Jack. There’s a responsibility that goes with power. Here in this profession we reach into the souls of men and women. We do it by taking talent and redirecting it. Nobody should play with lives the way we do unless he’s motivated by the highest ideals.”

O’Shea reassures Mitch not to worry, that his motives in promoting the colonization of a nearly uninhabitable planet are pure. “I’m not in this thing for money or fame,” he says. “I’m in it so the human race can have some elbow room and dignity again.” Mitch is shocked at this answer and informs the reader that “[t]he ‘highest ideal’ I had been about to cite was Sales.”

What may not be obvious, and what it took me a while to wrap my head around, is that Mitch is not–and at no time in the novel can he ever be accused of being–a cynic. Mitch is a true believer in the sacrament of Sales. He believes in the virtue of the current order–and sees nothing deceptive or self-interested in his pursuit of what he regards as the “highest ideal.” His uprightness and inability to see the horror before his eyes are, of course, partly what make The Space Merchants so funny.

When, later in the novel, he comes to understand the ideological flaws in his worldview, he begins acting differently, ultimately bringing theory and action into alignment. Score one for traditional ideology critique!

III.

Have things changed much since The Space Merchants was published? I’d suggest the answer is no.

A similar non-cynical commitment can be seen in the recent vogue for “neuromarketing” and “neurocinema,” a practice that involves using fMRI scans to understand more fully how we process visual and auditory stimuli while watching advertisements and films. Firms with such imaginative names as MindSign Neuromarketing, Neuro-Insight, NeuroFocus, and the Social Cognitive Neuroscience Laboratory at Caltech are leading the effort to figure out what our brains “really” think as we watch film.

Explaining the goals of neurocinema, Peter Katz says:

Movies could easily become more effective at fulfilling the expectations of their particular genre. Theatrical directors can go far beyond the current limitations of market research to gain access into their audience’s subconscious mind. The filmmakers will be able to track precisely which sequences/scenes excite, emotionally engage or lose the viewer’s interest based on what regions of the brain are activated. From that info a director can edit, re-shoot an actor’s bad performance, adjust a score, pump up visual effects and apply any other changes to improve or replace the least compelling scenes. Studios will create trailers that will [be] more effective at winning over their intended demographic. Marketing executives will know in a TV spot whether or not to push the romance- or action-genre angle because, for example, a scene featuring the leads kissing at a coffee shop could subconsciously engage the focus group more than a scene featuring a helicopter exploding.

Their ultimate goal, of course, is to create aesthetic experiences that are utterly engrossing and irresistible, all in the holy name of Sales, which–naturally!–only occur by supplying the autonomous consumer with What He Demands, even if this consumer doesn’t know what he is really Demanding. This is a version of what the film cartridge “Infinite Jest” does in David Foster Wallace’s magnum opus. But what is interesting to me about neuromarketing/-cinema is the degree to which our “subconscious” responses to stimuli are regarded as our “authentic” responses. The problem researchers seem to face is that consumers don’t remember films well enough to fill out surveys, or that when they fill out such surveys consumers feel obligated to respond positively. Social norms and lapses in consciousness get in the way of arriving at the truth.

In short, neuromarketers/-cineasts position what they are doing as giving The People what They Really Want. What could be more non-cynical than that? And yet, the question remains: would a humanistic debunking of the idea that fMRI scans are such “authentic” or “real” representations of desire do much to derail the train of neurocinema? If not, what sort of ideology critique could?

IV.

Attempting to expose the reign of cynical reason, Žižek develops an idea of “ideological fantasy,” the notion that cynical subjects “do not know… that their social reality itself, their activity, is guided by an illusion, by a fetishistic inversion. What they overlook, what they misrecognize, is not the reality but the illusion which is structuring their reality, their real social activity. They know very well how things really are, but they are doing it as if they do not know. The illusion is therefore double: it consists in overlooking the illusion which is structuring our real, effective relationship to reality. And this overlooked, unconscious illusion is what may be called the ideological fantasy.”

He concludes that to the degree our ideology is encoded not in our ideas but in our collective, unconscious fantasies, “we are of course far from being a post-ideological society. Cynical distance is just one way–one of many ways–to blind ourselves to the structuring power of ideological fantasy: even if we do not take things seriously, even if we keep an ironical distance, we are still doing them.” I am skeptical that the transfer of ideology from ideas to fantasies solves the problem, for a variety of reasons.

Isn’t the critique of ideological fantasies very much in line with traditional ideology critique, simply transferred to a new object? Don’t the examples of The Space Merchants and neurocinema suggest that the fundamental problem is the content of ideology, not its form? Given these examples, isn’t a little bit of cynicism just what we need?

* Please feel free to engage in ideology critique of my use of the term “we” in the comments section below. Or have I preempted your† ideology critique by anticipating it here in this footnote, in effect sucking you into an unconscious ideological fantasy? Don’t look at me for answers! I have no idea.

† I give up.

13 Ways of Looking at “Pac-Man”

(Crossposted at Arcade.)

January was apparently Andrew Ross month over at Dissent.  Two articles, Jeffrey J. Williams’s “How to be an Intellectual: The Cases of Richard Rorty and Andrew Ross” (in the Winter 2011 issue of the magazine) and Kevin Mattson’s “Cult Stud Mugged” (an online original), track Ross’s evolution from a so-called cult-stud into someone more akin to an academic labor reporter.

Though the tone of each of these articles differs significantly — Mattson is by far snarkier and consequently more amusing than Williams — the upshot of each is that Ross has matured into a serious, Dissent-approved scholar after a flashy but shallow cult-stud start.  Their larger, more trenchant point is that the casualization of academic labor, September 11, the various wars of the last decade, and the financial crisis have collectively “mugged” cultural studies aficionados, revealing the field’s modes of analysis to be significantly less studly than was previously imagined.

This discussion has reminded me of my own introduction to cultural studies, way back when I was an undergraduate at Cornell.  Whatever may be true of Ross’s work, past and present, I think the shift away from cultural studies isn’t only about a turn toward more “serious” issues, such as grad student unionization, sweatshops, and income inequality.  I have been tracking a similar shift even in the way we analyze “merely” cultural objects.  This is where Pac-Man comes in.  I should warn my readers that this post will only discuss two of the thirteen ways one might look at the game.

I.

As an undergrad with semiotics in my eyes, I read — and loved — Arthur Asa Berger’s Signs in Contemporary Culture: An Introduction to Semiotics.  Nothing was more exciting to me than semiotics; the very word seemed magical. A science of signs?  How cool was that? Very. I enjoyed lavishly sprinkling phrases like hermeneutics and ontology into papers I would now be forced to admit probably would have been better off without such stylistic garnishes.  To get a sense of why I was a fan, I present a long quote from Berger’s analysis of Pac-Man:

We can find in “Pac Man,” I believe, a sign that a rather profound change was taking place in the American psyche. Earlier video games (and the video-game phenomenon is significant in its own right) such as “Space Invaders” and so on, involved rocket ships coursing through outer space, blasting aliens and hostile forces with ray guns, laser beams, and other weapons, and represented a very different orientation from “Pac Man.” The games were highly phallic and they also expressed a sense of freedom and openness. The games were played in outer space and one had a sense of infinite possibility.

“Pac Man,” however, represents something quite different. The game takes place in a labyrinth which implies, metaphorically, that we are all trapped, closeted in a claustrophobic situation in which the nature of things is that we must either eat or be eaten. This oral aspect of the game suggests further that there is some kind of diffuse regression taking place and we have moved from the phallic stage (guns, rockets) to the oral stage (eating, biting).

Regression often takes place in people as a means of coping with anxiety and there is good reason to suspect that the popularity of a game like “Pac Man” indicates that our young people, who play the game, are coping with their anxieties by regressing (in the service of their egos). This may be because they are, for some reason, now afraid of taking on responsibilities and feel anxious about long-term relationships and mature interpersonal sexuality. When we regress to more child-like stages we escape from the demands of adulthood–but we pay a considerable price.

It is these aspects of “Pac Man” that disturb me. On the surface it is just a game. But the nature of the game–its design, which suggests that we are all prisoners of a system from which there is no escape, and its regressive aspects–must give all who speculate about the hidden meanings in phenomena something to think about.

“Pac Man” is important because it was the most popular video game in America for several years. In the 1990s, video games are much more sophisticated and complex and use more powerful technologies. They also may be more violent, sexist and psychologically damaging.

As badly as this passage may have dated, I can still remember the sense of liberation and fun that it, and passages like it, held in their capacity to bring together two seemingly irreconcilable discursive registers: pop culture and high theory. To be fair to my younger self, there was also a sense of irony and play in reading such passages. I had no illusions that Pac-Man’s ascendancy spelled or was a symptom of doom, psychopathology, and sexist regression for the youth of America. Still:  “All who speculate about the hidden meanings in phenomena”! That exactly describes the group I wanted to be a part of.

II.

Sometime between the late nineties and today, something changed. To give a sense of what has changed, for me and for cultural studies as an enterprise, I’d like now to contrast Berger to a more recent approach to Pac-Man, drawn from Nick Montfort and Ian Bogost’s Racing the Beam: The Atari Video Computer System, the first in a new series from The MIT Press called “Platform Studies,” for which Montfort and Bogost also serve as editors.  The most important insight this book offers about Pac-Man is that gaming platforms matter: there is no Pac-Man apart from the technological frameworks within which the game is realized. The following review succinctly sums up how the Atari version of Pac-Man differed from the arcade version — and why the home console version sucked:

Let me now quote at length from Racing the Beam to indicate the insights Montfort and Bogost bring to Pac-Man:

Even before we get to the game’s hero and villains, Pac-Man’s method of drawing the maze demonstrates one of the major challenges in porting the game to the Atari VCS: time. In the arcade game, the programmer would load character values into video RAM once per maze, using the character tiles to create its boundaries. On the VCS, the maze is constructed from playfield graphics, each line of which has to be loaded from ROM and drawn separately for each scan line of the television display.

To be sure, mazes had already been displayed and explored in VCS games like Combat, Slot Racers, and Adventure. But these games had to construct their mazes from whole cloth, building them out of symmetrical playfields. The arcade incarnation of Pac-Man demonstrates how the notion of the maze became more tightly coupled to the hardware affordances of tile-based video systems. In the arcade game, each thin wall, dot, or energizer is created by a single character from video memory. Though the method is somewhat arcane, the coin-op Pac-Man also allowed up to four colors per character in an eight-bit color space. (Each character defined six high bits as a “base” color–which is actually a reference to a color map of 256 unique colors stored in ROM–with two low bits added for each pixel of the bitmap.) This method allows the hollow, round-edged shapes that characterize the Pac-Man maze–a type of bitmap detail unavailable via VCS playfield graphics. The maze of the VCS game is simplified in structure as well as in appearance, consisting of rectangular paths and longer straight-line corridors and lacking the more intricate pathways of the arcade game.

What should be immediately apparent is that Montfort and Bogost have very little interest in approaching Pac-Man with a semiotic tool-kit; instead, they want to give an account of the form of the Atari VCS version of Pac-Man relative to the technical, economic, and temporal limitations constraining its development.  More than anything, their fantastic little book reads like a technically-literate guided tour or history of the game console.

More and more, I find myself drawn away from the approach to studying culture and the arts represented by Berger and toward the richly rendered historical, technical, and formal description offered by accounts such as Racing the Beam.  Those three registers — the historical, technical, and formal — turn out to be tightly linked together.  You simply can’t discuss one without discussing the others.  Such rich descriptions must always be pressed into the service of larger arguments, of course; technical description for its own sake is of little interest apart from the claims such description serves.  Yet a close attention to technical details allows Montfort and Bogost to paint a richer picture of these early Atari games than a non-technical treatment could.  One comes away from this history with a renewed sense of how amazingly creative early game developers were.

III.

This is a long-winded way of suggesting that the shift away from the older cult-stud model — which these Dissent essays register — seems not only to apply to political, economic, and historical questions, but also to textual analysis and to the study of culture as such.  To my mind, this shift is almost all for the good, though it is in some ways less fun than the earlier model.

The Postironic Art of Charlie Kaufman

(Crossposted from Arcade.)

I’d like to point my loyal readers to the amazing introduction Charlie Kaufman wrote for Synecdoche, New York: The Shooting Script, which is available over at The Rumpus. I would summarize the introduction and analyze it — I am almost unable to resist the temptation — but to do so would ruin the pleasure and surprise of the thing itself.

Suffice it to say, I consider Kaufman’s oeuvre to be a species of what I call postirony; indeed, Kaufman’s body of work was instrumental for me — along with the work of David Foster Wallace and Chris Ware — in suggesting the need for such a term in the first place.  By postirony, I mean the use of metafictional or postmodernist (usually narrative) techniques in the pursuit of what amount to humanistic or traditional themes:  the desire to “really connect” to other people, the project of cultivating sincerity, the wish to move beyond systems-level analysis of the world toward an analysis of character, the new centrality of “narrative” and “storytelling” in experimental works.  It doubly suffices to say that the details get pretty complicated pretty quickly, so I won’t go into those details here.

Kaufman’s introduction, here, takes us in a remarkably short space from a kind of metafiction that initially seems cynical and mercenary toward self-transcendence, human connection, and the mutuality of love.  It’s awesome.

Infinite Summer and New Models of Online Scholarship

(Crossposted at Arcade.)

I’d like to use my bloggy pulpit to draw your attention to a draft of Kathleen Fitzpatrick’s essay, “Infinite Summer: Reading the Social Network,” which discusses the origin and significance of an online effort to read Infinite Jest the summer after David Foster Wallace’s suicide.

This essay is destined to become part of a collection of essays on David Foster Wallace, which I am co-editing with Sam Cohen, called The Legacy of David Foster Wallace: Critical and Creative Assessments. The collection is forthcoming from the University of Iowa Press.

Beyond the content of the essay, I want to start a conversation about the future of scholarship and academic communities on the Internet. Along with group blogs (Arcade, The Valve, Crooked Timber, and countless group and personal blogs), there are journals that publish exclusively online (electronic book review), wiki-like resources dedicated to certain fields (Modernism Lab at Yale), and electronic “gateway” or aggregator sites (Nines).

What is new, as far as I know, about the model Fitzpatrick is using is that she is getting commentary on her drafts of written essays through an “open” peer review process. She has gone through this open review process with her new book, Planned Obsolescence: Publishing, Technology, and the Future of the Academy (which is also forthcoming from NYU Press), and she has gone so far as to put her first book, The Anxiety of Obsolescence: The American Novel in the Age of Television (which Vanderbilt first published in 2006), online in full.

In a sense, Fitzpatrick is “blogging” this essay — she is using WordPress as a framework to make her essay available — but the open-source WordPress theme/plugin (CommentPress) she is using facilitates reading her text like a book and commenting on individual pages and paragraphs. There have been other projects that led to the development of this framework, including McKenzie Wark’s Gamer Theory, which was subsequently published by Harvard UP.

All of this leads me to ask a few questions: What are the advantages and disadvantages of showing work in progress online and inviting commentary? Is there any reason why, a few years after a work of scholarship has come out, and in the overwhelming majority of cases has sold most of what it will ever sell, we should not all be placing our books online? Are we too print-bound? Too locked into norms that guarantee that our work is inaccessible to the vast majority of readers? Or are there good reasons for keeping our systems of scholarly dissemination more or less as they are today?

I ask these questions without much of an agenda. Rather, I’d like to spark a conversation that will help me think through these issues.

Zadie Smith, Facebook, and the Game Layer

(Crossposted at Arcade.)

In the New York Review of Books, Zadie Smith has written an interesting review of Aaron Sorkin’s The Social Network that doubles as a critique of Facebook.  Smith rhetorically positions herself as a sort of luddite or dinosaur, a defender of what she calls "Person 1.0" against the debasements wrought upon — and by — a generation of "People 2.0."  Drawing on the arguments of Jaron Lanier, the author of You Are Not a Gadget, Smith suggests that Facebook entraps us "in the recent careless thoughts of a Harvard sophomore":  

When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears. It reminds me that those of us who turn in disgust from what we consider an overinflated liberal-bourgeois sense of self should be careful what we wish for: our denuded networked selves don’t look more free, they just look more owned.

With Facebook, Zuckerberg seems to be trying to create something like a Noosphere, an Internet with one mind, a uniform environment in which it genuinely doesn’t matter who you are, as long as you make “choices” (which means, finally, purchases). If the aim is to be liked by more and more people, whatever is unusual about a person gets flattened out. One nation under a format. To ourselves, we are special people, documented in wonderful photos, and it also happens that we sometimes buy things. This latter fact is an incidental matter, to us. However, the advertising money that will rain down on Facebook—if and when Zuckerberg succeeds in encouraging 500 million people to take their Facebook identities onto the Internet at large—this money thinks of us the other way around. To the advertisers, we are our capacity to buy, attached to a few personal, irrelevant photos.

Is it possible that we have begun to think of ourselves that way? It seemed significant to me that on the way to the movie theater, while doing a small mental calculation (how old I was when at Harvard; how old I am now), I had a Person 1.0 panic attack. Soon I will be forty, then fifty, then soon after dead; I broke out in a Zuckerberg sweat, my heart went crazy, I had to stop and lean against a trashcan. Can you have that feeling, on Facebook? I’ve noticed—and been ashamed of noticing—that when a teenager is murdered, at least in Britain, her Facebook wall will often fill with messages that seem to not quite comprehend the gravity of what has occurred. You know the type of thing: Sorry babes! Missin’ you!!! Hopin’ u iz with the Angles. I remember the jokes we used to have LOL! PEACE XXXXX

When I read something like that, I have a little argument with myself: “It’s only poor education. They feel the same way as anyone would, they just don’t have the language to express it.” But another part of me has a darker, more frightening thought. Do they genuinely believe, because the girl’s wall is still up, that she is still, in some sense, alive? What’s the difference, after all, if all your contact was virtual?

Initially, I felt that Smith’s argument bordered on alarmism — a sort of critical low-hanging fruit for the Smart Set.  Who, after all, really thinks that the existence of a memorial means the person so memorialized continues "in some sense" to live?  Doesn’t Facebook merely supplement our personhood, not replace it, giving us new channels through which to express or constitute whatever greater totality we are?  Didn’t advertisers think of us as little more than our capacity to buy well before Facebook ever came into the world?

After a bit of thought, though, I recalled recently seeing this video on the construction of a "game layer" over reality, which speaks very much to Smith’s concerns–

–and I came to think Smith may have a point, though I also offer this video as a way of reformulating or restating Smith’s argument.  In the terms of this reformulation, the issue isn’t so much that we become 2.0 folk when we enmesh ourselves in electronic systems such as Facebook.  Instead, the question is one that is relevant in all areas of political, economic, and social significance:  Who designs the systems we are embedded within?  Who gets to build — and who has the technical expertise to build — the frameworks or, as Priebatsch puts it in this video, the "game dynamics" that incentivize certain behaviors and suppress others?  In an era increasingly obsessed with behavioral economics and its myriad "nudges," who is nudging you — and how?