13 Ways of Looking at “Pac-Man”

in Uncategorized

(Crossposted at Arcade.)

January was apparently Andrew Ross month over at Dissent.  Two articles, Jeffrey J. Williams’s “How to be an Intellectual: The Cases of Richard Rorty and Andrew Ross” (in the Winter 2011 issue of the magazine) and Kevin Mattson’s “Cult Stud Mugged” (an online original), track Ross’s evolution from a so-called cult-stud into someone more akin to an academic labor reporter.

Though the tone of each of these articles differs significantly — Mattson is by far snarkier and consequently more amusing than Williams — the upshot of each is that Ross has matured into a serious, Dissent-approved scholar after a flashy but shallow cult-stud start.  Their larger, more trenchant point is that the casualization of academic labor, September 11, the various wars of the last decade, and the financial crisis have collectively “mugged” cultural studies aficionados, revealing the field’s modes of analysis to be significantly less studly than was previously imagined.

This discussion has reminded me of my own introduction to cultural studies, way back when I was an undergraduate at Cornell.  Whatever may be true of Ross’s work, past and present, I think the shift away from cultural studies isn’t only about a turn toward more “serious” issues, such as grad student unionization, sweatshops, and income inequality.  I have been tracking a similar shift even in the way we analyze “merely” cultural objects.  This is where Pac-Man comes in.  I should warn my readers that this post will only discuss two of the thirteen ways one might look at the game.

I.

As an undergrad with semiotics in my eyes, I read — and loved — Arthur Asa Berger’s Signs in Contemporary Culture: An Introduction to Semiotics.  Nothing was more exciting to me than semiotics; the very word seemed magical. A science of signs?  How cool was that? Very. I enjoyed lavishly sprinkling phrases like hermeneutics and ontology into papers that, I would now be forced to admit, probably would have been better off without such stylistic garnishes.  To get a sense of why I was a fan, I present a long quote from Berger’s analysis of Pac-Man:

We can find in “Pac Man,” I believe, a sign that a rather profound change was taking place in the American psyche. Earlier video games (and the video-game phenomenon is significant in its own right) such as “Space Invaders” and so on, involved rocket ships coursing through outer space, blasting aliens and hostile forces with ray guns, laser beams, and other weapons, and represented a very different orientation from “Pac Man.” The games were highly phallic and they also expressed a sense of freedom and openness. The games were played in outer space and one had a sense of infinite possibility.

“Pac Man,” however, represents something quite different. The game takes place in a labyrinth which implies, metaphorically, that we are all trapped, closeted in a claustrophobic situation in which the nature of things is that we must either eat or be eaten. This oral aspect of the game suggests further that there is some kind of diffuse regression taking place and we have moved from the phallic stage (guns, rockets) to the oral stage (eating, biting).

Regression often takes place in people as a means of coping with anxiety and there is good reason to suspect that the popularity of a game like “Pac Man” indicates that our young people, who play the game, are coping with their anxieties by regressing (in the service of their egos). This may be because they are, for some reason, now afraid of taking on responsibilities and feel anxious about long-term relationships and mature interpersonal sexuality. When we regress to more child-like stages we escape from the demands of adulthood–but we pay a considerable price.

It is these aspects of “Pac Man” that disturb me. On the surface it is just a game. But the nature of the game–its design, which suggests that we are all prisoners of a system from which there is no escape, and its regressive aspects–must give all who speculate about the hidden meanings in phenomena something to think about.

“Pac Man” is important because it was the most popular video game in America for several years. In the 1990s, video games are much more sophisticated and complex and use more powerful technologies. They also may be more violent, sexist and psychologically damaging.

As badly dated as this passage may be, I can still remember the sense of liberation and fun that it, and passages like it, held, in their capacity to bring together two seemingly irreconcilable discursive registers: pop culture and high theory. To be fair to my younger self, there was also a sense of irony and play in reading such passages. I had no illusions that Pac-Man‘s ascendancy spelled or was a symptom of doom, psychopathology, and sexist regression for the youth of America. Still:  “All who speculate about the hidden meanings in phenomena”! That exactly describes the group I wanted to be a part of.

II.

Sometime between the late nineties and today, something changed. To give a sense of what has changed, for me and for cultural studies as an enterprise, I’d like now to contrast Berger to a more recent approach to Pac-Man, drawn from Nick Montfort and Ian Bogost’s Racing the Beam: The Atari Video Computer System, the first in a new series from The MIT Press called “Platform Studies,” for which Montfort and Bogost also serve as editors.  The most important insight this book offers about Pac-Man is that gaming platforms matter: there is no Pac-Man apart from the technological frameworks within which the game is realized. The following review succinctly sums up how the Atari version of Pac-Man differed from the arcade version — and why the home console version sucked:

Let me now quote at length from Racing the Beam to indicate the insights Montfort and Bogost bring to Pac-Man:

Even before we get to the game’s hero and villains, Pac-Man’s method of drawing the maze demonstrates one of the major challenges in porting the game to the Atari VCS: time. In the arcade game, the programmer would load character values into video RAM once per maze, using the character tiles to create its boundaries. On the VCS, the maze is constructed from playfield graphics, each line of which has to be loaded from ROM and drawn separately for each scan line of the television display.

To be sure, mazes had already been displayed and explored in VCS games like Combat, Slot Racers, and Adventure. But these games had to construct their mazes from whole cloth, building them out of symmetrical playfields. The arcade incarnation of Pac-Man demonstrates how the notion of the maze became more tightly coupled to the hardware affordances of tile-based video systems. In the arcade game, each thin wall, dot, or energizer is created by a single character from video memory. Though the method is somewhat arcane, the coin-op Pac-Man also allowed up to four colors per character in an eight-bit color space. (Each character defined six high bits as a “base” color–which is actually a reference to a color map of 256 unique colors stored in ROM–with two low bits added for each pixel of the bitmap.) This method allows the hollow, round-edged shapes that characterize the Pac-Man maze–a type of bitmap detail unavailable via VCS playfield graphics. The maze of the VCS game is simplified in structure as well as in appearance, consisting of rectangular paths and longer straight-line corridors and lacking the more intricate pathways of the arcade game.

What should be immediately apparent is that Montfort and Bogost have very little interest in approaching Pac-Man with a semiotic tool-kit; instead, they want to give an account of the form of the Atari VCS version of Pac-Man relative to the technical, economic, and temporal limitations constraining its development.  More than anything, their fantastic little book reads like a technically literate guided tour or history of the game console.
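For readers who want to see what this kind of technical attention looks like in practice, here is a minimal sketch, in Python, of the tile-coloring scheme Montfort and Bogost describe for the coin-op Pac-Man: each character tile carries a six-bit “base” color that indexes a 256-entry color map stored in ROM, and each pixel of the tile’s bitmap contributes two low bits, so any single tile can display at most four of those 256 colors. The names, placeholder color values, and byte layout below are my own illustrative assumptions, not the actual arcade hardware or its code.

```python
# A toy reconstruction (my own, not arcade hardware) of the color scheme
# described above: a tile's attribute supplies a six-bit "base" color, and
# each pixel of the tile bitmap adds two low bits, yielding an eight-bit
# index into a 256-color map stored in ROM.

# Placeholder stand-in for the 256-entry color ROM (arbitrary RGB triples).
COLOR_ROM = [(i, (i * 3) % 256, (i * 7) % 256) for i in range(256)]

def tile_pixel_color(base_color_6bit: int, pixel_value_2bit: int) -> tuple:
    """Combine a tile's six-bit base color with a pixel's two-bit value
    to produce an eight-bit index into the color map."""
    assert 0 <= base_color_6bit < 64 and 0 <= pixel_value_2bit < 4
    color_index = (base_color_6bit << 2) | pixel_value_2bit
    return COLOR_ROM[color_index]

# Because the pixel contributes only two bits, any single tile can display
# at most four distinct colors out of the 256 available:
example_base = 0b101101
print([tile_pixel_color(example_base, pixel) for pixel in range(4)])
```

The point of the exercise is simply that the hollow, round-edged maze shapes Montfort and Bogost mention depend on this per-tile color and bitmap budget, the kind of detail that, as they note, the VCS playfield graphics could not provide.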

More and more, I find myself drawn away from the approach to studying culture and the arts represented by Berger and toward the richly rendered historical, technical, and formal description offered by accounts such as Racing the Beam.  Those three registers — the historical, technical, and formal — turn out to be tightly linked.  You simply can’t discuss one without discussing the others.  Such rich descriptions must always be pressed into the service of larger arguments, of course; technical description for its own sake is of little interest apart from the claims such description serves.  Yet a close attention to technical details allows Montfort and Bogost to paint a richer picture of these early Atari games than a non-technical treatment could.  One comes away from this history with a renewed sense of how amazingly creative early game developers were.

III.

This is a long-winded way of suggesting that the shift away from the older cult-stud model — which these Dissent essays register — seems not only to apply to political, economic, and historical questions, but also to textual analysis and to the study of culture as such.  To my mind, this shift is almost all for the good, though it is in some ways less fun than the earlier model.

The Postironic Art of Charlie Kaufman

in Uncategorized

(Crossposted from Arcade.)

I’d like to point my loyal readers to the amazing introduction Charlie Kaufman wrote for Synecdoche, New York: The Shooting Script, which is available over at The Rumpus. I would summarize the introduction and analyze it — I am almost unable to resist the temptation — but to do so would ruin the pleasure and surprise of the thing itself.

Suffice it to say, I consider Kaufman’s oeuvre to be a species of what I call postirony; indeed, Kaufman’s body of work was instrumental for me — along with the work of David Foster Wallace and Chris Ware — in suggesting the need for such a term in the first place.  By postirony, I mean the use of metafictional or postmodernist (usually narrative) techniques in the pursuit of what amounts to humanistic or traditional themes:  the desire to “really connect” to other people, the project of cultivating sincerity, the wish to move beyond systems-level analysis of the world toward an analysis of character, the new centrality of “narrative” and “storytelling” in experimental works.  It doubly suffices to say that the details get pretty complicated pretty quickly, so I won’t go into those details here.

Kaufman’s introduction, here, takes us in a remarkably short space from a kind of metafiction that initially seems cynical and mercenary toward self-transcendence, human connection, and the mutuality of love.  It’s awesome.

Infinite Summer and New Models of Online Scholarship

in Uncategorized

(Crossposted at Arcade.)

I’d like to use my bloggy pulpit to draw your attention to a draft of Kathleen Fitzpatrick’s essay, “Infinite Summer: Reading the Social Network,” which discusses the origin and significance of an online effort to read Infinite Jest the summer after David Foster Wallace’s suicide.

This essay is destined to become part of a collection of essays on David Foster Wallace, which I am co-editing with Sam Cohen, called The Legacy of David Foster Wallace: Critical and Creative Assessments. The collection is forthcoming from the University of Iowa Press.

Beyond the content of the essay, I want to start a conversation about the future of scholarship and academic communities on the Internet. Along with group and personal blogs (Arcade, The Valve, Crooked Timber, and countless others), there are journals that publish exclusively online (electronic book review), wiki-like resources dedicated to certain fields (Modernism Lab at Yale), and electronic “gateway” or aggregator sites (Nines).

What is new, as far as I know, about the model Fitzpatrick is using is that she is getting commentary on her drafts of written essays through an “open” peer review process. She has gone through this open review process with her new book, Planned Obsolescence: Publishing, Technology, and the Future of the Academy (forthcoming from NYU Press), and she has gone so far as to put her first book, The Anxiety of Obsolescence: The American Novel in the Age of Television (which Vanderbilt first published in 2006), online in full.

In a sense, Fitzpatrick is “blogging” this essay — she is using WordPress as a framework to make her essay available — but the open-source WordPress theme/plugin (CommentPress) she is using facilitates reading her text like a book and commenting on individual pages and paragraphs. There have been other projects that led to the development of this framework, including McKenzie Wark’s Gamer Theory, which was subsequently published by Harvard UP.

All of this leads me to ask a few questions: What are the advantages and disadvantages of showing work in progress online and inviting commentary? Is there any reason why, a few years after a work of scholarship has come out, and in the overwhelming majority of cases has sold most of what it will ever sell, we should not all be placing our books online? Are we too print-bound? Too locked into norms that guarantee that our work is inaccessible to the vast majority of readers? Or are there good reasons for keeping our systems of scholarly dissemination more or less as they are today?

I ask these questions without much of an agenda. Rather, I’d like to spark a conversation that will help me think through these issues.

Zadie Smith, Facebook, and the Game Layer

in Facebook, Jaron Lanier, New York Review of Books, Seth Priebatsch, Zadie Smith

(Crossposted at Arcade.)

In the New York Review of Books, Zadie Smith has written an interesting review of Aaron Sorkin’s The Social Network that doubles as a critique of Facebook.  Smith rhetorically positions herself as a sort of luddite or dinosaur, a defender of what she calls "Person 1.0" against the debasements wrought upon — and by — a generation of "People 2.0."  Drawing on the arguments of Jaron Lanier, the author of You Are Not a Gadget, Smith suggests that Facebook entraps us "in the recent careless thoughts of a Harvard sophomore":  

When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendships. Language. Sensibility. In a way it’s a transcendent experience: we lose our bodies, our messy feelings, our desires, our fears. It reminds me that those of us who turn in disgust from what we consider an overinflated liberal-bourgeois sense of self should be careful what we wish for: our denuded networked selves don’t look more free, they just look more owned.

With Facebook, Zuckerberg seems to be trying to create something like a Noosphere, an Internet with one mind, a uniform environment in which it genuinely doesn’t matter who you are, as long as you make “choices” (which means, finally, purchases). If the aim is to be liked by more and more people, whatever is unusual about a person gets flattened out. One nation under a format. To ourselves, we are special people, documented in wonderful photos, and it also happens that we sometimes buy things. This latter fact is an incidental matter, to us. However, the advertising money that will rain down on Facebook—if and when Zuckerberg succeeds in encouraging 500 million people to take their Facebook identities onto the Internet at large—this money thinks of us the other way around. To the advertisers, we are our capacity to buy, attached to a few personal, irrelevant photos.

Is it possible that we have begun to think of ourselves that way? It seemed significant to me that on the way to the movie theater, while doing a small mental calculation (how old I was when at Harvard; how old I am now), I had a Person 1.0 panic attack. Soon I will be forty, then fifty, then soon after dead; I broke out in a Zuckerberg sweat, my heart went crazy, I had to stop and lean against a trashcan. Can you have that feeling, on Facebook? I’ve noticed—and been ashamed of noticing—that when a teenager is murdered, at least in Britain, her Facebook wall will often fill with messages that seem to not quite comprehend the gravity of what has occurred. You know the type of thing: Sorry babes! Missin’ you!!! Hopin’ u iz with the Angles. I remember the jokes we used to have LOL! PEACE XXXXX

When I read something like that, I have a little argument with myself: “It’s only poor education. They feel the same way as anyone would, they just don’t have the language to express it.” But another part of me has a darker, more frightening thought. Do they genuinely believe, because the girl’s wall is still up, that she is still, in some sense, alive? What’s the difference, after all, if all your contact was virtual?

Initially, I felt that Smith’s argument bordered on alarmism — a sort of critical low-hanging fruit for the Smart Set.  Who, after all, really thinks that the existence of a memorial means the person so memorialized continues "in some sense" to live?  Doesn’t Facebook merely supplement our personhood, not replace it, giving us new channels through which to express or constitute whatever greater totality we are?  Didn’t advertisers think of us as little more than our capacity to buy well before Facebook ever came into the world?

After a bit of thought, though, I recalled recently seeing this video on the construction of a "game layer" over reality, which speaks very much to Smith’s concerns–

–and I came to think Smith may have a point, though I also offer this video as a way of reformulating or restating Smith’s argument.  In the terms of this reformulation, the issue isn’t so much that we become 2.0 folk when we enmesh ourselves in electronic systems such as Facebook.  Instead, the question is one that is relevant in all areas of political, economic, and social significance:  Who designs the systems we are embedded within?  Who gets to build — and who has the technical expertise to build — the frameworks or, as Priebatsch puts it in this video, the "game dynamics" that incentivize certain behaviors and suppress others?  In an era increasingly obsessed with behavioral economics and its myriad "nudges," who is nudging you — and how?

Hipsters and the New Gilded Age

in hipsters, Kathleen Fitzpatrick, Mark Greif, new gilded age, Richard Lloyd

(Crossposted at Arcade.)

I’d like to post a few comments on Mark Greif’s excellent essay, "What was the Hipster?" which was published in New York magazine and is part of a new book of the same name put out by the n+1 Foundation.

I.

Greif’s essay has led me to reflect on some theories I’ve been cultivating, in the darkest recesses of my academic mind, over the last year. Sometime in the middle of 2009, I became convinced that literature — and the support systems that give it life — doesn’t arise from a vacuum, though literary critics often treat it as if it does. This thought is, in a sense, quite elementary:  writers and readers develop within specific institutional contexts — educational, economic, and juridical — which are necessary for literature to flourish.  My own sense of literary possibility, my own love of certain writers, arose within such institutions.  As a corollary, I have become convinced that, though critics endlessly love to complain about it, the midcentury ascendancy of middlebrow culture (and the authority of the literary novel in the United States) is intimately tied to the history of the middle class, which as a group has the resources, education, and leisure to produce and consume such literature.  Though these institutions, and the forms of authority they have engendered, have often excluded and marginalized nonwhites, women, and nonheterosexuals, among others, our goal should be to make our institutions more egalitarian, more inclusive, more reflective of our highest aspirations for freedom and creative life.

All of this isn’t to say that the relationship between the middle class and literature is in any way simple or mechanical, nor do I mean to imply that only the middle class produces literary readers and writers — such a claim would be absurd — but I would claim that the rise of the middle class after World War II played a decisive, and in many ways positive, role in shaping contemporary reading publics and constructing an environment in which literary art could flourish on a historically unprecedented scale.

If these claims are true, then the gradual but persistent erosion of the middle class — what many, including the economist Paul Krugman, have called the "new gilded age" — foretells the coming of a "correction" — perhaps massive, perhaps middling in scope — within literary culture, a correction for the worse. This correction has been the story of American literature since the early 1970s:  the destruction of the midlist, the rise of celebrity authors, the mania of the book auction, the quiet transformation of reading publics. Though magnificent literary work continues to be written and published — and we should have no doubt that great art will continue to be created — the conditions under which art is produced and consumed are growing more constricted, leading many creative writers to take refuge in the University, if they’re lucky. While many critics blame technology and mass media for declining mass interest in serious literature — and some critics, such as Kathleen Fitzpatrick, make the claim that the discourse of the death of the novel arises from white male anxiety about the multicultural expansion of literary culture — I think changes within our socioeconomic life since the early 1970s are a crucial and understudied part of the story.

II.

(1) It is in the context of these reflections that I think we must understand what the hipster is and what he (the hipster is almost invariably male) portends for the relationship between economic and cultural life. I should say from the outset that the sort of hipster Greif is talking about has only a glancing relationship to the midcentury hipster celebrated by Anatole Broyard, Allen Ginsberg, and Norman Mailer; there is much to say about this earlier incarnation of the hipster (in my dissertation, I wrote almost eighty pages on the midcentury hipster), but this figure bears little connection to what we mean today when we call someone a hipster.

Greif describes the contemporary hipster this way:

When we talk about the contemporary hipster, we’re talking about a subcultural figure who emerged by 1999, enjoyed a narrow but robust first phase until 2003, and then seemed about to dissipate into the primordial subcultural soup, only to undergo a reorganization and creeping spread from 2004 to the present.

The matrix from which the hipster emerged included the dimension of nineties youth culture, often called alternative or indie, that defined itself by its rejection of consumerism. Yet in an ethnography of Wicker Park, Chicago, in the nineties, the sociologist Richard Lloyd documented how what he called “neo-bohemia” unwittingly turned into something else: the seedbed for post-1999 hipsterism. Lloyd showed how a culture of aspiring artists who worked day jobs in bars and coffee shops could unintentionally provide a milieu for new, late-capitalist commerce in design, marketing, and web development. The neo-bohemian neighborhoods, near to the explosion of new wealth in city financial centers, became amusement districts for a new class of rich young people. The indie bohemians (denigrated as slackers) encountered the flannel-clad proto-businessmen and dot-com paper millionaires (denigrated as yuppies), and something unanticipated came of this friction.

And, elaborating on the hipster’s relationship to oppositional culture and the avant-garde, Greif concludes:

One could say, exaggerating only slightly, that the hipster moment did not produce artists, but tattoo artists, who gained an entire generation’s arms, sternums, napes, ankles, and lower backs as their canvas. It did not produce photographers, but snapshot and party photographers: Last Night’s Party, Terry Richardson, the Cobra Snake. It did not produce painters, but graphic designers. It did not yield a great literature, but it made good use of fonts. And hipsterism did not make an avant-garde; it made communities of early adopters.

Though this assessment is well observed and pleasantly cutting — as a resident of San Francisco’s Mission district, I can testify to its accuracy — Greif misses an opportunity to decisively define the new breed of hipster, let alone to find adequate grounds for critiquing this figure; he proceeds instead through the accretion of examples and the dropping of accurate hipster brand names (showing, of course, his own critical hipness). Greif gives us a hint of a truly critical definition of the hipster in his discussion of Richard Lloyd’s terrific 2005 study, Neo-Bohemia: Art and Commerce in the Postindustrial City, but he misses what may be Lloyd’s most startling point. The neo-bohemian enclaves of Wicker Park, Williamsburg, and the Mission are filled with aspiring artists and "creative-class" quasiprofessionals who accept disempowering, low-wage work in the creative service economy as a sign of distinction and liberation.  These new hipsters are just waiting for their big break while waiting tables.

In my own work, which builds on Lloyd’s study, I define the contemporary hipster as a type of person who is intensely focused on a process of self-making by means of strategic consumption. That is, the hipster constructs an identity by becoming something like a professional shopper, an "early adopter" of trends and fashions, as Greif rightly points out. What the hipster disavows is, quite specifically, an awareness of his class situation. What is the hipster’s class situation? Fundamentally, I would argue, the hipster is a child of the middle class, typically college educated, who — as Lloyd points out — has abandoned the project of reproducing his class status in order to enter the perpetual carnival of the lifestyle service industry. College degree in hand, the hipster works in coffee shops, in bars, and as a permanent intern, aspires to artistic greatness, and enjoys his relative penury, which is convenient, because during the "new gilded age" there simply aren’t enough jobs for the hipster to reproduce his class, even if he wanted to.

(2) This leads me to a second critique of Greif’s argument. Perhaps inadvertently, "What was the hipster?" reproduces the authenticity-seeking imperative of hipness. In his important books The Conquest of Cool and One Market Under God, Thomas Frank argues not that rebel consumers constitute a "fake" counterculture but rather that counterculture is, and has always been, completely harmonious with the ethic of consumption. Malcolm Cowley got it right when he diagnosed the bohemian lifestyle of Greenwich Village as, at root, a "consumption ethic," observing in 1934 that "self-expression and paganism encouraged a demand for all sorts of products — modern furniture, beach pajamas, cosmetics, colored bathrooms with toilet paper to match," and that "[l]iving for the moment meant buying an automobile, radio or house, using it now and paying for it tomorrow." The notion that hipsters ought, like "real" counterculturalists — by Greif’s account, "bike messengers, straight-edge skaters, Lesbian Avengers, freegans, enviro-anarchists, and interracial hip-hoppers who live as they please" — to raise "a spiritual middle finger" in the face of authority misses the salient point that (like Broyard’s midcentury hipsters) the middle finger in question is only ever spiritual or symbolic. Is a middle-finger-waving Lesbian Avenger, who feels spiritually good but has no political power, in any better situation than the ever-vilified hipster?

(3) I would thus emphasize that what is missing from Greif’s analysis of the new hipster is a robust notion of class as well as a critique of the way in which the imaginative life of the hipster is premised on certain kinds of obfuscations and short-term magical thinking. The hipster is a person who is convinced he is going to be a Great Artist — even if his art is a form of lifestyle or brand management — and he tells himself that he will keep working that bartending job another year, keep working as a barista until his band, his brand, his novel takes flight. There will, of course, come a time of reckoning — what I have sometimes described to friends as a Great Sucking Sound — as the college-educated aesthetes of the middle class find themselves unable to reproduce their class status. Some would-be hipsters will find salvation in grad school, some will make their way into elite law schools, and some will rediscover their inner management consultant, but not all of them will, not enough.  After the reckoning to come, the pool of the middle class will have shrunk, and the children of hipsters will, when taken as a group, find themselves unable to reproduce the neo-bohemian folkways of their fathers and mothers. Unless, of course, the middle finger they raise ceases to be symbolic or spiritual.

III.

I write all of this not to counsel despair or cynicism. Quite the opposite. I think that the seeds of genuine opposition to authority — of an art-loving coalition committed to unmaking the new gilded age — might need to find grounds other than the symbolic or the spiritual. My premise is that by understanding our situation, we can work to change it. Am I wrong to think so?


The Xtranormal Future of the Humanities

in Uncategorized

(Crossposted at Arcade.)

In the spirit of continuing the conversation we have been having on Arcade about Stanley Fish, the recent axing of French, Italian, classics, Russian, and theatre at SUNY Albany, and the future of the humanities, I’d like to present this video (h/t Mark Vega).

This is a video created using "xtranormal," a service that allows one to choreograph computer-generated figurines, creating primitive animated three-dimensional storyboards based on text inputs. While painful and funny — especially for those of us who are on the job market this year — xtranormal raises interesting questions about possible new directions in the development of narrative art, giving us a hint of what is to come and of what critics will have to take into consideration.

The tools with which this video was created are relatively primitive, but should we expect narrative art of considerable sophistication to be created using tools such as this in the near future? It seems clear to me that the answer is yes — we will, in the not-too-distant future, be inundated by animated narrative in huge quantities. And, as I hope this video makes plain, such videos can be incredibly intelligent and engaging. Of course, anyone who has ever watched South Park — or, as I have, taught episodes of South Park in a course — already knows this.

In a future where anyone, working more or less alone, can construct films of (increasing) sophistication, will the ultimate promise of being a novelist — sole, individual control over one’s artistic output, at least in theory — give way to a world of one-person moviemakers? Will all classic literature be mediated by a new layer of animated figures acting out plots and scenarios originally written in novelistic form? If so, is this a bad thing? Are there pedagogical opportunities such systems offer teachers willing to embed new media in the classroom?

Lacanian Lipstick on an Unconscious Pig

in Adam Serwer, Amy Hungerford, Fredric Jameson, Gavin Miller, Jacques Lacan, Philip Roth, Philosophy and Literature, psychoanalysis, The Human Stain

(Crossposted at ARCADE.)

Gavin Miller has written a fascinating article, "The Apathetic Fallacy," in the April 2010 issue of Philosophy and Literature. Following up on the arguments made by Steven Knapp and Walter Benn Michaels in "Against Theory," Miller argues that the humanities are plagued by a wide-ranging — and harmful — taboo against speaking about intentionality and subjective epistemology.  Our main mistake, he contends, is that we confuse objective ontology with objective epistemology.  Because we aspire to be scientific, we dismiss arguments that rely on introspection and fear the consequences of accepting "first-person warranted claims" (a fear first expressed by advocates of behaviorist psychology).  This leads to absurd readings of texts, such as Fredric Jameson’s famous Lacan-inspired misreading of Bob Perelman’s "China," which allegedly exemplifies the schizophrenic breakdown of signifying chains under conditions of late capitalism.

Let me share my favorite paragraph of Miller’s essay, an example meant to illustrate the limitations of Lacanian psychoanalysis: 

The ethics of the Lacanian “unconscious” are, I believe, less than benign. The interpretative practice that Fink describes seems indistinguishable from the hermeneutics of abuse directed at Barack Obama for his 2008 campaign comment that “you can put lipstick on a pig; it’s still a pig.” This remark was meant as a metaphor for Republican policy, but was interpreted by the Republicans as a reference to Sarah Palin’s candidature for Vice-President. The “pig” in the metaphor, they insisted, was Palin, who had earlier joked—with implicit reference to herself—that the difference between “a hockey mom and a pit bull [terrier]” was “lipstick.” Had only the Republicans been more Lacanian, they could have added that Obama’s repudiation of this interpretation indicated his pre-analytic investment in a specular image of wholeness and self-identity.

This example neatly expresses the crux of Miller’s argument, revealing both its strengths and the questions it begs. Miller is in essence asking, what kind of loon would blame Obama for calling Sarah Palin a lipstick-wearing pig?  George Saunders might say this kind:

So, when Barack Obama says he will put some lipstick on my pig, I am, like, Are you calling me a pig? If so, thanks! Pigs are the most non-Élite of all barnyard animals. And also, if you put lipstick on my pig, do you know what the difference will be between that pig and a pit bull? I’ll tell you: a pit bull can easily kill a pig. And, as the pig dies, guess what the Hockey Mom is doing? Going to her car, putting on more lipstick, so that, upon returning, finding that pig dead, she once again looks identical to that pit bull, which, staying on mission, the two of them step over the dead pig, looking exactly like twins, except the pit bull is scratching his lower ass with one frantic leg, whereas the Hockey Mom is carrying an extra hockey stick in case Todd breaks his again. But both are going, like, Ha ha, where’s that dumb pig now? Dead, that’s who, and also: not a smidge of lipstick.

A lose-lose for the pig.

As the political blogger Adam Serwer has recently argued, the American right has increasingly taken up the mantle of identity politics — "an identity politics which perceives persecution, and possible extinction, for a culturally constructed usually white, conservative, ‘real American’" — embracing the politically correct tendencies formerly associated with liberalism.  More and more, I would add, it is the left (more so even than liberalism) that is opposing identity politics, trying to make connections, to disrupt the absurdist malfunction of reasoning that Saunders represents in the form of his narrator’s damaged discourse.  Which is not to say that Saunders doesn’t also reinforce some hoary culture war stereotypes — his satire was, after all, published in the New Yorker, and seems to complain that supporters of Palin aren’t merely wrong, but stupid.  My minimal point, though, is that the apathetic fallacy Miller discusses is a bipartisan affair on the American political scene.

But is there no defense we might mount of Saunders’s narrator’s misinterpretation of Obama or Jameson’s misreading of Perelman?  I am certainly a fan of referring to intentionality in critical arguments I make.  I’ve spent a considerable amount of time in archives this summer and during previous summers looking for evidence to justify my various critical claims, on the assumption that authorial intention matters.  But isn’t the common confusion of intended meaning with what we might call significance, well, significant?

And, if we are to speak of the ethical dimensions of how we use language, to what degree should we hold someone responsible for the significance of the words they use?  To what degree is it valid to judge the success of art in terms of its effect on its consumer?  It seems hard to maintain that intention should always trump significance.  Aesthetic responses are, to different degrees, grounded upon our appreciation of the nonsemantic qualities of speech, as Amy Hungerford points out in her recent study, Postmodern Belief.  We frequently treat the nonsemantic — the aesthetic, cultural, social, historical — as though it were a kind of meaning or had the force of meaning.  Whole artistic movements have been built around such conflations.  Should we simply banish or ignore these movements?  Judge them as failures because they get their theory wrong?

This sort of confusion is at the heart of Philip Roth’s The Human Stain, a novel that revolves around the "politically correct" misapprehension of intention.  Coleman Silk, a classics professor at Athena College, is punished as a racist for using the word "spook" in reference to two absent black students, despite the fact that he meant the expression to have no racist meaning.  He was merely referring to the ghost-like absence of his students, he explains.  And yet Roth is too cagey to simply come out on the side of intention, against significance, though his sympathies pretty clearly lie with Silk.  After all, Roth might have constructed his parable of political correctness run amok without also making Silk someone who is passing for white and as a Jew.  This plot development exposes some of the limits of grounding critical analysis in the investigation of intentionality.  Can Silk "intend" himself white?  Clearly, Silk doesn’t think so.  He believes that his blackness is a function of who he is, not what he means or what he does.  Otherwise, there would be no such practice as "passing."  As Walter Benn Michaels has pointed out, without a sense of racial essentialism, a "passing" Coleman Silk would simply be white because he is taken as white.

But if blackness isn’t what Silk does, but rather who he is, then he shouldn’t be able to submit his blackness as evidence that he is not racist, at least not if he believes that it is only his intentions that ought to count in judging his language.  His blackness is, because he understands himself to be passing, definitionally not a function of his intentions and meanings.  So, paradoxically, Silk is submitting his blackness (who he is) as evidence that he could not be making a racist statement (what he means), despite the fact that being who he is by definition has no meaning if intention is what really matters.  It only has significance.  Ergo, Silk must be saying something like, "As a black man, I am alive to the significance of racist words and phrases.  It is therefore reasonable for you to assume that I would not use words with a pejorative significance.  From this set of facts, you can reverse-engineer my intention and my true meaning."  

So even Silk must rest his self-defense on the notion that there ought to be limits to what one can say — he implicitly accepts these limits, tacitly claims to be very much aware of them — regardless of one’s true intentions.  Though he avoids the apathetic fallacy, his difference from his persecutors is one of degree, not kind.  Silk continues to believe, as the administration of Athena College does, that you are obliged to confront common or public interpretations of your words even if those interpretations don’t express your real intention.  Just as one cannot defend oneself when breaking the law by claiming not to know the law — "I shouldn’t be fined because I didn’t know I was supposed to curb my dog!" — one cannot disown the significance of one’s language.  This is in no way meant to be a judgment about what specific consequences should follow from violating these socially determined limits, only to say that Silk seems to be on the same page as his enemies.

Bringing this discussion back to "The Apathetic Fallacy," I find myself agreeing with Miller that we should not commit the apathetic fallacy — we should not discount subjective epistemology or confuse objectivity in epistemology with objectivity in ontology — but I do feel we should also guard against the false belief that in not committing this fallacy we have excised the responsibility that we have for our words (both their meaning and their significance).  Miller doesn’t seem to hold to a strong version of this view, but in the Manichean cultures that have defined literary study over the last thirty years, and here Michaels can be deemed as guilty as those with whom he often rightly disagrees, swinging too far the other way is a… significant risk.

Op-Ed Preview: WikiLeaks vs. Top Secret America

in Uncategorized

My satirical political novel "Pop Apocalypse" presents a future world in which the U.S. goes on an invasion spree around the world. Among other places, I had my fictional U.S. invade Iceland. It seemed like a great gag: Why would the U.S. want to invade a tiny country of 250,000 people in the Arctic Circle whose most notable export is Bjork?

But reality always finds a way of outrunning satire. On Tuesday, Washington Post columnist Marc Thiessen suggested that Iceland is, in effect, aiding an enemy of the U.S., Julian Assange, the founder of WikiLeaks. Last week, WikiLeaks released more than 90,000 classified documents related to the Afghan war, which paint a ground-level picture of the war far grimmer than official pronouncements.

Assange often works from Iceland. Thiessen thinks the government can — and by implication should — consider "not only law enforcement but also intelligence and military assets to bring Assange to justice and put his criminal syndicate out of business."

So should we expect drone strikes over Iceland? Will the U.S. render Assange to a black site? Will he be held indefinitely in a cell as an enemy combatant?

To read the rest of "WikiLeaks vs. Top Secret America," visit AOL News.

Thanks to Gina Misiroglu for connecting me to AOL.

The New Sociology, or, in Praise of the Middle Zone

in 20 under 40, James F. English, LATfob, new sociology, New Yorker

(Crossposted at Arcade.)

Mark McGurl’s The Program Era ends with an insightful reflection on the problem of "scale" in literary study — our almost automatic assumption that we must always scale up the stakes of literary study in order to argue for our relevance.  Bigger, we commonly assume, is better, and will garner for us more funding, more attention, more significance. "[I]t is characteristic of the cognitive expansionism of literary studies… that most of its energy has been invested in extending outward from the nation rather than inward to the regions and localities, not to mention the institutions, that are equally corrective to the thoughtless assumption of disciplinary nationalism." McGurl concludes (rightly, I think) that there is no one right scale of literary study, and that a focus on the subnational — for example, on the institution — is as valid an area of critical focus as a focus on the transnational, cosmopolitan, diasporic, and global.

James F. English makes a similar point in his brilliant book on cultural prizes — both literary and nonliterary — The Economy of Prestige: Prizes, Awards, and the Circulation of Value:

On the other hand, we have various attempts to survey and pronounce upon the circumstances and trajectories of cultural life as a whole, based on general theories of cultural production and consumption and broad assessments of national or global trends. What’s left out is the whole middle-zone of cultural space, a space crowded not just with artists and consumers but with bureaucrats, functionaries, patrons, and administrators of culture, vigorously producing and deploying such instruments as the best-of list, the film festival, the artists’ convention, the book club, the piano competition. Scholars have barely begun to study these sorts of instruments in any detail, to construct their histories, gather ethnographic data from their participants, come to an understanding of their specific logics or rules and of the different ways they are being played and played with. In our time, prizes have become by far the most widespread and powerful of all such instruments. But there are many other candidates for the sort of analysis I am undertaking here, especially in the areas of arts sponsorship, journalism, and higher education.

McGurl and English participate in what English has called — and what I think we should all, in our mania for naming, call — the "New Sociology of Literature" (on which there will be a forthcoming issue of NLH). Take a look at English’s course description of the same name to get a sense of its contours:

[T]he convergence of sociology and literary studies has never been more widespread or more productive. Some instances include the history of the book, as developed by Chartier, Darnton, Stallybrass, and others; the sociological critique of aesthetics as revolutionized by Bourdieu, Herrnstein Smith, Guillory, and the New Economic critics; analyses of literary intellectuals and the conditions of academic life (Graff, Readings, Watkins, Collini, etc.); the expansion of reception studies (Radway); the impact of systems theory on literary studies and aesthetics (Luhmann); and recent scholarship on culture and governmentality (Hunter, Bennett). Meanwhile, within Sociology departments, the study of literature has acquired new energy and visibility, thanks to the revitalizing impact of Bourdieu, the influence of Konstanz school reception aesthetics (Griswold, Long), the “strong program” in cultural sociology at Yale (Alexander, Smith), and the explosive theoretical interventions of Bruno Latour. Finally, we can point to the recent impact of work by Franco Moretti and Pascale Casanova, suggesting as it does that the expanded optic required by comparative, transnational, or global frameworks of analysis demands a new articulation of literary with sociological method.

I think this middle zone — whether or not we want to call its study "sociology" — has much to recommend it as an area of focus. At best, our focus on the "middle" helps us keep in sight both the difficulties that inhere in individual works or groups of works and the broader "field" within which authors reflexively position themselves. For example, does this framework — English’s focus on prizes, and his discussion of the analogy between athletic and humanistic contests — not illuminate the New Yorker‘s recent cover, "Literary Field," by Chris Ware, which launches its "20 under 40" fiction issues (more of which are forthcoming)?  Is the bitter, eye-rolling, angry conversation the publication of this list has aroused not precisely predicted by English’s analysis, not in some sense precisely its point?

 

What is the significance of issues like this?  A similar commotion arose — entirely predictably — after The Millions released its "Best Fiction of the Millennium" list last year.  To what degree should we accept such lists and prizes as a natural part of the cultural field, or, if we don’t like such lists and prizes, what can we do to dismantle these middle-zone institutions?  I ask these questions both because I’d love to read your answers in comments and also to remind skeptics what reflexive sociology should be: not a weary explanation for why we’re all fundamentally cynical position-seekers — though who can deny that we sometimes are? — but rather a way of understanding our own situation, and the larger dynamics our individual choices participate in creating, that allows us finally to take control over that situation, to change the field or dynamic we are also analyzing and embedded within.

The Origins of Bad Writing

in bad writing, future of the humanities

(Crossposted at Arcade.)

As Cecile Alduy points out in a recent ARCADE post, bad writing is far too common in literary criticism, which is surprising given the degree to which we are supposed to be attentive students of language and style. Cecile’s post has gotten me thinking: Why do we write so badly? This badness originates, I think, from a set of conflicting institutional imperatives, which get turned into habits of mind. Here are a few explanations I’ve come up with. Please do add more in the comments section.

(i) Despite various disciplinary innovations over the last three decades, we are still asked to become specialists in historically and nationally defined fields, but we are simultaneously told that the essence of literary study is attention to form. Thus, our object of expertise is confused right from the start. Are we formalists or historians? Can we be both?

(ii) Despite the wane of theory, we are still told that literary study must be made "rigorous" through the "application" of various kinds of theory. Unfortunately, each theory or theoretical tradition is taught to us only in partial or fragmentary form, either in "Introduction to Theory" courses or as secondary reading in traditionally (historically, formally) denominated courses. E.g., Let’s read a helping of queer theory with our early modern drama! This gives birth to a theoretical "mash-up" culture, in which radically incompatible theories populate our arguments. E.g., I’m a Lacanian post-Marxist deeply concerned with Spinozan debates surrounding postcolonial ethics, especially in relation to the Victorian novel!

(iii) Part of our scholarly training involves reading huge amounts of secondary material larded with jargon. We learn that to be a serious scholar or critic is to speak in a certain idiom. Canny aspiring professionals, we write in the style of what we are asked to read.

(iv) Often, despite our disciplinary self-definition, there is an attendant sense that simply writing about literature or cultural phenomena is not sufficient. If we want the grant or the fellowship that will get us through the next year, we need to concoct elaborate answers to the "so-what" question. We therefore have an incentive to aggrandize the importance of our work: we’re being political, challenging norms, overturning conventional modes of thought, etc. Who knew a close reading of a naturalist novel could do so much positive political work!

(v) Finally, after we’ve written our stylistically mangled dissertations, which try to speak to or satisfy all of the above, we’re asked to turn the dissertation into a book that has a "wider audience." Well, we’ve already written three or four hundred pages in our carefully cultivated "bad" style. We’re not likely to make much of a change, and — I’d suggest — we’ve largely internalized the habits of writing that result in the badness of our style. From here on out, this is how we’ve habituated ourselves to write critical prose. Breaking those habits — which, if we’re lucky, have led to our successful academic careers — will be very difficult, indeed.

This is, as I say, only a partial list of explanations, and certainly not meant to be a deterministic account of why any one person makes whatever choices he or she makes on the page. It is, at best, a model that offers guidance in formulating a new way forward. If we want to overcome our badness, I am suggesting, we need to become aware of why we’ve become bad in the first place. That is, we don’t write badly because we’re bad writers. We write badly because we’re canny or good writers, who write to survive in a very confused institutional ecology. As we change our writing — and we are each responsible for our own writing — we must also change that ecology. How to do so may become the subject of a future post. Suggestions are welcome.