An Interview with Jonathan Lethem

Oscar Scholin

I was sitting under an oak tree that streaked shadows across the beige façade. Red terracotta poked the blue sky, which opened onto a lawn criss-crossed with concrete and dotted with trees like neurons. I was in a state rather like a dream, suspended between moments, as an AI ingested, transcoded, and transcribed a conversation I had just had with renowned author and Pomona Professor Jonathan Lethem, a critically acclaimed American novelist, essayist, and short story writer. He has published twelve novels, including Motherless Brooklyn, which won the National Book Critics Circle Award; numerous short stories and essays; and, most recently, a book of poetry. ChatGPT described him as “like a chameleon, able to blend into different literary styles and genres, often featuring elements of science fiction, noir, and magical realism … and to bring fresh insights to familiar subjects.”

I had Lethem as a professor for the class “Impossible Novels” in the fall of 2021, in which we read and discussed a range of strange works from Kafka’s The Castle to Ishiguro’s The Unconsoled. Since last summer, when I first began working with neural networks as part of an astrophysics research project, I have felt much like K. gazing up at this veiled, wondrous illusion–what is this Castle of code and dreams? So began an obsession to learn about and construct my own AI models. As I worked, I realized I could not extricate my questions about the nature of the work from the work itself. To make partial sense of this mental labyrinth I have immersed myself in (and which many of us have perhaps experienced to some degree), I wanted to have a conversation with the “chameleon” who guided my first reading of Kafka and who, as an author thinking about many related questions, offers a unique vantage on this Castle. Look here, Lethem seems to say, at this reflection in a puddle of moonlight.

Northwest Review (NWR)
I’m really glad to be here talking with you today. Last semester, I took a class called “Medieval Proof” with Pomona Professor Jordan Kirk. As part of this class, he wanted us to conduct what he calls a “readerly experiment,” a way to engage with the texts we had been reading in an active, exploratory, experimental way. I was curious whether I could build a custom neural network to generate literature.

That was a big question for me, because there’s been a lot of attention to these networks that can generate images, and of course ChatGPT has been all over the news lately. But it seems to me that many of the discussions about these networks center on what it means if students can have their essays written for them, on how copyright is affected, and on plagiarism.

So I’m curious what you see, both as a professor and an author, as the potential impact of these sorts of generative AI networks. More broadly, I’m interested in the effect on art, and in particular on literature: both on a practical level and on an ontological level–does it change how we think about literature and what it means?

Jonathan Lethem (JL)
That’s an enormous question. I feel like I’ve been thinking about this practically forever, because of the extent of my early reading and engagement with science fiction (SF). Now, it’s a commonplace that SF wasn’t meant to be predictive, and it’s a banal error to assess it in terms of how much it anticipates real futures or real developments in technology – instead, the real emphasis is on its capacity to express present states, present collective realities, present technological experiences and political experiences.

But thinking coherently about the present is a form of prediction, too! As a laboratory for cautionary anticipations of certain developments, SF did an incredibly good job of making me feel like I had already encountered certain things before they came along.

It’s happened to me many times. From Philip K. Dick’s “news-clown,” which suggested the news might develop into a form impossible to differentiate from aggressive satire, to the endless rumination on the subjects of computers and robots. Even the very simple stories that Asimov was writing in I, Robot offered a number of philosophical anticipations for literal experiences that weren’t going to be available for a hundred years, or more.

Yet when these things drop into the present, even in some primitive form–or sometimes not so primitive–it seems the actuality is always textured in such a way that it’s still totally disconcerting. And often impossible to accept and cogently think about. But again, it’s also often the case that when something is announced as “just having arrived,” you look around and realize that in some way it was already here. Or then again sometimes things are announced and claimed to exist and you think: No, actually it’s not really here yet.

So in this case I experience some kind of weird combination of: I already thought about that/I haven’t yet begun thinking about that. It’s commonplace and it’s been mistaken for an innovation, and it’s never really gonna be here. All of those feelings nest together when it comes to the AI chatbots.

I’ve glanced at some of those texts. I’ve been presented with the idea–meant to provoke maximum anxiety in teachers–that I could be fooled. I mean, let’s take it as a given that I’ve already been fooled–and for that matter, I was fooled before the AI was available, by “fake papers.” But I actually think that the domain of literature is a little less changed by this, or changeable, than people seem ready to declare, whether in a typical paranoid, dispossessed kind of way, or in the ecstatic technophilic embrace of the idea that we’ll now be able to outsource all our creativity. I tend to think: isn’t it basically just recombinant existing material?

It’s a lot of recombinant existing stuff. And, therefore, good for fooling you. But also good for educating you about just how much language is floating out there, and reminding you of the degree of either conscious or subconscious resemblance among existing writings.

It’s now a while ago, the period when I was really focused on appropriation in art – my Ecstasy of Influence essay is almost twenty years old. I was thinking then about sampling in music, and about collage and digital reproduction in visual art and film. One of my feelings was that current kinds of digital applications were only making literal and vivid–and creating a sense of urgency around–the fact that art is appropriative in its fundamental actions, and always had been.

Some things provide a visible leading edge, like a rap song in which you hear embedded a giant chunk of some old funk song and you’re like: I don’t know if that’s really a new song, it sounds just like, say, Earth, Wind and Fire. Or an Andy Warhol painting that’s someone else’s photograph. But then these are just the leading edge of a tendency. Borrowing is so much more at the root of making any new image or piece of language than many people realize.

In The Ecstasy of Influence, I set out to provoke those anxieties and complicate them at the same time. There’s a magic trick at the end where I reveal that all the language in it comes from other people’s voices. I haven’t read it in a while, but I think I’m still pretty pleased with that result. And it generated a lot of excitement and attention at the time. It’s been taught frequently, especially in art schools.

I suspect that essay would still represent the kinds of thoughts and feelings I’m having when I see these nicely smoothed-out, rendered pieces of digital art that computers now belch out on command, and these seductively polished essays–or, potentially, fictions. These results are being created by computers that are basically just superhuman appropriation machines.

In doing so, they’re working as tiny, little busy-beaver machines that nibble at the edges of a gigantic ocean of human utterances. They’re taking human sentences and reworking them and stitching them together and smoothing them out and they’re really great at that.

But they’re still just basically a kind of a mechanical flea on King Kong. And that King Kong is that we exist, that we built the appropriation machines and all the sentences too, and that they remain at our beck and call–even though we may be astonished by the results, we’re being astonished by ourselves. They require our instructions. You know, “I want to see a photograph of a horse eating a piece of cake with a fork,” whatever it might be.

It’s a giant mirror pointing back to us. The AI are just making visible what we already do, what we like, what we tend to think about. They’re doing exactly what we want our machines to do, which is to fool us, to cause us to think they’re magic. Which, not incidentally, is also one of the things we often want our art to do.

The production of sentences, and stories, relies upon a vast information bank of earlier such things. This doesn’t seem to have changed in its essence, just because it is now a machine doing the relying. I’m thinking here about the process of writing fiction. The truth is that if someone isn’t a reader, and I mean a voracious reader–which is to say they really love it and go through a decade at least of a kind of compulsive ingestion of different kinds of stories–they don’t really develop a brain which can produce stories. They don’t possess access to enough versions and models of how narrative fiction functions, the many ways storytellers solve those kinds of problems with sentences. In other words, you have to turn your brain into one of those computers, if you want to make a narrative yourself. So the Chat AI, even though in one sense it’s real, also strikes me as being an allegorical object. Thinking about it is a way of becoming fascinated with our own brains and how they operate.

I moved from the visual arts to writing stories. And I’m 58; I grew up when the heroic image of modernism was still really, really powerful. Even if it had been succeeded by what we now call postmodernism, it didn’t yet have that name. I was sometimes confused about the presence of subject matter in my chosen art form, because I’d inherited this conception of abstract art as the highest form of creativity–an art purged of all reference.

It made me interested in literary modernist extremes, like the language experiments of Gertrude Stein, as well as others who tried to abstract language into a pure form. Yet I realized pretty soon that what I was drawn to was a much more prosaic thing. If abstraction was exalted, then I was a fallen person, because I really wanted my stories to be operating in a partnership with reference, mixing my ideas and images with recognizable ideas and images in the minds of other persons. Even in this realization–that the language arts, for me, couldn’t be abstracted–I still held a certain anxiety, and as a result I restricted or tried to restrict certain kinds of ideas from being too prominent in my writing.

Now, I’m actually not such a deep thinker or so philosophically adept–I’m certainly not trained in philosophical thinking, as much as I may be attracted to it. I don’t think I’m really in any danger of writing anything that someone would call the “novel of ideas.” Yet I seemed for a while to be fearful I might commit what could be called proletarian fiction, or write novels that urgently espouse a kind of political point of view, or be too full of sociological content, as though these might be less “artistic” than ones that are somehow purely concerned with consciousness or language or human experience in a kind of purified, non-sociological sense.

Well, I gradually came to feel that that viewpoint was ludicrous for a lot of reasons. But one was that I realized, in my own work, the sentences that interested me most were the ones trying to do the hard work of thinking about something I found difficult to express–rather than being only decorative or descriptive or funny. The ones that struggled to make something esoteric–esoteric to myself, I mean–more clear were the sentences that struck me as coming out the strangest and the most remarkable or unusual.

I’m avoiding the word “original” because that cuts against my whole other rhetoric, that maybe fewer things are original than we like to think. But the sentences that interested me most as sentences were in the service of trying to describe some idea that was either uncomfortable or conceptually bizarre. An attempt to capture some relation to the world that I felt only I knew about. And that included sociological descriptions of how exactly I came into the human world in Brooklyn in the 1970s under certain conditions–which were, in fact, ideological situations. And I thought, well, if it’s those pressures that are making the really amazing sentences, then it can’t be that excluding content or subject matter or sociology is a good way to get to amazing sentences. Because it came to seem to me that, actually, it was almost always the opposite.

And so, to apply this thinking to the Chat AI, I’d say that the one thing we can’t ask the computer to do is try to explain something to us that only it knows or feels. In that sense the Chat AI is structurally excluded from a whole part of the project.

NWR
That’s all really interesting … I’ve been wondering a lot about these questions of originality and ownership. I read part of your Ecstasy of Influence in English 87, a class about writing theory and practice–specifically about plagiarism. I think you had a line like “these stories aren’t mine, but here I’m going to give them to you anyways.” I think it’s interesting because these networks are trained on stuff that’s already been written. And what’s fascinating is you can have one generate text from its own input, even train it on itself.

Part of the experiment I did was to take some random piece of poetry, feed it into my model, get the output text, and then put that back in, in a cycle. And what’s interesting is that people actually reacted emotionally to some of these pieces. In a way, since the model was built by humans and trained on human authors, there’s a sort of humanity inextricable from the machine. So when you use one of these models you are connecting in some sort of way to a “mind,” a composite of a bunch of other minds.

JL
Yes, well, but we also see faces in rocks on Mars. And so again, we’re learning about our incredible capacity to throw ourselves like ventriloquists out into other non-human zones and find ourselves everywhere.

NWR
Maybe because art is a sort of mirror to ourselves. We’re looking for patterns, even though there may not necessarily be something there. But we’re just projecting in a way, I guess.

JL
Artists have always used random generative tricks to make language reveal itself, including to reveal its potential to be emotionally suggestive and philosophically profound. Now, this thing is capable of doing that, perhaps, at the push of a button. But if you look at, say, some of the most heart-rending songs that David Bowie sings, he produced the lyrics by fractured methodologies of cut-ups and exquisite corpse games with language, in which he allowed random chance and surprise to reveal language combinations. The result can be intensely emotional and human. The method opens doors to existing feelings.

NWR
Do you see a possibility for authors or artists to use this generative AI as part of their work, as opposed to treating it as a separate, potentially fun, weird, potentially dangerous thing while, over here on this other side, there’s the realm of “real” art? Could these networks inform the creation of art, or do you think the artistic community is a bit more hesitant to bridge those two phenomena?

JL
I think some people will employ it as a meaningful generative tool in making things that matter. But that may be a small number. Some number of others will claim to be doing that, and it won’t be persuasive or impressive, only distracting. We also have to accept the overwhelming likelihood that it will be used to fill up even more of infinite space with utterly forgettable verbal artifacts.

But the thing is, the internet is already that. Without meaning to insult all the people who write such stuff, often out of the kindness of their hearts, because someone else asked “What’s the best restaurant in Florence?” or “How do I kill the rats in my basement?” There’s a lot of volunteer language on the internet that just floats around and fills up space. Language-wise, I mean. It does have other purposes, social purposes. Then there’s also a lot of paid or barely-paid, anonymous kind of material that’s just as forgettable and probably sometimes less grammatically correct than what the AI is going to begin flooding the zone with.

It’s funny–there’s this paranoid theory about what nanobots might do to the universe: that they’ll turn everything into gray goo by converting all the physical matter in some mistaken way. Someone will give one command that will be misunderstood by the bots to mean that everything they can lay hands on should be reworked into more nanomachines, or into paperclips. And suddenly the whole world will be kind of like a slurry of carbon atoms and we’ll all die.

But in a weird way the internet spurred the production of a kind of language equivalent of gray goo, even before the advent of these adept AI that are now making their debut. People paid to do vague recaps of one another’s articles, just to fill up websites, to produce “content.” So suddenly there’s content everywhere–a lot of cut and paste work, or scantily-changed paraphrase. That’s definitely work that we’ll now see the machines doing instead.

Is that terrifying? I can’t decide whether it is or not. To think of a universe of gray goo filling up, language mud stacking up on servers. Will anyone ever look at it, or even remember it a moment after they’ve glanced at it? There’ll be so much. For every movie, there’s gonna be like five million reviews, and five million plot recaps for every episode of every television show. There’s an urge for content to exist even when no one’s pretending to be interested. Just think about how you already need to click through these gray goo language zones. You searched for a review of the most recent film, or wanted an article on a subject, and you landed on a gray goo equivalent instead. I think people are now freed from doing that work because it’s so very easy to have it done with computers. It’s a little bit like an H. P. Lovecraft story–the horror of knowing how much language there can be without any of it mattering.

NWR
Now that’s a chilling thought. But will this AI gray goo really matter in the long run? Or is it gonna be a big thing right now, but–

JL
I don’t think it has mattered yet. I mean, it doesn’t, actually. There’s not actually a zero sum where it crowds out language that people are eagerly engaged with. I guess in some practical situations it might make it harder to locate the non-gray-goo, but that’s already an issue. I don’t know, Oscar. I’m obliging you by trying to have an answer to all these questions. Of course I don’t know.

NWR
I mean none of us really know, but it’s really interesting to think about.

JL
Yeah.

NWR
It’s also interesting, I think, because there’s a lot of hype, but when you really look at these networks, there are a lot of really funny mistakes and things that they do. I think in some sense that can be kind of a reflection of the source material and the “humanity” they were trained on. I know Meta had a language model called Galactica that was aimed at being a scientist’s companion, something to consult for research articles or whatnot. It lasted not even three days before being pulled down, because the stuff it was writing was not only–

JL
It turned into a scurrilous racist. Yeah. But this is also a mirror function. Human beings are really, really good at being confidently wrong about things. There’s a lot of that going around. I’ve noticed. Very likely that describes me in this conversation too. I hope it is at least good comedy the way I’m being wrong. The racist AI is quite dark comedy, but it is of course a revelation of our general propensity to wade in and make uncertain situations certain. For racism is above all a reaction to uncertainty, fear, stress–fear of the Other, obviously. The mystery of different forms of being. Even when we’re more benign than a racist AI we land on all kinds of premature wrong conclusions because of this incredible desire to say something that clarifies or settles things that may not be so easy to settle. So, what a surprise: our machines inherited our ability to mansplain and be brazenly wrong. The rapidity with which it landed on racism is like a de-sublimation of human biases. Maybe that’s its best use. It’s like Archie Bunker. We can all be like, “go laugh at the racist bot”–its value is that you feel superior, that you know to distinguish yourself from it.

This makes me think about what a persistent fantasy it is to have a “science companion.” I mean, it’s basically Robby the Robot or Mr. Spock–someone who, when you turn to them, can confidently say, “Well, actually, the likelihood of a collision is presently 47 percent.” You really want to be in the company of that guy. Yet they are always, of course, also the butt of jokes about what they can’t grasp about human life, emotions and so forth, as Robby the Robot and Mr. Spock were prone to be. But we got worse than a Spock or a Robby – we got an Archie Bunker instead. That’s pretty rich. That’s pretty funny.

NWR
That is hilarious. It’ll be interesting to see how different fields within literature respond. Take science fiction, which is perhaps more immediately dealing with some of these issues, at least in content. I wonder, now that we actually have these sorts of things, what might shift, and how that might open up space to think about some of these questions.

JL
You know, I read science fiction voraciously early in my life because at the time, growing up in the 70s and 80s, the field’s image of itself–which was still partly possible to invest in and believe in then–was that it was one body, and that the science fiction writers had all read the entire history of the field and produced meaningful stories because they were always aware of what everyone else had written and said. Of course, that probably wouldn’t have been possible after 1944 or 1952, or shortly after.

Back then, if someone said, “Clifford Simak did this better in his City series,” everyone would immediately intone, “Ah yes, Clifford Simak, the City series.” This consensus was shattering–for many reasons, many of them very good–right around the time I was absorbing that ethos and writing science fiction myself. Of course, many things that were meant to be canonical in the field made my eyes glaze over, or made me icy with discomfort at their naivety and their racism. But I did kind of at least play along with the idea that yes, we are all working from the same field of reference.

I don’t know how much the SF field can still maintain that. I’m sure some residue still exists; I detect trace quantities of it in various conversations. It’s like being a jazz musician in 1961: you could say, “Well, Clifford Brown had already done that kind of thing. I heard that solo once before.”

But it can’t really be true. It was already not true when the field was much smaller and more contained, because science fiction, to its lasting credit–and also to its lasting confusion–has succeeded in becoming persuasive. Science fiction won–it is the primary culture now, it has become synonymous with popular culture. Therefore the boundary gets confused. Loads of people are working with motifs and materials that they inherited from the science fiction field without anyone licensing them to do it. And sometimes without any awareness that they’re repeating motifs that are enormously familiar.

An obvious instance is dystopian literature for children, which has become the predominant thing that teenagers or tweens read. Incredibly dark dystopias were once this sort of exotic substance – now they’re like Archie Comics. You can’t really be proprietary about these motifs. They won because of their seductive appeal and because of their relevance, their explanatory power over the kind of century we were living in, the experiences people were having with modernity, with capitalism, with technology.

After you win, you have to notice that you’re not in charge of everything and everyone anymore. A cult that has to supply its own canonical terms and invent its own body of criticism–because no one else cares, because no one takes it seriously–can have a meaningful kind of boundary. But now there’s no boundary.

Yet it is also the case that self-defined genre science fiction still operates and publishes within an imaginary sphere of activities–as though the wider victory hasn’t happened. There remain specialty publishers and little magazines, or online sites that are the equivalent of the old little magazines. And there’s a discourse that continues, a very rich and also very hermetic discourse. If you hear me hesitate, that’s because I’m not a participant in it, or even a witness to it, the way I was when I was younger.

In fact, I’m sure that there are people dealing both capably and naively with new technological prospects, likely in amazing ways that would blow your mind, and blow my mind. Maybe you are reading them and I’m not. But I can’t know whether someone is doing better work at thinking about the limits of and prospects for our convergence with machine intelligence than, say, Stanislaw Lem, who to me was the writer who took those forms of exploration to the cognitive limit during my own time of reading SF voraciously.

I can still draw on Lem’s insights when I reread him. I’m still encountering someone who, writing in 1950 or 1960 about what is and isn’t likely to happen, for instance with virtual reality, seems to encompass every possible development. In 1960 Stanislaw Lem was still out ahead of where, for instance, Mark Zuckerberg is, in terms of understanding what humans can and can’t do, and what they will and won’t do, in their involvement in a virtual Metaverse.

How did he get out there? Well, he was a super genius. Being so extravagantly comprehensive and lucid is certainly not something I was ever capable of. One reason I stopped exclusively writing science fiction stories–and in fact became only very intermittently someone who wrote them–was that I didn’t have the raw cognitive gift that Kim Stanley Robinson or Stanislaw Lem or Olaf Stapledon could apply.

So I was playing catch up. At best, but only sporadically, I had the gifts you’d find in the writers who are associated with Galaxy magazine, like Robert Sheckley and early Philip K. Dick. I could do satirical, sociologically-extrapolated stuff.

Mostly, I just liked dreamlike fiction. I was as into Kafka as I was into the surreal science fiction I was reading. I took my cues from what interested me, and also what I was likely to be better at doing. I dearly hope there’s someone like Stanislaw Lem, perhaps a hundred like Stanislaw Lem, currently bearing down on new ideas about AI for science fiction readers. I don’t know.

NWR
Yeah. Well, this has been really fascinating. Thank you so much for this conversation, and, like you said, it’s such an interesting exercise to try and imagine the future, but there are just so many possibilities, it’s too much–

JL
Too much work. It’s hard enough to imagine the present. That’s my feeling. Just try to imagine the present. Because it is the case that most everyone, out of emotional necessity, as a result of our need to slog through daily life and not be overwhelmed, lives in a convenient fiction of the past. Simply looking around at what is the case is more than challenging; it’s enough to be terrifying. Forget the future.

Jonathan Lethem is the author of twelve novels. His thirteenth, Brooklyn Crime Novel, will be published in October. He teaches Creative Writing at Pomona College.

Oscar Scholin is a poet and physics-mathematics student currently enrolled at Pomona College. His work has previously appeared in Northwest Review.