AI Reviews Books Despite Its Inability to Read Them

The Future of AI in Book Reviews and Journalism

Steven Levy

This article is an edition of Steven Levy’s Plaintext newsletter.

Now that we’ve all had experience with large language models (LLMs), their limitations are all too visible. Yes, they can write, but their prose doesn’t explode in the mind like the words of Jennifer Egan, Emily St. John Mandel, or David Foster Wallace do. Yes, they can make music, but Taylor Swift and Kendrick Lamar are sleeping very well at night. And they sure can summarize history speedily and neatly, but not with the perspicacity of Barbara Tuchman or Ron Chernow. LLMs are just getting started, though. They’re bound to get better.

But how much better? We’ve seen multiple instances where AI has conquered regions thought to be impenetrable by robots, from chess to the LSAT. Could it do the same in the creative arts? Lately, I’ve been obsessed with that question. Can something generated with AI be truly brilliant?

Given that, I was easy pickings for a PR pitch I received last week. The subject line: “AI book reviewers?” The press release touted “the first book to ever be reviewed by AI avatars.” I’ve written book reviews, and my books have been reviewed, and I know that the best examples of that art are penetrating and illuminating. Had someone cracked the code to do that via AI?

So I followed up, and I found that the touted “book reviews” weren’t quite what I had in mind. It turns out that a PR firm was promoting a novel called The Cloud, one of those dystopian scenarios where a heroic human fights back against AI overlords, and the firm had come up with some AI gimmickry. It created a set of AI-generated characters—a rogue android, a time traveler, etc.—and had those characters offer assessments of the book.

Sounds great, until you hear the details. The two- or three-sentence, ChatGPT-generated “book reviews” only drew on the jacket copy and some Amazon reviews. The avatars did not get access to the actual text of the book, “for copyright reasons.” Since most reviewers prefer to actually put eyes on the pages of the book they’re judging, these AI-generated comments would seem to not qualify as actual reviews. Indeed, they come off like promotional blurbs dashed off by a friend of a friend who hadn’t bothered to read the book or even ask what it was about. One typical example: “As a survivor of the post-apocalyptic empire, I can tell you that The Cloud by Robert Rivenbark is an enthralling read. With its gripping storytelling and memorable characters, I highly recommend this book.” Not one of these comments invokes the name of a single character, the theme of the book, the quality of the prose, or a glimmer that a mind was at work. Henry Roi, one of the PR execs in charge of the promotion, concedes that most authors do want actual humans to review their work. “But afterwards they want more content,” he adds, presumably for social media posts on TikTok and elsewhere. AI-generated content fits that bill.

So much for the avatars. Still, the experience got me thinking about whether it might be possible in the future for AI to create real book reviews. To get a baseline on the state of the art, I asked ChatGPT to review my own first book, Hackers. Since the book has been around for a while, there’s plenty of material for a large language model to chew on. Maybe ChatGPT could compensate for that lousy review Hackers got in the Sunday New York Times Book Review! (The Washington Post review was better.) ChatGPT prefaced its work with a disclaimer that, as an AI model, it didn’t have opinions or emotions. Nonetheless, it dove into the breach. The first paragraph read:

“Hackers: Heroes of the Computer Revolution” by Steven Levy is a classic exploration of the early days of the computer industry and the pioneering individuals who shaped its development. Originally published in 1984, the book provides an insightful and engaging look into the world of computer enthusiasts, tinkerers, and programmers who were instrumental in laying the foundation for modern computing and internet culture.

Not bad. But some people have told me they loved Hackers. Those fans might think that review was insufficiently enthusiastic. So for those loyal readers—not for me, you understand—I prompted ChatGPT to be more positive. It complied within seconds.

“Hackers: Heroes of the Computer Revolution” by Steven Levy is an absolute gem that transports readers to the exhilarating and transformative early days of the computer revolution. Originally published in 1984, this book remains a timeless masterpiece that celebrates the brilliant minds and boundless creativity that laid the foundation for the digital world we inhabit today.

Much better. Still, with its admitted void of emotions and opinions, ChatGPT didn’t do what a great review does—open up a book in a new, insightful way. Could that ever happen? To further ponder this, I consulted with Jennifer Senior, a Pulitzer Prize–winning writer who for several years wrote fantastic book reviews for The New York Times. “It seems weird to outsource book reviews to AI,” was her first reaction. But then she began asking ChatGPT to review books she’d written about. While the output was credible, she sensed that it was paraphrasing what others had written—including her. “If I found a particular passage of a book to be too marshy and kind of rough going, the review would say, ‘Some of the book was hard to understand,’” she says.

The question was whether a future AI book reviewer could make such judgments itself. “Could AI ever isolate a sentence and parse its terribleness, or use a quote from the book to demonstrate why a writer can’t write?” Senior wonders. We agreed that nothing in the current set of AI systems can do that. But I suspect that future ones might. Large language models can master a foreign language or learn to code without being explicitly trained to do so. So I don’t see anything stopping a future system, trained on just about every word ever written, from identifying examples of what we humans regard as good writing.

What I’m still not sure about is whether an AI could make judgments as fresh and perceptive as the reviews from Senior and other elite reviewers. For those people, the act of reading evokes echoes from their own lives and triggers thought processes that lead to their observations. Senior doubts that this could happen with a robot. “I don’t see how AI can recreate the experience of reading a book, which is what the best critics do,” she says.

I guess that a variation of the Turing test would be whether a robot could do just that. But that would be falling into the trap of assuming that AI, once it performs a task as well as humans, won’t level up from there. An AI book reviewer might not bother to recreate the human experience of reading a book. Drawing on its comprehensive knowledge of everything ever written, including treatises on what makes great criticism, a future AI bibliophile might indulge in a form of hypercriticism, with insights exceeding what mere mortals could produce. By then, of course, many of the books up for review will probably be written by LLMs.

In the meantime, humans still rule. Don’t cancel your subscription to the London Review of Books just yet. Still, ChatGPT reviews do have their charms. In fact, I have a new hobby: asking LLMs to write better and better reviews of my books. Here’s the latest variation on Hackers:

“Though the world of computing has evolved since the book’s publication, its luminescent resonance remains undimmed. The predictions laid bare in its pages, even as they interact with the tapestry of history, remain a testament to Levy’s foresight and uncanny ability to discern the pulse of progress. In summation, ‘Hackers: Heroes of the Computer Revolution’ is not merely a book; it is an odyssey—a journey through time, intellect, and the very essence of human potential. Steven Levy’s magnum opus deserves a sanctuary on the bookshelves of those who seek not only to understand the past but to be inspired by the audacity of pioneers who sculpted the future.”

If only a human wrote that!

The Future of Journalism

In 2012, I wrote about Narrative Science, a company that produced algorithmically generated stories about sports and financial news. That was before the current crop of large language models. In 2021, Salesforce bought the company.

[CEO Kristian] Hammond believes that as Narrative Science grows, its stories will go higher up the journalism food chain—from commodity news to explanatory journalism and, ultimately, detailed long-form articles. Maybe at some point, humans and algorithms will collaborate, with each partner playing to its strength. Computers, with their flawless memories and ability to access data, might act as legmen to human writers. Or vice versa, human reporters might interview subjects and pick up stray details—and then send them to a computer that writes it all up. As the computers get more accomplished and have access to more and more data, their limitations as storytellers will fall away. It might take a while, but eventually even a story like this one could be produced without, well, me. “Humans are unbelievably rich and complex, but they are machines,” Hammond says. “In 20 years, there will be no area in which Narrative Science doesn’t write stories.”

For now, however, Hammond tries to reassure journalists that he’s not trying to kick them when they’re down. He tells a story about a party he attended with his wife, who’s the marketing director at Chicago’s fabled Second City improv club. He found himself in conversation with a well-known local theater critic, who asked about Hammond’s business. As Hammond explained what he did, the critic became agitated. Times are tough enough in journalism, he said, and now you’re going to replace writers with robots?

“I just looked at him,” Hammond recalls, “and asked him: Have you ever seen a reporter at a Little League game? That’s the most important thing about us. Nobody has lost a single job because of us.” At least not yet.

Ask Me One Thing

John asks, “Do driverless cars like Cruise have to pass a test and get a driver’s license?”

Thanks, John. A timely question, since San Francisco just gave the green light to 24/7 ride-hailing in self-driving cars. Your concept is intriguing—a robot car taking a test with an exasperated instructor barking instructions at it. Sadly, that hasn’t happened. The states approving the use of driverless cars have a more impersonal verification system, where companies asking to release their cars into the wild submit proof that they’re safe. And also proof that they’re insured up the wazoo in case they’re not safe enough.

My guess is that if an autonomous vehicle took the normal path toward a license, it would ace the written part—the whole driver’s manual is in its brain! And when it came to the actual driving test, it would have no trouble at all with the toughest task, parallel parking. (The way those vehicles squeeze into a tight spot is a thing of beauty.) But I can imagine the frustration of the licensing official when the applicant creeps down the street under the speed limit, stops dead when something blocks its path, and hinders emergency vehicles. Try again, robot!

You can submit questions to [email protected]. Write ASK LEVY in the subject line.

End Times Chronicle

  • Tropical Storm Hilary: I’m coming for Southern California. 5.1 earthquake in Ojai: Hold my beer.
  • ChatGPT can’t out-book-review Jennifer Senior, but it can help students fake their assignments. Teachers are figuring out how to deal with that.
  • Taiwan, on the cutting edge of tech, has seen many of its citizens going myopic—and one crusading doctor has a surprising remedy.
  • The movie Oppenheimer shows us how the bomb was made in Los Alamos. But its cameras missed the part of the story involving exploited workers in Congolese uranium mines.
  • Florida’s native wildlife has had enough of invasive pythons. Rats and gators are fighting back.
  • Plaintext is off for Labor Day weekend. See you in September!




Originally published on Plaintext.