Book Review: Large Language Models by Stephan Raaijmakers

As someone who owns more than fifteen volumes from the MIT Press Essential Knowledge series, I approach each new release with both interest and caution: the series often delivers thoughtful, accessible overviews — but not always in the style or depth I expect.

In the case of Large Language Models by Stephan Raaijmakers, however, the author achieves something rare: a crisp, richly informed, and critically balanced book that earns a spot among my most recommended AI books.

Language reconceived: from human art to computation

One of the most striking strengths of Large Language Models is how it reframes “language.” Rather than dwelling on philosophical or literary perspectives, the book treats language as a computational phenomenon: a system of structure, statistical patterns, and generative potential that modern neural architectures can exploit. This reframing is not gratuitous. Raaijmakers shows how, under the hood, large-scale neural networks encode, parse, and generate text based on statistical regularities in massive text datasets. Viewed through this computational lens, language becomes something a machine can model rather than something opaque.

This framing demystifies what LLMs are doing. Rather than portraying them as mystical “understanders” of meaning, Raaijmakers shows how they approximate language: predicting next tokens, modeling syntax and semantics statistically, and recreating plausible language outputs based on learned distributions. In other words, they don’t “think” in human terms; they compute, statistically. For many readers, especially those without a background in math or cognitive science, this is a clarifying and healthy viewpoint. The book thus turns the widespread mystique around LLMs into something more grounded and understandable.
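The next-token idea the book describes can be sketched with a toy bigram model. This is a deliberately tiny stand-in of my own, not anything from Raaijmakers's text: real LLMs use transformer networks trained on billions of tokens, but the core move, turning observed text statistics into a conditional distribution over the next token, is the same in spirit.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; real LLMs train on billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next token and the full conditional distribution."""
    counts = follows[word]
    total = sum(counts.values())
    # Turn raw counts into a conditional probability distribution P(next | word).
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get), probs

best, probs = predict_next("the")
print(best, probs)  # "cat" follows "the" twice in this corpus, so it is most probable
```

In this corpus "the" is followed by "cat" twice and by "mat" and "fish" once each, so the model assigns "cat" probability 0.5; scaled up enormously and with far richer context than a single previous word, this is the statistical machinery the book demystifies.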

From data to behavior: how LLMs learn, and how they’re aligned

After establishing what language is (computationally), the book moves on to how models learn. Raaijmakers explains in accessible terms how contemporary LLMs are built (deep neural networks, attention mechanisms, transformer-style architectures) and how they evolve from mere pattern-matching machines into more aligned, usable tools.

A critical part of that evolution is reinforcement learning from human feedback (RLHF), a technique in which LLM outputs are evaluated or ranked by humans and the model is fine-tuned to prefer outputs deemed more helpful, safer, or better aligned with human values. The book draws a distinction, both implicitly and explicitly, between the base phase (pretraining on huge volumes of text to learn statistical regularities) and the alignment phase, where human judgments shape the model’s behavior. This distinction matters hugely: pretraining gives the LLM its fluency and general knowledge; RLHF (or feedback-based fine-tuning) guides it toward desirable behaviors.
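The alignment phase can be illustrated with a deliberately simplified sketch of my own (not from the book): three hypothetical candidate replies stand in for a pretrained model's outputs, and a Bradley-Terry-style logistic update, one common way preference data is modeled when training RLHF reward models, learns a scalar reward from hypothetical human rankings.

```python
import math

# Hypothetical candidate replies a pretrained base model might produce for one prompt.
candidates = ["helpful answer", "rude answer", "evasive answer"]

# Hypothetical human preference pairs collected during the alignment phase:
# each (winner, loser) pair means annotators preferred the first reply.
preferences = [
    ("helpful answer", "rude answer"),
    ("helpful answer", "evasive answer"),
    ("evasive answer", "rude answer"),
]

# Learn a scalar reward per candidate with a Bradley-Terry-style update:
# nudge the preferred reply's reward up and the rejected reply's reward down,
# in proportion to how surprised the current reward model is by the human's choice.
reward = {c: 0.0 for c in candidates}
lr = 0.5
for _ in range(200):
    for winner, loser in preferences:
        # Probability the current rewards assign to the human's observed choice.
        p = 1 / (1 + math.exp(reward[loser] - reward[winner]))
        reward[winner] += lr * (1 - p)
        reward[loser] -= lr * (1 - p)

best = max(reward, key=reward.get)
print(best)  # the reply humans consistently preferred ends up with the highest reward
```

The learned reward then serves as the training signal that steers the model toward preferred outputs; in full RLHF pipelines this fitting is done by a neural reward model over many prompts, and a reinforcement-learning step fine-tunes the LLM against it, which is where the limitations the book discusses (biased feedback, reward overfitting) enter.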

In doing so, Raaijmakers doesn’t gloss over complexity or risk. He acknowledges that human feedback and reward-based alignment are imperfect: biases in the feedback, uneven human judgments, overfitting to the reward model, and unpredictable behaviors in novel contexts — all legitimate limitations. By refusing to idealize RLHF, the book maintains credibility.

What LLMs can and can’t do

Raaijmakers excels at laying out both the strengths and the limitations of LLMs. On the plus side: modern LLMs are astonishingly versatile. They can translate languages, summarize text, generate code, produce creative writing, draft essays, answer questions, and assist in many domains — essentially any task that can be reduced to “text input → text output.” Given sufficient scale and data, their generative fluency is often impressive, sometimes uncanny.

At the same time, the book does not shy away from their fundamental limitations. LLMs remain statistical pattern-matchers, not true thinkers: they can hallucinate, confidently output plausible but false information, replicate biases and stereotypes present in their training data, and fail in contexts requiring real-world understanding, common-sense reasoning, or long-term coherence. Raaijmakers’s treatment of these failings is sober — not alarmist, but realistic — reinforcing that while LLMs are powerful, they are not magic.

This balanced approach is valuable — it avoids the two traps of hype and pessimism. Readers walk away with a clear-eyed sense of what LLMs are good for and what they cannot be trusted to do.

Opportunity and responsibility: social promise and peril

Where many technical primers stop at architecture or use cases, Large Language Models goes further — into the social, political, and ethical ramifications of this technology. In chapters like “Practical Opportunities” and “Societal Risks and Concerns”, Raaijmakers invites readers to consider how LLMs might reshape creativity, productivity, human communication, media, and institutions.

On the opportunity side, the potential is enormous. LLMs could democratize access to writing, translation, and programming. They could accelerate research, education, and creative expression. They could assist those who struggle with language or writing. They could shift how media is produced and consumed. In a world facing substantial information overload, LLMs might help bridge gaps, if used thoughtfully.

But Raaijmakers doesn’t avoid the dark side. He raises warnings: about misinformation and “hallucinated truths,” about entrenched biases, about erosion of human judgment, about over-reliance on flawed models — all risks already documented in broader AI ethics discourse.

Crucially, this social lens makes the book valuable not only for engineers and researchers but for policymakers, educators, and any thoughtful citizen. It roots LLMs in real-world contexts, not abstract hype.

What comes next — and a call to vigilance

The final chapter, “What’s Next?”, doesn’t pretend that current LLMs are the final word. Instead, Raaijmakers encourages a forward-looking perspective: how might LLMs evolve? How can we improve alignment, transparency, fairness? What governance, regulation, and design principles will protect society as these models proliferate?

For me, as someone deeply invested in the Essential Knowledge catalogue and aware of how some volumes underwhelm, this book deserves to be ranked among the very best. Its clarity, balance, technical grounding, and social awareness make it a standout. It strikes a rare equilibrium between accessible explanation and serious critique.

Therefore, I urge all who build, deploy, or interact with LLMs — developers, organizations, policymakers, and everyday users — to keep a watchful, critical, and informed eye. Demand transparency. Push for diverse, representative training data. Insist on rigorous evaluation. Question outputs. Don’t treat LLMs as oracles, but as powerful tools — tools whose power must be matched by care, responsibility, and human judgment.

Final verdict

Large Language Models is not just another technical primer — it is a timely, sharp, and deeply considered guide to one of the most consequential technologies of our age. It combines accessible explanation with sober reflection; clear-eyed technical detail with broad social awareness; admiration of potential with cautious realism about risks.

For anyone seeking to understand what LLMs are, what they can and cannot do, and what they may mean for our future, whether engineer, researcher, student, policymaker, or curious citizen, Large Language Models by Stephan Raaijmakers is essential reading.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.