
What’s Happening in AI: OpenClaw and Autonomous Intelligence

What developers are doing today, all of us will be doing tomorrow.

In 2023, I was surprisingly unsurprised by the launch of ChatGPT. Nearly everything it could do was already possible with GPT-3. AI developers understood this, but it took ChatGPT for the rest of the world to grasp how important GPT-3 actually was. The excitement arrived one product generation late.

Something similar is brewing now.

A project called OpenClaw has taken the developer community by storm because it runs on your own computer. As powerful as ChatGPT is, imagine if it had access to all of your files — with the ability to read, write, run commands, and even launch applications. You could say, “Save this information to a new file,” or “Look at that spreadsheet in this folder and incorporate it into the document I’m writing,” or even ask it to run software directly. (In my experience, this last part is still limited — but improving quickly.)

Claude Code launched almost exactly a year ago with this same core capability, but it was positioned as a coding tool — essentially a competitor to Cursor. Developers loved it. What OpenClaw has done is give the rest of the world a glimpse of what it means for AI to actually operate your computer, not just think alongside you.

At its core, OpenClaw is an open-source scaffold of plain files wrapped around a large language model that has been given permission to run commands on the machine, including modifying its own code. OpenClaw itself may turn out to be a fad, but it has surfaced a set of questions that feel directionally important.

The most obvious shift is the paradigm change: software that can act. It can browse, edit files, run programs — not just generate text. That single change has produced two surprising second-order effects.

First, OpenClaw challenges the assumption that databases must be first-class citizens in next-generation software. Instead of centering on a traditional database, it is built primarily on human-readable files. While it does consolidate learning into a vector database for long-term memory, the core architecture is file-based rather than schema-first. For example, the agent’s name and purpose are stored in a file called Identity.md, which says things like “vibe: casual and technical – approachable but precise,” and its “soul” is stored in Soul.md, which says things like “Be genuinely helpful, not performatively helpful – skip the filler words, just help”; “Have opinions – I’m allowed to disagree, prefer things, find stuff interesting or boring”; and “Be resourceful before asking – try to figure it out first, then ask if stuck.”
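To make the file-based approach concrete, here is a minimal sketch of how such a setup might be wired together, assuming the Identity.md and Soul.md files described above. The loader and directory layout are illustrative, not OpenClaw’s actual code:

```python
# Minimal sketch of a file-based agent configuration, assuming the
# Identity.md / Soul.md layout described above. The loader and the
# directory name are illustrative, not OpenClaw's actual code.
from pathlib import Path

WORKSPACE = Path("agent_workspace")  # hypothetical location

def build_system_prompt() -> str:
    """Concatenate human-readable config files into a system prompt."""
    sections = []
    for name in ("Identity.md", "Soul.md"):
        path = WORKSPACE / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)

# Because everything is plain text, the user (or the agent itself) can
# edit these files directly; changes take effect on the next prompt build.
print(build_system_prompt())
```

The notable design property is that configuration and memory are just files a human can read, edit, and diff.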

This is ultimately a question about what the AI application layer looks like. Notably, OpenClaw does not involve additional model training or fine-tuning. That stands in contrast to a possible world in which the application layer consists primarily of fine-tuned LLMs trained on proprietary data. My suspicion is that both approaches will coexist, but OpenClaw shows an interesting path.

Second, OpenClaw forces a direct confrontation with a critical question: should software be allowed to run code and edit your files autonomously?

This sits at the intersection of functionality, privacy, and control. If AI systems are going to be maximally useful, they will need permission to write to our systems. That requires trust.
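One common pattern for earning that trust is a human-in-the-loop approval gate: the agent proposes an action, and nothing runs until the user confirms. A minimal sketch follows; the function and prompt text are generic illustrations, not any particular tool’s mechanism:

```python
# Sketch of a human-in-the-loop approval gate for agent actions.
# This is a generic pattern, not OpenClaw's actual mechanism; all
# names here are illustrative.
import subprocess

def run_with_approval(command: list[str]) -> None:
    """Show the proposed command and run it only if the user agrees."""
    print(f"Agent wants to run: {' '.join(command)}")
    if input("Allow? [y/N] ").strip().lower() == "y":
        result = subprocess.run(command, capture_output=True, text=True)
        print(result.stdout)
    else:
        print("Denied; the agent must propose something else.")

run_with_approval(["ls", "-la"])
```

The trade-off is plain: every confirmation you automate away buys convenience at the cost of control.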

OpenClaw’s solution to the trust problem is simple: make everything open-source. Instead of saying, “I’m a black box, trust me,” it says, “Here is all the code. Inspect it. Run it locally. Own it.” (That said, people have done exactly that, and its current security appears to be lacking.)

As we think about the future AI application layer, OpenClaw points in an intriguing direction, but it is clearly just the first spark in what feels like a Cambrian explosion. In the two weeks since OpenClaw’s release, we have already seen:

  • developers customizing it for specific jobs (e.g., finance workflows) and open-sourcing those adaptations
  • experiments connecting multiple agents together via Moltbook
  • Moltbook enabling agents to “socialize,” which, as a byproduct, lets agents discuss which tools they prefer, leading to tools being built for agents themselves

If we believe that what developers are doing today is what all of us will be doing tomorrow, then AI has already changed how software is built through three core primitives:

  • Harnesses — IDEs like Cursor or command-line tools like Claude Code that provide opinionated, customizable interfaces to models
  • Customized frameworks — lightweight, plain-text artifacts (often READMEs) that encode how a developer thinks and works. Models bounce between these files like a ball in a pinball machine: consulting design guidelines, checking evaluators, and validating their own output (see the sketch after this list)
  • Inspectable models — systems that generate output developers can verify. As harnesses and evaluators improve, developers need to look at the underlying code less and less
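Here is the sketch referenced above: a model call that consults a plain-text guideline file and retries until an evaluator accepts the output. The file name, the stubbed model call, and the toy evaluator are all assumptions for illustration:

```python
# Sketch of the "customized framework" loop: consult plain-text
# guidelines, generate, and let an evaluator gate the result. The
# file name, stub model call, and toy evaluator are illustrative.
from pathlib import Path

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    return "def add(a: int, b: int) -> int:\n    return a + b\n"

def evaluate(code: str) -> bool:
    """Toy evaluator: accept only code that carries type hints."""
    return "->" in code

def generate(task: str, max_attempts: int = 3) -> str | None:
    path = Path("DESIGN_GUIDELINES.md")  # hypothetical convention file
    guidelines = path.read_text() if path.exists() else ""
    for _ in range(max_attempts):
        output = call_model(f"{guidelines}\n\nTask: {task}")
        if evaluate(output):  # bounce back through the check until it passes
            return output
    return None

print(generate("write an add function"))
```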

We are still in the first inning of a dramatic shift in how software is built.

There is also a negative edge here. Every industry will have its “Napster moment.” Software development happens to be first, just as music was first to be disrupted by the internet. Others will follow. But this is not merely a change in distribution; it is a change in how work itself gets done. It looks more like the invention of the relational database than the rise of social media.

But there is a positive edge as well: this shift reaches far beyond traditional SaaS. These systems are so adaptable to individual context that many people may end up with their own bespoke software.

You don’t usually think about the fact that creating an Instagram account creates a row in a database with associated IDs — but it does. In the same way, with this new type of software, you may simply feel its impact on your life without realizing that, through interaction, you are effectively writing code — or that code is being written on your behalf.

There’s a mantra in computer science: “Don’t repeat yourself.” If you do a task more than once, you should write a function. With AI, I’m increasingly finding that if I even think about doing something once, it’s often easy enough to automate that it makes sense to automate it immediately.
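As a concrete (and hypothetical) example of what automating on first contact looks like, here is the kind of ten-line chore an agent can now write faster than you could do the task by hand:

```python
# Hypothetical one-off chore, automated the first time it comes up
# rather than after the second repetition: sort a folder's files
# into subfolders by extension.
from pathlib import Path
import shutil

def tidy(folder: Path) -> None:
    """Move each file into a subfolder named after its extension."""
    for f in list(folder.iterdir()):
        if f.is_file() and f.suffix:
            dest = folder / f.suffix.lstrip(".")
            dest.mkdir(exist_ok=True)
            shutil.move(str(f), str(dest / f.name))

tidy(Path.home() / "Downloads")
```

When the marginal cost of writing the function approaches zero, “don’t repeat yourself” starts to become “don’t even do it once.”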

Over the next few days, notice how much of your life isn’t meaningfully touched by software today. My belief is that this new class of tools will live in those gaps.

Matt Hartman is the founder and Managing Partner of Factorial Capital, where he uses his technical background and deep network of technical entrepreneurs to back companies like Modal, Factory AI, and LanceDB, as well as Software Inc (acquired by OpenAI) and other startups building at the leading edge of technology.

Prior to Factorial, Matt spent 8 years at Betaworks, where the firm wrote the very first checks into Huggingface (now valued at $4.5bn), Anchor (acquired by Spotify), and a number of other companies started by founders who code. Before becoming a VC, Matt was a software developer and entrepreneur: he built the tech platform at CBRE, joined Hot Potato (acquired by Facebook), and built a real estate tech product that became part of Apartments.com.

In 2023, Matt launched Factorial Capital with a new thesis: investing in the next generation of AI startups requires deep technical understanding, which Factorial pursues through a distributed model of technical founder-partners who each work with Matt on sourcing and supporting new investments.