Thought Leaders
The Road to Unicorn: The Next Billion-Dollar Startups Will Be Built by Tiny Teams

Is two weeks a reasonable timeline to build a custom CRM that combines deals, accounting, fundraising, agents, and partner workflows in one interface? Conventional logic says no. Yet I keep seeing versions of this happen, because the cost of building internal software has plummeted while integration and onboarding have not.
A recent example from our own work demonstrates this. Our non-technical co-founder, Denis, built an internal CRM in roughly two weeks, with orchestration support from our engineer and me, and parts of it were already running in production while he was still tinkering with it. The system connected to a real database through an admin panel so the team could monitor the health of 1,000+ clients in real time, and it also covered partner management with referral links and payout tracking.
He built it to solve a problem that every fast-growing team runs into. Off-the-shelf CRMs pull you into someone else’s workflow. You spend time learning features you do not need, you run into limitations, and you spend even more time integrating tools so the system reflects how your business actually works. When the underlying tools let you build faster than you can onboard, the old build-versus-buy tradeoff changes, and more teams start building their own operating layer.
Shortening the loop between intent and execution
Across the market, AI is reducing the time between an idea and a working first version. This change came about because you can now hand an agent a well-described task and get back a first draft that is usable enough for a senior engineer to review, correct, and merge. At SquareFi, we estimate that about 95 percent of our code is produced with AI assistance, and our core technical group went from roughly ten people to four. This isn't simply a cost-cutting gimmick (although unicorns do try to stay lean); it is a realignment of resources. With fewer humans, we are shipping 10x more high-quality code.
This approach is useful within and across several departments. Design teams increasingly use Figma plugins to convert designs to HTML, then use AI tools to build small prototypes for first-level testing before anything reaches the development queue. We can now iterate by testing ideas early without waiting for capacity.
We also run agents where the downside of slow feedback is high. We have security agents that continuously analyze logs and firewall activity for unusual patterns, and we use an agent that analyzes every GitHub commit before it merges to production while comparing it against the current threat landscape. Humans rarely do that kind of repetitive diligence consistently, even when they care a lot.
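The value of this kind of agent is that it applies the same simple check to every log line, every time. As a minimal sketch of that repetitive diligence (the log format, threshold, and function names here are illustrative, not our actual setup):

```python
from collections import Counter

# Illustrative threshold: how many failed logins in one window looks unusual.
FAILED_LOGIN_THRESHOLD = 5

def flag_suspicious_ips(log_lines, threshold=FAILED_LOGIN_THRESHOLD):
    """Return IPs with more failed logins than `threshold` in this window.

    Assumes lines look like: "2024-05-01T12:00:00 FAILED_LOGIN ip=203.0.113.7"
    """
    failures = Counter()
    for line in log_lines:
        if "FAILED_LOGIN" in line:
            ip = line.split("ip=")[-1].strip()
            failures[ip] += 1
    return [ip for ip, count in failures.items() if count > threshold]
```

A human analyst would apply this rule inconsistently across millions of lines; an agent applies it to all of them and escalates only the exceptions.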
The broad result is that actions move through fewer handoffs and fewer delays caused by waiting for a specialist to become available.
Knowing what to do matters more than knowing how to do it
You can ask an AI agent to build almost anything, and you can do it at a fraction of the time and cost of training a person to produce the same first draft. Output quality still tracks the precision of your request and the strength of your validation.
In many startups now, specification quality is the constraint. The most valuable people in an AI-driven team are often the ones who deeply understand the domain, can describe systems precisely, and can validate results without hand-waving. New job labels have started to follow that reality, including spec writers, domain owners, and AI orchestrators. The label matters less than the capability.
This shift also changes who becomes effective. Strong managers who can understand a project quickly and describe it simply can now produce more output than many engineers, because their intent can be multiplied through agents.
I’m often asked by other founders how far this can go. I don’t think there is a universal answer, but I do think the philosophy maps well to traditional fintech, because it’s an area where the work is complex but the systems are describable and testable.
Yes. Humans will still have jobs.
The last thing I want is for this to read as the musings of an evil fintech founder who wants to extinguish the human race. Any sane organisation knows that it is people who keep the gears turning.
I believe fintech necessitates discipline and accountability. AI ensures the former; humans ensure the latter. Large financial transactions should remain human-gated: agents can prepare a payment order, but a human should sign it. Final compliance decisions also carry legal responsibility. If a compliance officer approves a counterparty, the accountability sits with the officer, not with the agent that prepared the case.
So the question is not whether you can automate everything. The question is how you allocate human judgment to the highest risk moments, while using agents to remove the bulk work that slows experts down. Compliance preparation is a good candidate. Adverse media checks, counterparty analysis, and documentation assembly can be automated so a compliance officer receives a case that is mostly prepared and spends their time on the decision.
That combination is efficient and can be held accountable.
How to be AI-first
A lot of teams say they are AI-first, and by that they mean a chat interface on top of the same infrastructure. I am much more interested in AI as an internal operating model.
In our work, we use AI heavily internally, while product-level AI is currently limited to specific areas like support and accounting agents. This is a practical boundary rather than an ideological one. Risk behaves differently in finance, and product autonomy needs careful constraints.
One trend I expect to grow is developer-facing infrastructure that plugs into agent workflows. For example, we are planning to release a SquareFi MCP server so developers can integrate with our API more easily and connect us into their own agents. The practical use case is a finance agent that can analyze your finances, prepare a payment order, and then ask you to sign it.
This is also why I pay attention when leading labs publicly argue that models are not yet equipped to make irreversible high-stakes decisions autonomously. Fintech does not get to pretend that errors are harmless.
What this means for founders building now
The CRM Denis built was an internal project, but it represented a larger reality: building is getting cheaper while coordination is still hard. Communication, often treated as a soft skill, is rising in value, and technically skilled people will need to invest in it if they want to thrive in an environment where machines can do much of their work faster and more cheaply.
In this context, it becomes important to protect time for quiet thinking. The faster agents can execute, the more valuable it becomes to slow down before you give them direction. Understanding a complex architecture deeply before you describe it to an agent is where the quality is decided.
If I were starting again, I would focus on three disciplines.
- First, I would train myself and my team to write better specs. You want people who can break down a problem, define success, define failure, and describe tests. This is the new standard for operational excellence.
- Second, I would build a strict validation culture. AI makes it easy to ship quickly, and it also makes it easy to ship errors quickly. Your advantage comes not just from speed but from holding to high standards.
- Third, I would treat human judgment as a scarce resource and protect it. In high-risk domains, teams perform better by handing over preparation and repetition to agents while keeping decision-making with accountable humans.
The competitive advantage is shifting toward testing and improving, because the cost of producing a first version has collapsed. Small teams can now produce what used to require much larger organisations, since agents absorb much of the communication and coordination overhead. This does not remove the need for talent; rather, it raises the bar on what talent means.
