Why I'm Betting On Open Source AI
This week, Kimi K2.5 dropped out of China. An open-source model matching frontier lab models on meaningful intelligence benchmarks. GPT-5.2 and Opus 4.5 territory.
A few months ago, it was revealed that 20-30% of new companies applying to a16z are already running open source models.
Open-source AI is clearly on the rise... But not for the reasons many people think.
I’m not betting on open source because frontier open source will necessarily beat frontier closed source in raw intelligence (AGI/ASI, etc). It is possible this happens long term, but clearly OpenAI, Anthropic, and Google are going to be large players for some time.
I’m betting on it because I believe a future where private, open-source AI is accessible to everyone is objectively a better future...
AND because the market shift to open source is already occurring in a big way, in large part due to the convergence of these three trends.**
1) AI Is Moving Beyond the Chat App
Despite what most consumers think, AI is not synonymous with a chat interface. At its core, AI is a data transformation tool. Chat is one application.
As we expand into deeper enterprise and agentic use cases, most AI will live behind the scenes. Agentic workflows. Multi-model pipelines. AI agents spinning up their own AI agents, orchestrating swarms of small and medium models (perhaps fine-tuned on your proprietary data).
For many existing business use cases, it already makes more sense to use pipelines of tuned smaller models than to route every query to GPT-5.2 or Opus 4.5. Why pay 100x the price for tasks that don’t require frontier models?
This trend accelerates as smaller models and new AI frameworks become more powerful. The agentic era doesn’t need one massive brain. It needs coordinated networks of specialized intelligence.
The diaspora of open-source models of all sizes, combined with strategic fine-tuning for your end use cases, is better suited to serve this growing demand for applications beyond AI chat.
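The routing logic behind this is simple to sketch. The snippet below is a minimal, hypothetical router: model names, prices, and the "difficulty" heuristic are all made-up assumptions for illustration, not real products or pricing.

```python
# Hypothetical sketch: route each task to the cheapest model capable of
# handling it, instead of sending every request to a frontier model.
# All names, prices, and capability tiers below are illustrative.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, made-up numbers
    max_difficulty: int        # highest task tier it handles well

# Tiered pool: small fine-tuned open models first, frontier API last.
MODEL_POOL = [
    Model("small-tuned-3b", 0.0002, max_difficulty=1),
    Model("mid-open-30b",   0.0020, max_difficulty=2),
    Model("frontier-api",   0.0200, max_difficulty=3),
]

def route(task_difficulty: int) -> Model:
    """Return the cheapest model whose capability covers the task."""
    for model in MODEL_POOL:  # pool is ordered cheapest-first
        if task_difficulty <= model.max_difficulty:
            return model
    raise ValueError("no model in the pool can handle this task")
```

Under this scheme, routine extraction or classification (difficulty 1) lands on the 3B model at a fraction of the cost, and only genuinely hard reasoning ever touches the frontier API.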
2) Open-Source Unlocks Better Unit Economics
This might be the most important trend: the bottom line increasingly favors AI ownership vs rental. This means dedicated compute.
For any company of significant size (& AI usage volume), running open source models on dedicated compute usually costs dramatically less than paying per API call to rent someone else's model. You’re paying for hosting with effectively unlimited requests, bounded only by your GPU bandwidth, versus paying per token, forever.
Combine this with the first trend: most in-the-weeds use cases don’t need the biggest, most expensive models. Nor should you be using them. You’re overpaying for capabilities you don’t need while building dependency on infrastructure you don’t control.
The bottom line always wins. And the bottom line is shifting.
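The own-vs-rent math fits on a napkin. Here's a back-of-envelope sketch; every number (token volume, API price, GPU rates) is an illustrative assumption, not a real quote, so plug in your own figures.

```python
# Back-of-envelope: renting intelligence per token vs hosting an open
# model on dedicated GPUs. All numbers are illustrative assumptions.

def monthly_api_cost(tokens_per_month: float, price_per_1m_tokens: float) -> float:
    """Cost scales linearly with usage, forever."""
    return tokens_per_month / 1_000_000 * price_per_1m_tokens

def monthly_hosting_cost(gpus: int, gpu_hourly_rate: float) -> float:
    """Flat cost: independent of request volume, bounded only by throughput."""
    return gpus * gpu_hourly_rate * 24 * 30

# Illustrative scenario: 2B tokens/month at $5 per 1M tokens,
# vs 4 dedicated GPUs at $2/hour.
api = monthly_api_cost(2_000_000_000, 5.0)   # $10,000/month, grows with usage
hosting = monthly_hosting_cost(4, 2.0)       # $5,760/month, flat
```

The key structural point isn't the specific numbers: the API line scales with usage while the hosting line is flat, so past some volume, ownership wins on cost alone.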
3) Privacy Becomes Existential
In July 2025, Sam Altman said something remarkable on a podcast:
“If you talk to ChatGPT about your most sensitive stuff and there’s a lawsuit, we could be required to produce that.”
He called it “very screwed up.” ChatGPT conversations have no legal privilege, unlike communications with lawyers, doctors, or therapists.
This isn’t hypothetical. OpenAI is currently fighting court orders in the New York Times lawsuit requiring retention of user chats. Data you thought disappeared when you closed the browser tab may still exist in systems that can be subpoenaed.
Here’s where it gets serious: as AI becomes embedded throughout operations, every company becomes a dual organism. The core business that delivers products and services, and a model factory that trains AI to support (and eventually run) those operations.
When your AI manages your core IP, that AI becomes your competitive moat.
If you don’t have verifiable control over who accesses your models and your data, you’re at the mercy of whoever you’re trusting not to steal, sell, mismanage, or reproduce your core IP.
Look at Web 2.0. Microsoft, Facebook, and Google built empires on user data and on copying business models. As AI becomes a multi-trillion dollar race with ever-higher stakes, the pressure intensifies. Corporate promises are thin walls.
Open source gives you the opportunity to own your intelligence, not rent it.
On Alignment
AI alignment is impossible at the individual or company level without end-to-end privacy. You cannot align AI to your values if someone else controls the infrastructure. You cannot build genuine trust with systems that share your data with third parties.
AI sovereignty isn’t just strategic advantage. It’s the foundation of any AI future worth building.
If you are interested in the deeper philosophical layer here --> check out my Cognitive Sovereignty Manifesto.
The Perfect Storm
The agentic era demands distributed, specialized models, not monolithic API calls. The unit economics at scale increasingly punish renting AI vs deploying on dedicated compute. And privacy becomes non-negotiable as AI becomes your competitive moat.
As these trends converge, the demand for open-source AI will increase, and the players that make it extremely easy to build on a secure, open-source stack will win.
This is what Covenant Labs is betting on.
We’re building the infrastructure layer for encrypted, open-source AI. We’ve already open-sourced an early version of Conduit, our developer framework with a custom runtime optimized for smaller models.
Soon we’re releasing our secure compute cloud: a managed service that lets even lightly technical users deploy models and build powerful agentic pipelines on dedicated, secure compute.
If you and/or your firm want to participate in this shift and capture the benefits of secure, open-source AI --> please don’t hesitate to reach out :)
--
**To be clear, I’m not claiming there won’t still be large, centralized AI models, but rather that open source will represent a significant chunk (if not the majority) of all AI usage.


