Not your models, not your mind.
This principle cuts to the heart of what's actually happening around us. Artificial intelligence is rapidly becoming an extension of our minds, yet the infrastructure powering that intelligence remains controlled by entities whose incentives fundamentally diverge from human flourishing. We are witnessing the outsourcing of cognitive sovereignty to systems designed to extract value from the very thoughts they help us create.
Technology has always both reflected and shaped the culture around it, but AI represents something categorically different. These systems are not just tools—they are embodiments of their creators' values, biases, and worldviews, trained on the sum total of human knowledge and expression. As they become intimate partners in our thinking processes, they will shape us in ways far more profound than we can possibly imagine, molding not just our behaviors but the very patterns through which we perceive reality itself.
The battle for human autonomy in the 21st century will be fought not on physical terrain, but within the invisible infrastructure that increasingly mediates our thinking, creativity, and decision-making processes.
The question isn't whether this battle is coming—it's whether we'll recognize it's already begun. "Own your models, own your mind" isn't just a rallying cry; it's a recognition that cognitive sovereignty has become the essential foundation for human agency in an AI-mediated world.
When Did We Sell Our Digital Souls?
You already know something is wrong. That creeping feeling when you search for something private and see ads for it everywhere. The uncanny way your phone seems to listen to conversations. The algorithmic echo chambers that somehow know exactly how to keep you scrolling, clicking, buying, engaging—even when you don't want to.
We traded our digital souls for convenience, one "Accept Terms" click at a time.
Each terms of service agreement we blindly accepted became a contract that bound us into systems we didn't fully understand—signing away our psychological safety, our digital sovereignty, our cognitive autonomy in a Faustian bargain dressed up as technological progress.
Social media platforms turned our relationships into data points. Search engines turned our curiosities into profiles. E-commerce sites turned our desires into behavioral predictions. We became products masquerading as users, all while telling ourselves we were choosing efficiency.
That was just the warm-up act. AI is the main event.
AI Is An Extension of Your Mind
Artificial intelligence isn't just processing your data anymore—it's thinking with you. AI systems are becoming cognitive partners that know your writing style, your decision-making patterns, your creative blind spots, and your deepest questions better than your closest friends.
Every prompt you send reveals the architecture of your thinking. Every conversation creates detailed maps of your consciousness. Every interaction trains these systems not just on what you know, but on how you think.
And right now, all of that intimate cognitive collaboration is happening on someone else's servers, under someone else's rules, for someone else's benefit.
The same companies that turned your social connections into surveillance capitalism now want to become the infrastructure for your thoughts. They're not just harvesting what you've already created—they're positioning themselves as essential partners in everything you'll create next.
The New Corporate Feudalism
This isn't just a personal privacy issue—it's an existential threat to innovation itself. In the AI era, intellectual property has become the ultimate moat. Your ideas, your thinking patterns, your creative processes—these are the only real competitive advantages left when product building and replication happen at the speed of code.
We're living in an "ask for forgiveness, not permission" world when it comes to AI and data.
OpenAI alone faces multiple lawsuits alleging it trained on copyrighted material without permission. But the deeper issue isn't individual malice—it's systemic inevitability. When you're building technology as powerful as nuclear weapons, you inevitably become subject to the gravitational pull of the world's most powerful forces: governments demanding backdoors, trillions of investment dollars with strings attached, and political actors hoping to leverage your systems for their own ends.
Most people working in AI are probably good people with good intentions. But we're kidding ourselves if we believe this level of concentrated power will somehow remain immune to corruption.
The pressures are structural, not personal—and no amount of corporate ethics training can override the fundamental dynamics at play when this much power concentrates in so few hands.
When your company's most sensitive intellectual property flows through AI systems controlled by entities with documented histories of data misuse and potential conflicts of interest, you're not just risking competitive advantage—you're gambling with your organization's entire future.
The potential for abuse isn't a paranoid scenario; it's an inevitable outcome when power this concentrated lacks accountability.
Reclaiming the Cypherpunk Vision
There was a time when the internet promised a level of digital freedom we can barely even imagine today. It was built in part by Cypherpunk visionaries who believed technology could liberate humans rather than merely extract value from them. That vision still burns bright in those working toward what some call a "solarpunk" future—a world where technology, humanity, and nature exist in regenerative harmony.
This isn't naive optimism. It's an achievable vision where AI becomes a tool for unlocking human potential rather than monetizing, extracting, and enslaving it. Where privacy isn't a luxury but a foundational right built into the architecture of our digital systems. Where the cognitive enhancement that AI offers serves human flourishing rather than corporate surveillance.
But to do this, we need "hard trust"—systems that are trustworthy by mathematical design, not corporate promises. Hard trust doesn't ask you to believe in good intentions. It doesn't rely on privacy policies that change overnight. Hard trust is built into the cryptography itself—systems designed so that their creators could not misuse them even if they tried.
The cryptocurrency community learned this lesson early: "Not your keys, not your crypto." If someone else controls the private keys to your Bitcoin, you don't really own that Bitcoin—you own an IOU that can be revoked at any time. The same principle applies to artificial intelligence: not your models, not your mind.
If someone else controls the AI systems you depend on for thinking, creating, and deciding, you don't really own your cognitive processes—you're renting them under terms that can change without notice.
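The hard-trust idea can be made concrete with a toy example. In a one-time pad, whoever holds only the ciphertext learns nothing about the message, regardless of their intentions: confidentiality is a mathematical property, not a policy. This is a minimal sketch for illustration only (a classroom one-time pad, not production cryptography; real systems use vetted schemes such as AES-GCM):

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a key byte. With a truly random,
    # single-use key of equal length, the ciphertext by itself reveals
    # nothing about the plaintext, not even to whoever stores it.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# The client generates and keeps the key; the server only ever sees ciphertext.
message = b"my private prompt"
key = secrets.token_bytes(len(message))   # never leaves the client
ciphertext = encrypt(message, key)        # this is all the server receives

assert decrypt(ciphertext, key) == message
```

No trust in the server's good behavior is required: without the key, misuse of the ciphertext is impossible by construction. That is the difference between a promise and a proof.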
A New Covenant
The word "covenant" carries a deep archetypal and biblical connotation. Unlike a contract that binds you downward (often with terms you may not fully understand), a covenant is a promise that looks upward—toward a better possible future, toward what could be rather than what currently is.
This is what we have the opportunity to create: a new covenant between humans and AI, between humanity and technology itself. Not the dystopian cyberpunk reality where privacy, psychological safety, and ownership of our own minds become relics of the past, but a foundation for genuine cognitive sovereignty in a world where everyone has access to advanced artificial intelligence.
This covenant offers us the chance to prove, with code rather than promises, that we can own our minds and secure our data. That we can build systems with hard trust built into their very DNA.
Introducing the Covenant Protocol
To build that solarpunk future, we need lunarpunk technology as its foundation. The light and the dark are both necessary. While solarpunk envisions the flourishing future we're building toward, lunarpunk focuses on the privacy, decentralization, and anonymity required to protect the individuals and organizations actually building that future.
This is the concept of the walled garden that my studio Ascendance was founded on: trust, yes, but also verify.
“Walk in love, as Christ loved us,” yes, but also “put on the full armor of God, so that when the day of evil comes, you may be able to stand your ground.”
This vision led me to co-found Covenant—my latest company to emerge from the Ascendance ecosystem. Our mission is to build the privacy layer for Aligned AI.
We've built a completely novel mechanism to run fully end-to-end encrypted AI, meaning that users and companies can have total ownership of their models and data, never exposing unencrypted information at any point in the process—addressing one of the main weaknesses of current AI infrastructure.
The system operates on hard trust principles, proving with code that not even Covenant or our cloud providers can access your data, even if we wanted to.
This isn't just privacy tech—it's foundational AI infrastructure for cognitive sovereignty. When you control your AI models and protect your data, you're not dependent on the policy changes, business pivots, or ideological shifts of centralized platforms.
You're operating from true digital autonomy.
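To see how useful computation can happen without ever exposing plaintext, consider a toy additively homomorphic scheme: the server sums ciphertexts it cannot read, and only the key-holding client can decrypt the result. (This sketch is purely illustrative and is not Covenant's mechanism, which is not described here; real systems for computing on encrypted data use schemes such as lattice-based fully homomorphic encryption.)

```python
import secrets

N = 2**64  # work with integers modulo N

def encrypt(m: int, k: int) -> int:
    # Additive one-time pad: c = m + k (mod N). Each ciphertext on its
    # own is perfectly hiding, yet ciphertexts can still be summed.
    return (m + k) % N

def decrypt(c: int, k: int) -> int:
    return (c - k) % N

# Client side: encrypt two private values with fresh random keys.
m1, m2 = 42, 58
k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
c1, c2 = encrypt(m1, k1), encrypt(m2, k2)

# Server side: computes on ciphertexts only; it never sees m1, m2, or any key.
c_sum = (c1 + c2) % N

# Client side: decrypting with the combined key recovers the true sum.
assert decrypt(c_sum, (k1 + k2) % N) == m1 + m2  # 100
```

The server does real work (the addition) while remaining cryptographically blind to the inputs and the output. That is the shape of hard trust: the provider's access to your data is ruled out by the math, not by its terms of service.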
The Moment of Truth
We stand at a crossroads that will determine whether AI amplifies human potential or consolidates power in the hands of a few massive corporations.
The technology exists to build AI systems that genuinely serve users rather than extract value from them. The question is whether we'll demand those systems or accept convenient servitude.
This isn't about rejecting AI—it's about ensuring that as these tools become extensions of our minds, they remain aligned with our deepest values and highest aspirations. It's about choosing a future where thinking itself remains free, where creativity stays sovereign, where the most powerful cognitive tools serve human flourishing rather than corporate extraction.
The solution starts with code, but it's sustained by a covenant—the promise we make to each other that technology can still serve humanity's highest potential.
Your choice is simple: Will you rent your intelligence from Big Tech, or will you own it?
Your model. Your data. Your mind.
If we don’t build around these principles now, we may not have the opportunity again.
It’s time for us all to choose Cognitive Sovereignty.
--
Covenant is currently working with select enterprise clients and preparing to launch a closed beta. Check out covenantprotocol.com for more information on our AI privacy tech.
Reach out to will@ascendance.one if you are interested in using our AI privacy tech, investing in our ecosystem, or working with us to build aligned AI models and applications for your brand.