Feb 2026 • 10 min read

Inside the Singularity: Building AI When the Rules Are Still Being Written

We may have already crossed the threshold. What does it mean to build companies when the curve goes vertical?

"The Singularity is not a wall you hit. It's the water you're already swimming in—and only once you stop to look do you realize how deep you've gone."

The Threshold Question

In 1993, computer scientist Vernor Vinge wrote that within thirty years, we would have the technological means to create superhuman intelligence—and that shortly after, the human era would be ended. He called this moment the Singularity: a point beyond which prediction fails, because the intelligence doing the predicting would no longer be ours.

What Vinge couldn't anticipate—what nobody could—is that the Singularity might not arrive as a single dramatic moment. It might arrive as a period: a transition zone with no clear sign marking the crossing. You don't step through a door. You drift through a threshold that only becomes visible when you look back.

I think we entered that zone somewhere between 2023 and 2025. I can't give you the exact date. Neither can anyone else. But the empirical evidence is hard to ignore: each new generation of language models didn't just perform better on existing benchmarks—it invalidated the assumptions underlying those benchmarks. GPT-3 was impressive. GPT-4 crossed qualitative thresholds that made GPT-3 feel like a different category of tool. Then came reasoning models that solve problems through chains of thought that, frankly, no human would have designed. Each generation makes the prior one feel like a rehearsal.

That is not linear progress. Linear progress is predictable. What we have been living through is something else.

What AGI Actually Means

The popular conception of AGI—Artificial General Intelligence—tends toward the dramatic: a robot that is smarter than Einstein at everything, that wakes up one morning and decides to rewrite the laws of physics. That's ASI, Artificial Superintelligence. A different thing entirely.

AGI, more precisely defined, is simpler and more unsettling: a system capable of performing any intellectual task that a human can. Not necessarily faster. Not necessarily with perfect memory. Just... capable of doing the cognitive work.

By that definition, we are functionally in AGI territory for a growing list of high-leverage domains. Software engineering. Scientific literature synthesis. Legal analysis. Strategic planning support. Diagnostic reasoning in medicine. These are not narrow tasks—until recently, each was considered the exclusive province of professionals with years of training and expertise.

The Benchmark Collapse

Every test proposed as a marker of "human-level AI," along with every human exam repurposed for the job, gets passed soon after publication, often within 12–18 months. The ARC-AGI challenge. The bar exam. MMLU. The SAT. PhD-level science questions. When the bar is cleared, we don't declare victory—we design a harder test and reset the goalpost.

This is worth sitting with. We keep redefining intelligence as "the thing AI can't do yet." The moving goalpost is itself a signal.

It doesn't matter whether academic consensus has declared AGI. What matters is whether the economic and operational impact is equivalent. By that measure, in software, in knowledge work, in creative production—we are already there.

The Promise — What This Moment Contains

Every major technological revolution in human history—agriculture, the printing press, the industrial revolution, electrification, the internet—took generations to reach full impact. The mechanisms of adoption were slow: literacy rates had to rise, infrastructure had to be built, cultural norms had to shift.

AI is compressing those timelines into years. Sometimes months.

AlphaFold didn't just crack the protein structure prediction problem—it changed the methodology of structural biology entirely. Problems that would have remained mysterious for decades are now tractable. Drug discovery pipelines that previously required a decade are being rebuilt from scratch around AI-assisted molecular design. This is not incremental. It is a change in kind.

The most profound near-term implication is what I call cognitive democratization: the leverage redistribution that happens when a small team with AI capabilities can match the output of organizations that previously required hundreds of people. This isn't about replacing workers—it's about fundamentally changing who can build what.

At Gnosix, we build systems today that would have required a team of fifteen engineers two years ago. Our product roadmap is limited more by imagination than by technical constraint. I have never been able to say that before in my career. That shift—from "what can we build?" to "what should we build?"—is itself a marker of where we are.

The Complexity — Why the Promise Is Not Simple

Holding the promise and the complexity simultaneously is the only honest way to engage with this moment.

The alignment problem—ensuring AI systems pursue the goals we actually intend—is no longer a theoretical concern for researchers at frontier labs. When you deploy autonomous agents that make real decisions in the world—calling APIs, executing code, generating outputs that affect people's lives—the question of what the agent optimizes for becomes immediately and practically urgent. We grapple with it every week at Gnosix.
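To make that concrete, here is a minimal sketch of the kind of action gate such an agent needs. It is illustrative only: every name in it (ToolCall, ActionGate, the allowlist) is hypothetical, not Gnosix's actual implementation.

from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str          # e.g. "send_email" or "execute_code"
    args: dict         # arguments the agent wants to pass
    reversible: bool   # whether the effect can be undone afterward

@dataclass
class ActionGate:
    allowlist: set     # tools the agent may invoke on its own
    audit_log: list = field(default_factory=list)

    def authorize(self, call: ToolCall) -> bool:
        """Every side-effecting action passes through here before it runs."""
        self.audit_log.append(call)    # record intent, not just outcomes
        if call.tool not in self.allowlist:
            return False               # unknown tool: refuse by default
        if not call.reversible:
            return self.request_human_approval(call)  # irreversible: escalate
        return True

    def request_human_approval(self, call: ToolCall) -> bool:
        # Placeholder: a real system would route this to a human reviewer.
        print(f"Human approval required for {call.tool}({call.args})")
        return False

# Usage: a reversible allowlisted call passes; an irreversible one escalates.
gate = ActionGate(allowlist={"search_web", "send_email"})
gate.authorize(ToolCall("search_web", {"q": "protein folding"}, reversible=True))
gate.authorize(ToolCall("send_email", {"to": "..."}, reversible=False))

The design choice worth noticing is refuse-by-default: the agent can only do what the gate explicitly permits, and anything it cannot undo gets escalated to a person.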

There is also an epistemic crisis beginning to unfold. As AI-generated content saturates the information environment, the mechanisms humans have always used to form beliefs are degrading. We evolved to assess credibility through social signals, provenance, and effort. AI decouples content production from all three. The Singularity may produce a secondary crisis—not of capability but of epistemology. How do we know what is true when generating convincing falsehoods costs nothing?

Concentration Risk

The compute and talent required to train frontier models are concentrated in three to five organizations globally. The Singularity could produce the most asymmetric distribution of power in human history if the governance structures that emerge don't account for this.

The Pace Gap

Regulatory frameworks, ethical standards, and social adaptation operate on 5–10 year cycles. AI capability cycles are now 12–18 months. This gap—between the pace of capability and the pace of governance—is itself a civilizational risk that compounds with each passing year.

Building Inside the Threshold — A CEO's Perspective

The practical experience of running an AI company at this moment is unlike anything I've encountered before as a founder. Every product decision is made with the knowledge that the model you're building on today may be obsolete in eight months.

This forces a discipline I've come to think of as capability design over feature design. A feature is a specific function. A capability is an adaptable approach. At Gnosix, we architect for capability—we build systems that can survive multiple generations of underlying model changes because the architecture is sound, not because it's locked to a particular API.
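As a rough illustration of the distinction, consider the following sketch. It is not our actual architecture, and ModelBackend and the vendor adapters are hypothetical names. The point is that product logic depends on a narrow capability interface, so a model swap touches one adapter instead of the whole codebase.

from abc import ABC, abstractmethod

class ModelBackend(ABC):
    """The capability the product relies on: text in, text out."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAModel(ModelBackend):
    def complete(self, prompt: str) -> str:
        # A real adapter would call vendor A's API here.
        return f"[vendor-a] response to: {prompt}"

class VendorBModel(ModelBackend):
    def complete(self, prompt: str) -> str:
        # Swapping model generations or vendors touches only this adapter.
        return f"[vendor-b] response to: {prompt}"

def summarize(document: str, model: ModelBackend) -> str:
    """Product logic written against the capability, not the vendor."""
    return model.complete(f"Summarize in one paragraph:\n{document}")

# Upgrading the underlying model is a one-line change at the call site:
print(summarize("Quarterly report text...", VendorAModel()))

When the next model generation arrives, only the adapter changes; summarize and everything built on top of it survive intact. That is what "capability over feature" buys.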

When you deploy autonomous agents that interact with people's decisions—their work, their learning, their understanding of the world—the ethical weight becomes immediate and operational. The question of what your system optimizes for isn't abstract anymore. It's the most consequential design decision in the room. You build with that weight or you build badly.

The right epistemic posture for building in this moment is not confidence. It's not paralysis either. It's something I'd call calibrated uncertainty: decisive action, executed with intellectual humility about what we don't know, and genuine commitment to reversibility wherever possible.
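Reversibility, in particular, can be engineered rather than hoped for. Here is a minimal sketch of the idea, with hypothetical names, assuming every state change is recorded alongside its inverse so it can be rolled back later.

from typing import Callable

class ReversibleLog:
    """Pairs each action with its undo, so decisive moves stay cheap to revert."""
    def __init__(self):
        self._undo_stack: list[Callable[[], None]] = []

    def apply(self, do: Callable[[], None], undo: Callable[[], None]) -> None:
        do()                           # act decisively...
        self._undo_stack.append(undo)  # ...but keep the way back

    def rollback(self, steps: int = 1) -> None:
        for _ in range(steps):
            if self._undo_stack:
                self._undo_stack.pop()()  # undo in reverse order

# Usage: flip a config flag in a way that can be reverted later.
config = {"new_ranker": False}
log = ReversibleLog()
log.apply(lambda: config.update(new_ranker=True),
          lambda: config.update(new_ranker=False))
log.rollback()
assert config["new_ranker"] is False

The effect is that acting decisively and keeping an exit are no longer in tension: the undo is written at the same moment as the do.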

What Comes After the Threshold

The honest answer is: nobody knows.

This is not a cop-out. It is the most important thing to say clearly, because the temptation—on both sides—is to collapse the uncertainty into a story. The optimists tell you it's all going to be fine; the dystopians tell you we're already doomed. Both are performing confidence they don't actually have.

Scenario planning is useful. Three attractors are worth naming:

Managed Transition

AI and human civilization co-evolve with sufficient governance. Knowledge becomes post-scarce. The leverage gain benefits most of humanity, not just the builders. This requires institutions moving faster than they historically have.

Controlled by Few

A small number of actors capture most of the capability leverage. Functional stability persists but structural inequality deepens in ways that make prior eras look egalitarian. The world works—for some.

Loss of Coherence

The pace of change exceeds the adaptive capacity of human institutions, epistemology, and social fabric simultaneously. Not a single catastrophic failure—a diffuse loss of shared reality and governance capacity.

Which of these we navigate toward is not predetermined. The decisions being made in 2025 and 2026 are foundational. Not because any single actor is all-powerful, but because the systems being architected now will become the infrastructure everything after them runs on. Path dependencies in technology are real. The choices made in the early web shaped the internet we got. The choices made now will shape the AI-native world that follows.

The only productive stance, as far as I can tell, is to be fully present to what is actually happening. To resist the temptation to euphemize the risks or oversell the promise. To build systems that are as beneficial as we can make them, stay honest about what we don't know, and hold the thread of human dignity through all of it.

The Singularity is not an event you survive or don't survive. It is a transition you navigate—and the quality of your navigation is the only variable still under your control.

Ulises Arellano
CEO, Gnosix