Active Research

Cognitive Sovereignty

Status: In Progress | Tool: Nous

When AI systems can argue more convincingly than any human, how do individuals maintain independent judgment? In September 2025, Google DeepMind added a new Critical Capability Level for "harmful manipulation" to its Frontier Safety Framework — acknowledging that AI persuasion capabilities now pose systemic risks.

Nous is our cognitive sovereignty assessment toolkit — it measures vulnerability to AI-mediated persuasion and provides frameworks for maintaining epistemic independence.

Current focus areas:

  • Mapping the cognitive pathways through which AI-generated content influences human decision-making
  • Developing frameworks for "cognitive defense" — not against AI, but against the erosion of independent thought
  • Analyzing historical precedents: how did previous communication revolutions (printing press, radio, internet) reshape human epistemic habits?

First working paper in progress — expected Q2 2026.


Transition Dynamics Modeling

Status: Scoping | Tool: Meridian

When cognitive labor is no longer scarce, economies restructure. A 2025 NBER study on "The Economics of Superabundant AI" models two regimes: in a "co-pilot" regime, compute remains scarce and humans stay economically valuable; in a "compute glut" regime, any worker whose knowledge falls below the AI frontier becomes unemployable. In 2025, 55,000 job cuts were directly attributed to AI, the highest level of AI-driven displacement on record.

Meridian is our scenario modeling engine — it simulates economic restructuring and policy outcomes under different ASI emergence timelines.

Research questions:

  • Which job categories face displacement vs. transformation vs. creation?
  • What is the expected timeline from first ASI demonstration to measurable economic impact?
  • How do social safety nets need to be restructured for a post-cognitive-labor economy?
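The kind of toy model this scenario work starts from can be sketched in a few lines. Everything below is illustrative and hypothetical, not Meridian's actual implementation: it assumes automation of cognitive-labor tasks follows a logistic diffusion curve after a hypothetical ASI demonstration year, and compares two emergence timelines.

```python
import math

def automated_share(year, asi_year, diffusion_rate=0.9):
    """Toy logistic diffusion: fraction of cognitive-labor tasks
    automated by a given year. Centered a few years after a
    hypothetical ASI demonstration. Illustrative, not calibrated."""
    midpoint = asi_year + 4  # assumed ~4 years from demo to mass adoption
    return 1.0 / (1.0 + math.exp(-diffusion_rate * (year - midpoint)))

def displacement_path(asi_year, start=2026, end=2040):
    """Yearly automated share for one ASI-emergence scenario."""
    return {y: round(automated_share(y, asi_year), 3) for y in range(start, end + 1)}

# Compare an early-emergence and a late-emergence timeline.
early = displacement_path(asi_year=2027)
late = displacement_path(asi_year=2032)
for y in (2028, 2032, 2036):
    print(y, early[y], late[y])
```

Even a sketch this crude makes the policy question concrete: the two curves imply very different windows for restructuring safety nets before displacement peaks.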

Containment Architecture

Status: Scoping | Tool: Aegis

Not alignment in the narrow technical sense, but the broader institutional challenge. In early 2026, Anthropic — the company that built its reputation on safety — dropped the central pledge of its Responsible Scaling Policy, arguing that pausing while others advance makes the world less safe. Meanwhile, MIRI abandoned technical alignment research entirely, pivoting to policy advocacy.

If even the builders and watchdogs are struggling with control, what governance structures remain effective?

Aegis is our institutional design framework — it models governance structures for maintaining meaningful human oversight alongside ASI-class systems.

Research questions:

  • What institutional frameworks survive contact with systems smarter than their operators?
  • How do international agreements work when enforcement requires understanding systems no human fully comprehends?
  • What can we learn from nuclear governance, and where does the analogy break down?

Post-Scarcity Value Theory

Status: Early Research | Tool: Axiom

If ASI enables radical material abundance, our current economic models stop working. A 2025 systematic review of "post-labor economics" found that traditional theories of distributive justice — whether Lockean or Marxist — collapse when production requires minimal human input.

Axiom is our theoretical modeling toolkit for value, exchange, and purpose beyond material scarcity.

Research questions:

  • What replaces labor-based income distribution in a post-scarcity economy?
  • How do markets function when production costs approach zero?
  • What drives human motivation and meaning when survival needs are universally met?
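The second research question above can be made concrete with a minimal sketch, assuming near-perfect competition where price tracks marginal cost plus a thin markup. The function and parameters are hypothetical illustrations, not part of Axiom:

```python
def competitive_price(marginal_cost, markup=0.05):
    """Price under near-perfect competition: marginal cost plus a
    thin markup. As marginal cost approaches zero, so does the price,
    and with it the revenue available to pay labor. Illustrative only."""
    return marginal_cost * (1.0 + markup)

# As automation pushes marginal cost toward zero, price-based markets
# carry less and less income to distribute through wages.
for mc in (10.0, 1.0, 0.1, 0.001):
    print(f"marginal cost {mc:>6} -> market price {competitive_price(mc):.4f}")
```

This is the collapse the systematic review points at: when the price mechanism distributes almost nothing, labor-based income distribution needs a replacement, which is what Axiom is built to model.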

Our Methodology

We are not a technology company. We do not build AI systems. We study what happens when others do.

Our approach combines:

  • Scenario modeling: Quantitative simulations of economic and social transitions
  • Institutional analysis: Evaluating governance frameworks against ASI-scale challenges
  • Cross-disciplinary synthesis: Bridging computer science, economics, philosophy, and political science
  • Evidence-grounded urgency: Major AI labs now project ASI within 2-5 years. Our research operates on that timeline.