The Meridiem
Founder Exodus Signals AI Inflection: Chat-to-Coordination Transition Accelerates



Elite AI researchers leaving Anthropic, OpenAI, Meta, xAI, and DeepMind to build coordination models signals market consensus: conversational UI is commoditized. Foundation models shift toward enterprise workflow orchestration.


The Meridiem Team

At The Meridiem, we cover just about everything in the world of tech. Some of our favorite topics to follow include the ever-evolving streaming industry, the latest in artificial intelligence, and changes to the way our government interacts with Big Tech.

  • Humans& raises $480M to build coordination-first foundation models, signaling market shift from chat to multi-agent collaboration

  • Founder pedigree matters: simultaneous exits from Anthropic, OpenAI, Meta, xAI, and DeepMind suggest a builder consensus that chat UI is no longer the frontier

  • For enterprises: window to evaluate coordination layers vs. point-solution agents is open now, before architectural decisions lock in for 18+ months

  • Watch for: First multi-agent coordination wins in enterprise workflow automation. This will validate whether Humans& is onto a genuine capability shift or funding theater.

When the top talent at the world's leading AI labs simultaneously leaves to build the same capability, you're seeing genuine market consolidation, not hype. Humans&, the new startup backed by researchers from Anthropic, Meta, OpenAI, xAI, and Google DeepMind, just closed a $480 million seed round to build foundation models for coordination, not chat. This isn't incremental. It signals that the market sees single-user conversational AI as table-stakes—the baseline—and the real value frontier as multi-agent orchestration. The timing matters: as enterprises move from AI pilots to production workflows, they need systems that coordinate teams, not just answer questions.

The article frames this as a founder story, but the real inflection is architectural. Humans& CEO Eric Zelikman and co-founders Andi Peng and Yuchen He didn't leave their positions at the world's top AI labs to build a slightly better chatbot. They're architecting foundation models designed from the ground up for social intelligence—systems that understand not just how to answer questions, but how to navigate competing priorities, track decisions over time, and keep organizations aligned.

This matters because it reflects a hard realization spreading through the AI research community: ChatGPT-style models hit a ceiling. They're phenomenally good at generating one-off responses—answering a question, summarizing a document, writing code. But they're terrible at the work that actually defines human collaboration: understanding stakeholder preferences, maintaining context across conversations, remembering what matters to each person, and optimizing for group outcomes rather than individual queries.

The evidence is in the departures themselves. When researchers at Anthropic, OpenAI, Meta, xAI, and DeepMind simultaneously leave to join the same startup, you're seeing consensus forming. These aren't people chasing capital—they're people betting their reputations on a specific technical vision. According to Zelikman's comments to TechCrunch, "It feels like we're ending the first paradigm of scaling, where question-answering models were trained to be very smart at particular verticals, and now we're entering what we believe to be the second wave of adoption."

The market timing is acute. Companies are in the awkward transition phase right now—models are competent enough for production use, but the workflows built around them are still fragmented. You've got point-solution AI agents, traditional collaboration tools, and people manually coordinating across systems. That friction is massive. Reid Hoffman crystallized the problem this week: "AI lives at the workflow level. The people closest to the work know where the friction actually is." The coordination layer—the connective tissue between people, teams, and AI systems—remains largely unaddressed.

Humans& wants to own that layer. The startup is positioning its model as a "central nervous system" for organizations, the kind of system that understands not just what work needs doing, but who should do it, what competing pressures exist, and how to drive consensus. This requires rethinking foundation model training entirely. Rather than optimizing for immediate user satisfaction ("Did the user like this response?"), Humans& is training for long-horizon reinforcement learning—models that plan, act, revise, and follow through over time. They're also using multi-agent RL, which means the model learns in environments where multiple AIs and humans are in the loop simultaneously.

The competitive response is already visible. Anthropic launched Claude Cowork to optimize work-style collaboration. Google embedded Gemini into Workspace, baking AI-enabled collaboration directly into tools people already use. OpenAI is pitching developers on multi-agent orchestration frameworks. But here's the crucial asymmetry: none of the major players seem willing to fundamentally rewrite their foundation models around social intelligence. They're bolting collaboration features onto chat-optimized architectures. Humans& is starting from scratch with a different base assumption.

The risk structure is real, though. Building a new foundation model requires capital at scale—repeated rounds of funding for compute access, training runs, and talent acquisition. Humans& is competing with Anthropic, OpenAI, and Meta for both GPU capacity and researcher talent. Those are non-trivial constraints. The startup is also pre-product, which is a euphemistic way of saying they're raising nearly half a billion dollars on architectural vision alone. Peng told TechCrunch that "we're designing the product in conjunction with the model." That's ambitious when your seed round is already larger than most Series A rounds for established companies.

The acquisition risk is also non-trivial. Meta, OpenAI, and DeepMind are actively hunting for AI talent, and a startup built by their own alumni with demonstrable technical vision is exactly the kind of acquisition that makes sense. Humans& says they've turned down acquisition offers and aren't interested, but those conversations will intensify if the model architecture proves viable.

What matters now is validation. Do enterprises actually need coordination-layer AI, or are existing point solutions and chat interfaces sufficient? The answer determines whether Humans& represents a genuine architectural shift or an exceptionally well-funded bet on a false premise. Companies like Granola—the AI note-taking app that raised $43 million at a $250 million valuation by adding collaboration features—are running a parallel experiment. If coordination becomes the differentiating layer in AI-native tools, Humans& has the pedigree to win it. If chat-plus-integrations prove sufficient, they're an expensive acquihire waiting to happen.

Humans& matters because it's not just a startup—it's a canary in the coal mine for where the market thinks AI value is moving. When the top talent at Anthropic, OpenAI, Meta, xAI, and DeepMind simultaneously leaves to build coordination models, that's consensus. For builders, the question is whether to invest in coordination-layer architecture now or wait for consolidation. For investors, it signals a potential second wave of AI infrastructure companies focused on orchestration rather than inference. Enterprise decision-makers should treat this as a signal to evaluate coordination models alongside point-solution agents. For professionals in AI, this is a warning that chat expertise is table-stakes, not differentiation—coordination and long-horizon RL are where the frontier moved.

