- Quadric closed a $30M Series C round led by ACCELERATE Fund, bringing total funding to $72M, as on-device AI inference proves viable at scale
- Revenue grew 5x year-over-year from $4M (2024) to $15-20M (2025), with $35M targeted for 2026, signaling rapid enterprise adoption
- Enterprise decision-makers now face a timing question: implement edge deployment strategies in Q1-Q2 2026 or risk architectural lock-in to cloud-centric approaches
- First commercial products ship this year, starting with AI laptops and industrial devices; watch for sovereign AI expansion in India and Malaysia as an infrastructure alternative
The infrastructure layer beneath the AI inference stack just shifted. Quadric, a chip-IP startup that licenses programmable AI processors to other silicon makers, raised $30 million in Series C funding and posted $15-20 million in licensing revenue for 2025, a 5x jump from $4 million the year before. This isn't startup growth noise. This is the moment when enterprises stopped treating on-device AI as an option and started treating it as an operational necessity, driven by cloud costs that keep climbing, latency that cloud can't solve, and regulatory demands for data sovereignty.
The shift from cloud-centric to distributed AI inference just crossed from strategic debate into operational reality. Quadric's numbers tell the story: $15 to $20 million in licensing revenue last year, up from almost nothing two years ago, with CEO Veerbhan Kheterpal now targeting $35 million in 2026. The post-money valuation jumped from $100 million in the 2022 Series B to between $270 and $300 million today. This is what an inflection point looks like at the infrastructure layer.
The core tension driving this shift is elegantly simple. AI models now change every few months: transformer-based architectures keep evolving, and agents are shifting inference patterns today. But semiconductor design cycles take years. A chip made in 2024 might be obsolete for AI workloads by 2026. That's not a product roadmap problem. That's an architectural problem.
Quadric's solution: instead of baking specific AI instructions into silicon, it licenses programmable processor IP—essentially a blueprint—that customers embed into their own chips. The intelligence lives in software, not hardware. When models change, companies push a software update. No redesign. No new fab run. No 18-month wait. That's the unlock.
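To make that distinction concrete, here is a deliberately simplified Python sketch. The operator names, kernel paths, and deploy logic are hypothetical illustrations of the general idea, not Quadric's toolchain: a fixed-function block can only run the operators baked in at tape-out, while a programmable core picks up new operators as firmware.

```python
# Hypothetical sketch, not Quadric's SDK: why a programmable core absorbs
# model changes as a software update while a fixed-function block cannot.

FIXED_FUNCTION_OPS = {"conv2d", "relu", "maxpool"}   # hard-wired at tape-out

PROGRAMMABLE_KERNELS = {                             # shipped as updatable firmware
    "conv2d": "kernels/conv2d.bin",
    "relu": "kernels/relu.bin",
    "maxpool": "kernels/maxpool.bin",
}

def deploy(model_ops, programmable):
    """Decide whether a new model runs on silicon that has already shipped."""
    if programmable:
        missing = model_ops - PROGRAMMABLE_KERNELS.keys()
        for op in missing:
            # New operators (attention, layernorm, whatever comes next) become
            # new kernels pushed over the air, not a new chip.
            PROGRAMMABLE_KERNELS[op] = f"kernels/{op}.bin"
        return "software update"
    # Fixed-function path: anything outside the baked-in op set forces a
    # CPU fallback or, realistically, a silicon respin.
    return "software update" if model_ops <= FIXED_FUNCTION_OPS else "silicon respin"

# A plain CNN fits either path; a transformer only fits the programmable one.
print(deploy({"conv2d", "relu"}, programmable=False))          # software update
print(deploy({"attention", "layernorm"}, programmable=False))  # silicon respin
print(deploy({"attention", "layernorm"}, programmable=True))   # software update
```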
"Nvidia is a strong platform for data-center AI," Kheterpal told TechCrunch in the interview. "We were looking to build a similar CUDA-like or programmable infrastructure for on-device AI." The comparison matters. Nvidia dominated datacenter AI by making the hardware and software ecosystem so integrated that alternatives became impossible to adopt. Quadric is building the same lock-free programmable layer for edge devices—companies like Qualcomm, which traditionally embedded AI in their own processors, now face a customer base that wants optionality.
The customer base already spans printers, cars, and laptops. Kyocera uses Quadric's tech. Denso, a major Toyota supplier, is building Quadric-based chips. The first consumer products, AI laptops, are shipping this year. That's not beta. That's production.
But the real inflection point extends beyond traditional commercial markets. Quadric is now pursuing what governments are calling "sovereign AI"—the ability to run advanced AI workloads locally rather than depend on U.S. cloud infrastructure. EY documented this shift in a November report highlighting how policymakers and industry groups are pushing for domestic AI capabilities. The World Economic Forum echoed the same trend: inference is moving closer to users, away from centralized architectures. Quadric is exploring India and Malaysia as expansion markets, with Moglix CEO Rahul Garg as a strategic investor shaping the company's India approach.
This matters because the cost math has inverted. Running AI queries through the cloud works economically when you have 100 queries a day per user. But when you're running thousands of inferences locally (on a laptop, an industrial controller, a printer), the bandwidth cost, latency cost, and data residency risk of routing every query to the cloud become prohibitive. Companies and governments are looking at their AI infrastructure bills and realizing that distributed compute, despite its complexity, beats centralized infrastructure on cost at scale.
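A rough way to see that inversion is a simple amortization. The sketch below uses purely illustrative numbers; the per-query cloud cost, the added NPU bill-of-materials cost, and the device lifetime are all assumptions, not figures from the article, but the shape of the result is what matters: the break-even sits at a modest daily query count, and thousands of local inferences leave cloud routing far behind on cost alone.

```python
# Back-of-the-envelope break-even sketch. Every number below is an assumption
# chosen for illustration; none comes from Quadric or the article.

CLOUD_COST_PER_QUERY = 0.0001     # $ per inference, API plus egress bandwidth (assumed)
DEVICE_NPU_BOM_COST  = 12.00      # $ of added silicon per device (assumed)
DEVICE_LIFETIME_DAYS = 3 * 365    # amortization window (assumed)

def daily_cloud_cost(queries_per_day):
    return queries_per_day * CLOUD_COST_PER_QUERY

def daily_edge_cost():
    # On-device inference cost is dominated by the one-time hardware spend,
    # amortized over the device's life; the marginal cost per query is ~0.
    return DEVICE_NPU_BOM_COST / DEVICE_LIFETIME_DAYS

break_even = daily_edge_cost() / CLOUD_COST_PER_QUERY
print(f"break-even: ~{break_even:.0f} queries/day")   # ~110 with these assumptions

for q in (100, 1_000, 10_000):
    cloud, edge = daily_cloud_cost(q), daily_edge_cost()
    print(f"{q:>6}/day  cloud ${cloud:.3f}  edge ${edge:.3f}  -> "
          f"{'edge' if edge < cloud else 'cloud'} wins")
```

Under these assumed inputs, latency and data residency stack on top of the cost argument rather than carrying it.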
The competitive angle matters too. Qualcomm locks customers into Qualcomm silicon. Synopsys and Cadence sell neural processing engine blocks that customers struggle to program efficiently. Quadric avoids both traps. Its licensing model keeps it silicon-neutral: customers embed the IP in whatever chips they or their preferred chipmakers are building. Its programmability means customers don't need teams of hardware experts to deploy new model architectures.
For Quadric, the challenge is execution risk at scale. The company has signed "a handful" of customers so far, per the original reporting. Much of the valuation depends on converting today's licensing deals into high-volume production shipments and sustainable royalties. That's a path that takes years to fully prove. But the infrastructure shift it's riding is already visible in enterprise IT budgets. Companies with more than 10,000 employees are making edge deployment decisions right now because the window is closing: once Qualcomm, Intel, and other chipmakers embed competitive on-device AI infrastructure, architectural flexibility will disappear.
For investors, the timing signal is clear: the infrastructure layer that enables cloud-to-edge transition is moving from experimental to production within the next 12-18 months. Quadric is one of the few pure-play infrastructure bets on that shift.
The cloud-to-edge transition in AI infrastructure just moved from strategic planning to capital deployment. Quadric's 5x revenue growth, roughly tripled valuation, and Series C close validate that programmable on-device inference is crossing into operational necessity. For enterprises with more than 10,000 employees, the decision window is now: implementation timelines should begin in Q1-Q2 2026, before chipmaker lock-in becomes the default. For investors, this is the infrastructure inflection that enables broader platform shifts (like Apple's on-device Siri pivot). For chip designers and AI engineers, edge deployment specialization is becoming a core skill. Watch for Q3 2026, when first-generation laptop and industrial products ship at volume.