- The 1973 HEW report "Records, Computers, and the Rights of Citizens" predicted networked systems would become the principal medium for storing people's data; the report got that exactly right
- America never built the regulatory infrastructure that report recommended; today's AI systems operate in that same unfilled gap
- Decision-makers should note: the EU's GDPR and sector-specific US regulations (HIPAA, FCRA) created islands of protection in a largely unregulated ocean
- The inflection point is accelerating: as AI vendors scale autonomous systems, enterprises face escalating enforcement risk in an undefined regulatory landscape
The US tech regulatory landscape is operating on a 1973 blueprint. Adi Robertson's analysis surfaces a fundamental inflection point not in policy itself, but in the urgency to close America's privacy governance gap before AI systems make the oversight problem irreversible. With networked AI processing billions of personal data points daily, the absence of comprehensive federal privacy law has shifted from an inconvenience to a critical liability. The timing matters most for enterprise decision-makers, who face a compliance window that is closing.
The warning came 53 years ago, and America ignored it. In 1973, the Department of Health, Education, and Welfare (HEW) published "Records, Computers, and the Rights of Citizens," a report that read like prophecy. Networked computers would become "the principal medium for making, storing, and using records about people," the foreword stated. These systems could be powerful tools for efficiency. They could also be powerful tools for surveillance, discrimination, and control—unless the government built guardrails first.
America chose to skip the guardrails. Fifty-three years later, we're not operating on a 1973 blueprint anymore. We're operating on no blueprint at all.
Adi Robertson's analysis for The Verge isn't reporting a regulatory inflection point—it's diagnosing the absence of one. And that absence itself has become the inflection. As AI systems move from experimental pilots to production deployment, processing millions of customer interactions daily without transparency or consent frameworks, the policy vacuum that seemed manageable five years ago now looks like existential risk.
The numbers tell the story. The EU implemented GDPR in 2018 and saw compliance costs of $7.8 billion for European enterprises in the first two years alone. The US has no federal equivalent. Instead, we have a patchwork: HIPAA for healthcare, FCRA for credit reporting, COPPA for children online, state-level laws in California and Virginia and Colorado. Each piece protects a slice of data. None protect the whole.
Meanwhile, the technology moved forward. Large language models trained on billions of documents. Enterprise AI systems ingesting customer data to train autonomous agents. Facial recognition deployed at scale. Predictive analytics systems making consequential decisions about lending, hiring, and criminal justice, with no standardized requirement for disclosing how. This is the gap Robertson is documenting: the chasm between what the technology can do and what the law actually constrains.
The 1973 report recommended five core protections, later known as the Code of Fair Information Practices: no record-keeping systems whose existence is secret; a way for individuals to find out what information about them exists and how it is used; a way to prevent information collected for one purpose from being used for another without consent; a way to correct or amend records; and an obligation on organizations to keep data reliable and prevent misuse. It recommended oversight bodies to enforce those rights. Sounds reasonable. Sounds essential, actually. And it still does.
But here's where the timing inflection matters. The window for meaningful regulation is narrowing. When privacy rules come, and they will, whether Congress acts or courts force the issue through litigation, they will have to be applied retroactively to systems already in production. The cost of retrofitting compliance into AI systems built without it is many times higher than building it in from the start. Ask any Fortune 500 company managing legacy infrastructure.
For enterprises, the implications are crystallizing. Compliance officers know the regulatory moment is coming. They don't know the shape it will take. Is it a federal law mirroring GDPR? State-by-state fragmentation like data breach notification rules? Sectoral regulation by industry? The uncertainty itself is the inflection: it's pushing decision-makers to over-invest in privacy infrastructure today as insurance against tomorrow's unknown requirements.
Startups face a different calculus. Early-stage AI companies building on top of large language models or commercial customer data face almost no regulatory pressure today. By the time comprehensive privacy law lands, they may be locked into architectures that can't adapt. Some will pivot. Some will fold. The ones that build privacy-first systems now will have a competitive advantage the moment regulation arrives.
For professionals in tech policy, compliance, or AI governance, Robertson's piece highlights the window. The next 12-18 months are critical for shaping what privacy law looks like before it crystallizes. Industry feedback matters most when rules are being written. Once they're on the books, the negotiations are over.
The HEW report isn't just a relic. It's an indictment of 53 years of inaction. And it's a roadmap that's still relevant. The question the report asked in 1973 remains the question we're dodging in 2026: What do people have a right to know about the records systems hold about them? Everything else flows from that answer.
This is opinion journalism raising a critical policy question rather than reporting a market inflection point in motion. But the absence of regulation is itself becoming the inflection—as AI systems scale into production, the compliance vacuum shifts from theoretical concern to operational risk. Decision-makers should treat the next 12-18 months as a timing window: the moment when enterprises can influence privacy rule design before it crystallizes into law. Investors should watch for the first concrete policy catalyst (federal legislation, major enforcement action, or sector-specific regulation) that converts policy commentary into market reality. For now, this is the warning phase. The actual inflection point will come when regulation moves from "desperately needed" to "actually proposed."





