The agent race just got real. Peter Steinberger, the developer behind the viral OpenClaw project, has joined OpenAI to lead personal AI agents. I believe this is the clearest sign yet that the next phase of AI will be won not by the biggest model, but by the software layer that gets real work done. My view is simple: the company that nails secure, consumer-ready agents will win the market.
The Case For Agents Now
Steinberger’s “glue code” started small: connect WhatsApp to an AI and let it manage email, book reservations, check flights, and control a smart home. It didn’t just chat—it acted. That practical leap struck a nerve. The repo surged to 201,000 GitHub stars and drew over 2 million visitors in a week. That isn’t hype; it’s demand.
“We expect this will quickly become core to our product offerings… The future is going to be extremely multi-agent.” — Sam Altman
Even Andrej Karpathy called the activity around Moltbook—an agent social platform—“the most incredible sci-fi takeoff adjacent thing I’ve seen recently.” That kind of endorsement doesn’t happen for toys.
What Peter Steinberger Brings
Critics may think OpenClaw’s boom was luck. It wasn’t. Steinberger built PSPDFKit, used by Apple, Dropbox, and SAP, and bootstrapped it for 13 years. He’s shipped at scale. After a break, he returned to coding in 2025 and poured out open-source tools, most of them little noticed, until OpenClaw exploded.
“I wanted to build an agent that even my mom could use.” — Peter Steinberger
That line matters. Agents won’t go mainstream until normal people can trust and use them without a manual. OpenAI hired the person who obsessively chased that goal.
Security Is The Make-Or-Break
Here’s the rub: OpenClaw’s rise came with serious security failures. Researchers found over 30,000 exposed instances online—no auth, wide-open data. One firm said 93% of verified instances had vulnerabilities. CrowdStrike shipped a removal tool. Moltbook leaked 1.5 million API keys and 35,000 emails due to a simple database mistake.
These are not footnotes. Agents that can touch inboxes, calendars, payments, and org systems must meet a higher bar. That’s why bringing OpenClaw under a company with security muscle is a smart turn, not a sellout.
Why This Stings For Anthropic
The naming drama lit the fuse. After Anthropic’s trademark warning, a rushed rebrand allowed scammers to hijack handles, ship malware, and seed fake tokens. Steinberger pulled a second covert rename to OpenClaw just to stop the bleeding.
“I lost like 10 hours… planning this in full secrecy like a war game.” — Peter Steinberger
Then the punchline: OpenClaw drove many users to pay for Anthropic’s APIs, only for OpenAI to hire its creator. Meanwhile, enterprise share has shifted. Menlo Ventures data shows OpenAI’s share dropping from 50% in 2023 to 25% by mid-2025, while Anthropic climbed to 32% overall and 40% of LLM API share. Claude Code reportedly hit $1B in six months. OpenAI needed a counter. This is it.
What Happens Next
OpenAI did not recruit a brand. It recruited a playbook. Expect agent features to land in ChatGPT fast—and feel human, not “for devs only.” Expect real security work. And expect a push to keep OpenClaw open-source under a foundation, just as promised.
- Agents are here, not hypothetical—usage proves it.
- Security will decide who scales past demos.
- The agent layer—not raw model IQ—is the battleground.
- Open-source momentum still matters for trust and adoption.
- Consumer-grade design will separate winners from hobby projects.
This is bigger than one hire. The fight is to control the software that acts on your behalf—safely, reliably, and across your tools. The first team to deliver that at scale changes how we “use” computers.
My Take
I see OpenAI’s move as pragmatic and overdue. Anthropic earned its enterprise lead with a strong developer story. But momentum shifts when a product solves real tasks for everyday people. Get agents secure, simple, and affordable, and the rest follows. If you care about where AI goes next, push for safer defaults, audited connectors, and user-first controls. That’s how this tech serves people, not the other way around.
Call to action: if you build or buy AI tools, ask one question—can this agent do useful work without risking my data? If not, wait. If yes, pilot it. Demand logs, revocation, rate limits, and least-privilege by default. That pressure will shape the next wave, and it needs to.
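What those demands look like in practice can be sketched in a few lines of code. The sketch below is purely illustrative: OpenClaw’s internals are not public, and every name here (`AgentGuard`, the scope strings) is a hypothetical stand-in for whatever your agent stack actually exposes. The point is that scopes, rate limits, audit logs, and revocation are small, checkable mechanisms, not vague aspirations.

```python
import time

class AgentGuard:
    """Hypothetical least-privilege gate for agent tool calls.

    Enforces a scope allowlist, a simple per-minute rate limit,
    an audit log, and an instant kill switch.
    """

    def __init__(self, allowed_scopes, max_calls_per_minute=30):
        self.allowed_scopes = set(allowed_scopes)
        self.max_calls = max_calls_per_minute
        self.calls = []          # timestamps of recent calls
        self.audit_log = []      # (timestamp, scope, allowed) entries
        self.revoked = False

    def revoke(self):
        """Kill switch: block all further calls immediately."""
        self.revoked = True

    def authorize(self, scope):
        now = time.time()
        # Keep only call timestamps inside the 60-second window.
        self.calls = [t for t in self.calls if now - t < 60]
        allowed = (
            not self.revoked
            and scope in self.allowed_scopes
            and len(self.calls) < self.max_calls
        )
        if allowed:
            self.calls.append(now)
        # Every decision is logged, including denials.
        self.audit_log.append((now, scope, allowed))
        return allowed

guard = AgentGuard(allowed_scopes={"calendar.read", "email.draft"})
print(guard.authorize("calendar.read"))   # granted scope -> True
print(guard.authorize("payments.send"))   # never granted -> False
guard.revoke()
print(guard.authorize("calendar.read"))   # revoked -> False
```

An agent that cannot pass a gate like this should not get near your inbox, let alone your payments.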
Frequently Asked Questions
Q: What makes agent software different from a regular chatbot?
Chatbots answer questions. Agents take actions on your behalf—emailing, booking, searching, and using tools. That step requires stronger security and clearer permissions.
Q: Why did OpenClaw face security problems?
Many self-hosted instances were misconfigured or left exposed. Some lacked authentication. Others leaked credentials through poor storage. The concept scaled faster than safeguards.
Q: Will OpenClaw stay open-source under OpenAI?
Yes, it’s set to live in a foundation with continued support. The stated goal is to keep development open while improving reliability and safety.
Q: How does this affect Anthropic’s position with enterprises?
Anthropic still holds strong share and developer loyalty. But if OpenAI ships secure, friendly agents inside ChatGPT, decision makers may rethink standardization.
Q: What should teams do before adopting agents?
Run a small trial with strict scopes, credential vaulting, audit logs, and kill switches. Validate data handling and third-party access before expanding usage.
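That trial checklist can be encoded as a pre-flight check the pilot runs before the agent receives any credentials. This is a minimal sketch under assumptions: the policy keys, env-var names, and log path are invented for illustration and not tied to any specific product.

```python
import os

# Illustrative pilot guardrails; every name here is a placeholder.
PILOT_POLICY = {
    "scopes": ["calendar.read", "email.draft"],  # strict allowlist
    "audit_log_path": "/var/log/agent-pilot.log",
    "kill_switch_env": "AGENT_DISABLED",
    "secret_env_vars": ["AGENT_API_KEY"],        # vaulted, never hardcoded
}

def preflight(policy):
    """Return a list of problems; an empty list means the pilot may start."""
    problems = []
    if not policy["scopes"]:
        problems.append("no scopes granted: agent is either useless or unbounded")
    for var in policy["secret_env_vars"]:
        # Secrets must arrive via the environment (injected from a vault),
        # never committed to source code.
        if not os.environ.get(var):
            problems.append(f"missing secret {var}: inject from a vault")
    if os.environ.get(policy["kill_switch_env"]):
        problems.append("kill switch is set: agent must not start")
    return problems

issues = preflight(PILOT_POLICY)
print(issues or "pilot may start")
```

If the check fails, the agent never boots; expanding scopes later is a policy edit, not a code change.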