Six years ago, a global shock arrived in weeks. Many missed the early signs. That same pattern is showing up again—only this time, the agents are algorithms. My view is simple: the pace and scope of AI advances are being underestimated, and waiting on the sidelines is the riskiest move you can make.
Why does this matter now? Because people building and using the latest systems are not making predictions—they’re reporting their present. Their jobs already changed. And if they’re right, yours will, too, far sooner than you expect.
The Core Claim: This Isn’t Hype—It’s Arrival
Matt Shumer, CEO of HyperWrite, argues that the “overblown” phase has passed. He says professionals are sounding alarms because they’ve lived the change. His description is blunt:
“I’m no longer needed for the actual technical work of my job. I describe what I want built in plain English and it just appears… the finished thing.”
He adds that new models aren’t just executing—they’re exercising something that feels like judgment and taste:
“They weren’t just executing my instructions. They were making intelligent decisions.”
I agree with the thrust: the threshold has shifted. Users describe a step-change since releases like GPT-5.3 Codex and Claude Opus 4.6. The anecdotes rhyme: less hand-holding, fewer “please fix” loops, more first-pass success.
Evidence: The Ground Already Moved
One research group, METR, tracked how long expert-level tasks take for humans versus AI. Their timeline shows a startling pattern: from GPT‑3.5 handling seconds-long work to recent systems reliably tackling hours-long tasks—and stretching into double-digit hours. The punchline: the length of tasks AI can complete has been doubling about every seven months.
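To make the doubling claim concrete, here is a minimal sketch of what that exponential trend implies. Only the roughly-seven-month doubling period comes from the research; the starting task length and projection window below are made-up inputs for illustration.

```python
def projected_task_hours(start_hours: float, months: float,
                         doubling_months: float = 7.0) -> float:
    """Exponential growth: start_hours doubles every `doubling_months`.

    The ~7-month doubling period is the reported trend; all other
    numbers passed in are hypothetical.
    """
    return start_hours * 2 ** (months / doubling_months)

# Illustration: a 1-hour task horizon today becomes an 8-hour horizon
# after 21 months (three doublings).
print(projected_task_hours(1.0, 21))  # → 8.0
```

The point of the arithmetic is how quickly the curve compounds: three doublings turn an hour into a workday, and six turn it into a work week.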
This isn’t just coding. The same acceleration shows up across math, scientific Q&A, browsing, robotics simulations, and video analysis. A researcher called this “probably the most important single piece of evidence about AGI timelines right now.”
There’s a sharper edge: the builders are using AI to build the next AI. OpenAI said GPT-5.3 Codex was “instrumental in creating itself,” from debugging training to managing deployment. Anthropic leaders describe a similar loop inside their shop: AI writing much of the code, then using that to push the next models faster. Smarter models beget smarter models.
Yes, There Are Counterpoints—But They Miss the Clock
Some push back on dramatic analogies or argue change will look more like the internet: sweeping, but over years. Fair. Yet even the skeptics concede rapid job shifts are coming. The disagreement is mostly about timelines, not the destination. And as I see it, whether the horizon is two years or ten, your best move right now is the same: start using these systems for real work.
Jobs, Judgment, And The Wrong Kind Of Comfort
Leaders inside Anthropic and others warn of major white-collar automation. Dario Amodei suggests half of entry-level roles could vanish, with joblessness rising into double digits. Nvidia’s Jensen Huang says every job is affected. The old refuge—“AI can’t match judgment or taste”—already looks shaky given what recent systems exhibit in practice.
“It is my guess that by 2026 or 2027, we will have AI systems that are broadly better than all humans at almost all things.” — Dario Amodei
You can dismiss the exact date. You cannot ignore the direction.
What To Do Next
Don’t wait for a memo. Move first, learn fast, and make the tools work for you. Here’s how to start without burning months:
- Use paid versions of leading models; free tiers trail by a year or more.
- Stop treating AI like a search box—assign end-to-end tasks with real stakes.
- Iterate: give context, supply files, request revisions, and measure output quality.
- Target your bottlenecks: drafting, analysis, QA, reporting, prototyping.
- Ship small internal tools that cut hours into minutes; stack wins weekly.
These steps aren’t theory; they mirror what power users describe daily. The gap between dabblers and operators is widening fast.
My Take
The most dangerous story right now is “I tried AI in 2024 and it wasn’t impressive.” That experience is obsolete. The last six months changed the baseline. People who work with current models are already offloading hours of skilled tasks. If a model shows a hint of a skill today, the next version often makes it reliable.
I’m not arguing for blind cheerleading. The risks—misuse, deception, economic shock—are real. But ignoring the tools won’t slow them down. It will only sideline you from the decisions and the opportunities.
Act now: invest a daily hour, pick one meaningful workflow, and automate it with AI end-to-end. Repeat weekly. Become the person who walks into meetings with results, not opinions.
Whether the curve “zooms past” in two years or ten, you will want to be the one already moving at speed.
Frequently Asked Questions
Q: How do I know if my role is at risk?
Map your weekly work into tasks: drafting, analysis, coding, support, review. If a task is repeatable and screen-based, test a top model on it now. Many white-collar workflows already qualify.
Q: I used AI last year and wasn’t impressed—what changed?
Recent systems handle longer, more complex tasks with fewer corrections. Users report first-pass outputs that cut iteration loops and reduce the need for constant “please fix” prompts.
Q: What’s the first practical step to get value fast?
Choose one painful bottleneck—reports, slide builds, contract markup, test automation—and force a paid model to deliver a complete draft. Provide files, context, and accept iterative refinement.
Q: Are these models really making “judgment” calls?
Users, including builders, say outputs show taste-like choices in layout, phrasing, and design trade-offs. It’s not human judgment, but it’s useful enough to change workflows today.
Q: What if the big disruption takes longer than predicted?
Then you gain compound advantages early: lower costs, faster cycles, and credibility. The actions that help in two years also help in ten—there’s little downside to starting now.