Consumer AI took a big step from hype to help this week. The message is clear: the tools are growing hands. They can touch files, scan photos, and act on our behalf. My view is simple: this new power is worth using, but only if we demand control, choice, and transparency.
Agents That Actually Do Work
Anthropic’s Claude CoWork is the most practical agent I’ve seen. It doesn’t just chat; it organizes, renames, and cleans. The demo made the point plainly: this is a safe, guided way to let software act on your machine.
“Basically, you can give Claude CoWork access to different folders on your computer, and it can do things on your computer on your behalf.”
Matt Wolfe showed the value with a messy Downloads folder. Claude scanned “roughly 270 items,” proposed folders, flagged duplicates, and executed with visible checklists. Behind the scenes, it ran terminal commands. That is real utility, not novelty.
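The workflow described above follows a simple pattern: scan, propose a plan, wait for approval, then execute. Here is a minimal sketch of that pattern in Python. The category names, grouping rules, and function names are my own illustration, not Anthropic's actual implementation.

```python
# Hypothetical sketch of the "scan, propose, approve, execute" pattern an
# agent like Claude CoWork might follow. Categories are assumed for
# illustration; a real agent would infer them and also flag duplicates.
from pathlib import Path

# Assumed mapping from file extension to destination folder name.
CATEGORIES = {
    ".png": "Images", ".jpg": "Images",
    ".pdf": "Documents", ".docx": "Documents",
    ".zip": "Archives", ".dmg": "Installers",
}

def propose_moves(folder: Path) -> dict[Path, Path]:
    """Scan a folder and propose a destination for each file.

    Makes no changes -- this is the visible plan shown to the user.
    """
    plan = {}
    for item in folder.iterdir():
        if item.is_file():
            dest = CATEGORIES.get(item.suffix.lower(), "Other")
            plan[item] = folder / dest / item.name
    return plan

def execute_with_approval(plan: dict[Path, Path], approved: bool) -> int:
    """Apply the plan only after explicit approval; return files moved."""
    if not approved:
        return 0
    moved = 0
    for src, dst in plan.items():
        dst.parent.mkdir(exist_ok=True)  # create category folder if needed
        src.rename(dst)
        moved += 1
    return moved
```

The design choice that matters is the split between `propose_moves` and `execute_with_approval`: the plan is inspectable before anything touches the disk, which is exactly the visible-checklist behavior the demo showed.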
“Organization complete… It gives me a pretty clear picture of where I can probably free up some storage.”
That’s the promise: AI as a dependable desk worker. Yet access matters. It launched Mac-only on higher-priced tiers, but broader availability has since arrived at $20 per month. Good. Useful automation should not hide behind a $100 paywall.
Your Data Is the New Interface
Google’s Gemini “personal intelligence” ties AI to your life. Connect Photos, Gmail, and more, and it can answer questions about your own history. The illustration was direct: ask for tires without naming your car, and it finds the answer from your camera roll.
“He didn’t say what his car was. It used his camera roll to figure out what tires he would need.”
It also pulled license plate numbers from photos. That’s both powerful and unsettling. Google says the content isn’t used to train models. Even so, the default must be clear controls, easy off-switches, and support for business accounts. As Matt noted, it’s not yet helpful for those living inside Workspace. That needs to change fast.
Open Systems Win, Lock-In Loses
Developers voted with their keyboards this week. Anthropic restricted subscribers from using their Claude Code access in third-party IDEs. Users balked. On the same day, OpenAI moved to support those IDEs. People don’t want a walled garden; they want tools that work where they work.
“They’re basically telling their customers… you must use Claude Code. And if you use this API key somewhere else… you’re dead to us.”
It’s the wrong fight. If a model is great, let it meet users in their workflow. Interoperability builds trust. Hard limits drive users away.
The Stakes: Platforms Are Consolidating
Google’s week was busy: better video generation (Veo 3.1), AI-assisted Trends, retail checkout moves, and an open translation model. Apple even tapped Gemini for Siri’s harder questions. AI is being baked into the devices and services we use every hour. That scale brings convenience—and concentration of power.
Elsewhere, OpenAI partnered with Cerebras on inference hardware, DocuSign added contract explaining and prep, and open image models kept closing the gap with closed systems. The drumbeat is the same: faster, more connected, more embedded.
- Agents will act on local files and apps.
- Personal data will fuel tailored answers.
- Walled workflows will face user backlash.
- Big platforms will set the defaults on phones and browsers.
That mix can help us, or box us in. It depends on how we respond.
My Take—and What We Should Demand
I like what works. Claude CoWork works. Gemini’s personal lookups work. But I don’t like being forced into one app, one plan, or one platform. We should support tools that respect user choice, publish clear data policies, and price fairly.
So here’s the ask:
- Use agents with visible plans and approvals, not black boxes.
- Keep personal-data links off by default and review permissions often.
- Favor products that allow third-party integrations.
- Push vendors to support business accounts, not just personal ones.
- Test open models where they meet your needs.
AI is finally making itself useful. Let’s make sure it stays useful on our terms, not the platforms’ terms.
Frequently Asked Questions
Q: What makes Claude CoWork different from a regular chatbot?
It executes real actions on your computer—creating folders, moving files, and running safe shell commands—with an approval step and visible progress so you stay in control.
Q: How private is Gemini’s personal intelligence feature?
Google says your emails and photos are used to answer your request and aren’t used to train models. Still, review permissions, keep it off by default, and disconnect when not needed.
Q: Why are developers upset with Anthropic’s restrictions?
Subscribers expected to use their access in other IDEs. Blocking that breaks workflows. Many prefer tools that integrate broadly, which is why OpenAI’s support for third-party IDEs landed well.
Q: Is Apple’s use of Gemini replacing Apple Intelligence?
No. Apple keeps a smaller on-device model for simple tasks. For tougher requests, Siri can route to Gemini in the cloud, combining speed with broader reasoning when needed.
Q: Should small teams adopt open models like Translate Gemma and GLM Image?
If cost, control, or customization matter, they’re worth testing. Open models may still trail the top closed systems, but they improve quickly and can run locally or on your own cloud.