A new book titled I Am Not a Robot brings the role of artificial intelligence in journalism into sharp focus, pressing media and tech to answer hard questions about trust and truth. Written by a working journalist and author, the book examines how newsrooms use automation, how social platforms reshape reporting, and what it means to cover the future responsibly.
The author argues that the debate is no longer abstract. AI tools are already part of reporting, editing, and distribution. The core question is how reporters can keep human judgment in charge while using machines to work faster and smarter.
Why This Conversation Matters Now
News outlets have tested AI for transcription, translation, research, and drafts. Some have faced public backlash when tools produced errors or borrowed language too closely from sources. Others now set stricter rules, tracking where AI appears in the process and labeling its use.
Social platforms also drive new habits. Short video and algorithmic feeds push speed and novelty, while subscription models demand loyalty and depth. The author argues that AI sits at the crossroads of these pressures, influencing which stories get told and how.
The Book’s Central Questions
“AI, new media, and covering the future.”
Those words frame the project. The book asks how reporters should verify AI-assisted work, what to disclose to readers, and when to draw a hard line. It also examines how creators, from independent newsletters to large outlets, can keep control over their voice when tools remix or imitate their style.
- Where to set guardrails for AI use in reporting
- How to explain AI’s role to readers
- What standards build trust across platforms
Risks and Opportunities for Newsrooms
Automation can reduce time on routine tasks and free reporters for field work. That may help smaller teams cover more ground. But reliance on automated summaries or image tools can mask bias or insert small errors at scale. The book argues that editors should treat machine output like any source: verify with documents, data, and humans.
The author also weighs labor concerns. If AI handles basic drafts, what happens to entry-level jobs that train future reporters? The book suggests publishers pair AI pilots with paid training, so new staff learn core skills rather than only tool prompts.
Audience Trust and Transparency
Trust remains fragile. Readers want to know how a story was made. The book backs clear labels when AI helps produce text, images, or audio. It urges outlets to publish sourcing notes, model limits, and the steps taken to check claims. Plain language, not legal jargon, is the goal.
It also highlights the threat of deepfakes and synthetic voices. Verification workflows need updates, from reverse image searches to audio forensics. Editors should slow down when evidence is digital-only and push for independent corroboration.
Covering the Future Without Hype
The author calls for beat reporting that tests claims with data and real users. Instead of repeating product demos, reporters can track how tools perform in schools, courts, clinics, and city offices. That approach checks promises against outcomes and flags harms early.
The book encourages diverse sourcing. Technologists, policy experts, and communities affected by AI should share equal billing. That mix helps surface blind spots and avoids one-sided narratives.
What to Watch Next
Policy debates will shape newsroom practices. Copyright suits, data access rules, and disclosure standards are moving fast. News organizations may form shared guidelines on provenance and audit trails. Education for reporters and editors will be key, as skills in verification and data analysis become standard.
As the author frames it, the task is simple to say and hard to do: keep the human in charge. That means setting rules, testing tools, and telling readers exactly how stories come together.
I Am Not a Robot lands with a clear message. AI can help reporters serve the public, but only with strong oversight, open disclosure, and relentless fact-checking. Readers should expect firmer labels, better sourcing notes, and more reporting that tests claims in the real world. Watch for newsrooms to publish detailed AI policies and for coverage that treats every algorithm like a source that must be verified.
Deanna Ritchie is a managing editor at DevX. She has a degree in English Literature, has written 2,000+ articles on getting out of debt and mastering your finances, and has edited over 60,000 articles in her career. She has a passion for helping writers inspire others through their words. Deanna has also been an editor at Entrepreneur Magazine and ReadWrite.