
Artificial Intelligence is changing how work gets done, not by eliminating human expertise, but by shifting its role. As Generative AI (GenAI) and Agentic AI improve, more of the creation work will be automated, moving humans into editing and supervising roles.
This shift isn’t just a technical change. It’s a philosophical one. And, as always, adoption will be shaped as much by perception as by performance.
The Spreadsheet Debate: 95% Accuracy and the Fear of Editing
When GPT-5 was released, one widely circulated demo showed it generating a spreadsheet with 95–98% accuracy. The creator marveled that what once took him 4–8 hours now took minutes — he could “walk his dog and come back to a dense spreadsheet.”
But as discussed on Y Combinator’s Hacker News, not everyone was convinced. One commenter noted:
“It feels like either finding that 2% that’s off (or dealing with 2% error) will be the time-consuming part in a lot of cases… especially when the 2% error is subtle and buried in step 3 of 46 of some complex agentic flow.”
Others echoed this concern. If editing the AI’s output takes as long as creating it from scratch, then the value is questionable. That skepticism is real, and it’s one reason some business leaders remain slow to adopt AI.
The Fallacy of Perfection
But as another Hacker News contributor highlighted, this reflects a broader fallacy: assuming that because something isn’t perfect, it isn’t useful.
John Dewey, quoted in the same discussion, called this the “philosophical fallacy” — the error of taking a truth from one context and applying it universally. Just because editing can sometimes be time-consuming doesn’t mean it always negates the value of AI creation.
The real insight is that success and satisfaction come from specific efforts, not universal ideals. AI doesn’t need to be perfect to be transformational. It needs to be useful.
The Intern Analogy
Several contributors compared AI to hiring an intern:
- Like an intern, AI can handle tasks senior team members don’t want to spend time on.
- Like an intern, AI’s work must be reviewed.
- But with oversight, AI frees up experts to focus on higher-value activities.
One commenter noted:
“The proper use of these systems is to treat them like an intern or new grad hire… give them the work no senior person wants but review it thoroughly.”
This analogy resonates deeply in healthcare RCM. Coders don’t want to spend their days on repetitive, low-value edits, and organizations can’t afford to waste their expertise. AI, like an intern, can take on the bulk of that grunt work, leaving humans in the supervisory seat.
From Creation to Editing in RCM
In the world of mid-revenue cycle coding, the shift from creation to editing is not just theoretical. It’s already happening.
- Creation work: Coding every encounter line-by-line, manually applying rules, and spotting every opportunity.
- Editing work: Reviewing AI-optimized claims, correcting edge cases, and applying judgment where nuance matters.
White Plume’s STAR² Ai platform embodies this shift. It doesn’t try to replace coders. Instead, it empowers them by handling the routine, repeatable changes and surfacing the exceptions. Coders step into the role of editors and supervisors more quickly, becoming more productive and staying focused where their expertise is irreplaceable.
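To make that editor-and-supervisor workflow concrete, here is a minimal sketch of how AI-suggested changes can be split into routine edits and exceptions for human review. It is illustrative only: the confidence threshold, field names, and example claims are hypothetical and do not describe STAR² Ai’s internal logic.

```python
from dataclasses import dataclass

@dataclass
class SuggestedChange:
    claim_id: str
    description: str
    confidence: float  # model's confidence that the change is safe to auto-apply

def triage(changes, auto_apply_threshold=0.95):
    """Split AI-suggested changes into auto-applied routine edits
    and exceptions queued for a human coder to review."""
    auto_applied, review_queue = [], []
    for change in changes:
        if change.confidence >= auto_apply_threshold:
            auto_applied.append(change)   # routine, repeatable change
        else:
            review_queue.append(change)   # surfaced for coder judgment
    return auto_applied, review_queue

changes = [
    SuggestedChange("CLM-001", "Add modifier 25", 0.99),
    SuggestedChange("CLM-002", "Re-sequence diagnosis codes", 0.72),
]
applied, to_review = triage(changes)
print(f"Auto-applied: {len(applied)}, queued for coder review: {len(to_review)}")
```

In practice, the threshold and review rules would be tuned with coders and compliance teams, and the queue is where the editor role lives.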
A Practical Example
One White Plume client began by automating about 35% of coding and billing changes. With analytics, AI, and collaboration between their coders and our Client Success Team, they grew their automation rate to 55%.
That meant moving from 10,000 to over 16,000 automated changes per month. Far from replacing coders, this augmented their intelligence, freeing them to edit, supervise, and improve revenue integrity without being bogged down in manual tasks.
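For readers who like to sanity-check the math, here is a quick back-of-the-envelope sketch. The total monthly change volume is an inference from the two figures above, not a number the client reported.

```python
# Back-of-the-envelope check of the figures above. The total monthly change
# volume is inferred from the reported numbers, not a reported client metric.
before_automated, before_rate = 10_000, 0.35
after_automated, after_rate = 16_000, 0.55

implied_volume_before = before_automated / before_rate   # ~28,600 changes/month
implied_volume_after = after_automated / after_rate      # ~29,100 changes/month

print(f"Implied monthly change volume before: {implied_volume_before:,.0f}")
print(f"Implied monthly change volume after:  {implied_volume_after:,.0f}")
# Both point to a total volume of roughly 29,000 changes per month, so the
# jump from 35% to 55% automation accounts for the extra ~6,000 automated changes.
```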
The Takeaway: Perfect vs. Good in Agentic AI
Y Combinator’s Hacker News community is known for being forward-thinking, and their debate around AI editing captures the challenge ahead: do we reject AI because it isn’t flawless, or do we embrace it for the leverage it provides?
In RCM, the answer is clear. The real value comes not from demanding perfection but from shifting coders into the editor role. By doing so, organizations unlock the compounding power of GenAI and Agentic AI, driving:
- faster productivity
- safer compliance
- higher revenue capture
The future of coding isn’t about AI doing 100% of the work. It’s about coders becoming supervisors of AI — editors of work that’s already 90% complete.
Ready to empower your coders as editors, not creators? Book a demo today and discover how White Plume is making Agentic AI work for healthcare.