Sunday, January 18, 2026
ap7i.com
Adaptive Perspectives, 7-day Insights

What Do We Do When Everyone's a Developer?

AI is democratizing software development the way the printing press democratized books. That's exciting—and it requires a different kind of organizational thinking.

A member of the executive team where I work shared a Wall Street Journal article today: “Claude Is Taking the AI World by Storm, and Even Non-Nerds Are Blown Away.” The piece describes what developers are calling getting “Claude-pilled”—the moment when software engineers, hobbyists, and even non-technical users hand a task to Claude Code and watch it produce working software in minutes.

I imagine he shared it in part because two of us have mentioned how much value we get from Claude at work.

A respected physician responded thoughtfully. His take, paraphrased: We’re entering extraordinary times, and as an organization we need to embrace AI while being thoughtful about how we use it. One concern is that everyone now thinks they can tackle any software-based project. A non-programmer can generate working code—but getting that code to production still requires experience. There’s HIPAA, security, architecture. And there’s potential for lost productivity if hundreds of people are all trying to build their own thing.

He’s right. And his comments reminded me of something I heard at a C-Vision International forum in New York last year. An executive from a regulated industry—banking, if I remember correctly—posed a question that’s stuck with me since:

What do we do when everyone’s a developer?

A Historical Parallel

I’ve been thinking about this in historical terms.

Before the mid-15th century, books were rare and expensive. A single manuscript was copied by hand, often by monks in monasteries, sometimes taking years to complete. By 1300, the largest library in Europe—in Paris—held just 300 manuscripts.

Then came Gutenberg’s printing press around 1440. A single press could produce 3,600 pages per day, compared with roughly 40 copied by hand. The first major work was the Gutenberg Bible in 1456—180 copies, produced over three years. By 1500, just 50 years later, there were an estimated 15 to 20 million books in circulation across Europe.

The impact was transformative. A book that once cost a fortune could now be afforded by a merchant, a student, even a tradesman. Literacy in England rose from 30% in 1641 to 62% by 1800. The printing press enabled the Renaissance, the Reformation, newspapers, public opinion, mass education, and eventually democracy itself.

But not everyone could buy a printing press and typeset their own newspaper. The barrier to entry dropped, but it didn’t disappear.

Then came the personal computer and WYSIWYG word processors. Suddenly a much larger slice of the workforce could create documents, print them, and distribute them. Later, anyone could spin up a WordPress blog and reach a global audience—not all that different from me starting this blog on January 1st, except that I focused on performance and global caching instead of a database-backed CMS.

And here’s the thing: the economic value of a written word arguably diminished over time as it became cheaper to produce and distribute. What was once scarce became abundant.

But here’s the critical difference: that transformation unfolded over generations. Gutenberg’s press arrived in 1440. English literacy didn’t reach 62% until 1800—360 years later. People had entire lifetimes to adapt. Careers evolved gradually. A scribe’s grandchildren might become typesetters; their grandchildren might become journalists.

AI is compressing that same magnitude of change into months.

Claude Code launched in February 2025. Eleven months later, it’s a Wall Street Journal headline. The tooling I was using last summer isn’t nearly as capable as what’s available today. Major model releases arrive every few months. Features that seemed futuristic in spring are routine by fall.

This pace has profound implications. We can’t wait for the next generation to figure it out. The adaptation has to happen within existing careers—within individual people who are mid-career today and may hope to work another 15 to 30 years. That’s a fundamentally different challenge than anything the printing press posed.

Is Software Next?

With AI, is software development going through a similar democratization?

The WSJ article notes that people are using Claude Code “for everything from health-data analysis to expense-report compiling.” One executive said he used the tool to finish a project in a week that would have taken him a year without AI. The Shopify CEO I wrote about last week built a custom MRI viewer in a single prompt.

At some point, will most people create much of the specialized software they need—the way most professionals now create their own documents instead of dictating to a typing pool?

If so, the overall value of commodity software diminishes. There will still be specialized applications that hold their value, just as there are still bestselling books despite the abundance of written content. But the industry gets shaped by the ease of entry.

The same question applies beyond software. Will we one day prompt our own movies the way we prompt images today? Will the ease of creation diminish the value of Hollywood blockbusters? You could ask this about every industry that produces digital goods.

Back to the Doctor’s Concern

His point stands: just because you can generate code doesn’t mean you can run it in production.

In healthcare, there’s HIPAA. There’s audit logging, identity management, data retention policies, encryption at rest and in transit. There are integrations with existing systems that have their own quirks and constraints. There’s the question of who supports the thing when it breaks at 2 a.m.

The printing press didn’t eliminate the need for editors, publishers, and booksellers. It created new roles and restructured existing ones. AI-assisted software development will likely do the same.

What Organizations Should Do

So what do we do when everyone’s a developer?

Educate on guardrails. In healthcare, every employee goes through HIPAA training annually—but most don’t know the technical requirements of a HIPAA-compliant software system, or what makes software production-ready versus a quick prototype. That’s not a criticism—it’s not their job to know. But if AI tools make it easy for anyone to build something, organizations need to establish and communicate the context in which AI can responsibly be used. What’s encouraged? What requires review? What’s off-limits entirely?

Create a channel for AI ideas. Not everyone can stop their day job to work on AI projects. But they can identify opportunities. A few weeks ago, a manager asked if I could find a way to transcribe recorded phone calls. The task—listening to half an hour of audio and typing it up—had been consuming hours of someone’s time every week. I built a secure, HIPAA-compliant transcription tool in a day. It now saves hours weekly and costs a few dollars a month to run. I wouldn’t have known the need existed if she hadn’t brought it forward.

Consider a suggestion box for AI opportunities—with a small incentive for ideas the organization acts on. The people closest to repetitive tasks are often the first to recognize them. Give them a way to surface those tasks without requiring them to solve them.

Audit your own processes. Every manager should be asking: what recurring manual work happens within my scope? Unless every single task is genuinely unlike the next (sometimes it feels that way when triaging IT support tickets), there are probably opportunities for AI-assisted productivity improvements.

This might sound threatening to people who do repetitive work today. But historically, when someone fixed the biggest bottleneck in a factory, a new bottleneck appeared elsewhere. There’s usually more work to do. And today’s AI tools will likely give way to better ones every few months. The opportunities to improve will keep appearing.

Start small, with supervision. The right first projects are internal tools with limited scope, clear success criteria, and someone technical reviewing the output. Not customer-facing systems. Not anything touching PHI without proper architecture. Build organizational muscle memory before raising the stakes.

Treat AI literacy as a core competency. Just as spreadsheet proficiency became expected of knowledge workers in the 1990s, AI fluency will become expected soon. The intuition for when to reach for AI, how to prompt it, and what it’s good at versus what it isn’t—that only comes from practice. Encourage experimentation in low-risk contexts. For Connecticut residents, the state offers a free Connecticut Online AI Academy through Charter Oak State College—five weeks, fully online, roughly 10-15 hours total.

The Bigger Picture

Democratization doesn’t mean chaos. It means more people have access to tools that were once reserved for specialists. That’s generally a good thing, but it requires new organizational structures to absorb the change.

The printing press led to publishers, editors, and standards. Personal computers led to IT departments and acceptable use policies. AI-assisted development will lead to its own structures—governance, review processes, centers of excellence, or whatever organizations decide to call them.

But here’s what keeps me up at night: those earlier transitions took decades to work out. We don’t have decades. The printing press gave society 360 years to develop copyright law, journalism ethics, and publishing standards. AI is giving us maybe 360 days before the next wave arrives.

The question isn’t whether AI changes how software gets built. It already has. The question is whether organizations adapt deliberately or let adaptation happen haphazardly—and whether they do it fast enough.

I’d rather be deliberate. And I’d rather start now.