Thursday, February 26, 2026
🛡️
Adaptive Perspectives, 7-day Insights
AI

Five Sandbox Projects This Week

At an AWS security symposium in Manhattan, I expected caution. What I found was a room full of security leaders urging their audiences to build faster.

Five Sandbox Projects This Week
AWS AI Security Symposium, City Winery NYC

“Don’t do five sandbox projects this year. Do five sandbox projects this week.”

That was the advice Hart Rossman, Vice President of AWS Security, offered to an audience member who’d mentioned that the only way their regulated organization could experiment with AI right now was in a sandbox environment.

I was sitting at City Winery on Pier 57 in Manhattan—ice flowing down the Hudson just outside—expecting a security symposium to be about caution. Risk mitigation. The importance of saying no.

Instead, I heard security leaders from AWS, Paramount, CoreWeave, and others urging their audiences to build—and to build now.

Acknowledging My Biases

Before going further, I should acknowledge where I stand.

I’m very pro-AI. I consider Claude Code an indispensable partner in my daily work. I consume 3-4 hours of AI news every day. I build something new using AI more often than once a month.

Even so, I was genuinely surprised at the pace people were advocating for. Especially people in charge of security, not innovation, speaking on behalf of large employers. If this symposium had a prevailing message, it wasn’t “slow down.” It was “accept reality, set guardrails, bring it on.”

Security as Enablement

The dominant theme of the day: security teams must be enablers and partners, not blockers.

Hart Rossman, wearing a sparkly jacket and an unusual winter hat, set the tone in the opening session. He leads security for AWS’s global services organization, including the AWS Customer Incident Response Team. His framing: we’re in an “age of intent,” where the number one skill for security leaders is resource management—budget, people, and tools directed toward outcomes.

His slides included: “Govern by enabling, not by restricting. Documentation should empower, not overwhelm.”

John Buckles, SVP of Security Architecture at SitusAMC, reinforced the point: “In the security setting, we used to be proactive by getting in front of things and locking them down. Proactive in the new way is to empower developers and others.”

KC O’Brien from Paramount said it simply: “I don’t want to be a blocker.”

Nearly every speaker returned to this theme. If security stands in the way, people will route around it.

The Shadow AI Reality

And route around it they have.

Alex Goryachev, a WSJ bestselling author and former Managing Director of Innovation at Cisco, shared statistics that should concern any governance-minded organization:

  • 70% of employees use AI without permission
  • 50% hide AI tools from leadership
  • 29% personally pay for AI apps at work

His point wasn’t that this is a problem to be solved by restriction. His point was that these are your most innovative employees. Shadow AI users are early adopters running experiments that could benefit the entire organization—if there were a safe channel for it.

“Turn shadow AI into safe innovation,” he urged.

The speaker showed an “AI Trust Paradox” slide: in personal life, AI use is unlimited, visible, and fast—leading to high trust and high adoption. In the workplace, it’s restricted, hidden, and slow—leading to low trust and shadow use.

Security leaders can close that gap by building guardrails that empower rather than block. Templates for secure prompts. Approved environments for experimentation. Channels for ideas to flow upward without requiring every employee to solve the compliance problem themselves.

Everyone’s a Builder Now

Rossman echoed a claim I’ve written about before: “Today, each and every one of us can be a Computer Scientist—can be a builder.”

At this symposium, I heard the same idea repeated multiple times.

Goryachev put it memorably: “The hottest programming language is human language.” And not just English—his slide showed dozens of languages. AI can handle prompts in many of them, even mixed in the same sentence.

An audience member asked Rossman how to keep things in check when everyone can “vibe code” but not everyone is a software engineer. His answer: “If everyone is going to be vibe coding, I can provide templates that everyone uses. If you’re doing spec-driven development, I can make sure that everyone threat-models. Now I can guarantee that we have a threat model for everyone.”

The tooling is changing what’s possible. Rossman mentioned Kiro, a new AI development environment, as the thing that brought him back to Computer Science. He’d prompted it with “I’d like to build a modern threat intelligence system.” It generated a specification. He used the spec to build a threat model. Then updated the specification from the threat model. From there, it went to a task list, then interactive code generation, then deployment.

No lines of code written by hand. Hours instead of months.

What Changes With AI

James Ferguson, a Principal Security Solutions Architect at AWS, walked through the new risk landscape. Traditional defense in depth still applies, but AI introduces new considerations:

  • Functionality is non-deterministic. The same prompt can produce different responses.
  • Input becomes code. User prompts become instructions that the AI executes. Data and instructions are no longer separate.
  • Autonomous decision-making. Agents make independent decisions without explicit programming.
  • Dynamic learning and adaptation. Systems can learn from interactions and change behavior over time.

Rossman referenced Simon Willison’s “lethal trifecta”: if an AI agent has access to private data, exposure to untrusted content, and the ability to communicate externally, you’re likely to have problems. All three together create an exfiltration risk that’s difficult to mitigate.

The takeaway wasn’t that AI is too dangerous to deploy. It was that organizations need to understand these new risk patterns and build accordingly.

A New Kind of Authority?

Goryachev mentioned his first-grade son, who always wants to know “what AI says” on any topic.

That stuck with me. Are we developing a new form of the appeal to authority fallacy, where AI itself becomes the authority? Not AI thought leaders—AI systems themselves.

When a child defaults to asking AI before asking a parent or teacher, something has shifted. When an adult uses AI to interpret medical imaging results before consulting their doctor—as Goryachev admitted he did with his own chest CT—something has shifted.

This isn’t necessarily bad. AI systems can synthesize vast amounts of information quickly. They can surface relevant research. They can provide second opinions at negligible cost.

But they can also be confidently wrong. And unlike human experts, they don’t have the social cues that signal uncertainty. A wrong answer from AI comes formatted exactly the same as a right one.

I don’t have a tidy conclusion for this concern. It’s worth noting, though, that even in a room full of AI enthusiasts, no one claimed AI should replace human judgment. The framing was always augmentation: AI as a tool to prepare for conversations, to accelerate research, to automate the routine so humans can focus on the exceptional.

The Counter-Argument

I should acknowledge that not everyone shares this optimism.

Recent surveys show that nearly three-quarters of Americans anticipate widespread job cuts due to AI. Middle-income workers in particular feel left behind by innovations that seem to benefit the elite. Tech entrepreneur Jerry Kaplan has compared the AI investment boom to a potential combination of the 2000 dot-com bubble and the 2008 housing crisis.

Gary Marcus, an emeritus professor at NYU and prominent AI skeptic, argues that large language models are fundamentally limited—predictive by nature, incapable of true understanding, unlikely to deliver the superintelligence their promoters promise.

Only 10%–30% of AI proofs of concept are expected to scale to production, according to figures Rossman cited. That’s a lot of experimentation that goes nowhere.

The people at this symposium were bullish. So am I. But the skeptics aren’t without arguments.

The Commute Home

The day ended with happy hour overlooking the Hudson. Ice still flowing. Sponsors raffling off tech gadgets and travel vouchers. I slipped out quickly and caught a 5:03 PM train from Grand Central back to Milford.

I spent the ride thinking about what I’d heard. Security leaders encouraging speed. Frameworks for enabling rather than restricting. The acknowledgment that shadow AI is a feature of engaged employees, not a bug.

Are today’s thought leaders right? Am I right to be as optimistic about AI as I am?

Only time will tell. The skeptics have data points. The enthusiasts have momentum. Both have blind spots.

But I know where I want to be. I want to be part of a team that embraces the possible. Embraces the theoretical. Embraces the future. And just keeps building.

Even if I’m not quite up to five sandbox projects this week.