Ievgen Bondarenko // researcher · governance · offense

The territory I keep walking is where AI systems act faster than the frameworks built to govern them. Where authority gets re-derived at runtime. Where evidence breaks. Where an action commits before anyone signs. I write about it. And I build the agents that test what I'm writing about.

// writing
Essays & frameworks.

Trying to write the parts of AI security that the documentation hasn’t caught up to yet.

  I.

    Zero Trust Is Not a Product

    What NSA's 2026 Guidance Actually Tells Us

    Reading the new NSA guidance not as architecture but as an admission — that the assumption underneath most enterprise security programs has quietly become unworkable, and the official answer is finally catching up to the operational reality.

  II.

    Auditability Is Not Explainability

    On what evidence actually means in autonomous systems

    Most AI governance writing collapses these two ideas. They are not the same thing — and the systems being deployed today need both, structured separately, or neither layer holds when an auditor reads the chain six months later.

  III.

    Execution · Authority · Evidence

    A working framework for governing systems that act

    When agentic systems collapse decision-making into runtime, the three layers most control frameworks keep separate — what executed, what authorized it, what proves the authorization — collapse with them. The framework is an attempt to keep them legible.

  IV.

    AI Risk-Integrated Cybersecurity Governance

    A model aligned with NIST CSF 2.0

    A conceptual framework that separates auditability, governance controls, and technical safeguards in regulated AI systems. Designed for organizations where AI is no longer adjacent to security but inside it.

More short-form thinking on LinkedIn — posted weekly.
Subscribe / get in touch ↗
// building
Tools that probe what the writing argues about.

Frameworks are theory until something tests them on a real system.

01

Two agents reading the same system

One agent reads source code and proposes failure modes. Another verifies them against a live deployment with real authentication. The gap between what they each see is where most of the breakdowns I write about — execution drifting from authority, evidence quietly going stale — actually surface as code, not theory.

02

Differential fuzzing across language ports

Same cryptographic primitive, four different implementations of it, one shared corpus. If any two ports disagree about anything, that disagreement is almost always a bug somewhere — and often a bug older than every implementation. The framework keeps producing them.
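The shape of that check can be sketched in a few lines. This is an illustrative stand-in, not the project's harness: the primitive here is CRC-32 rather than anything cryptographic, the two "ports" are zlib's C implementation and a naive pure-Python one, and the corpus is tiny.

```python
# Differential-check sketch: two implementations of one primitive,
# one shared corpus, any disagreement flagged as a bug in some port.
import zlib

def crc32_py(data: bytes) -> int:
    """Naive bitwise CRC-32 (IEEE polynomial, reflected form)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Shared corpus fed to both implementations.
corpus = [b"", b"abc", b"123456789", b"\x00" * 64, bytes(range(256))]

# Collect every input on which the two ports disagree.
mismatches = [(s, crc32_py(s), zlib.crc32(s))
              for s in corpus if crc32_py(s) != zlib.crc32(s)]
assert not mismatches, mismatches  # a non-empty list would localize a bug
```

A real harness swaps the hand-picked corpus for fuzzer-generated inputs; the comparison loop stays this simple, which is what makes the oracle cheap.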

03

Live boundary probes

Specs describe what should happen. Live systems describe what does. Most of the work is in the gap — building probes patient enough to surface it, with operator-controlled accounts, without making noise the system owners do not want. The probes are the part of governance that actually closes the loop at runtime.

// about

Two security disciplines that don’t usually share a desk — governance and offense — keep teaching each other things on mine.

Day work
Security & compliance advisory · Summit Range Consulting
Night work
Offensive AI infrastructure research
Studied at
Johns Hopkins · Healthcare data security & compliance
Working from
Northern California
Reading lately
Auth specs that nobody enforces.

My day work is helping organizations build defensible AI governance — execution boundaries, authority chains, evidence that survives an auditor reading it six months later. My night work is building offensive agents that probe the same kinds of systems those frameworks are written for. The two keep teaching each other things.

I keep getting pulled to questions that don’t have settled answers. What an AI system actually does versus what its policy claims. Where authority breaks under autonomy. What an evidentiary record looks like when no human committed the action. The kind of thing that looks dry on a slide and turns out to be where everything interesting lives.

If you’re working at this seam — building agentic systems, governing them, defending them, or trying to write the frameworks that catch up to what they already do — [email protected].