Everyone has an AI story, but almost none of them are believable. ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­    ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏  ͏ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­ ­  
View in browser
AdobeStock_705190755-with-peg-newsletter-header

Greetings,

 

The annual RSA conference is a vendor circus. It never fails to amaze me how much security companies will spend in an attempt to stand out from the sea of companies when they're almost all saying the same thing. And that thing is "AI!!!".

 

I usually like to see what's new and what people are doing, but I've struggled with that this year. The scare statements and solution promises are all over-the-top and, frankly, less differentiated than ever. They all more or less come down to, "we'll fix your AI and make your agents safe."

 

But no one can make LLMs or agents safe unless they're so restricted that they can't accomplish anything. I've written and spoken at length on this. The issues in the underlying technology of neural networks mean you can't really know what a model is going to do. Given the same inputs it might produce a good result ten times in a row, but make a horrifying left turn on try number eleven.

 

It undermines the whole industry when the B.S. overwhelms everything else, but there are good solutions to parts of these problems. Using cryptography (Cloaked AI) to protect the data in AI workflows, for example, is real tech solving real problems. It's clear without claiming to solve all of AI's intractable issues.

 

We're not going to sponsor luchadores wrestling on an expo floor to get the word out (real thing this year), but we're prepared to have an honest, technical discussion about our approach. Drop me an email to set something up.

 

We appreciate all of you out there doing your own wrestling in the trenches. Until next time,

 

Patrick

 

PS Here's my latest blog, sparked by the Cryptographer's Panel session at RSA.
Patrick Walsh CEO IronCore Labs

Patrick Walsh
CEO, IronCore 

 

 

Upcoming events:

  • OWASP SnowFROC: Hacking AI-Enabled Apps: Exploit Demos, Data Compromises, and Hardening Patterns
    • April 17 (Denver University Cable Center)
    • Abstract: Adding LLMs to your product is deceptively easy: drop in a chat UI, add RAG, connect tools, and call it done. But when untrusted content becomes part of the prompt, models can be steered into revealing secrets, leaking tenant data, or taking actions you never intended that leaves your app vulnerable to non-obvious attacks.

      We’ll demonstrate exploits mapped to the OWASP Top 10s that start with user-generated content and end with real security impact. We then discuss practical mitigations including architectural patterns, protection strategies, and a decision framework for which AI use cases are safe, safe-enough, or too unsafe to ship.

 

encryption-keys-newsletter

Key Management Is 90% of the Problem

Why Even Cryptography Experts Can’t Get Key Management Right 

A breakdown on where IronCore draws the line on the use of generative AI and coding agents to ensure that private data stays private while still taking advantage of the productivity that these tools can bring.

 

> Read the full blog

 

issa-ai-data-blindspots-webinar-2026-newsletter

ISSA Webinar: AI Data Blind Spots 

What Security Professionals Need to Know 

 

In this webinar, you’ll learn about modern AI systems and how to secure them, as well as an introduction to the role of vector embeddings and how to protect embeddings with encryption-in-use. Companies building AI systems on private data need to know how to keep the data safe without inhibiting the usefulness of new AI products.

 

> Watch the recording

 

seald-migration-newsletter

Seald's U.S. Shutdown: Migration Options

Comparing Seald's Offering to IronCore's Zero-Trust, Scalable End-to-End Encryption Approach

 

Seald's U.S. shutdown came with little notice, while its European customers get 'alpha' status with no support. Here's a comparison with a better option for those wanting end-to-end encryption or cryptographic access controls, especially if using groups to manage access.

 

> Read the blog

 

LinkedIn
X
GitHub
Mastadon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, 3032615067

Unsubscribe Manage preferences