Every customer says the same thing: don’t train on our data; plus attack demos, new videos, and how to use privacy-preserving AI

Greetings,

 

For my latest presentation, which I delivered last week at LASCON, I dug into the world of MCP servers. People are empowering LLMs with "tools" (arbitrary function calls) to boost productivity. When these things work, they're pretty amazing. But holy smokes is the security situation terrible.

 

We'll share the video of that talk, which includes some live attack demos, once it hits the tubes.

 

And if you're more interested in how data leaks out of AI systems without MCP servers, check out my DEF CON talk from August, which was just posted.

 

Finally, we keep hearing that the number one roadblock enterprise customers put up around AI is the use of their data to train models. It's the one prohibition everyone insists on. In my latest blog post, I talk about how to build privacy-preserving models and how to adjust contract language to carve out an exception for models built this way. Businesses can get the benefits of AI without throwing their data out the window.

 

And did I mention that IronCore has been recognized as a Gartner Cool Vendor in Data Security for our pioneering work protecting AI data?


Patrick Walsh
CEO, IronCore 

 

Upcoming events:

  • OWASP Global AppSec
    • November 6, 10:30am ET in Washington, DC
    • Title: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
    • Abstract: This talk explores the hidden risks in apps leveraging modern AI systems—especially those using large language models (LLMs) and retrieval-augmented generation (RAG) workflows—and demonstrates how sensitive data, such as personally identifiable information (PII), can be extracted through real-world attacks. We’ll dive into techniques like model inversion attacks targeting fine-tuned models, and embedding inversion attacks on vector databases—key components in RAG architectures that supply private data to LLMs for answering specific queries.

 


Privacy-Preserving AI: The Secret to Unlocking Enterprise Trust

Why Fortune 500 companies are blocking vendors — and how encrypted AI models win them back

 

Enterprises are blocking vendors from training AI on their private data. Learn how privacy-preserving tech like encrypted models can restore trust, win deals, and enable AI features.

 

> Read the full blog

 


DEF CON 33 - Exploiting Shadow Data in AI Models and Embeddings

Illuminating the dark corners of AI

 

This talk explores the hidden risks in apps leveraging modern AI systems — especially those using large language models (LLMs) and retrieval-augmented generation (RAG) workflows — and demonstrates how sensitive data, such as personally identifiable information (PII) and social security numbers, can be extracted through real-world attacks. We demonstrate model inversion attacks targeting fine-tuned models and embedding inversion attacks on vector databases, among others.

 

> Watch the video

 


IronCore Named a 'Cool Vendor in Data Security' by Gartner

 

IronCore Labs is a Gartner Cool Vendor in Data Security 2025 for its ability to encrypt large amounts of AI data while keeping it usable.

 

> Read the full blog

 

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, 303-261-5067

Unsubscribe Manage preferences