AI adoption has shifted from hype to hesitation—here’s how to move forward securely

Greetings,

 

My, how the headlines have changed. Last year, when we talked about the rush to adopt AI ahead of meaningful security, it was a novel story. Now many others are saying the same thing, and enterprises seem to be slowing their rollouts, wary of oversized promises and a lack of security.

 

Unfortunately, the worry we hear most often is whether a company's data will be used to train models. Which is to say, if the data won't be used for that purpose, then people assume they're okay.

 

However, this is a poor test: training use is just one of numerous factors to consider (see our 12 security questions blog). Plus, secure and privacy-preserving models are possible (see our private models white paper).

 

The story here is really one of uncertainty. There are no established best practices that CISOs can point to as references for what they should be doing, and in the absence of those, we're seeing the pendulum swing from wild adoption to paralysis.

 

The caution is an improvement, but the paralysis is unnecessary: there are good ways to build apps with AI safely. I'll be at two OWASP conferences this fall (see below), delivering research-based suggestions on how to build safe, secure, and private AI features, including guidelines for where not to (yet) use AI. Check out the talks if you're there; if not, let us know if you'd like to attend a webinar.

 

And finally, I'm thrilled to announce that IronCore has been named a Gartner Cool Vendor in Data Security for our pioneering work protecting AI data. We're honored by the recognition.


Patrick Walsh
CEO, IronCore 

 

Upcoming events:

  • OWASP LASCon
    • Oct 24, 1pm CT in Austin, TX
    • Title: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
    • Abstract: We’ll dive into techniques like model inversion attacks targeting fine-tuned models, and embedding inversion attacks on vector databases—key components in RAG architectures that supply private data to LLMs for answering specific queries.

  • OWASP Global AppSec
    • Nov 6, 10:30am ET in Washington, DC
    • Title: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
    • Abstract: This talk explores the hidden risks in apps leveraging modern AI systems—especially those using large language models (LLMs) and retrieval-augmented generation (RAG) workflows—and demonstrates how sensitive data, such as personally identifiable information (PII), can be extracted through real-world attacks. We’ll dive into techniques like model inversion attacks targeting fine-tuned models, and embedding inversion attacks on vector databases—key components in RAG architectures that supply private data to LLMs for answering specific queries.

 


The Terrifying Takeaways from the Massive OAuth Breach

What Google, Salesforce, and the Rest Keep Missing

 

The massive Salesforce, Google, and Salesloft hacks expose OAuth risks. We discuss why tokens are dangerous and how SaaS vendors are screwing up security, and we score who handled the problem well and who had a disappointing response.

 

> Read the full blog

 


IronCore Named a 'Cool Vendor in Data Security' by Gartner

 

IronCore Labs is a Gartner Cool Vendor in Data Security 2025 for its ability to encrypt large amounts of AI data while keeping it usable.

 

> Read the full blog

 


When Randomness Backfires:
Security Risks in AI

The Most Important Tool When Hacking AI Is Persistence

 

LLMs produce different results every time, and sometimes those results are outliers that hackers can use to exploit systems. Most unsafe outputs, data leaks, and successful jailbreaks or prompt injections come down to the random component in an LLM. In this blog, we explain why that is and why it's so dangerous for security.
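
To make that concrete, here's a tiny, hypothetical Python sketch (our own illustration, not code from the blog): temperature sampling over a made-up vocabulary where the unsafe continuation is rare but never impossible, which is why an attacker who simply retries the same prompt eventually gets lucky.

import math
import random

# Toy next-token logits (made up for illustration): the model strongly
# prefers refusing, but a "leak_secret" continuation is still possible.
logits = {"refuse": 4.0, "comply_safely": 2.5, "leak_secret": 0.0}

def sample(logits, temperature=1.0):
    # Softmax over temperature-scaled logits, then one random draw.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

random.seed(42)
attempts = 1000
leaks = sum(1 for _ in range(attempts) if sample(logits) == "leak_secret")
print(f"'leak_secret' sampled {leaks} times out of {attempts} identical attempts")

With these made-up numbers, the unsafe token comes up roughly 1-2% of the time, so a patient attacker re-running the same prompt a few hundred times will almost certainly see it at least once. That's the persistence angle the blog digs into.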

 

> Read the full blog

 

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, 303-261-5067

Unsubscribe Manage preferences