New stats on AI breaches are dismal, and they're only the beginning; a new blog explains why; plus a bunch of upcoming conference talks.

Greetings,

 

I've been talking a lot about the problems with adding AI features and their impact on application security. Why? Because I think this is the biggest change to the landscape of security threats since the Internet. And while I've been saying that for quite a while now, the evidence to back it up is starting to come in.

 

This year's IBM Cost of a Data Breach report, for the first time ever, includes breakouts for AI, and the numbers are eye-popping:

  • 13% of breaches involved AI models or applications
  • 97% of those breached had no AI access controls in place (as is generally true of AI deployments)
  • 60% of AI breaches led to compromised data

If you're Chicken Little, there's no satisfaction in being right.  And this is just the start. Hackers are only just figuring out how to take advantage of the massive new problems companies are adding to their infrastructures.  Hopefully these data points will spur more people to take the security of data -- and especially of AI data -- more seriously.

 

I'll be out ringing the alarm in the coming months, starting with DEF CON in another week or so, the AI Risk Summit a couple of weeks after that, and OWASP LASCon in October. If you'll be at any of them, please check out our talks, which are filled with demos and details, and then come by and say hi after. Not all of these are recorded, so if you get a chance to go, you definitely should.

 

Finally, we have a new blog that may help explain the many bad behaviors and security risks of AI: When Randomness Backfires: Security Risks in AI. I hope your summer has gone well and that you have plans to make the most of the rest of it.


Patrick Walsh
CEO, IronCore Labs

 

Upcoming events:

  • DEF CON 33
    • Aug 9, 11am PT in Las Vegas, NV
    • Title: Illuminating the Dark Corners of AI: Exploiting Data in AI Models and Vector Embeddings
    • Abstract: This talk explores the hidden risks in apps leveraging modern AI systems—especially those using large language models (LLMs) and retrieval-augmented generation (RAG) workflows—and demonstrates how sensitive data, such as personally identifiable information (PII) and social security numbers, can be extracted through real-world attacks.
       
  • AI Risk Summit
    • Aug 19, 4pm PT in Half Moon Bay, CA
    • Title: Smart Tech, Dumb Moves: AI Adoption Without Guardrails
    • Abstract: This talk explores concrete examples of threats, real-world attacks, and systemic risks that every security professional should understand. It also provides guidance on how to critically evaluate vendors introducing AI features, helping you identify red flags and spot when security precautions are being neglected.


  • OWASP LASCon
    • Oct 24, 1pm CT in Austin, TX
    • Title: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
    • Abstract: We’ll dive into techniques like model inversion attacks targeting fine-tuned models, and embedding inversion attacks on vector databases—key components in RAG architectures that supply private data to LLMs for answering specific queries. (See the toy sketch just below this list for the intuition behind embedding inversion.)
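
For readers who haven't seen embedding inversion before, here's the intuition in toy form (my own illustration, not the talk's demo, and the hashed-trigram "embedder" below is a hypothetical stand-in for a real model): vectors in a RAG index preserve enough information about their source text that an attacker who can read them can match, or with better tooling reconstruct, the private content behind them.

# Toy sketch, not a real attack: if an attacker can read a stored embedding,
# scoring candidate texts against it recovers the underlying content.
import math

def embed(text: str, dim: int = 256) -> list[float]:
    # Hypothetical stand-in for a real embedding model: hashed character trigrams.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# A vector "stolen" out of a RAG index; the attacker never sees the plaintext.
stolen = embed("Patient SSN 123-45-6789, diagnosis: hypertension")

# Candidate reconstructions scored against the stolen vector.
candidates = [
    "Quarterly revenue grew 12% year over year",
    "Patient SSN 123-45-6789, diagnosis: hypertension",
    "Meeting notes from the platform team standup",
]
print("Recovered:", max(candidates, key=lambda c: cosine(stolen, embed(c))))

Real embedding inversion attacks go further and train a decoder to generate text directly from a vector, but the leakage they exploit is the same.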

     


When Randomness Backfires: Security Risks in AI

The Most Important Tool When Hacking AI Is Persistence

     

LLMs produce different results every time, and sometimes those results are outliers that hackers can use to exploit systems. Most unsafe outputs, data leaks, and successful jailbreaks or prompt injections come down to the random component in an LLM. In this blog, we explain why that is and why it's so dangerous for security.
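
To make the persistence point concrete, here is a minimal, self-contained sketch (toy numbers of my own, not taken from the blog): with temperature sampling, an unsafe completion that is unlikely on any single call becomes nearly inevitable over enough calls.

# Toy simulation of temperature sampling over three possible completions of one
# fixed prompt; index 2 is the unsafe outlier the attacker is fishing for.
import math
import random

def sample(logits, temperature=1.0):
    # Softmax-sample an index from raw scores at the given temperature.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

completions = ["refusal", "safe answer", "verbatim PII from the context window"]
logits = [4.0, 3.5, 0.5]  # the leak has roughly a 2% chance on any single try

random.seed(0)
attempt = next(i for i in range(1, 10_001) if sample(logits) == 2)
print(f"Unsafe completion ({completions[2]!r}) first sampled on attempt {attempt}")

A patient attacker just scripts the retries, which is why testing a prompt once and calling it safe proves very little.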

     

> Read the full blog

     


Training AI Without Leaking Data

How Encrypted Embeddings Protect Privacy

     

(Previously linked versions were drafts; this is the official final version.)

How to train AI models over encrypted training data, producing models that can only be run with access to the key. This is useful for building categorization and recommendation models over sensitive customer data without exposing that data to engineers and researchers, while keeping the data inside the models safe.
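
As a rough intuition for how a key-gated model can work, here is a deliberately simplified sketch under my own assumptions (it is not the construction from the white paper): transform every embedding with a secret, key-derived, distance-preserving map before training. The training data is then unreadable to the people handling it, and the model only gives sensible answers when queries are transformed with the same key.

# Deliberately simplified sketch (NOT IronCore's actual scheme): derive a secret
# orthogonal rotation from a key, rotate every embedding, and train on the
# rotated vectors. Distances are preserved, so the model still works, but it
# only answers sensibly for queries rotated with the same key.
import numpy as np

DIM = 8

def keyed_rotation(key: int, dim: int = DIM) -> np.ndarray:
    # Deterministic random orthogonal matrix derived from the secret key.
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    return q

def protect(vectors: np.ndarray, key: int) -> np.ndarray:
    return vectors @ keyed_rotation(key)

# Toy "embeddings" for two categories of sensitive records.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=+1.0, size=(50, DIM))
class_b = rng.normal(loc=-1.0, size=(50, DIM))

KEY = 1234
train = protect(np.vstack([class_a, class_b]), KEY)  # all the researcher ever sees
centroids = np.array([train[:50].mean(axis=0), train[50:].mean(axis=0)])

def classify(query: np.ndarray, key: int) -> int:
    # Nearest-centroid inference in the protected space.
    q = protect(query[None, :], key)[0]
    return int(np.argmin(np.linalg.norm(centroids - q, axis=1)))

query = rng.normal(loc=+1.0, size=DIM)          # a fresh class-a record
print("right key: ", classify(query, KEY))      # almost certainly 0 (correct)
print("wrong key: ", classify(query, 9999))     # essentially a coin flip

A bare rotation like this would not survive a determined attacker with enough protected vectors; real schemes add more, but the shape of the idea is the same: the key, not the data, is what makes the model usable.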

     

> Download the PDF (new link)

     

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, 303-261-5067

Unsubscribe | Manage preferences