Some thoughts on planning for a year when AI and security intersect more deeply and data is more at risk than ever before.

Greetings,

 

This is the time of year when those of us on calendar-year planning and budgeting cycles think about what we need to do next. I see two important themes that I hope all security teams and appsec engineers are thinking about:

 

  1. Proactive and preventive security approaches are a long-term investment that pays off better than reactive ones. For data security, that means application-layer encryption (see the sketch just after this list).
  2. A plan for managing the risks that come with AI adoption, plus budget and time to tackle those risks. Because adoption is coming with or without security, and it behooves you to make it "with."
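
Here's a minimal sketch of what application-layer encryption looks like in practice: the app encrypts a sensitive field before it ever reaches the database, so a breach of the datastore yields only ciphertext. This is Python using the cryptography package; the field names and usage are hypothetical, and a real deployment would pull keys from a KMS rather than generating them in-process.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Simplified for the sketch: production keys come from a KMS,
    # with rotation and per-tenant separation.
    key = AESGCM.generate_key(bit_length=256)

    def encrypt_field(plaintext: str, aad: bytes) -> bytes:
        nonce = os.urandom(12)  # unique nonce per encryption
        ct = AESGCM(key).encrypt(nonce, plaintext.encode(), aad)
        return nonce + ct       # store the nonce with the ciphertext

    def decrypt_field(blob: bytes, aad: bytes) -> str:
        nonce, ct = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ct, aad).decode()

    # Hypothetical usage: the database only ever sees `blob`.
    blob = encrypt_field("555-55-5555", aad=b"user:42:ssn")
    assert decrypt_field(blob, aad=b"user:42:ssn") == "555-55-5555"

Binding the ciphertext to its record with associated data (the aad argument) also prevents an attacker from swapping encrypted values between rows.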

At each of the last two conferences I attended, there were quite a few talks on the security of AI. But each also featured a talk or keynote from a security person whose position was essentially, "I don't care about the security implications; I'm all in on using all the AI tools." The first was "AI All the Things" by Marcus Carey, and the second was "The Future of AppSec is Continuous Context" by Daniel Miessler, whose CV includes "Head of BI, InfoSec" at Apple.

 

Daniel's talk in particular made a big impression on me. He's left the corporate world to focus on AI, and he uses it to conduct pen tests, among other things. He's also fed his entire life into Claude Code, and he's getting some incredible results from doing so. But he's showing a real disregard for the privacy of his personal data in the process.

 

I'll talk more in the future about how to achieve what he's doing in more privacy-preserving ways, and I've previously discussed how to evaluate the security of vendors with AI features. But my most recent blog dives into the question of MCP servers (a critical component in these digital assistant setups) and how to think about them. Are they secure? For the moment, let's assume they are. Are they safe? Almost certainly not. Buyer beware.
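
To make that concrete, here's a tiny sketch of an MCP server built with the official MCP Python SDK's FastMCP helper (assuming that SDK; the tool names and guard are illustrative stubs, not a vetted defense). The point is that any destructive capability you expose can be invoked by a model whose judgment is steered by attacker-controlled content, so the gate has to live in code, not in the prompt.

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("repo-tools")

    # Default-deny posture: flipped deliberately by an operator,
    # never by anything the model reads.
    READ_ONLY = True

    @mcp.tool()
    def list_issues(repo: str) -> str:
        """Read-only: comparatively safe to expose to an LLM."""
        return f"(stub) open issues for {repo}"

    @mcp.tool()
    def delete_repo(repo: str) -> str:
        """Destructive: one injected prompt could call this with any repo."""
        if READ_ONLY:
            return "refused: server is in read-only mode"
        return f"(stub) deleted {repo}"  # unreachable in this sketch

    if __name__ == "__main__":
        mcp.run()  # serves the tools over stdio by default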


Patrick Walsh
CEO, IronCore Labs


MCP Servers Are Electric

But not in the way you might hope

 

MCP servers promise magic, but one prompt can blow up your GitHub, Salesforce, or entire stack. Here's why LLM integrations are far more dangerous than vendors admit.

 

> Read the full blog

 


Privacy-Preserving AI: The Secret to Unlocking Enterprise Trust

Why Fortune 500 companies are blocking vendors — and how encrypted AI models win them back

 

Enterprises are blocking vendors from training AI on their private data. Learn how privacy-preserving tech like encrypted models can restore trust, win deals, and enable AI features.

 

> Read the full blog

 


DEF CON 33 - Exploiting Shadow Data in AI Models and Embeddings

Illuminating the dark corners of AI

 

This talk explores the hidden risks in apps leveraging modern AI systems — especially those using large language models (LLMs) and retrieval-augmented generation (RAG) workflows — and demonstrates how sensitive data, such as personally identifiable information (PII) and Social Security numbers, can be extracted through real-world attacks. We demonstrate model inversion attacks targeting fine-tuned models and embedding inversion attacks on vector databases, among others.
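
If "embedding inversion" sounds abstract, here's a toy Python illustration of the simplest form of the attack: someone who steals raw vectors from a vector database can embed candidate strings and keep the closest match. The embed function below is a hash-based stand-in (an assumption for the demo), so only exact guesses match; a real embedding model is semantic, which lets near-guesses score high too and makes trained decoders far more effective.

    import hashlib
    import math

    def embed(text: str, dim: int = 64) -> list[float]:
        # Deterministic stand-in for a real embedding model (demo only).
        raw = hashlib.sha256(text.lower().encode()).digest() * 3
        v = [b / 255.0 for b in raw[:dim]]
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    def cosine(a: list[float], b: list[float]) -> float:
        return sum(x * y for x, y in zip(a, b))

    # The attacker exfiltrates this vector from a breached vector DB...
    stolen = embed("ssn: 555-55-5555")

    # ...then embeds guesses and keeps the closest one.
    candidates = ["ssn: 555-55-5555", "meeting notes", "q3 revenue forecast"]
    best = max(candidates, key=lambda c: cosine(stolen, embed(c)))
    print("most likely content:", best)  # -> ssn: 555-55-5555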

 

> Watch the video

 

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, (303) 261-5067

Unsubscribe · Manage preferences