Greetings,
 
For my latest presentation, delivered last week at LASCon, I dug into the world of MCP servers. People are empowering LLMs with "tools" (arbitrary function calls) to boost productivity. When these things work, they're pretty amazing. But holy smokes is the security situation terrible.
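For context, here's a minimal sketch of what a "tool" looks like in this world: a plain function plus a machine-readable schema the model reads to decide when to call it. The names and schema shape below are illustrative, not taken from any particular MCP server.

```python
# Illustrative tool body -- in a real server this might call an external API.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# The description an MCP-style server advertises to the model (hypothetical shape):
get_weather_tool = {
    "name": "get_weather",
    "description": "Return current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# The security rub: the model, not your code, decides when this runs,
# and the arguments arrive as untrusted model output.
print(get_weather("Austin"))
```

Multiply that by dozens of tools with real side effects (file access, email, payments) and you can see why the attack surface gets scary fast.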
 
We'll share the video of that talk when it hits the tubes; I demo some attacks in it.
 
And if you're more interested in how data leaks out of AI systems without MCP servers, check out my DEF CON talk from August, which was just posted.
 
Finally, we keep hearing that the number one AI roadblock enterprise customers put up is the fear of their data being used to train models. It's the one prohibition that everyone is making. In my latest blog, I talk about how to build privacy-preserving models and how to adjust contract language to support an exception for models built in this way. Businesses can get the benefits of AI without throwing their data out the window.
 
And did I mention that IronCore has been recognized as a Gartner Cool Vendor in Data Security for our pioneering work protecting AI data?