We’ve drawn a clear engineering "line in the sand" for AI use. Here’s the policy, the why behind it, and what we’re refusing to trade for AI speed.

Greetings,

 

On this Data Privacy Day, have you thought about what AI is doing to the privacy of your data? It's not good. We have less control over our data than ever before, and that goes for individuals as well as businesses.

 

Lately I've found myself in a quandary. AI is extremely dangerous from a security and privacy perspective. It can be subverted and undermined easily, and even without a malicious actor it can go off and randomly pick bad paths to follow. Yet at the same time, these tools are too powerful to ignore. If you're not leveraging them, you're falling behind your peers who do.

 

Although I give a lot of talks warning people of AI dangers, that doesn't mean I'm not using the tools personally or at work. The trick is in figuring out how to leverage AI responsibly and, more importantly, when not to use AI.

 

Our current choices here at IronCore are the subject of my latest blog, AI Coding Agents: Our Privacy Line in the Sand, which covers our policy for coding agents and beyond. The thinking may be of use to some of you.

 

Beware your blind spots and how AI exacerbates them. Stay safe out there.


Patrick Walsh
CEO, IronCore 

 

 

Upcoming events:

  • ISSA Webinar: AI Data Blind Spots: What Security Professionals Need to Know
    • February 17, 1:00pm - 2:00pm ET (virtual)
    • Abstract: This webinar explores the hidden risks in apps leveraging modern AI systems, especially those using large language models (LLMs) and retrieval-augmented generation (RAG) workflows. The speaker will demonstrate how sensitive data, such as personally identifiable information (PII), can be extracted through real-world attacks. This includes techniques like model inversion attacks targeting fine-tuned models, and embedding inversion attacks on vector databases, which are key components in RAG architectures that supply private data to LLMs for answering specific queries.
  • IOPD Webinar: Hidden Risks of Integrating AI: Managing Data Proliferation and Leakage
    • March 20, 11:00am - 12:00pm ET (virtual)
    • Abstract: A discussion of the hidden risks in apps leveraging modern AI systems, especially those using large language models (LLMs), retrieval-augmented generation (RAG) workflows, and agentic workflows. We will be prepared to demonstrate how sensitive data, such as personally identifiable information (PII), can be extracted through real-world attacks such as a vector inversion attack. We will also be prepared to discuss or demonstrate how to prevent such attacks through the use of encryption and other PETs, plus the wise application of policy.

AI Coding Agents: Our Privacy Line in the Sand

How IronCore balances AI productivity with data protection

 
A breakdown of where IronCore draws the line on the use of generative AI and coding agents to ensure that private data stays private while still taking advantage of the productivity these tools can bring.

 

> Read the full blog

 


One Unchecked Box, One Billion Records: The Human Error Problem

The misconfiguration epidemic that training can’t fix 

 

Human errors as root causes of breaches have increased to record levels, and we have the stats and case studies to prove it. You can't eliminate mistakes, but you can design for them. Read what 17 billion exposed records taught us about resilience.

 

> Read the full blog

 


Hidden Risks of Integrating AI

Extracting Private Data with Real-World Exploits

 

This talk explores the hidden risks in apps leveraging modern AI systems and demonstrates how sensitive data, such as personally identifiable information (PII), can be extracted through real-world attacks. We dive into agentic systems and MCP servers as well as RAG workflows and vector inversions with a crash course in how AI works under the hood.

 

> Watch the video

 

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, (303) 261-5067

Unsubscribe Manage preferences