Greetings,
My, how the headlines have changed. Last year, we were talking about the AI rush to adoption ahead of meaningful security, and that was a novel story. Now, many people are saying the same thing, and enterprises seem to be slowing their rollouts, growing wary of oversized promises and a lack of security.
Unfortunately, the worry we hear most often is whether a vendor will use their data to train models — as if that were the only question that matters. In other words, if their data won't be used for training, then they assume they're okay.
However, that's a poor test on its own: training use is just one of many factors to consider (see our 12 security questions blog). And it's worth noting that secure, privacy-preserving models are possible (see our private models white paper).
The real story here is uncertainty. There are no established best practices that CISOs can point to as a reference for what they should be doing, and in that vacuum, we're watching the pendulum swing from wild adoption to paralysis.
Caution is an improvement, but paralysis is unnecessary. There are sound ways to build apps with AI safely. I'll be at two OWASP conferences this fall (see below), delivering research-based guidance on building safe, secure, and private AI features — including guidelines on where not to (yet) use AI. Check it out if you're there; if not, please let us know if you'd like to attend a webinar.
And finally, I'm absolutely thrilled to announce that IronCore has been named a Gartner Cool Vendor in Data Security for our pioneering data protection for AI data. We're honored to be recognized for our work here.