Greetings,
This is the time of year when those of us on calendar-year planning and budgeting cycles think about what we need to do next. I see two important themes that I hope all security teams and appsec engineers are thinking about:
- Proactive and preventive security approaches are a long-term investment that pays off better than reactive ones. For data security, that means application-layer encryption.
- A plan for managing the risks that come with AI adoption, plus budget and time to tackle those risks. Because adoption is coming with or without security, and it behooves you to make it "with."
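To make the first point concrete, here's a minimal sketch of what application-layer encryption looks like in practice: the application encrypts a sensitive field before it ever reaches the database, so a database compromise alone doesn't expose plaintext. This uses the third-party `cryptography` package's Fernet API; the key handling, field names, and record shape are illustrative placeholders, not a production design.

```python
# Minimal application-layer encryption sketch (illustrative, not production).
from cryptography.fernet import Fernet

# In practice, the key comes from a KMS or secrets manager, not inline generation.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_user(email: str) -> dict:
    """Encrypt the email in the application layer; only ciphertext is persisted."""
    return {"email_enc": cipher.encrypt(email.encode())}

def read_user(record: dict) -> str:
    """Decrypt on read, inside the application."""
    return cipher.decrypt(record["email_enc"]).decode()

record = store_user("alice@example.com")
print(read_user(record))  # round-trips through ciphertext
```

In a real deployment you'd also think about key rotation and searchability of encrypted fields, but the core idea is just this: encrypt before the data leaves your application's trust boundary.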
At each of the last two conferences I attended, there were quite a few talks on AI security. But each also featured a talk or keynote from a security person whose position was essentially, "I don't care about the security implications, I'm all in on using all the AI tools." The first was "AI All the Things" by Marcus Carey, and the second was "The Future of AppSec is Continuous Context" by Daniel Miessler, whose CV includes "Head of BI, InfoSec" at Apple.
Daniel's talk in particular made a big impression on me. He's left the corporate world to focus on AI, using it to conduct pen-tests, among other things. He's also fed his entire life into Claude Code, and he's getting some incredible results. But he's showing a disregard for the privacy of his personal data.
I'll talk more in the future about how to achieve what he's doing in more privacy-preserving ways, and I've previously discussed how to evaluate the security of vendors with AI features. But my most recent blog post dives into MCP servers (a critical component in these digital-assistant setups) and how to think about them. Are they secure? For the moment, let's assume they are. Are they safe? Almost certainly not. Buyer beware.