RSA, AI, and the Return of Old Ideas
Cyber Thoughts Newsletter
APRIL 2026
RSA was surprisingly vibrant this year. There were plenty of reasons to expect attendance to be down: TSA chaos, declining foreign travel to the US, a new RSAC CEO not favored by the current administration, government budget cuts, and the ever-present risk that San Francisco finally breaks off and sinks into the ocean due to its loose morals and tax policy.
And yet, despite all that, RSA had a strong showing. Our portfolio companies reported solid traffic and, more importantly, customers who were actually eager to try new technology.
Maybe this AI thing has legs.
AI Drives Spend, Not Savings
A hotly debated topic at the conference was the impact of AI on cybersecurity. A popular narrative was that AI will reduce the need for security tooling.
We think the opposite. Let’s break it down.
More code → more attack surface.
More non-coders shipping code → more mistakes.
More attack surface + more mistakes → bad.
Throw in that no one trusts the systems generating that code to secure themselves, and you have a recipe for increasing, not decreasing, spend.
The solution is to trust, but verify.
In another example of Jevons Paradox, making it easier to write code has increased the demand for programmers.
Definition: Jevons Paradox
“When technological improvements make a resource more efficient to use, total consumption of that resource can actually increase rather than decrease.”
Here we see Jevons Paradox at work in software jobs, which have begun to rebound.
This dynamic will hold for cybersecurity as well. Increased efficiency and automation will not reduce security spend. They will expand it.
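The definition above can be made concrete with a toy elastic-demand model (the numbers and the demand curve are illustrative assumptions, not figures from this newsletter). When demand for a resource is elastic enough, halving its per-unit cost more than doubles consumption, so total spend goes up:

```python
def total_consumption(cost_per_unit: float, elasticity: float = 1.5,
                      base_demand: float = 100.0) -> float:
    """Toy demand curve: demand = base * cost^(-elasticity).

    With elasticity > 1, demand grows faster than cost falls, so
    total spend (cost * demand) rises as the resource gets cheaper.
    This is the mechanism behind Jevons Paradox.
    """
    return base_demand * cost_per_unit ** -elasticity

spend_before = 1.0 * total_consumption(1.0)   # 100 units at full cost
spend_after = 0.5 * total_consumption(0.5)    # per-unit cost halves
print(spend_before, spend_after)              # total spend still rises
```

Swap in "cost of writing code" for the resource and the newsletter's claim falls out: cheaper code means more code, more demand for people and tools around it, not less.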
Cybersecurity is not a bottleneck to AI.
It is the enablement layer that will make AI adoption possible.
Are LLMs the new Fuzzers?
If you are old enough, and nerdy enough, you remember when fuzzers took the security world by storm.
Definition: Fuzzer
“A fuzzer is a tool that breaks your software by bombarding it with inputs no sane user would ever try.”
Fuzzers unlocked a wave of new vulnerabilities. A generation of researchers built reputations finding bugs at scale. And then, over time, the novelty faded.
Organizations started fuzzing their own systems. The easy bugs disappeared. The tool didn’t go away; it just became standard.
We think LLM-based bug finding will follow the same path.
Today, these systems are uncovering swaths of vulnerabilities and generating tons of excitement. Early adopters will build names on top of this wave.
But eventually, this will become table stakes.
And we will return to equilibrium.
What was old is new again. Now with AI.
Putting Something in Front of It
This all feels familiar.
The last major wave of security innovation was simple:
Put a proxy in front of it.
[Chart: Cloud Reverse Proxy Security Companies]
We’ve seen this playbook before.
Put something in front of the system.
Make it smarter over time.
Sell it as security!
Now Add AI!
We’re starting to see hints of the next version of this playbook.
It’s not quite “put a proxy in front of it.” More like “put an AI in front of the decision.”
We’re not there yet…
But give it a few years, and someone will try to put AI in front of everything.
Final Thought
AI will not eliminate cybersecurity.
It will increase the amount of software, increase the number of mistakes, and increase the need for systems to monitor and defend it all.
Which means more tools.
More layers.
More spend.
Not less.
Lastly, if you appreciate our highlights and heresies, follow us on Twitter and LinkedIn; we post regularly about real things worthy of your attention.
What We're Reading
Here's a curated list of things we found interesting.
How a Poisoned Security Scanner Became the Key to Backdooring LiteLLM
Security tools being the supply-chain security risk is exactly the sort of dystopian future we’ve come to expect.
On March 24, 2026, two versions of the litellm Python package on PyPI were found to contain malicious code. The packages were published by a threat actor known as TeamPCP after they obtained the maintainer's PyPI credentials through a prior compromise of Trivy, an open source security scanner used in LiteLLM's CI/CD pipeline.
Anthropic took down thousands of GitHub repos trying to yank its leaked source code
In an effort to claw back the source code it accidentally released, Anthropic caused a large number of legitimate GitHub repositories to be taken down. “Oops, our bad.”
On Tuesday, a software engineer discovered that Anthropic had, seemingly by accident, included access to the source code for the category-leading Claude Code command line application in a recent release.
Cascade of A.I. Fakes About War With Iran Causes Chaos Online
Information has always been a tool of war; this is just the latest incarnation of combatants trying to sway public opinion. But now with AI.
A torrent of fake videos and images generated by artificial intelligence has overrun social networks during the first weeks of the war in Iran.
Transactions
Deals that caught our eye.
Wiz Joins Google Cloud as Landmark Acquisition Closes
Google has completed its $32 billion acquisition of the cloud security giant, which will maintain its brand. The landmark deal officially closed on March 11, 2026.
Podcasts
What we’re listening to.
Security Cryptography Whatever: AI Finds Vulns You Can’t With Nicholas Carlini
AI is getting very good at finding bugs, but not necessarily the ones you care about. We’re shifting from a world where discovery was scarce to one where judgment is the bottleneck.
About Lytical
Lytical Ventures is a New York City-based venture firm investing at the intersection of Cybersecurity and AI. We aim to be the most connected, most helpful team for founders, investors, and anyone else who cares about cybersecurity and its adjacencies.