From Rust to Skibidi: Your AI Coding Forecast
- Lytical Ventures
- Oct 8
- 3 min read
Cyber Thoughts Newsletter
OCTOBER 2025
It’s the End of the Code as We Know It (And I Feel Fine)
—with apologies to R.E.M.
We tried to generate a Sora video of this very phrase—slow-motion montage of crumbling mainframes, dancing ASCII, maybe a weeping DevOps engineer—but the guardrails said no. Something about “content restrictions” and “copyright law.” We argued fair use, parody, and creative despair. Sora didn’t care. Sora is a cruel mistress.
Anyway…
Here’s a thought experiment for your AI-baked brain: what happens when most code isn’t written by humans anymore?
As LLMs take over more and more of the keyboard, the number of programming languages and frameworks will begin to collapse. Not because of some grand standards war, but because LLMs are just better at things they have more examples of. They’ve eaten the training data. They want the well-trodden paths.
Given that AI doesn’t much care about syntax or brevity, the languages that are ideal for humans may not be the ones that win. Could we nudge LLMs toward more secure languages and frameworks, making code more secure overall? Maybe memory safety becomes the norm. Maybe Rust wins. Maybe the industry finally gets the secure-by-default stack we’ve been chasing for decades.
Or maybe… we go even further.
What if, instead of relying on shaky probability models, we built software the way synthetic biologists build molecules: from verified, mathematically proven primitives? In biology, large language models trained on protein sequences—Protein Language Models—are already exploring vast spaces of possible molecules, combining amino acids like digital Lego blocks to discover new drugs. Imagine applying a similar idea to software: what if we could assemble secure operating systems from formally verified, Lego-like components? We’ll have to ask our smart friends, but the metaphor is intriguing.
Of course, even the smartest model still makes mistakes. (See the article below for a particularly “fun” one.) So yes, testing still matters. However, a new wave of tools is emerging—startups that test code, probe systems, and red-team them. And the next 24 months? As the kids say, they’re going to be lit. Skibidi toilet. 😭💀🤓
Lastly, if you appreciate our highlighted content, please follow us on Twitter and LinkedIn, where we regularly post about things worthy of attention.
What We're Reading
Here's a curated list of things we found interesting.
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
In case you were waiting for models to get a little better before trusting them, you should probably know that hallucinations are mathematically inevitable. Constant vigilance! Also, our jobs are safe for another month. ;)
In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.
Deepseek outputs weaker code on Falun Gong, Tibet, and Taiwan queries
You probably shouldn’t be trusting AI to code mission-critical applications anyway, but if you needed more of a reason, here it is.
A CrowdStrike study found that Chinese AI system Deepseek delivers less secure code when prompts involve politically sensitive topics, raising concerns about political bias in technical outputs.
Inside the Jaguar Land Rover hack: stalled smart factories, outsourced cybersecurity and supply chain woes
The UK government has had to step in to support the country’s biggest car manufacturer after a devastating cyberattack. We are furiously googling the term “Moral Hazard”. Must be nice to be too-big-to-fail.
Being a carmaker where ‘everything is connected’ has left JLR unable to isolate its plants or functions, forcing a shutdown of most systems.
Transactions
Deals that caught our eye.
Mitsubishi Electric to acquire Nozomi Networks in $1 billion deal
Industrial conglomerate Mitsubishi Electric has agreed to acquire OT and IoT cybersecurity specialist Nozomi Networks in a transaction that values the San Francisco-based firm near the $1 billion mark.
Podcasts
What we’re listening to.
AI Agents Can Write 10,000 Lines of Hacking Code in Seconds
Ever wondered what happens when AI agents start talking to each other—or worse, when they start breaking things? Ilia Shumailov spent years at DeepMind thinking about exactly these problems, and he's here to explain why securing AI is way harder than you think.
About Lytical
Lytical Ventures is a New York City-based venture firm investing in Corporate Intelligence, comprising cybersecurity, data analytics, and artificial intelligence. Lytical’s professionals have decades of experience in direct investing generally and in Corporate Intelligence specifically.