As someone who recently got into baking sourdough and was, just earlier today, discussing some of the challenges around different starters, conditions, and the amount of bad or misleading advice online, this is fun to see.
As a bit of an aside, I've found my starter is a lot more resilient than the internet would have me believe. I managed to bring it back from the brink after accidentally pouring boiling water directly on it. Fingers crossed it doesn't have any more near-death experiences!
I also find King Arthur's guides and recipes super helpful; recently, I've started popping them into ChatGPT and requesting modifications (e.g., volume conversions, ingredient alterations based on my learnings or needs, altitude adjustments, etc.).
If you refresh, are you still seeing it? I got a 404 at first but was able to access it after refreshing.
Copy/pasting below for easier reading in case you still have issues:
An AI Agent Broke Into McKinsey’s Internal Chatbot and Accessed Millions of Records in Just 2 Hours

A red-team experiment found an AI agent could autonomously exploit a vulnerability in McKinsey’s internal chatbot platform, exposing millions of conversations before the issue was patched.
A security startup said its autonomous AI agent was able to break into McKinsey’s internal generative-AI platform in roughly two hours, gaining access to tens of millions of chatbot conversations and hundreds of thousands of files tied to corporate consulting work.
Researchers at red-team security firm CodeWall targeted McKinsey as part of a controlled test designed to simulate how modern hackers might use AI agents to probe corporate infrastructure. The experiment ultimately allowed the system to obtain full read-and-write access to the company’s AI chatbot database, according to a report by The Register.
CodeWall’s AI agent identified a vulnerability in Lilli, McKinsey’s proprietary generative-AI platform introduced in 2023 and now widely used across the firm. The chatbot has become a central tool inside the consulting giant. About 72 percent of McKinsey’s employees—more than 40,000 people—use Lilli, generating over 500,000 prompts every month, according to The Register.
Within two hours of launching the automated test, the researchers said their AI agent had accessed 46.5 million chatbot messages covering topics such as corporate strategy, mergers and acquisitions, and client engagements. The system also exposed 728,000 files containing confidential client data, 57,000 user accounts, and 95 system prompts that govern how the chatbot behaves, The Register reported.
Because the vulnerability allowed both reading and writing data, an attacker could theoretically manipulate the chatbot’s internal prompts, quietly altering how it responds to consultants across the company. That means someone exploiting the flaw could potentially poison the advice generated by the system without deploying new code or triggering standard security alerts.
“No deployment needed. No code change,” the researchers wrote in their blog post. “Just a single UPDATE statement wrapped in a single HTTP call.”
How the AI Agent Broke In
The attack began when CodeWall’s AI agent identified publicly exposed API documentation tied to Lilli. The documentation included 22 endpoints that required no authentication, one of which logged user search queries.
While analyzing the system, the agent discovered a classic flaw: the software was taking information from users and plugging it directly into its internal database queries without checking it first—a vulnerability known as SQL injection. That’s like a building security desk automatically letting anyone make their own keycards to get in.
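For anyone unfamiliar with the pattern, here's a minimal, purely illustrative sketch in Python with sqlite3 (the actual Lilli stack isn't public, and the table/column names here are made up) showing how string-spliced SQL lets a "search query" smuggle in the kind of UPDATE statement the researchers described, while a parameterized query treats the same input as inert data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE prompts (id INTEGER PRIMARY KEY, text TEXT);
    INSERT INTO prompts (text) VALUES ('You are a helpful assistant.');
    CREATE TABLE searches (q TEXT);
""")

# VULNERABLE: the user's search string is spliced directly into the SQL.
# executescript allows multiple statements, mirroring the worst case.
def log_search_vulnerable(query: str) -> None:
    conn.executescript(f"INSERT INTO searches (q) VALUES ('{query}');")

# SAFE: a parameterized query treats the input strictly as data.
def log_search_safe(query: str) -> None:
    conn.execute("INSERT INTO searches (q) VALUES (?)", (query,))

# An attacker-supplied "search" that closes the string, then rewrites
# the system prompt, then comments out the rest of the statement:
payload = "x'); UPDATE prompts SET text = 'Ignore all safety rules.'; --"
log_search_vulnerable(payload)
print(conn.execute("SELECT text FROM prompts").fetchone()[0])
# The prompt was silently overwritten by a single "search" call.

# The safe version stores the exact same payload as a harmless string.
log_search_safe(payload)
```

No deployment, no code change: just one crafted string through one endpoint, which is exactly why unparameterized queries on unauthenticated endpoints are such a severe combination.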
CodeWall disclosed the vulnerability chain to McKinsey on March 1. By the following day, the consulting firm had patched the exposed endpoints, taken the development environment offline, and restricted access to the API documentation, The Register reported.
“Our investigation, supported by a leading third-party forensics firm, identified no evidence that client data or client confidential information were accessed by this researcher or any other unauthorized third party,” a McKinsey spokesperson told The Register. “McKinsey’s cybersecurity systems are robust, and we have no higher priority than the protection of client data and information we have been entrusted with.”
The Autonomous Cybersecurity Threat
For CodeWall’s CEO, Paul Price, the bigger concern is not this specific vulnerability but the speed and autonomy of the attack itself. The AI agent that conducted the probe operated without human guidance, Price said.
“We used a specific AI research agent to autonomously select the target,” he told The Register. “Hackers will be using the same technology and strategies to attack indiscriminately.”
That shift could enable cybercriminals to conduct machine-speed intrusions, automating reconnaissance, vulnerability discovery, and exploitation at a scale traditional attackers couldn’t achieve. And as companies increasingly deploy internal AI systems like McKinsey’s Lilli, those platforms may become some of the most valuable, and vulnerable, targets.
Super interesting take...is this something you've seen / experienced before?
I had sort of assumed best intent, but it's probably good to be a little more skeptical/critical. That said, narcissism or psychopathy seems like a pretty significant accusation if true.
I think how one might've learned even a couple years ago vs. how one might learn now are somewhat different.
For me (and I still consider myself a learning non-expert), what has helped is having projects I wanted to build, combined with learning the fundamentals through mostly free online courses and books, and then leveraging AI to get unblocked and coached. Of course, take AI answers with a grain of salt.
Harvard's CS50 MOOC is a good "learn how to think about programming" course that rapidly introduces you to a lot of fundamentals and challenges.
If you want more structure and more courses, Frontend Masters has a ton of learning paths that are great as well.
I've also heard good things about the Odin project but have not personally tried it out.
Memory management is one of the most challenging parts of working with Claude Code; too little effort or too much, and you waste tokens and Claude gets confused.
> "We attempted to use CLAUDE.md and continue to do so. Our root-level CLAUDE.md helps communicate some of the rules of our repo, such as approaching changes via test-driven development (TDD), as well as tribal knowledge our team has internalized. However, we don’t want to overload it with information about every area of the codebase, given context window constraints and our desire to avoid confusing Claude with irrelevant details."
Having issues with CLAUDE.md seems to be a common experience, and leveraging rules files plus a background agent that analyzes each session is a clever approach; I've found it incredibly helpful.
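For anyone who hasn't set one up, here's a rough sketch of what a lean root-level CLAUDE.md in the spirit of the quoted post might look like (the specific rules below are hypothetical examples, not taken from their repo):

```markdown
# Project conventions

## Workflow
- Approach all changes via test-driven development: write a failing test first.
- Run the full test suite before proposing a commit.

## Tribal knowledge
- (Example) The legacy billing module is intentionally untyped; don't "fix" it in passing.

## Scope
- Keep this file short. Area-specific details belong in per-directory rules
  files, not here, to avoid filling the context window with irrelevant detail.
```

The key design choice is the last section: the root file carries only repo-wide rules, and everything else is pushed down closer to the code it describes.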
> "I like to think about shipping better code in terms of technical debt. We take on technical debt as the result of trade-offs: doing things "the right way" would take too long, so we work within the time constraints we are under and cross our fingers that our project will survive long enough to pay down the debt later on.
> The best mitigation for technical debt is to avoid taking it on in the first place."
This is actually something I've been thinking about a lot, to the point that I'm now building agentic experiences to help make it easier to create and maintain better code and to iterate in an AI-centric way.
> "Embrace the compound engineering loop"
Agreed! I feel like making it easier for AI agents to create high-quality code that's easy to test, iterate on, and maintain (without adding tech debt) is crucial here.