LLMs are chaos monkeys in the tightest feedback loop of the software development process.
Project managers who practice continuous improvement have spent decades finding ways to reduce cycle time and increase throughput. One of the key ways you do that is by reducing variability. Variability is the enemy of confidence and predictability. So when you tell someone who has built up a ton of experience removing risk that they need to use a tool that could bump their productivity 5X one day and be completely useless the next, you’ve got an oil and water situation. Alongside fear and ignorance, I think this is one of the biggest reasons people give up on digging deeper into AI tools.
But, just like any new tool, you’ll use it inefficiently until you become adept at the techniques for using it well. We embraced the uncertainty and discomfort, and we’re now pushing new ideas for how to use these tools in ways that reduce variability in the results. Honestly, I haven’t heard many other people talking about it this way. There’s a lot of prompt engineering and shiny-tool syndrome going around, but I’m looking to connect with people who are really digging into using these tools more deeply. Please tag someone you know who’s pushing the envelope in this space. I want to connect.