Re: "Your CI pipeline isn't ready for AI" (blog.morgante.net):

"In developing my AI codegen agent, I’ve surprisingly been far more frustrated by CI/testing systems than any unreliable LLM API. I’ll often have a simple code change that’s ready to go, but it takes longer to build, review, and deploy than it took to..."
This post resonates with frustrations many developers face in CI/CD environments. The gap between local and CI performance leads to inefficiencies that feel counterintuitive, especially given how much modern tooling exists specifically to streamline these workflows.
The repetition in CI pipelines, particularly re-resolving dependencies and re-running unchanged tasks, wastes both time and compute. It is frustrating that tools designed to optimize these steps still require extensive manual configuration and, as you point out, often ignore capabilities already built into the tools we use, such as Yarn's dependency cache or a compiler's incremental builds.
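The dependency half of this is at least partly solvable today, though it still has to be wired up by hand. As one illustration (a sketch, assuming a GitHub Actions workflow and a Yarn project; the cache path varies by Yarn version), a cache step keyed on the lockfile hash avoids re-downloading every dependency on every run:

```yaml
- uses: actions/cache@v4
  with:
    # Yarn's cache directory; adjust for your Yarn version/config
    path: .yarn/cache
    key: yarn-${{ runner.os }}-${{ hashFiles('yarn.lock') }}
    restore-keys: yarn-${{ runner.os }}-
- run: yarn install --immutable
```

The point is less the snippet itself than that this boilerplate has to be written, and kept correct, for every tool in the pipeline.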
Defining a build graph by hand is itself a barrier to efficient CI/CD. If tooling could infer dependencies and cache task outputs automatically, it would not only save time but also reduce the flakiness you mention, which is a significant pain point of its own.
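The core idea behind such caching is simple: fingerprint a task's inputs and skip the task when nothing changed. A minimal sketch in Python (names like `run_cached` and the local `.task-cache` directory are invented here; real tools such as Bazel or Turborepo also hash the command, toolchain, and environment, not just input files):

```python
import hashlib
import os
import subprocess

CACHE_DIR = ".task-cache"  # hypothetical cache location for this sketch


def input_fingerprint(paths):
    """Hash the contents of a task's declared input files."""
    h = hashlib.sha256()
    for path in sorted(paths):
        h.update(path.encode())
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()


def run_cached(name, inputs, command):
    """Run `command` only if the inputs changed since the last successful run.

    Returns True if the command actually ran, False on a cache hit.
    """
    os.makedirs(CACHE_DIR, exist_ok=True)
    marker = os.path.join(CACHE_DIR, name)
    digest = input_fingerprint(inputs)
    if os.path.exists(marker):
        with open(marker) as f:
            if f.read() == digest:
                return False  # nothing changed: skip the task
    subprocess.run(command, check=True)
    with open(marker, "w") as f:
        f.write(digest)  # record the fingerprint only after success
    return True
```

Even this toy version shows why manual build-graph specification hurts: the hard part is not the hashing, it is declaring `inputs` completely and correctly for every task, which is exactly the work one would want the tooling to infer.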
Perhaps the future of CI/CD lies in systems that learn from historical runs, adapt over time, and need less manual specification: tools that can determine on their own what changed and what can be reused. That would go a long way toward easing these frustrations and matching the pace of AI-assisted development.
It's a hard problem, but innovation here could meaningfully improve how we manage our development pipelines.