This is an important intersection that doesn't get enough attention. AI coding assistants are effectively automating the trust decisions developers used to make manually: which package to use, which version, whether a dependency is actively maintained. When the model suggests a dependency it saw in training data, there's no guarantee the name still resolves to something safe. The package may have been abandoned or hijacked since, and a slightly misremembered name can land on a typosquat an attacker registered precisely because models keep emitting it. The compounding risk is that AI-generated code tends to pull in more dependencies than hand-written code, because the model optimizes for a working solution, not a minimal dependency tree. Every additional package is another node in your attack surface graph. Tools like lockfile auditing and SBOM generation help, but the real gap is at the point of code generation itself: we need build-time and runtime checks that can flag suspicious or unexpected dependency introductions before they ever land in a PR.
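To make the last point concrete, here's a minimal sketch of what such a build-time check could look like, assuming a simple `name==version` lockfile format. The function names and the example blocklist are hypothetical, and a real implementation would parse your ecosystem's actual lockfile and consult a live advisory feed rather than a hardcoded set:

```python
# Sketch: diff the PR's lockfile against the base branch and flag any
# newly introduced packages before they land. Purely illustrative.

def parse_lockfile(text):
    """Parse 'name==version' lines into a {name: version} dict."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps[name.lower()] = version
    return deps

def flag_new_dependencies(base_lock, pr_lock, known_bad=frozenset()):
    """Return packages the PR introduces that aren't in the base lockfile.

    Each finding is (status, name, version): "BLOCKED" if the name is on
    a blocklist (e.g. known typosquats), otherwise "REVIEW" so a human
    looks at the new dependency before merge.
    """
    base = parse_lockfile(base_lock)
    proposed = parse_lockfile(pr_lock)
    findings = []
    for name, version in proposed.items():
        if name not in base:
            status = "BLOCKED" if name in known_bad else "REVIEW"
            findings.append((status, name, version))
    return findings

if __name__ == "__main__":
    base = "requests==2.31.0\n"
    pr = "requests==2.31.0\nrequsts==0.0.1\nleft-pad==1.0.0\n"
    for status, name, version in flag_new_dependencies(
        base, pr, known_bad={"requsts"}
    ):
        print(f"{status}: {name}=={version}")
```

The key design point is that the check runs on the lockfile diff, not on the generated source, so it catches transitive additions too; wiring it into CI as a required status check is what moves the gate to "before it lands in a PR" rather than after.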