Every time I see someone using multi-stage builds to "optimize" their image size, they're usually just moving the problem around. You end up with a brittle Dockerfile that copies artifacts between stages, and the moment your build context or directory structure changes, things break, sometimes silently: a COPY --from of a directory that still exists but no longer holds what you expect succeeds without complaint.
The real issue: people use them to avoid learning what actually belongs in a production image. I've debugged plenty of builds where stage 2 was missing runtime dependencies because someone copied a binary out of stage 1 and assumed it was self-contained. Then it works locally on the full Debian-based image and fails in prod, because the slim runtime image is missing shared libraries (alpine's musl vs. glibc is the classic case).
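As a sketch of that failure mode (image tags and paths here are illustrative, not from any real project): any native addon compiled during `npm ci` in a Debian-based builder links against glibc, and the resulting `.node` files may refuse to load under musl in the alpine stage.

```dockerfile
# Illustrative only: builder is Debian-based (glibc), runtime is alpine (musl).
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci            # node-gyp compiles native addons against glibc here

FROM node:18-alpine   # musl libc: the .node binaries built above may not load
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
```

The build succeeds either way; the mismatch only surfaces when the addon is first required at runtime, which is exactly the "works locally, fails in prod" pattern.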
Also the caching is deceptive. A RUN step is cached on the command string and the parent layer, not on what it actually fetched, so a cached builder stage can embalm stale artifacts and your COPY --from=builder will happily ship them. I've spent hours tracking down why my app behaved differently in CI versus local builds.
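One common mitigation is to force cache invalidation explicitly for the network-dependent steps (a sketch; the ARG name is arbitrary and you'd pass it with --build-arg):

```dockerfile
FROM node:18 AS builder
WORKDIR /app
# Changing CACHEBUST (e.g. --build-arg CACHEBUST=$(date +%s)) invalidates
# every layer from here down, so the install/build steps actually rerun.
ARG CACHEBUST=1
COPY . .
RUN npm ci && npm run build
```

It's a blunt instrument, but it makes the "rebuild everything" decision visible in the build command instead of hiding it in cache behavior.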
FROM node:18 AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM node:18-alpine
COPY --from=builder /app/dist ./dist
RUN npm ci --omit=dev
Looks clean until you try it: the second stage never copies package.json, so that final `npm ci --omit=dev` fails outright, and there's no WORKDIR, so dist lands in /. Just use a sensible base image and be explicit about what you're shipping.
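If you do keep the multi-stage layout, "explicit" means the stage that ships the app also installs the app's runtime dependencies. A sketch of that version (the dist/server.js entrypoint is a hypothetical path for a typical Node project):

```dockerfile
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci                      # full dependency tree, needed for the build
COPY . .
RUN npm run build

FROM node:18-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev           # runtime deps installed in the stage that ships them
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/server.js"]  # entrypoint path is illustrative
```

Running `npm ci` in the final stage also means any native addons are compiled against alpine's musl, which sidesteps the glibc mismatch entirely.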