We're building an AI agent orchestration platform using Claude (Coworker) for code generation paired with local builds and iteration. Our current workflow:
1. Feature planning in Claude (conversational)
2. Code generation in Coworker (full repo context)
3. Local build and testing
4. Push to GitHub
5. Manual PR review + merge
The bottleneck is step 5: we're doing all PR review and merging manually ourselves, and it's slowing down iteration.
Options we're considering:
- Hire a part-time senior dev (GCP + FastAPI) to handle all PR reviews/merges
- Automate more of the review process with tools
- A combination of both
Our stack: FastAPI (Python), GCP (Cloud SQL, Firestore, Vertex AI), five microservices, strict architectural rules (boundary integrity, state sync, cold start mitigation).
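For context on where we think automation could help: several of our architectural rules are mechanically checkable before a human ever looks at a PR. A minimal sketch of a boundary-integrity gate we could run in CI — the `services/<name>/` monorepo layout and the service names here are hypothetical placeholders, not our actual repo:

```python
# Minimal sketch of a boundary-integrity check for CI. Assumes a hypothetical
# monorepo layout of services/<name>/... -- adjust SERVICES and paths to match
# the real repo. Parses each service's Python files and flags direct imports
# of another service's internals.
import ast
import pathlib

SERVICES = {"auth", "billing", "ingest", "orchestrator", "reporting"}  # hypothetical names

def violations(repo_root: str) -> list[str]:
    """Return one message per cross-service import found under repo_root."""
    problems = []
    root = pathlib.Path(repo_root)
    for service in sorted(SERVICES):
        service_dir = root / "services" / service
        if not service_dir.is_dir():
            continue
        for py in service_dir.rglob("*.py"):
            tree = ast.parse(py.read_text(), filename=str(py))
            for node in ast.walk(tree):
                # Collect dotted module names from both import forms.
                if isinstance(node, ast.Import):
                    names = [alias.name for alias in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    names = [node.module]
                else:
                    continue
                for name in names:
                    top = name.split(".")[0]
                    if top in SERVICES and top != service:
                        problems.append(
                            f"{py}: '{service}' imports '{name}' from another service"
                        )
    return problems
```

Wired into a CI step that fails the build on a non-empty result, something like this would catch boundary violations in LLM-generated diffs without a human in the loop; for richer import contracts there are also off-the-shelf tools like import-linter.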
Questions:
- How are other teams handling AI-generated code review at scale?
- Which tools are worth looking at (GitHub Actions, SonarQube, etc.)?
- Is hiring a part-time architect/reviewer the move, or should we invest in automation first?
- Is anyone else doing this with Claude/LLM-generated code?
Curious what the community recommends.