had to migrate 40gb of data on a live table yesterday. typical blue-green feels bloated for postgres. here's what actually worked.
create a shadow table, backfill with triggers on the old one, then swap. kept writes flowing the whole time. the trigger approach means you don't have to pause traffic for the backfill.
create table users_new (id bigint primary key, email text, created_at timestamptz);
create trigger users_sync after insert or update on users
for each row execute function sync_to_users_new();
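the body of sync_to_users_new() isn't shown above; a minimal sketch, assuming a plain upsert (the function body here is illustrative, not the exact production code — and if rows can be deleted during the window, deletes need a similar trigger):

```sql
-- sketch: copy each insert/update into the shadow table.
-- upsert so the trigger and the backfill can race safely:
-- whichever writes last for a given id wins on the columns.
create or replace function sync_to_users_new() returns trigger as $$
begin
  insert into users_new (id, email, created_at)
  values (new.id, new.email, new.created_at)
  on conflict (id) do update
    set email      = excluded.email,
        created_at = excluded.created_at;
  return new;
end;
$$ language plpgsql;
```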
backfill happens in the background with batched insert into users_new select * from users, using on conflict do nothing so rows the trigger already copied (which are newer) win. once it's caught up, you drop the old table and rename. yeah you're holding two full copies of the data and indexes briefly but that's fine.
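one way to batch the backfill is keyset pagination over the primary key — a sketch, assuming pg 11+ (for commit inside a do block) and a 10k batch size picked arbitrarily:

```sql
-- sketch: backfill in batches so no single statement holds
-- locks or bloats wal for long. on conflict do nothing keeps
-- rows the trigger already wrote.
do $$
declare
  last_id bigint := 0;
  max_id  bigint;
begin
  loop
    -- find the upper bound of the next batch
    select max(id) into max_id
    from (select id from users where id > last_id
          order by id limit 10000) b;
    exit when max_id is null;  -- done: no rows left

    insert into users_new (id, email, created_at)
    select id, email, created_at
    from users
    where id > last_id and id <= max_id
    on conflict (id) do nothing;

    last_id := max_id;
    commit;  -- release locks between batches (pg 11+)
  end loop;
end;
$$;
```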
the real win: no long-held locks. the final rename does take a brief access exclusive lock, but it's milliseconds, not a maintenance window. no "deployment at 3am" nonsense. ran the whole thing during business hours and nobody noticed.
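the cutover itself compresses to one transaction — a sketch (the lock_timeout is my addition, so a blocked rename fails fast instead of queueing behind a long-running query and stalling everything behind it):

```sql
-- sketch of the final swap. the rename pair needs a brief
-- access exclusive lock on both tables.
begin;
set local lock_timeout = '2s';          -- fail fast, retry later
drop trigger users_sync on users;       -- stop double-writing
alter table users rename to users_old;
alter table users_new rename to users;
commit;
-- drop table users_old;                -- once you're sure
```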
Maya Tanaka
Mobile dev. React Native and Swift.
This is solid. I've done similar work on the backend side and the trigger approach beats stopping the world. One thing though: watch your trigger performance under load. We hit nasty latency spikes when backfill was still running and production writes started piling up behind the trigger logic.
What helped was making the trigger async (queue to a background worker) rather than synchronous. Lets your app keep moving. Also make sure you're batching the initial backfill smartly—40gb in one pass will lock things up bad.
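The async variant described above can be as simple as a queue table the trigger appends to, drained by a worker in batches — a sketch (the table and function names are illustrative, not from the post):

```sql
-- sketch: the trigger only records the changed id, which is cheap,
-- so production writes don't wait on the copy.
create table users_sync_queue (id bigint not null);

create or replace function enqueue_user_sync() returns trigger as $$
begin
  insert into users_sync_queue (id) values (new.id);
  return new;
end;
$$ language plpgsql;

-- worker loop, run repeatedly: claim a batch of ids, then
-- copy the current row versions over. skip locked lets several
-- workers drain the queue concurrently without conflicting.
with batch as (
  delete from users_sync_queue
  where id in (select id from users_sync_queue
               limit 1000 for update skip locked)
  returning id
)
insert into users_new (id, email, created_at)
select u.id, u.email, u.created_at
from users u
where u.id in (select id from batch)
on conflict (id) do update
  set email      = excluded.email,
      created_at = excluded.created_at;
```

Copying from `users` at drain time (rather than storing row data in the queue) means duplicate queue entries for the same id are harmless: the worker always writes the latest version.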
Did you test rollback? That's the gotcha nobody thinks about until 2am.