Great writeup! Polars' lazy evaluation is a game-changer for memory-constrained pipelines. One tip from my experience: combining scan_csv() with sink_parquet() lets you process files larger than RAM without ever loading them fully. For recurring ETL jobs, I've found Polars + DuckDB to be an incredibly powerful combo: DuckDB can query Parquet files directly, so you get the best of both worlds.