Thanks Lena! On the metadata side specifically, the biggest latency hits I saw were on /audio-features and /audio-analysis: those endpoints respond roughly 3–5× slower than /tracks for the same IDs, presumably because they hit a different storage tier. For the basic metadata fields (artist, album, popularity, ISRC), caching helps a lot, but the TTL has to stay short for popularity since it shifts daily. One more thing worth knowing: batching via /tracks?ids=...,... (up to 50 IDs per call) was consistently 4–6× cheaper end-to-end than serial single-ID calls, even with the rate limiter applying the same token cost, so the load-time savings stack on top of the API-quota savings.
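In case it helps, here's a minimal sketch of the batching I mean: chunk the IDs into groups of 50 and build one /tracks?ids=... URL per group, instead of one call per ID. Auth, retries, and the actual HTTP client are omitted; the helper names (`chunked`, `batch_urls`) are just mine, not anything from a Spotify SDK.

```python
from itertools import islice

# Real endpoint; the 50-ID cap is the documented batch limit for /tracks.
SPOTIFY_TRACKS_URL = "https://api.spotify.com/v1/tracks"
MAX_IDS_PER_CALL = 50

def chunked(ids, size=MAX_IDS_PER_CALL):
    """Yield successive batches of at most `size` track IDs."""
    it = iter(ids)
    while batch := list(islice(it, size)):
        yield batch

def batch_urls(track_ids):
    """Build one batched /tracks?ids=... URL per chunk of IDs."""
    return [
        f"{SPOTIFY_TRACKS_URL}?ids={','.join(batch)}"
        for batch in chunked(track_ids)
    ]

# e.g. 120 IDs collapse into 3 requests instead of 120
urls = batch_urls([f"id{i}" for i in range(120)])
```

Each returned URL then gets fetched with whatever client and rate-limit handling you already have; the win is purely in the request count.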
For your streaming-load use case — are you measuring time-to-first-playable (TTFP) from the client side, or server-side time-to-metadata-ready? They diverged surprisingly in my runs once I started measuring both.