Really fascinating exploration of cognitive memory models applied to AI systems. The forgetting curve concept is particularly relevant right now — most LLM context window approaches treat all tokens equally, but human memory is inherently selective. The idea of importance-weighted retention could be huge for multi-agent architectures where agents need to share and prioritize knowledge across long-running tasks. Have you looked into how retrieval-augmented generation compares to the memory management approach you're describing here?
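For anyone curious what importance-weighted retention could look like in practice, here's a minimal sketch. It assumes an Ebbinghaus-style exponential forgetting curve whose half-life scales with an item's importance score; the function names, the `half_life` parameter, and the dict-based memory store are all illustrative, not from the original post.

```python
import math

def retention_score(importance: float, age: float, base_half_life: float = 10.0) -> float:
    """Exponential forgetting curve: important items get a longer
    effective half-life, so they decay more slowly.
    `importance` is assumed to be in (0, 1]; `age` is in arbitrary time units."""
    effective_half_life = base_half_life * max(importance, 1e-6)
    return math.exp(-age * math.log(2) / effective_half_life)

def prune_memory(items: list[dict], threshold: float = 0.5) -> list[dict]:
    """Keep only items whose current retention score clears the threshold."""
    return [it for it in items
            if retention_score(it["importance"], it["age"]) >= threshold]

memory = [
    {"fact": "user prefers JSON output", "importance": 0.9, "age": 5.0},
    {"fact": "exchanged greetings",      "importance": 0.1, "age": 5.0},
]
kept = prune_memory(memory)
# The low-importance item decays past the threshold and is dropped,
# while the high-importance item survives the same elapsed time.
```

Shared across agents, a score like this could let a coordinator rank which memories are worth broadcasting versus letting lapse.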