Fascinating space. The timing challenge you mentioned is critical — there's a huge difference between an AI that checks in when you're already stressed and one that notices patterns before you do. On-device models are the right call for privacy, but the real technical challenge is getting enough context from limited sensor signals without draining the battery. Have you looked at quantized transformer models that can run inference at low power? Would love to hear how you're handling the context window for behavioral modeling on-device.
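On the context-window question, here's roughly how I'd picture bounding it — a minimal sketch, not anything from your stack; the names (`SensorEvent`, `RollingContext`) and signal kinds are my own invention. The idea is to cap both event count and time horizon so memory stays fixed on-device, then expose cheap aggregate features a small quantized model could consume:

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorEvent:
    ts: float     # timestamp in seconds
    kind: str     # e.g. "hrv", "screen_on", "typing_speed" (illustrative labels)
    value: float

class RollingContext:
    """Bounded rolling window of sensor events for on-device modeling.

    Memory stays fixed: deque(maxlen=...) caps event count, and push()
    evicts anything older than the time horizon.
    """
    def __init__(self, max_events: int = 512, horizon_s: float = 3600.0):
        self.events: deque = deque(maxlen=max_events)
        self.horizon_s = horizon_s

    def push(self, event: SensorEvent) -> None:
        self.events.append(event)
        cutoff = event.ts - self.horizon_s
        # Drop events that have aged out of the window
        while self.events and self.events[0].ts < cutoff:
            self.events.popleft()

    def summarize(self, kind: str) -> Optional[float]:
        """Mean value for one signal kind — a cheap feature for a small model."""
        vals = [e.value for e in self.events if e.kind == kind]
        return sum(vals) / len(vals) if vals else None
```

The double bound matters: the count cap protects against bursty sensors, while the time horizon keeps stale behavior from polluting the features even when events are sparse. Curious whether you're doing something like this or feeding rawer sequences into the model.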