I treat consent as a process, not a checkbox. I explain what the system does, what data it needs, what risks exist, and what alternatives or choices users have.
Account data: email, age range, language preferences.
Usage data: feature use, timestamps, device type.
Health-related inputs (user-provided): mood check-ins, journal text, symptom questionnaires.
Conversation data (if a chatbot is used): messages, safety flags.
Optional signals (only if needed): sleep/activity indicators from wearables, opt-in only.
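As a minimal sketch, the categories above could be tracked in an explicit consent record, with every category, and especially the optional wearable signals, defaulting to opted out. All names here are illustrative, not from any particular system:

```python
from dataclasses import dataclass

# Hypothetical consent record covering the data categories above.
# Every category defaults to False: nothing is collected without opt-in.
@dataclass
class ConsentRecord:
    account_data: bool = False        # email, age range, language preferences
    usage_data: bool = False          # feature use, timestamps, device type
    health_inputs: bool = False       # mood check-ins, journal text, questionnaires
    conversation_data: bool = False   # chatbot messages, safety flags
    wearable_signals: bool = False    # sleep/activity indicators (opt-in)

    def allows(self, category: str) -> bool:
        """Return True only if the user has opted in to this category."""
        return getattr(self, category, False)
```

Keeping the record explicit makes "consent as a process" checkable in code: any collection path can ask `consent.allows("wearable_signals")` before touching that data.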
I collect data only for clearly stated purposes:
To personalize support and usability.
To detect safety risks and escalate when needed.
To evaluate quality and reduce harmful outputs.
To audit performance across groups for fairness.
I store data in secure systems, restrict access by role, and log access events to trace activity if issues occur.
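The role-restricted access with logged access events described above could be sketched like this. The role-to-category map and function names are assumptions for illustration, not a real system's API:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical role-to-permission map; a real system would load this
# from configuration and review it regularly.
ROLE_PERMISSIONS = {
    "clinician": {"health_inputs", "conversation_data"},
    "support": {"account_data"},
    "analyst": {"usage_data"},
}

def read_record(user_role: str, category: str, record_id: str) -> str:
    """Check the role before access, and log every attempt (allowed or not)
    so activity can be traced if issues occur."""
    allowed = category in ROLE_PERMISSIONS.get(user_role, set())
    audit_log.info(
        "access attempt: role=%s category=%s record=%s allowed=%s at=%s",
        user_role, category, record_id, allowed,
        datetime.now(timezone.utc).isoformat(),
    )
    if not allowed:
        raise PermissionError(f"{user_role} may not read {category}")
    return f"<{category} data for {record_id}>"  # placeholder payload
```

Logging denied attempts as well as granted ones is what makes the audit trail useful: anomalies show up even when the access check held.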
I do not share data with third parties except essential infrastructure providers.
I define retention periods up front. I delete or de-identify data when it is no longer needed for the stated purpose, and I document retention rules.
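The retention rule above, keep within the stated period, then delete or de-identify, can be sketched as a pure decision function. The periods and the de-identify choice are placeholder assumptions, not recommended values:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per category, documented up front.
RETENTION = {
    "usage_data": timedelta(days=90),
    "health_inputs": timedelta(days=365),
}

def retention_action(category, collected_at, now=None):
    """Return 'keep' within the stated period, otherwise the documented
    end-of-retention action ('delete' or 'de-identify')."""
    now = now or datetime.now(timezone.utc)
    period = RETENTION.get(category)
    if period is None:
        return "delete"  # no stated purpose, so do not keep it
    if now - collected_at <= period:
        return "keep"
    # Past retention: de-identify only where aggregate use is still justified.
    return "de-identify" if category == "usage_data" else "delete"
```

Making the rule a function keeps retention auditable: the same logic that runs the cleanup job can be printed in the documentation.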
Withdraw consent: I stop future collection for that feature.
Access: I provide a copy or summary of stored data (where feasible).
Correct: I fix inaccuracies when users report them.
Delete: I support deletion requests unless legal or safety rules require limited retention.
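The four rights above could be routed through a single dispatcher. This is a sketch under stated assumptions, an in-memory dict store and a boolean legal-hold flag; a real service would verify identity and consult actual legal/safety rules first:

```python
def handle_request(kind, store, user_id, corrections=None, legal_hold=False):
    """Dispatch a user rights request: withdraw, access, correct, or delete.
    `store` is a toy in-memory mapping of user_id -> record."""
    record = store.setdefault(user_id, {"consented": True, "data": {}})
    if kind == "withdraw":
        record["consented"] = False  # stop future collection for this feature
        return "future collection stopped"
    if kind == "access":
        return f"copy of stored data: {record['data']}"
    if kind == "correct":
        record["data"].update(corrections or {})  # fix reported inaccuracies
        return "corrections applied"
    if kind == "delete":
        if legal_hold:
            # Legal/safety rules may require limited retention.
            return "limited retention required; deletion deferred"
        store.pop(user_id, None)
        return "data deleted"
    raise ValueError(f"unknown request: {kind}")
```

The delete branch shows the one documented exception: when a legal or safety hold applies, the request is acknowledged but deferred rather than silently dropped.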