The safest peer
platform ever built.
Five independent layers of protection — automated AI, trained humans, and licensed professionals — working in parallel so no teen is ever truly alone in a crisis.
No single point of failure.
Each layer operates independently. If the AI misses something, humans catch it. If a human can't respond, a professional steps in. Redundancy is not optional — it's the design.
How it works
Every message is scanned by a fine-tuned NLP classifier before it is delivered to the recipient. The model checks for self-harm language, suicidal ideation, bullying, hate speech, sexual content, and exposure of personally identifiable information (PII).
Flagged messages are held in a quarantine queue and reviewed by the human moderation team within 5 minutes during peak hours (15 minutes off-peak). The sender sees a brief notification; the recipient never receives the flagged message.
Technical specs
- Model: GPT-4o mini + fine-tuned BERT
- Latency: <200ms per message
- Accuracy: >97% precision, >94% recall
- Categories: 8 harm categories monitored
- Updates: Retrained monthly on new data
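The screening flow above can be sketched as follows. This is a minimal illustration, not the production system: the model interface, the decision threshold, and the two unnamed category labels are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical labels standing in for the 8 monitored harm categories
# (six are named above; the last two are placeholders).
CATEGORIES = [
    "self_harm", "suicidal_ideation", "bullying", "hate_speech",
    "sexual_content", "pii_exposure", "category_7", "category_8",
]

FLAG_THRESHOLD = 0.5  # assumed decision threshold

@dataclass
class Verdict:
    flagged: bool            # True -> hold in quarantine, never deliver
    category: Optional[str]  # highest-scoring category when flagged
    score: float

def screen_message(text: str, model) -> Verdict:
    """Classify a message before delivery and decide its route.

    `model.predict` is assumed to return {category: probability}.
    """
    scores = model.predict(text)
    category, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= FLAG_THRESHOLD:
        return Verdict(True, category, score)   # -> quarantine queue
    return Verdict(False, None, score)          # -> deliver to recipient
```

A flagged verdict would route the message into the quarantine queue for human review instead of to the recipient.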
How it works
Every AI flag reaches a trained human moderator within 5 minutes during peak hours, and within 15 minutes at all other times. Moderators complete mental health first aid training and follow documented escalation protocols based on severity.
Moderators can: warn and release, remove content, temporarily suspend accounts, issue permanent bans, or escalate to a licensed mental health professional.
Moderator standards
- Training: Mental Health First Aid certified
- Coverage: 24/7 across all time zones
- Response (peak): <5 minutes
- Response (off-peak): <15 minutes
- Review: Monthly audits + supervisor oversight
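One way to express the documented escalation ladder in code. The numeric severity tiers are an assumption for illustration: the text lists the moderator actions but not how severity is scored.

```python
# Hypothetical numeric severity tiers mapped onto the documented
# moderator actions; the tier boundaries are assumptions.
ESCALATION_LADDER = {
    1: "warn_and_release",
    2: "remove_content",
    3: "temporary_suspension",
    4: "permanent_ban",
    5: "escalate_to_professional",
}

def action_for(severity: int) -> str:
    # Fail safe: any severity outside the known tiers routes straight
    # to a licensed mental health professional.
    return ESCALATION_LADDER.get(severity, "escalate_to_professional")
```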
How it works
When imminent self-harm or suicidal intent is detected — by AI or by a human moderator — the conversation is immediately escalated to a licensed mental health professional on call. Simultaneously, crisis resources are surfaced in-app for both participants.
The peer user is never expected to handle a crisis situation alone. Our approach follows established safe messaging guidelines from the American Foundation for Suicide Prevention.
Crisis resources shown in-app
- US: 988 Suicide & Crisis Lifeline
- US (text): Crisis Text Line — text HOME to 741741
- UK: Samaritans — 116 123
- AU: Lifeline — 13 11 14
- CA: Crisis Services Canada — 1-833-456-4566
How it works
Block, report, and end-chat buttons are always prominently visible in the UI — never buried in settings, never hidden behind confirmation dialogs. Tapping block takes effect instantly, with no friction.
Reports go directly to the moderator queue with high priority. Users can optionally add context. Reported users are flagged for immediate monitoring and reviewed within 1 hour.
User control specs
- Block: Instant — no confirmation needed
- Report review: Within 1 hour, always
- Categories: 6 report types + free text
- Outcome: Email notification within 24hrs
- Appeals: All bans are appealable
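The high-priority report queue described above can be sketched as a priority queue. The user-report tier reflects the one-hour review target from the text; the lower-urgency tier is an assumed example.

```python
import heapq
import itertools

PRIORITY_USER_REPORT = 0  # high priority: reviewed within 1 hour
PRIORITY_ROUTINE = 1      # assumed lower-urgency tier for illustration

_order = itertools.count()  # FIFO tiebreaker within a priority tier
_queue: list = []

def enqueue(priority: int, item: str) -> None:
    # heapq is a min-heap, so lower priority numbers surface first.
    heapq.heappush(_queue, (priority, next(_order), item))

def next_for_review() -> str:
    """Pop the most urgent item awaiting moderator review."""
    return heapq.heappop(_queue)[2]
```

A user report enqueued after routine items still jumps to the front of the review order.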
How it works
For users aged 13–15, Solaria requires parental or guardian consent during onboarding. Once completed, the guardian receives access to an optional dashboard showing activity summaries — never chat content.
Chat content privacy is non-negotiable. Research shows teens are far less likely to seek support if they believe they're being monitored. We protect that trust absolutely — while still giving parents meaningful visibility into usage patterns.
What parents can see
- ✓ Visible: Session count and duration
- ✓ Visible: Forum participation overview
- ✓ Visible: Usage hour patterns
- ✓ Configurable: Usage hour limits and restrictions
- ✗ Never: Chat content — ever
Your data is yours.
Completely.
We built Solaria's data architecture with privacy as the starting point — not an afterthought.
All messages are encrypted with AES-256 at rest and TLS 1.3 in transit. End-to-end encryption for chat via the Signal Protocol arrives in Phase 2.
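At-rest encryption along these lines could look like the sketch below, using the AES-GCM interface from the third-party `cryptography` package. Key management, rotation, and storage details are omitted and assumed.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_at_rest(key: bytes, plaintext: bytes) -> bytes:
    # AES-256-GCM with a fresh 96-bit nonce per message; the nonce is
    # prepended to the ciphertext so decryption can recover it.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_at_rest(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # 32-byte AES-256 key
```

AES-GCM is authenticated, so tampering with the stored blob makes decryption fail rather than return corrupted plaintext.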
We never sell user data to third parties. We never have. We've built our business model entirely around subscriptions and licensing — not data.
Delete your account and all associated data — messages, journal entries, mood data, everything — with a single button. Deletion is permanent and immediate.
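The single-button deletion can be sketched as a hard delete across every store holding user data. The store names here are hypothetical illustrations.

```python
def delete_account(user_id: str, stores: dict) -> bool:
    """Permanently remove every record tied to `user_id`.

    `stores` maps store names (e.g. "messages", "journal", "moods")
    to dict-like stores keyed by user id.
    """
    for store in stores.values():
        store.pop(user_id, None)  # immediate, unrecoverable removal
    # Verify nothing tied to the user survived the purge.
    return all(user_id not in store for store in stores.values())
```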
Any research contributions are aggregated, anonymized, and stripped of all PII before processing. Opt-out available at any time in account settings.
Full compliance for users aged 13–15. Parental consent flow, minimal data collection, no behavioral advertising, right to deletion.
School partnerships comply fully with FERPA. No student education records shared without consent. School dashboards show only aggregated, non-identifying data.
EU users have full GDPR rights: access, rectification, erasure, portability, and objection to processing. Data minimization by design.
HIPAA-aligned data handling for all mood and mental health data. No PHI shared with third parties without explicit, informed consent.
If you're in crisis right now —
please reach out to a professional immediately. These services are free, confidential, and available 24/7.
Call or text 988
Text HOME to 741741
Call 1-866-488-7386
Call 116 123 (free, 24/7)
Call 0800 068 4141
Text SHOUT to 85258
Call 1-833-456-4566
Call 1-800-668-6868
Text HELLO to 686868
If you're in immediate danger, call your local emergency services (911, 999, 000, or 112).
Seen something unsafe?
Report safety issues, content violations, or anything that concerns you. Our moderation team reviews every report.