No single point of failure.

Each layer operates independently. If the AI misses something, humans catch it. If a human can't respond, a professional steps in. Redundancy is not optional — it's the design.

01
AI Real-Time Moderation
<200ms response on every message

How it works

Every message is scanned by a fine-tuned NLP classifier before delivery to the recipient. The model checks for self-harm language, suicidal ideation, bullying, hate speech, sexual content, and personal identifying information exposure.

Flagged messages are held in a quarantine queue and reviewed by the human moderation team within 5 minutes during peak hours (15 minutes otherwise). The sender sees a brief notification. The recipient never receives the flagged message.
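The pre-delivery gate described above can be sketched in a few lines. This is an illustrative sketch only: the category names, threshold, and classifier stub are assumptions, not Solaria's actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative harm categories (assumed names, not Solaria's actual taxonomy).
CATEGORIES = [
    "self_harm", "suicidal_ideation", "bullying",
    "hate_speech", "sexual_content", "pii_exposure",
]

@dataclass
class ModerationResult:
    delivered: bool
    flagged_categories: list = field(default_factory=list)

def moderate(message: str, classify, threshold: float = 0.5) -> ModerationResult:
    """Run the classifier before delivery; quarantine if any score crosses the threshold."""
    scores = classify(message)  # {category: probability in [0, 1]}
    flagged = [c for c in CATEGORIES if scores.get(c, 0.0) >= threshold]
    if flagged:
        # Held for human review; the recipient never sees the message.
        return ModerationResult(delivered=False, flagged_categories=flagged)
    return ModerationResult(delivered=True)

# Stub classifier standing in for the fine-tuned model.
def fake_classifier(message):
    return {"bullying": 0.9} if "stupid" in message else {}

print(moderate("hello!", fake_classifier).delivered)        # True
print(moderate("you're stupid", fake_classifier).delivered) # False
```

The key design point is that classification happens synchronously, before delivery, so a flagged message is never seen by the recipient.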

Technical specs

  • Model: GPT-4o mini + fine-tuned BERT
  • Latency: <200ms per message
  • Accuracy: >97% precision, >94% recall
  • Categories: 8 harm categories monitored
  • Updates: Retrained monthly on new data
02
Human Moderator Escalation
Trained staff review within 5 minutes

How it works

Every AI flag reaches a trained human moderator within 5 minutes during peak hours, and within 15 minutes at all other times. Moderators complete mental health first aid training and follow documented escalation protocols based on severity.

Moderators can: warn and release, remove content, temporarily suspend accounts, issue permanent bans, or escalate to a licensed mental health professional.
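The moderator actions above can be modeled as a small enum with a default mapping from triaged severity to action. The severity tiers and mapping here are assumptions for illustration; the documented protocols are not reproduced in this sketch, and a moderator can always override the default.

```python
from enum import Enum, auto

class ModeratorAction(Enum):
    WARN_AND_RELEASE = auto()
    REMOVE_CONTENT = auto()
    TEMP_SUSPEND = auto()
    PERMANENT_BAN = auto()
    ESCALATE_TO_PROFESSIONAL = auto()

# Assumed severity tiers -> default actions (illustrative only).
DEFAULT_ACTION = {
    "low": ModeratorAction.WARN_AND_RELEASE,
    "medium": ModeratorAction.REMOVE_CONTENT,
    "high": ModeratorAction.TEMP_SUSPEND,
    "imminent_risk": ModeratorAction.ESCALATE_TO_PROFESSIONAL,
}

def recommend_action(severity: str) -> ModeratorAction:
    """Map a triaged severity to the protocol's default action; moderators may override."""
    return DEFAULT_ACTION[severity]

print(recommend_action("imminent_risk").name)  # ESCALATE_TO_PROFESSIONAL
```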

Moderator standards

  • Training: Mental Health First Aid certified
  • Coverage: 24/7 across all time zones
  • Response (peak): <5 minutes
  • Response (off-peak): <15 minutes
  • Review: Monthly audits + supervisor oversight
03
Crisis Intervention
Licensed professional on call 24/7

How it works

When imminent self-harm or suicidal intent is detected — by AI or by a human moderator — the conversation is immediately escalated to a licensed mental health professional on call. Simultaneously, crisis resources are surfaced in-app for both participants.

The peer user is never expected to handle a crisis situation alone. Our approach follows established safe messaging guidelines from the American Foundation for Suicide Prevention.
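The two simultaneous steps of the escalation flow can be sketched as follows. The callback names are hypothetical; the point of the sketch is simply that paging the on-call professional and surfacing resources to both participants happen together, not sequentially after a human decision.

```python
def handle_crisis(conversation_id, notify_professional, show_resources, participants):
    """On detected imminent risk: escalate and surface resources at the same time."""
    notify_professional(conversation_id)  # licensed professional on call, 24/7
    for user in participants:             # resources shown to BOTH participants
        show_resources(user)

# Hypothetical wiring: record which calls were made.
paged, shown = [], []
handle_crisis("conv-42", paged.append, shown.append, ["user_a", "peer_b"])
print(paged)  # ['conv-42']
print(shown)  # ['user_a', 'peer_b']
```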

Crisis resources shown in-app

  • US: 988 Suicide & Crisis Lifeline
  • US (text): Crisis Text Line — text HOME to 741741
  • UK: Samaritans — 116 123
  • AU: Lifeline — 13 11 14
  • CA: Crisis Services Canada — 1-833-456-4566
04
User Controls
Block, report, end — always one tap away

How it works

Block, report, and end-chat buttons are always prominently visible in the UI — never buried in settings, never hidden behind confirmation dialogs. Tapping block takes instant effect with no friction.

Reports go directly to the moderator queue with high priority. Users can optionally add context. Reported users are flagged for immediate monitoring and reviewed within 1 hour.
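"High priority" in a moderator queue is naturally a priority queue: user reports outrank routine items. A minimal sketch with Python's `heapq` (the priority values and item shape are assumptions for illustration):

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so equal priorities pop in FIFO order

def submit_report(queue, reported_user, context="", priority=0):
    """Push a report; lower priority number = reviewed sooner. User reports use 0."""
    heapq.heappush(queue, (priority, next(_counter), reported_user, context))

def next_item(queue):
    """Pop the highest-priority report for moderator review."""
    _priority, _seq, user, _context = heapq.heappop(queue)
    return user

report_queue = []
submit_report(report_queue, "routine_flag", priority=1)
submit_report(report_queue, "user_reported", priority=0)  # jumps the queue
print(next_item(report_queue))  # user_reported
```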

User control specs

  • Block: Instant — no confirmation needed
  • Report review: Within 1 hour, always
  • Categories: 6 report types + free text
  • Outcome: Email notification within 24 hours
  • Appeals: All bans are appealable
05
Parental Oversight
Optional guardian dashboard for ages 13–15

How it works

For users aged 13–15, Solaria requires parental or guardian consent during onboarding. Once completed, the guardian receives access to an optional dashboard showing activity summaries — never chat content.

Chat content privacy is non-negotiable. Research shows teens are far less likely to seek support if they believe they're being monitored. We protect that trust absolutely — while still giving parents meaningful visibility into usage patterns.
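The privacy guarantee above can be enforced structurally: the dashboard's summary function only ever receives session metadata, so chat text cannot leak even by bug. A sketch, with assumed field names:

```python
def guardian_summary(sessions):
    """Aggregate usage metadata for the guardian dashboard.

    Note the input shape: session records carry duration and timing only.
    Chat content is never part of the input, so it cannot appear in the output.
    """
    return {
        "session_count": len(sessions),
        "total_minutes": sum(s["minutes"] for s in sessions),
        "active_hours": sorted({s["start_hour"] for s in sessions}),
    }

sessions = [
    {"minutes": 20, "start_hour": 16},
    {"minutes": 35, "start_hour": 21},
]
print(guardian_summary(sessions))
# {'session_count': 2, 'total_minutes': 55, 'active_hours': [16, 21]}
```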

What parents can see

  • ✓ Visible: Session count and duration
  • ✓ Visible: Forum participation overview
  • ✓ Visible: Usage hour patterns
  • ✓ Configurable: Usage hour limits and restrictions
  • ✗ Never: Chat content — ever

Your data is yours.
Completely.

We built Solaria's data architecture with privacy as the starting point — not an afterthought.

🔒
End-to-End Encryption

All messages are encrypted with AES-256 at rest and protected by TLS 1.3 in transit. Signal Protocol end-to-end encryption for chat arrives in Phase 2.

🚫
Zero Data Selling

We never sell user data to third parties. We never have. We've built our business model entirely around subscriptions and licensing — not data.

🗑️
Right to Deletion

Delete your account and all associated data — messages, journal entries, mood data, everything — with a single button. Deletion is permanent and immediate.
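"Everything, with a single button" means a cascade delete across every table that references the user, plus the account record itself. A sketch against an in-memory store (the table and field names are assumptions for illustration):

```python
def delete_account(store, user_id):
    """Hard-delete the user and every associated record in one pass; no soft-delete."""
    for table in ("messages", "journal_entries", "mood_data"):
        store[table] = [r for r in store[table] if r["user_id"] != user_id]
    store["accounts"].pop(user_id, None)

store = {
    "accounts": {"u1": {"email": "a@example.com"}},
    "messages": [{"user_id": "u1", "text": "hi"}],
    "journal_entries": [{"user_id": "u1", "text": "today was ok"}],
    "mood_data": [{"user_id": "u1", "score": 4}],
}
delete_account(store, "u1")
print(store["accounts"])  # {}
print(store["messages"])  # []
```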

📊
Anonymized Research

Any research contributions are aggregated, anonymized, and stripped of all PII before processing. Opt-out available at any time in account settings.
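The strip-then-aggregate pipeline can be made explicit: identifiers are removed first, and only the aggregate ever leaves the system. The PII field names below are assumptions, not Solaria's actual schema.

```python
PII_FIELDS = {"name", "email", "user_id", "ip_address"}  # assumed field names

def anonymize(record):
    """Strip direct identifiers before any research processing."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

def aggregate_mood(records):
    """Anonymize each row, then report only the aggregate, never individual rows."""
    clean = [anonymize(r) for r in records]
    scores = [r["mood_score"] for r in clean]
    return {"n": len(scores), "mean_mood": sum(scores) / len(scores)}

records = [
    {"user_id": "u1", "email": "a@x.com", "mood_score": 3},
    {"user_id": "u2", "email": "b@x.com", "mood_score": 5},
]
print(aggregate_mood(records))  # {'n': 2, 'mean_mood': 4.0}
```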

COPPA
Children's Online Privacy Protection Act

Full compliance for users aged 13–15. Parental consent flow, minimal data collection, no behavioral advertising, right to deletion.

FERPA
Family Educational Rights and Privacy Act

School partnerships comply fully with FERPA. No student education records shared without consent. School dashboards show only aggregated, non-identifying data.

GDPR
General Data Protection Regulation (EU)

EU users have full GDPR rights: access, rectification, erasure, portability, and objection to processing. Data minimization by design.

HIPAA
Health Insurance Portability and Accountability Act

HIPAA-aligned data handling for all mood and mental health data. No PHI shared with third parties without explicit, informed consent.

If you're in crisis right now —

please reach out to a professional immediately. These services are free, confidential, and available 24/7.

🇺🇸
United States
988 Suicide & Crisis Lifeline
Call or text 988
Crisis Text Line
Text HOME to 741741
Trevor Project (LGBTQ+)
Call 1-866-488-7386
🇬🇧
United Kingdom
Samaritans
Call 116 123 (free, 24/7)
PAPYRUS (under 35)
Call 0800 068 4141
Shout Crisis Text
Text SHOUT to 85258
🇨🇦
Canada
Crisis Services Canada
Call 1-833-456-4566
Kids Help Phone
Call 1-800-668-6868
Crisis Text Line (CA)
Text HELLO to 686868
🇦🇺
Australia
Lifeline
Call 13 11 14
Kids Helpline
Call 1800 55 1800
Beyond Blue
Call 1300 22 4636

If you're in immediate danger, call your local emergency services (911, 999, 000, or 112).

Seen something unsafe?

Report safety issues, content violations, or anything that concerns you. Our moderation team reviews every report.