
Welcome. In this lesson we dig into how AI systems shape young people’s mental health, social lives and privacy, and what educators, designers and policy makers can do about it. We’ll look beyond headlines and hype to the everyday ways AI can help and harm learners, and practice concrete, ethical approaches to prevention, support and design that preserve dignity and autonomy.
Why this matters
- Young people encounter AI everywhere: apps, tutoring systems, content feeds, safety tools and school platforms. Those systems influence emotions, relationships, learning and identity.
- The impacts aren’t just technical — they’re social, developmental and legal. Small design choices can amplify harm or build supports that actually help.
- If you create, select or regulate tools for young people, you need practical ways to assess risk and build safer, inclusive options that respect privacy and autonomy.
What you’ll get from this lesson
By the end you’ll be able to:
- Describe key mental‑health and socio‑emotional risks associated with AI for young people.
- Identify how AI can amplify harms like harassment, misinformation and surveillance.
- Explain how AI can be used ethically for prevention and support — and where it falls short.
- Weigh safety, autonomy and non‑surveillance alternatives in real decisions.
- Produce a basic harm‑assessment and mitigation plan for an AI feature used with young people.
How the lesson is structured
We’ll move through five short topics, a mix of mini‑lectures, examples and a hands‑on activity:
1. Mental‑health and socio‑emotional risks
   A quick tour of effects like anxiety, addiction/engagement loops, identity harms, social comparison and developmental concerns.
2. AI‑amplified harms: harassment, misinformation, surveillance and privacy threats
   Concrete examples of how algorithmic amplification, recommendation loops and data collection scale harms and create new vulnerabilities.
3. AI‑enabled prevention and support: detection, moderation and ethical monitoring
   When detection, content moderation and therapeutic tools can help, and the trade‑offs they introduce (false positives, bias, pathologizing); a worked false‑positive example follows this list.
4. Balancing safety, autonomy and non‑surveillance approaches
   Frameworks and practical tactics for protecting young people without over‑surveillance or paternalism; designing consent, agency and de‑escalation into systems.
5. Activity: harm assessment and mitigation plan for an AI feature
   A guided group or solo exercise: pick a feature (chatbot, recommendation engine, monitoring tool), identify harms, rank risk and design mitigations with policy and practice suggestions.
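
Topic 3’s false‑positive trade‑off is worth making concrete before the discussion. Below is a minimal, illustrative calculation (the prevalence and accuracy figures are assumptions for teaching, not measurements) showing why even a seemingly accurate crisis detector mostly flags students who are not in crisis when genuine crises are rare:

```python
# Illustrative base-rate arithmetic for a hypothetical crisis-detection tool.
# Every number here is an assumed teaching value, not a measurement.

prevalence = 0.01     # assume 1% of screened messages signal a genuine crisis
sensitivity = 0.90    # assume the detector catches 90% of true crisis messages
specificity = 0.95    # assume it correctly passes 95% of non-crisis messages

population = 100_000  # messages screened

true_crisis = population * prevalence          # 1,000 genuine crisis messages
non_crisis = population - true_crisis          # 99,000 ordinary messages

true_positives = true_crisis * sensitivity           # 900 caught
false_positives = non_crisis * (1 - specificity)     # 4,950 wrongly flagged

precision = true_positives / (true_positives + false_positives)

print(f"Flagged messages: {true_positives + false_positives:.0f}")
print(f"Genuine crises among flags: {true_positives:.0f} ({precision:.0%})")
# With these assumptions, only ~15% of flags are real crises, so most
# interventions would land on young people who did not need one.
```

Have participants vary `prevalence` and `specificity`: small shifts swamp the precision, which is why “the model is 95% accurate” is a weak safety argument on its own.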
Time and materials (suggested)
- Total time: ~60–90 minutes (depending on discussion depth)
- Materials: lesson slides or notes, one case‑study prompt, a simple harm‑assessment worksheet (a likelihood × severity risk matrix; a minimal code sketch follows), and breakout groups or a discussion forum.
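
For groups that would rather score the worksheet digitally, here is a minimal sketch of one common risk‑matrix convention (likelihood × severity on 1–5 scales). The harms, scores and action‑band thresholds are placeholders for participants to replace, not recommendations:

```python
# A minimal likelihood x severity risk matrix for the harm-assessment activity.
# Harms, scores and band thresholds are placeholders for the exercise.

harms = [
    # (harm, likelihood 1-5, severity 1-5)
    ("Chatbot normalizes self-harm talk", 3, 5),
    ("Recommendation loop amplifies social-comparison content", 4, 3),
    ("Monitoring tool leaks sensitive disclosures", 2, 3),
]

def risk_score(likelihood: int, severity: int) -> int:
    """One common convention: risk = likelihood x severity (range 1-25)."""
    return likelihood * severity

def risk_band(score: int) -> str:
    """Map a score to an action band; thresholds here are a teaching default."""
    if score >= 15:
        return "mitigate before launch"
    if score >= 8:
        return "mitigate and monitor"
    return "accept and document"

# Rank harms so the group designs mitigations for the highest risks first.
for harm, likelihood, severity in sorted(
    harms, key=lambda h: risk_score(h[1], h[2]), reverse=True
):
    score = risk_score(likelihood, severity)
    print(f"{score:>2}  {risk_band(score):<22}  {harm}")
```

The point of the exercise is the argument behind each score, not the arithmetic: ask groups to justify every likelihood and severity rating before they compare rankings.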
Teaching tips
- Keep it learner‑centered: prompt participants to bring real examples from their classrooms/products/policies.
- Use small groups for the activity; assigning different roles (teacher, designer, privacy officer, student advocate) helps surface trade‑offs.
- Emphasize contextual judgment — there are rarely perfect answers; aim for defensible, transparent choices.
- Include young people’s perspectives where possible — their experiences and preferences matter.
Reflection prompts (use in discussion or journaling)
- What AI interaction have you seen young people use recently that worried you — and why?
- When is surveillance justified for safety, and when does it do more harm than good?
- What’s one design change you could propose tomorrow to reduce risk in a tool you work with?
Ready to start? Head into Topic 1 to explore the mental‑health and socio‑emotional risks in more detail.
