Human-rights and learner-centered principles (deliverable: draft guiding principles)

This is a short, practical set of principles you can copy, adapt, or use as a starting point for local policy, procurement, classroom practice, or product design. Each principle has a plain-language statement, why it matters, quick “what to do” actions, red flags, and examples you can copy into lesson plans, contracts, or checklists.
Tone: learner-centered, rights-respecting, practical. Use what fits your context; the quick prompts at the end will help you adapt these principles locally.
1. Respect learner dignity and rights
Plain statement:
Treat every learner as a rights-holder: respect their dignity, autonomy and entitlements (education, privacy, safety).
Why it matters:
Learners, especially children and young people, hold legal and moral rights that must ground every technology choice.
What to do
- Require age-appropriate consent, with parental involvement where local law requires it.
- Avoid tools that force learners into intrusive profiling.
- Include a clause in vendor contracts that prohibits collecting data beyond what’s necessary for learning.
Red flags
- Systems that profile learners for “risk” without transparent criteria.
- Mandatory data collection for non-essential features.
Example
Classroom: Offer an opt-out for non-essential AI features (e.g., automated discussion analysis) and provide an alternative activity.
2. Prioritize safety and protection from harm
Plain statement:
Design and choose AI so learners are protected from physical, psychological, sexual and social harms.
Why it matters:
AI can amplify risks (exposure to harmful content, biased feedback, grooming, stigmatizing labels).
What to do
- Screen models for harmful output; block or flag unsafe content.
- Have clear escalation and reporting procedures when AI identifies risks.
- Do regular safety testing with diverse scenarios.
Red flags
- No safety testing or unclear procedures for handling flagged harms.
- Overreliance on AI to “detect” harm without human oversight.
Example
Product: A chatbot used in health education includes a “flag” that notifies a trained human facilitator (rather than triggering an automated referral) when a learner expresses self-harm risk.
3. Minimize data collection and protect privacy
Plain statement:
Collect the least amount of personal data needed and protect it with strong safeguards.
Why it matters:
Less data = less risk. Learner data is sensitive (health, sexual behavior, neurodiversity) and must be handled carefully.
What to do
- Apply data minimization, anonymization and short retention periods.
- Prefer local, device-based processing where possible.
- Maintain and publish a clear data map: what is collected, why, how long it’s retained, and who can access it.
Red flags
- Vendor requires raw access to full chat logs or video feeds as a condition of use.
- Indefinite data retention or vague data policies.
Example
Policy: “Only pseudonymized engagement metrics are stored beyond the academic year; raw audio/video is processed on-device and not retained.”
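A minimal sketch of what this policy could look like in code, assuming a simple engagement-event record; the names here (SALT, EngagementEvent) are illustrative, not from any particular product:

```python
# Sketch of the pseudonymization-and-retention policy above.
import hashlib
from dataclasses import dataclass

SALT = "rotate-each-academic-year"  # stored separately from the metrics

@dataclass
class EngagementEvent:
    learner_id: str
    minutes_active: int
    raw_transcript: str  # processed on-device and never persisted

def pseudonymize(event: EngagementEvent) -> dict:
    """Keep only what the policy allows to persist beyond the year."""
    token = hashlib.sha256((SALT + event.learner_id).encode()).hexdigest()
    return {
        "learner_token": token,  # not reversible without the salt
        "minutes_active": event.minutes_active,
        # raw_transcript is deliberately dropped, matching the policy text
    }
```

Rotating the salt each academic year means tokens from different years cannot be linked, which also enforces the retention limit in practice.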
4. Promote equity and non‑discrimination
Plain statement:
Prevent AI from reproducing or amplifying bias; ensure fair access and outcomes for all learners.
Why it matters:
Bias can lead to unfair labeling, exclusion, lower expectations or unequal access to support.
What to do
- Test tools on diverse datasets and report disparities.
- Provide accommodations and alternatives to AI-driven pathways.
- Monitor outcomes by demographic groups and act on disparities.
Red flags
- No demographic testing or refusal to share evaluation data.
- One-size-fits-all adaptive learning that assumes a single norm.
Example
Design: Adaptive learning systems must allow teachers to override recommendations and to set multiple learning pathways.
5. Support learner agency and participation
Plain statement:
Center learners’ voices: give them control over how AI affects their learning and invite them into decisions about it.
Why it matters:
Agency supports development, trust and better learning outcomes.
What to do
- Provide clear opt-ins/opt-outs and user settings for personalization.
- Use age-appropriate explanations about what the AI does.
- Involve learners in testing and feedback loops.
Red flags
- Hidden personalization without user control.
- No mechanism for learners to correct or contest AI outputs about them.
Example
Classroom: A short interactive activity explains how a learning recommender works and asks students to set their comfort level for data use.
6. Be transparent and explainable
Plain statement:
Make how and why AI makes decisions understandable to learners, families and educators.
Why it matters:
Transparency builds trust and allows meaningful consent and oversight.
What to do
- Publish plain-language model descriptions, data sources and limitations.
- Provide case-level explanations (“why did I get this recommendation?”).
- Train staff to explain AI behavior to learners and families.
Red flags
- “Black box” claims with no explainability features.
- Technical jargon used in communications in place of genuine explanation.
Example
Procurement: Require an “explainability pack” from vendors that includes sample explanations for common outputs.
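For illustration, here is one hypothetical entry such a pack might contain for a learning recommender; the fields are an assumption, not a vendor standard:

```python
# Hypothetical "explainability pack" entry for one common output.
SAMPLE_EXPLANATION = {
    "output": "Recommended: fractions refresher module",
    "plain_language_reason": (
        "You answered 3 of 8 fraction questions correctly this week, "
        "so the tutor suggests a short review before the next unit."
    ),
    "data_used": ["this week's quiz answers"],
    "data_not_used": ["grades from other subjects", "demographic details"],
    "how_to_contest": "Ask your teacher to override the recommendation.",
}
```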
7. Be developmentally and culturally appropriate
Plain statement:
Align AI content, interactions and expectations with learners’ developmental stage and cultural context.
Why it matters:
What’s safe or appropriate for one age/culture may be harmful for another.
What to do
- Use age gates and content tuning; involve local educators in content review.
- Avoid universal assumptions about maturity or norms.
- Localize content and sensitivity settings.
Red flags
- Default settings optimized for adult norms or a single culture.
- No local review or teacher input in content decisions.
Example
Curriculum: Sexuality education chatbots should be reviewed by local educators and child protection experts before classroom use.
8. Ensure accessibility and inclusion
Plain statement:
Design AI tools so learners with disabilities, language differences and diverse needs can use them effectively.
Why it matters:
Accessibility is a rights obligation and supports learning for everyone.
What to do
- Follow accessibility standards (WCAG) and provide multiple modes (text, audio, visuals).
- Include assistive tech compatibility and captions/transcripts.
- Test with learners who have diverse needs.
Red flags
- Interfaces that rely solely on voice or small visual cues without alternatives.
- No plan for translation or special education adaptations.
Example
Tool: An AI tutor provides text summaries, audio narration, and simplified language options.
9. Build accountability and effective redress
Plain statement:
Establish clear responsibilities, oversight mechanisms and ways to resolve harms or mistakes.
Why it matters:
Learners and families need paths to complain, correct errors, and seek remedies.
What to do
- Define roles (who monitors, who responds, who owns the risk).
- Create clear reporting channels and time-bound response commitments.
- Include contractual liability and audit rights for vendors.
Red flags
- No named responsible party or inaccessible complaint processes.
- Vendors refusing audits or data access for oversight.
Example
Policy clause: “The vendor will respond to safety or privacy reports within 48 hours; audits may be conducted annually by the school district.”
10. Monitor, evaluate and iterate continuously
Plain statement:
Treat deployment as an ongoing process: monitor impacts, iterate based on evidence, and engage stakeholders.
Why it matters:
Risks and contexts change; continuous review prevents drift and harm.
What to do
- Define KPIs for safety, equity, engagement and satisfaction.
- Schedule regular reviews with learners, parents and staff.
- Keep a changelog for model updates and re-evaluate after major changes.
Red flags
- One-time review only at procurement with no monitoring plan.
- Model updates pushed without re-evaluation.
Example
School: Quarterly impact review includes student focus groups, data checks and a public summary of findings.
How to adapt these locally: quick prompts
- Legal & cultural fit: What local child rights and data protection laws apply? What cultural norms must be respected?
- Age bands: How do these principles change for early childhood, primary, secondary and adolescent learners?
- Resources: Which actions are essential vs. nice-to-have given your budget/staffing?
- Stakeholders: Who must sign off? (e.g., school board, teacher unions, parent groups, student representatives)
- Escalation: What is your local reporting and child-protection pathway?
Sample short policy language (copyable)
- “We will only use AI tools that: (1) minimize learner data, (2) provide clear explanations tailored to learners’ age, (3) allow opt-out of non-essential features, and (4) undergo quarterly safety and equity reviews with stakeholder input.”
Simple procurement checklist (yes/no)
- Does the vendor provide a plain-language model card?
- Is data collection minimized and documented?
- Can raw learner data be exported or deleted on request?
- Are there documented safety tests and mitigation plans?
- Does the vendor allow audits or share evaluation results?
- Are accessibility and localization options available?
- Is there a clear escalation and complaint process?
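If your team wants to track answers per vendor, the checklist above can double as a simple machine-readable record; a sketch follows, with keys that paraphrase the questions (not a formal standard):

```python
# One record per vendor; fill each value with True/False after review.
PROCUREMENT_CHECKLIST = {
    "plain_language_model_card": None,
    "data_collection_minimized_and_documented": None,
    "raw_data_export_or_deletion_on_request": None,
    "documented_safety_tests_and_mitigations": None,
    "audits_allowed_or_evaluations_shared": None,
    "accessibility_and_localization_options": None,
    "clear_escalation_and_complaint_process": None,
}

def passes_screen(checklist: dict) -> bool:
    """Any 'no' or unanswered item fails the procurement screen."""
    return all(answer is True for answer in checklist.values())
```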
Quick classroom checklist for teachers
- Has the AI tool been reviewed by school leadership for safety/privacy?
- Are learners told what the tool does and why?
- Is there a non-AI alternative activity available?
- Do we have consent/opt-out records?
- Do students know how to report issues and who to talk to?
Reflection prompts for your team (15–30 minute session)
- Which of these principles do we already meet? Which are missing?
- What’s our single highest-risk use case for AI in the next 12 months?
- Who will own monitoring and communication with learners/families?
- How will we involve learners in evaluation?
Indicator ideas for monitoring (pick 3–5)
- Number of safety incidents flagged and resolved within SLA.
- Disparities in outcomes across learner groups (grade, language, disability).
- Percentage of learners with an active opt-out for personalization.
- Time to respond to privacy or safety complaints.
- Results from learner/parent trust surveys.
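As a worked example, here is a minimal sketch for the first indicator, assuming each incident logs when it was flagged and when it was resolved; the 48-hour window echoes the sample policy clause above:

```python
# Share of incidents resolved within the SLA window.
from datetime import datetime, timedelta

SLA = timedelta(hours=48)

incidents = [  # illustrative log entries, not real data
    {"flagged": datetime(2024, 3, 1, 9, 0), "resolved": datetime(2024, 3, 2, 8, 0)},
    {"flagged": datetime(2024, 3, 5, 14, 0), "resolved": datetime(2024, 3, 9, 10, 0)},
]

within = sum(1 for i in incidents if i["resolved"] - i["flagged"] <= SLA)
print(f"{within}/{len(incidents)} incidents resolved within the 48-hour SLA")
```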
Final note
Keep this draft short, public and adaptable. The goal is not perfection but to put human rights and learner needs at the center so every decision (procurement, classroom practice, contract or product design) can be tested against a clear set of principles. From here, the draft can be turned into a one-page poster, a short policy template, or a procurement checklist spreadsheet, depending on what your context needs most.
