Pedagogy First · Agent Powered · SAS2027 Aligned

LearnIQ: Quiet AI that supports learning

A quiet team of AI agents that expands teacher capacity, supports student-led pathways like Catalyst, Quest and Try Time, and respects SAS values of excellence, extraordinary care and possibilities.

Why LearnIQ?

AI is increasingly built into the tools schools use. LearnIQ sets out how SAS will use it in ways that serve SAS2027, personalised pathways, and the Eagle Way, rather than simply following product roadmaps.

Update – December 4, 2025

Google's newly announced Workspace Studio for Gemini agents and Microsoft's emerging agentic enterprise models confirm the direction LearnIQ was designed for: small, school-shaped teams of agents working quietly behind a single assistant. Because LearnIQ is pedagogy-first and vendor-neutral, SAS can adopt tools like Workspace Studio or future agent platforms without changing the core principles, governance and pathways described on this page.

Core Principles

  • Start from SAS learning goals and DSLOs, not from tools
  • Give teachers back time for feedback, conferences and design
  • Help students plan, reflect and act on their own evidence
  • Keep data, privacy and wellbeing at the centre
  • Ensure transparency and human oversight at every stage

Anchored in SAS Pathways

Connecting AI to Catalyst, Quest and Try Time

LearnIQ is not a new programme. It is the layer that helps teachers and students use AI to make existing programmes richer: Catalyst projects with clearer scaffolds, Quest with smarter planning support, Try Time with better reflection and next steps.

SAS2027 · Excellence, Extraordinary Care, Possibilities · DSLOs in Action

Personalised Pathways

The learning core is the first "layer" of LearnIQ. It contains the things agents must respect: teacher judgement, student goals, SAS pathways, and the Eagle Way.

Explore SAS Pathways

Select a pathway to see how LearnIQ agents support each programme whilst maintaining teacher control and student agency.

Questions Before Technology

The Core Prompts for Any AI Use at SAS

  • What do we want students to understand, create or improve here?
  • How does this connect to Catalyst, Quest, Try Time or core course aims?
  • Which part of this work is high-value for teachers and students?
  • Which parts repeat and can safely be handled by an agent?
  • What data is needed, and how do we protect it?

Only once these are clear does LearnIQ suggest which agents to use, and what data they are allowed to access.
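As a rough illustration only, the five prompts above could be captured as a simple intake record that has to be complete before any agent is proposed. The sketch below assumes a Python implementation; the class and field names (UseCaseIntake, is_ready_for_agents and so on) are hypothetical, not part of any existing SAS system.

```python
from dataclasses import dataclass, field

# Hypothetical intake record mirroring the five core prompts above.
@dataclass
class UseCaseIntake:
    learning_goal: str   # what students should understand, create or improve
    pathway_link: str    # Catalyst, Quest, Try Time or a core course aim
    high_value_work: list[str] = field(default_factory=list)  # work that stays with people
    repeatable_work: list[str] = field(default_factory=list)  # candidate tasks for an agent
    data_needed: list[str] = field(default_factory=list)      # each item noted with how it is protected

    def is_ready_for_agents(self) -> bool:
        """Agents are only suggested once every prompt has a real answer."""
        return all([
            self.learning_goal,
            self.pathway_link,
            self.high_value_work,
            self.repeatable_work,
            self.data_needed,
        ])

intake = UseCaseIntake(
    learning_goal="Stronger reflective writing in Quest",
    pathway_link="Quest",
)
print(intake.is_ready_for_agents())  # False until the remaining prompts are answered
```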

Quiet Help in the Background

Staff and students interact with one assistant in natural language. Behind that sits a small team of agents, each with a narrow, well-defined job. This follows the same thinking as corporate "frontier" models, adapted for a K–12 setting.
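To make that pattern concrete, here is a minimal sketch assuming a Python implementation. The orchestrator, the example agents and the routing-by-topic logic are illustrative assumptions, not a description of any vendor product; in practice this layer would sit inside whichever agent platform SAS adopts.

```python
from typing import Callable

# A specialist agent is anything that takes a request and returns a suggestion.
SpecialistAgent = Callable[[str], str]

class Orchestrator:
    """Single natural-language entry point that routes to narrow agents."""

    def __init__(self) -> None:
        self._agents: dict[str, SpecialistAgent] = {}

    def register(self, topic: str, agent: SpecialistAgent) -> None:
        self._agents[topic] = agent

    def handle(self, topic: str, request: str) -> str:
        # Route to exactly one specialist; anything unrecognised goes to a person.
        agent = self._agents.get(topic)
        if agent is None:
            return "No agent covers this request; escalating to a staff member."
        return agent(request)

# Example narrow, well-defined jobs (names are hypothetical)
def feedback_drafter(request: str) -> str:
    return f"Draft feedback starters for: {request}"

def timetable_checker(request: str) -> str:
    return f"Possible timetable clashes for: {request}"

assistant = Orchestrator()
assistant.register("feedback", feedback_drafter)
assistant.register("timetable", timetable_checker)
print(assistant.handle("feedback", "Grade 8 Catalyst project drafts"))
```

The point of the pattern is that each specialist has one narrow job, and anything outside those jobs is escalated to a person rather than guessed.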

Ethics & Data Protection

Trust requires transparency. LearnIQ operates under clear governance structures that prioritise student wellbeing, data protection, and human oversight.

Control Plane

LearnIQ Agent Registry

Every agent is registered, documented and reviewed. There are no hidden prompts, no unmanaged tools. Complete transparency ensures accountability. A sketch of what a single registry entry might record follows the list below.

  • Zero trust approach to identity and access control
  • Data flows mapped and checked against SAS policy
  • Bias-aware testing with real SAS use cases and diverse scenarios
  • Staff can see and amend what any agent is allowed to use
  • Regular audits and impact assessments by governance committee
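Purely as an illustration of what "registered, documented and reviewed" could mean in practice, the sketch below shows one possible registry entry; the field names, example agent and placeholder link are assumptions, not an existing SAS schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical registry entry mirroring the control-plane commitments above.
@dataclass(frozen=True)
class AgentRegistryEntry:
    name: str                      # e.g. "Reflection Prompt Agent"
    owner: str                     # accountable staff member
    purpose: str                   # the narrow, documented job
    allowed_data: tuple[str, ...]  # scopes staff can see and amend
    prompt_location: str           # no hidden prompts: where the full prompt is published
    last_review: date              # regular audits and impact assessments

registry: list[AgentRegistryEntry] = [
    AgentRegistryEntry(
        name="Reflection Prompt Agent",
        owner="AI Specialist",
        purpose="Suggest reflection questions after Try Time sessions",
        allowed_data=("student goal statements", "anonymised rubric levels"),
        prompt_location="internal documentation link (placeholder)",
        last_review=date(2025, 12, 1),
    ),
]
```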

Human Oversight

The AI Specialist at SAS

The AI Specialist's role is half pedagogy, half governance, ensuring AI serves learning goals whilst maintaining ethical standards.

  • Co-design pilots with teachers and students, not vendors
  • Write and update clear, practical AI guidelines for the community
  • Lead reviews of impact on workload, learning and wellbeing
  • Act as human-in-the-loop for any high-stakes or sensitive use
  • Bridge between educational vision and technical implementation
  • Work with technology and legal teams on data residency, uptime, and incident response plans
  • Support academic integrity by helping staff and students use AI to deepen thinking, not shortcut it
  • Work with staff to position AI as a thinking partner that suggests prompts, questions and scaffolds rather than a source of finished work

Core Principles

Our Commitment to Ethical AI

Privacy First

PDPA and GDPR compliance by design. Student data stays under SAS control, is processed within approved regions where possible, and is not used to train public models.
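As one way to picture "compliance by design", the sketch below treats those commitments as configuration that every agent must check before processing anything; the keys, region identifier and retention value are illustrative assumptions, not actual SAS settings or vendor terms.

```python
# Illustrative policy-as-configuration sketch (values are assumptions, not real settings).
DATA_PROTECTION_POLICY = {
    "data_controller": "SAS",                  # student data stays under SAS control
    "approved_regions": ["asia-southeast1"],   # process in-region where possible
    "train_public_models_on_student_data": False,
    "retention_days": 365,                     # assumed value, set by policy review
}

def region_allowed(region: str) -> bool:
    """Refuse to process student data outside the approved regions."""
    return region in DATA_PROTECTION_POLICY["approved_regions"]

print(region_allowed("asia-southeast1"))  # True
print(region_allowed("us-central1"))      # False
```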

Bias Prevention

Regular testing with diverse scenarios. Outputs are flagged for review when confidence is low, when bias is detected, or when an agent would otherwise need to guess.

Human-in-the-Loop

Agents suggest, humans decide. High-stakes decisions always involve teacher judgement and student voice.
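A minimal sketch of the review rule implied by these two principles is shown below, assuming a Python implementation; the confidence threshold and field names are illustrative and would be agreed by the governance committee, not set by this page.

```python
from dataclasses import dataclass

@dataclass
class AgentOutput:
    text: str
    confidence: float        # 0.0 to 1.0, reported by the agent
    bias_flags: list[str]    # populated by bias-aware testing
    required_guessing: bool  # True if the agent lacked the data it needed

CONFIDENCE_THRESHOLD = 0.75  # assumed value

def needs_human_review(output: AgentOutput, high_stakes: bool) -> bool:
    """High-stakes work always goes to a teacher; otherwise flag the risky cases."""
    if high_stakes:
        return True
    return (
        output.confidence < CONFIDENCE_THRESHOLD
        or bool(output.bias_flags)
        or output.required_guessing
    )

draft = AgentOutput(text="Suggested comment bank", confidence=0.62,
                    bias_flags=[], required_guessing=False)
print(needs_human_review(draft, high_stakes=False))  # True: confidence is below the threshold
```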

People & Practice

Professional Learning & Academic Integrity

For LearnIQ to work, teachers, students, and families need time and space to learn with it. Professional development and clear guardrails help AI support strong habits rather than replace them.

  • Regular, practical workshops and coaching for staff on safe, effective classroom use
  • Shared language with students about when and how AI can be used for planning, drafting, and reflection
  • Guidance that protects academic honesty while still allowing meaningful experimentation
  • Attention to equity and access so that AI support does not depend on devices or subscriptions at home
  • Support for accessible formats such as clear structure, transcripts, alternative media and simplified explanations where needed

A School Day with LearnIQ

Below is a simple way to picture LearnIQ in action. Select a role and see how the same agentic layer supports different stakeholders across a school day.

Choose Your Journey

See how LearnIQ adapts to serve teachers, students, parents, and school leaders with context-aware, personalised support.

What Success Looks Like

Success for LearnIQ should feel obvious in the corridors and in the data. We track this through a blend of quantitative metrics and qualitative stories, across teachers, students, and operations. The figures below are illustrative goals for a mature pilot. Final targets would be co-designed with divisions as part of implementation.

  • Teacher Time Regained – Illustrative goal: average hours per fortnight shifted from admin to planning and feedback
  • Student Agency – Illustrative goal: rise in students reporting "I can plan and track my learning"
  • Pilot Retention – Illustrative goal: percentage of pilots adopted into regular practice after evaluation
  • Operational Friction – Illustrative goal: reduction in avoidable timetable clashes and duplicate administrative tasks

Methodology

How We Measure Impact

Below are sample indicators. The AI Specialist works with divisions to agree local versions and check them regularly through a continuous improvement cycle: Pilot → Evidence → Adjust → Scale.

Quantitative Data

  • Time logs from teachers
  • Usage analytics from agents
  • System efficiency metrics

Qualitative Feedback

  • Student voice surveys
  • Teacher focus groups
  • Parent feedback sessions

Continuous Review

  • Termly impact reports
  • Ethics committee reviews
  • Community consultation

Foundations & Inspiration

LearnIQ draws on established research and enterprise AI frameworks, adapted thoughtfully for K–12 education. We prioritise pedagogy over hype, evidence over marketing.

Technical Foundation

Google Gemini & Agentic AI

Kaggle's Agentic AI course provides the technical architecture: orchestrator agents coordinating specialist agents with narrow, defined tasks. LearnIQ adapts this for educational contexts.

Enterprise Model

Microsoft WorkIQ Concepts

Microsoft's Frontier Model ideas, which place multiple specialised agents behind a unified interface, inspired LearnIQ's structure. LearnIQ adapts these concepts thoughtfully from corporate to educational settings.

Pedagogical Grounding

SAS Learning Framework

LearnIQ starts with SAS2027, DSLOs, and proven pathways like Catalyst and Quest. Technology serves pedagogy, not the other way around. Human judgement remains central.

⚠️ Important Context

LearnIQ is a concept framework, not a deployed system. This site demonstrates how SAS might thoughtfully integrate AI whilst maintaining our values. It represents research, planning, and vision, not vendor promises or operational reality. Any implementation would require extensive pilot testing, community consultation, and rigorous ethical review.

Continuous Adaptation

LearnIQ is not a one-time implementation. It's a continuous cycle of piloting, gathering evidence, adjusting based on feedback, and scaling what works.

The Process

Pilot → Evidence → Adjust → Scale

1. Pilot – Start small with volunteer teachers and students. Test one agent, one pathway, one use case.
2. Gather Evidence – Collect both quantitative data and qualitative stories. What worked? What didn't? Why?
3. Adjust – Refine based on feedback: change prompts or boundaries, or abandon what doesn't serve learning.
4. Scale – Only expand what's proven valuable. Scale gradually with ongoing support and monitoring, and review agents regularly as new tools, regulations, and community feedback emerge.

Stakeholder Involvement

Who Shapes LearnIQ?

  • Teachers – Co-design pilots, provide feedback, define boundaries
  • Students – Share experiences, suggest improvements, test usability
  • Parents – Voice concerns, ask questions, shape transparency standards
  • Leaders – Align with strategic goals, allocate resources, ensure sustainability
  • Alumni – Offer external perspective, share university/workplace insights

Feedback Channels

How We Listen

Regular surveys, focus groups, open forums and a public feedback portal give the SAS community an ongoing voice in how LearnIQ develops. Families also receive clear explanations of what AI can and cannot see or do, and practical guidance for safe, age-appropriate use at home. Transparency is built into every stage.