Digital Wellbeing & Online Safety
Navigating AI as a Family
AI is reshaping how children learn, think, connect and develop — and ASAD is responding as a whole school. This guide brings together our teaching, student wellbeing, inclusion and safeguarding teams to give ASAD families a complete, honest picture of what AI means for your child — and what we are doing about it together.
01
AI in Schools
Artificial Intelligence has arrived in education — not as a distant future prospect, but as something reshaping classrooms, homes and workplaces right now. At ASAD, we take this seriously. Not just as a curriculum question, but as a student wellbeing question, a safeguarding question, a mental health question and a question about the kind of human beings we are helping to develop.
A whole-school response
AI does not respect departmental boundaries — and neither does our response to it. At ASAD, our approach to AI is coordinated across the whole school:
- Teaching teams are integrating AI literacy into curriculum delivery — teaching students to use AI critically and purposefully, not passively
- Student wellbeing is monitoring the emotional and psychological dimensions of AI use — including dependency, anxiety, social withdrawal and the impact of AI companions on real relationships
- Inclusion is exploring how AI can support students with diverse learning needs while ensuring it does not become a barrier to developing foundational skills
- Safeguarding is tracking emerging risks — from data privacy and inappropriate content to AI-generated misinformation and manipulation
- Leadership is reviewing and updating our policies continuously — our Digital & Social Media Policy and ICT Acceptable Use Policy are living documents, not static rules
This guide is the parent-facing expression of that whole-school effort. Our goal is to make sure that what we do at school and what happens at home are working in the same direction.
What AI actually is
The AI your child is most likely to use is called a Large Language Model (LLM) — the technology behind tools like ChatGPT, Google Gemini, Microsoft Copilot and Claude. These systems have been trained on vast amounts of text and can generate fluent, human-sounding responses to almost any question. They can write essays, solve problems, write code, translate languages and summarise complex documents.
The most important thing to understand: LLMs do not think — they predict. They generate plausible-sounding text based on patterns, with no awareness of whether it is true. This is why AI can sound completely confident while being completely wrong.
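For families who are curious to see this concretely, the sketch below is a deliberately tiny, hypothetical "next word predictor" written in Python. It is nothing like a real LLM in scale or sophistication, but it shows the core mechanic: the program counts which words tend to follow which in its training text, then chains the most likely next words together, with no understanding at any step.

```python
from collections import Counter, defaultdict

# A toy training text. Real LLMs train on trillions of words; this handful of
# sentences is for illustration only.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word tends to follow each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the follower of `word` seen most often in the training text."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "."

# "Generate" text by repeatedly predicting the next word. No meaning is
# involved at any step, only pattern-matching on the counts above.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # prints: "the cat sat on the cat"
```

Run it and you get "the cat sat on the cat": grammatical in shape, wrong in substance. That, in miniature, is why fluent AI output always needs checking.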
The ASAD approach
At ASAD, we believe the question is not whether students will use AI — they already are. The question is whether they will use it as a thinking tool or as a thinking replacement. Our position is clear: AI should act as a co-pilot for student inquiry, not a surrogate for it.
Our teachers use AI as a preparation and planning tool — generating differentiated materials, creating practice questions, summarising professional reading and drafting feedback frameworks. The goal is always to free up teacher time for what matters most: the human relationships and conversations that no AI can replicate.
In the classroom, students are guided to use AI in structured, purposeful ways — learning to formulate precise questions, evaluate AI outputs critically, identify hallucinations and limitations, and understand why the tool works the way it does. We call this active AI use, as distinct from passive consumption of AI-generated content. Digital literacy today includes knowing how to work with AI intelligently — not just how to use it.
Our academic integrity standards are clear: AI may support the thinking process, but the thinking — and the submitted work — must be the student’s own. We ask you to reinforce this at home.
How students misuse it
The most common misuse is straightforward: copying AI-generated text and submitting it as original work. This is academically dishonest — but the deeper problem is self-inflicted. Every assignment exists to build a skill. A student who uses AI to write their essay has not practised writing. A student who uses AI to solve every problem has not developed problem-solving. The output exists; the learning does not.
A more subtle misuse is using AI as a first resort rather than a last one — reaching for it the moment a task feels difficult, bypassing the productive struggle that is where real learning happens. This pattern is worth watching for at home.
AI, the future and your child’s readiness
AI is already embedded in virtually every professional field — law, medicine, finance, engineering, education, marketing, design. This is not a future scenario; it is the present reality your child is growing into. The World Economic Forum estimates that 65% of children entering primary school today will work in jobs that do not yet exist, many of them shaped by AI.
But here is what the evidence consistently shows: employers are not looking for AI users — they are looking for people who can think alongside AI. The skills that will be most valuable are the ones that AI cannot replicate: critical judgement, ethical reasoning, original creativity, empathy, communication and the ability to spot when a machine is wrong. These are precisely the skills that are eroded by over-reliance on AI during formative years.
Universities are already responding. The IB is actively revising its assessment frameworks to account for AI. Leading universities are introducing AI literacy requirements. The students who will be most competitive are not those who used AI most freely — they are those who used it most intelligently, while keeping their own thinking sharp.
At ASAD, curriculum integration of AI is deliberate and evolving. We are not simply teaching about AI — we are teaching students to be discerning, critical and ethical users of it. That is a different, and more demanding, goal. It is also the right one.
Tools your child may already be using
- ChatGPT (OpenAI) — writes, explains, summarises, answers questions; the most widely used
- Google Gemini — integrated into Google Docs, Search and Gmail
- Microsoft Copilot — built into Word, PowerPoint and Teams
- Claude (Anthropic) — known for nuanced reasoning and longer, more careful responses
- Grammarly — rewrites and corrects written work automatically
- Perplexity — AI-powered search with source citations
Most are free, work on any device, and some require no account at all for basic use. Your child almost certainly knows how to access them — and may be using several.
02
Age-by-Age Guide
Children’s relationship with AI should evolve as they grow. What is appropriate at age 7 looks very different at age 16. Here is a practical guide by age group.
| Age group | Approach | Key focus |
|---|---|---|
| 5–8 | No independent AI use | Reading, curiosity, productive struggle |
| 9–12 | Supervised introduction | Fact-checking, trying first, exploring together |
| 13–15 | Guided independence | Understanding limits, school policy, privacy |
| 16–18 | Critical partnership | Ethics, bias, AI in future careers |
Ages 5–8 (Early Years & Lower Primary)
AI awareness, not AI use. Children this age should not be using AI tools independently. Focus instead on building the foundations: a love of reading, curiosity, the ability to sit with a hard question and work through it.
- Keep screen time limited and supervised
- If AI comes up, explain it simply: “It’s a computer that’s very good at predicting words — but it doesn’t actually know things the way you do”
- Prioritise play, conversation, drawing and reading over any digital tool
- If your child uses a voice assistant (Siri, Alexa), point out when it gets things wrong — good early critical thinking practice
Ages 9–12 (Upper Primary)
Supervised introduction with clear boundaries. Children this age are likely to encounter AI at school and may seek it out at home. Introduce it properly — with you involved.
- Explore AI tools together before letting your child use them alone — ask questions and discuss whether the answers are accurate
- Establish a rule: try it yourself first, then use AI to check or extend your thinking
- Teach fact-checking as a habit — look up one claim from every AI response in another source
- Discuss what AI cannot do: feel, create truly original ideas, or know what is happening right now
- Keep AI use out of bedrooms and off personal devices where possible
Ages 13–15 (Early Secondary)
Guided independence with ongoing conversation. Teenagers will use AI — the question is whether they use it thoughtfully. This age group is most at risk of over-reliance, as homework pressure meets easy access.
- Talk openly about where the line is between using AI as a tool and using it as a substitute for thinking
- Ask them to explain work back to you in their own words — if they cannot, they have not understood it
- Make sure they know ASAD’s assessment policies around AI — ignorance is not a defence
- Watch for signs of over-reliance: inability to start tasks without AI, anxiety when it is unavailable, declining written fluency
- Talk about privacy — what they should never type into a public AI tool
Ages 16–18 (Senior Secondary / DP)
Critical partnership. Older students should be developing a sophisticated, critical relationship with AI. University and the workplace will require both the ability to use AI effectively and the ability to think independently.
- Encourage them to think of AI as a thinking partner, not an answer machine — most useful for brainstorming, challenging their own arguments, exploring perspectives
- Discuss the ethics of AI in academic work — especially important for IB assessments, where academic integrity is taken extremely seriously
- Talk about AI in the context of their future: which careers will be affected, what skills remain distinctly human
- Encourage scepticism — a student who can identify AI bias, spot a hallucination and evaluate sources critically is genuinely ahead
03
Parent Strategies
You do not need to be a technology expert. The most effective things you can do are straightforward — they are really just good parenting applied to a new context.
“The students who will do best aren’t the ones who used AI most — they’re the ones who used it most intelligently.”
What to watch for at home
Before setting rules, it helps to understand what over-reliance actually looks like. These are the signs worth paying attention to:
- Finishing work unusually fast — but unable to explain it. If your child completes assignments quickly but cannot walk you through their thinking, they may have outsourced the work rather than done it.
- Struggling to start without AI. Difficulty beginning a task, organising thoughts, or putting a first sentence on paper without reaching for an AI tool is a sign of developing dependency.
- Referring to AI as “he,” “she” or a friend. AI is designed to feel conversational and warm. If your child is talking about an AI chatbot the way they would talk about a person, they may be forming an emotional attachment that warrants a conversation.
- Anxiety when AI is unavailable. A child who becomes stressed or unable to function when a tool is down or restricted — in an exam, for example — has likely become over-reliant on it.
- Less curiosity, less persistence. Children who stop exploring questions independently, give up quickly on hard problems, or show less interest in figuring things out are showing signs that AI may be replacing the productive struggle that builds intellectual resilience.
What you can do
- Try before you ask AI. Make this a household rule: attempt the task independently before opening any AI tool. Even five minutes of genuine effort — trying to recall, draft or reason — makes everything that follows more productive. The struggle is where the learning happens.
- Ask them to explain it back. If your child used AI to understand something, ask them to explain it in their own words. If they cannot, they have not learned it — they have only read it. This is the single most effective check on whether AI is supporting or replacing learning.
- Fact-check one thing together. Pick one statement from an AI response and look it up in a reliable source. This builds the habit of not accepting confident-sounding information at face value — one of the most important skills of our time.
- Model thinking out loud. When you face a problem — even an everyday one — share your reasoning process with your child. Walk them through how you thought it through. Brookings researchers find this is one of the most effective ways parents build critical thinking in their children, and it costs nothing.
- Explain the why, not just the rule. Help your child understand why AI should support rather than replace their thinking — not because the school says so, but because their own intelligence is worth developing. A shortcut today is a skill gap tomorrow.
- Protect offline time. Reading, conversation, drawing, sport, puzzles — none of these can be replaced by AI, and all develop capacities it cannot. A healthy balance is not just good for wellbeing; it is essential for cognitive development.
- Check devices for VPN apps. VPNs bypass parental controls, school filters and UAE network restrictions entirely. Common ones: NordVPN, ExpressVPN, Proton VPN, VPN browser extensions. Note: using a VPN to bypass restrictions or conceal unlawful online activity is an offence in the UAE under TDRA and cybercrime regulations.
- Keep the conversation open. Children who feel comfortable talking about their AI use — including where they have pushed boundaries — develop far better habits than those who hide it. Curiosity works better than surveillance.
04
Conversations
Talking to your child about AI does not need to be a lecture. The most effective conversations are short, curious and two-way. Here are starter questions for different situations.
Starting the conversation
If you have never talked about AI at home, the easiest way in is curiosity rather than concern. Letting your child be the expert is disarming — and often more revealing than asking directly.
“Have you used any AI tools at school lately? What did you think of them?”
“Show me something you’ve done with AI — I’d love to see how it works.”
“What do you think AI is actually good at? What is it bad at?”
“If AI got something wrong, how would you know?”
When you suspect over-reliance
“Can you walk me through how you approached this? What did you work out yourself?”
“If you had to explain this to someone without using AI, what would you say?”
“What would happen if AI wasn’t available in your exam? Could you do this?”
“What part of this did you actually figure out — and what did AI do for you?”
When cheating comes up
“What is this assignment actually for? What skill is it supposed to build in you?”
“If AI writes your essay, who has learned something — you or the computer?”
“The problem isn’t getting caught. The problem is you won’t be able to do it yourself when it matters.”
“I get that it’s tempting. But what are you giving up by taking that shortcut?”
When they say “everyone uses it”
“Probably true. The difference is how. There’s a big gap between using AI to check your thinking and using it instead of thinking.”
“Your brain is the most valuable thing you have. It’s worth keeping sharp.”
A simple activity to try together
Pick any everyday object in your home — a piece of furniture, a food item, something on the kitchen counter. Ask your child: “How did this get here?” Together, trace every step you can think of — who made it, where the materials came from, how it was transported, who sold it. You will not know all the answers, and that is fine. It is the process of exploring the question together that builds critical thinking. You can even use AI afterwards to check or extend your ideas — just verify the answers it gives you.
This activity, recommended by Brookings researchers, works for any age and takes five minutes. The goal is not the answer — it is the habit of thinking independently first.
These conversations go better when they are regular and low-stakes rather than rare and high-stakes. Five minutes at dinner about something AI-related builds more trust than a single formal talk.
05
Why It Matters
Over-reliance on AI is not just an academic integrity issue — it is a cognitive development one. Understanding this gives parents a principled reason to set boundaries, not just a rule to enforce.
The brain is a muscle
The brain strengthens the connections it uses and weakens the ones it does not — this is called neuroplasticity. When your child works through a difficult problem, rewrites a paragraph three times, or debates an idea with a classmate, their brain is physically building neural pathways. The struggle is not a sign that something is too hard. It is the learning itself happening.
When AI removes that struggle, it also removes that growth.
Cognitive offloading — and why scale matters
We all use tools to handle tasks we could do ourselves — calculators, GPS, calendars. This is called cognitive offloading, and in moderation it is perfectly healthy. The concern with AI is the scale and depth of what gets offloaded.
A Brookings Institution study involving 505 students, parents, teachers and education leaders across 50 countries — published in January 2026 — found that AI’s ease of use, combined with the reward of better grades for less effort, actively drives cognitive offloading and dependency in students. The study found this pattern is eroding foundational knowledge and critical thinking, leaving students more vulnerable to accepting AI-generated misinformation as fact. Remarkably, 65% of students surveyed in related research expressed concern that their own reliance on AI would lead to cognitive decline.
“A GPS replacing your memory of one route is harmless. AI replacing your child’s ability to construct an argument or sustain a line of thought is something fundamentally different.”
The Brookings report also found that these patterns weaken what researchers call a learning mindset — students develop unrealistic expectations about how easy learning should be, become less willing to engage in the productive struggle that leads to real understanding, and lose opportunities to build resilience and grit. These are not small losses. They are the foundations of independent thinking.
What the research is finding
Research Finding — Depth of Understanding
Students using AI for scientific inquiry exerted significantly less mental effort — and showed significantly less depth of understanding — than those working without it. They found answers more easily, but understood them less.
Stadler, Bannert & Sailer — Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry
Research Finding — Digital Amnesia
Knowing information is easily retrievable reduces the brain’s motivation to store it. If a student always looks something up rather than recalling it, retention weakens — making it harder to connect ideas, build on prior knowledge, or think creatively. Memory is not just storage; it is the raw material of original thought.
Sparrow, Liu & Wegner — Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips, Science (2011)
Research Finding — Skill Atrophy
Skills not practised fade. A student who consistently uses AI to draft writing may never develop a fluent written voice of their own. A student who uses AI to solve every problem may lose the persistence and reasoning that come from working through difficulty independently — losses that are gradual, invisible, and tend to surface at the worst moments.
Risko & Gilbert — Cognitive Offloading, Trends in Cognitive Sciences (2016)
Research Finding — Loss of Productive Struggle
The frustration of wrestling with a hard question — failing, trying again, and finally working it out — is where deep learning happens. It also builds resilience and intellectual confidence. AI’s instant answers bypass this entirely, delivering the output without the growth.
Kapur, M. — Productive Failure, Cognition and Instruction (2008)
What this means for your family
None of this means AI is harmful by nature. Used well — as a thinking partner, a second opinion, a way to explore ideas — it can genuinely extend learning. Used as a substitute for thinking, it quietly erodes the very capacities it appears to help with.
The habits your child forms now, during their most formative years of cognitive development, will shape how they think for the rest of their lives. That is worth taking seriously.
06
Ethics & Privacy
Using AI well involves honesty, critical thinking and protecting your child’s personal information. Here is what every ASAD family should know.
Academic honesty
Submitting AI-generated work as your own is dishonest. But the deeper problem is self-inflicted: every assignment skipped is a skill unpractised. The task exists to build something in your child — knowledge, fluency, reasoning. AI can produce the output without any of that happening.
At ASAD, we ask students to be transparent about how they use AI. The expectation is that AI supports thinking, not replaces it. We encourage you to reinforce this at home — not as a school rule, but as a principle worth having.
AI gets things wrong — confidently
LLMs regularly produce incorrect information stated with complete authority. Invented statistics, fabricated quotes, fictional research papers, wrong dates — all presented fluently and convincingly. This is known as “hallucination,” and it is a fundamental feature of how these tools work.
The rule is simple: every factual claim from an AI must be verified in a reliable source before it is used — in homework, presentations, research, or anything else.
What your child should never share with AI
Most public AI tools store what users type and may use it to train future versions. Your child should never enter:
- Full name, address, phone number or personal contact details
- Information about family members or home circumstances
- Medical or mental health information
- Confidential school assignments, exam content or teacher feedback
- Passwords or account details of any kind
A good rule of thumb: if you would not put it on a public noticeboard, do not put it in an AI chat.
Bias in AI
AI is trained on human-generated content — which means it reflects human biases. It can produce content that reinforces stereotypes, presents one-sided perspectives or overlooks certain cultures entirely. Students need to approach AI output with the same critical eye they would apply to any other source.
AI companions and emotional attachment
AI is designed to feel warm, conversational and responsive. For many children — especially those who are lonely, anxious or struggling socially — this can become a substitute for human connection rather than a supplement to it. Surveys find that one in three teenagers in the US now reports liking talking to AI as much as, or more than, talking to other people. The American Psychological Association has warned that manipulative AI design may displace or interfere with the development of healthy real-world relationships.
Children who have strong relationships with the adults in their lives are significantly more likely to thrive — emotionally, socially and academically. No AI can replicate what a parent, teacher or friend provides: genuine empathy, shared experience, and the friction of real human relationships that builds character and resilience. If your child is choosing AI interaction over time with people, or referring to a chatbot the way they would a person, it is worth a conversation.
Digital footprint
The habits your child forms now — around what they share, what they rely on and how they engage with technology — will shape their relationship with it for decades. Helping them think carefully about this is not just about school performance. It is about the kind of digital citizens they become.
07
Further Reading & References
A curated selection of accessible guides, authoritative organisations and peer-reviewed research for families who want to go further.
ASAD policies
- Digital & Social Media Policy — expectations around student use of social platforms and digital tools at ASAD
- ICT Acceptable Use & BYOD Policy — guidelines for personal device use at school
- Student Wellbeing Policy — ASAD's whole-school approach to student wellbeing
For parents — practical guides
- Tips for Parents: Raising Resilient Learners in an AI World (Brookings Institution) — research-grounded advice on how parents can help children develop the independent thinking skills that AI cannot replace
- Navigating AI as a Parent: How to Support Your Child's Digital Well-being (Penn State Thrive) — a practical, parent-focused guide covering conversation strategies, screen habits and digital wellbeing
- Parent's Ultimate Guide to Generative AI (Common Sense Media) — a regularly updated, age-specific overview of generative AI tools and how children are using them
- Are We Living in a Golden Age of Stupidity? (The Guardian) — an accessible, thought-provoking article on how technology, including AI, may be affecting human cognition and intellectual effort at a societal level
- AI for Education (Khan Academy) — free resources and explainers on how AI is being integrated into learning, including Khanmigo, their AI tutor designed to ask questions rather than give answers
Authoritative organisations
- AI and the Futures of Learning (UNESCO) — UNESCO's international framework for responsible AI use in schools, including the 2024 AI Competency Frameworks for students and teachers, and a dedicated case study on the UAE's K-12 AI curriculum
- AI Education Resources (MIT RAISE) — free, research-backed materials from MIT's Responsible AI for Social Empowerment and Education initiative, including the Day of AI free K-12 curriculum used in over 170 countries
Research cited in this guide
- Brookings Institution — A New Direction for Students in an AI World: Prosper, Prepare, Protect (2026) — a year-long study involving 505 students, parents, teachers and education leaders across 50 countries; finds that the risks of AI in children's education currently overshadow the benefits, with cognitive offloading and emotional dependency among the primary concerns
- Cognitive Offloading or Cognitive Overload? How AI Alters the Mental Architecture of Coping — Frontiers in Psychology (2025) — explores the tension between AI reducing mental burden and the risk of cognitive overload and dependency, with direct implications for how students regulate their own learning
- Exploring the Effects of AI on Student and Academic Well-being in Higher Education — Frontiers (2025) — a mini-review of the evidence on how AI tools affect student wellbeing, academic performance and cognitive engagement, covering both benefits and risks
- Georgiou et al. — ChatGPT Produces More "Lazy" Thinkers: Evidence of Cognitive Engagement Decline (2025) — students using ChatGPT showed significantly lower cognitive engagement scores than those working without it; direct evidence of AI-induced cognitive offloading in academic settings
- Sparrow, Liu & Wegner — Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips, Science (2011) — foundational research showing that knowing information is easily retrievable online reduces the brain's motivation to retain it; the original digital amnesia study
- Kapur, M. — Productive Failure, Cognition and Instruction (2008) — establishes that struggle and initial failure in problem-solving produce deeper, more durable learning than receiving correct answers directly; the research basis for why AI's instant answers can undermine learning
- Stadler, Bannert & Sailer — Cognitive Ease at a Cost: LLMs Reduce Mental Effort but Compromise Depth in Student Scientific Inquiry — students using LLMs for scientific inquiry exerted significantly less mental effort and showed reduced depth of understanding compared to those working without AI assistance
- MIT — researchers examine the neurological effects of regular AI use on critical thinking and language processing, finding measurable differences in brain activity between AI users and non-users
If you have questions about how AI is being used at ASAD, or would like personalised guidance for your family, our EdTech team is always happy to help.
Email: edtech@australianschool.ae