Artificial intelligence is no longer a theoretical issue for schools. It is already shaping how students research, write, study, translate language, and complete assignments – often in ways that are difficult for educators to see, verify, or interpret.
As a result, teachers and school leaders are being asked to make decisions they were never fully trained for, often without clear policies, shared norms, or time to slow down and think. The challenge is not simply whether AI is being used, but how educators respond responsibly when its use is suspected, uneven, or contested.
That challenge is at the center of an AI in the Classroom simulation designed to reflect a realistic and increasingly common situation: a teacher reviewing student work that appears far more advanced than expected and wondering whether artificial intelligence played a role. While the moment may seem narrow, it quickly expands into a web of instructional, relational, ethical, and leadership decisions.
What becomes clear is this:
AI in schools is not primarily a technology problem. It is a judgment problem.
The Real Decision-Making Challenges Educators Are Facing
AI complicates teaching not because it replaces professional judgment, but because it forces judgment into the open.
Educators routinely face tensions that cannot be resolved with a single rule or tool, including:
- Academic integrity vs. relationships: How do teachers uphold standards while preserving trust with students and families?
- Consistency vs. professional discretion: How much autonomy should individual educators exercise when policies are still evolving?
- Punitive vs. restorative responses: Is suspected misuse best addressed through discipline, learning conversations, or redesign?
- Efficiency vs. accuracy: Should AI detection tools be used when their limitations and biases are well documented?
- Tradition vs. adaptation: Should long-standing assignments remain unchanged, or be redesigned to reflect a new reality?
These decisions unfold over time – across grading periods, parent communications, administrative check-ins, and collaborative planning. Each choice sends signals about fairness, expectations, and values.
Importantly, no option is consequence-free.
What Research Helps Us Understand About AI in Classrooms
Current research offers several important insights that align closely with educators’ lived experiences:
- AI detection tools are unreliable as definitive evidence, with false positives and disproportionate impact on certain student populations.
- Students use AI in varied ways, from shortcutting work to translating content, generating practice questions, or organizing ideas.
- Blanket bans tend to be ineffective, often driving AI use underground rather than fostering transparency or learning.
- Assessment design matters more than surveillance, particularly approaches that emphasize process, reflection, and explanation.
A recent journal article by Professors Azukas and Gibson deepens this understanding by framing AI use not as a single instructional issue, but as an interaction across multiple systems – students, teachers, classrooms, schools, and broader institutional expectations. Decisions made at one level often create ripple effects at other levels, sometimes unintentionally.
This systems perspective helps explain why well-intentioned classroom decisions can escalate into schoolwide or district-level challenges if they are not aligned or supported.
Why Simulation-Based Learning Fits the Nature of the Problem
Simulation-based learning places participants inside realistic scenarios and asks them to make decisions, observe consequences, and reflect – without real-world harm.
In the context of AI, simulations allow educators and leaders to:
- experience ambiguity without risking student trust or community relationships
- test different responses and see how consequences unfold over time
- practice difficult conversations with students, parents, and colleagues
- reflect on trade-offs rather than search for “right” answers
- build confidence navigating uncertainty before stakes escalate
The Azukas and Gibson research highlights that learning environments are most effective when they support empathy, experimentation, and reflection. Simulation aligns naturally with this approach by allowing participants to inhabit multiple perspectives and understand how decisions feel and function across roles.
Importantly, simulations do not stop at discipline. They push participants toward instructional redesign, clearer communication, and shared norms – mirroring how sustainable solutions tend to emerge in real schools.
The Social Dimension: Why Shared Experience Matters
One of the most overlooked aspects of AI-related challenges in education is professional isolation.
Many teachers and administrators quietly ask themselves:
- What if I get this wrong?
- What if parents push back and I don’t have support?
- What if my colleagues handle this differently and I’m exposed?
- What if I don’t fully understand the technology myself?
When uncertainty stays private, it often turns into stress, avoidance, or rigid decision-making.
Simulation-based learning changes this by creating shared experience. When educators work through the same scenario together – synchronously or asynchronously – they realize that others are grappling with the same questions and fears.
This collective engagement helps participants:
- feel less alone
- surface assumptions safely
- hear how others reason through similar dilemmas
- develop shared language for difficult conversations
In this way, simulations transform isolated anxiety into collective sensemaking.
Naming Fear So It Can Be Faced
The Azukas and Gibson research underscores that innovation in complex systems often stalls when fear remains unacknowledged. AI introduces fears related to credibility, fairness, accountability, and loss of control.
Simulation-based discussions provide a structured, low-risk space for educators and leaders to name those fears explicitly:
- fear of inconsistency
- fear of being perceived as permissive
- fear of being blamed for unclear outcomes
Once named, fear tends to lose some of its power. It does not disappear – but it becomes manageable. That shift is critical for thoughtful decision-making.
A Light Framework for Thinking About AI Decisions
The Azukas and Gibson research introduces a framework that integrates design thinking with systems thinking, emphasizing how decisions are shaped by context, constraints, feedback, and interaction across levels.
Rather than prescribing solutions, the framework encourages educators and leaders to ask:
- Who is affected by this decision, and how?
- What assumptions am I making?
- What feedback loops might this create?
- How might this play out over time?
Simulation-based learning operationalizes this framework without requiring participants to master its theory. By experiencing consequences and reflecting on them, participants naturally engage in the kinds of systems-aware thinking the framework describes.
Readers interested in a deeper exploration of this framework are encouraged to consult the original research directly.
From Classrooms to Leadership and Governance
As AI-related challenges persist, they inevitably extend beyond individual classrooms into leadership and governance.
School leaders must address inconsistency, parent concerns, and alignment. District leaders must balance guidance with professional discretion. School boards increasingly face questions related to academic integrity, equity, and community trust.
Research on school boards shows that governance is especially strained when issues are fast-moving, emotionally charged, and poorly understood at the classroom level. Board-level simulations help bridge this gap by providing shared context – without blurring the line between governance and management.
Practicing Judgment Before It Counts
Artificial intelligence will continue to evolve faster than policies or tools can keep pace. What schools cannot afford to neglect is the development of professional judgment under uncertainty.
Simulation-based learning does not promise certainty. It offers something more durable: practice navigating complexity, engaging others, and making principled decisions when trade-offs are unavoidable.
Just as importantly, it reminds educators, leaders, and board members that they are not navigating this moment alone.
That shared experience – grounded in reflection, dialogue, and courage – is how systems move from fear to capacity, and from reaction to thoughtful leadership.
References
- Azukas, M. E., & Gibson, D. (2025). Co-Intelligence in the Classroom: The DOT Framework for AI-Enhanced Teaching and Learning. AI Enhanced Learning, 1(2), 269–293. Association for the Advancement of Computing in Education (AACE).
- Marsh, J. A., Bridgeforth, J., et al. (2025). California School Boards: Navigating Democracy in Divided Times. USC Rossier EdPolicy Hub.
- International Society for Technology in Education (ISTE). (2023–2024). AI in Education Guidance and Digital Citizenship Resources.
- Council of Chief State School Officers. InTASC Model Core Teaching Standards.
