I didn’t want to do this.
I’d already lived through one tech revolution in education—survived it, adapted, even thrived. The internet changed everything. I rode that wave all the way to cyber school.
But let’s be clear: it wasn’t smooth.
I started in brick and mortar. We had laptop carts that came with exactly two working chargers and a waiting list longer than a Taylor Swift presale. Half the websites we needed were blocked “for safety.” The other half took eight minutes to load because Tim in the back corner was streaming dubstep on a proxy site.
We were told to “integrate technology” into our lessons but also… no phones, no YouTube, no chat, no music, and definitely no fun.
I learned to teach with digital duct tape while the district learned to create a safe environment for kids and teachers. And it was hard.
But I figured it out. Eventually, we all did. Security measures relaxed, kids became comfortable, teachers began to embrace the technology, and it evolved into something that worked. Really well.
So yeah—I wasn’t thrilled when AI came knocking. Another revolution? Another round of “just try this new tool”? Another year of tech PD led by someone who’s never had to teach 32 kids with 11 working Chromebooks?
I was good. My students were passing. My instruction worked, and the technology was giving the kids opportunities I didn’t have when I was in school in the ’90s. All was good!
Now they’re rolling out something even bigger than the internet! And I’m a cyber teacher with kids using AI to cheat their way through classes.
No thanks. I didn’t want to make room in my brain for something new and complicated. It had already reached its capacity for tech, and AI was making my job HARDER.
Resistance First, Tools Later

I didn’t just roll my eyes in private—I made it a lesson plan.
One of my quarterly assessments? Oral defense video submission.
Prompt: “Convince me AI isn’t the beginning of the end for humanity.”
Nobody passed. (Metaphorically, of course. Rest assured, GPAs are safe in my room.)
But they tried. And honestly? Some of their arguments were better than mine.
They talked about how AI could advance medicine—catching cancers in scans that even top radiologists might miss. They talked about how it could personalize treatments based on genetic profiles, help nonverbal kids communicate, streamline disaster relief, preserve languages, write songs, tell stories, and, yes—help teachers.
One student said, “It can think faster than we can.”
Another said, “Maybe it’s not the end of humanity—it’s just the beginning of something different.”
It was the most alive I’d seen them in weeks.
And it cracked something open in me. Not enough to trust it yet—but enough to stop pretending I could ignore it.
So I went all in, thinking I’d use AI to build my way through the resistance. I figured if I could create something truly interactive, students would connect more. Cyber doesn’t leave a lot of room for presence, but what if I could simulate it?
I started designing chat-based assessment tools using SchoolAI and MagicSchool—digital Socratic partners that helped kids apply what they were learning.
If they truly understood the content, they should be able to demonstrate it conversationally, right?
That was the idea.
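(A quick aside for the code-curious: SchoolAI and MagicSchool are point-and-click builders, so nothing I did required programming, and nothing below is their actual machinery. But if you wanted to prototype a “digital Socratic partner” from scratch, here’s a minimal sketch, assuming the OpenAI Python SDK with an API key in your environment. The model name and system prompt are my own stand-ins.)

```python
# A minimal sketch of a "digital Socratic partner."
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set.
# This is NOT how SchoolAI or MagicSchool work under the hood.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The design choice that matters: a system prompt that questions, never lectures.
SOCRATIC_SYSTEM = (
    "You are a Socratic partner for a student reviewing a topic. "
    "Never explain the answer. Ask one probing question at a time, "
    "build on the student's last response, and gently surface contradictions."
)

def socratic_turn(history: list[dict], student_message: str) -> str:
    """Add the student's message to the conversation and return the bot's next question."""
    history.append({"role": "user", "content": student_message})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[{"role": "system", "content": SOCRATIC_SYSTEM}] + history,
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# One turn of the conversation:
history: list[dict] = []
print(socratic_turn(history, "I think the theme of the story is friendship."))
```

The whole idea lives in that system prompt: the bot asks, it never tells.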
And it worked. Sort of.
They liked the bots at first. It was different. Fun. Novel. And some of them actually clicked deeper into the learning than they would have through traditional assignments.
But others just… clicked.
Clicked to get through. Clicked to get the points. Clicked because that’s what the speed of technology in any learning environment has hard-coded them to do: complete, not consider.
That’s when I knew: this wasn’t about the tech. It was about how the tech framed the learning.
So I paused. Again.
Not to give up. But to reset.
I stopped building things that looked human and started building things that demanded humanness from the learner.
That’s when I brought AI back in—not as the expert, not as the teacher. As the assistant.
I prompted with purpose:
- “Show me the gaps in this logic.”
- “Give me a model Socratic question based on Bloom’s Level 4.”
- “If this were a real-world simulation, what would the learner need to prove?”
I wasn’t chasing scripts anymore. I was chasing scaffolds.
AI became my thinking partner. It helped me spot where I’d overcomplicated things. It helped me clarify intent. It let me reserve my human energy for the moments where presence mattered.
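If you want to try the same scaffolding prompts outside a classroom tool, here’s a rough sketch, again assuming the OpenAI Python SDK and an illustrative model name. It just runs the three prompts above against a lesson draft and hands back the critiques; none of this is MagicSchool’s or SchoolAI’s actual implementation.

```python
# A rough sketch: running "assistant, not expert" prompts against a lesson draft.
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are a teaching assistant, not the teacher. "
    "Critique the lesson draft; do not rewrite it."
)

# The same purposeful prompts from the list above.
PROMPTS = [
    "Show me the gaps in this logic.",
    "Give me a model Socratic question based on Bloom's Level 4.",
    "If this were a real-world simulation, what would the learner need to prove?",
]

def critique(lesson_text: str) -> list[str]:
    """Run each scaffold prompt against the lesson draft and collect the replies."""
    replies = []
    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat-capable model works
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": f"{prompt}\n\nLesson draft:\n{lesson_text}"},
            ],
        )
        replies.append(resp.choices[0].message.content)
    return replies

if __name__ == "__main__":
    for reply in critique("Students will watch a video, then take a quiz."):
        print(reply, "\n---")
```

Notice what it doesn’t do: it never writes the lesson. That was the point. Keep the AI in the assistant’s chair.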
That’s when things started to change.
Learning with AI Is Learning, Not Automation

Turns out, I was learning how to teach again.
And not in some inspirational, “growth mindset” poster kind of way. In a real, ugly, honest way.
This is what no webinar will tell you: learning with AI isn’t convenient.
It’s not plug-and-play. It doesn’t make anything easier—at least not at first.
It’s frustrating. Incomplete. Iterative. It holds up a mirror you didn’t ask for and forces you to see your blind spots—fast.
But if you treat it like a coach instead of a cheat code, something shifts.
You start seeing where your lessons rely on charisma instead of clarity.
You realize how much of your instruction was powered by presence—not by structure.
You notice that some of your “content” is just filler. That you’ve been doing things a certain way for so long, you forgot to question them.
And that’s when I finally understood: learning—real learning—requires metacognition and critical thinking.
Not just for students. For me.
No matter what I was trying to learn—video production, course flow, AI prompts, behavior patterns—it always came back to those two skills.
- What am I doing?
- Why am I doing it?
- How do I know it’s working?
- What do I do when it’s not?
Metacognition and critical thinking weren’t part of the course design. They were the course design.
The Course Is Built. But It’s Not Done

Once I started playing with AI, I brought the idea to my boss. I told her, “I’d like to try to design a cyber class—something that uses AI to make the learning feel more real.”
And she said the magic words every real leader should say: “Build it.”
So I did.
I built a full cyber digital literacy course using everything I had learned up to that point. I pulled in tools from MagicSchool and SchoolAI. I built rooms, bots, conversations, media, modules—all of it.
The course was done.
But I wasn’t.
Because once the bots were live and the kids were using them, I realized something was off.
They worked—but they didn’t work well.
Some students lit up. They connected, they questioned, they leaned in. But others went full autopilot. They found ways to click through, just like they always do.
That’s when I knew: I hadn’t built for deeper learning.
I had built to check the “interactive” box.
The AI did what I told it to do. That was the problem.
I didn’t know what I should’ve been telling it.
And because I was still learning how I learn with AI, I kept finding gaps in the work I’d already finished.
So the course lives.
But it lives in beta.
Every time I evolve, the design needs to evolve with me.
That’s not failure. That’s the process.
Here’s What This Is—and What You Can Do

I’m not just writing about AI. I’m blogging through the experience of learning with it—in real time.
The project?
I’m using AI to teach me how to build the ideal education assistant—one that actually helps teachers preserve the human core of their classrooms while leveraging AI’s power to handle the rest.
It’s not about automation.
It’s about amplification.
If you’ve ever looked at all the AI tools out there and thought, “Cool, but what does this actually mean for me?”—this blog is for you.
Join us in Teachers Using AI (Without Losing Their Minds) on Facebook.
It’s where I’m showing my work, asking the hard questions, and helping others figure it out without the burnout.
Follow on Instagram, X, and LinkedIn.
I post raw takes, real-time updates, tool walkthroughs, and sometimes the stuff I wish someone had told me earlier.
Next Post Drops Soon: “What I Thought I Knew About Learning Before AI Showed Me Otherwise.”
Spoiler: I wasn’t just wrong—I was tired. And AI called me on it.