I was lucky. I grew up before tech took over childhood, without a phone in my back pocket.
I got my first cellphone in my 20s, after college. My laptop weighed more than my newborn. Now I'm raising kids in a world where voice assistants, adaptive learning platforms, and generative AI like ChatGPT are becoming normal, and sometimes expected.
I use AI in my work.
I even teach other moms how to use it to manage the mental load of parenting.
But even I’m asking: Is this okay? Is this smart? Where is the line?
This blog isn't a coding guide or a how-to for blocking screen time. It's a real conversation about the ethical tensions we're all feeling but not all talking about.
Let’s get the obvious out of the way. AI isn’t neutral. It’s built by humans, trained on human content (not all of it helpful or safe), and it reflects the values of whoever builds it.
According to the Center for Humane Technology:
AI models are trained on large datasets without transparency.
AI can reinforce bias; its output reflects whatever it was fed.
AI tools are already being used in schools without parent understanding or input.
A 2024 survey from Pew Research showed that 70% of U.S. adults feel AI will impact jobs — but only 18% feel it will help more than harm. In other words: we’re anxious about the role of AI. And that’s not irrational.
As an occupational therapist, I’ve always believed in supporting the whole human experience. If AI makes us faster but more disconnected, what are we really gaining?
You can turn off Alexa.
You can skip iPads.
But AI is already in:
search results
music and video suggestions
speech and reading tools
curriculum design
toys
tracking apps
school platforms
This is what I call “passive AI exposure.” You didn’t choose it, but it’s there. So now what?
We need to guide it, not pretend it isn't happening. The ethics aren't just about AI "out there"; they're about how we model it in our homes. Do we want our kids to think smart tools mean thinking less? To talk to tech like a servant?
Or do we want to show them how to stay grounded, smart, and human, even while using powerful tech?
I'm screen-free with my kids. They don't use AI, watch TV or movies, or use tablets or any other screens. We believe they should have a chance to grow their brains without tech dollars competing for their attention (or their addiction).
But I do use AI to manage my work and home life more calmly.
I use it to help write:
birthday party invites
reminder checklists
dinner menus based on what’s in my pantry
self-regulation scripts
homeschool rhythm plans
bedtime story prompts with my kids’ names in them
I don’t hand the parenting over to the tool. But I do let it lighten the load so I can show up more present.
And I talk openly with my kids about how it works.
We use language like, “That’s something the computer suggested — but let’s decide what’s best.”
Or, “The robot doesn’t know our feelings, so let’s use our own heart here.” This is the emotional literacy + tech literacy approach I wish more of us were hearing about.
Before we ask, “Should I use AI?” let’s ask:
Does this support or replace my values?
Will it reduce pressure or increase dependence?
Am I using this with intention or just defaulting to convenience?
Will my kids see me using it well?
You don’t have to become anti-tech to have a conscience. And you don’t have to be AI-obsessed to admit it can help. But you do have to lead the conversation at home — no one’s going to do it for you.
That’s exactly what my AI for Moms workshop is about.
It’s not about jumping on the bandwagon.
It’s about taking the wheel — and helping your family feel calmer, not more rushed or robotic.
Because the real ethical issue isn't using AI; it's ignoring what's already here and letting it decide for us.
Want to keep your values while still getting things done?
Join the upcoming AI for Moms Workshop — live + replay access available.
Click here to register
Teaching moms to use tools like ChatGPT to simplify tasks, ease mental overload and enjoy motherhood.