For the past few years, I’ve split my time between two worlds that rarely talk to each other:
the education system (where I teach economics across several universities) and the AI/tech world, where I’m building a tool called GradeProAI.
What surprised me most was not the mechanics of the technology…
but the emotion around it.
If you talk to teachers—even very seasoned ones—you’ll hear a mix of anxiety, frustration, and sometimes outright panic about AI. Some worry that students will stop thinking. Others see AI as “cheating,” or fear the loss of creativity, or imagine a future where machines replace human judgment.
But something interesting happened during this project:
The more I talked to educators, the more I realized their fear isn’t really about AI.
It’s about losing the emotional core of teaching.
A student in a TEDx Youth talk said something that stuck with me:
“We don’t just learn through words. We learn through emotions.”
And she’s right. Think about your own favorite teacher growing up — it wasn’t just what they taught, but how they made you feel that shaped your learning.
That’s why AI tools often trigger pushback: education has always been deeply human, and anything that feels “cold” or “automated” gets rejected emotionally before anyone evaluates its usefulness.
Where this gets relevant to Indie Hackers
When we started building GradeProAI, our idea was simple:
Teachers spend countless hours typing repetitive feedback.
Online classes create huge volumes of written assignments and discussion posts.
Most instructors are overwhelmed, especially adjuncts with multiple classes.
But they still want to give meaningful feedback.
The surprise was how quickly conversations moved from workflow pain to deeper philosophical fears.
That shaped how we built the product.
We realized that if an AI tool was going to be accepted by teachers, it couldn’t feel like a “replacement.”
It had to feel like a support tool that preserves the human part of teaching.
So we designed the workflow to:
keep the instructor fully in control
let them edit everything before sending
provide multiple tones (formal, friendly, encouraging)
focus on speeding up feedback, not generating grades
anonymize student names for unbiased evaluation
integrate cleanly with Canvas/Brightspace instead of external dashboards
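For the technically curious, the core of that workflow can be sketched in a few lines. This is a minimal, hypothetical illustration of the design principles (anonymize first, draft in a chosen tone, hold everything for instructor review); the names `anonymize` and `draft_feedback` are invented for this sketch, not GradeProAI's actual code:

```python
def anonymize(text: str, student_name: str) -> str:
    """Strip the student's name so evaluation isn't colored by identity."""
    return text.replace(student_name, "[Student]")

def draft_feedback(comment: str, tone: str) -> dict:
    """Wrap a base comment in one of several tones.

    The draft is never sent automatically: it stays 'pending_review'
    until the instructor edits and approves it.
    """
    openers = {
        "formal": "Please note:",
        "friendly": "Nice work so far!",
        "encouraging": "You're making real progress.",
    }
    return {
        "draft": f"{openers[tone]} {comment}",
        "status": "pending_review",  # instructor stays in control
    }

# Example: anonymize a submission, then draft a friendly comment.
clean = anonymize("Maria's essay argues the point well.", "Maria")
feedback = draft_feedback("Consider a stronger thesis statement.", "friendly")
```

The point of the sketch is the shape, not the code: the AI's output is always a draft object waiting on a human, never a message sent on the teacher's behalf.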
In other words:
We didn’t build an AI grader.
We built an AI assistant that frees teachers to teach.
What we learned from talking to hundreds of instructors
Most are not anti-AI — they’re anti-loss-of-humanity.
Good AI tools reduce burnout; bad ones erode trust.
Adoption skyrockets once teachers feel THEY control the output.
Universities move slowly, but individual instructors move fast.
Emotional reassurance is as important as technical capability.
This influenced everything—from interface choices to our upcoming “ZenMode” (automatic LMS insertion) to tone controls and transparency features.
Why I’m sharing this here
Indie Hackers is where founders talk honestly about what works and what doesn’t.
For anyone building in AI or EdTech, here are my takeaways:
Don’t underestimate the emotional component of product adoption.
The education market isn’t small—it’s just trust-heavy.
User interviews matter more when your user group is skeptical.
Sometimes fear from users is actually insight about product direction.
And personally, this project showed me that AI’s biggest opportunity in education isn’t automation.
It’s restoring time to teachers so they can be more human.
If anyone here is building in EdTech, AI, productivity, or workflows, I’d love to hear what you’re learning from your user groups too.
Happy to share more about our journey, the Chrome extension build, LMS challenges, or teacher onboarding if it’s helpful.