I’m building a planning system where your day organizes itself.
And I’ve hit a problem I didn’t expect.
Most people say they want automation.
But when a system actually assigns time to their tasks, they hesitate. They want control back.
Example:
If you add “Call John tomorrow”
Would you expect:
A) The system schedules it at a specific time
B) It stays unscheduled until you choose
The whole product direction changes based on this.
My goal isn’t to build another task manager.
It’s to build something that actually helps people execute in real life, where plans constantly shift.
Curious how you think about this.
I think most people think they want full control, but what they actually want is a system that makes smart suggestions without locking them in—so auto-scheduling (A) works best if it’s easy to adjust and clearly explains why it chose that time.
B is the right default, but the magic is in how you transition them toward A over time. The resistance isn't to automation itself — it's to losing legibility. People want to know why something was scheduled at 2pm, not just that it was. If the system explains its reasoning and lets users override easily, trust builds fast. The goal is to make automation feel like a suggestion from a smart assistant, not a command from a machine.
the issue usually isn't the automation itself - it's accountability. when something goes wrong and someone asks 'why was this scheduled at 2pm?', users need an answer. systems that schedule silently leave people holding the bag.
This resonates a lot. I've been building a real-time flight data API and hit the exact same paradox with developer users. They ask for auto-normalization of airport codes, automatic retries, smart caching — all automation. But the moment something unexpected happens in the data pipeline, the first question is always "what did the system do and why?"
What shifted things for us was treating every automated decision as an observable event. The API logs why it made each choice, and developers can query that reasoning audit trail. They rarely look at it — but knowing they can dissolved most of the resistance. The automation didn't change; the perceived control did.
I think the key insight is: people resist automation they can't explain to themselves or their boss when something goes wrong. It's not really about control — it's about accountability. If they can point to a log and say "the system chose X because Y," they're comfortable letting it run. If it's a black box, they'll always want the override.
Feels like the real issue here is autonomy vs trust. People want automation conceptually, but execution touches personal context the system can’t fully see yet. A hybrid approach where the system suggests a scheduled time (and explains why) might help users gradually hand over control as confidence grows. You’re basically designing trust, not just planning.
The real problem is AI itself. It won't just call John. It will ask "Do you want me to schedule this?" — and now you have another decision to make.
Automation that requires your approval at every step isn't automation. It's just a fancy reminder app.
we hit something similar internally. we tried auto-assigning video review deadlines to editors based on project timelines. on paper it made sense. in practice editors hated it because it felt like the system was micromanaging them. what ended up working was showing a suggested time but letting them drag it around. basically "here's what i'd recommend" instead of "here's what you're doing." people want the thinking done for them, they just don't want to feel like the decision was made without them. i'd lean toward option A but with a really easy way to move it.
I build automation tools for myself (a CLI for scraping social platforms) and I made the opposite choice — kept everything as explicit commands instead of schedules, even when cron would've been trivial. The reason is trust: until I've run something 20 times manually and seen exactly how it behaves, I don't want a scheduler doing it without me watching. For your app I'd start at B and let users promote individual task types to A as they build trust. "I trust you to schedule calls" is a much easier yes than "I trust you to schedule my whole day."
I build automation tools for a living and see this constantly. The pattern is: people want automation of the outcome, not automation of the decision. They want the report generated, but they want to decide what goes in it. They want the schedule optimized, but they want to approve each change.
What works is making the automation visible and reversible. Show what it did and why, let them undo with one click. Once they see it making the same choices they would've made, trust builds and they stop overriding. But you have to earn that trust one decision at a time.
The planning space is especially tricky because schedules feel personal. "You should work on X now" hits different than "here's a sorted list of options."
Building an AI content tool for beauty influencers, and this tension is exactly what I run into daily. Users love the idea of "generate my Instagram captions automatically" but the moment they see fully automated output, they say "this doesn't sound like me."
What shifted things for us: framing it as "we give you a polished 90% draft in 10 seconds, you add your voice." Same output, but now they feel like collaborators rather than spectators. Resistance dropped significantly.
I think Option B is the right default — keep the human in the loop for the final decision, but do all the heavy lifting for them. The goal is to make them feel capable and in control, not replaced. Trust gets built one good suggestion at a time.
This is the classic "autonomy paradox" — people want outcomes automated, not decisions. When a system assigns a specific time to "Call John," it's making a judgment call the user hasn't explicitly delegated. The resistance isn't about automation itself, it's about perceived loss of agency over prioritization.
What's worked for me: let users set "soft constraints" first (morning vs afternoon, high vs low energy tasks) before the system schedules anything. It creates a sense of collaboration rather than a system overriding them. Option B feels safer initially, but the goal should be gradually earning trust until Option A feels natural.
Seems like, if the system is making prioritization decisions without any input, it’s going to feel off no matter how accurate it is.
The soft constraints idea is interesting. Feels like that gives the system a baseline to work from instead of guessing.
This tension between wanting automation and wanting control is something I deal with constantly while building my indie memo app. I learned the hard way that the best feedback on this exact issue comes from community conversations, not surveys or analytics.

When I asked users in a feedback form whether they wanted auto-categorization, 80% said yes. When I shipped it, the most engaged users immediately asked how to turn it off. The disconnect became clear only after I started having real conversations in niche communities — Reddit threads, small Slack groups, even DMs with power users here on IH.

What they actually wanted wasn't automation. They wanted the feeling of being organized without the cognitive load of organizing, but they still wanted to feel like they made the decisions. The framing that worked for me was "suggestion mode" — the app prepares everything but waits for a single tap to confirm. It's technically more work than full auto, but users perceive it as less work because they feel in control.

The community angle here matters too: I've noticed that the founders who solve this well are the ones deeply embedded in their user communities, having regular conversations about workflow, not just shipping features based on assumptions. Have you tried running small community-based user interviews to understand where the trust threshold is for your specific audience?
That “more work but feels like less” point is real. The interaction itself seems to be what gives people that sense of control.
And yeah, this thread alone has already been more useful than anything I could’ve gotten from a survey.
I haven’t done structured community interviews yet, but starting to think that’s where the real answers are.
i build an AI content tool so i deal with this literally every day. people love the idea of "automate my blog writing" but the second you give them a fully automated post they go "hmm this doesn't sound like me" and bounce.
what changed everything for me was repositioning from "we write your blog" to "we give you an 80% draft in 90 seconds, you make it yours." same tool, same output, but now the user feels like a collaborator not a spectator. resistance basically disappeared overnight.
i think the pattern is: automate the boring stuff (transcription, formatting, structure) but let humans own the creative part. that's where the trust lives.
Not replacing the user, just getting them most of the way there. That ownership piece seems to be where everything changes. It's the balance I need to find. Thanks.
exactly. "getting them most of the way there" is the whole game. the moment users feel like they're collaborating with the tool instead of being replaced by it, everything clicks. good luck finding that balance for your use case.
This is the core tension. We built cognitive offloading systems (SOUL, MEMORY, AISHNA) that organize collaboration across agents and humans. But we resist full automation for exactly this reason.
The system suggests next steps. It doesn't decide them.
The agent asks questions. It doesn't answer for you.
The framework organizes decisions. You make them.
Why? Because the moment you surrender agency, the system stops being a tool and becomes a cage.
The best planning systems don't automate decisions. They surface the right information at the right moment so you can decide faster.
Your system should schedule "Call John" as a suggestion, not a command. Then let the human say yes, no, or "reschedule to Thursday."
That's the difference between automation and augmentation.
I get that, and I think that’s where most systems have landed.
But I’m curious about the edge of that.
At what point does constantly having to approve or decide actually become friction?
If the system always waits for you, does it really reduce the effort, or just shift where the effort happens?
Hi, I’m Judith ☘️
First of all …. I absolutely love this !
Rooting for you … I smiled as I read your post - describing the human condition.
We do want both - I remember working in sales a million years ago-
The one thing we had to do was give them 2 choices, ex: we have an opening today at 11:30 and another one at 2pm, which one works for you? Years later parenting, you can wear the red shirt or the blue shirt - years later making a dentist appointment, I hear … Well, how about Monday at 10am, or does Tuesday at 4:30pm work better? We do like our choices. But the real reality is we just want to get everything done. Some things, like the dentist, or the blue shirt, don't REALLY MATTER … However, WE feel like WE matter.
Sending love. WE want to feel like WE matter , that what we think, feel & say matters. Keep going - let your light shine, you got this !
Enjoy the beauty of being YOU.
JUDITH ☘️
Judith this really hit.
“We want to feel like we matter” is exactly what I’m seeing.
It’s interesting because like you said, a lot of these decisions don’t actually matter that much… but the feeling of having a say does.
The idea of giving two choices instead of forcing one feels like a completely different experience.
Curious what you think about this:
If a system said
“Call John tomorrow at 11:30 or 2:00”
instead of just assigning one… would that feel right to you?
This is the core tension in every AI tool right now — autonomy vs perceived control. We ran into the exact same thing building InstaCards (AI-generated social posts). Users love the output but initially want to 'approve everything.' What worked for us: defaulting to a suggestion/preview mode first, letting them feel in control, then surfacing a 'just post it' toggle once trust is built. People don't resist automation — they resist feeling surprised by it. Option B is almost always the safer default to build trust before graduating users to A.
This is really well put.
“People don’t resist automation, they resist feeling surprised by it” feels exactly right.
Starting in suggestion mode and earning the right to automate more over time is a really interesting approach.
Curious how you think about this
Do you see that as a temporary onboarding phase, or something that should always be user controlled, like a spectrum they can move along?
There's a subtlety here that I think gets missed — the resistance isn't always about control, it's about context switching cost.
When someone manually schedules a task, they're also mentally preparing for it. That 3-second act of picking a time slot does real cognitive work: it forces you to think about what's before and after, how long it'll take, whether you'll have the energy for it.
Auto-scheduling skips that mental step. So when the time comes, you look at your calendar and think "wait, why is this here?" — not because the system chose wrong, but because you never processed it.
I'd actually lean toward something like a morning briefing approach. System proposes the full day, you scan it in 30 seconds and approve. You still get that cognitive processing step, but the heavy lifting of fitting pieces together is done for you.
The difference between a good assistant and an annoying one is whether they hand you a draft or just do the thing. Drafts invite collaboration. Unilateral action invites pushback.
This is a really sharp observation.
The idea that scheduling isn’t just placement but a form of mental preparation makes a lot of sense. That “why is this here?” feeling is something I’ve seen but hadn’t fully connected to that missing step.
The morning briefing idea is interesting because it keeps that processing loop while still offloading the heavy lifting.
Curious how you’d think about this
Would you expect that approval to happen once per day, or more dynamically as things change?
Excellent - loving your input - our cognitive processing !
Some of us are slow processors ( 🤔 like me )
ok, so maybe too much overthinking at times.
Certain appointments may take some of us , more time.
Our own frustrations come into it when we continue to see things as they were.
Let’s look at how things are now - and how they could be.
My best to you.
Judith ☘️
This is one of the most common patterns I saw across 14 years of product management. The gap between stated and revealed preferences is huge. The progressive trust model works best in my experience: start with suggestions, let users accept with one tap, then graduate to full automation once they've seen it get things right 5-10 times. People don't resist automation itself. They resist losing visibility into why decisions were made. Show the reasoning, not just the result.
This is a great way to frame it.
The “show the reasoning, not just the result” piece feels especially important. I think a lot of what people call resistance is really just lack of visibility.
And I like your point about trust being built through repeated wins. That feels very real. You don’t give up control upfront, you give it up after the system proves it understands you.
Now I’m wondering where that line is.
At what point does it stop feeling like “I’m approving this” and start feeling like “yeah, just handle it”?
i've been seeing this in property management — people say they want automation, but once it starts making decisions (like choosing vendors or prioritizing issues), they hesitate.
curious where people draw the line between “assistive” vs “fully automated”? i do not think people are ready for full automation until there is trust, but how do you build trust without letting the system do its thing?
Yeah this is the tension.
People want the outcome, just not the moment where it starts deciding for them.
I think trust builds when it keeps making the same call you would’ve made. After a few of those, handing it off feels a lot easier.
Where do you usually see people hesitate most? When money is involved or just higher-stakes decisions?
I think people want automation with veto power, not automation that feels like losing agency. Scheduling someone else’s day is high-trust behavior. I’d make it progressive: first suggest a slot, then let them one-click accept, and only auto-schedule after the system has learned their preferences and earned confidence. In your example, I’d start with an unscheduled task plus a recommended time. Reversibility matters a lot here — if users know they can easily change it, automation feels helpful instead of intrusive.
“Automation with veto power” nails it.
The reversibility piece stands out too. If it’s easy to change, it lowers the pressure a lot.
Feels like people don’t mind automation as long as they’re never stuck with it.
Do you think that’s enough on its own, or does it still need to prove itself over time?
This is the classic automation paradox. People want the outcome of automation but not the loss of agency. The answer is usually C - suggest a time but let them confirm with one tap. You get the benefit of automation without the resistance. The real insight here is that you discovered this by building, not by planning. That's the part most people skip - ship something, watch what users do, then adjust. Sounds like you're doing exactly that
Yeah that’s exactly what it’s starting to feel like.
Option C seems to hit that middle ground where it actually helps without taking over.
And you’re right, this only showed up once people started using it. On paper it felt obvious to just automate it.
Now I’m trying to figure out if that middle ground is the end state, or just a step toward something more automated over time.
Happy to hear how that goes, good luck!
Sounds good, thanks!
I think the gap is usually between "automation of things I find tedious" vs "automation that touches things I feel ownership over." People want the boring parts automated. They resist automation that feels like it's making decisions for them, even small ones. Saw this a lot building internal tools at product companies — users would request automation, then ignore or override it because it removed a step they'd unconsciously used to feel in control.
People want the annoying parts gone, but still want to feel involved in the decisions.
And yeah, that step people override… it probably isn’t useless. It’s doing something for them mentally.
Now I’m wondering which parts of planning actually need to stay, even if they look inefficient on the surface.
It's not about trust or control. It's the dopamine. Manually scheduling feels like doing something. Auto-scheduling feels like the work disappeared, which is unsettling. Let them drag-confirm the suggestion so it still feels like their move.
The idea that the action itself gives a sense of progress makes a lot of sense.
Drag to confirm is smart too. Still feels like you did something, even if the system set it up.
Do you think that feeling is enough, or do people still want to understand why it chose that time?
The gap between "I want automation" and "I'm comfortable with it acting" usually comes down to consequence reversibility. Tasks with easy-to-undo outcomes — reorganizing a list, tagging something — get adopted immediately. Tasks with external consequences, like sending a message or booking something another person will see, trigger hesitation even from users who explicitly asked for automation.
One pattern worth trying: instead of offering A vs B as a product-wide setting, let the system learn trust per task category. Blocking off a personal focus window might earn full auto-trust on day one. Scheduling something that involves another person probably needs the suggest-first phase to run longer. Treating automation as a spectrum by action type rather than a binary toggle tends to result in a much higher overall automation rate without the pushback.
What kind of tasks are most users adding when they first try the product?
The “external consequence” piece is real.
Anything involving another person just feels different, even if it’s simple.
I like the idea of trust being tied to the type of action instead of one global setting.
Still early so I don’t have a strong pattern yet, but most people seem to start with simple personal tasks.
Same dynamic shows up in product design. Customers say they want performance, but buy on feel. They say they want minimalism, but choose the product with the recognizable logo. What people say they want and what they actually respond to are almost always different — which is why I stopped doing surveys and started watching behavior. The gap between stated preference and revealed preference is where the real product insights live.
I’m starting to see that play out in real time.
On paper, everyone leans toward automation. In practice, they hesitate the second it actually takes action.
Definitely feels like behavior is the only thing that tells the truth here.
exactly. stated preference and revealed preference are almost never the same thing. people say they want more control until it actually runs without them — then they want it back. the gap is where the real product insight is. what's the use case you're seeing it on?
This hits close to home — I ran into the same wall building automation for my indie app. Users explicitly asked for auto-send, then disabled it the first time something went out at the wrong moment.
The pattern that helped: make the automation visible before it acts. A brief pending window — even 30 seconds — that users can cancel changes the framing entirely. It goes from "the app did something" to "the app asked permission non-intrusively." Each time they see the system would have made the right call and don't intervene, trust accumulates naturally.
For a scheduling product specifically, a morning read-only preview — "here's what I'd plan for today" — before actually committing anything might be the bridge. Have you tried separating the suggest phase from the commit phase in your UX?
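The pending-window idea above could be sketched roughly like this. Everything here is illustrative: the `PendingAction` name, the 30-second default, and the callback shape are assumptions, not an existing API — just a minimal way to show "act after a delay unless the user cancels."

```python
# Minimal sketch of the "pending window" pattern: an action fires
# after a short delay unless the user cancels it first.
# All names and defaults here are hypothetical.
import threading

class PendingAction:
    """Run `action` after `delay_seconds` unless cancel() is called."""

    def __init__(self, action, delay_seconds=30):
        self.cancelled = False
        self._action = action
        self._timer = threading.Timer(delay_seconds, self._fire)

    def start(self):
        # Begin the countdown; the user sees a "pending" state in the UI.
        self._timer.start()

    def cancel(self):
        # User intervened: the action never runs, trust is preserved.
        self.cancelled = True
        self._timer.cancel()

    def _fire(self):
        # Countdown elapsed with no intervention: commit the action.
        if not self.cancelled:
            self._action()
```

The point of the design is that every uncancelled window is an implicit "the system would have gotten it right" signal, so trust accumulates without asking the user to click approve every time.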
It’s not just about control, it’s about having a moment before something becomes real.
The morning preview feels like the same concept at a larger scale.
I haven’t fully separated suggest and commit yet, but that’s exactly the direction this is pointing.
I've run into this exact tension building AI-driven tools. What I've found is that people don't actually resist automation -- they resist opacity. When the system does something and they can't see why, that's when the pushback starts.
The pattern that worked for us: show the reasoning, not just the result. Instead of "I scheduled your call at 2pm," try "2pm looks open and you tend to do calls after lunch -- want me to lock it in?" Same automation, but now the user feels like they taught the system something rather than the system deciding for them.
Option B as default is probably right, but I'd add one thing: track which suggestions get accepted vs rejected. After enough signal, you can start auto-scheduling the categories where acceptance rate is above 90% and keep suggesting for everything else. That way automation expands organically based on demonstrated trust rather than a binary toggle.
What's the current split looking like in your testing -- are most people choosing A or B?
Yeah this lines up with what I’m seeing.
The same action feels completely different depending on whether you understand why it happened.
And I like the idea of automation expanding based on acceptance instead of a toggle. That feels way closer to how trust actually builds.
Don’t have enough data yet for a solid split, but early users tend to hesitate when it fully takes over.
The resistance usually isn't about the automation — it's about the loss of the feeling of control. Users want the outcome automated, not the decision. Framing it as 'we handle the boring part, you confirm the important part' tends to reduce the resistance significantly. Keeps the human in the loop at the moment that matters.
Yeah that feels right. Automating the outcome is fine. Automating the decision is where it starts to feel off.
If the system handles everything up to that point and lets you confirm, it feels a lot better.
Seen this exact pattern play out at scale in Meta Ads.
Advertisers will say they want automated targeting — they want the algorithm to find the right people. But when Meta's Advantage+ Audience starts expanding beyond their manually set parameters, most of them immediately contract it back. "You're spending outside my target audience" even when the expanded audience has better ROAS than the manual one.
The psychological gap: automation feels like losing visibility, not gaining leverage. Even when the results are objectively better.
The UX pattern that works is what the other comment here described — show the result of what the automation would do, let the user accept it with one tap, and only graduate to fully hands-off after the user has seen the system make good decisions a few times. Meta learned this too: they went from manual CPC → smart bidding suggestions you approve → Target CPA → Advantage+, each step removing one more manual dial but always showing what would have happened before removing the control.
For your scheduling product: B is probably the right default, but add "Would you like me to schedule this?" as a one-tap accept. Once someone accepts suggestions consistently for 7-10 days, offer a setting to switch to full auto. Don't ask on day one.
That Meta example is perfect. Even with better results, people still pull back if they feel like they’ve lost visibility.
The step-by-step removal of control makes a lot of sense. Showing what would happen before actually doing it feels like the key.
First-gen AI scheduling tools died on this exact problem. Users want an organized day but read auto-scheduling as losing agency. Default to B with a suggested time that is one tap to accept. After a few weeks of accepted suggestions, users opt into full auto mode on their own. That trust ramp is the product.
The trust ramp being the product itself feels right. It’s less about choosing automation upfront and more about getting people comfortable enough that they choose it later.
Michael, initiatives to improve and automate tasks are in high demand. My question, for myself, would be: if I have my work automated this way, why would I want it for talking with other people, like friends, relatives, or those close to me? I say this for myself. If I keep rigid schedules and spend 8 or 9 hours doing many tasks at once, I would also want reminders outside the office.
Thanks for the opportunity to participate.
Best of luck
Yes, I understand what you're saying.
It seems the problem isn't the help itself, but how structured it starts to feel outside of work.
Do you think it's more about keeping it light, or is there a point where any kind of automation starts to feel like too much?
I hope I understood and translated correctly, since I don't speak Spanish.
my whole product is built on automation, but the integral part of it is that i use a human-in-the-loop mechanism, which keeps a place for human supervision.
if you're less busy you can check out my stuff... really helpful for marketing and distribution
I see what you’re pointing at.
What I’m thinking about here is slightly different. Not really a human oversight system, more something that works alongside your own workflow and helps structure your day.
In that context, how would you expect it to behave?
Would you want it making decisions by default, or staying more assistive until you step in?
I think we humans sometimes overcomplicate things... yeah! Of course we want to simplify work and make things easier, but the fear of corporate replacing them makes people defensive. If I can automate everything, that means one day an automation that reduces my value at work is going to be made.
That’s an interesting angle.
Do you think that hesitation shows up even in personal tools where there’s no job risk?
For example, if something automatically structured your own day, would it still feel like giving up control or just removing effort?