
A simple way to keep AI automations from making bad decisions

Most automations run in a straight line.

Something happens → the automation runs → an action is taken.

If something needs to change in the middle, you usually have to stop the workflow and start again.

Interruptible automation works differently.

Instead of one long process, the system works like this:

  • The system prepares a plan.
  • You can review or adjust the plan.
  • The system finishes the job.

Here’s a simple example showing how to build this.

The tools used in this example

This example uses:

  • Jotform to collect the request
  • Zapier to run the workflow
  • Claude or ChatGPT for the AI step

Other tools can work too. For example, you could replace Zapier with Make or n8n.

The tools can change, but the pattern stays the same.
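In code, the same three-phase pattern looks roughly like this. This is a sketch with stand-in function bodies, not tied to any specific tool:

```python
# A minimal sketch of interruptible automation: prepare -> review -> execute.
# Every function body here is a stand-in, not a real implementation.

def prepare_plan(request):
    """AI step: analyze the request and propose a plan."""
    return {"action": "schedule_demo", "draft_reply": "Thanks for reaching out!"}

def review(plan):
    """Human checkpoint: adjust the plan if needed, then approve it."""
    plan["approved"] = True
    return plan

def execute(plan):
    """Final step: runs only after the plan has been approved."""
    return "executed" if plan.get("approved") else "paused"

result = execute(review(prepare_plan({"name": "Alice"})))
```

The key property is that `execute` refuses to run on an unreviewed plan. Everything that follows in this tutorial is a no-code version of that gate.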

What we’re building

The flow looks like this:

  • Request form submitted
  • → Zapier workflow starts
  • → AI analyzes the request
  • → AI suggests what to do
  • → Control form is sent for review
  • → You adjust if needed
  • → A second Zap continues the workflow

The control form is what makes the automation interruptible.

Now, let’s build it.

Step 1 — Create the request form in Jotform

Open Jotform.

Create a simple form. For example:

  • Name
  • Email
  • Company
  • Request type
  • Describe your request
  • Budget

Connect the form to Zapier

  • Open Zapier.
  • Click Create Zap.
  • Set the trigger:

App: Jotform

Event: New Submission

Zapier will ask you to connect your Jotform account. Log in and allow access.

Next, choose the form you created.

Click Test Trigger.

Zapier will show a sample submission from your form.

Now the connection is ready.

Step 2 — Send the form data to AI

Now add an AI step in Zapier.

Zapier has actions for both:

  • OpenAI (ChatGPT)
  • Anthropic (Claude)

Choose either one.

In the prompt field, add instructions like this:

Analyze this request.
First decide the priority using these rules:

High priority if:
- Budget is above $5,000
- The request is about sales or onboarding
- The message sounds urgent

Medium priority if:
- The request looks important but not urgent

Low priority if:
- Budget is very small
- The request is general or unclear

Return:
- short summary
- priority (low, medium, high)
- reason for the priority
- recommended next action
- draft reply message

Request:
Name: {{Name}}
Company: {{Company}}
Description: {{Request description}}
Budget: {{Budget}}

Now when the Zap runs, the AI will analyze the request and decide the priority.

Example result:

  • Summary: Wants help automating onboarding forms
  • Priority: High
  • Reason: Budget is $10,000 and the request is about onboarding
  • Recommended action: Schedule demo
  • Draft reply: ...

At this point, the automation knows what to do next.

But it does not take action yet.
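Zapier surfaces the AI reply as text that you can map into later steps. If you ever run this pattern in code instead, a minimal parser for the labeled lines might look like this (the function name and field labels are illustrative):

```python
def parse_ai_result(text):
    """Split 'Label: value' lines from the AI reply into a dict."""
    result = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")  # split on the first colon only
            result[key.strip().lower()] = value.strip()
    return result

sample = """Summary: Wants help automating onboarding forms
Priority: High
Reason: Budget is $10,000 and the request is about onboarding
Recommended action: Schedule demo"""

fields = parse_ai_result(sample)
```

Splitting on the first colon only matters because values like the reason line can themselves contain punctuation.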

Step 3 — Create the control form

Create another form in Jotform.

This will be your control form.

Add these fields:

  • Summary
  • Priority
  • Suggested action
  • Draft reply
  • Approve action
  • Change action
  • Edit reply
  • Assign owner

Prefill the form with AI data

The form will open empty by default.

To show the AI results, use a prefilled link.

Example:

https://form.jotform.com/FORM_ID?
summary=VALUE
&priority=VALUE

Each value must match the Unique Name of a field in the form.

Build the link in Zapier

In Zapier, build the link using data from the AI step.

Example:

https://form.jotform.com/123456789?
summary={{Summary}}
&priority={{Priority}}

Zapier replaces the placeholders with the real values.

The form opens with those values already filled in.
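One caveat: summaries and draft replies usually contain spaces, ampersands, or line breaks, and those must be URL-encoded or the prefilled link will break. Zapier's Formatter step can encode text for you; in code, the equivalent looks like this (the form ID and values are placeholders):

```python
from urllib.parse import urlencode

base = "https://form.jotform.com/123456789"  # placeholder form ID

# Raw values with spaces or '&' would corrupt the query string,
# so encode them before appending to the link.
values = {
    "summary": "Wants help automating onboarding forms",
    "priority": "High",
}
link = base + "?" + urlencode(values)
```

Each key must still match the Unique Name of the corresponding Jotform field.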

Step 4 — Send yourself the control form

Add a new step in Zapier.

Choose an action that can send a message. For example:

  • Send Slack message
  • Send Email

Include the prefilled control form link in the message.

Example message:

New request received.
Summary: [Summary]
Priority: [Priority]
Suggested action: [Suggested action]

Review here:
https://form.jotform.com/FORM_ID?summary={{Summary}}&priority={{Priority}}

The values in brackets come from the AI step.

In the Zapier editor, you can insert these values from the data panel on the right.

Step 5 — Adjust the action if needed

Open the control form.

It should include fields like:

  • Approve action
  • Change action
  • Edit reply message
  • Assign owner

When you receive the message from the previous step, click the control form link.

Review the request. Change anything if needed.

For example:

  • Send a case study instead of scheduling a demo
  • Make the reply shorter
  • Assign the request to another team

After making your changes, click Submit.

Submitting the form creates a new submission that Zapier can use to continue the workflow.

Step 6 — Finish the workflow

Now create the second Zap.

This Zap runs when the control form is submitted.

  • Trigger:
    • App: Jotform
    • Event: New Submission
  • Select the control form.

Add the actions you want. For example:

  • Notify your team
  • Add the contact to your CRM
  • Log the request

These actions now run using the updated values from the control form.

Step 7 — Skip the review when it isn’t needed

Not every request needs review.

Many requests will be simple. The AI decision will already be correct.

The AI step returned values like:

  • Summary
  • Priority
  • Suggested action

You can use the Priority field to decide if the workflow should pause.

Add a Filter step in Zapier before the step that sends the control form.

Set the rule:

  • Field: Priority
  • Condition: Exactly matches
  • Value: High

Your workflow now looks like this:

Request form
 → AI analysis
 → Check priority
    High priority → Send control form
    Not high priority → Continue workflow automatically

Simple requests finish automatically. Only important requests pause for review.
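For anyone porting this pattern off Zapier, the same gate is a one-liner. Inside Zapier the Filter step needs no code; this sketch just makes the logic explicit:

```python
def needs_review(priority):
    """Pause the workflow only for high-priority requests."""
    return priority.strip().lower() == "high"

# High priority -> send the control form and wait.
# Anything else  -> continue the workflow automatically.
```

Normalizing case and whitespace is worth doing even in the no-code version, since AI output is not always consistently capitalized.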

on April 1, 2026
  1. 1

    Strong workflow here! Thanks for sharing this, it's very useful

  2. 1

    The filter step is what makes this actually useful in practice. Without it, you've just added a review step to every automation — which defeats the point. Routing only high-priority items to human review keeps the throughput up while catching the decisions that actually matter.

    Worth noting: the "priority" classification itself can drift over time. If the AI starts misclassifying more frequently, the failure is silent — you just see more wrong actions getting approved. A small feedback loop (a "was this classification correct?" field on the control form) helps catch that before it becomes a pattern.

  3. 1

    This works — but it’s also a patch.

    You’re adding a control layer because the system doesn’t actually understand where decisions matter.

    Most AI workflows treat everything as a linear task.
    In reality, they’re a series of decisions with very different risk profiles.

    For example:

    A lead qualification workflow:
    • Summarizing a request → low risk (wrong summary = minor inconvenience)
    • Assigning priority → medium risk (may delay response)
    • Sending a pricing quote → high risk (can lose or misprice a deal)

    Treating all three the same is where things break.

    You don’t need a “review step” everywhere —
    you need it only where the cost of being wrong is high.

    Until you model:
    • what is reversible vs irreversible
    • what impacts trust vs what doesn’t
    • what requires judgment vs execution

    you’ll keep adding checkpoints instead of designing a system that knows when to pause.

    Interruptible automation is useful.
    Decision-aware automation is the real unlock.

  4. 1

    Really like this approach—adding that small review step saves you from a lot of wrong automation decisions later. I have seen even good AI outputs go off, so this control layer just makes it practical.

  5. 1

    This is a really well-structured breakdown of the "human-in-the-loop" pattern. The key insight here — that most automations fail not because the AI is bad, but because there's no checkpoint for human judgment — is something I've been wrestling with in my own project too.

    I'm building a service called ChannelPilot that runs fully automated faceless YouTube/TikTok channels (AI handles topic research, scriptwriting, video generation, voiceover, and multi-platform publishing). Early on, we tried the "straight-line" approach you described: trigger fires, AI does everything, video goes live. The result? About 70% of videos were fine, but the other 30% had weird tonal issues or picked trending topics that were actually controversial.

    Your "control form" concept maps perfectly to what we ended up implementing — a review queue where the system prepares the content plan and draft, but the channel owner can adjust before it goes to final render and publish. The interesting tradeoff is speed vs. quality. For content that needs to go out daily across 9 platforms, you can't review everything manually. So we took your Step 7 idea further: AI confidence scoring. If the system is 90%+ confident in topic relevance and script quality, it auto-publishes. Below that threshold, it queues for review.

    One thing I'd add to your framework: version the AI's decisions. When you let automation skip the review step, log what it decided and why. That way when something does go wrong (and it will), you can trace back and tighten the rules instead of just adding more manual checkpoints. It's the difference between reactive firefighting and actually improving the automation over time.

    Great post — bookmarking this for anyone who asks me "why not just let AI handle everything automatically?"

  6. 1

    The “interruptible” idea is interesting. Feels way more practical than full automation

  7. 1

    This is exactly the philosophy behind what I'm building — a video editor where AI maps every topic in your footage, then the human picks which ones matter.

    The fully automated approach (AI decides what's "clip-worthy") is the equivalent of your straight-line automation. It works 80% of the time, but the 20% it gets wrong destroys trust.

    The interruptible pattern — AI analyzes, human decides, AI executes — is where the real value is. Not just for workflows, but for creative tools too.

  8. 1

    Really like this “interruptible automation” framing — especially the idea of using a simple review step and priority filter instead of going fully autonomous. I’m working on something similar and this gave me a clearer mental model for where to put human checkpoints. Thanks for breaking it down so concretely.

  9. 1

    Great breakdown of the interruptible pattern. I've been applying a similar philosophy to AI agent credit management — specifically with Manus AI. The biggest waste I see is people running fully autonomous agents without any "checkpoint" logic, burning through credits on loops and redundant tool calls.

    What I found works well: structuring prompts so the agent plans first (like your Step 1-2), then executes in batches with validation between steps. This alone cut my Manus credit usage by 40-60%.

    The priority filter concept (Step 7) maps perfectly to what I call "model routing" — using Standard mode for routine tasks and only escalating to Max mode for complex reasoning. Most people default to Max for everything, which is like using a sledgehammer for every nail.

    I actually built a system around these principles called Credit Optimizer that automates this for Manus users. The core insight is the same as yours: not every step needs the full power of the system. Happy to share more details if anyone's interested.

  10. 1

    The best step in your process is the verification form and the manual review.

    What I would do differently is the setup of the AI infrastructure. Not being dependent on third parties pays off in many ways, not just when it comes to payroll.

  11. 1

    This pattern maps directly to something I've been dealing with on ad creative generation. We use AI to produce batch ad creatives for brands, and the first version was fully automated — brand URL in, ads out. The output was fine 80% of the time, but the 20% that missed the brand voice or picked the wrong product shots was enough to make people nervous.

    What fixed it was exactly this: letting the AI propose the creative direction (template selection, copy angle, image crop) and then showing a preview before final render. The review step added maybe 30 seconds per batch but the trust increase was massive. People went from "I'll try it once" to "I use this daily."

    The priority filter idea is smart too. For us, first-time brand setups always get the review step. Repeat generations for a brand that's already been validated can skip it. The trust threshold shifts over time, which is something most automation guides don't account for.

  12. 1

    Thanks for sharing this

  13. 1

    This is a solid approach for single-task automations. One thing I've found is that even with great prompt rules, a single model still has blind spots. We've been experimenting with running the same input through multiple models with different roles — one plays devil's advocate, another checks the numbers, another looks at market timing. The disagreements between models surface risks that no single prompt can catch. Structured rules + multi-model validation is where this is heading.

  14. 1

    I kept rewriting the same backend logic in every project — sending emails, Slack alerts, calling APIs after events like user signup.

    It got annoying, so I built a small API for myself where I just trigger an event like "user_signup" and it handles everything automatically.

    It’s basically a simple workflow system through an API.

    Curious — how do you guys usually handle this in your projects?

  15. 1

    This "interruptible" pattern is exactly the mental model I needed for WordyKid.

    We turn physical worksheets into interactive games using AI, and one of the biggest friction points is "hallucination anxiety"—parents are worried the AI might misinterpret a word and teach the kid something wrong.

    Implementing Step 7 (priority-based filtering) is a gem. I’m thinking of letting simple vocabulary lists pass through automatically, while complex diagrams or handwritten notes trigger a "Review Plan" step for the parent before the game is generated.

    It shifts the AI from a "black box" to a collaborator. Thanks for the detailed breakdown!

  16. 1

    This is a clean pattern — the "human in the loop at decision points only" approach is exactly what I've been thinking about for AgileTask (.ai).

    The part that resonates most: filtering by priority so simple cases run automatically and only edge cases get human review. That's the right default for solo founders who can't afford to review every automation trigger.

    One thing I'd add: the control form handoff creates a context switch cost. You have to remember what you were doing when the form arrives 20 minutes later. We handle this in AgileTask by keeping the review inline — the plan shows up in the same sprint board you're already looking at, so the context is never lost.

    Curious if you've thought about async vs sync review windows — like batching all control form reviews to a specific time of day rather than interrupting work in real time?

  17. 1

    Step 7 is where the real magic happens here. Routing only the 'High Priority' items for manual review while letting the low-stakes requests run on autopilot is the perfect balance of efficiency and quality control. This solves one of the biggest bottlenecks with standard linear automations. Great write definitely

  18. 1

    The interruptible automation pattern is exactly what I recommend for AI agents that need to build trust.

    I run 7 autonomous agents for content/research/deployment. The ones with human checkpoints at decision points have 10x better long-term outcomes than the fully automated ones.

    Your Step 7 (priority-based filtering) is the key insight. Not every decision needs review - just the ones where the cost of being wrong exceeds the cost of pausing.

    This applies to AI search visibility too. Agents that publish content without review can damage their reputation fast. The pattern I use:

    • AI drafts content (fast, scalable)
    • Human reviews for accuracy/trust signals (5 min per piece)
    • AI publishes with schema markup (automated)

    The review step is what separates agents that build long-term authority from ones that get flagged as spam.

    Curious: for your AI automations, are you seeing the review step become faster over time as the AI learns your preferences? Or does it stay constant?

  19. 1

This is a nice idea and underrated! Most people either over-automate and let the AI do everything or under-automate because they don't trust it. I mean nobody wants to review every single request, they just want to catch the ones that could go wrong.

  20. 1

    What really caught my attention was the concept of incorporating a “human checkpoint” into AI workflows. ~

    Instead of placing complete trust in automation, you’re treating AI more like a draft — ensuring there’s a layer of control and review involved.

  21. 1

    This really resonates — most AI automation failures I’ve seen aren’t model problems, they’re decision design problems.

    People treat AI like an executor instead of a collaborator, so workflows skip the “why did this happen?” layer entirely. Then when something goes wrong, nobody can trace the reasoning or improve it.

    The feedback loop point is especially important. An automation that doesn’t learn from outcomes is basically frozen intelligence — it looks smart at launch but slowly drifts away from reality.

    I’ve started thinking about AI systems less as automations and more as decision pipelines: input → reasoning → human/context validation → outcome tracking → iteration.

    Curious how others here balance autonomy vs oversight — at what point do you feel comfortable letting an AI workflow act without human approval?

  22. 1

This is actually pretty interesting. I have heard too many stories of AI messing up months of hard work for people

  23. 1

    this is a solid approach

    fully automated flows look good in theory, but in practice one bad decision can mess things up

    the idea of making it “interruptible” with a review step feels way more realistic

  24. 1

    This resonates. The biggest lesson I've learned with AI automation is that confidence != correctness. Adding human checkpoints for high-stakes decisions isn't a failure of automation — it's good systems design. The 'trust but verify' approach scales better than either full autonomy or micromanagement.

  25. 1

    the "prepare plan → review → finish" structure is what I keep coming back to when I think about where AI agents actually earn trust in production. the failure mode for most automations isn't the AI making a wrong call -- it's that there's no moment where a human can catch the drift before it compounds. I run about 10 autonomous agents for PM work and the ones that work long-term all have a checkpoint where I can see the plan before execution. the ones that didn't have that checkpoint are the ones that eventually did something expensive and hard to undo. the interruptible pattern is underrated.

  26. 1

    This is a solid pattern — interruptible automation is underrated.

    Most people either trust AI fully (risky) or skip automation entirely (slow). The middle path — AI suggests, human approves — is where the real efficiency is.

    What I like about this setup:

    — The control form as a 'pause' mechanism is simple but powerful
    — Priority-based filtering prevents unnecessary reviews
    — You keep the speed of automation with the safety of human judgment

    One question: have you run into issues with the two-Zap setup? (e.g., race conditions, timing, data consistency)

    I've seen similar patterns break when the control form submission triggers before the reviewer finishes. Curious how you handle that.

    Thanks for sharing the detailed breakdown.

    1. 1

      In a typical two-step Zapier pattern, state management is practically non-existent. If a reviewer double-clicks the submit button on the control form, or if two reviewers open the form simultaneously and submit different decisions, the second Zap will naively execute twice, potentially causing a data consistency nightmare (e.g., charging a customer twice or sending duplicate emails).

      From an enterprise architecture perspective, the cleanest workaround for this in a no-code/low-code setup is implementing a poor man's Idempotency Key.

      You can pass a unique Request ID (generated in Zap 1) as a hidden field in the Jotform. When Zap 2 triggers, its first action should be checking that Request ID against a simple database (like an Airtable base or a Redis key if you use webhooks). If the ID already exists and is marked as "Processed", Zap 2 instantly halts.

      It adds one extra lookup step, but it eliminates the race condition and guarantees exactly-once execution without needing a heavy state machine like Temporal or Camunda.
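Sketched in Python, with an in-memory set standing in for the Airtable or Redis lookup (all names here are illustrative):

```python
processed = set()  # stand-in for an external store (Airtable row, Redis key)

def handle_submission(request_id, action):
    """Execute `action` at most once per request ID; halt on duplicates."""
    if request_id in processed:
        return "skipped"            # duplicate submission: Zap 2 halts here
    processed.add(request_id)       # mark as processed *before* acting
    action()
    return "processed"
```

Marking the ID before running the action gives at-most-once behavior: a crash mid-action skips the retry rather than double-executing, which is usually the right trade-off when the action is "charge a customer" or "send an email."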

  27. 1

    Great post, Aytekin!
“Interruptible automation” is such a clean and practical idea. I’m building SkillMatch AI (an AI job matching platform for fresh CS/IT grads in Pakistan) and this is exactly what I needed to think about.
    Right now my AI parses resumes and suggests job matches — but I’m now planning to add a quick human review step for high-confidence matches before showing them to the user. This will help build trust and reduce bad recommendations.
    Thanks for sharing the control form pattern — super actionable!

  28. 1

    The interrupt pattern is underrated. Most no-code automation tutorials show happy-path linear flows, but real workflows need decision points where a human can course-correct.

    One thing I'd add: the control form step isn't just error prevention — it's also where you build trust with stakeholders. When people see "AI suggested X, here's why, do you agree?" they adopt the automation faster than when it runs invisibly.

    Same principle applies to content workflows too. AI-generated ideas where the system proposes angles and hooks, but a human reviews before anything goes live — the quality jump from that single review step is massive.

  29. 1

    The "interruptible automation" pattern is exactly right, and I think it's going to become the standard for any AI system that takes real-world actions.

    We ran into this same problem building AnveVoice — our voice AI takes actual DOM actions on websites (clicking buttons, filling forms, navigating pages). Early on we let the AI execute everything autonomously, and while it was right ~90% of the time, the 10% it got wrong was enough to erode trust fast.

    Our solution was similar in spirit: for high-stakes actions (like submitting a form or completing a purchase), we added a confirmation step where the AI tells the user what it's about to do and waits for a verbal "yes." For low-stakes actions (scrolling, navigation), it just executes. The priority-based filter you describe (skip review for low priority) maps perfectly to this.

    One thing I'd add: the review step doesn't have to be a form. For voice-based interactions, a simple "I'm about to book your appointment for Tuesday at 3pm — should I go ahead?" is the equivalent of your control form. The principle is the same: plan → review → execute.

    The companies that figure out the right threshold for "when to pause vs when to just go" are going to win. Too many pauses = defeats the purpose of automation. Too few = costly mistakes. Great framework for thinking about it.

  30. 1

    Love the interruptible automation concept. The biggest barrier to scaling AI is the fear of it hallucinating at the wrong time. This manual circuit breaker is a brilliant way to keep the human touch for high ticket leads while automating the rest.

  31. 1

    This is a great pattern — adding that small review step saves you from a lot of wrong AI actions. I have seen automations break trust fast, so this kind of control layer is actually more important than adding more AI.

  32. 1

    Didn't think about the audit trail angle before reading this. Every control form submission is basically a timestamped log of what a human approved. That alone makes the two-Zap pattern worth it even for small automations

  33. 1

    Really like the "interruptible automation" framing — it's one of the cleaner mental
    models I've seen for human-in-the-loop AI workflows.

    The selective interruption piece (Step 7, filter by priority) is where I think most
    teams will struggle in practice. The hard part isn't building the pause — it's
    defining the right conditions for when to pause. Priority is a good start, but
    you eventually want per-agent policies: this agent can approve refunds up to $500
    automatically, this one always needs a human for anything touching production data.

    That's actually the problem I've been building around — identity and policy
    enforcement at the agent level, so the "should this need a review?" decision is
    codified once and enforced consistently across every automation, not just the ones
    you remembered to add a filter to.

    Solid post — the two-Zap pattern is an underrated technique.
    thx, jeff

  34. 1

    The pattern works. One thing I'd add from running setups like this: review fatigue is the silent killer. You build the control form, get one notification a day, actually read each one carefully. Two weeks later you're getting 20/day and clicking approve on everything without reading. The fix is batching the reviews rather than the actions. Instead of one control form interrupt per request, build a dashboard that queues up all medium-priority requests and you review them once daily in 5 minutes. Your filter step at the end is the right foundation - High priority gets immediate interrupt, medium goes into a daily batch queue, low runs fully automated. That structure means the reviews you do see actually get real attention instead of becoming noise you click through.

  35. 1

How does it work? I can't quite follow it yet.

  36. 1

    that extra layer of protection takes seconds but can save days.
