
A cleaner way to build file upload features

If your product accepts file uploads, you are probably storing too many files.

Most of them are never used again.

In many cases, the file is only needed for a short time.

Example: A user uploads a file to get a result (like a summary or analysis).

Once the result is created, the file is no longer useful.

But it still gets stored.

This increases cost and risk for no reason.
Here’s how to extract the useful data with AI and delete the file.

What you’re building

You’re building one simple flow:

User uploads a file → the file is stored for a short time → OpenAI extracts useful information → you save only that information → the raw file is deleted

Here’s an example use case

Let’s say you have a small app where users upload voice notes to give feedback.

Instead of storing every audio file forever, you can:

  • Turn the audio into text
  • Extract a short summary
  • Detect the user’s intent (bug report, idea, complaint, etc.)
  • Save only that

Then delete the audio file.

Now you keep the insight, not the file.

Tools

  • Jotform → collect files from users
  • Zapier → runs the whole workflow
  • ChatGPT (OpenAI) in Zapier → extracts useful information from text
  • Cloudflare R2 → stores the raw file for a short time
  • Supabase → stores the final structured data
  • Vercel → hosts a simple page for download/delete requests

Before you build the workflow

Before you open Zapier, set up the places where the data will go.

You need:

  • One Jotform form
  • One Cloudflare R2 bucket
  • One Supabase table

That way, when you build the automation, every step has somewhere to send the data.

Step 1 — Create the upload form in Jotform

Go to Jotform and create a new form.

Add these fields:

  • File Upload
  • Email address (only if you really need it)
  • Checkbox: “I agree to data processing”

Now turn on encryption.

In Jotform:

  • Click Settings (top menu) → Form Settings → Encrypt Form Data
  • Turn it ON

Note:

  • Encryption is only available on certain Jotform plans
  • Once you turn it on, you cannot turn it off later

Important

  • Do NOT collect name if you don’t need it
  • Do NOT collect location by default

You now have a minimal input layer (this is really critical for privacy).

Step 2 — Set up Cloudflare R2

This is where files will live for a short time.

Go to Cloudflare R2:

  • Click Create bucket
  • Add a name
  • Save

Then create API credentials and copy them.

Use this only for raw files. Do not use your database for this.

At this point, you have a place to store the uploaded file before it gets deleted later.

Step 3 — Create a table in Supabase

Now create a place to store the final AI output.

In Supabase, create a table called user_insights.

Add fields like:

  • id
  • created_at
  • email
  • summary
  • topics

This table should only store the useful output.

Do not store raw files here.

Do not store full transcripts unless you really need them.
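
If you prefer Supabase's SQL editor to the table UI, the same table can be sketched in SQL. The column types below are assumptions; adjust them to your data:

```sql
-- Minimal sketch of the user_insights table (types are assumptions)
create table user_insights (
  id uuid primary key default gen_random_uuid(),
  created_at timestamptz not null default now(),
  email text,      -- only if you really need it
  summary text,
  topics text[]    -- or jsonb, if you store richer structure
);
```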

Step 4 — Connect Jotform to Zapier

Now that the form, bucket, and table are ready, you can build the workflow.

Go to Zapier and create a new Zap.

Set the trigger to: Jotform → New Submission

Connect your Jotform account.

Run a test.

Zapier should now pull in the submission fields:

  • The file
  • The email field, if you included it
  • The consent field

Now the workflow starts every time a user uploads a file.

Step 5 — Send the file to Cloudflare R2

Now move the file out of Jotform and into your own storage.

Go to Zapier:

  • Click Add step
  • Choose Webhooks by Zapier
  • Select POST

You are going to send the file to your upload endpoint.

Important: Zapier cannot upload directly to Cloudflare R2.

So you need one small step in between: Zapier → your endpoint (on Vercel) → Cloudflare R2

In Zapier:

  • Paste your Vercel endpoint URL into the URL field

Now scroll down:

  • Set Payload Type → multipart/form-data
  • In Files, select the file field from Jotform

Test it.

The file leaves Jotform and is now stored in your R2 bucket.
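
Inside that Vercel endpoint, the relay logic can be sketched roughly like this, once you have read the file bytes out of the multipart request. Everything here is an assumption about your setup: buildObjectKey is a hypothetical helper, and signPutUrl stands in for wherever you generate a presigned PUT URL for R2 (for example, with its S3-compatible SDK).

```typescript
import { createHash } from "node:crypto";

// Build a short, non-identifying object key for the uploaded file.
// Hashing the name avoids leaking user-supplied filenames into your bucket.
export function buildObjectKey(filename: string, uploadedAt: Date): string {
  const hash = createHash("sha256")
    .update(filename + uploadedAt.toISOString())
    .digest("hex")
    .slice(0, 12);
  const dot = filename.lastIndexOf(".");
  const ext = dot > 0 ? filename.slice(dot + 1) : "bin";
  return `uploads/${uploadedAt.toISOString().slice(0, 10)}/${hash}.${ext}`;
}

// Relay the raw bytes into R2. signPutUrl is an assumed helper that returns
// a presigned PUT URL for the given key.
export async function relayToR2(
  fileBytes: ArrayBuffer,
  filename: string,
  signPutUrl: (key: string) => Promise<string>
): Promise<{ key: string; stored: boolean }> {
  const key = buildObjectKey(filename, new Date());
  const url = await signPutUrl(key);
  const res = await fetch(url, { method: "PUT", body: fileBytes });
  return { key, stored: res.ok };
}
```

Returning the key lets later steps (like the delete flow) find the file again without storing the original filename.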

Step 6 — Extract useful information

Now we process the file.

First, make sure the file is in text form.

If the file is audio, convert it to text.

In Zapier:

  • Click Add step → OpenAI → Create Transcription (or similar)
  • Map the file from the previous step

Test it.

Now pull the insights

Add another step:

  • Click Add step → OpenAI → Conversation

Paste this prompt (or similar):

Extract:
 - Summary
 - Key topics
 - Sentiment
 - Intent

Do NOT include personal data.

Return JSON

Now, map the input

  • In the message/input field, insert:
    • The transcript (from previous step)
    • Or raw text (if already text)

Test it.

You now have useful data instead of a raw file.
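
Even when you ask for JSON, model replies sometimes arrive wrapped in markdown fences or with extra prose around them. If you ever handle this output in your own code instead of Zapier, a small defensive parser helps. The field names below match the prompt above but are still assumptions; everything else is a sketch:

```typescript
// Defensive parse of the model's reply: strip markdown fences if present,
// find the outermost JSON object, and coerce each expected field.
export interface Insights {
  summary: string;
  key_topics: string[];
  sentiment: string;
  intent: string;
}

export function parseInsights(raw: string): Insights {
  const cleaned = raw.replace(/```(?:json)?/g, "").trim();
  const start = cleaned.indexOf("{");
  const end = cleaned.lastIndexOf("}");
  if (start === -1 || end <= start) throw new Error("no JSON object found");
  const obj = JSON.parse(cleaned.slice(start, end + 1));
  return {
    summary: String(obj.summary ?? ""),
    key_topics: Array.isArray(obj.key_topics) ? obj.key_topics.map(String) : [],
    sentiment: String(obj.sentiment ?? ""),
    intent: String(obj.intent ?? ""),
  };
}
```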

Step 7 — Save only the AI output in Supabase

Now save only the useful data.

You created the user_insights table in Step 3. Make sure it has a column for everything you want to keep, for example:

  • summary
  • topics
  • sentiment
  • intent

Sentiment and intent were not in the Step 3 list, so add those columns now.

In Zapier:

  • Add step → Supabase → Insert Row
  • Choose your table
  • Map the AI output to your columns:
    • summary → summary from the AI step
    • topics → topics from the AI step, and so on

Do the same for the rest of your fields.

Test the step.

You should see a new row in Supabase.

You now store only useful data.
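
Zapier's Supabase action handles the insert for you, but if you ever move this step into your own code, it is one call against Supabase's auto-generated REST API (PostgREST). The env var names and the toInsightRow helper below are assumptions, not anything Zapier or Supabase provides:

```typescript
// Shape of one row in user_insights (mirrors the table from Step 3).
export interface InsightRow {
  email: string | null;
  summary: string;
  topics: string[];
  sentiment: string;
  intent: string;
}

// Map raw AI output to a row, dropping anything you don't want to keep.
export function toInsightRow(ai: Record<string, unknown>, email: string | null): InsightRow {
  return {
    email,
    summary: typeof ai.summary === "string" ? ai.summary : "",
    topics: Array.isArray(ai.key_topics) ? ai.key_topics.map(String) : [],
    sentiment: typeof ai.sentiment === "string" ? ai.sentiment : "",
    intent: typeof ai.intent === "string" ? ai.intent : "",
  };
}

// Insert the row via PostgREST. SUPABASE_URL and SUPABASE_SERVICE_KEY
// are assumed environment variables.
export async function insertInsight(row: InsightRow): Promise<boolean> {
  const res = await fetch(`${process.env.SUPABASE_URL}/rest/v1/user_insights`, {
    method: "POST",
    headers: {
      apikey: process.env.SUPABASE_SERVICE_KEY ?? "",
      Authorization: `Bearer ${process.env.SUPABASE_SERVICE_KEY}`,
      "Content-Type": "application/json",
      Prefer: "return=minimal",
    },
    body: JSON.stringify(row),
  });
  return res.status === 201;
}
```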

Step 8 — Set up automatic raw file deletion

Now, delete files automatically.

  • Go back to Cloudflare R2
  • Open your bucket
  • Go to Settings
  • Find Lifecycle rules
  • Click Add rule
  • Set the rule to delete files after, say, 7 days

The file now has a time limit.

Step 9 — Give users control

Create a simple page on Vercel.

Add two buttons:

  • Download my data
  • Delete my data

Now connect them.

Download flow:

  • Find user in Supabase (by email or ID)
  • Return their data as JSON

Delete flow:

  • Delete their row in Supabase
  • Delete their file in R2 (if it still exists)

That’s it.

Now users can control their data.
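
Both flows are one PostgREST call each against the user_insights table: a GET to download, a DELETE to erase. The sketch below uses PostgREST's eq. filter to match by email; the env var names are assumptions, and deleting any still-existing R2 object would be a separate call against your bucket:

```typescript
// Build the PostgREST URL that selects (or deletes) one user's rows by email.
export function userRowsUrl(supabaseUrl: string, email: string): string {
  return `${supabaseUrl}/rest/v1/user_insights?email=eq.${encodeURIComponent(email)}`;
}

function authHeaders(): Record<string, string> {
  return {
    apikey: process.env.SUPABASE_SERVICE_KEY ?? "",
    Authorization: `Bearer ${process.env.SUPABASE_SERVICE_KEY}`,
  };
}

// "Download my data": return the user's rows as JSON.
export async function downloadUserData(email: string): Promise<unknown[]> {
  const res = await fetch(userRowsUrl(process.env.SUPABASE_URL ?? "", email), {
    headers: authHeaders(),
  });
  return res.json();
}

// "Delete my data": remove the user's rows.
export async function deleteUserData(email: string): Promise<boolean> {
  const res = await fetch(userRowsUrl(process.env.SUPABASE_URL ?? "", email), {
    method: "DELETE",
    headers: authHeaders(),
  });
  return res.ok;
}
```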

Published on April 29, 2026
  1. The cleaner pattern is not “file upload.”

    It’s “temporary input → permanent signal.”

    That shift matters more than most teams realize.

    A lot of products quietly become storage companies by accident.
    They think they’re collecting user input.
    What they’re actually doing is retaining liability.

    The useful asset usually isn’t the file.
    It’s the structured signal extracted from it.

    The teams that figure this out early end up with:

    lower storage cost
    less compliance surface area
    cleaner data models
    better product defensibility

    Because they stop storing raw exhaust and start storing reusable intelligence.

    That usually becomes the real moat.

  2. File uploads are one of those things that always end up messier than expected — curious what makes your approach cleaner than the usual multipart form + storage bucket combo?

  3. A clear path to streamlining your online life! Thanks again.
