As a solo founder, you are the builder, the marketer, the support team, and the QA team.
Most founders ignore the last one.
That’s risky.
A small change can break a critical flow, and you might not notice right away.
If a payment or signup flow breaks in production, you lose money.
Instead of manually clicking around before every deploy, you can build a small browser QA agent that checks your most important flows automatically.
We’ll build a signup funnel as an example.
But this works for any critical flow.
It goes through your site step by step: it opens pages, fills in forms, clicks buttons, and checks the results.
You run it before every deployment.
If it fails, you fix the issue first.
Use a browser automation tool, for example Playwright, Puppeteer, or Selenium.
Make sure it can open pages, fill forms, click buttons, read page text, and check URLs.
If it cannot check results and fail the test, it is not real QA.
Optional: You may also use an AI model for summarizing logs.
That’s all you need.
Choose one key user journey.
Something that would hurt if it broke.
Examples: new user signup, login, checkout.
Give it a simple name.
Example: Login QA
Open your browser tool and create a test.
Now, add the same steps a real user takes: open the signup page, fill in the email and password, and click Submit.
Now it behaves like a user.
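The steps above can be sketched tool-agnostically as data plus a tiny runner. The URL, selectors, and test credentials here are illustrative assumptions, not values from your app:

```python
# Hypothetical signup journey: the URL, selectors, and credentials
# are placeholders for your own.
SIGNUP_STEPS = [
    ("goto", "https://example.com/signup"),
    ("fill", "#email", "qa-test@example.com"),
    ("fill", "#password", "a-long-test-password"),
    ("click", "#submit"),
]

def run_steps(browser, steps):
    """Replay the journey against any browser tool that exposes
    goto/fill/click as callables (thin wrappers around your tool)."""
    for action, *args in steps:
        browser[action](*args)
```

Keeping the steps as data means you can reuse the same runner for every flow you decide to protect.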
Next, we add checks after the Submit step.
In most tools, you do this by attaching an assertion (sometimes called a verification or expectation) to the step.
Now add two checks.
Check 1 — URL Check
Add a rule: Current URL contains /welcome
If not, the test should fail.
Most tools have a setting like “fail test if check fails”.
Turn that on.
Check 2 — Success text check
Add another check.
Rule: Page contains text “Welcome” or “Account created”
If not, fail the test.
One check is not enough. The page can load, but still fail.
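Expressed as plain functions, the two checks look like this (a tool-agnostic sketch; the `/welcome` path and success strings are the ones from the rules above):

```python
def check_url(current_url):
    # Check 1: the browser should have landed on the welcome page.
    return "/welcome" in current_url

def check_success_text(page_text):
    # Check 2: the page should confirm the account was created.
    return "Welcome" in page_text or "Account created" in page_text

def signup_passed(current_url, page_text):
    # Both checks must pass: a page that loads is not
    # the same as a signup that worked.
    return check_url(current_url) and check_success_text(page_text)
```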
Now make sure the test fails if there are hidden errors.
First, turn on logging.
In your tool’s settings, enable console log capture and network log capture.
These are usually found under advanced or debug options.
Turn them on.
Now decide how errors should behave.
If your tool has options like “fail on console error” or “fail on network error”, turn them on.
You’re done.
If your tool does NOT have automatic failure, create these two rules: count console errors, and count failed network requests.
If either number is greater than 0 → FAIL the test.
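If you have to build those rules yourself, a minimal sketch looks like this (the log formats are assumptions; adapt the matching to whatever your tool actually emits):

```python
def hidden_error_count(console_lines, network_statuses):
    # Rule 1: count console lines that look like errors.
    console_errors = sum(1 for line in console_lines if "error" in line.lower())
    # Rule 2: count network responses with a 4xx/5xx status code.
    failed_requests = sum(1 for status in network_statuses if status >= 400)
    return console_errors + failed_requests

def assert_no_hidden_errors(console_lines, network_statuses):
    count = hidden_error_count(console_lines, network_statuses)
    # Either number greater than 0 fails the test.
    assert count == 0, f"{count} hidden error(s): failing the test"
```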
Open test settings (every tool has a version of this).
Turn on these three things: screenshot on failure, video or trace recording, and log saving.
If your tool doesn’t have “screenshot on failure”, you fake it by adding a screenshot step as the last action of the test.
Every failure should leave proof.
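One way to fake screenshot-on-failure, sketched as a wrapper (`screenshot_fn` stands in for whatever screenshot step your tool provides):

```python
def run_with_proof(test_fn, screenshot_fn, out_path):
    """Run the test; if it raises, capture a screenshot before
    re-raising, so every failure leaves proof."""
    try:
        test_fn()
    except Exception:
        screenshot_fn(out_path)
        raise
```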
You need a place to store the results of each test run.
Create a folder called: qa-runs
Where you create it depends on how you run your tests: the folder should live in the same place your test executes, whether that’s your laptop, a CI machine, or a server.
Each test run should save its logs, screenshots, and pass/fail result.
If your tool allows it, save each run in its own subfolder using a timestamp.
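A minimal sketch of timestamped subfolders, assuming you can run Python wherever the test executes (the `qa-runs` name matches the folder above):

```python
from datetime import datetime
from pathlib import Path

def new_run_dir(base="qa-runs"):
    # One subfolder per run, named by timestamp,
    # e.g. qa-runs/2024-01-31_14-05-09
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    run_dir = Path(base) / stamp
    run_dir.mkdir(parents=True, exist_ok=True)
    return run_dir
```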
Now you can review history.
Use this when your logs are long or hard to read.
You can paste logs and results into your AI model of choice and ask it to summarize the failure and suggest a likely cause.
This saves you from manually scanning raw logs.
If you’re technical and your logs are clean, you can skip this step.
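If you do hand logs to a model, the useful part is the prompt; here is one sketch (the wording is an assumption, and the actual API call depends on your provider):

```python
def build_log_review_prompt(log_text):
    # Ask for a verdict, a cause, and one next action,
    # so the answer is usable without reading the raw log.
    return (
        "You are reviewing a QA test run for a web signup flow.\n"
        "1. Did the run pass or fail?\n"
        "2. If it failed, which step failed and what is the likely cause?\n"
        "3. Suggest one fix to try first.\n\n"
        "Log:\n" + log_text
    )
```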
This is the most important part.
Change your process from: Code → Deploy
To: Code → Run QA → Deploy
You can connect it to your deploy script, a git pre-push hook, or your CI pipeline.
In all cases: If QA fails → do not deploy.
No exceptions.
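The gate itself can be a few lines wrapping your existing commands (the QA and deploy commands below are placeholders for whatever you actually run):

```python
import subprocess
import sys

def deploy_gate(qa_cmd, deploy_cmd):
    """Run QA first; deploy only if QA exits 0. No exceptions."""
    qa = subprocess.run(qa_cmd)
    if qa.returncode != 0:
        print("QA failed: aborting deploy", file=sys.stderr)
        return False
    subprocess.run(deploy_cmd, check=True)
    return True
```

For example, a pre-deploy script might call `deploy_gate(["python", "qa_signup.py"], ["./deploy.sh"])`, with both file names standing in for your own.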
How to lose a customer, 101. Also: how to prevent it easily. ty!