
The Process Behind Building a GPT3-Powered NoCode App

I recently posted my GPT3-powered blog topics generator - Inspired - on Twitter and just crossed 10 free users.

Here's what I've learned so far from building a GPT3 powered tool:

The examples you feed it matter a lot

The examples you feed to the AI will make or break your tool. If your examples aren't good, it doesn't matter how good the user's input is; the tool won't be able to yield good results.

Feeding examples is definitely more of an art than a science. It takes a lot of trial and error to get examples that are specific enough that the tool knows what to emulate, but broad enough that the results aren't all the same.

To give you an idea, OpenAI gives a credit of, I believe, 300,000 tokens, and I used just about half of that to really nail down the inputs.
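
To make that concrete, here's a rough sketch of what a few-shot prompt for a blog topic generator could look like, written against the pre-1.0 OpenAI Python library's completions endpoint (the interface GPT3's Davinci engine is accessed through). The example businesses and topics are invented for illustration and aren't the actual examples behind Inspired.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical few-shot examples; not the actual prompts behind Inspired.
prompt = """Generate blog topic ideas for a business.

Business: A subscription service that delivers specialty coffee beans monthly.
Topics:
1. How to brew cafe-quality coffee at home
2. What your favorite roast says about you
3. A beginner's guide to single-origin beans

Business: A bookkeeping tool for freelance designers.
Topics:
1. Five invoicing mistakes that cost freelancers money
2. How to set your rates with confidence
3. Quarterly taxes, explained in plain English

Business: {description}
Topics:"""

response = openai.Completion.create(
    engine="davinci",          # GPT3's most capable (and most expensive) engine
    prompt=prompt.format(description="A GPT3-powered blog topic generator for indie makers."),
    max_tokens=100,
    temperature=0.75,
    stop=["\n\nBusiness:"],    # stop before the model invents another example business
)
print(response.choices[0].text)
```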

You have to get the temperature just right

There's a setting called Temperature, which is a measure of how free or constrained your output will be based on the inputs. The lower the temperature, the less your output deviates from the input.

There's no right or wrong temperature; it depends entirely on what you're trying to do with the results. At first, I thought moving the temperature by 10s (it's a 1-100 scale) would be all I needed to see a difference, but I soon learned that once you start to get really granular with your results, moving the temperature by even 2 points makes a difference in the quality of the output.

It took me hundreds of tries to nail down the right temperature.
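
For reference, here's a minimal sketch of what that kind of fine-tuning loop looks like with the same pre-1.0 Python library, sweeping the temperature in small steps over one fixed prompt. The prompt is made up, and note that the API itself takes temperature as a small float, so a 1-100 slider in a no-code plugin presumably maps onto that range underneath.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Suggest a blog topic for a company that sells handmade candles:\n"

# Sweep the temperature in small steps to see how much the output actually shifts.
# The API expects a float; a 1-100 slider presumably maps onto this range underneath.
for temperature in (0.60, 0.62, 0.64, 0.66, 0.68, 0.70):
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=40,
        temperature=temperature,
    )
    print(f"temperature={temperature}: {response.choices[0].text.strip()}")
```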

Direct your user's input in the right direction

While the main predictors of result quality are your examples and settings, the user's input matters a bit too. For example, in my tool, users will get different results if they input a one-sentence description versus a three-sentence description. So when they first sign up, my first direction to them is to input a 2-3 sentence description of their business. I also direct them to include a few keywords in the description that they might want to see in the results. This direction immediately improves the quality of results, compared to someone left to their own devices writing a fragment of a sentence.

A little further down the directions list, I instruct them to modify their description if they're not seeing the results they expect. This gives them a next step in case they get stuck.
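
One way to back up those on-screen directions in code is to lightly validate and reshape the description before it reaches the prompt. This is purely a hypothetical sketch: the prepare_description helper and its thresholds are made up, and Inspired itself is wired up in Bubble rather than Python.

```python
def prepare_description(description: str, keywords: list[str]) -> str:
    """Nudge the user's input toward the 2-3 sentence, keyword-rich shape
    that tends to produce better results. Purely illustrative."""
    description = description.strip()

    # Ask for more detail if the input is a fragment rather than 2-3 full sentences.
    sentence_count = sum(description.count(mark) for mark in ".!?")
    if sentence_count < 2 or len(description.split()) < 15:
        raise ValueError(
            "Please describe your business in 2-3 full sentences, including a few "
            "keywords you'd like to see in the results."
        )

    # Fold in any keywords the user listed separately so the model sees them.
    if keywords:
        description += " Keywords: " + ", ".join(keywords) + "."
    return description


print(prepare_description(
    "We sell handmade soy candles online. Each candle is poured in small batches "
    "and scented with essential oils.",
    keywords=["home fragrance", "self-care"],
))
```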

Build in a way to get feedback on result quality

On my app, I have a thumbs-up/thumbs-down on each result, and I've found it extremely helpful for gauging the quality of results people are receiving. You want to know how your app is doing in the wild, because when you're testing it yourself, you're still a biased user. When people start entering unpredictable inputs, that's when the app truly gets put to the test.
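
If you're building outside a no-code stack, the same idea can be as simple as appending each rating to a log next to the input and the generated result, so you can later review what the worst-rated outputs have in common. A hypothetical sketch (the feedback.jsonl file and record_feedback helper are made up):

```python
import json
import time

FEEDBACK_LOG = "feedback.jsonl"  # hypothetical log file, one JSON record per line


def record_feedback(user_input: str, generated_topic: str, thumbs_up: bool) -> None:
    """Append a single thumbs-up/thumbs-down rating so result quality can be
    reviewed across real users, not just your own (biased) test inputs."""
    record = {
        "timestamp": time.time(),
        "user_input": user_input,
        "generated_topic": generated_topic,
        "thumbs_up": thumbs_up,
    }
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")


# Example: a user rejects one of the generated topics.
record_feedback(
    user_input="We sell handmade soy candles online...",
    generated_topic="10 reasons candles are better than electricity",
    thumbs_up=False,
)
```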

Build in payment right away

GPT3 is expensive, especially if you're using the DaVinci engine. Build in a way to make the app paid from day one to offset your costs.

That's all, folks. Let me know in the comments if you have any questions about GPT3 or about building a GPT3-powered app (I built mine with Bubble).


    Nice post, Graciolli! I read the No code substack and got inspired by what you built.

    Do you have any course recommendations for learning how to build an AI product?
