
Looking for a quick and easy way to get answers to your programming questions? Here is how you can ask GPT-3 programming questions directly in the terminal using Python.
A tweet inspired me to build my own programming-question CLI.
The terminal is the one application on my Mac that is always open. Whether I'm writing a blog post (GatsbyJS), building a new app, or just tinkering in a Jupyter Notebook, I always have a terminal window open.
So I thought, why not create a CLI app where I can ask GPT-3 programming-related questions?
Why not just use GitHub Copilot?
Simply because I don't always have Visual Studio Code open, but my terminal is literally always open and running.
So let's build our own CLI with these steps:
1. Get OpenAI credentials
2. Create a Python script calling GPT-3
3. Create a bash script
4. Add bash script to PATH
5. Test the flow
Here's what we'll use:
1. Python 🐍
2. GPT-3 API 🤖
3. Bash 🖥
Let's start with getting the credentials for GPT-3.
In this section, we'll get our OpenAI API key which we'll be using in a Python script to get an answer to our programming questions.
Go to https://beta.openai.com/, log in and click on your avatar and View API keys:

Then create a new secret key and save it; you'll need it for the API request:

The next step is to create the Python script we'll use to call the OpenAI API.
We'll call the completions endpoint with the text-davinci-002 model, using a prompt that asks for an answer to a question. Feel free to play around with this prompt. Here's the one I'm using:
prompt = f"You're a programming expert, what is the answer to this question with code \n\n --- question start ---- \n\n {programming_question} \n\n --- question end --- show with code:"
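If you'd rather not repeat the f-string, the template can be factored into a small helper (a sketch; build_prompt is my own name, not part of the original script):

```python
def build_prompt(programming_question: str) -> str:
    # Same prompt template as above, wrapped in a reusable function
    return (
        "You're a programming expert, what is the answer to this question with code "
        f"\n\n --- question start ---- \n\n {programming_question} \n\n "
        "--- question end --- show with code:"
    )

print(build_prompt("How do I reverse a list in Python?"))
```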
Now we can make a call to the Davinci endpoint for each question:
import openai

openai.api_key = "YOUR_OPEN_AI_GPT3_API_KEY"

# Create a prompt for the completion endpoint
prompt = f"You're a programming expert, what is the answer to this question with code \n\n --- question start ---- \n\n {programming_question} \n\n --- question end --- show with code:"

response_gpt3 = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    temperature=0.7,
    max_tokens=100,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)

# Get the answer text from the first completion choice
answer = response_gpt3['choices'][0]['text'].strip()
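The completion response behaves like a dict. If you want to test the extraction without hitting the API, the lookup can live in its own function (extract_answer and the fake response below are mine, purely for illustration of the payload shape):

```python
def extract_answer(response: dict) -> str:
    # Pull the text of the first completion choice and trim whitespace
    return response['choices'][0]['text'].strip()

# A fake response mimicking the shape of a Completion payload
fake_response = {"choices": [{"text": "\nUse sorted(my_list)\n"}]}
print(extract_answer(fake_response))
```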
Now let's create the Python script. Create a file called main.py with this starting point:
import argparse


def main(args):
    """
    Get programming question answer with GPT-3
    :return:
    """


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--q', help="The programming question")
    args = parser.parse_args()
    main(args)
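A quick way to check the argument parsing without running the full script: parse_args also accepts an explicit argument list (the sample question is just an illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--q', help="The programming question")

# Simulate the command line: python main.py --q "..."
args = parser.parse_args(['--q', 'How do I reverse a list?'])
print(args.q)
```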
Add the snippet calling the OpenAI API to main.py, and your file should look like this now:
import argparse
import os

import openai


def main(args):
    """
    Get programming question answer with GPT-3
    :return:
    """
    programming_question = args.q
    print("Question: ", programming_question, '\n')

    # Create a prompt for the completion endpoint
    prompt = f"You're a programming expert, what is the answer to this question with code \n\n --- question start ---- \n\n {programming_question} \n\n --- question end --- show with code:"

    openai.api_key = "YOUR_OPEN_AI_GPT3_API_KEY"

    response_gpt3 = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=0.7,
        max_tokens=100,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )

    # Get answer
    answer = response_gpt3['choices'][0]['text']
    print("Answer: ", answer, '\n')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--q', help="The programming question")
    args = parser.parse_args()
    main(args)
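Hard-coding the key works for a quick test, but you can also read it from an environment variable via the os module (a sketch; OPENAI_API_KEY is my own choice of variable name):

```python
import os

# Fall back to the placeholder if the environment variable isn't set
api_key = os.environ.get("OPENAI_API_KEY", "YOUR_OPEN_AI_GPT3_API_KEY")
print(api_key)
```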
The next step is to create the bash script you'll call from your terminal to ask GPT-3 questions.
Create a new file ask without any file extension, and add the following to the file:
#!/bin/bash
python /YOUR/ABSOLUTE/PATH/main.py --q "$1"
Replace the placeholder with the absolute path to the main.py file we just created, where we're calling the OpenAI API.
The last step is to make your bash script accessible from your terminal.
In this step, we'll make your newly created bash script callable from anywhere. We want to be able to run ask without typing a path. When you enter a bare command like ask, the shell searches the directories listed in your PATH environment variable, in order, for an executable with that name.
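Incidentally, the lookup the shell performs is essentially what Python's shutil.which does, which is a handy way to check whether a command is reachable on your PATH (ls is just an example command):

```python
import shutil

# Returns the full path of the first matching executable on PATH, or None
print(shutil.which("ls"))
```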
There are two ways to make the file accessible. Either move your bash file into /usr/local/bin path (or other paths already added to your PATH) or add your script file’s path to your PATH.
/usr/local/bin (or other paths)
Start by entering echo $PATH in your terminal. You'll get all the directories the shell will look through, in order.
Then simply move your bash script into /usr/local/bin, or any other directory included in the list you get from echo $PATH.
The second option is adding your script's folder to your PATH. If you have a .zshrc (using zsh) or a .bash_profile, open that file by entering:
YOUR_TEXT_EDITOR_OF_CHOICE ~/.zshrc
or:
YOUR_TEXT_EDITOR_OF_CHOICE ~/.bash_profile
Then add the full folder path to your bash script:
# ASK
export PATH="/FOLDER/PATH/TO/YOUR/ASK/FILE:$PATH"
Then source your .zshrc or .bash_profile. If you altered the .zshrc file, enter this in your terminal:
source ~/.zshrc
Or this if you altered the .bash_profile file:
source ~/.bash_profile
Finally, run this in your terminal to make the ask bash script executable:
chmod u+x /FULL/FILE/PATH/ask
We're all set. Now you can ask a programming question right from the terminal.
Remember to wrap your question in quotes "", for example:
ask "How do I reverse a list in Python?"
The script prints the question back, followed by GPT-3's answer.
1. Repo with source code
Here is the repo with the source code if you'd like to implement this on your own ⬇️
https://github.com/norahsakal/ask-gpt3-programming-questions-in-terminal
2. Do you need help implementing your own bash script, or do you have other questions?
I'm happy to help, don't hesitate to reach out ➡️ [email protected]
Originally published at https://norahsakal.com/blog/ask-gpt3-programming-questions-in-terminal