Utilization of OpenAI API - 180D-FW-2023/Knowledge-Base-Wiki GitHub Wiki
Introduction
Most people know ChatGPT, released by OpenAI in late 2022, as one of the most innovative and helpful AI assistants, popularized as an intelligent tool and companion. However, much more can be done with the OpenAI API, which can not only generate text but also convert speech to text, generate images, process image inputs, and more. ChatGPT is built on models that can be accessed through the API, a variety of which offer different services at various price points. Some models are built to communicate like humans; others use embeddings, which help to search, classify, and compare text. Because the models were developed both before and after ChatGPT, they range from less to more capable.
The OpenAI API
An API is a service that lets a user access programs from a third party: the user sends a request and the API responds. For OpenAI, this means the user can work with different models by creating requests that specify which model to use, along with the proper parameters, settings, and content. A few things are worth noting about the API. The text generation models, or generative pre-trained transformers (GPTs), have been trained to receive input and process it to understand what it means. As a result, the user must design inputs known as "prompts", which may include instructions or examples that help the model understand its objective. Some models instead use text embeddings, vector representations of a piece of data that preserve parts of its meaning and content, to support functions like search, classification, recommendation, and filling in blanks. Input text is broken up into tokens, which are commonly occurring sequences of characters, usually about 4 characters per token. The total number of tokens that can be input and output is capped at a maximum that depends on the model.
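The 4-characters-per-token rule of thumb above can be used for a rough budget check before sending a request. A minimal sketch, assuming an illustrative 4096-token context limit (the exact limit varies by model, and OpenAI's tiktoken library gives exact counts):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token rule of thumb."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_output_tokens: int, context_limit: int = 4096) -> bool:
    """Check that the prompt plus the reserved output tokens stay under the model's limit."""
    return estimate_tokens(prompt) + max_output_tokens <= context_limit

print(estimate_tokens("What is your favorite animal?"))  # roughly 7
```

This is only a heuristic for quick sanity checks; the model's tokenizer is the final authority on token counts.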
Advanced Utilization of the API Using Prompt Adjustment
Getting a GPT model to perform functions such as classification calls for a technique called shot-prompting. Shot-prompting is the process of giving example prompts and corresponding responses to demonstrate to the model the format and content its responses should follow. As a brief example, if one wants the model to answer the question "What is your favorite animal?" concisely, the following prompts would be passed in the messages parameter of the request:
messages=[{
    "role": "system",
    "content": "You are a kindergarten teacher taking care of kids."
},{
    "role": "user",
    "content": "What is your favorite animal?"
},{
    "role": "assistant",
    "content": "cat"
},{
    "role": "user",
    "content": "What is your favorite animal?"
},{
    "role": "assistant",
    "content": "frog"
},{
    "role": "user",
    "content": "What is your favorite animal?"
}]
There are three roles within a single conversation: "system", "user", and "assistant". If ground truths, which are prompts with known responses, are available and short, shot-prompting is an effective way to pin down formatting. However, the most important role for indicating formatting is the system role, where one can state the conditions for the desired response without back-and-forth prompting. In this case, shot-prompting is used: upon receiving these messages, the assistant will respond with a lowercase, single-word (or short phrase) animal. The examples are given before the final prompt, which takes the form of a user message.
One thing to remember with shot-prompting is that it not only requires already-known examples, it can also lead to a skewed, over-fitted response, where the model simply reuses the example outputs as its own response. This is especially the case for examples that are long and extremely specific, because the model cannot reliably infer the pattern of the response from limited context. Additionally, since input and output length is limited by the number of tokens each model can take, it is difficult to provide multiple examples if they are long.
Chaining multiple chat completions, where the result of one is used in the next, is a great way to further shape the output toward what the user wants. For instance, if an adjective needs to be added to the animal's name, the next prompt could ask for it directly:
text = "Add an adjective to " + previous_answer
messages=[{
    "role": "system",
    "content": "You are a kindergarten teacher taking care of kids."
},{
    "role": "user",
    "content": text
}]
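The chaining step can be wrapped in a small helper that builds the follow-up request from the previous answer (the helper name is illustrative, not part of the API):

```python
def build_followup_messages(previous_answer: str) -> list:
    """Construct the second request's messages from the first request's answer."""
    return [
        {"role": "system",
         "content": "You are a kindergarten teacher taking care of kids."},
        {"role": "user",
         "content": "Add an adjective to " + previous_answer},
    ]

# Example: feed the animal from the first completion into the second request
messages = build_followup_messages("frog")
print(messages[1]["content"])  # Add an adjective to frog
```

The same pattern extends to longer chains: each step's extracted answer becomes part of the next step's user message.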
This can expand the context when examples are long and nudge the responses closer to what the user needs, at the cost of additional requests.
One of the parameters of the chat completions endpoint is temperature. The temperature value should be adjusted based on whether outputs should vary or stay the same. For reference, a temperature of 0 gives the most deterministic outputs, meaning the output barely changes when the same text is input, while a value of 1 gives highly non-deterministic outputs, meaning the same input can produce very different responses. In this case, a higher temperature is desirable, because a different answer is wanted each time the question is asked. A temperature of 1 would be too high, since the model may get over-creative and stop following the given format. For a varying response that still follows the format, a temperature of 0.6-0.8 is a good balance. The resulting request would look something like this:
result = openai.ChatCompletion.create(
    model=model_name,
    messages=[{...},{...}],
    temperature=0.6,
    max_tokens=512
)
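The generated text is then read from the returned response object. A minimal sketch, using a mock dictionary in the shape returned by the pre-1.0 openai library:

```python
# Mock of the structure returned by openai.ChatCompletion.create (pre-1.0 library)
result = {
    "choices": [
        {"message": {"role": "assistant", "content": "frog"}}
    ]
}

# The assistant's reply lives in the first choice's message content
answer = result["choices"][0]["message"]["content"]
print(answer)  # frog
```

This extracted string is what gets fed into a follow-up completion when chaining requests.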
To steer the model toward the correct response, keep adding constraints to the system prompt; the models will often try to add extra text or answer a question differently.
To make sure that the responses are reliable and consistent, run at least 50 test cases with the same prompts and system message and analyze the accuracy. For complex prompts, temperature tuning is especially important for getting a creative answer that still fits the desired output format.
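This consistency check can be automated by validating each response against the expected format and computing an accuracy rate. A minimal sketch, assuming the single-lowercase-word format from the animal example (in practice, `responses` would be collected from 50+ identical API requests):

```python
import re

def follows_format(response: str) -> bool:
    """Check that a response is a single lowercase word or short two-word phrase."""
    return bool(re.fullmatch(r"[a-z]+( [a-z]+)?", response.strip()))

def format_accuracy(responses: list) -> float:
    """Fraction of responses that follow the expected format."""
    if not responses:
        return 0.0
    return sum(follows_format(r) for r in responses) / len(responses)

# Illustrative sample standing in for collected API responses
sample = ["cat", "frog", "Blue Whale", "dog"]
print(format_accuracy(sample))  # 0.75
```

If the accuracy is low, tighten the system prompt or lower the temperature and re-run the same test set.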
Applying to an Exercise Trainer
Now, a more expansive example: a GPT-3.5 Turbo HIIT fitness trainer that suggests exercises based on a person's needs and statistics. The generative model is adjusted with specific prompts and example responses to accommodate the user's needs. The user inputs weight, age, height, gender, and goal as their statistics.
In order to read the responses in a form usable in code, the response must follow a rigid format like the following:
Day 1: squat, 20 sec, rest, 10 sec, jumping jack, 20 sec...
Day 2: rest
Day 3: ...
...
Day 7: ...
Each day is essentially one line, with exercises and their durations separated by commas; this makes parsing straightforward with a "," delimiter. The "Day x" at the beginning of each line indicates which day it is, and rest days are included because they matter in exercise routines as well. The request to the API would look something like this:
text = "I am a " + str(weight) + " kg, " + str(height) + " cm, " + str(age) + "-year-old " + gender_list[gender] + " who wants to " + goal_list[goal] + " for " + str(daily) + " minutes every day."
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[{
"role": "system",
"content":
"""You are a trainer who will help design high-intensity interval training routines, including rest days.
Your task is to respond with a list of days with names and duration of exercises and rests for each day. Your responses will be separated by only commas.
Ensure that the total exercise time of each day is exactly equivalent to what the user requests. For example, if the user wants to exercise for 30 minutes every day, the exercise times must add up to 30 min.
Ensure that the exercises have proper rest times in between.
Do not respond with sets and reps. Use only time to represent how much the exercises should be done.
Do not use "warm up" or "cool down." Mention the exercises directly.
Specify how many times the routine should be repeated with a number if needed.
Do not suggest an exercise set with only 1 type of exercise and rest.
Example prompt: Design an exercise plan that's suitable for a 92 kg, 188 cm, 25-year-old male who wants to maintain health for exactly 10 minutes every workout. How long should my workout plan be and what exercises should I do?
Example response:
'Day 1: squat, 20 sec, rest, 10 sec, jumping jack, 20 sec, rest, 10 sec, squat, 20 sec, rest, 10 sec, jumping jack, 20 sec
Day 2: lunge, 30 sec, high knee, 15 sec, rest, 15 sec, lunge, 30 sec, high knee, 15 sec, rest, 15 sec, lunge, 30 sec
Day 3: ...
Day 4: ...
Day 5: ...
Day 6: ...
Day 7: rest day
Repeat this routine 2 times.'
Example prompt: Design an exercise plan that's suitable for a 55 kg, 160 cm, 21-year-old female who wants to build strength for 30 minutes every workout. How long should my workout plan be and what exercises should I do?
Example Response:
'Day 1: squat, 30 sec, lunge, 30 sec, v boat, 60 sec, straight leg lifts, 60 sec, plank, 60 sec, rest, 60 sec, squat, 30 sec, lunge, 30 sec, v boat, 60 sec
Day 2: plank, 60 sec, rest, 60 sec, donkey kick, 60 sec, rest, 60 sec, squat, 60 sec, rest, 60 sec, high knee, 60 sec
Day 3: ...
Day 4: rest day
Repeat this routine 0 times.'
Example prompt: Design an exercise plan that's suitable for a 46 kg, 151 cm, 40-year-old female who wants to lose weight for 9 minutes every workout. How long should my workout plan be and what exercises should I do?
Example Response:
'Day 1: squat, 30 sec, high knee, 15 sec, rest, 15 sec, squat, 30 sec, high knee, 15 sec, rest, 15 sec, squat, 30 sec, high knee, 15 sec
Day 2: ...
Day 3: rest day
Day 4: rest day
Day 5: jumping jack, 45 sec, rest, 15 sec, lunge, 45 sec, rest, 15 sec, plank, 45 sec, rest, 15 sec, v boat, 45 sec
Day 6: rest day
Day 7: rest day
Repeat this routine 3 times.'
"""
},
{
"role": "user",
"content": text
}],
temperature=0.6,
max_tokens=1500,
)
The request is as specific as possible about the desired conditions of the result, and the system prompt reminds the model of many constraints to ensure certain behavior. Since the context is limited, the examples are shortened with "...", which the model recognizes through natural language as the user excluding some days. The resulting output will likely follow the examples, with some variance because temperature is set to 0.6. The max_tokens setting also helps limit the length of the suggested exercise set.
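With the strict line-and-comma format, the response text can be parsed back into a usable structure. A minimal sketch (the function name and output shape are illustrative choices, not part of the API):

```python
def parse_routine(response_text: str) -> dict:
    """Parse 'Day x: exercise, time, ...' lines into {day: [(name, duration), ...]}."""
    routine = {}
    for line in response_text.strip().splitlines():
        if not line.startswith("Day"):
            continue  # skip lines like "Repeat this routine 2 times."
        day_label, _, rest = line.partition(":")
        rest = rest.strip()
        if rest in ("rest day", "rest"):
            routine[day_label] = []  # no exercises on a rest day
            continue
        items = [part.strip() for part in rest.split(",")]
        # Pair each exercise name with the duration that follows it
        routine[day_label] = list(zip(items[0::2], items[1::2]))
    return routine

sample = "Day 1: squat, 20 sec, rest, 10 sec\nDay 2: rest day"
print(parse_routine(sample))
# {'Day 1': [('squat', '20 sec'), ('rest', '10 sec')], 'Day 2': []}
```

This is exactly why the system prompt forbids sets, reps, warm-ups, and extra text: any deviation from the strict format would break a parser like this one.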