Manual mode (complete)
Last updated
Manual completion mode is a thin layer on top of the OpenAI API's completions endpoint, which returns a completion for a prompt that the user provides. This mode lets you customize all aspects of the model inference.
The completions endpoint is considered legacy by OpenAI; the underlying model versions are GPT-3.5 and earlier. A list of supported models can be found here.
Model: You can select any OpenAI model that supports the completions endpoint. A list of supported models can be found here. We recommend using text-davinci-003, the latest model that supports this endpoint.
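As a rough sketch, a request to the completions endpoint consists of the model name, the prompt, and the inference parameters described below. The helper function name here is illustrative, not part of the product; with an API key configured, the resulting dict would be passed to the OpenAI client (e.g. `openai.Completion.create(**request)` in the pre-1.0 Python library).

```python
# A minimal sketch of assembling a completions request, assuming the
# pre-1.0 openai Python library. build_completion_request is a
# hypothetical helper for illustration.
def build_completion_request(prompt, model="text-davinci-003", max_tokens=256):
    # The core parameters that manual mode exposes.
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }

request = build_completion_request("Write a poem about Berlin in the summer.")
# With an API key set, this could then be sent as:
#   openai.Completion.create(**request)
```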
Max tokens: This defines the maximum number of tokens in the model's output. Each model has a limit on the combined number of tokens in the prompt and the output, i.e. the number of tokens in the input plus max tokens must not exceed this limit.
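The token budget can be illustrated with a toy check. The 4097-token context limit for text-davinci-003 comes from OpenAI's model documentation; the prompt lengths below are made-up numbers, not real token counts.

```python
CONTEXT_LIMIT = 4097  # context limit for text-davinci-003

def fits_in_context(prompt_tokens, max_tokens, limit=CONTEXT_LIMIT):
    # Input tokens plus the requested output budget must stay
    # within the model's context limit.
    return prompt_tokens + max_tokens <= limit

print(fits_in_context(3000, 1000))  # True: 4000 <= 4097
print(fits_in_context(3500, 1000))  # False: 4500 > 4097
```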
Temperature: The temperature controls the variability/creativity of the model's response. Its value lies between 0 and 2.
Frequency penalty and presence penalty: Together, these parameters determine how repetitive the answer is, both in wording and in content. A positive presence penalty penalizes tokens that have already appeared in the text, while a positive frequency penalty penalizes tokens in proportion to how often they have appeared, decreasing the likelihood that the model repeats itself verbatim. More info can be found here.
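A small sketch of validating these sampling parameters against the ranges OpenAI documents: temperature in [0, 2], frequency and presence penalties in [-2, 2]. The function name is illustrative, not part of the product.

```python
# Hypothetical validator for the sampling parameters described above.
# Ranges follow OpenAI's completions API documentation.
def validate_sampling_params(temperature, frequency_penalty, presence_penalty):
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    for name, value in (("frequency_penalty", frequency_penalty),
                        ("presence_penalty", presence_penalty)):
        if not -2 <= value <= 2:
            raise ValueError(f"{name} must be between -2 and 2")
    return {
        "temperature": temperature,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }
```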
Prompt: The most important part of the action is the prompt. The prompt contains the instructions for the language model. This could be as simple as "Write a poem about Berlin in the summer." or as complex as "Imagine you are a technical writer for a SaaS company. You write step-by-step guides based on the internal product documentation. Here are examples of your work: {{state.examples}}. Write a step-by-step guide based on these docs: {{docs.value}}". As you can see, you are free to feed in any dynamic value from your app or data source using mustache syntax, i.e. {{ }}.
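The mustache substitution described above can be sketched with a simple regex-based renderer. The variable names mirror the examples in the text; the rendering logic is a simplification for illustration, not the app's actual template engine.

```python
import re

def render(template, values):
    # Replace each {{name}} placeholder with its value; unknown
    # placeholders are left untouched.
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )

prompt = render(
    "Write a step-by-step guide based on these docs: {{docs.value}}",
    {"docs.value": "…your docs here…"},
)
```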