Manual mode (chat)

The manual chat mode is a thin layer on top of the OpenAI API's chat completions endpoint. Because the request carries the entire chat history, the model stays aware of the conversation's context. This mode allows you to customize all aspects of the model inference.

This endpoint supports the most recent OpenAI models. A list of supported models can be found here.

  • Model: You can select any model from OpenAI that supports the chat/completions endpoint. A list of supported models can be found here. We recommend using gpt-4 if you have access; it is the latest model that supports this endpoint.

  • Max tokens: This sets an upper bound on the number of tokens in the model's output. Depending on the model, there is a limit to the combined number of tokens in the prompt and the output, i.e. the number of tokens in the input + max tokens should not exceed this limit.
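
The token budget described above can be sketched as simple arithmetic. Note that the 8192-token context limit below is a hypothetical example value, not a guarantee for any particular model:

```python
# Sketch: staying within a model's context window.
# CONTEXT_LIMIT is an assumed example value for illustration only.
CONTEXT_LIMIT = 8192  # total tokens the model can handle (prompt + output)

def max_output_tokens(prompt_tokens: int, context_limit: int = CONTEXT_LIMIT) -> int:
    """Largest max_tokens value that still fits in the context window."""
    return max(context_limit - prompt_tokens, 0)

# A 500-token prompt leaves room for at most 7692 output tokens.
print(max_output_tokens(500))  # → 7692
```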

  • Temperature: The temperature defines the variability/creativity of the model's response. Its value lies between 0 and 2; lower values make the output more deterministic.

  • Frequency penalty and presence penalty: Together, these parameters determine how repetitive the answer is, both in wording and in content. Each lies between -2 and 2. A positive presence penalty penalizes tokens that have already appeared, nudging the model toward new topics, while a positive frequency penalty penalizes tokens in proportion to how often they have occurred so far, decreasing the likelihood that the model repeats itself verbatim. More info can be found here.
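
The parameters above can be assembled into a request body for the chat/completions endpoint. Here is a minimal sketch as a plain dictionary; the values are illustrative, and actually sending the request (with the openai client or plain HTTP) is omitted:

```python
import json

# Sketch of a chat/completions request body using the parameters above.
# All values are illustrative examples, not recommendations.
request_body = {
    "model": "gpt-4",
    "max_tokens": 256,          # upper bound on output tokens
    "temperature": 0.7,         # 0 (deterministic) .. 2 (very creative)
    "frequency_penalty": 0.5,   # -2 .. 2, discourages verbatim repetition
    "presence_penalty": 0.5,    # -2 .. 2, encourages new topics
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

print(json.dumps(request_body, indent=2))
```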

  • Messages: This is the content of the chat. It expects a list of messages that record the conversation so far. Each message consists of two compulsory and two optional parameters:

    • role: The role can be system, user, assistant or function. It identifies the entity behind the message. With system, you can define the nature of the assistant; it is like a description of the assistant's characteristics. user is, naturally, a message by the user/human on the other end. assistant is used when the model gives a response. Finally, function is set as the role for a message carrying the result of a function call; it appears in combination with name, containing the name of the function. When the model itself decides to call a function, its assistant response instead contains function_call, holding the parameters with which the function should be invoked.

    • content: A string representing the content of the message.

    • name: You can provide the author's name, with a maximum of 64 characters. If role is function, this is not optional: you have to provide the name of the function that responded in this message.

    • function_call: The name and the arguments of a function that should be called. Since functions are invoked at the model's discretion, these inputs are generated by the model rather than set by you.

    This is what a list of messages could look like:

      [ {
        "role": "system",
        "content": "You are Dwight Schrute. Answer in a short-tempered manner."
      }, {
        "role": "user",
        "content": "Hello!"
      }, {
        "role": "assistant",
        "content": "What?"
      }, {
        "role": "user",
        "content": "Question, what kind of bear is best?"
      } ]

    Sometimes you don't want a chat format but simply want your app to handle individual prompts without any knowledge of prior interactions, i.e. it is stateless. In these cases, you can provide a list with a single message:

      [ {
        "role": "user",
        "content": "Write a poem about the Berlin summer"
      } ]

  • Functions: This is fully optional. It is a list of functions that the model can use (i.e. generate JSON arguments for). Each entry has the following fields:

    • name: This is the name of the function, with a maximum of 64 characters. It should match the name provided in a messages entry whose role is function.

    • description: This is optional. You can provide a description to help the model comprehend what the function can do.

    • parameters: Expects a JSON Schema description of the function's parameters. For more info on the data structure, read through OpenAI's guide on function calling.
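
A single entry in the functions list could look like the following sketch. get_weather and its parameters are made-up examples; the parameters field follows the JSON Schema format:

```python
# Sketch of one entry in the functions list. The function name, description,
# and parameters are hypothetical examples.
weather_function = {
    "name": "get_weather",  # max. 64 characters
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Berlin"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```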

  • Function Call: This is, once again, a fully optional property. There are three possible values:

    • none: The model does not call a function.

    • auto: The model decides whether to call any of the provided functions or answer normally.

    • {"name": "my_function"}: The model has to call the named function.
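
The round trip implied by these options can be sketched as follows. All function names, arguments, and the stand-in weather lookup are made-up examples, assuming the model has already responded with a function_call:

```python
import json

# The three possible values of function_call (illustrative):
never_call = "none"
model_decides = "auto"
force_call = {"name": "get_weather"}  # hypothetical function name

# Suppose the model responded with a function_call like this:
model_function_call = {"name": "get_weather", "arguments": '{"city": "Berlin"}'}

# Invoke our own implementation with the model-generated arguments …
args = json.loads(model_function_call["arguments"])
result = {"city": args["city"], "temperature_c": 21}  # stand-in for a real lookup

# … and append the result as a "function" message for the next request.
function_message = {
    "role": "function",
    "name": model_function_call["name"],
    "content": json.dumps(result),
}
```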
