Uify Docs
Manual mode (complete)


Last updated 1 year ago


Manual completion mode is a thin layer on top of the OpenAI API's completions endpoint, which returns a completion for a prompt that you provide. This mode lets you customize every aspect of model inference.

The completions endpoint is considered legacy by OpenAI; the underlying models are GPT-3.5 and earlier. See OpenAI's documentation for the list of supported models.
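Under the hood, a request to the legacy completions endpoint is a JSON POST. As a rough sketch of the request body (the field names follow OpenAI's public API; the helper function is illustrative and not part of Uify):

```python
import json

OPENAI_COMPLETIONS_URL = "https://api.openai.com/v1/completions"

def build_completion_payload(prompt, model="text-davinci-003",
                             max_tokens=256, temperature=0.7,
                             frequency_penalty=0.0, presence_penalty=0.0):
    """Assemble the JSON body for OpenAI's legacy completions endpoint.

    The parameter names mirror the options described below; note that the
    number of prompt tokens plus max_tokens must stay within the model's
    context limit.
    """
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }

payload = build_completion_payload("Write a poem about Berlin in the summer.")
body = json.dumps(payload)
# POST `body` to OPENAI_COMPLETIONS_URL with an
# `Authorization: Bearer <API key>` header to run the completion.
```

Each of the fields above corresponds to one of the options you can set in the action editor: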

  • Model: You can select any OpenAI model that supports the completions endpoint; the supported models are listed in OpenAI's documentation. We recommend using text-davinci-003, the latest model that supports this endpoint.

  • Max tokens: This sets the maximum number of tokens in the model's output. Each model has a limit on the total number of tokens it can process, covering both the prompt and the completion, i.e. the number of input tokens plus max tokens must not exceed this limit.

  • Temperature: The temperature controls the variability/creativity of the model's response. Its value lies between 0 and 2; higher values produce more varied output.

  • Frequency penalty and presence penalty: Together, these parameters determine how repetitive the answer is, both in wording and in content. Both range from -2 to 2. A positive presence penalty penalizes tokens that have already appeared at all, nudging the model toward new topics, while a positive frequency penalty penalizes tokens in proportion to how often they have appeared, making verbatim repetition less likely. More details can be found in OpenAI's API reference.

  • Prompt: The most important part of the action is the prompt, which contains the instructions for the language model. It can be as simple as "Write a poem about Berlin in the summer." or as complex as "Imagine you are a technical writer for a SaaS company. You write step-by-step guides based on the internal product documentation. Here are examples of your work: {{state.examples}}. Write a step-by-step guide based on these docs: {{docs.value}}". As you can see, you are free to feed in any dynamic value from your app or data sources using mustache syntax, i.e. {{}}.
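The {{}} bindings in the prompt can be pictured as simple template substitution. A minimal sketch, assuming each binding is a dotted path resolved against the app's state (this resolver is illustrative, not Uify's actual implementation):

```python
import re

def render_prompt(template, context):
    """Replace each {{path.to.value}} with the value looked up in `context`."""
    def resolve(match):
        value = context
        for key in match.group(1).strip().split("."):
            value = value[key]  # walk the dotted path, e.g. docs -> value
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", resolve, template)

context = {"docs": {"value": "How to deploy an app"}}
prompt = render_prompt(
    "Write a step-by-step guide based on these docs: {{docs.value}}", context
)
# prompt == "Write a step-by-step guide based on these docs: How to deploy an app"
```

The rendered string is what gets sent as the `prompt` field of the completion request.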
