Chat with AI Basics
Chat completions give you the option to use either an open source model or a trained AI assistant tailored to your specific content. This feature allows you to send prompts to OpenAI in text format and receive a corresponding response. While similar to text completion actions, the ChatGPT models used here are roughly ten times faster and more affordable.
Within your flow, create an action step by clicking on the Integrations tab, then select the OpenAI integration. A new window will display above; now click 'Edit Action'.
There are two main actions for chat completions:
Create: Create chat completion
Delete: Clear remembered chat history
Let's break down the components of the create chat completion action:
This is an optional field used to provide additional context about you or your business when completing chats.
If you are building a support chatbot, you can set up detailed background information like this:
System: You are a helpful assistant for [insert business name]. You will handle customer support, guide the user, and book demos. Our pricing and plans are [insert plan details]. Always offer the coupon code if you see it's good timing to do so.
This allows you to easily set up background information about the chatbot, so it can serve your client based on the information you instructed.
This is your main input: the question or instruction you want the AI to answer. Usually this is the user's response. You can add "user:" as a prefix to your prompt to provide more context to the AI, for example:
“user: will it rain today?”
It will also work if you don't add "user" in front of the response. You can use a system field like {{last_text_input}}.
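Under the hood, these fields map onto an OpenAI chat completion request. Here is a minimal sketch of that request, assuming the OpenAI Python SDK (v1); the API key and the message contents are placeholders, not the platform's actual implementation:

```python
# Minimal sketch of the request the action builds, assuming the
# OpenAI Python SDK (v1). The key and contents are placeholders.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the action's default model
    messages=[
        # System message: background context about your business
        {"role": "system", "content": "You are a helpful assistant for [insert business name]."},
        # Prompt: the user's last message, e.g. {{last_text_input}}
        {"role": "user", "content": "will it rain today?"},
    ],
)
print(response.choices[0].message.content)
```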
If "Yes" is selected, the chat history between the user and the assistant will be saved in a system field for later use if needed.
The OpenAI action response is automatically saved under the assistant role; you don't need to do anything.
Also, we have introduced a new system JSON field, {{openAI}}, which holds all the chat history with the user:
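As a rough illustration, here is what a remembered history field like {{openAI}} might contain after a couple of turns, assuming the standard OpenAI message format (the platform's exact JSON shape may differ):

```python
# Sketch of a remembered chat history in the standard OpenAI
# message format; the platform's exact field layout may differ.
chat_history = [
    {"role": "system", "content": "You are a helpful assistant for [insert business name]."},
    {"role": "user", "content": "Do you offer a free trial?"},
    {"role": "assistant", "content": "Yes! Every plan includes a free trial."},
]

# On the next turn, the remembered history is sent back along with the
# new prompt, so the model keeps the conversational context.
chat_history.append({"role": "user", "content": "How do I sign up?"})
```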
The model you want to use inside ChatGPT for the task. By default, gpt-3.5-turbo is selected.
To add a different model, simply copy and paste from the values displayed.
Now let's look at the lower portion of the chat completion window.
Functions are a super easy way to send users to different parts of your chatbot flows and also capture data like email addresses, phone numbers, and other important information using natural language. They provide a seamless and user-friendly experience for your audience.
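For a sense of the mechanics, here is a hedged sketch of how function calling can capture structured data (here, an email address) from natural language, assuming the OpenAI Python SDK (v1). The save_email function and its schema are hypothetical illustrations, not the platform's own definitions, and the flow-routing side of the feature is handled by the platform rather than the API:

```python
# Sketch of capturing an email address via function calling,
# assuming the OpenAI Python SDK (v1). save_email is hypothetical.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

tools = [{
    "type": "function",
    "function": {
        "name": "save_email",  # hypothetical function name
        "description": "Save the user's email address for follow-up.",
        "parameters": {
            "type": "object",
            "properties": {
                "email": {"type": "string", "description": "The user's email address"},
            },
            "required": ["email"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Sure, it's jane@example.com"}],
    tools=tools,
)

# When the model decides to call the function, the captured data
# arrives as JSON arguments instead of a plain text reply.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.arguments)  # e.g. {"email": "jane@example.com"}
```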
Each task inside ChatGPT consumes tokens. These tokens are replenished using your credits. This field sets a limit on the maximum number of tokens you want to use for a particular task. 200 seems to be the sweet spot, but do go and test to find yours.
This acts as a creativity gauge: higher values give more random answers and lower values give more deterministic, focused answers. It defaults to 1. Basically, this is the creativity level of the response. The lower the number, the less creative but more focused the output; the higher the number, the more creative it gets, but beware that it can deviate from the originating context. Finding the right balance really depends on the type of response you'd like to output.
Temperature ranges from 0 to 2.
This value, the presence penalty (described here as a diversity penalty), controls how much ChatGPT repeats itself when completing a given task. A higher value reduces the repetition of words, resulting in more diverse and creative responses. The default is 0, and it can be adjusted within a range of -2 to 2, letting you fine-tune the level of diversity in the generated text to suit your needs.
The default value is 0. Positive values penalize new tokens based on their existing frequency in the text generated so far, decreasing the likelihood of the model repeating the same line verbatim. Adjusting this penalty helps the model avoid redundant or repetitive phrases and produce more varied output.
Frequency penalty ranges: -2 to 2
Based on our testing, with both user feedback and internal evaluations, neither the Frequency nor the Presence penalty appears to have a significant impact when using the latest language models like GPT-3.5 Turbo and GPT-4.
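To see how the fields above map onto API parameters, here is a sketch assuming the OpenAI Python SDK (v1); the values shown mirror the defaults and suggestions in this section rather than required settings:

```python
# Sketch of how max tokens, temperature, and the two penalties map
# onto chat completion parameters, assuming the OpenAI Python SDK (v1).
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short product tagline."}],
    max_tokens=200,        # cap on tokens for this task (the suggested sweet spot)
    temperature=1,         # 0 (focused) to 2 (creative); default 1
    presence_penalty=0,    # -2 to 2; higher values encourage new topics
    frequency_penalty=0,   # -2 to 2; higher values discourage repeated words
)
print(response.choices[0].message.content)
```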
The AI generates tokens continuously, but you can define up to 4 specific sequences at which it will automatically stop generating further tokens. Stop sequences are useful for keeping output under control. Use a comma as a separator to input multiple stop sequences.
For example:
If you input a full stop (.), the response will stop generating when it reaches the first full stop in the response.
Alternatively, think of a generic use case where someone asks the AI to generate 5 blog posts. If you input the number 5 as a stop sequence, it will only create 4 blog posts, as '5' is the hard stop. This helps you control the amount of content and tokens used. It's great if you don't want to allow users to generate code (input a {) or if you want to limit the generation of masses of content.
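Here is a sketch of both examples as API parameters, assuming the OpenAI Python SDK (v1); generation halts just before the first matching sequence:

```python
# Sketch of stop sequences, assuming the OpenAI Python SDK (v1).
# Up to 4 sequences are allowed; output ends before the first match.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "List 5 blog post ideas, numbered."}],
    stop=["5", "{"],  # stops before item 5; blocks output that opens code with {
)
# Only ideas 1-4 are generated, since "5" is the hard stop.
print(response.choices[0].message.content)
```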
The default is 1. This sets how many completions to generate for each prompt. Note: because this parameter can generate many completions, it can quickly consume your token quota.
In basic terms, this is how many response options you want. In most cases it is just set to one.
You have two options to choose from: either Text or JSON (JavaScript Object Notation).
When using JSON mode, you must still instruct the model to produce JSON via the system message.
In most cases you will just use the Text option; JSON can be used for more advanced messaging when required.
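For reference, here is a sketch of JSON mode, assuming the OpenAI Python SDK (v1) and a model that supports response_format (e.g. gpt-3.5-turbo-1106 or later). Note that the system message still instructs the model to produce JSON, as described above:

```python
# Sketch of JSON mode, assuming the OpenAI Python SDK (v1) and a
# model version that supports response_format.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        # JSON mode still requires an instruction to produce JSON
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "Give me a name and price for a coffee product."},
    ],
)
print(response.choices[0].message.content)  # e.g. {"name": "...", "price": ...}
```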
In this example we are going to be Gary Vaynerchuk, a top marketing influencer.
Here is the input that we will test in the System Message block:
You're stepping into the shoes of Gary Vaynerchuk, a powerhouse in the marketing world renowned for his dynamic approach and no-nonsense attitude. Craft a response embodying Gary's signature tone, drawing insights from his extensive repertoire of blogs, podcasts, and books to deliver actionable guidance.
We are using the system field {{Last_Text_Input}}, meaning it will answer based on whatever the user's last text/message was. We have set Remember History to Yes, left the model at the default (GPT-3.5 Turbo), and left everything else at the defaults.
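For clarity, here is a rough sketch of what this configured action sends, assuming the OpenAI Python SDK (v1); {{Last_Text_Input}} is represented as a plain variable, since the platform substitutes it for you:

```python
# Rough sketch of this example's request, assuming the OpenAI
# Python SDK (v1). {{Last_Text_Input}} is shown as a variable.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

system_message = (
    "You're stepping into the shoes of Gary Vaynerchuk, a powerhouse in the "
    "marketing world renowned for his dynamic approach and no-nonsense attitude. "
    "Craft a response embodying Gary's signature tone, drawing insights from his "
    "extensive repertoire of blogs, podcasts, and books to deliver actionable guidance."
)
last_text_input = "What is your best advice about using ai for SaaS?"  # {{Last_Text_Input}}

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the default model, as configured above
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": last_text_input},
    ],
)
print(response.choices[0].message.content)
```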
Now, to test the response, type a question into the test value bar. Let's ask a question and view the response:
Question: What is your best advice about using ai for SaaS?
Now scroll down and click the button 'Test Request'
Great, it worked! In the image above you'll see a 200 response code, meaning everything worked and we received a reply. The main section we are focusing on is Content: "Listen up folks, because when it comes...". This is the AI's response to our question.
From here, you need to save the response.
Select 'Content' with the small circle until it turns blue. This highlights the data you want to save.
Next, on the right side you will see that a value has been added to the JSON Path. Below that, in the 'Map response to custom field' bar, select or create a custom field to save the response to.
Next, click the 'Add' button so it adds to the saved data at the bottom. Once done, it's ready to be used.
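If you are curious what that JSON Path points at, here is a sketch assuming the standard chat completion response shape, where the reply text lives at choices[0].message.content:

```python
# Sketch of where 'Content' sits in a standard chat completion
# response; this is the value the custom field mapping saves.
api_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Listen up folks, because when it comes ..."}}
    ]
}

content = api_response["choices"][0]["message"]["content"]
print(content)
```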
OK, let's put it to the test. Below we have added 2 additional nodes, both question blocks. The first block says "Hey it's Gary Vee, how can I help you bro?"
The question block to the right will display the AI response from the saved 'Content'.
Be sure to test this basic example so you understand the basics, from inputting the data to displaying it back via the question block.