Large Language Models
OpenAI
Available models
What is an OpenAI node?
Node in Stack AI
The OpenAI node allows you to integrate OpenAI’s large language models (LLMs) into your workflows.
OpenAI offers the following models:
- GPT-4o: This is the latest and most advanced version from OpenAI, boasting a massive context window of 128,000 tokens, equivalent to over 300 pages of text in one go. It represents a significant leap forward in natural language understanding and generation capabilities.
- GPT-4 Turbo Preview: Similar to GPT-4, this variant also features a whopping 128,000 token context window, showcasing its ability to handle extensive textual inputs efficiently. It’s designed for users who require deep insights from large volumes of text.
- GPT-4 32K: With a substantial context window of 32,000 tokens, this model strikes a balance between performance and resource usage. It’s ideal for applications that need to process lengthy texts without overwhelming computational resources.
- GPT-4: Equipped with an 8,000 token context window, GPT-4 offers robust capabilities for handling complex tasks involving longer texts. Its design focuses on delivering high-quality outputs within manageable limits.
- GPT-3.5 Turbo 16k: This model stands out with its 16,000 token context window, providing a solid foundation for applications needing detailed analysis of moderately sized texts. It combines efficiency with depth, making it suitable for a wide range of NLP tasks.
- GPT-3.5 Turbo: Known for its excellent balance between speed and cost-effectiveness, GPT-3.5 Turbo operates with a 4,000 token context window. It’s perfect for scenarios where quick processing of shorter texts is required without compromising on quality.
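Since the models above trade context-window size against cost, a workflow often wants the smallest model whose window still fits the prompt. The helper below is an illustrative sketch (not a Stack AI or OpenAI API), using the context windows listed above and a hypothetical reply budget:

```python
# Context windows (in tokens) from the model list above.
CONTEXT_WINDOWS = {
    "gpt-3.5-turbo": 4_000,
    "gpt-3.5-turbo-16k": 16_000,
    "gpt-4": 8_000,
    "gpt-4-32k": 32_000,
    "gpt-4-turbo-preview": 128_000,
    "gpt-4o": 128_000,
}

def smallest_model_for(prompt_tokens: int, reply_budget: int = 500) -> str:
    """Pick the model with the smallest context window that can still hold
    the prompt plus a reply budget. Purely illustrative selection logic."""
    needed = prompt_tokens + reply_budget
    candidates = [(window, model) for model, window in CONTEXT_WINDOWS.items()
                  if window >= needed]
    if not candidates:
        raise ValueError(f"No model can fit {needed} tokens")
    return min(candidates)[1]
```

For example, a 3,000-token prompt fits comfortably in GPT-3.5 Turbo, while a 10,000-token prompt needs at least the 16k variant.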
How to use it?
To use the OpenAI node, you must establish the following connections:
- Input: This node requires a text input. Typically, you would connect the input to an Input node (e.g., a message from a user) or to another LLM node (which generates text for this new LLM).
- Output: This node outputs the response from the LLM.
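The data flow above can be sketched as plain functions, where one node's output feeds the next node's input. This is an illustration of the wiring only, not Stack AI code; the OpenAI node here is stubbed out rather than calling the real API:

```python
# Illustrative sketch: nodes as functions chained output-to-input.
def input_node(user_message: str) -> str:
    # An Input node simply passes the user's message along.
    return user_message

def openai_node(text: str) -> str:
    # In a real workflow this would call the OpenAI API; here we just
    # echo the input to show the data flow.
    return f"LLM response to: {text}"

response = openai_node(input_node("Hello"))
```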
On the OpenAI node, you will find different field boxes and parameters.
- System: This field box is where you specify how you'd like the LLM to respond (e.g., tone, style, language, etc.). Typically, you'd specify things that won't change throughout the conversation with the user.
- Prompt: This field box is where you specify where the question from the user is coming from, what context to consider, etc. Connected inputs will pop up as labels below the prompt.
A warning will appear if you haven't specified each input in the prompt. You include inputs by using curly brackets (e.g., {in-0}).
- Formatted prompt: In this field, once you run the flow, you will see the prompt that is sent via API to OpenAI, with all inputs substituted by their text values (e.g., the question from the user, chunks of information from your knowledge bases, etc.).
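The substitution that produces the formatted prompt can be sketched as follows. This is a hypothetical reimplementation for illustration, not Stack AI's actual code; it replaces each {in-0}-style label with the connected input's value and fails loudly when an input is missing, mirroring the warning described above:

```python
import re

def format_prompt(prompt: str, inputs: dict) -> str:
    """Replace {in-0}-style labels with their connected input values."""
    def substitute(match):
        key = match.group(1)
        if key not in inputs:
            raise KeyError(f"Input {{{key}}} is not connected")
        return inputs[key]
    return re.sub(r"\{([\w-]+)\}", substitute, prompt)

formatted = format_prompt(
    "Answer the question: {in-0}\nContext: {in-1}",
    {"in-0": "What is Stack AI?", "in-1": "Stack AI builds LLM workflows."},
)
```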
- Tokens: In the bottom right corner, you will see a count of the tokens used in the prompt. This is important because OpenAI charges per token.
- Latency: Also in the bottom right corner, you will see the latency of the node: how long it took to run.
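Both metrics are straightforward to approximate yourself. The sketch below uses the common rule of thumb that one token is roughly four characters of English text (for exact counts you would use OpenAI's tokenizer), and wraps a call with a wall-clock timer; the function names are illustrative:

```python
import time

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def timed(fn, *args):
    """Run fn(*args) and return (result, latency_in_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start
```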