Announcing function calling and JSON mode
We're excited to introduce JSON mode and function calling on Together Inference! These features give you more flexibility and control over your interactions with LLMs. We currently support them on Mixtral, Mistral, and CodeLlama, with more models coming soon. In this post, we'll introduce both features and walk you through using them with the Together API.
Introduction to JSON mode and function calling
While both JSON mode and function calling can enhance your interaction with LLMs, it's important to understand that they are not interchangeable — they serve different purposes and offer unique benefits. Specifically:
JSON mode lets you specify a JSON schema that the LLM's output must follow. This means you can dictate the format and data types of the response, producing structured, predictable output that suits your specific needs.
Function calling enables LLMs to intelligently output a JSON object containing arguments for functions you define. This is particularly useful when you need real-time data, such as weather updates, product information, or stock market prices, or when you simply want the LLM to be aware of certain functions. It also lets the LLM decide what information to gather from the user once it determines a function should be called. Our endpoint ensures that these function calls conform to the prescribed function schema, with the required arguments in the appropriate data types.
JSON Mode
With JSON mode, you can specify a schema for the output of the LLM. While the OpenAI API does not natively support passing a JSON schema, we augmented the response_format argument with a schema field. When a schema is passed in, we constrain the model to generate output that conforms to it.
Here's an example of how you can use JSON mode with Mixtral:
Example:
In this example, we define a schema for a User object containing a name and an address. The LLM then generates a response that matches this schema, giving us a structured JSON object we can use directly in our application in a deterministic way.
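As a minimal sketch of such a request, the snippet below defines the User model with Pydantic and passes its JSON schema to the chat completions endpoint via the OpenAI-compatible Python client pointed at Together's base URL. The model name, field descriptions, and prompt are illustrative, not prescriptive.

```python
import os

from pydantic import BaseModel, Field


class User(BaseModel):
    """Shape of the structured output we want from the model."""
    name: str = Field(description="The user's full name")
    address: str = Field(description="The user's mailing address")


# The Pydantic model compiles down to a standard JSON schema dict.
schema = User.model_json_schema()

if os.environ.get("TOGETHER_API_KEY"):
    import openai  # OpenAI-compatible client pointed at Together's endpoint

    client = openai.OpenAI(
        base_url="https://api.together.xyz/v1",
        api_key=os.environ["TOGETHER_API_KEY"],
    )
    response = client.chat.completions.create(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
        response_format={"type": "json_object", "schema": schema},
        messages=[
            {"role": "system", "content": "You answer in JSON only."},
            {"role": "user", "content": "Create a user named Alice living at 42 Main St, Springfield."},
        ],
    )
    # Parse the schema-conforming JSON straight back into the Pydantic model.
    user = User.model_validate_json(response.choices[0].message.content)
    print(user)
```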
The output is a JSON object with the user's name and address, conforming exactly to the schema.
More Examples:
Arrays and optional arguments:
We also support optional arguments. This example builds on the one above, adding an optional argument called explanation.
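A sketch of what such a schema might look like: the model below adds an array field (interests, an assumed name for illustration) and the optional explanation field, which has a default and is therefore not listed as required in the generated JSON schema.

```python
from typing import List, Optional

from pydantic import BaseModel, Field


class User(BaseModel):
    name: str = Field(description="The user's full name")
    # Array field: the model must return a JSON array of strings.
    interests: List[str] = Field(description="A list of the user's interests")
    # Optional field: it has a default, so the model may omit it.
    explanation: Optional[str] = Field(
        default=None, description="Optional note on how the values were chosen"
    )


schema = User.model_json_schema()
# Passed to the API exactly as before:
# response_format={"type": "json_object", "schema": schema}
```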
Nested data types:
This example demonstrates handling nested data types in JSON responses. Two Pydantic models are defined: Address, with fields street, city, country, and zip; and User, with fields name, is_active, and address (which uses the Address model). The model generates a JSON response that creates a user with the given details, following the defined schema.
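The two models described above can be sketched as follows (assuming Pydantic v2, where nested model schemas are emitted under "$defs" and referenced from the parent property):

```python
from pydantic import BaseModel


class Address(BaseModel):
    street: str
    city: str
    country: str
    zip: str


class User(BaseModel):
    name: str
    is_active: bool
    address: Address  # nested model


schema = User.model_json_schema()
# The Address schema lands under "$defs" and is referenced from the
# "address" property, so the model must return a nested JSON object.
```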
For more detailed information, check out our documentation on JSON mode.
Function Calling
With function calling, the LLM outputs a JSON object containing arguments for functions you define. Once the functions are defined, the LLM intelligently determines whether one needs to be invoked and, if so, suggests the appropriate function with the correct parameters as a JSON object. You can then execute the call in your application and relay the response back to the LLM so it can continue working.
Let's illustrate this process with a simple example: a chatbot with access to weather data. The function is defined in tools:
Example:
In this example, we define an external function that gets the current weather in a given location, then use it in our chat completion request. The model generates a response that includes calls to this function, providing real-time weather data for the requested locations.
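A minimal sketch of the request: the tools list describes get_current_weather as a JSON schema, and any suggested calls come back as tool_calls with JSON-encoded arguments. The parameter names and model choice are illustrative.

```python
import json
import os

# JSON-schema description of the external weather function (illustrative).
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

if os.environ.get("TOGETHER_API_KEY"):
    import openai  # OpenAI-compatible client pointed at Together's endpoint

    client = openai.OpenAI(
        base_url="https://api.together.xyz/v1",
        api_key=os.environ["TOGETHER_API_KEY"],
    )
    response = client.chat.completions.create(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
        messages=[{"role": "user", "content": "What is the current weather in Boston?"}],
        tools=tools,
        tool_choice="auto",
    )
    # Each suggested call arrives with JSON-encoded arguments.
    for call in response.choices[0].message.tool_calls or []:
        print(call.function.name, json.loads(call.function.arguments))
```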
If tool_choice="auto", the model may choose not to invoke any function. To force a particular function, specify tool_choice={"type": "function", "function": {"name": "<function_name>"}}. To prevent the model from calling functions altogether, specify tool_choice="none".
More Examples
Parallel function calling:
This example demonstrates calling a function in parallel for multiple inputs. The user asks for the current temperature in New York, San Francisco, and Chicago. The model generates a response that calls the get_current_weather function for each location in parallel, returning an array of function call objects.
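The application side of that round trip can be sketched like this, using a hypothetical tool_calls payload shaped like the one the endpoint returns, and a stub standing in for a real weather API:

```python
import json


def get_current_weather(location, unit="fahrenheit"):
    """Stub standing in for a real weather API call."""
    return {"location": location, "temperature": "70", "unit": unit}


# Hypothetical tool_calls array, shaped like the one the chat endpoint
# returns when the model decides to call the function for each city.
tool_calls = [
    {"function": {"name": "get_current_weather",
                  "arguments": json.dumps({"location": city, "unit": "fahrenheit"})}}
    for city in ["New York, NY", "San Francisco, CA", "Chicago, IL"]
]

# Execute every suggested call and collect the results.
results = [
    get_current_weather(**json.loads(call["function"]["arguments"]))
    for call in tool_calls
]
print(results)
```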
No function calling:
This example shows how the model behaves when the user request does not require a function call. The same function get_current_weather
is defined as a tool, but the user request is to find the location of Zurich, which does not require the function. The model generates a response that informs the user it cannot provide geographical information and only retrieves the current weather in a given location.
Multi-turn example:
This example shows how to use a function call in a multi-turn conversation to enrich the dialogue with function responses and generate a response based on the updated conversation history.
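The multi-turn flow above can be sketched as follows: append the model's tool call to the conversation, run the function locally, append its result as a "tool" message, and send the enriched history back. The id, arguments, and weather values are illustrative placeholders.

```python
import json

# Conversation so far: the user's question...
messages = [{"role": "user", "content": "What's the weather in San Francisco?"}]

# ...followed by the model's tool call from the first completion
# (shape shown for illustration; ids and arguments come from the API).
messages.append({
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "arguments": json.dumps({"location": "San Francisco, CA"}),
        },
    }],
})

# Execute the function locally and append its result as a "tool" message.
weather = {"temperature": "58", "unit": "fahrenheit"}
messages.append({
    "role": "tool",
    "tool_call_id": "call_1",
    "name": "get_current_weather",
    "content": json.dumps(weather),
})

# The enriched history is then sent back for the final, natural-language turn:
# client.chat.completions.create(model=..., messages=messages, tools=tools)
```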
For more detailed information, check out our documentation on function calling.
Conclusion
We believe JSON mode and function calling are a significant step forward, bringing a new level of versatility and functionality to AI applications. By enabling more structured interactions with the model and allowing specific types of outputs and behaviors, we're confident they will be valuable tools for developers.
We can't wait to see what you build on Together AI! For more info, check out our function calling and JSON mode docs.