Announcing function calling and JSON mode

January 31, 2024

By Together AI

We are excited to introduce JSON mode and function calling on Together Inference! These features give you more flexibility and control over your interactions with LLMs. We currently support them on Mixtral, Mistral, and CodeLlama, with more models coming soon. In this post, we'll introduce both features and walk through how to use them via the Together API!

Introduction to JSON mode and function calling

While both JSON mode and function calling can enhance your interaction with LLMs, it's important to understand that they are not interchangeable — they serve different purposes and offer unique benefits. Specifically:

JSON mode allows you to specify a JSON schema that the LLM's output must follow. This means you can dictate the format and data types of the response, leading to more structured and predictable output that suits your specific needs.

Function calling enables LLMs to intelligently output a JSON object containing arguments for external functions that you define. This is particularly useful when you need real-time data access, such as weather updates, product information, or stock market data, or when you want the LLM to be aware of functions you’ve defined. It also lets the LLM work out what information to gather from a user when it decides a function should be called. Our endpoint ensures that these function calls conform to the prescribed function schema, with the required arguments in the appropriate data types.

JSON Mode

With JSON mode, you can specify a schema for the output of the LLM. While the OpenAI API does not natively allow you to specify a JSON schema, we augmented the response_format argument with a schema field. When a schema is passed in, we constrain the model to generate output that conforms to it.
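Concretely, the augmented response_format is just the standard payload with an extra schema key holding a JSON Schema object. Here is a hand-written sketch of that shape (in practice you would generate the schema from a Pydantic model, as the examples below do):

```python
import json

# The response_format payload: OpenAI's "json_object" type, extended
# with a Together-specific "schema" key. The schema here is written
# by hand purely for illustration.
response_format = {
    "type": "json_object",
    "schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "user name"},
            "address": {"type": "string", "description": "address"},
        },
        "required": ["name", "address"],
    },
}

print(json.dumps(response_format, indent=2))
```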

Here's an example of how you can use JSON mode with Mixtral:

Example:

import os
import json
import openai
from pydantic import BaseModel, Field

# Create client
client = openai.OpenAI(
    base_url = "https://api.together.xyz/v1",
    api_key = os.environ['TOGETHER_API_KEY'],
)

# Define the schema for the output.
class User(BaseModel):
    name: str = Field(description="user name")
    address: str = Field(description="address")
    
# Generate
chat_completion = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    response_format={
        "type": "json_object", 
        "schema": User.model_json_schema()
    },
    messages=[
        {"role": "system", "content": "You are a helpful assistant that answers in JSON."},
        {"role": "user", "content": "Create a user named Alice, who lives in 42, Wonderland Avenue."}
    ],
)

created_user = json.loads(chat_completion.choices[0].message.content)
print(json.dumps(created_user, indent=2))

In this example, we define a schema for a User object that contains their name and address. The LLM then generates a response that matches this schema, providing a structured JSON object that we can reliably parse and use directly in our application.

The expected output of this example is:

{
  "address": "42, Wonderland Avenue",
  "name": "Alice"
}
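Because the schema comes from a Pydantic model, you can also validate the parsed output back into that model as a defensive check. A small sketch with a simulated response (in your application, raw_output would be chat_completion.choices[0].message.content):

```python
import json
from pydantic import BaseModel, Field, ValidationError

class User(BaseModel):
    name: str = Field(description="user name")
    address: str = Field(description="address")

# Simulated model output; in practice this comes from the API response.
raw_output = '{"name": "Alice", "address": "42, Wonderland Avenue"}'

try:
    # Parse and validate against the schema in one step
    user = User.model_validate_json(raw_output)
    print(f"{user.name} lives at {user.address}")
except ValidationError as err:
    print("Output did not match the schema:", err)
```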

More Examples:

Arrays and optional fields:

We also support list and optional fields. This example builds on the one above, adding an optional field called explanation.

from typing import List, Optional

client = openai.OpenAI(
    base_url = "https://api.together.xyz/v1",
    api_key = os.environ['TOGETHER_API_KEY'],
)

class Result(BaseModel):
    ordered_numbers: List[int]
    explanation: Optional[str] = None

chat_completion = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    response_format={
        "type": "json_object", 
        "schema": Result.model_json_schema()
    },
    messages=[
        {"role": "system", "content": "You are a helpful assistant that answers in JSON."},
        {"role": "user", "content": "Please output this list in order of DESC [1, 4, 2, 8]."}
    ]
)

response = json.loads(chat_completion.choices[0].message.content)
print(json.dumps(response, indent=2))

'''
{
  "ordered_numbers": [
    8,
    4,
    2,
    1
  ],
  "explanation": "The function 'desc' sorts the input list in descending order."
}
'''

Nested data types:

This example demonstrates handling nested data types in JSON responses. Two Pydantic models are defined: Address with fields street, city, country, and zip, and User with fields name, is_active, and address (which uses the Address model). The model generates a JSON response that creates a user with the given details, following the defined schema.

import os
import json
import openai
from pydantic import BaseModel, Field

# Create client
client = openai.OpenAI(
    base_url = "https://api.together.xyz/v1",
    api_key = os.environ['TOGETHER_API_KEY'],
)

# Define the schema for the output.
class Address(BaseModel):
    street: str
    city: str
    country: str
    zip: str

class User(BaseModel):
    name: str = Field(description="user name")
    is_active: bool = Field(default=True)
    address: Address
    
# Generate
chat_completion = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    response_format={
        "type": "json_object", 
        "schema": User.model_json_schema()
    },
    messages=[
        {"role": "system", "content": "You are a helpful assistant that answers in JSON."},
        {"role": "user", "content": "Create a user named Alice, who lives in 42, Wonderland Avenue, Wonderland city, 91234 Dreamland."}
    ],
)

created_user = json.loads(chat_completion.choices[0].message.content)
print(json.dumps(created_user, indent=2))

'''
{
  "name": "Alice",
  "address": {
    "street": "Wonderland Avenue",
    "city": "Wonderland",
    "country": "Dreamland",
    "zip": "91234"
  },
  "is_active": true
}
'''

For more detailed information, check out our documentation on JSON mode.

Function Calling

With function calling, the model outputs a JSON object containing arguments for external functions that you define. Once the functions are defined, the LLM intelligently determines whether one needs to be invoked and, if so, suggests the appropriate function with the correct parameters as a JSON object. You then execute the call within your application and relay the result back to the LLM so it can continue working.

Let's illustrate this process with a simple example: creating a chatbot that has access to weather data. The function is defined in tools:

Example:

import os
import json
import openai

# Create client
client = openai.OpenAI(
    base_url = "https://api.together.xyz/v1",
    api_key = os.environ['TOGETHER_API_KEY'],
)

# Define function(s)
tools = [
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": [
              "celsius",
              "fahrenheit"
            ]
          }
        }
      }
    }
  }
]
    
# Generate
response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the current temperature of New York?"}
    ],
    tools=tools,
    tool_choice="auto",
)

print(json.dumps(response.choices[0].message.model_dump()['tool_calls'], indent=2))


In this example, we define an external function that gets the current weather in a given location and make it available in our chat completion request. The model then generates a response containing a call to this function with the appropriate arguments; executing the call to retrieve real weather data is up to your application. The expected output is:

[
  {
    "id": "...",
    "function": {
      "arguments": "{\"location\":\"New York\",\"unit\":\"fahrenheit\"}",
      "name": "get_current_weather"
    },
    "type": "function"
  }
]

If tool_choice="auto", the model may choose not to invoke any function. To force a specific function, specify tool_choice={"type": "function", "function": {"name": "<function_name>"}}. Conversely, you can prevent the model from calling functions altogether with tool_choice="none".
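Once the model returns a tool call, executing it is your application's job. Here is a minimal dispatch sketch using only the standard library; the stubbed get_current_weather and the hard-coded tool call are illustrative stand-ins for a real implementation and a real API response:

```python
import json

# A local implementation matching the get_current_weather tool above.
# Stubbed data for illustration; a real version would query a weather API.
def get_current_weather(location, unit="fahrenheit"):
    return json.dumps({"location": location, "temperature": "11", "unit": unit})

# Dispatch table: tool name -> local callable
available_functions = {"get_current_weather": get_current_weather}

# A tool call in the shape the API returns (id elided)
tool_call = {
    "id": "...",
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "arguments": '{"location": "New York", "unit": "fahrenheit"}',
    },
}

# Look up the function by name, parse its arguments, and invoke it
fn = available_functions[tool_call["function"]["name"]]
args = json.loads(tool_call["function"]["arguments"])
result = fn(**args)
print(result)  # → {"location": "New York", "temperature": "11", "unit": "fahrenheit"}
```

The result string can then be appended to the conversation as a "tool" message, as shown in the multi-turn example later in this post.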

More Examples

Parallel function calling:

This example demonstrates parallel function calling for multiple inputs. The user asks for the current temperature in New York, San Francisco, and Chicago, and the model responds with a call to get_current_weather for each location, returned as an array of function call objects.

# Define function(s)
tools = [
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": [
              "celsius",
              "fahrenheit"
            ]
          }
        }
      }
    }
  }
]
    
# Generate
response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the current temperature of New York, San Francisco and Chicago?"}
    ],
    tools=tools,
    tool_choice="auto",
)

print(json.dumps(response.choices[0].message.model_dump()['tool_calls'], indent=2))

'''
[
  {
    "id": "...",
    "function": {
      "arguments": "{\"location\":\"New York, NY\",\"unit\":\"fahrenheit\"}",
      "name": "get_current_weather"
    },
    "type": "function"
  },
  {
    "id": "...",
    "function": {
      "arguments": "{\"location\":\"San Francisco, CA\",\"unit\":\"fahrenheit\"}",
      "name": "get_current_weather"
    },
    "type": "function"
  },
  {
    "id": "...",
    "function": {
      "arguments": "{\"location\":\"Chicago, IL\",\"unit\":\"fahrenheit\"}",
      "name": "get_current_weather"
    },
    "type": "function"
  }
]
'''


No function calling:

This example shows how the model behaves when the user request does not require a function call. The same get_current_weather function is defined as a tool, but the user asks for the location of Zurich, which the function cannot answer. The model generates a response informing the user that it cannot provide geographical information and can only retrieve the current weather in a given location.

# Define function(s)
tools = [
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": [
              "celsius",
              "fahrenheit"
            ]
          }
        }
      }
    }
  }
]
    
# Generate
response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Where is Zurich?"}
    ],
    tools=tools,
    tool_choice="auto",
)

print(response.choices[0].message.model_dump()['content'])

'''
I'm sorry, but I don't have the capability to provide geographical information. My current function allows me to retrieve the current weather in a given location. If you need help with that, feel free to ask!
'''


Multi-turn example:

This example shows how to use a function call in a multi-turn conversation to enrich the dialogue with function responses and generate a response based on the updated conversation history.

# Example function to make available to model
def get_current_weather(location, unit="fahrenheit"):
    """Get the weather for some location"""
    if "chicago" in location.lower():
        return json.dumps({"location": "Chicago", "temperature": "13", "unit": unit})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "55", "unit": unit})
    elif "new york" in location.lower():
        return json.dumps({"location": "New York", "temperature": "11", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

tools = [
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": [
              "celsius",
              "fahrenheit"
            ]
          }
        }
      }
    }
  }
]

messages = [
    {"role": "system", "content": "You are a helpful assistant that can access external functions. The responses from these function calls will be appended to this dialogue. Please provide responses based on the information from these function calls."},
    {"role": "user", "content": "What is the current temperature of New York, San Francisco and Chicago?"}
]
    
response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=messages,
    tools=tools,
    tool_choice="auto",
)

tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    for tool_call in tool_calls:
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)

        if function_name == "get_current_weather":
            function_response = get_current_weather(
                location=function_args.get("location"),
                unit=function_args.get("unit"),
            )
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )

    function_enriched_response = client.chat.completions.create(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
        messages=messages,
    )
    print(function_enriched_response.choices[0].message.model_dump()['content'])


'''
The current temperature in New York is 11 degrees Fahrenheit, in San Francisco it is 55 degrees Fahrenheit, and in Chicago it is 13 degrees Fahrenheit.
'''


For more detailed information, check out our documentation on function calling.

Conclusion

We believe that JSON mode and function calling are a significant step forward, bringing a new level of versatility and functionality to AI applications. By enabling a more structured interaction with the model and allowing for specific types of outputs and behaviors, we're confident that it will be a valuable tool for developers.

We can't wait to see what you build on Together AI! For more info, check out our function calling and JSON mode docs.
