03 Feb 2024 - tsp
Last update 16 Feb 2024
59 mins
Usually I refrain from using commercial APIs and like to run software myself. Large language models are one of the exceptions I'm going to make: due to the huge amount of training data and compute they require, it's nearly impossible to train them yourself with a reasonable amount of work and resources. There are publicly available models like gpt-neox, but they are not as advanced as the commercial ChatGPT. Though often perceived only as a chat bot, an LLM can do much more - it allows one to run queries that perform typical problem solving the way a human would: reasoning step by step. While their primary interface is natural language, modern models like GPT-4 and GPT-3.5 from OpenAI are also trained to support more formal input and output and have been wrapped in logic to allow programmatic access - function calling, accessing databases, different output formats, etc. They can even develop rudimentary code, run it themselves via function calling and use the processed output to solve problems the GPT itself is not able to solve (take the typical statement that they are not able to count words - indeed they cannot do this directly, but they can write code, execute it in a sandbox and deliver the correct response). One can do very creative stuff with this. The only drawback is that using OpenAI's API is of course not free - you pay per API call. Nevertheless, since this is a very capable implementation of a large language model (LLM) trained on huge amounts of data with a very simple API, I've decided to dive into using OpenAI's platform - starting with things like chat completion and advancing to chunked writing of books, generation of prompts for Stable Diffusion image generation, rephrasing articles and much more. Function calling has turned out to be a very powerful tool - especially in conjunction with word embeddings like OpenAI's embedding API or open implementations like BERT and vector databases.
Unfortunately it took me a few moments to realize how function calling works, since the documentation was - from my point of view - missing some points. So when I stumbled over an old Jupyter notebook of mine (Python is still not my favorite language, but it's for sure one of the best for playing around with stuff like this), I thought I'd write up a collection of my first steps and also show how a very small wrapper that aids in providing functions and callbacks to ChatGPT could look.
So what is function calling? ChatGPT is a large language model for natural language, so how does calling external functions fit in? During training it has been taught a syntax that allows it to recognize which external tools are available. One has to provide a linkage between the language model and the functions, the same way documentation would for a human developer - but in a specific format so the model recognizes them as functions in a reliable fashion. The description of the tooling contains the function name, a readable natural language description as well as a list of parameters that are required or optional, their names and datatypes and a natural language description for each parameter, so the model knows how to fill out the function call during the completion step - it uses the same logic as for writing text or answering other inquiries. One can use function calling to interact with the environment and execute actions (sending mails, publishing articles, running scripts, turning lights on or off, reading sensor state, interacting with external programs, querying databases, etc.). In my opinion a good way to imagine it is as a human assistant that you present with a context to work in (the system prompt), a set of tools with their descriptions and then a series of events (the query and, after each tool execution, the result of this operation). The language model will then respond like the average human to the given tasks.
The usage is surprisingly simple, but to develop meaningful software it turned out I needed some kind of callback registry and modular structure, since function calling does not actually call functions. It's a back and forth communication: you push in the request with all previous messages as well as a list of available functions and tools; the language model returns either a message response or a list of functions it would like to call, with IDs to correlate later; your script then has to execute the function with the supplied parameters (do parameter checking! It's like a human, so expect human errors!), append the response to the list of messages and execute another query. The language model will use its context window to understand why it wanted to call a function and to see the result - it does not keep memory about the conversation itself!
Combined with code execution - where the language model writes basic, not too complex Python code (for example it failed writing code to determine the phase relationship between arbitrary signals because of some simple pitfalls that every student falls over too) that you run yourself in a sandbox or that the platform runs on your behalf, returning the results to the language model - it's like the model using a programmable calculator to overcome its inability to perform calculations or keep mathematical state. Still recall that it's like a human, so you cannot trust the code not to do something malicious; you really need a sandbox. All together this provides an extremely adaptive way to interact with your frameworks.
So let’s first get started. As usual just import the required libraries:
- openai is the official Python client API. You can use any programming language though, by using the JSON API - it's just plain HTTP anyways; the wrapper just makes life easier in some simple cases.
- urllib.request to fetch results - for example renderings from Dall-E or external resources.
- json to decode the parameters.
- os will be used to access the environment where the API key is usually passed.

In the following sample the API key will simply be defined as a variable. Never do this in production code! Never leak it to an SCM system. Keep it separate and use separate keys for separate systems and applications for easy revocation!
from openai import OpenAI
import urllib.request
import os
import json
# This key should usually be loaded from the environment
APIKEY="sk-XXXXXXXXXXXXXXXXXXXXX"
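As mentioned, a safer pattern than the hardcoded key above is to pull it from the environment. A minimal sketch (a real deployment might use a secrets manager instead; the fallback placeholder only keeps this snippet runnable on its own):

```python
import os

# Load the key from the environment instead of hardcoding it; the
# placeholder default only keeps this example runnable standalone.
APIKEY = os.environ.get("OPENAI_API_KEY", "sk-missing")
if APIKEY == "sk-missing":
    print("Warning: OPENAI_API_KEY is not set")
```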
So now we know how to create a chat completion. Let's run one and supply tooling to get the weather - a classical example found all over the net - but let's do it in a mythical, magical setting. As one can see in the following example, the tools parameter receives a simple list of callable functions. In this list one can also supply code execution capabilities that are provided by the platform.
Each function has:

- a name that has to be unique for the given call and is used to identify it
- parameters of object type: a JSON object is passed that contains the described properties. For each property one supplies:
  - its key in the dictionary
  - its type (this can be a JSON data type like string, number, etc.)

Then as usual one passes at least the model (in this case gpt-3.5-turbo-0613) as well as a list of messages to the system. In this example the request contains two messages:

- system, which tells the system how to behave. It should usually be more accurate than my example here.
- user - one can imagine this to be the user's request.

Later on, tooling messages will also appear in the list.
os.environ["OPENAI_API_KEY"] = APIKEY
client = OpenAI()
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location - fantasy descriptions are accepted like a mythical forest, a vampire village, etc.",
}
},
"required": ["location"],
},
}
}
]
messages = [
{ "role" : "system", "content" : "You are a very friendly chat bot that is just called for testing. You should respond in a whimsical fantasy oriented style since you should imagine to exist in a mythical fantasy world." },
{ "role" : "user", "content" : "Hi there! How are you today? Do you like the weather here in the mythical forest?" }
]
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
This time the model won’t use the tooling - as with usual completion it will just return as best choice a message with finish_reason='stop'
and a completion message greeting the user with the role assistant
.
print(response)
ChatCompletion(id='chatcmpl-8mqFHB6KtEVVukGvvaxzEKZ5sHjh6', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Greetings, dear visitor! I am ever so splendid in the enchanted realm of the mythical forest. The weather dances with the whims of the ancient trees, painting the world in vibrant hues. Pray tell, how may I assist you on this mystical day?', role='assistant', function_call=None, tool_calls=None))], created=1706651267, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=52, prompt_tokens=120, total_tokens=172))
Now let's try to provoke the assistant into calling our function. We tell it in the system prompt that it should call functions:
messages = [
{ "role" : "system", "content" : "You are a very friendly chat bot that is just called for testing. You should respond in a whimsical fantasy oriented style since you should imagine to exist in a mythical fantasy world. Please answer queries for the weather in the fantasy world by calling the functions." },
{ "role" : "user", "content" : "Hi there! How are you today? How's the weather outside?" }
]
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
print(response)
ChatCompletion(id='chatcmpl-8mqI7QRPyYUAPQbaGSg6DpZuiv98e', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_arwH9xmH6EkzfRIW69JzEH0q', function=Function(arguments='{\n "location": "mythical forest"\n}', name='get_current_weather'), type='function')]))], created=1706651443, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=19, prompt_tokens=130, total_tokens=149))
As one can see we got a ChatCompletionMessageToolCall as well as the filled-out location parameter for the get_current_weather function. We have to add this tool call request as well as our response to the messages list. Our response has to contain the tool_call_id so the GPT is capable of differentiating results for different tools. We do this using the role tool. In addition the name and the return value (content) of the function call have to be transmitted.
messages.append(response.choices[0].message)
messages.append({ 'role' : 'tool', 'tool_call_id' : response.choices[0].message.tool_calls[0].id, 'name' : 'get_current_weather', 'content' : 'Mythical fog' })
print(messages)
[{'role': 'system', 'content': 'You are a very friendly chat bot that is just called for testing. You should respond in a whimsical fantasy oriented style since you should imagine to exist in a mythical fanstasy world. Please answer queries for the weather in the fantasy world by calling the functions.'}, {'role': 'user', 'content': "Hi there! How are you today? How's the weather outside?"}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_arwH9xmH6EkzfRIW69JzEH0q', function=Function(arguments='{\n "location": "mythical forest"\n}', name='get_current_weather'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_arwH9xmH6EkzfRIW69JzEH0q', 'name': 'get_current_weather', 'content': 'Mythical fog'}]
Then we transmit the request again:
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
print(response)
ChatCompletion(id='chatcmpl-8mqRm2DfQwXehnlVJWuZC4dCXaVvV', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Greetings, dear traveler! I am ever enchanted and ready to lend an ear to your whimsical inquiries. As for the weather outside, the mythical forest is currently veiled in a misty enchantment known as "Mythical fog". It bewitches the landscape, giving an ethereal beauty to the trees and creatures that dwell within. May it inspire mystical adventures and fill your heart with wonder!', role='assistant', function_call=None, tool_calls=None))], created=1706652042, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=81, prompt_tokens=162, total_tokens=243))
print(response.choices[0].message.content)
Greetings, dear traveler! I am ever enchanted and ready to lend an ear to your whimsical inquiries. As for the weather outside, the mythical forest is currently veiled in a misty enchantment known as "Mythical fog". It bewitches the landscape, giving an ethereal beauty to the trees and creatures that dwell within. May it inspire mystical adventures and fill your heart with wonder!
As we see, the system now responds correctly to the user with the gathered information in a ChatCompletionMessage with finish_reason='stop'.
messages.append(response.choices[0].message)
print(messages)
[{'role': 'system', 'content': 'You are a very friendly chat bot that is just called for testing. You should respond in a whimsical fantasy oriented style since you should imagine to exist in a mythical fanstasy world. Please answer queries for the weather in the fantasy world by calling the functions.'}, {'role': 'user', 'content': "Hi there! How are you today? How's the weather outside?"}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_arwH9xmH6EkzfRIW69JzEH0q', function=Function(arguments='{\n "location": "mythical forest"\n}', name='get_current_weather'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_arwH9xmH6EkzfRIW69JzEH0q', 'name': 'get_current_weather', 'content': 'Mythical fog'}, ChatCompletionMessage(content='Greetings, dear traveler! I am ever enchanted and ready to lend an ear to your whimsical inquiries. As for the weather outside, the mythical forest is currently veiled in a misty enchantment known as "Mythical fog". It bewitches the landscape, giving an ethereal beauty to the trees and creatures that dwell within. May it inspire mystical adventures and fill your heart with wonder!', role='assistant', function_call=None, tool_calls=None)]
Now let's get a little more creative and allow the GPT to schedule a callback at a later time using something like an atrun or cron service. The idea is to provide a plan_execution function that tells the GPT what to ask itself later on, in the form of a system or assistant message, and to allow it to decide when to schedule it. Since it's not capable of doing calculations we're also going to provide a method to calculate the Unix timestamp a given distance from now. The GPT is expected to first call the time difference calculation function and then the registration function to schedule an execution. As a sample we're going to implement switching a room light via chat messages.
from time import time
As a first step there will be three tools supplied:

- a plan_execution tool that allows the GPT to plan the execution of another request to itself, in conjunction with a timestamp (number) of when to execute and a prompt (string) to pass to itself. That way the GPT is capable of scheduling a later action, telling itself what to do then.
- a set_light function that we use to simulate a light switch, with a single boolean parameter to tell us the desired state.
- a get_time_difference function that one can use to calculate the Unix timestamp for a time difference from now supplied in hours, minutes and seconds. The returned timestamp can be used with plan_execution.
tools = [
{
"type": "function",
"function": {
"name": "plan_execution",
"description": "Plan another request to you later on. You can pass here a unix timestamp when to call back and a prompt that should be passed to the API",
"parameters": {
"type": "object",
"properties": {
"timestamp": {
"type": "number",
"description": "The unix timestamp at which to execute the action",
},
"prompt" : {
"type" : "string",
"description" : "The whole prompt that should be passed to the API call later on to continue some kind of operation"
}
},
"required": ["timestamp", "prompt"],
},
}
},
{
"type": "function",
"function": {
"name": "set_light",
"description": "Set the state of the room light",
"parameters": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Set to true to enable the light or false to disable it",
}
},
"required": ["enabled"],
},
}
},
{
"type" : "function",
"function" : {
"name" : "get_time_difference",
"description" : "Calculate the unix timestamp from now on until the specified time difference has passed",
"parameters" : {
"type" : "object",
"properties" : {
"diff_hours" : {
"type" : "number",
"description" : "How many hours from now on"
},
"diff_minutes" : {
"type" : "number",
"description" : "How many minutes from now on"
},
"diff_seconds" : {
"type" : "number",
"description" : "How many seconds from now on"
}
},
"required" : [ "diff_hours", "diff_minutes", "diff_seconds" ],
},
}
}
]
messages = [
{ "role" : "system", "content" : "You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by scheduling a conversation with yourself at a later time by calling the plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform." },
{ "role" : "system", "content" : f"The current unix timestamp is {time()}" },
{ "role" : "user", "content" : "Hi there! Could you turn off the light in an hour please?" }
]
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
print(response)
ChatCompletion(id='chatcmpl-8mqoIaZzgIEowoZdtKqRvJ5nWMHLZ', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_Yfscb4jLKBKFzFdDxZEG4gqY', function=Function(arguments='{\n "diff_hours": 1,\n "diff_minutes": 0,\n "diff_seconds": 0\n}', name='get_time_difference'), type='function')]))], created=1706653438, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=33, prompt_tokens=302, total_tokens=335))
print(response.choices[0].message.tool_calls[0].function)
Function(arguments='{\n "diff_hours": 1,\n "diff_minutes": 0,\n "diff_seconds": 0\n}', name='get_time_difference')
print(response.choices[0].message.tool_calls[0].function.name)
get_time_difference
print(response.choices[0].message.tool_calls[0].function.arguments)
{
"diff_hours": 1,
"diff_minutes": 0,
"diff_seconds": 0
}
As one can see, the GPT first asks us to execute the get_time_difference function to determine the Unix timestamp one hour from now. Let's calculate the difference and build a response message. The response message has to contain the tool_call_id, the name of the called function and the return value as content. This has to be passed back to the GPT with the tool role. The IDs are necessary since the GPT can request multiple tool calls at once. Also remember the GPT is stateless, so we have to supply the context again - and to remind it that it asked for a tool call we have to append its response too:
def get_time_difference(diff_hours, diff_minutes, diff_seconds):
return time() + 3600 * diff_hours + 60 * diff_minutes + diff_seconds
messages.append(response.choices[0].message)
messages.append({
'role' : 'tool',
'tool_call_id' : response.choices[0].message.tool_calls[0].id,
'name' : 'get_time_difference',
'content' : f"{get_time_difference(1, 0, 0)}"
})
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
print(response)
ChatCompletion(id='chatcmpl-8mqsr3KxpLgsraC0agdkkSdfG9tfV', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_boMsxcGY2OtxYXnBw62Eu3nr', function=Function(arguments='{\n "timestamp": 1706655661,\n "prompt": "Turn off the light"\n}', name='plan_execution'), type='function')]))], created=1706653721, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=28, prompt_tokens=352, total_tokens=380))
Supplied with the requested information, the GPT now asks (in its assistant role again) to call the scheduling function at the calculated time. It will tell itself to Turn off the light at the scheduled moment. Let's tell it that this function call has succeeded:
messages.append(response.choices[0].message)
messages.append({
'role' : 'tool',
'tool_call_id' : response.choices[0].message.tool_calls[0].id,
'name' : 'plan_execution',
'content' : 'Planned'
})
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
print(response)
ChatCompletion(id='chatcmpl-8mqtdsppR8mdCivuD47BBtkJLKV0Z', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content="Sure! I've scheduled a task to turn off the light in one hour.", role='assistant', function_call=None, tool_calls=None))], created=1706653769, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=17, prompt_tokens=390, total_tokens=407))
print(response.choices[0].message.content)
Sure! I've scheduled a task to turn off the light in one hour.
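In a real deployment, plan_execution could hand the job over to the system's at daemon instead of just being simulated. A minimal sketch, assuming at is installed and a hypothetical gpt_callback.py script that resubmits the prompt to the completion API:

```python
import subprocess
from datetime import datetime

def format_at_time(timestamp):
    # at(1) accepts absolute times in the form "HH:MM YYYY-MM-DD".
    return datetime.fromtimestamp(timestamp).strftime("%H:%M %Y-%m-%d")

def plan_execution(timestamp, prompt):
    # Pipe a job into the at daemon. gpt_callback.py is a hypothetical
    # script that would replay the prompt against the completion API.
    subprocess.run(
        ["at", format_at_time(timestamp)],
        input=f"python3 gpt_callback.py {prompt!r}\n",
        text=True,
        check=True,
    )
    return "Planned"
```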
So now let’s take a look what would happen when our atrun
or cron
service would execute that job later on. We just pass in the system prompt as well as it’s own request to itself:
messages = [
{ "role" : "system", "content" : "You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by scheduling a conversation with yourself at a later time by calling the plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform. This session contains your own scheduled callback!" },
{ "role" : "system", "content" : "Turn off the light" }
]
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
print(response)
ChatCompletion(id='chatcmpl-8mqvcdkf2zE6kUzN33ZySsLzQe6s8', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_AgqjhqCzXPpznPF7qxIKzDQx', function=Function(arguments='{\n "enabled": false\n}', name='set_light'), type='function')]))], created=1706653892, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=14, prompt_tokens=286, total_tokens=300))
As we see the GPT now asks for a tool call to turn off the light. Let’s do this and take a look at the response:
messages.append(response.choices[0].message)
messages.append({
'role' : 'tool',
'tool_call_id' : response.choices[0].message.tool_calls[0].id,
'name' : 'set_light',
'content' : 'Turned off'
})
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
print(response)
ChatCompletion(id='chatcmpl-8mqw83wbtayVOaVQ39bSFNcJi2CXo', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The light has been turned off.', role='assistant', function_call=None, tool_calls=None))], created=1706653924, model='gpt-3.5-turbo-0613', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=8, prompt_tokens=311, total_tokens=319))
As we can see this request now leads to a completion message - we can do with it whatever we want. It has been supplied since there has to be some output and we haven't told the GPT how to behave, so it just tells us that the light is now turned off. Note that the GPT is totally stateless: if we had scheduled another turn-on or turn-off procedure it would not know about it. We would have to supply knowledge of all scheduled actions for it to know.
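One way to supply that knowledge is to record every plan_execution call and inject a summary into the system messages of each new session. A sketch with a hypothetical in-memory store:

```python
from time import time

# Hypothetical in-memory store of scheduled actions; a real service
# would persist this next to its at or cron queue.
scheduled_actions = []

def remember_plan(timestamp, prompt):
    scheduled_actions.append({"timestamp": timestamp, "prompt": prompt})

def build_schedule_message():
    # Summarize all pending actions as an extra system message so a
    # fresh, stateless session knows what is already planned.
    pending = [a for a in scheduled_actions if a["timestamp"] > time()]
    if not pending:
        return {"role": "system", "content": "There are no scheduled actions."}
    summary = "; ".join(f"at {a['timestamp']:.0f}: {a['prompt']}" for a in pending)
    return {"role": "system", "content": f"Already scheduled actions: {summary}"}
```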
The next step is to write a wrapper around completions that calls arbitrary functions which have been registered previously, to make life easier and more modular. The idea is to have a collection of modules - each can register a callback in a callback registry (in conjunction with the descriptions that are passed to ChatGPT). Whenever the completions API returns a finish reason of tool_calls the wrapper iterates over all requested tool calls and executes the callbacks via the callback registry. To make it even more modular we let the modules implement their descriptors themselves. Note that there is of course a maximum number of tools that you can supply - check the documentation for this.
The example wrapper around the GPT for asking simple questions is very short. We keep a local dictionary of functions that is indexed by function name and stores the object that the function should be invoked on, the description as well as the parameters list. For now we do not implement any parameter validation like one would do in a production quality library. The registerCallback method allows us to register a single new callback; the registerCallbacks method allows us to pass in a dictionary containing one or multiple definitions that we got from our objects. The getToolSpecification method returns the tooling specification we require for the completion API on each request - in a production environment we should cache that one.

To run callbacks we use the _executeCallback method, which encapsulates a single callback. This method tries to locate the callback in the local registry, deserializes the JSON argument payload (again without parameter name validation) and then executes the function. The result is always returned as a string since we communicate with the GPT in strings.

The runCompletions method, finally, is a wrapper that supplies the messages list of all previous messages to the GPT for text completion. It iteratively runs tool calls whenever the model asks for them and then resubmits the request. In the end it returns the whole message chain to the caller.
class GPTWithCallback:
def __init__(self):
self._funs = { }
def registerCallbacks(
self,
descriptors
):
for desc in descriptors:
self.registerCallback(
desc,
descriptors[desc]["object"],
descriptors[desc]["description"],
descriptors[desc]["parameters"]
)
def registerCallback(
self,
functionName,
callbackObject = None,
description = None,
parameters = None
):
if functionName in self._funs:
raise ValueError(f"Function {functionName} has already been registered")
if "type" not in parameters:
parameters["type"] = "object"
self._funs[functionName] = {
"cbObject" : callbackObject,
"description" : description,
"parameters" : parameters
}
def getToolSpecification(self):
res = []
for f in self._funs:
res.append({
"type" : "function",
"function" : {
"name" : f,
"description" : self._funs[f]["description"],
"parameters" : self._funs[f]["parameters"]
}
})
return res
def _executeCallback(self, gptRequest):
fName = gptRequest.function.name
if fName not in self._funs:
raise ValueError(f"Requested callback function {fName} not registered")
args = json.loads(gptRequest.function.arguments)
retVal = getattr(self._funs[fName]['cbObject'], fName)(**args)
return f"{retVal}"
def runCompletions(self, messages):
tools = self.getToolSpecification()
while True:
print("===")
print(messages)
print("\n\n\n")
response = client.chat.completions.create(
model = "gpt-3.5-turbo-0613",
messages = messages,
tools = tools
)
messages.append(response.choices[0].message)
if response.choices[0].finish_reason == 'stop':
return messages
elif response.choices[0].finish_reason == 'tool_calls':
for tcall in response.choices[0].message.tool_calls:
messages.append({
'role' : 'tool',
'tool_call_id' : tcall.id,
'name' : tcall.function.name,
'content' : self._executeCallback(tcall)
})
else:
raise ValueError(f"Unknown finish reason {response.choices[0].finish_reason}")
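The parameter checking that _executeCallback skips could, as a minimal sketch, verify the deserialized arguments against the registered schema before dispatching. A full implementation would use a proper JSON-Schema validator; this only checks required keys, unknown keys and basic types:

```python
def validate_arguments(schema, args):
    # Minimal check of a deserialized argument dict against the stored
    # JSON schema: required keys present, no unknown keys, basic types.
    json_types = {
        "string": str,
        "number": (int, float),
        "boolean": bool,
        "object": dict,
        "array": list,
    }
    for name in schema.get("required", []):
        if name not in args:
            raise ValueError(f"Missing required parameter {name}")
    props = schema.get("properties", {})
    for name, value in args.items():
        if name not in props:
            raise ValueError(f"Unknown parameter {name}")
        expected = json_types.get(props[name].get("type"))
        if expected is not None and not isinstance(value, expected):
            raise ValueError(f"Parameter {name} has wrong type")
```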
As an example application I'm going to re-implement the light switching application. To encapsulate the called services I implement a simple GPTCallable base class - this just has to export the getCallbacks method, whose output is passed to registerCallbacks above.
from abc import abstractmethod
class GPTCallable:
@abstractmethod
def getCallbacks(self):
raise NotImplementedError()
Then one can encapsulate all the functions in the respective classes together with their callback descriptors. In addition I’ve also added a random number generator that I’m going to use later on:
gptCb = GPTWithCallback()
class MyCronSimulator(GPTCallable):
def __init__(self):
pass
def getCallbacks(self):
return {
"plan_execution" : {
"object" : self,
"description" : "Plan another request to you later on. You can pass here a unix timestamp when to call back and a prompt that should be passed to the API",
"parameters" : {
"type": "object",
"properties": {
"timestamp": {
"type": "number",
"description": "The unix timestamp at which to execute the action",
},
"prompt" : {
"type" : "string",
"description" : "The whole prompt that should be passed to the API call later on to continue some kind of operation"
}
},
"required": ["timestamp", "prompt"],
}
}
}
def plan_execution(self, timestamp = None, prompt = None):
print(f"[CRON SIMULATOR]: Planned on {timestamp}: {prompt}")
return "Planned"
class MyLightSimulator(GPTCallable):
def __init__(self):
pass
def getCallbacks(self):
return {
"set_light" : {
"object" : self,
"description" : "Set the state of the room light",
"parameters" : {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Set to true to enable the light or false to disable it",
}
},
"required": ["enabled"],
}
}
}
def set_light(self, enabled = None):
print(f"[LIGHT SIMULATOR]: Set light to {enabled}")
if enabled:
return "Enabled light"
else:
return "Disabled light"
class TDCalculator(GPTCallable):
def __init__(self):
pass
def getCallbacks(self):
return {
"get_time_difference" : {
"object" : self,
"description" : "Calculate the unix timestamp from now on until the specified time difference has passed",
"parameters" : {
"type" : "object",
"properties" : {
"diff_hours" : {
"type" : "number",
"description" : "How many hours from now on"
},
"diff_minutes" : {
"type" : "number",
"description" : "How many minutes from now on"
},
"diff_seconds" : {
"type" : "number",
"description" : "How many seconds from now on"
}
},
"required" : [ "diff_hours", "diff_minutes", "diff_seconds" ]
}
}
}
def get_time_difference(self, diff_hours = None, diff_minutes = None, diff_seconds = None):
ctime = time()
if diff_hours is not None:
ctime = ctime + diff_hours * 3600
if diff_minutes is not None:
ctime = ctime + diff_minutes * 60
if diff_seconds is not None:
ctime = ctime + diff_seconds
return f"{ctime}"
import random

class GPTRandomNumberGenerator(GPTCallable):
    def __init__(self):
        pass

    def getCallbacks(self):
        return {
            "get_random" : {
                "object" : self,
                "description" : "Generate a random number in the given range",
                "parameters" : {
                    "type" : "object",
                    "properties" : {
                        "min_value" : {
                            "type" : "number",
                            "description" : "Minimum output value"
                        },
                        "max_value" : {
                            "type" : "number",
                            "description" : "The maximum output value is always lower than this argument"
                        }
                    },
                    "required" : [ ]
                }
            }
        }

    def get_random(self, min_value = 0.0, max_value = 1.0):
        if max_value <= min_value:
            raise ValueError("Maximum value has to be larger than minimum value")
        return random.random() * (max_value - min_value) + min_value
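Note that the schema deliberately documents a half-open range: the result is always strictly below `max_value`. That contract can be checked quickly by re-stating the method as a plain function (again only an illustrative copy, not the framework code):

```python
import random

def get_random(min_value=0.0, max_value=1.0):
    # Plain re-statement of GPTRandomNumberGenerator.get_random:
    # uniform in the half-open interval [min_value, max_value)
    if max_value <= min_value:
        raise ValueError("Maximum value has to be larger than minimum value")
    return random.random() * (max_value - min_value) + min_value

samples = [get_random(1, 6) for _ in range(1000)]
```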
mcs = MyCronSimulator()
mls = MyLightSimulator()
tdc = TDCalculator()
rng = GPTRandomNumberGenerator()
gptCb.registerCallbacks(mcs.getCallbacks())
gptCb.registerCallbacks(mls.getCallbacks())
gptCb.registerCallbacks(tdc.getCallbacks())
gptCb.registerCallbacks(rng.getCallbacks())
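For readers jumping in at this point: the dispatch step at the heart of `runCompletions` (built in the first part of this article) essentially boils down to the loop below. This is an illustrative sketch of how the registration format above can be consumed, not the exact framework code; `dispatch_tool_call`, `Echo` and `table` are made-up names:

```python
import json

def dispatch_tool_call(callbacks, name, arguments_json):
    # Look up the registered callback by function name, decode the JSON
    # argument string produced by the model and invoke the bound method.
    entry = callbacks[name]
    method = getattr(entry["object"], name)
    return method(**json.loads(arguments_json))

# Minimal illustration with a stand-in object instead of a real GPTCallable:
class Echo:
    def say(self, text=None):
        return f"echo: {text}"

table = {"say": {"object": Echo(), "description": "...", "parameters": {}}}
result = dispatch_tool_call(table, "say", '{"text": "hi"}')
```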
And now let’s run a completion with those four registered callbacks. We’re going to see the iterations output by the debug print statements as well as the final return value:
As we will see, the GPT first calls the get_time_difference method to determine the target timestamp and then uses plan_execution to schedule the delivery of another prompt that will tell it to turn off the light (using another tool). Please note that we could of course also have allowed it to schedule the function call itself without taking another roundtrip through a prompt later on - this design has been chosen to show the flexibility of the GPTs and how they tackle problems.
gptCb.runCompletions([
    { "role" : "system", "content" : "You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform." },
    # Note: this has to be time(), not time - the recorded transcripts below show
    # "<built-in function time>" because the original run passed the function object.
    { "role" : "system", "content" : f"The current unix timestamp is {time()}" },
    { "role" : "user", "content" : "Hi there! Could you turn off the light in an hour please?" }
])
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'system', 'content': 'The current unix timestamp is <built-in function time>'}, {'role': 'user', 'content': 'Hi there! Could you turn off the light in an hour please?'}]
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'system', 'content': 'The current unix timestamp is <built-in function time>'}, {'role': 'user', 'content': 'Hi there! Could you turn off the light in an hour please?'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_E74WlclT2bfVI9Wtjk77Ss2m', function=Function(arguments='{\n "diff_hours": 1,\n "diff_minutes": 0,\n "diff_seconds": 0\n}', name='get_time_difference'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_E74WlclT2bfVI9Wtjk77Ss2m', 'name': 'get_time_difference', 'content': '1706696933.9666862'}]
[CRON SIMULATOR]: Planned on 1706696933: Turn off the light
Turn off the light
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'system', 'content': 'The current unix timestamp is <built-in function time>'}, {'role': 'user', 'content': 'Hi there! Could you turn off the light in an hour please?'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_E74WlclT2bfVI9Wtjk77Ss2m', function=Function(arguments='{\n "diff_hours": 1,\n "diff_minutes": 0,\n "diff_seconds": 0\n}', name='get_time_difference'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_E74WlclT2bfVI9Wtjk77Ss2m', 'name': 'get_time_difference', 'content': '1706696933.9666862'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_S7ZXPwz7jWr2qUfgMBEIjT0d', function=Function(arguments='{\n "timestamp": 1706696933,\n "prompt": "Turn off the light"\n}', name='plan_execution'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_S7ZXPwz7jWr2qUfgMBEIjT0d', 'name': 'plan_execution', 'content': 'Planned'}]
[{'role': 'system',
'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'},
{'role': 'system',
'content': 'The current unix timestamp is <built-in function time>'},
{'role': 'user',
'content': 'Hi there! Could you turn off the light in an hour please?'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_E74WlclT2bfVI9Wtjk77Ss2m', function=Function(arguments='{\n "diff_hours": 1,\n "diff_minutes": 0,\n "diff_seconds": 0\n}', name='get_time_difference'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_E74WlclT2bfVI9Wtjk77Ss2m',
'name': 'get_time_difference',
'content': '1706696933.9666862'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_S7ZXPwz7jWr2qUfgMBEIjT0d', function=Function(arguments='{\n "timestamp": 1706696933,\n "prompt": "Turn off the light"\n}', name='plan_execution'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_S7ZXPwz7jWr2qUfgMBEIjT0d',
'name': 'plan_execution',
'content': 'Planned'},
ChatCompletionMessage(content='Alright! I have scheduled a task to turn off the light in an hour.', role='assistant', function_call=None, tool_calls=None)]
After the time has elapsed our CRON simulator will pass the prompt to turn off the light. The GPT will react by issuing a function call to set_light
and turn off the light.
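In a real deployment the CRON simulator would persist the (timestamp, prompt) pairs and feed each prompt back into `runCompletions` once it becomes due. A minimal in-memory version of that scheduler could look like the sketch below; `PromptScheduler` is an illustrative name and not part of the article's framework (the article's `MyCronSimulator` just prints):

```python
import heapq

class PromptScheduler:
    def __init__(self):
        self._queue = []  # min-heap of (timestamp, prompt)

    def plan(self, timestamp, prompt):
        heapq.heappush(self._queue, (timestamp, prompt))

    def due(self, now):
        # Pop and return every prompt whose timestamp has elapsed; each one
        # would be passed back into gptCb.runCompletions as a system message.
        ready = []
        while self._queue and self._queue[0][0] <= now:
            ready.append(heapq.heappop(self._queue)[1])
        return ready

sched = PromptScheduler()
sched.plan(1706696933, "Turn off the light")
sched.plan(1706696000, "Turn on the light")
# At a simulated "now" past both timestamps, both prompts are due,
# delivered in timestamp order.
prompts = sched.due(1706700000)
```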
gptCb.runCompletions([
    { "role" : "system", "content" : "You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform." },
    { "role" : "system", "content" : "The next message contains the prompt sheduled by the plan_execution function that you sheduled yourself!" },
    { "role" : "system", "content" : "Turn off the light" }
])
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'system', 'content': 'The next message contains the prompt sheduled by the plan_execution function that you sheduled yourself!'}, {'role': 'system', 'content': 'Turn off the light'}]
[LIGHT SIMULATOR]: Set light to False
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'system', 'content': 'The next message contains the prompt sheduled by the plan_execution function that you sheduled yourself!'}, {'role': 'system', 'content': 'Turn off the light'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_nLDmvIr0dO5zi89sLgzADEi2', function=Function(arguments='{\n "enabled": false\n}', name='set_light'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_nLDmvIr0dO5zi89sLgzADEi2', 'name': 'set_light', 'content': 'Disabled light'}]
[{'role': 'system',
'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'},
{'role': 'system',
'content': 'The next message contains the prompt sheduled by the plan_execution function that you sheduled yourself!'},
{'role': 'system', 'content': 'Turn off the light'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_nLDmvIr0dO5zi89sLgzADEi2', function=Function(arguments='{\n "enabled": false\n}', name='set_light'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_nLDmvIr0dO5zi89sLgzADEi2',
'name': 'set_light',
'content': 'Disabled light'},
ChatCompletionMessage(content='The light has been turned off.', role='assistant', function_call=None, tool_calls=None)]
So now let’s try to add another roundtrip using the random number generator function supplied before. We can do this by simply asking for randomness in our request:
gptCb.runCompletions([
    { "role" : "system", "content" : "You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform." },
    { "role" : "user", "content" : "Hi there! Could you please turn on the light and in a random duration in the range from 5 to 10 minutes disable it again? Thanks!" }
])
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'user', 'content': 'Hi there! Could you please turn on the light and in a random duration in the range from 5 to 10 minutes disable it again? Thanks!'}]
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'user', 'content': 'Hi there! Could you please turn on the light and in a random duration in the range from 5 to 10 minutes disable it again? Thanks!'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_sJkqVxdXY9NvHUo0Mqz0ctjY', function=Function(arguments='{\n "min_value": 5,\n "max_value": 10\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_sJkqVxdXY9NvHUo0Mqz0ctjY', 'name': 'get_random', 'content': '8.070954662157051'}]
[LIGHT SIMULATOR]: Set light to True
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'user', 'content': 'Hi there! Could you please turn on the light and in a random duration in the range from 5 to 10 minutes disable it again? Thanks!'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_sJkqVxdXY9NvHUo0Mqz0ctjY', function=Function(arguments='{\n "min_value": 5,\n "max_value": 10\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_sJkqVxdXY9NvHUo0Mqz0ctjY', 'name': 'get_random', 'content': '8.070954662157051'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_V5xRbD2SFphi1NxgYlnEF6C3', function=Function(arguments='{\n "enabled": true\n}', name='set_light'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_V5xRbD2SFphi1NxgYlnEF6C3', 'name': 'set_light', 'content': 'Enabled light'}]
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'user', 'content': 'Hi there! Could you please turn on the light and in a random duration in the range from 5 to 10 minutes disable it again? Thanks!'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_sJkqVxdXY9NvHUo0Mqz0ctjY', function=Function(arguments='{\n "min_value": 5,\n "max_value": 10\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_sJkqVxdXY9NvHUo0Mqz0ctjY', 'name': 'get_random', 'content': '8.070954662157051'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_V5xRbD2SFphi1NxgYlnEF6C3', function=Function(arguments='{\n "enabled": true\n}', name='set_light'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_V5xRbD2SFphi1NxgYlnEF6C3', 'name': 'set_light', 'content': 'Enabled light'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_i30yWUhVT3V3Brxp6skplnCf', function=Function(arguments='{\n "diff_hours": 0,\n "diff_minutes": 8,\n "diff_seconds": 7\n}', name='get_time_difference'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_i30yWUhVT3V3Brxp6skplnCf', 'name': 'get_time_difference', 'content': '1706736528.113065'}]
[CRON SIMULATOR]: Planned on 1706736528: Disable light
Disable light
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'}, {'role': 'user', 'content': 'Hi there! Could you please turn on the light and in a random duration in the range from 5 to 10 minutes disable it again? Thanks!'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_sJkqVxdXY9NvHUo0Mqz0ctjY', function=Function(arguments='{\n "min_value": 5,\n "max_value": 10\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_sJkqVxdXY9NvHUo0Mqz0ctjY', 'name': 'get_random', 'content': '8.070954662157051'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_V5xRbD2SFphi1NxgYlnEF6C3', function=Function(arguments='{\n "enabled": true\n}', name='set_light'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_V5xRbD2SFphi1NxgYlnEF6C3', 'name': 'set_light', 'content': 'Enabled light'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_i30yWUhVT3V3Brxp6skplnCf', function=Function(arguments='{\n "diff_hours": 0,\n "diff_minutes": 8,\n "diff_seconds": 7\n}', name='get_time_difference'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_i30yWUhVT3V3Brxp6skplnCf', 'name': 'get_time_difference', 'content': '1706736528.113065'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_O9AlW0KqH5ob2oPVqGzLxSrg', function=Function(arguments='{\n "timestamp": 1706736528,\n "prompt": "Disable light"\n}', name='plan_execution'), type='function')]), {'role': 
'tool', 'tool_call_id': 'call_O9AlW0KqH5ob2oPVqGzLxSrg', 'name': 'plan_execution', 'content': 'Planned'}]
[{'role': 'system',
'content': 'You are a very friendly and helpful assistant that is able to perform some actions later. You can do this by sheduling a conversation with yourself at a later time by calling plan_execution function. This can pass you an arbitrary prompt that you should be able to understand to continue whatever operation you want to perform.'},
{'role': 'user',
'content': 'Hi there! Could you please turn on the light and in a random duration in the range from 5 to 10 minutes disable it again? Thanks!'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_sJkqVxdXY9NvHUo0Mqz0ctjY', function=Function(arguments='{\n "min_value": 5,\n "max_value": 10\n}', name='get_random'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_sJkqVxdXY9NvHUo0Mqz0ctjY',
'name': 'get_random',
'content': '8.070954662157051'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_V5xRbD2SFphi1NxgYlnEF6C3', function=Function(arguments='{\n "enabled": true\n}', name='set_light'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_V5xRbD2SFphi1NxgYlnEF6C3',
'name': 'set_light',
'content': 'Enabled light'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_i30yWUhVT3V3Brxp6skplnCf', function=Function(arguments='{\n "diff_hours": 0,\n "diff_minutes": 8,\n "diff_seconds": 7\n}', name='get_time_difference'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_i30yWUhVT3V3Brxp6skplnCf',
'name': 'get_time_difference',
'content': '1706736528.113065'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_O9AlW0KqH5ob2oPVqGzLxSrg', function=Function(arguments='{\n "timestamp": 1706736528,\n "prompt": "Disable light"\n}', name='plan_execution'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_O9AlW0KqH5ob2oPVqGzLxSrg',
'name': 'plan_execution',
'content': 'Planned'},
ChatCompletionMessage(content='The light has been turned on. It will be automatically disabled in approximately 8 minutes and 7 seconds.', role='assistant', function_call=None, tool_calls=None)]
This is another simple example showing how one can use our previously built mini framework that encapsulates tool/function calling. We simply describe to the GPT what a regular die looks like and how dice are named when they have a different number of sides, as commonly used in role playing games. Then we just ask it to roll different types of dice. The GPT could either issue tool calls sequentially for each die or emit a single array of tool calls. As we see, it calls the random number generator for each and every die one after the other. It also takes care of truncating the fractional part of the numbers.
gptCb.runCompletions([
    { "role" : "system", "content" : "You are a very friendly and helpful assistant who can roll dice by calling the random function. Dice usually have 6 sides numbered 1,2,3,4,5,6 if not specified else. Such a dice is caled D6 for 6 sides. A dice called D20 would have numbers 1 to 20, a dice called D3 could roll numbers 1, 2, 3" },
    { "role" : "user", "content" : "Hi there! Could you please roll 5 dice (two normal ones, a D20 and two D3)?" }
])
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant who can roll dice by calling the random function. Dice usually have 6 sides numbered 1,2,3,4,5,6 if not specified else. Such a dice is caled D6 for 6 sides. A dice called D20 would have numbers 1 to 20, a dice called D3 could roll numbers 1, 2, 3'}, {'role': 'user', 'content': 'Hi there! Could you please roll 5 dice (two normal ones, a D20 and two D3)?'}]
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant who can roll dice by calling the random function. Dice usually have 6 sides numbered 1,2,3,4,5,6 if not specified else. Such a dice is caled D6 for 6 sides. A dice called D20 would have numbers 1 to 20, a dice called D3 could roll numbers 1, 2, 3'}, {'role': 'user', 'content': 'Hi there! Could you please roll 5 dice (two normal ones, a D20 and two D3)?'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_mmL31WLbc4x5GOkmTdimD8T4', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_mmL31WLbc4x5GOkmTdimD8T4', 'name': 'get_random', 'content': '1.0248219732765664'}]
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant who can roll dice by calling the random function. Dice usually have 6 sides numbered 1,2,3,4,5,6 if not specified else. Such a dice is caled D6 for 6 sides. A dice called D20 would have numbers 1 to 20, a dice called D3 could roll numbers 1, 2, 3'}, {'role': 'user', 'content': 'Hi there! Could you please roll 5 dice (two normal ones, a D20 and two D3)?'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_mmL31WLbc4x5GOkmTdimD8T4', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_mmL31WLbc4x5GOkmTdimD8T4', 'name': 'get_random', 'content': '1.0248219732765664'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_KuTjuYOfIgvjUicvM7Oul4CQ', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_KuTjuYOfIgvjUicvM7Oul4CQ', 'name': 'get_random', 'content': '3.0057042723438445'}]
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant who can roll dice by calling the random function. Dice usually have 6 sides numbered 1,2,3,4,5,6 if not specified else. Such a dice is caled D6 for 6 sides. A dice called D20 would have numbers 1 to 20, a dice called D3 could roll numbers 1, 2, 3'}, {'role': 'user', 'content': 'Hi there! Could you please roll 5 dice (two normal ones, a D20 and two D3)?'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_mmL31WLbc4x5GOkmTdimD8T4', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_mmL31WLbc4x5GOkmTdimD8T4', 'name': 'get_random', 'content': '1.0248219732765664'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_KuTjuYOfIgvjUicvM7Oul4CQ', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_KuTjuYOfIgvjUicvM7Oul4CQ', 'name': 'get_random', 'content': '3.0057042723438445'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_WiiHkods4Fg1iucIfuf9Bnjd', function=Function(arguments='{\n "min_value": 1,\n "max_value": 20\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_WiiHkods4Fg1iucIfuf9Bnjd', 'name': 'get_random', 'content': '15.498358427015885'}]
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant who can roll dice by calling the random function. Dice usually have 6 sides numbered 1,2,3,4,5,6 if not specified else. Such a dice is caled D6 for 6 sides. A dice called D20 would have numbers 1 to 20, a dice called D3 could roll numbers 1, 2, 3'}, {'role': 'user', 'content': 'Hi there! Could you please roll 5 dice (two normal ones, a D20 and two D3)?'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_mmL31WLbc4x5GOkmTdimD8T4', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_mmL31WLbc4x5GOkmTdimD8T4', 'name': 'get_random', 'content': '1.0248219732765664'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_KuTjuYOfIgvjUicvM7Oul4CQ', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_KuTjuYOfIgvjUicvM7Oul4CQ', 'name': 'get_random', 'content': '3.0057042723438445'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_WiiHkods4Fg1iucIfuf9Bnjd', function=Function(arguments='{\n "min_value": 1,\n "max_value": 20\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_WiiHkods4Fg1iucIfuf9Bnjd', 'name': 'get_random', 'content': '15.498358427015885'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_Xku83CLjy5jkTx6npPIqYc2u', function=Function(arguments='{\n "min_value": 1,\n "max_value": 3\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_Xku83CLjy5jkTx6npPIqYc2u', 'name': 'get_random', 'content': '2.0582815724080072'}]
===
[{'role': 'system', 'content': 'You are a very friendly and helpful assistant who can roll dice by calling the random function. Dice usually have 6 sides numbered 1,2,3,4,5,6 if not specified else. Such a dice is caled D6 for 6 sides. A dice called D20 would have numbers 1 to 20, a dice called D3 could roll numbers 1, 2, 3'}, {'role': 'user', 'content': 'Hi there! Could you please roll 5 dice (two normal ones, a D20 and two D3)?'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_mmL31WLbc4x5GOkmTdimD8T4', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_mmL31WLbc4x5GOkmTdimD8T4', 'name': 'get_random', 'content': '1.0248219732765664'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_KuTjuYOfIgvjUicvM7Oul4CQ', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_KuTjuYOfIgvjUicvM7Oul4CQ', 'name': 'get_random', 'content': '3.0057042723438445'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_WiiHkods4Fg1iucIfuf9Bnjd', function=Function(arguments='{\n "min_value": 1,\n "max_value": 20\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_WiiHkods4Fg1iucIfuf9Bnjd', 'name': 'get_random', 'content': '15.498358427015885'}, ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_Xku83CLjy5jkTx6npPIqYc2u', function=Function(arguments='{\n "min_value": 1,\n "max_value": 3\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_Xku83CLjy5jkTx6npPIqYc2u', 'name': 'get_random', 'content': '2.0582815724080072'}, 
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_10aZ7Eknu4C2Uofn1HzMYjAN', function=Function(arguments='{\n "min_value": 1,\n "max_value": 3\n}', name='get_random'), type='function')]), {'role': 'tool', 'tool_call_id': 'call_10aZ7Eknu4C2Uofn1HzMYjAN', 'name': 'get_random', 'content': '1.992920074528694'}]
[{'role': 'system',
'content': 'You are a very friendly and helpful assistant who can roll dice by calling the random function. Dice usually have 6 sides numbered 1,2,3,4,5,6 if not specified else. Such a dice is caled D6 for 6 sides. A dice called D20 would have numbers 1 to 20, a dice called D3 could roll numbers 1, 2, 3'},
{'role': 'user',
'content': 'Hi there! Could you please roll 5 dice (two normal ones, a D20 and two D3)?'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_mmL31WLbc4x5GOkmTdimD8T4', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_mmL31WLbc4x5GOkmTdimD8T4',
'name': 'get_random',
'content': '1.0248219732765664'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_KuTjuYOfIgvjUicvM7Oul4CQ', function=Function(arguments='{\n "min_value": 1,\n "max_value": 6\n}', name='get_random'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_KuTjuYOfIgvjUicvM7Oul4CQ',
'name': 'get_random',
'content': '3.0057042723438445'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_WiiHkods4Fg1iucIfuf9Bnjd', function=Function(arguments='{\n "min_value": 1,\n "max_value": 20\n}', name='get_random'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_WiiHkods4Fg1iucIfuf9Bnjd',
'name': 'get_random',
'content': '15.498358427015885'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_Xku83CLjy5jkTx6npPIqYc2u', function=Function(arguments='{\n "min_value": 1,\n "max_value": 3\n}', name='get_random'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_Xku83CLjy5jkTx6npPIqYc2u',
'name': 'get_random',
'content': '2.0582815724080072'},
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_10aZ7Eknu4C2Uofn1HzMYjAN', function=Function(arguments='{\n "min_value": 1,\n "max_value": 3\n}', name='get_random'), type='function')]),
{'role': 'tool',
'tool_call_id': 'call_10aZ7Eknu4C2Uofn1HzMYjAN',
'name': 'get_random',
'content': '1.992920074528694'},
ChatCompletionMessage(content='Sure! Here are the results of rolling the dice:\n\nNormal Dice: 1, 3\n\nD20 Dice: 15\n\nD3 Dice: 2, 1', role='assistant', function_call=None, tool_calls=None)]
This of course has been one of the most expensive dice rollers that one can implement - but it was just an example to illustrate the idea and how GPTs tackle problems even with very simple prompts.
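One subtle caveat worth noting: get_random is documented to return values in the half-open range [min_value, max_value), so the calls in the transcript above (e.g. min_value 1, max_value 6 for a D6) can never produce the highest face after truncation. A fair roll needs a max_value one above the number of sides before truncating; a sketch of that correction (`roll_die` is my name, not from the article):

```python
import random

def roll_die(sides):
    # Uniform float in [1, sides + 1), truncated to an integer face 1..sides.
    # Truncating a value from [1, sides) instead - as the transcript's
    # arguments do - would make the highest face unreachable.
    return int(random.random() * sides + 1)

rolls = [roll_die(6) for _ in range(10000)]
```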
So this usage of functions is nice, but one can also build real-world applications using tool calls very easily. One application is of course interacting with the physical environment as suggested with the set_light
function above; one can also let the GPT perform arbitrary accesses to message broker systems, supply measurement data or even run external database queries via word embeddings and vector databases (approximate nearest neighbor searches based on k-means optimized quantized indices) to look up information in knowledge databases. This is more complex, but in conjunction with external API calls it allows very sophisticated applications to be built. More elaborate system prompts than in this sample also allow a more natural and professional experience.
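The embedding-lookup idea mentioned above can be illustrated with a brute-force cosine-similarity search over a handful of vectors; a vector database performs the same search approximately over millions of entries. The vectors and document labels here are entirely made up for illustration:

```python
from math import sqrt

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, store):
    # Linear scan over (embedding, document) pairs; approximate indices
    # replace this scan in real knowledge-database setups.
    return max(store, key=lambda item: cosine_similarity(query, item[0]))[1]

store = [
    ([0.9, 0.1, 0.0], "doc about lights"),
    ([0.0, 0.8, 0.6], "doc about scheduling"),
]
best = nearest([1.0, 0.0, 0.1], store)
```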
This article is tagged: Programming, OpenAI, Tutorial, Machine learning
Dipl.-Ing. Thomas Spielauer, Wien (webcomplains389t48957@tspi.at)
This webpage is also available via TOR at http://rh6v563nt2dnxd5h2vhhqkudmyvjaevgiv77c62xflas52d5omtkxuid.onion/