Python Quickstart
Get started with HumanLayer using Python and OpenAI
Set up your environment
Create a project directory:
mkdir humanlayer-demo && cd humanlayer-demo
Install the required packages with pip:
pip install humanlayer openai
Alternatively, you can use uv, a fast Python package installer and resolver. First, install uv:
curl -LsSf https://astral.sh/uv/install.sh | sh
Then initialize the project and add the packages:
uv init
uv add humanlayer openai
Or, create and activate a virtual environment to manage your dependencies:
python -m venv env
source env/bin/activate # On Windows, use `env\Scripts\activate`
Then install the packages:
pip install humanlayer openai
Next, get your OpenAI API key from the OpenAI dashboard and set it in your environment:
export OPENAI_API_KEY=<your-openai-api-key>
Then set your HumanLayer API key. Head over to the HumanLayer dashboard and create a new project to get your key:
export HUMANLAYER_API_KEY=<your-humanlayer-api-key>
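To double-check that both keys made it into your environment, here's a quick sanity check you can run in a Python shell (standard library only, nothing HumanLayer-specific):

import os

# Fail fast if either key is missing before you start the agent.
for key in ("OPENAI_API_KEY", "HUMANLAYER_API_KEY"):
    if not os.environ.get(key):
        raise SystemExit(f"{key} is not set")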
Create a simple agent
First, let’s create a simple math agent that can perform basic arithmetic operations. Save the following as main.py:
import json
import logging

from openai import OpenAI

PROMPT = "multiply 2 and 5, then add 32 to the result"


def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y


def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y


math_tools_map = {
    "add": add,
    "multiply": multiply,
}

math_tools_openai = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers together.",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
]

logger = logging.getLogger(__name__)


def run_chain(prompt: str, tools_openai: list[dict], tools_map: dict) -> str:
    client = OpenAI()
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools_openai,
        tool_choice="auto",
    )

    while response.choices[0].finish_reason != "stop":
        response_message = response.choices[0].message
        tool_calls = response_message.tool_calls
        if tool_calls:
            messages.append(response_message)  # extend conversation with assistant's reply
            logger.info(
                "last message led to %s tool calls: %s",
                len(tool_calls),
                [(tool_call.function.name, tool_call.function.arguments) for tool_call in tool_calls],
            )
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_to_call = tools_map[function_name]
                function_args = json.loads(tool_call.function.arguments)
                logger.info("CALL tool %s with %s", function_name, function_args)

                function_response_json: str
                try:
                    function_response = function_to_call(**function_args)
                    function_response_json = json.dumps(function_response)
                except Exception as e:
                    function_response_json = json.dumps(
                        {
                            "error": str(e),
                        }
                    )

                logger.info(
                    "tool %s responded with %s",
                    function_name,
                    function_response_json[:200],
                )
                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response_json,
                    }
                )  # extend conversation with function response
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools_openai,
        )
    return response.choices[0].message.content


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    result = run_chain(PROMPT, math_tools_openai, math_tools_map)
    print("\n\n----------Result----------\n\n")
    print(result)
You can also use a .env file to store your keys:
echo "OPENAI_API_KEY=<your-openai-api-key>" >> .env
echo "HUMANLAYER_API_KEY=<your-humanlayer-api-key>" >> .env
Install the python-dotenv package:
pip install python-dotenv
Then load the keys near the top of your script:
from dotenv import load_dotenv
load_dotenv()
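One ordering note: load_dotenv() has to run before any client that reads these variables is constructed, including the module-level hl = HumanLayer(...) we add later in this guide. A minimal sketch of the top of the script (assuming HumanLayer resolves its key at construction time):

from dotenv import load_dotenv
from humanlayer import HumanLayer

load_dotenv()  # populate os.environ from .env before constructing clients

hl = HumanLayer(verbose=True)  # can now read HUMANLAYER_API_KEY from the environment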
You can run this script with:
python main.py
Or, with uv:
uv run main.py
You should see output like:
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":5}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 5}
INFO:__main__:tool multiply responded with 10
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('add', '{"x":10,"y":32}')]
INFO:__main__:CALL tool add with {'x': 10, 'y': 32}
INFO:__main__:tool add responded with 42
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
----------Result----------
The result of multiplying 2 and 5, and then adding 32, is 42.
Sure, not a particularly useful agent, but enough to get you started with tool calling.
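If you want to extend it, note that every tool needs two registrations: the callable in the tools map and a matching JSON schema in the OpenAI tools list. As a sketch, a hypothetical subtract tool (not part of the quickstart) would be wired up like this:

def subtract(x: int, y: int) -> int:
    """Subtract y from x."""
    return x - y

math_tools_map["subtract"] = subtract
math_tools_openai.append(
    {
        "type": "function",
        "function": {
            "name": "subtract",
            "description": "Subtract y from x.",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    }
)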
Add HumanLayer
Now, let’s add a human approval step to the script to see how it works.
We’ll wrap the multiply function with @hl.require_approval() so that the LLM has to get your approval before it can run the function.
At the top of the script, import HumanLayer and instantiate it:
from humanlayer import HumanLayer

hl = HumanLayer(
    verbose=True,
    run_id="openai-math-example",
)
Then, wrap the multiply function with @hl.require_approval():
def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y


+ @hl.require_approval()
def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y
Here is the full script with HumanLayer added:

import json
import logging

from openai import OpenAI

from humanlayer import HumanLayer

hl = HumanLayer(
    verbose=True,
    # run_id is optional - it can be used to identify the agent in approval history
    run_id="openai-math-example",
)

PROMPT = "multiply 2 and 5, then add 32 to the result"


def add(x: int, y: int) -> int:
    """Add two numbers together."""
    return x + y


@hl.require_approval()
def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y


math_tools_map = {
    "add": add,
    "multiply": multiply,
}

math_tools_openai = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers together.",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        },
    },
]

logger = logging.getLogger(__name__)


def run_chain(prompt: str, tools_openai: list[dict], tools_map: dict) -> str:
    client = OpenAI()
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools_openai,
        tool_choice="auto",
    )

    while response.choices[0].finish_reason != "stop":
        response_message = response.choices[0].message
        tool_calls = response_message.tool_calls
        if tool_calls:
            messages.append(response_message)  # extend conversation with assistant's reply
            logger.info(
                "last message led to %s tool calls: %s",
                len(tool_calls),
                [(tool_call.function.name, tool_call.function.arguments) for tool_call in tool_calls],
            )
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_to_call = tools_map[function_name]
                function_args = json.loads(tool_call.function.arguments)
                logger.info("CALL tool %s with %s", function_name, function_args)

                function_response_json: str
                try:
                    function_response = function_to_call(**function_args)
                    function_response_json = json.dumps(function_response)
                except Exception as e:
                    function_response_json = json.dumps(
                        {
                            "error": str(e),
                        }
                    )

                logger.info(
                    "tool %s responded with %s",
                    function_name,
                    function_response_json[:200],
                )
                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response_json,
                    }
                )  # extend conversation with function response
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools_openai,
        )
    return response.choices[0].message.content


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    result = run_chain(PROMPT, math_tools_openai, math_tools_map)
    print("\n\n----------Result----------\n\n")
    print(result)
Now, when you run the script, it will prompt you for approval before performing the multiply operation.
Since we enabled verbose mode, you’ll see a note in the console:
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":5}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 5}
HumanLayer: waiting for approval for multiply
Try rejecting the tool call with something like “use 7 instead of 5” to see how your feedback is passed into the agent.
When you reject a tool call, HumanLayer will pass your feedback back to the agent, which can then adjust its approach based on your input.
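Conceptually, you can think of the decorator as turning a denial into an ordinary return value, so the existing json.dumps(function_response) path feeds your feedback straight back to the model as the tool result. A simplified sketch of the idea (this is not HumanLayer's actual implementation; ask_human and Decision are stand-ins):

from dataclasses import dataclass
from functools import wraps


@dataclass
class Decision:
    approved: bool
    comment: str = ""


def ask_human(tool_name: str, kwargs: dict) -> Decision:
    """Stand-in for HumanLayer's approval transport; here, a console prompt."""
    answer = input(f"approve {tool_name}({kwargs})? [y/N] ")
    if answer.strip().lower() == "y":
        return Decision(approved=True)
    return Decision(approved=False, comment=input("feedback: "))


def require_approval_sketch(fn):
    @wraps(fn)
    def wrapper(**kwargs):
        decision = ask_human(fn.__name__, kwargs)
        if decision.approved:
            return fn(**kwargs)
        # The denial becomes the tool's "result": run_chain json-encodes it and
        # appends it as a role="tool" message, so the model sees your feedback.
        return f"User denied {fn.__name__} with message: {decision.comment}"

    return wrapper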
You should see some more output:
HumanLayer: human denied multiply with message: use 7 instead of 5
INFO:__main__:tool multiply responded with "User denied multiply with message: use 7 instead of 5"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:__main__:last message led to 1 tool calls: [('multiply', '{"x":2,"y":7}')]
INFO:__main__:CALL tool multiply with {'x': 2, 'y': 7}
HumanLayer: waiting for approval for multiply
This time, you can approve the tool call.
That’s it! You’ve now added a human approval step to your tool-calling agent.
Connect Slack or try an email approval
Head over to the integrations panel to connect Slack as an approval channel and run the script again.
Or you can try an email approval by setting the contact channel in @hl.require_approval().
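A sketch of what that looks like, using HumanLayer's contact channel types (the address here is a placeholder; see the integrations docs for the exact fields available):

from humanlayer import ContactChannel, EmailContactChannel, HumanLayer

hl = HumanLayer(verbose=True, run_id="openai-math-example")


# Route this tool's approvals to an email inbox instead of the default channel.
@hl.require_approval(
    contact_channel=ContactChannel(
        email=EmailContactChannel(
            address="you@example.com",  # placeholder: use a real recipient
        )
    )
)
def multiply(x: int, y: int) -> int:
    """multiply two numbers"""
    return x * y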
Integrate your favorite Agent Framework
HumanLayer is designed to work with any LLM agent framework. See Frameworks for more information.