
ChatOpenAI

This notebook provides a quick overview for getting started with OpenAI chat models. For detailed documentation of all ChatOpenAI features and configurations head to the API reference.

OpenAI has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the OpenAI docs.

Azure OpenAI

Note that certain OpenAI models can also be accessed via the Microsoft Azure platform. To use the Azure OpenAI service, use the AzureChatOpenAI integration.
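On Azure, instantiation looks similar. A minimal sketch, assuming you have an Azure OpenAI resource; the deployment name and API version below are placeholders you'd replace with your own:

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="my-gpt-4o-deployment",  # hypothetical deployment name
    api_version="2024-08-01-preview",  # example version; use the one for your resource
    # Reads AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT from the environment by default.
)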

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatOpenAI | langchain-openai | ❌ | beta | ✅ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |

Setup

To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the langchain-openai integration package.

Credentials

Head to https://platform.openai.com to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable:

import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Installation

The LangChain OpenAI integration lives in the langchain-openai package:

%pip install -qU langchain-openai

Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # api_key="...",  # if you prefer to pass the API key directly instead of using env vars
    # base_url="...",
    # organization="...",
    # other params...
)
API Reference: ChatOpenAI

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 31, 'total_tokens': 36}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_3aa7262c27', 'finish_reason': 'stop', 'logprobs': None}, id='run-63219b22-03e3-4561-8cc4-78b7c7c3a3ca-0', usage_metadata={'input_tokens': 31, 'output_tokens': 5, 'total_tokens': 36})
print(ai_msg.content)
J'adore la programmation.

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate
AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 26, 'total_tokens': 32}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_3aa7262c27', 'finish_reason': 'stop', 'logprobs': None}, id='run-350585e1-16ca-4dad-9460-3d9e7e49aaf1-0', usage_metadata={'input_tokens': 26, 'output_tokens': 6, 'total_tokens': 32})

Tool calling

OpenAI has a tool calling API (we use "tool calling" and "function calling" interchangeably here) that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. Tool calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.

ChatOpenAI.bind_tools()

With ChatOpenAI.bind_tools, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an OpenAI tool schema, which looks like:

{
    "name": "...",
    "description": "...",
    "parameters": {...}  # JSON Schema
}

and passed in every model invocation.
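For example, instead of a Pydantic class, a plain Python function with type hints and a docstring can be passed directly to bind_tools; its signature and docstring are converted to the same schema. A minimal sketch, where the function body is a stand-in for a real lookup:

def get_weather(location: str) -> str:
    """Get the current weather in a given location.

    Args:
        location: The city and state, e.g. San Francisco, CA
    """
    return "15°C and foggy"  # stand-in for a real weather lookup


llm_with_fn = llm.bind_tools([get_weather])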

from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke(
    "what is the weather like in San Francisco",
)
ai_msg
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_o9udf3EVOWiV4Iupktpbpofk', 'function': {'arguments': '{"location":"San Francisco, CA"}', 'name': 'GetWeather'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 68, 'total_tokens': 85}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_3aa7262c27', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-1617c9b2-dda5-4120-996b-0333ed5992e2-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_o9udf3EVOWiV4Iupktpbpofk', 'type': 'tool_call'}], usage_metadata={'input_tokens': 68, 'output_tokens': 17, 'total_tokens': 85})

strict=True

Requires langchain-openai>=0.1.21

As of Aug 6, 2024, OpenAI supports a strict argument when calling tools that enforces that the model respects the tool argument schema. See more here: https://platform.openai.com/docs/guides/function-calling

Note: If strict=True, the tool definition will also be validated, and only a subset of JSON schema is accepted. Crucially, schemas cannot have optional arguments (those with default values). Read the full docs on which schemas are supported here: https://platform.openai.com/docs/guides/structured-outputs/supported-schemas.
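For illustration, a hypothetical variant of the GetWeather schema like the one below would be rejected under strict=True, because it declares an optional argument:

class GetWeatherLoose(BaseModel):  # hypothetical schema, shown only as a counterexample
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")
    unit: str = "celsius"  # optional arg with a default value: not allowed when strict=True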

llm_with_tools = llm.bind_tools([GetWeather], strict=True)
ai_msg = llm_with_tools.invoke(
    "what is the weather like in San Francisco",
)
ai_msg
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_jUqhd8wzAIzInTJl72Rla8ht', 'function': {'arguments': '{"location":"San Francisco, CA"}', 'name': 'GetWeather'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 68, 'total_tokens': 85}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_3aa7262c27', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5e3356a9-132d-4623-8e73-dd5a898cf4a6-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_jUqhd8wzAIzInTJl72Rla8ht', 'type': 'tool_call'}], usage_metadata={'input_tokens': 68, 'output_tokens': 17, 'total_tokens': 85})

AIMessage.tool_calls

Notice that the AIMessage has a tool_calls attribute. This contains tool calls in a standardized ToolCall format that is model-provider agnostic.

ai_msg.tool_calls
[{'name': 'GetWeather',
  'args': {'location': 'San Francisco, CA'},
  'id': 'call_jUqhd8wzAIzInTJl72Rla8ht',
  'type': 'tool_call'}]

For more on binding tools and tool call outputs, head to the tool calling docs.
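To complete the loop, you can execute the tool call yourself and pass the result back to the model as a ToolMessage. A minimal sketch, where the weather string is a stand-in for a real lookup:

from langchain_core.messages import HumanMessage, ToolMessage

tool_call = ai_msg.tool_calls[0]
weather = "15°C and foggy"  # stand-in for a real weather lookup

follow_up = llm_with_tools.invoke(
    [
        HumanMessage("what is the weather like in San Francisco"),
        ai_msg,  # the AIMessage containing the tool call
        ToolMessage(content=weather, tool_call_id=tool_call["id"]),
    ]
)
print(follow_up.content)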

Responses API

Requires langchain-openai>=0.3.9

OpenAI supports a Responses API that is oriented toward building agentic applications. It includes a suite of built-in tools, including web and file search. It also supports management of conversation state, allowing you to continue a conversational thread without explicitly passing in previous messages.

ChatOpenAI will route to the Responses API if one of these features is used. You can also specify use_responses_api=True when instantiating ChatOpenAI.
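For example, to opt in explicitly (a minimal sketch):

from langchain_openai import ChatOpenAI

# Force the Responses API even when no built-in tools are bound
llm = ChatOpenAI(model="gpt-4o-mini", use_responses_api=True)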

Built-in tools

Equipping ChatOpenAI with built-in tools will ground its responses with outside information, such as via context in files or the web. The AIMessage generated from the model will include information about the built-in tool invocation.

To trigger a web search, pass {"type": "web_search_preview"} to the model as you would another tool.

tip

You can also pass built-in tools as invocation params:

llm.invoke("...", tools=[{"type": "web_search_preview"}])

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

tool = {"type": "web_search_preview"}
llm_with_tools = llm.bind_tools([tool])

response = llm_with_tools.invoke("What was a positive news story from today?")
API Reference: ChatOpenAI

Note that the response includes structured content blocks that include both the text of the response and OpenAI annotations citing its sources:

response.content
[{'type': 'text',
  'text': 'Today, a heartwarming story emerged from Minnesota, where a group of high school robotics students built a custom motorized wheelchair for a 2-year-old boy named Cillian Jackson. Born with a genetic condition that limited his mobility, Cillian\'s family couldn\'t afford the $20,000 wheelchair he needed. The students at Farmington High School\'s Rogue Robotics team took it upon themselves to modify a Power Wheels toy car into a functional motorized wheelchair for Cillian, complete with a joystick, safety bumpers, and a harness. One team member remarked, "I think we won here more than we do in our competitions. Instead of completing a task, we\'re helping change someone\'s life." ([boredpanda.com](https://www.boredpanda.com/wholesome-global-positive-news/?utm_source=openai))\n\nThis act of kindness highlights the profound impact that community support and innovation can have on individuals facing challenges. ',
  'annotations': [{'end_index': 778,
    'start_index': 682,
    'title': '“Global Positive News”: 40 Posts To Remind Us There’s Good In The World',
    'type': 'url_citation',
    'url': 'https://www.boredpanda.com/wholesome-global-positive-news/?utm_source=openai'}]}]
tip

You can recover just the text content of the response as a string by using response.text(). For example, to stream response text:

for token in llm_with_tools.stream("..."):
    print(token.text(), end="|")

See the streaming guide for more detail.

The output message will also contain information from any tool invocations:

response.additional_kwargs
{'tool_outputs': [{'id': 'ws_67d192aeb6cc81918e736ad4a57937570d6f8507990d9d71',
   'status': 'completed',
   'type': 'web_search_call'}]}

To trigger a file search, pass a file search tool to the model as you would another tool. You will need to populate an OpenAI-managed vector store and include the vector store ID in the tool definition. See OpenAI documentation for more detail.

llm = ChatOpenAI(model="gpt-4o-mini")

openai_vector_store_ids = [
    "vs_...",  # your IDs here
]

tool = {
    "type": "file_search",
    "vector_store_ids": openai_vector_store_ids,
}
llm_with_tools = llm.bind_tools([tool])

response = llm_with_tools.invoke("What is deep research by OpenAI?")
print(response.text())
Deep Research by OpenAI is a new capability integrated into ChatGPT that allows for the execution of multi-step research tasks independently. It can synthesize extensive amounts of online information and produce comprehensive reports similar to what a research analyst would do, significantly speeding up processes that would typically take hours for a human.

### Key Features:
- **Independent Research**: Users simply provide a prompt, and the model can find, analyze, and synthesize information from hundreds of online sources.
- **Multi-Modal Capabilities**: The model is also able to browse user-uploaded files, plot graphs using Python, and embed visualizations in its outputs.
- **Training**: Deep Research has been trained using reinforcement learning on real-world tasks that require extensive browsing and reasoning.

### Applications:
- Useful for professionals in sectors like finance, science, policy, and engineering, enabling them to obtain accurate and thorough research quickly.
- It can also be beneficial for consumers seeking personalized recommendations on complex purchases.

### Limitations:
Although Deep Research presents significant advancements, it has some limitations, such as the potential to hallucinate facts or struggle with authoritative information.

Deep Research aims to facilitate access to thorough and documented information, marking a significant step toward the broader goal of developing artificial general intelligence (AGI).

As with web search, the response will include content blocks with citations:

response.content[0]["annotations"][:2]
[{'file_id': 'file-3UzgX7jcC8Dt9ZAFzywg5k',
  'index': 346,
  'type': 'file_citation',
  'filename': 'deep_research_blog.pdf'},
 {'file_id': 'file-3UzgX7jcC8Dt9ZAFzywg5k',
  'index': 575,
  'type': 'file_citation',
  'filename': 'deep_research_blog.pdf'}]

It will also include information from the built-in tool invocations:

response.additional_kwargs
{'tool_outputs': [{'id': 'fs_67d196fbb83c8191ba20586175331687089228ce932eceb1',
   'queries': ['What is deep research by OpenAI?'],
   'status': 'completed',
   'type': 'file_search_call'}]}

Computer use

ChatOpenAI supports the "computer-use-preview" model, which is a specialized model for the built-in computer use tool. To enable, pass a computer use tool as you would pass another tool.

Currently, tool outputs for computer use are present in AIMessage.additional_kwargs["tool_outputs"]. To reply to the computer use tool call, construct a ToolMessage with {"type": "computer_call_output"} in its additional_kwargs. The content of the message will be a screenshot. Below, we demonstrate a simple example.

First, load two screenshots:

import base64


def load_png_as_base64(file_path):
    with open(file_path, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())
    return encoded_string.decode("utf-8")


screenshot_1_base64 = load_png_as_base64(
    "/path/to/screenshot_1.png"
)  # perhaps a screenshot of an application
screenshot_2_base64 = load_png_as_base64(
    "/path/to/screenshot_2.png"
)  # perhaps a screenshot of the Desktop

from langchain_openai import ChatOpenAI

# Initialize model
llm = ChatOpenAI(
    model="computer-use-preview",
    model_kwargs={"truncation": "auto"},
)

# Bind computer-use tool
tool = {
    "type": "computer_use_preview",
    "display_width": 1024,
    "display_height": 768,
    "environment": "browser",
}
llm_with_tools = llm.bind_tools([tool])

# Construct input message
input_message = {
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": (
                "Click the red X to close and reveal my Desktop. "
                "Proceed, no confirmation needed."
            ),
        },
        {
            "type": "input_image",
            "image_url": f"data:image/png;base64,{screenshot_1_base64}",
        },
    ],
}

# Invoke model
response = llm_with_tools.invoke(
    [input_message],
    reasoning={
        "generate_summary": "concise",
    },
)
API Reference: ChatOpenAI

The response will include a call to the computer-use tool in its additional_kwargs:

response.additional_kwargs
{'reasoning': {'id': 'rs_67ddb381c85081919c46e3e544a161e8051ff325ba1bad35',
  'summary': [{'text': 'Closing Visual Studio Code application',
    'type': 'summary_text'}],
  'type': 'reasoning'},
 'tool_outputs': [{'id': 'cu_67ddb385358c8191bf1a127b71bcf1ea051ff325ba1bad35',
   'action': {'button': 'left', 'type': 'click', 'x': 17, 'y': 38},
   'call_id': 'call_Ae3Ghz8xdqZQ01mosYhXXMho',
   'pending_safety_checks': [],
   'status': 'completed',
   'type': 'computer_call'}]}

We next construct a ToolMessage with these properties:

  1. It has a tool_call_id matching the call_id from the computer-call.
  2. It has {"type": "computer_call_output"} in its additional_kwargs.
  3. Its content is either an image_url or an input_image output block (see OpenAI docs for formatting).

from langchain_core.messages import ToolMessage

tool_call_id = response.additional_kwargs["tool_outputs"][0]["call_id"]

tool_message = ToolMessage(
    content=[
        {
            "type": "input_image",
            "image_url": f"data:image/png;base64,{screenshot_2_base64}",
        }
    ],
    # content=f"data:image/png;base64,{screenshot_2_base64}",  # <-- also acceptable
    tool_call_id=tool_call_id,
    additional_kwargs={"type": "computer_call_output"},
)
API Reference: ToolMessage

We can now invoke the model again using the message history:

messages = [
    input_message,
    response,
    tool_message,
]

response_2 = llm_with_tools.invoke(
    messages,
    reasoning={
        "generate_summary": "concise",
    },
)
response_2.text()
'Done! The Desktop is now visible.'

Instead of passing back the entire sequence, we can also use the previous_response_id:

previous_response_id = response.response_metadata["id"]

response_2 = llm_with_tools.invoke(
    [tool_message],
    previous_response_id=previous_response_id,
    reasoning={
        "generate_summary": "concise",
    },
)
response_2.text()
'The Visual Studio Code terminal has been closed and your desktop is now visible.'

Managing conversation state

The Responses API supports management of conversation state.

Manually manage state

You can manage the state manually or using LangGraph, as with other chat models:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

tool = {"type": "web_search_preview"}
llm_with_tools = llm.bind_tools([tool])

first_query = "What was a positive news story from today?"
messages = [{"role": "user", "content": first_query}]

response = llm_with_tools.invoke(messages)
response_text = response.text()
print(f"{response_text[:100]}... {response_text[-100:]}")
API Reference: ChatOpenAI
As of March 12, 2025, here are some positive news stories that highlight recent uplifting events:

*... exemplify positive developments in health, environmental sustainability, and community well-being.
second_query = (
    "Repeat my question back to me, as well as the last sentence of your answer."
)

messages.extend(
    [
        response,
        {"role": "user", "content": second_query},
    ]
)
second_response = llm_with_tools.invoke(messages)
print(second_response.text())
Your question was: "What was a positive news story from today?"

The last sentence of my answer was: "These stories exemplify positive developments in health, environmental sustainability, and community well-being."
tip

You can use LangGraph to manage conversational threads for you in a variety of backends, including in-memory and Postgres. See this tutorial to get started.
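As a sketch of the LangGraph approach (assuming the langgraph package is installed; the get_weather tool here is a hypothetical stand-in), a checkpointer plus a thread_id is all it takes to persist state between invocations:

from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def get_weather(location: str) -> str:
    """Get the current weather in a given location."""
    return "15°C and foggy"  # stand-in implementation


# In-memory checkpointer; swap in a Postgres checkpointer for durable storage
agent = create_react_agent(llm, [get_weather], checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "thread-1"}}

agent.invoke({"messages": [("user", "Hi, I'm Bob.")]}, config)
agent.invoke({"messages": [("user", "What is my name?")]}, config)  # remembers "Bob"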

Passing previous_response_id

When using the Responses API, LangChain messages will include an "id" field in their metadata. Passing this ID to subsequent invocations will continue the conversation. Note that, from a billing perspective, this is equivalent to manually passing in the previous messages.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    use_responses_api=True,
)
response = llm.invoke("Hi, I'm Bob.")
print(response.text())
API Reference: ChatOpenAI
Hi Bob! How can I assist you today?
second_response = llm.invoke(
    "What is my name?",
    previous_response_id=response.response_metadata["id"],
)
print(second_response.text())
Your name is Bob. How can I help you today, Bob?

Fine-tuning

You can call fine-tuned OpenAI models by passing the corresponding model name via the model_name parameter.

This generally takes the form of ft:{OPENAI_MODEL_NAME}:{ORG_NAME}::{MODEL_ID}. For example:

fine_tuned_model = ChatOpenAI(
    temperature=0, model_name="ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR"
)

fine_tuned_model.invoke(messages)
AIMessage(content="J'adore la programmation.", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 31, 'total_tokens': 39}, 'model_name': 'ft:gpt-3.5-turbo-0613:langchain::7qTVM5AR', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0f39b30e-c56e-4f3b-af99-5c948c984146-0', usage_metadata={'input_tokens': 31, 'output_tokens': 8, 'total_tokens': 39})

Multimodal Inputs

OpenAI has models that support multimodal inputs. You can pass in images or audio to these models. For more information on how to do this in LangChain, head to the multimodal inputs docs.

You can see the list of models that support different modalities in OpenAI's documentation.

At the time of this doc's writing, the main OpenAI models you would use would be:

  • Image inputs: gpt-4o, gpt-4o-mini
  • Audio inputs: gpt-4o-audio-preview

For an example of passing in image inputs, see the multimodal inputs how-to guide.

Below is an example of passing audio inputs to gpt-4o-audio-preview:

import base64

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-audio-preview",
    temperature=0,
)

with open(
    "../../../../libs/partners/openai/tests/integration_tests/chat_models/audio_input.wav",
    "rb",
) as f:
    # b64 encode it
    audio = f.read()
    audio_b64 = base64.b64encode(audio).decode()


output_message = llm.invoke(
    [
        (
            "human",
            [
                {"type": "text", "text": "Transcribe the following:"},
                # the audio clip says "I'm sorry, but I can't create..."
                {
                    "type": "input_audio",
                    "input_audio": {"data": audio_b64, "format": "wav"},
                },
            ],
        ),
    ]
)
output_message.content
API Reference: ChatOpenAI
"I'm sorry, but I can't create audio content that involves yelling. Is there anything else I can help you with?"

Predicted output

info

Requires langchain-openai>=0.2.6

Some OpenAI models (such as their gpt-4o and gpt-4o-mini series) support Predicted Outputs, which allow you to pass in a known portion of the LLM's expected output ahead of time to reduce latency. This is useful for cases such as editing text or code, where only a small part of the model's output will change.

Here's an example:

code = """
/// <summary>
/// Represents a user with a first name, last name, and username.
/// </summary>
public class User
{
/// <summary>
/// Gets or sets the user's first name.
/// </summary>
public string FirstName { get; set; }

/// <summary>
/// Gets or sets the user's last name.
/// </summary>
public string LastName { get; set; }

/// <summary>
/// Gets or sets the user's username.
/// </summary>
public string Username { get; set; }
}
"""

llm = ChatOpenAI(model="gpt-4o")
query = (
"Replace the Username property with an Email property. "
"Respond only with code, and with no markdown formatting."
)
response = llm.invoke(
[{"role": "user", "content": query}, {"role": "user", "content": code}],
prediction={"type": "content", "content": code},
)
print(response.content)
print(response.response_metadata)
/// <summary>
/// Represents a user with a first name, last name, and email.
/// </summary>
public class User
{
    /// <summary>
    /// Gets or sets the user's first name.
    /// </summary>
    public string FirstName { get; set; }

    /// <summary>
    /// Gets or sets the user's last name.
    /// </summary>
    public string LastName { get; set; }

    /// <summary>
    /// Gets or sets the user's email.
    /// </summary>
    public string Email { get; set; }
}
{'token_usage': {'completion_tokens': 226, 'prompt_tokens': 166, 'total_tokens': 392, 'completion_tokens_details': {'accepted_prediction_tokens': 49, 'audio_tokens': None, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 107}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_45cf54deae', 'finish_reason': 'stop', 'logprobs': None}

Note that currently predictions are billed as additional tokens and may increase your usage and costs in exchange for this reduced latency.

Audio Generation (Preview)

info

Requires langchain-openai>=0.2.3

OpenAI has a new audio generation feature that allows you to use audio inputs and outputs with the gpt-4o-audio-preview model.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-audio-preview",
    temperature=0,
    model_kwargs={
        "modalities": ["text", "audio"],
        "audio": {"voice": "alloy", "format": "wav"},
    },
)

output_message = llm.invoke(
    [
        ("human", "Are you made by OpenAI? Just answer yes or no"),
    ]
)
API Reference: ChatOpenAI

output_message.additional_kwargs['audio'] will contain a dictionary like

{
    'data': '<audio data b64-encoded>',
    'expires_at': 1729268602,
    'id': 'audio_67127d6a44348190af62c1530ef0955a',
    'transcript': 'Yes.'
}

and the format will be what was passed in model_kwargs['audio']['format'].
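Since the audio payload is base64-encoded, you can decode it and write it out, for example to a .wav file. A minimal sketch (the output filename is arbitrary):

import base64

audio_bytes = base64.b64decode(output_message.additional_kwargs["audio"]["data"])
with open("reply.wav", "wb") as f:  # arbitrary output path
    f.write(audio_bytes)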

We can also pass this message, including its audio data, back to the model as part of a message history, as long as the OpenAI expires_at timestamp has not been reached.

note

Output audio is stored under the audio key in AIMessage.additional_kwargs, but input content blocks are typed with an input_audio type and key in HumanMessage.content lists.

For more information, see OpenAI's audio docs.

history = [
    ("human", "Are you made by OpenAI? Just answer yes or no"),
    output_message,
    ("human", "And what is your name? Just give your name."),
]
second_output_message = llm.invoke(history)
second_output_message = llm.invoke(history)

API reference

For detailed documentation of all ChatOpenAI features and configurations head to the API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html

