Optional fields?: any
Overridable Anthropic ClientOptions.
maxTokens
A maximum number of tokens to generate before stopping.
model
Model name to use.
modelName
Model name to use. Alias for model.
streamUsage
Whether or not to include token usage data in streamed chunks.
streaming
Whether to stream the results or not.
temperature
Amount of randomness injected into the response. Ranges from 0 to 1. Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks.
topK
Only sample from the top K options for each subsequent token. Used to remove "long tail" low-probability responses. Defaults to -1, which disables it.
topP
Does nucleus sampling, in which we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. Defaults to -1, which disables it. Note that you should either alter temperature or top_p, but not both.
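As a sketch of how these sampling fields fit together (the model name is illustrative, not a recommendation of a particular version):

import { ChatAnthropic } from "@langchain/anthropic";

// Analytical task: temperature near 0; topK/topP left at their -1 defaults.
const analyticalLlm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // illustrative model name
  temperature: 0.2,
  maxTokens: 1024,
  streaming: false,
  streamUsage: true,
});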
Optional anthropicApiKey
Anthropic API key.
Optional apiKey
Anthropic API key.
Optional apiUrl
Optional invocationKwargs
Holds any additional parameters that are valid to pass to anthropic.messages that are not explicitly specified on this class.
Optional stopSequences
A list of strings upon which to stop generating. You probably want ["\n\nHuman:"], as that's the cue for the next turn in the dialog agent.
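A minimal sketch of the credential and stop-sequence fields (the commented apiUrl value is an assumed override; invocationKwargs is shown empty):

import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY, // or anthropicApiKey
  // apiUrl: "https://api.anthropic.com", // assumed default endpoint
  stopSequences: ["\n\nHuman:"],
  invocationKwargs: {}, // extra anthropic.messages params not modeled on this class
});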
Protected batchClient
Protected streamingClient
Get the identifying parameters for the model. Optional kwargs: Partial<ChatAnthropicCallOptions>.
max_tokens
The maximum number of tokens to generate before stopping.
Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.
Different models have different maximum values for this parameter. See models for details.
model
The model that will complete your prompt.
See models for additional details and options.
Optional metadata?: Metadata
An object describing metadata about the request.
Optional stop_sequences?: string[]
Custom text sequences that will cause the model to stop generating.
Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn".
If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
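For illustration, a sketch with the official @anthropic-ai/sdk client (the model name and stop marker are assumptions), showing where stop_reason and stop_sequence surface:

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const message = await client.messages.create({
  model: "claude-3-5-sonnet-20240620",
  max_tokens: 256,
  stop_sequences: ["###"],
  messages: [{ role: "user", content: "Count to ten, then print ###." }],
});

// "stop_sequence" if a custom sequence matched; otherwise e.g. "end_turn".
console.log(message.stop_reason, message.stop_sequence);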
Optional stream?: boolean
Whether to incrementally stream the response using server-sent events.
See streaming for details.
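A sketch of the streaming form, reusing the assumed client and model from the snippet above:

const stream = await client.messages.create({
  model: "claude-3-5-sonnet-20240620",
  max_tokens: 256,
  messages: [{ role: "user", content: "Write a haiku about the sea." }],
  stream: true,
});

for await (const event of stream) {
  // Text arrives incrementally in content_block_delta events.
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}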
Optional system?: string
System prompt.
A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. See our guide to system prompts.
Optional temperature?: number
Amount of randomness injected into the response.
Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks.
Note that even with temperature of 0.0, the results will not be fully deterministic.
Optional tool_choice
How the model should use the provided tools. The model can use a specific tool, any available tool, or decide by itself.
Optional tools?: Tool[]
Definitions of tools that the model may use.
If you include tools in your API request, the model may return tool_use content blocks that represent the model's use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using tool_result content blocks.
Each tool definition includes:
name: Name of the tool.
description: Optional, but strongly-recommended description of the tool.
input_schema: JSON schema for the tool input shape that the model will produce in tool_use output content blocks.
For example, if you defined tools as:
[
  {
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
      "type": "object",
      "properties": {
        "ticker": {
          "type": "string",
          "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
        }
      },
      "required": ["ticker"]
    }
  }
]
And then asked the model "What's the S&P 500 at today?", the model might produce tool_use content blocks in the response like this:
[
  {
    "type": "tool_use",
    "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "name": "get_stock_price",
    "input": { "ticker": "^GSPC" }
  }
]
You might then run your get_stock_price tool with {"ticker": "^GSPC"} as an input, and return the following back to the model in a subsequent user message:
[
  {
    "type": "tool_result",
    "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "content": "259.75 USD"
  }
]
Tools can be used for workflows that include running client-side tools and functions, or more generally whenever you want the model to produce a particular JSON structure of output.
See our guide for more details.
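Putting the pieces above together, a sketch of the full round trip with the @anthropic-ai/sdk client (the model name and the hard-coded price are illustrative):

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const tools: Anthropic.Tool[] = [
  {
    name: "get_stock_price",
    description: "Get the current stock price for a given ticker symbol.",
    input_schema: {
      type: "object",
      properties: {
        ticker: { type: "string", description: "The stock ticker symbol, e.g. AAPL" },
      },
      required: ["ticker"],
    },
  },
];

const question = { role: "user" as const, content: "What's the S&P 500 at today?" };

const first = await client.messages.create({
  model: "claude-3-5-sonnet-20240620",
  max_tokens: 1024,
  tools,
  messages: [question],
});

const toolUse = first.content.find((block) => block.type === "tool_use");
if (toolUse?.type === "tool_use") {
  // Run your real tool with toolUse.input, then return a tool_result block.
  const second = await client.messages.create({
    model: "claude-3-5-sonnet-20240620",
    max_tokens: 1024,
    tools,
    messages: [
      question,
      { role: "assistant", content: first.content },
      {
        role: "user",
        content: [
          {
            type: "tool_result",
            tool_use_id: toolUse.id,
            content: "259.75 USD", // placeholder result
          },
        ],
      },
    ],
  });
  console.log(second.content);
}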
Optional top_k?: number
Only sample from the top K options for each subsequent token.
Used to remove "long tail" low probability responses. Learn more technical details here.
Recommended for advanced use cases only. You usually only need to use temperature.
Optional top_p?: number
Use nucleus sampling.
In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.
Recommended for advanced use cases only. You usually only need to use temperature.
Protected createStreamWithRetry
Creates a streaming request with retry. Takes the parameters for creating a completion and, optionally, options: AnthropicRequestOptions. Returns a streaming request.
Anthropic chat model integration.
Setup: Install @langchain/anthropic and set the environment variable ANTHROPIC_API_KEY.
Constructor args
Runtime args
Runtime args can be passed as the second argument to any of the base runnable methods .invoke, .stream, .batch, etc. They can also be passed via .bind, or as the second arg in .bindTools, as shown in the examples below:
Examples
Instantiate
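A representative instantiation sketch (model name illustrative):

import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
  maxRetries: 2,
  // apiKey: "...", // defaults to process.env.ANTHROPIC_API_KEY
});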
Invoking
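Reusing llm from the Instantiate sketch:

const input = `Translate "I love programming" into French.`;
const result = await llm.invoke(input);
console.log(result.content);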
Streaming Chunks
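Each chunk is an AIMessageChunk carrying a partial content payload:

for await (const chunk of await llm.stream(input)) {
  console.log(chunk.content);
}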
Aggregate Streamed Chunks
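Chunks can be merged back into a single message with concat from @langchain/core:

import { AIMessageChunk } from "@langchain/core/messages";
import { concat } from "@langchain/core/utils/stream";

let full: AIMessageChunk | undefined;
for await (const chunk of await llm.stream(input)) {
  full = full === undefined ? chunk : concat(full, chunk);
}
console.log(full?.content);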
Bind tools
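A sketch of tool binding with a zod schema (the GetWeather tool is hypothetical):

import { z } from "zod";

const GetWeather = {
  name: "GetWeather",
  description: "Get the current weather in a given location",
  schema: z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
  }),
};

const llmWithTools = llm.bindTools([GetWeather]);
const aiMsg = await llmWithTools.invoke("What is the weather in San Francisco?");
console.log(aiMsg.tool_calls);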
Structured Output
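A sketch using withStructuredOutput with a zod schema:

import { z } from "zod";

const Joke = z
  .object({
    setup: z.string().describe("The setup of the joke"),
    punchline: z.string().describe("The punchline of the joke"),
  })
  .describe("Joke to tell the user.");

const structuredLlm = llm.withStructuredOutput(Joke, { name: "Joke" });
const joke = await structuredLlm.invoke("Tell me a joke about cats");
console.log(joke);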
Multimodal
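A sketch of image input (imageBase64 is a placeholder for your own base64-encoded image data):

import { HumanMessage } from "@langchain/core/messages";

declare const imageBase64: string; // placeholder

const multimodalMsg = new HumanMessage({
  content: [
    { type: "text", text: "Describe this image." },
    { type: "image_url", image_url: { url: `data:image/jpeg;base64,${imageBase64}` } },
  ],
});
const described = await llm.invoke([multimodalMsg]);
console.log(described.content);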
Usage Metadata
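Token counts are reported on the returned message:

const aiMsgWithUsage = await llm.invoke(input);
console.log(aiMsgWithUsage.usage_metadata);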
Stream Usage Metadata
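With streamUsage enabled (the default), the aggregated stream also carries usage data; this reuses concat from the aggregation sketch above:

let aggregate: AIMessageChunk | undefined;
for await (const chunk of await llm.stream(input)) {
  aggregate = aggregate === undefined ? chunk : concat(aggregate, chunk);
}
console.log(aggregate?.usage_metadata);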
Response Metadata
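Provider-specific response details are exposed on response_metadata:

const aiMsgWithMeta = await llm.invoke(input);
console.log(aiMsgWithMeta.response_metadata);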