Prompt caching with @AnthropicAI

Production-ready LLM applications often involve long, static instructions in every prompt. Anthropic's new prompt caching feature reuses the processed prefix across calls, cutting latency by up to 80% and cost by up to 90% on such prompts.

Try it out in LangChain today!

Python: langchain-anthropic==0.1.23
JS: @langchain/anthropic 0.2.15
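
A minimal Python sketch of what this can look like. The model name, the beta header value, and the `extra_headers`/`cache_control` fields below follow Anthropic's prompt caching beta and are assumptions to adapt to your own setup:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage

# Opt in to the prompt caching beta via a request header
# (header value per Anthropic's announcement; assumed here).
llm = ChatAnthropic(
    model="claude-3-5-sonnet-20240620",  # assumed model name
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},
)

LONG_STATIC_INSTRUCTIONS = "..."  # your long, reusable system prompt

messages = [
    # Mark the static block as cacheable with `cache_control`;
    # later calls that reuse this exact prefix can hit the cache.
    SystemMessage(
        content=[
            {
                "type": "text",
                "text": LONG_STATIC_INSTRUCTIONS,
                "cache_control": {"type": "ephemeral"},
            }
        ]
    ),
    HumanMessage(content="Summarize our refund policy."),
]

response = llm.invoke(messages)
print(response.content)
```

On a cache hit, the response's usage metadata should report cached tokens (e.g. `cache_read_input_tokens` in Anthropic's usage block), which is a quick way to confirm caching is working.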

Anthropic announcement: https://anthropic.com/news/prompt-caching