About:
GPTCache is a semantic cache designed specifically for large language models (LLMs). It integrates with LangChain and llama_index, providing efficient storage and retrieval of precomputed embeddings and related data. By answering semantically similar queries from the cache instead of calling the model again, GPTCache reduces latency and inference cost. Its integrations with LangChain and llama_index make it straightforward to adopt within existing workflows, letting developers use semantic caching to improve the efficiency of their language model applications.
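To illustrate the idea behind semantic caching, here is a minimal, library-free sketch: queries are embedded as vectors, and a lookup returns a cached answer when the cosine similarity to a stored query exceeds a threshold. The character-frequency `embed` function and the `0.95` threshold are toy assumptions for illustration only; GPTCache itself uses real model embeddings and pluggable storage backends.

```python
import math

def embed(text):
    # Toy embedding: a 26-dim letter-frequency vector.
    # Real systems use model-generated embeddings instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all-zero.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached answer)

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

    def get(self, query):
        # Return the cached answer whose query embedding is most
        # similar, but only if similarity clears the threshold.
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None
```

A near-duplicate query ("What is GPTCache?" vs. "what is gptcache") hits the cache, while an unrelated query misses and would fall through to the LLM.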
Stack: ai; chatgpt
Stars: 5.9K