
Commit 156e429

fix: fix title/heading on the notebook (#396)
1 parent 52d38f4 commit 156e429

File tree

1 file changed: +6 -1 lines changed


docs/user_guide/03_llmcache.ipynb

Lines changed: 6 additions & 1 deletion
@@ -1,5 +1,10 @@
 {
  "cells": [
+  {
+   "cell_type": "markdown",
+   "source": "# LLM Caching\n\nThis notebook demonstrates how to use RedisVL's `SemanticCache` to cache LLM responses based on semantic similarity. Semantic caching can significantly reduce API costs and latency by retrieving cached responses for semantically similar prompts instead of making redundant API calls.\n\nKey features covered:\n- Basic cache operations (store, check, clear)\n- Customizing semantic similarity thresholds\n- TTL policies for cache expiration\n- Performance benchmarking\n- Access controls with tags and filters for multi-user scenarios\n\nPrerequisites:\n- Ensure `redisvl` is installed in your Python environment\n- Have a running instance of [Redis Stack](https://redis.io/docs/install/install-stack/) or [Redis Cloud](https://redis.io/cloud)\n- OpenAI API key for the examples",
+   "metadata": {}
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -925,4 +930,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 2
-}
+}
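
The new introductory cell describes SemanticCache's store/check workflow and its distance-threshold setting. As a rough sketch of that workflow (not part of this diff; the import path, constructor arguments, and local Redis URL are assumptions based on the redisvl user guide), basic usage looks roughly like:

```python
# Minimal sketch of the SemanticCache workflow named in the new intro cell.
# Assumes a local Redis Stack at redis://localhost:6379 and that the import
# path and parameters match the redisvl version this guide targets.
from redisvl.extensions.llmcache import SemanticCache

llmcache = SemanticCache(
    name="llmcache",                     # cache index name in Redis
    redis_url="redis://localhost:6379",  # Redis Stack / Redis Cloud URL
    distance_threshold=0.1,              # max vector distance counted as a hit
)

# Store a prompt/response pair, then check with a semantically similar prompt.
llmcache.store(
    prompt="What is the capital of France?",
    response="Paris",
)

hits = llmcache.check(prompt="What actually is the capital of France?")
if hits:
    print(hits[0]["response"])  # cached answer returned without a new LLM call
```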

0 commit comments
