This repository connects Ollama with Elasticsearch + Kibana to support OpenAI-compatible RAG (Retrieval-Augmented Generation) experiments locally. It uses a shared Docker network to enable seamless integration between the LLM runtime and the Elastic Stack.
```sh
# 1. Clone this repository
git clone https://github.com/Som23Git/ollama_plus_elasticsearch_kibana.git
cd ollama_plus_elasticsearch_kibana

# 2. Start Ollama with model selection (e.g., mistral or deepseek)
./start-ollama.sh

# 3. Install and start Elasticsearch + Kibana (first time only)
curl -fsSL https://elastic.co/start-local | sh -s -- -v 8.18.2

# 4. If already installed, start the services
cd elastic-start-local
./start.sh

# 5. (Optional) Check network connectivity from host or containers
./network-check.sh
```
Example output:

```
Checking connectivity to http://localhost:11434/v1/chat/completions
Sending test prompt to mistral...
Ollama responded successfully!
Model response:
"A vector database is a type of database designed specifically for storing, indexing, and querying large collections of data vectors..."
```
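The connectivity check above can also be reproduced by hand against Ollama's OpenAI-compatible endpoint. A minimal stdlib-only sketch, assuming Ollama is listening on `localhost:11434` and the `mistral` model has been pulled (both taken from the output above); the helper name `build_chat_request` is illustrative, not part of this repository:

```python
import json
import urllib.error
import urllib.request

# Endpoint and model name taken from the network-check output above.
URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("mistral", "What is a vector database?")

try:
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the text under choices[0].message.content.
    print(body["choices"][0]["message"]["content"])
except (urllib.error.URLError, OSError) as exc:
    # Ollama is not reachable, e.g. the container has not been started yet.
    print(f"Ollama not reachable: {exc}")
```

Because the payload follows the OpenAI chat-completions shape, the same request works with any OpenAI-compatible client pointed at `http://localhost:11434/v1`.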
This setup ensures all containers (Ollama, Elasticsearch, Kibana) run in a shared Docker network named `rag-network`.
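If you are wiring the containers up yourself instead of using the scripts, the shared network can be declared in a Compose file. A hypothetical sketch (the service definition and image tag are illustrative; only the `rag-network` name comes from this repository):

```yaml
# Illustrative docker-compose fragment: services that join the external
# rag-network can resolve each other by container name.
networks:
  rag-network:
    external: true   # created once with: docker network create rag-network

services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    networks:
      - rag-network
```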
Inspired by: Testing DeepSeek R1 locally for RAG with Ollama and Kibana (Elasticsearch Labs)
This project is licensed under the MIT License.
This repository uses public domain content for RAG demos:
As per the Project Gutenberg™ License, this work is freely usable in the U.S. The text file was stripped of all Gutenberg branding for compliance.
You are free to copy, modify, and use the text for any purpose.