Transform your web browsing with AI-driven contextual understanding, right in your browser and powered by local LLMs.
All processing happens locally on your machine. No data is sent to external servers, ensuring complete privacy of your browsing and queries.
Leverages Ollama for local LLM capabilities, providing intelligent responses without cloud dependencies.
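For a sense of what that looks like in code, here is a minimal sketch using Ollama's Python client with the Llama 3.2 model pulled during setup; the `answer` helper and its prompt are illustrative, not mmxxv's actual code:

```python
import ollama

def answer(question: str, context: str) -> str:
    """Hypothetical helper: query the locally pulled Llama 3.2 model,
    grounding the answer in previously retrieved page text."""
    response = ollama.chat(
        model="hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_0",
        messages=[
            {"role": "system", "content": "Answer using only the provided page context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["message"]["content"]
```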
Uses ChromaDB for efficient vector storage and retrieval, enabling contextually aware responses across multiple pages.
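A rough sketch of that store-and-retrieve pattern, assuming a persistent ChromaDB collection with nomic-embed-text for embeddings (the collection name, path, and `embed` helper are assumptions, not mmxxv's actual schema):

```python
import chromadb
import ollama

# Hypothetical persistent store for embedded page chunks.
client = chromadb.PersistentClient(path="./chroma_db")
pages = client.get_or_create_collection(name="pages")

def embed(text: str) -> list[float]:
    # nomic-embed-text is the embedding model pulled during setup.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

# Index a chunk of page text under its source URL.
chunk = "Example page text..."
pages.add(
    ids=["https://example.com#chunk-0"],
    documents=[chunk],
    embeddings=[embed(chunk)],
    metadatas=[{"url": "https://example.com"}],
)

# Pull back the most relevant chunks for a query, across every indexed page.
results = pages.query(
    query_embeddings=[embed("What does this page say about pricing?")],
    n_results=3,
)
print(results["documents"][0])
```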
```bash
# Clone the repository
git clone https://github.com/vanzway/mmxxv.git
cd mmxxv

# Set up Python environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install websockets chromadb beautifulsoup4 requests ollama

# Install required models
ollama pull nomic-embed-text
ollama pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_0

# Start the server
python mmxxv.py --server --config mmxxv.json
```
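Before starting the server, it can help to confirm that Ollama is running and both models were pulled; Ollama's local API listens on port 11434 by default:

```bash
ollama list                           # both pulled models should appear
curl http://localhost:11434/api/tags  # JSON listing of installed models
```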