devmatrix-scripts/ai
commit 1bd38de10b by devmatrix: Add local LLM setup script using Ollama
- Auto-detects GPU (NVIDIA/AMD/Intel/CPU)
- Installs appropriate models based on VRAM
- Creates helper commands: llm-start, llm-stop, llm-list, llm-chat
- Sets up systemd service for auto-start
- API endpoint at localhost:11434 for integration
2026-02-18 13:58:27 +00:00
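The GPU auto-detection and VRAM-based model selection described above could be sketched roughly as follows. This is an illustrative sketch, not the contents of setup-local-llm.sh itself: the VRAM thresholds and Ollama model tags (`llama3.1:70b`, `llama3.1:8b`, `llama3.2:3b`) are assumptions chosen for the example.

```shell
#!/bin/sh
# Sketch of the detection/selection logic the commit message describes.
# Thresholds and model tags below are illustrative assumptions, not
# values taken from the actual setup-local-llm.sh.

detect_gpu() {
  # Prefer NVIDIA, then AMD, then Intel; fall back to CPU-only.
  if command -v nvidia-smi >/dev/null 2>&1; then
    echo nvidia
  elif command -v rocm-smi >/dev/null 2>&1; then
    echo amd
  elif grep -qi intel /sys/class/drm/card0/device/uevent 2>/dev/null; then
    echo intel
  else
    echo cpu
  fi
}

pick_model() {
  # $1 = available VRAM in MiB; map it to an Ollama model tag.
  vram=$1
  if [ "$vram" -ge 24000 ]; then
    echo "llama3.1:70b"   # high-end card: large model fits
  elif [ "$vram" -ge 8000 ]; then
    echo "llama3.1:8b"    # mid-range card
  else
    echo "llama3.2:3b"    # small card or CPU-only
  fi
}

gpu=$(detect_gpu)
echo "detected backend: $gpu"
pick_model 24576   # -> llama3.1:70b
```

Once Ollama is running (e.g. via the generated systemd unit), the same models are reachable through the HTTP API, so a quick `curl http://localhost:11434/api/tags` is a reasonable smoke test for the integration endpoint.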
setup-local-llm.sh