Implement local LLM inference support for Ollama and LM Studio:
New Clients:
- OllamaClient: Interface to Ollama API (default: localhost:11434)
- LMStudioClient: Interface to LM Studio API (default: localhost:1234)
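A minimal sketch of what the two clients could look like, using only the standard library. The class names and default URLs come from this change; the `invoke` method name and exact request handling are illustrative. Ollama's native API serves single completions at `/api/generate`, while LM Studio exposes an OpenAI-compatible API under `/v1`:

```python
import json
import urllib.request


class OllamaClient:
    """Client for Ollama's native REST API (default port 11434)."""

    def __init__(self, base_url="http://localhost:11434"):
        self.base_url = base_url.rstrip("/")

    def invoke(self, model, prompt):
        # POST to /api/generate; stream=False yields a single JSON object
        payload = json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode()
        req = urllib.request.Request(
            f"{self.base_url}/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=120) as resp:
            return json.load(resp)["response"]


class LMStudioClient:
    """Client for LM Studio's OpenAI-compatible API (default port 1234)."""

    def __init__(self, base_url="http://localhost:1234"):
        self.base_url = base_url.rstrip("/")

    def invoke(self, model, prompt):
        # LM Studio serves the OpenAI chat-completions shape under /v1
        payload = json.dumps(
            {"model": model, "messages": [{"role": "user", "content": prompt}]}
        ).encode()
        req = urllib.request.Request(
            f"{self.base_url}/v1/chat/completions",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=120) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
```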
Factory Updates:
- Added OLLAMA and LMSTUDIO to LLMProvider enum
- Updated create_client() to instantiate local clients
- Updated list_available_providers() with is_local flag
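The factory changes could take roughly this shape; `LLMProvider`, `create_client`, and `list_available_providers` follow the commit text, while the return format of the provider list and the dataclass stand-ins for the real clients are illustrative:

```python
from dataclasses import dataclass
from enum import Enum


@dataclass
class OllamaClient:  # stand-in for the real client class
    base_url: str = "http://localhost:11434"


@dataclass
class LMStudioClient:  # stand-in for the real client class
    base_url: str = "http://localhost:1234"


class LLMProvider(Enum):
    OLLAMA = "ollama"
    LMSTUDIO = "lmstudio"
    # existing cloud providers elided


def create_client(provider, base_url=None):
    """Instantiate the client for a provider, with local defaults."""
    if provider is LLMProvider.OLLAMA:
        return OllamaClient(base_url or "http://localhost:11434")
    if provider is LLMProvider.LMSTUDIO:
        return LMStudioClient(base_url or "http://localhost:1234")
    raise ValueError(f"Unsupported provider: {provider}")


def list_available_providers():
    # is_local=True lets callers skip the API-key requirement
    return [
        {"name": p.value, "is_local": True}
        for p in (LLMProvider.OLLAMA, LLMProvider.LMSTUDIO)
    ]
```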
Configuration:
- Added ollama_base_url and lmstudio_base_url settings
- Local providers always report as configured in the API key check, since no key is required
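A sketch of the configuration shape, assuming a dataclass-style settings object. The field names `ollama_base_url` and `lmstudio_base_url` come from this change; the `OLLAMA_BASE_URL`/`LMSTUDIO_BASE_URL` environment variables, the `Settings` class, and the `is_configured` check are hypothetical:

```python
import os
from dataclasses import dataclass, field


def _env(name, default):
    # Read an override from the environment at construction time
    return lambda: os.environ.get(name, default)


@dataclass
class Settings:
    # New fields from this change; defaults match the client defaults
    ollama_base_url: str = field(
        default_factory=_env("OLLAMA_BASE_URL", "http://localhost:11434")
    )
    lmstudio_base_url: str = field(
        default_factory=_env("LMSTUDIO_BASE_URL", "http://localhost:1234")
    )
    api_key: str = ""

    def is_configured(self, provider: str) -> bool:
        # Local providers need no API key, so they always pass the check
        if provider in ("ollama", "lmstudio"):
            return True
        return bool(self.api_key)
```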
Tests:
- Comprehensive test suite (250+ lines)
- Tests for client initialization and invocation
- Factory integration tests
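One way the invocation tests can run without a live local server is to stub the HTTP layer. The sketch below is illustrative, not taken from the suite; the client shown is a minimal stand-in mirroring an Ollama `/api/generate` call:

```python
import json
import urllib.request
from unittest import mock


class OllamaClient:  # minimal stand-in mirroring the real client's invoke()
    def __init__(self, base_url="http://localhost:11434"):
        self.base_url = base_url

    def invoke(self, model, prompt):
        payload = json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode()
        req = urllib.request.Request(
            f"{self.base_url}/api/generate", data=payload
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["response"]


def test_ollama_invoke_parses_response():
    # Fake response object usable as a context manager with a read() method
    fake = mock.MagicMock()
    fake.read.return_value = json.dumps({"response": "hello"}).encode()
    fake.__enter__.return_value = fake
    fake.__exit__.return_value = False
    with mock.patch("urllib.request.urlopen", return_value=fake):
        assert OllamaClient().invoke("llama3.2", "hi") == "hello"
```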
Documentation:
- Added LLM Providers section to SKILL.md
- Documented setup for Ollama and LM Studio
- Added usage examples and configuration guide
Usage:
provider: ollama, model: llama3.2
provider: lmstudio, model: local-model