Implement local LLM inference support for Ollama and LM Studio:
New Clients:
- OllamaClient: Interface to Ollama API (default: localhost:11434)
- LMStudioClient: Interface to LM Studio API (default: localhost:1234)
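A minimal sketch of what such a client can look like, using only the standard library. It targets Ollama's documented non-streaming `POST /api/chat` endpoint; the class and method names are illustrative, not the project's actual code.

```python
import json
import urllib.request


class OllamaClient:
    """Sketch of a client for Ollama's local chat API (assumption:
    the real project class may differ in shape)."""

    def __init__(self, base_url: str = "http://localhost:11434"):
        self.base_url = base_url.rstrip("/")

    def chat(self, model: str, messages: list) -> str:
        payload = json.dumps(
            {"model": model, "messages": messages, "stream": False}
        ).encode("utf-8")
        req = urllib.request.Request(
            f"{self.base_url}/api/chat",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        # Non-streaming responses carry the reply under message.content
        return body["message"]["content"]
```

An LMStudioClient would be analogous, pointed at LM Studio's OpenAI-compatible endpoint on port 1234.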
Factory Updates:
- Added OLLAMA and LMSTUDIO to LLMProvider enum
- Updated create_client() to instantiate local clients
- Updated list_available_providers() with is_local flag
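The factory changes above can be sketched as follows. The stub client classes stand in for the real implementations, and the exact enum members and function signatures are assumptions based on the names in this commit.

```python
from enum import Enum


class LLMProvider(str, Enum):
    OPENAI = "openai"
    OLLAMA = "ollama"
    LMSTUDIO = "lmstudio"


class OllamaClient:
    def __init__(self, base_url: str = "http://localhost:11434"):
        self.base_url = base_url


class LMStudioClient:
    def __init__(self, base_url: str = "http://localhost:1234"):
        self.base_url = base_url


LOCAL_PROVIDERS = {LLMProvider.OLLAMA, LLMProvider.LMSTUDIO}


def list_available_providers():
    # Local providers run on the caller's machine and need no API key,
    # so each entry carries an is_local flag
    return [{"name": p.value, "is_local": p in LOCAL_PROVIDERS}
            for p in LLMProvider]


def create_client(provider: LLMProvider, **kwargs):
    if provider is LLMProvider.OLLAMA:
        return OllamaClient(**kwargs)
    if provider is LLMProvider.LMSTUDIO:
        return LMStudioClient(**kwargs)
    raise ValueError(f"no local client for provider: {provider}")
```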
Configuration:
- Added ollama_base_url and lmstudio_base_url settings
- Local providers are treated as configured (no API key required)
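A sketch of the configuration shape, assuming a simple settings object; field names follow the commit, the surrounding structure is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Settings:
    ollama_base_url: str = "http://localhost:11434"
    lmstudio_base_url: str = "http://localhost:1234"
    openai_api_key: Optional[str] = None


def is_configured(provider: str, settings: Settings) -> bool:
    # Local providers never require an API key, so the check
    # short-circuits for them
    if provider in ("ollama", "lmstudio"):
        return True
    return settings.openai_api_key is not None
```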
Tests:
- Comprehensive test suite (250+ lines)
- Tests for client initialization and invocation
- Factory integration tests
Documentation:
- Added LLM Providers section to SKILL.md
- Documented setup for Ollama and LM Studio
- Added usage examples and configuration guide
Usage:
provider: ollama, model: llama3.2
provider: lmstudio, model: local-model
Add test coverage for new integration components:
New Test Files:
- test_notebooklm_indexer.py: Unit tests for NotebookLMIndexerService
* test_sync_notebook_success: Verify successful notebook sync
* test_sync_notebook_not_found: Handle non-existent notebooks
* test_extract_source_content_success/failure: Content extraction
* test_delete_notebook_index_success/failure: Index management
* test_end_to_end_sync_flow: Integration verification
- test_notebooklm_sync.py: API route tests
* test_sync_notebook_endpoint: POST /notebooklm/sync/{id}
* test_list_indexed_notebooks_endpoint: GET /notebooklm/indexed
* test_delete_notebook_index_endpoint: DELETE /notebooklm/sync/{id}
* test_get_sync_status_endpoint: GET /notebooklm/sync/{id}/status
* test_query_with_notebook_ids: Query with notebook filters
* test_query_notebooks_endpoint: POST /query/notebooks
All tests use mocking to avoid external dependencies.
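The mocking approach can be sketched like this: dependencies are replaced with `MagicMock` so no external service is hit. The `sync_notebook` function below is a hypothetical stand-in for the service under test, not the project's real `NotebookLMIndexerService`.

```python
from unittest.mock import MagicMock


def sync_notebook(repo, indexer, notebook_id: str) -> dict:
    # Hypothetical service shape mirroring the tests described above
    notebook = repo.get(notebook_id)
    if notebook is None:
        return {"status": "error", "detail": "notebook not found"}
    indexer.index(notebook)
    return {"status": "ok", "notebook_id": notebook_id}


def test_sync_notebook_not_found():
    repo = MagicMock()
    repo.get.return_value = None       # simulate a missing notebook
    indexer = MagicMock()
    result = sync_notebook(repo, indexer, "missing-id")
    assert result["status"] == "error"
    indexer.index.assert_not_called()  # no indexing on failure
```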
Rename project from AgenticRAG to DocuMente

## Changes
- Update all references from AgenticRAG to DocuMente
- Update README.md with new project description and structure
- Update LICENSE with new project name
- Update API title and descriptions in main.py
- Update frontend components (Layout, Login, Dashboard, Settings)
- Update static HTML page
- Update all documentation files (prd-v2.md, frontend-plan.md, etc.)
- Update test files with new project name
- Update docker-compose.yml, Dockerfile, requirements.txt
## SEO Benefits
- DocuMente combines 'Documento' and 'Mente' (Italian for Document and Mind)
- Memorable and brandable name
- Reflects the core functionality: AI-powered document intelligence
🎉 Project officially renamed to DocuMente!
Implement Sprint 3: Chat Functionality
- Add ChatService with send_message and get_history methods
- Add POST /api/v1/notebooks/{id}/chat - Send message
- Add GET /api/v1/notebooks/{id}/chat/history - Get chat history
- Add ChatRequest model (message, include_references)
- Add ChatResponse model (message, sources[], timestamp)
- Add ChatMessage model (id, role, content, timestamp, sources)
- Add SourceReference model (source_id, title, snippet)
- Integrate chat router with main app
Features:
- Send messages to notebook chat
- Get AI responses with source references
- Retrieve chat history
- Support for citations in responses
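The model shapes listed above can be sketched with dataclasses; the project presumably uses Pydantic models for FastAPI validation, so treat this as a shape-only illustration with field names taken from the commit.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class SourceReference:
    source_id: str
    title: str
    snippet: str


@dataclass
class ChatRequest:
    message: str
    include_references: bool = True


@dataclass
class ChatMessage:
    id: str
    role: str              # "user" or "assistant"
    content: str
    timestamp: datetime
    sources: List[SourceReference] = field(default_factory=list)


@dataclass
class ChatResponse:
    message: str
    sources: List[SourceReference] = field(default_factory=list)
    timestamp: datetime = field(default_factory=datetime.now)
```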
Tests:
- 14 unit tests for ChatService
- 11 integration tests for chat API
- 25/25 tests passing
Related: Sprint 3 - Chat Functionality
- Add 28 unit tests for SourceService (TEST-004)
  * Test all CRUD operations
  * Test validation logic
  * Test error handling
  * Test research functionality
- Add 13 integration tests for sources API (TEST-005)
  * Test POST /sources endpoint
  * Test GET /sources endpoint with filters
  * Test DELETE /sources endpoint
  * Test POST /sources/research endpoint
- Fix ValidationError signatures in SourceService
- Fix NotebookLMError signatures
- Fix status parameter shadowing in sources router
Coverage: 28/28 unit tests pass, 13/13 integration tests pass
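The "status parameter shadowing" fix can be illustrated in miniature: a route handler that both relies on HTTP status constants and declares a query parameter named `status` hides the constants inside the handler, and renaming the parameter restores access. The `_Status` class below stands in for `fastapi.status`; all names are illustrative, not project code.

```python
class _Status:
    # Stand-in for the `fastapi.status` constants module
    HTTP_400_BAD_REQUEST = 400


status = _Status()  # module-level, like `from fastapi import status`


def list_sources(status=None):
    # Bug: the parameter shadows the module, so `status` is now the
    # query-string value (a str), and attribute access fails
    if status == "bad":
        return status.HTTP_400_BAD_REQUEST
    return []


def list_sources_fixed(source_status=None):
    # Fix: renamed parameter leaves the constants visible
    if source_status == "bad":
        return status.HTTP_400_BAD_REQUEST
    return []
```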