Expand SKILL.md LLM Providers section with comprehensive network configuration:
New Sections:
- Custom URL Configuration: Table of scenarios (localhost, LAN, VPN, Docker)
- Ollama Network Setup: 3 configuration options (env var, systemd, docker)
- LM Studio Network Setup: Step-by-step with CORS configuration
- Common Architectures: ASCII diagrams for common setups
* Server AI dedicato
* Multi-client configuration
* VPN Remote access
- Security & Firewall: UFW and iptables rules
- SSH Tunnel: Secure alternative to direct exposure
- Network Troubleshooting: Common issues and solutions
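The three Ollama network options can be sketched as follows (assuming a standard systemd install; `OLLAMA_HOST` is Ollama's documented bind-address variable):

```shell
# Option 1: environment variable (current shell only)
export OLLAMA_HOST=0.0.0.0:11434
ollama serve

# Option 2: systemd override (persists across restarts)
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl restart ollama

# Option 3: Docker, publishing the API port to the host
docker run -d -p 11434:11434 --name ollama ollama/ollama
```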
Security Notes:
- Warning about exposing to public IP
- Firewall rules for LAN-only access
- SSH tunnel as secure alternative
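A minimal sketch of the two hardening approaches above, assuming the server runs Ollama on port 11434 and the LAN is 192.168.1.0/24 (adjust to your subnet):

```shell
# LAN-only access with UFW: allow the local subnet, deny everyone else
sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp
sudo ufw deny 11434/tcp

# SSH tunnel alternative: nothing exposed on the network at all;
# clients reach the remote Ollama at localhost:11434 through the tunnel
ssh -N -L 11434:localhost:11434 user@ai-server
```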
Examples:
- Docker deployment
- WireGuard VPN setup
- Multi-user configurations
- CORS troubleshooting
This documentation enables users to deploy Ollama/LM Studio
on dedicated servers and access them from multiple clients.
Implement local LLM inference support for Ollama and LM Studio:
New Clients:
- OllamaClient: Interface to Ollama API (default: localhost:11434)
- LMStudioClient: Interface to LM Studio API (default: localhost:1234)
Factory Updates:
- Added OLLAMA and LMSTUDIO to LLMProvider enum
- Updated create_client() to instantiate local clients
- Updated list_available_providers() with is_local flag
Configuration:
- Added ollama_base_url and lmstudio_base_url settings
- Local providers report as configured in the API key check (no key required)
Tests:
- Comprehensive test suite (250+ lines)
- Tests for client initialization and invocation
- Factory integration tests
Documentation:
- Added LLM Providers section to SKILL.md
- Documented setup for Ollama and LM Studio
- Added usage examples and configuration guide
Usage:
provider: ollama, model: llama3.2
provider: lmstudio, model: local-model
Update documentation to reflect new integration features:
README.md:
- Add 'NotebookLM + RAG Integration' section after Overview
- Update DocuMente component section with new endpoints
- Add notebooklm_sync.py and notebooklm_indexer.py to architecture
- Add integration API examples
- Add link to docs/integration.md
SKILL.md:
- Add RAG Integration to Capabilities table
- Update Autonomy Rules with new endpoints
- Add RAG Integration section to Quick Reference
- Add Sprint 2 changelog with integration features
- Update Skill Version to 1.2.0
docs/integration.md (NEW):
- Complete integration guide with architecture diagram
- API reference for all sync and query endpoints
- Usage examples and workflows
- Best practices and troubleshooting
- Performance considerations and limitations
- Roadmap for future features
All documentation now accurately reflects the unified
NotebookLM + RAG agent capabilities.
Add test coverage for new integration components:
New Test Files:
- test_notebooklm_indexer.py: Unit tests for NotebookLMIndexerService
* test_sync_notebook_success: Verify successful notebook sync
* test_sync_notebook_not_found: Handle non-existent notebooks
* test_extract_source_content_success/failure: Content extraction
* test_delete_notebook_index_success/failure: Index management
* test_end_to_end_sync_flow: Integration verification
- test_notebooklm_sync.py: API route tests
* test_sync_notebook_endpoint: POST /notebooklm/sync/{id}
* test_list_indexed_notebooks_endpoint: GET /notebooklm/indexed
* test_delete_notebook_index_endpoint: DELETE /notebooklm/sync/{id}
* test_get_sync_status_endpoint: GET /notebooklm/sync/{id}/status
* test_query_with_notebook_ids: Query with notebook filters
* test_query_notebooks_endpoint: POST /query/notebooks
All tests use mocking to avoid external dependencies.
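The mocking pattern can be sketched as below; the service class here is a simplified stand-in, not the project's real `NotebookLMIndexerService`:

```python
from unittest import mock


class NotebookLMIndexerService:
    """Simplified stand-in: the real service extracts source content
    and writes it to the vector store."""

    def __init__(self, vector_store):
        self.vector_store = vector_store

    def sync_notebook(self, notebook_id: str) -> dict:
        chunks = self.vector_store.upsert(notebook_id)
        return {"notebook_id": notebook_id, "chunks_indexed": chunks}


# In a test, the external dependency is replaced with a mock, so no
# Qdrant instance (or NotebookLM access) is needed:
store = mock.Mock()
store.upsert.return_value = 42
service = NotebookLMIndexerService(store)
result = service.sync_notebook("nb-123")
```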
Add complete integration between NotebookLM Agent and DocuMente RAG:
New Components:
- NotebookLMIndexerService: Syncs NotebookLM content to Qdrant vector store
- notebooklm_sync API routes: Manage notebook indexing (/api/v1/notebooklm/*)
Enhanced Components:
- RAGService: Added notebook_ids filter and query_notebooks() method
- VectorStoreService: Added filter support for metadata queries
- DocumentService: Added ingest_notebooklm_source() method
- Query routes: Added /query/notebooks endpoint for notebook-only queries
- Main API: Integrated new routes and updated to v2.1.0
Features:
- Sync NotebookLM notebooks to local vector store
- Query across documents and/or notebooks
- Filter RAG queries by specific notebook IDs
- Manage indexed notebooks (list, sync, delete)
- Track sync status and metadata
API Endpoints:
- POST /api/v1/notebooklm/sync/{notebook_id}
- GET /api/v1/notebooklm/indexed
- DELETE /api/v1/notebooklm/sync/{notebook_id}
- GET /api/v1/notebooklm/sync/{notebook_id}/status
- POST /api/v1/query/notebooks
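A hypothetical client-side call to the notebooks-only query endpoint; the host/port and the payload field names (`query`, `notebook_ids`) are assumptions based on the endpoints above:

```python
import json
from urllib import request

BASE = "http://localhost:8000/api/v1"  # assumed host/port


def notebook_query_request(query: str, notebook_ids: list[str]) -> request.Request:
    """Build (but do not send) a POST /query/notebooks request."""
    payload = json.dumps({"query": query, "notebook_ids": notebook_ids}).encode()
    return request.Request(
        f"{BASE}/query/notebooks",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = notebook_query_request("What does the report conclude?", ["nb-123"])
```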
Closes integration request for unified NotebookLM + RAG agent
Restructure README to clearly present both systems:
- NotebookLM Agent: API for Google NotebookLM automation
- DocuMente: Multi-provider RAG system with web UI
Changes:
- Add detailed architecture diagrams for both systems
- Separate configuration and usage instructions
- Update project structure to show both src/notebooklm_agent and src/agentic_rag
- Add API examples for both systems
- Reorganize documentation section by system
## Changes
- Update all references from AgenticRAG to DocuMente
- Update README.md with new project description and structure
- Update LICENSE with new project name
- Update API title and descriptions in main.py
- Update frontend components (Layout, Login, Dashboard, Settings)
- Update static HTML page
- Update all documentation files (prd-v2.md, frontend-plan.md, etc.)
- Update test files with new project name
- Update docker-compose.yml, Dockerfile, requirements.txt
## SEO Benefits
- DocuMente combines 'Documento' and 'Mente' (Italian for Document and Mind)
- Memorable and brandable name
- Reflects the core functionality: AI-powered document intelligence
🎉 Project officially renamed to DocuMente!
- Remove MIT License
- Add proprietary license with All Rights Reserved
- Update README with ⚖️ License and Legal Notes section
- Specify Luca Sacchi Ricciardi as copyright holder
- Designate Foro di Milano (Court of Milan) as exclusive jurisdiction
⚖️ All Rights Reserved - Luca Sacchi Ricciardi 2026
Major refactoring from NotebookLM API to Agentic Retrieval System:
## New Features
- AgenticRAG backend powered by datapizza-ai framework
- Web interface for document upload and chat
- REST API with Swagger/OpenAPI documentation
- Document processing pipeline (Docling, chunking, embedding)
- Qdrant vector store integration
- Multi-provider LLM support (OpenAI, Google, Anthropic)
## New Components
- src/agentic_rag/api/ - FastAPI REST API
- Documents API (upload, list, delete)
- Query API (RAG queries)
- Chat API (conversational interface)
- Swagger UI at /api/docs
- src/agentic_rag/services/
- Document service with datapizza-ai pipeline
- RAG service with retrieval + generation
- Vector store service (Qdrant)
- static/index.html - Web UI (upload + chat)
## Dependencies
- datapizza-ai (core framework)
- datapizza-ai-clients-openai
- datapizza-ai-embedders-openai
- datapizza-ai-vectorstores-qdrant
- FastAPI, Pydantic, Qdrant
## API Endpoints
- POST /api/v1/documents - Upload documents
- GET /api/v1/documents - List documents
- POST /api/v1/query - Query knowledge base
- POST /api/v1/chat - Chat interface
- GET /api/docs - Swagger documentation
- GET / - Web UI
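Hypothetical usage against these endpoints; the host/port and payload field names are assumptions:

```shell
# Upload a document to the knowledge base
curl -X POST http://localhost:8000/api/v1/documents -F "file=@report.pdf"

# Query the indexed documents
curl -X POST http://localhost:8000/api/v1/query \
  -H "Content-Type: application/json" \
  -d '{"query": "Summarize the report"}'
```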
🏁 Ready for testing and deployment!
Implement Sprint 3: Chat Functionality
- Add ChatService with send_message and get_history methods
- Add POST /api/v1/notebooks/{id}/chat - Send message
- Add GET /api/v1/notebooks/{id}/chat/history - Get chat history
- Add ChatRequest model (message, include_references)
- Add ChatResponse model (message, sources[], timestamp)
- Add ChatMessage model (id, role, content, timestamp, sources)
- Add SourceReference model (source_id, title, snippet)
- Integrate chat router with main app
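The models listed above can be sketched as follows; the fields come from the commit, but plain dataclasses stand in for whatever model layer (likely Pydantic) the project actually uses:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class SourceReference:
    source_id: str
    title: str
    snippet: str


@dataclass
class ChatRequest:
    message: str
    include_references: bool = True


@dataclass
class ChatResponse:
    message: str
    sources: list[SourceReference] = field(default_factory=list)
    timestamp: datetime = field(default_factory=datetime.now)
```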
Features:
- Send messages to notebook chat
- Get AI responses with source references
- Retrieve chat history
- Support for citations in responses
Tests:
- 14 unit tests for ChatService
- 11 integration tests for chat API
- 25/25 tests passing
Related: Sprint 3 - Chat Functionality
- Add 28 unit tests for SourceService (TEST-004)
- Test all CRUD operations
- Test validation logic
- Test error handling
- Test research functionality
- Add 13 integration tests for sources API (TEST-005)
- Test POST /sources endpoint
- Test GET /sources endpoint with filters
- Test DELETE /sources endpoint
- Test POST /sources/research endpoint
- Fix ValidationError signatures in SourceService
- Fix NotebookLMError signatures
- Fix status parameter shadowing in sources router
Coverage: 28/28 unit tests pass, 13/13 integration tests pass
Implement Sprint 2: Source Management
- Add SourceService with create, list, delete, research methods
- Add POST /api/v1/notebooks/{id}/sources - Add source (URL, YouTube, Drive)
- Add GET /api/v1/notebooks/{id}/sources - List sources with filtering
- Add DELETE /api/v1/notebooks/{id}/sources/{source_id} - Delete source
- Add POST /api/v1/notebooks/{id}/sources/research - Web research
- Add ResearchRequest model for research parameters
- Integrate sources router with main app
Endpoints:
- POST /sources - 201 Created
- GET /sources - 200 OK with pagination
- DELETE /sources/{id} - 204 No Content
- POST /sources/research - 202 Accepted
Technical:
- Support for url, youtube, drive source types
- Filtering by source_type and status
- Validation for research mode (fast/deep)
- Error handling with standardized responses
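The validation described above could look roughly like this (a sketch; only the allowed values `url`/`youtube`/`drive` and `fast`/`deep` come from the commit):

```python
VALID_SOURCE_TYPES = {"url", "youtube", "drive"}
VALID_RESEARCH_MODES = {"fast", "deep"}


def validate_research_request(query: str, mode: str) -> None:
    """Reject invalid research parameters before dispatching the job."""
    if not query.strip():
        raise ValueError("query must not be empty")
    if mode not in VALID_RESEARCH_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_RESEARCH_MODES)}")


def validate_source_type(source_type: str) -> None:
    """Reject unsupported source types with a clear error message."""
    if source_type not in VALID_SOURCE_TYPES:
        raise ValueError(f"source_type must be one of {sorted(VALID_SOURCE_TYPES)}")
```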
Related: Sprint 2 - Source Management