---
name: notebooklm-agent
description: API and webhook interface for Google NotebookLM automation. Full programmatic access including audio generation, video creation, quizzes, flashcards, and all NotebookLM Studio features. Integrates with other AI agents via REST API and webhooks.
triggers:
- /notebooklm-agent
- /notebooklm
- "create.*podcast"
- "generate.*audio"
- "create.*video"
- "generate.*quiz"
- "create.*flashcard"
- "research.*notebook"
- "webhook.*notebook"
---
# NotebookLM Agent API Skill
Agentic interface for Google NotebookLM via REST API and webhooks. Automates notebook creation, source management, multi-format content generation (audio, video, slides, quizzes, flashcards), and integration with other AI agents.
---
## Capabilities
### Supported Operations
| Category | Operations |
|----------|------------|
| **Notebooks** | Create, list, get, update, delete |
| **Sources** | Add URLs, PDFs, YouTube, Drive, web research |
| **Chat** | Query sources, conversation history |
| **Generation** | Audio (podcasts), video, slides, infographics, quizzes, flashcards, reports, mind maps, data tables |
| **Artifacts** | Monitor status, download in various formats |
| **Webhooks** | Register endpoints, receive event notifications |
| **RAG Integration** | Sync notebooks, semantic search, multi-notebook queries |
---
## Prerequisites
### 1. NotebookLM Authentication
Before any other operation, authenticate with NotebookLM:
```bash
# Browser login (first time)
notebooklm login
# Verify authentication
notebooklm auth check
notebooklm list
```
### 2. Starting the API Server
```bash
# Start the API server
uv run fastapi dev src/notebooklm_agent/api/main.py
# Health check
curl http://localhost:8000/health
```
---
## Autonomy Rules
### ✅ Run Automatically (no confirmation)
| Operation | Reason |
|-----------|--------|
| `GET /api/v1/notebooks` | Read-only |
| `GET /api/v1/notebooks/{id}` | Read-only |
| `GET /api/v1/notebooks/{id}/sources` | Read-only |
| `GET /api/v1/notebooks/{id}/chat/history` | Read-only |
| `GET /api/v1/notebooks/{id}/artifacts` | Read-only |
| `GET /api/v1/notebooks/{id}/artifacts/{artifact_id}/status` | Read-only |
| `GET /api/v1/notebooklm/indexed` | Read-only |
| `GET /api/v1/notebooklm/sync/{id}/status` | Read-only |
| `POST /api/v1/query` | Read-only (search) |
| `POST /api/v1/query/notebooks` | Read-only (search) |
| `GET /health` | Health check |
| `POST /api/v1/webhooks/{id}/test` | Non-destructive test |
### ⚠️ Ask for Confirmation First
| Operation | Reason |
|-----------|--------|
| `POST /api/v1/notebooks` | Creates a resource |
| `DELETE /api/v1/notebooks/{id}` | Destructive |
| `POST /api/v1/notebooks/{id}/sources` | Adds data |
| `POST /api/v1/notebooks/{id}/generate/*` | Long-running, may fail |
| `GET /api/v1/notebooks/{id}/artifacts/{artifact_id}/download` | Writes to the filesystem |
| `POST /api/v1/webhooks` | Configures an endpoint |
| `POST /api/v1/notebooklm/sync/{id}` | Indexes data (time/resources) |
| `DELETE /api/v1/notebooklm/sync/{id}` | Removes indexed data |
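The two tables above can be encoded as a small policy check for an orchestrating agent. A minimal sketch, assuming the endpoint paths documented above; the `requires_confirmation` helper and its pattern list are illustrative, not part of the API:

```python
import re

# Read-only or non-destructive calls that may run without confirmation.
AUTO_PATTERNS = [
    ("GET", r"^/api/v1/.*"),                       # all documented GETs are read-only
    ("GET", r"^/health$"),
    ("POST", r"^/api/v1/query(/notebooks)?$"),     # searches, no mutation
    ("POST", r"^/api/v1/webhooks/[^/]+/test$"),    # non-destructive test
]

def requires_confirmation(method: str, path: str) -> bool:
    """Return True if the call mutates state and should be confirmed first."""
    for auto_method, pattern in AUTO_PATTERNS:
        if method == auto_method and re.match(pattern, path):
            return False
    return True
```

The download endpoint is a `GET` but writes to the filesystem, so a stricter policy would special-case it before the pattern scan.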
---
## Quick Reference API
### Notebook Operations
```bash
# Create a notebook
curl -X POST http://localhost:8000/api/v1/notebooks \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"title": "AI Research", "description": "A study of artificial intelligence"}'
# List notebooks
curl http://localhost:8000/api/v1/notebooks \
-H "X-API-Key: your-key"
# Get a specific notebook
curl http://localhost:8000/api/v1/notebooks/{notebook_id} \
-H "X-API-Key: your-key"
```
### Source Operations
```bash
# Add a URL
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/sources \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"type": "url", "url": "https://example.com/article"}'
# Add a PDF
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/sources \
-H "X-API-Key: your-key" \
-H "Content-Type: multipart/form-data" \
-F "type=file" \
-F "file=@/path/to/document.pdf"
# Web research
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/sources/research \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"query": "artificial intelligence 2026", "mode": "deep", "auto_import": true}'
```
### Chat Operations
```bash
# Send a message
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/chat \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"message": "What are the key points?", "include_references": true}'
# Get chat history
curl http://localhost:8000/api/v1/notebooks/{id}/chat/history \
-H "X-API-Key: your-key"
```
### Content Generation
```bash
# Generate an audio podcast
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/audio \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"instructions": "Make the podcast engaging and accessible",
"format": "deep-dive",
"length": "long",
"language": "it"
}'
# Generate a video
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/video \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"instructions": "Professional explainer video",
"style": "whiteboard",
"language": "it"
}'
# Generate a quiz
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/quiz \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"difficulty": "medium",
"quantity": "standard"
}'
# Generate flashcards
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/flashcards \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"difficulty": "hard",
"quantity": "more"
}'
# Generate a slide deck
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/slide-deck \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"format": "detailed",
"length": "default"
}'
# Generate an infographic
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/infographic \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"orientation": "portrait",
"detail": "detailed"
}'
# Generate a mind map (instant)
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/mind-map \
-H "X-API-Key: your-key"
# Generate a data table
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/data-table \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"description": "Compare the different machine learning approaches"
}'
```
### Artifact Management
```bash
# List artifacts
curl http://localhost:8000/api/v1/notebooks/{id}/artifacts \
-H "X-API-Key: your-key"
# Check status
curl http://localhost:8000/api/v1/notebooks/{id}/artifacts/{artifact_id}/status \
-H "X-API-Key: your-key"
# Wait for completion
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/artifacts/{artifact_id}/wait \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"timeout": 1200}'
# Download an artifact
curl http://localhost:8000/api/v1/notebooks/{id}/artifacts/{artifact_id}/download \
-H "X-API-Key: your-key" \
-o artifact.mp3
```
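Instead of calling the blocking `/wait` endpoint, an agent can poll `/status` itself. A hedged sketch of such a loop; the `fetch_status` callable stands in for the HTTP call, and the status strings are assumptions:

```python
import time

def wait_for_artifact(fetch_status, timeout: float = 1200.0, interval: float = 10.0) -> dict:
    """Poll fetch_status() until it reports a terminal state or the timeout expires.

    fetch_status must return a dict like {"status": "pending" | "completed" | "failed"}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.get("status") in ("completed", "failed"):
            return status
        time.sleep(interval)  # avoid hammering the API between checks
    raise TimeoutError("artifact did not reach a terminal state within the timeout")
```

In practice `fetch_status` would `GET .../artifacts/{artifact_id}/status` with the `X-API-Key` header; the timeout should follow the Timing Guide below.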
### RAG Integration
```bash
# Sync a notebook into the vector store
curl -X POST http://localhost:8000/api/v1/notebooklm/sync/{notebook_id} \
-H "X-API-Key: your-key"
# List synced notebooks
curl http://localhost:8000/api/v1/notebooklm/indexed \
-H "X-API-Key: your-key"
# Query notebooks (notebook content only)
curl -X POST http://localhost:8000/api/v1/query/notebooks \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"question": "What are the key points?",
"notebook_ids": ["uuid-1", "uuid-2"],
"k": 10,
"provider": "openai"
}'
# Mixed query (documents + notebooks)
curl -X POST http://localhost:8000/api/v1/query \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"question": "Compare the information across documents and notebooks",
"notebook_ids": ["uuid-1"],
"include_documents": true,
"provider": "anthropic"
}'
# Remove the sync
curl -X DELETE http://localhost:8000/api/v1/notebooklm/sync/{notebook_id} \
-H "X-API-Key: your-key"
```
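The notebook-query body can be assembled programmatically before the HTTP call. A small sketch mirroring the fields shown above; the field names come from the examples, while the defaults and the helper itself are assumptions:

```python
def build_notebook_query(question: str, notebook_ids, k: int = 10, provider: str = "openai") -> dict:
    """Build the JSON body for POST /api/v1/query/notebooks."""
    if not notebook_ids:
        raise ValueError("at least one notebook_id is required")
    return {
        "question": question,
        "notebook_ids": list(notebook_ids),
        "k": k,
        "provider": provider,
    }
```

The resulting dict can be passed to any HTTP client as the JSON body, with the `X-API-Key` header attached separately.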
---
### Webhook Management
```bash
# Register a webhook
curl -X POST http://localhost:8000/api/v1/webhooks \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"url": "https://your-agent.com/webhook",
"events": ["artifact.completed", "source.ready"],
"secret": "your-webhook-secret"
}'
# List webhooks
curl http://localhost:8000/api/v1/webhooks \
-H "X-API-Key: your-key"
# Test a webhook
curl -X POST http://localhost:8000/api/v1/webhooks/{webhook_id}/test \
-H "X-API-Key: your-key"
# Remove a webhook
curl -X DELETE http://localhost:8000/api/v1/webhooks/{webhook_id} \
-H "X-API-Key: your-key"
```
---
## Content Generation Options
### Audio (Podcast)
| Parameter | Values | Default |
|-----------|--------|---------|
| `format` | `deep-dive`, `brief`, `critique`, `debate` | `deep-dive` |
| `length` | `short`, `default`, `long` | `default` |
| `language` | `en`, `it`, `es`, `fr`, `de`, ... | `en` |
### Video
| Parameter | Values | Default |
|-----------|--------|---------|
| `format` | `explainer`, `brief` | `explainer` |
| `style` | `auto`, `classic`, `whiteboard`, `kawaii`, `anime`, `watercolor`, `retro-print`, `heritage`, `paper-craft` | `auto` |
| `language` | Language code | `en` |
### Slide Deck
| Parameter | Values | Default |
|-----------|--------|---------|
| `format` | `detailed`, `presenter` | `detailed` |
| `length` | `default`, `short` | `default` |
### Infographic
| Parameter | Values | Default |
|-----------|--------|---------|
| `orientation` | `landscape`, `portrait`, `square` | `landscape` |
| `detail` | `concise`, `standard`, `detailed` | `standard` |
| `style` | `auto`, `sketch-note`, `professional`, `bento-grid`, `editorial`, `instructional`, `bricks`, `clay`, `anime`, `kawaii`, `scientific` | `auto` |
### Quiz / Flashcards
| Parameter | Values | Default |
|-----------|--------|---------|
| `difficulty` | `easy`, `medium`, `hard` | `medium` |
| `quantity` | `fewer`, `standard`, `more` | `standard` |
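The option tables above can double as a client-side validator before submitting a generation request. A sketch, with the allowed values transcribed from the tables; the helper itself is illustrative, not part of the API:

```python
# Allowed values transcribed from the option tables above
AUDIO_OPTIONS = {
    "format": {"deep-dive", "brief", "critique", "debate"},
    "length": {"short", "default", "long"},
}
QUIZ_OPTIONS = {
    "difficulty": {"easy", "medium", "hard"},
    "quantity": {"fewer", "standard", "more"},
}

def validate_options(payload: dict, allowed: dict) -> list:
    """Return (field, value) pairs whose value is not permitted for that field."""
    return [
        (field, value)
        for field, value in payload.items()
        if field in allowed and value not in allowed[field]
    ]
```

Rejecting bad values locally avoids a round trip that would end in a `VALIDATION_ERROR` response.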
---
## Common Workflows
### Workflow 1: Research to Podcast
```bash
# 1. Create a notebook
NOTEBOOK=$(curl -s -X POST http://localhost:8000/api/v1/notebooks \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"title": "AI Research"}' | jq -r '.data.id')
# 2. Add sources
curl -X POST http://localhost:8000/api/v1/notebooks/$NOTEBOOK/sources \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"type": "url", "url": "https://example.com/ai-article"}'
# 3. Web research (optional)
curl -X POST http://localhost:8000/api/v1/notebooks/$NOTEBOOK/sources/research \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"query": "latest AI trends 2026", "mode": "deep", "auto_import": true}'
# 4. Generate the podcast
ARTIFACT=$(curl -s -X POST http://localhost:8000/api/v1/notebooks/$NOTEBOOK/generate/audio \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"instructions": "Make it engaging", "format": "deep-dive", "length": "long"}' | jq -r '.data.artifact_id')
# 5. Wait for completion
curl -X POST http://localhost:8000/api/v1/notebooks/$NOTEBOOK/artifacts/$ARTIFACT/wait \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"timeout": 1200}'
# 6. Download
curl http://localhost:8000/api/v1/notebooks/$NOTEBOOK/artifacts/$ARTIFACT/download \
-H "X-API-Key: your-key" \
-o podcast.mp3
```
### Workflow 2: Document Analysis
```bash
# Create a notebook and upload a PDF
NOTEBOOK=$(curl -s -X POST http://localhost:8000/api/v1/notebooks \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"title": "Document Analysis"}' | jq -r '.data.id')
curl -X POST http://localhost:8000/api/v1/notebooks/$NOTEBOOK/sources \
-H "X-API-Key: your-key" \
-F "type=file" \
-F "file=@document.pdf"
# Query the content
curl -X POST http://localhost:8000/api/v1/notebooks/$NOTEBOOK/chat \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"message": "Summarize the key points"}'
# Generate a quiz
curl -X POST http://localhost:8000/api/v1/notebooks/$NOTEBOOK/generate/quiz \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"difficulty": "medium"}'
```
### Workflow 3: Webhook Integration
```bash
# 1. Register a webhook to receive notifications
curl -X POST http://localhost:8000/api/v1/webhooks \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{
"url": "https://my-agent.com/notebooklm-webhook",
"events": ["artifact.completed", "artifact.failed", "source.ready"],
"secret": "secure-webhook-secret"
}'
# 2. Start a long-running generation
curl -X POST http://localhost:8000/api/v1/notebooks/{id}/generate/audio \
-H "X-API-Key: your-key" \
-H "Content-Type: application/json" \
-d '{"instructions": "Create engaging podcast"}'
# 3. The webhook will receive:
# {
# "event": "artifact.completed",
# "timestamp": "2026-04-05T10:30:00Z",
# "data": {
# "notebook_id": "...",
# "artifact_id": "...",
# "type": "audio",
# "download_url": "..."
# }
# }
```
---
## Response Formats
### Success Response
```json
{
"success": true,
"data": {
"id": "abc123...",
"title": "My Notebook",
"created_at": "2026-04-05T10:30:00Z"
},
"meta": {
"timestamp": "2026-04-05T10:30:00Z",
"request_id": "req-uuid"
}
}
```
### Error Response
```json
{
"success": false,
"error": {
"code": "VALIDATION_ERROR",
"message": "Invalid notebook title",
"details": [
{"field": "title", "error": "Title must be at least 3 characters"}
]
},
"meta": {
"timestamp": "2026-04-05T10:30:00Z",
"request_id": "req-uuid"
}
}
```
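Both envelopes share the `success`/`meta` shape, so a client can unwrap them uniformly. A sketch under that assumption; the `APIError` exception is illustrative, not something the API defines:

```python
class APIError(Exception):
    """Raised when an API response envelope reports success=false."""
    def __init__(self, code, message, details=None):
        super().__init__(f"{code}: {message}")
        self.code = code
        self.details = details or []

def unwrap(envelope: dict) -> dict:
    """Return envelope['data'] on success; raise APIError on failure."""
    if envelope.get("success"):
        return envelope.get("data", {})
    err = envelope.get("error", {})
    raise APIError(err.get("code", "UNKNOWN"), err.get("message", ""), err.get("details"))
```

Centralizing the unwrap keeps error codes such as `RATE_LIMITED` in one place for the retry logic described below.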
---
## Webhook Events
### Event Types
| Event | Description | Payload |
|-------|-------------|---------|
| `notebook.created` | New notebook created | `{notebook_id, title}` |
| `source.added` | New source added | `{notebook_id, source_id, type}` |
| `source.ready` | Source indexed | `{notebook_id, source_id, title}` |
| `source.error` | Indexing error | `{notebook_id, source_id, error}` |
| `artifact.pending` | Generation started | `{notebook_id, artifact_id, type}` |
| `artifact.completed` | Generation finished | `{notebook_id, artifact_id, type}` |
| `artifact.failed` | Generation failed | `{notebook_id, artifact_id, error}` |
| `research.completed` | Research finished | `{notebook_id, sources_count}` |
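An agent consuming these events typically dispatches on the `event` field of the payload. A minimal sketch; the handler registry and its names are illustrative:

```python
def handle_event(payload: dict, handlers: dict):
    """Route a webhook payload to the handler registered for its event type."""
    handler = handlers.get(payload.get("event"))
    if handler is None:
        return None  # silently ignore unknown or unsubscribed event types
    return handler(payload.get("data", {}))
```

Ignoring unknown events keeps the receiver forward-compatible if new event types are added later.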
### Webhook Security
Webhook requests include an `X-Webhook-Signature` header carrying an HMAC-SHA256 signature of the payload:
```python
import hashlib
import hmac

signature = hmac.new(
    secret.encode(),
    payload.encode(),
    hashlib.sha256,
).hexdigest()
# Verify: signature == request.headers['X-Webhook-Signature']
```
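On the receiving side, the comparison should be constant-time rather than a plain `==`. A verification sketch using the documented header semantics; the helper itself is illustrative:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: str, received_signature: str) -> bool:
    """Check an X-Webhook-Signature value against the raw request body."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # hmac.compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, received_signature)
```

Use the raw request body bytes, not a re-serialized JSON object, since any re-serialization can change key order or whitespace and invalidate the signature.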
---
## Error Handling
### Error Codes
| Code | Description | Action |
|------|-------------|--------|
| `AUTH_ERROR` | Authentication failed | Check the API key |
| `NOTEBOOKLM_AUTH_ERROR` | NotebookLM session expired | Run `notebooklm login` |
| `VALIDATION_ERROR` | Invalid input data | Fix the payload |
| `NOT_FOUND` | Resource not found | Check the ID |
| `RATE_LIMITED` | NotebookLM rate limit hit | Wait and retry |
| `GENERATION_FAILED` | Generation failed | Check the sources, then retry |
| `TIMEOUT` | Operation timed out | Extend the timeout, retry |
### Retry Strategy
For operations that fail with `RATE_LIMITED`:
- Wait 5-10 minutes
- Retry with exponential backoff
- At most 3 retries
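The retry policy above (exponential backoff, at most 3 retries) can be expressed as a delay schedule. A sketch; the 300-second base roughly matches the "wait 5-10 minutes" guidance and is otherwise an assumption:

```python
def backoff_delays(base: float = 300.0, factor: float = 2.0, retries: int = 3) -> list:
    """Delays in seconds before each retry: base, base*factor, base*factor^2, ..."""
    return [base * factor**attempt for attempt in range(retries)]
```

A caller would sleep for each delay in turn, reissuing the request and stopping at the first non-`RATE_LIMITED` response.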
---
## Timing Guide
| Operation | Typical Time | Recommended Timeout |
|-----------|--------------|---------------------|
| Notebook creation | <1s | 30s |
| Adding a URL source | 10-60s | 120s |
| Adding a PDF | 30s-10min | 600s |
| Web research (fast) | 30s-2min | 180s |
| Web research (deep) | 15-30min | 1800s |
| Quiz generation | 5-15min | 900s |
| Audio generation | 10-20min | 1200s |
| Video generation | 15-45min | 2700s |
| Mind map | Instant | n/a |
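The recommended timeouts can be kept in a single lookup used when issuing requests. A transcription of the table above; the key names and the fallback default are illustrative:

```python
# Recommended client-side timeouts, in seconds (from the table above)
RECOMMENDED_TIMEOUT = {
    "create_notebook": 30,
    "add_url_source": 120,
    "add_pdf_source": 600,
    "research_fast": 180,
    "research_deep": 1800,
    "generate_quiz": 900,
    "generate_audio": 1200,
    "generate_video": 2700,
}

def timeout_for(operation: str, default: int = 120) -> int:
    """Look up a recommended timeout, falling back to a conservative default."""
    return RECOMMENDED_TIMEOUT.get(operation, default)
```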
---
## Best Practices
1. **Use webhooks for long-running operations** - don't block the agent in a polling loop
2. **Handle rate limits** - NotebookLM enforces aggressive limits
3. **Verify webhook signatures** - secures your endpoint
4. **Use full UUIDs** in automation - avoids ambiguity
5. **Isolate contexts for parallel agents** - use a profile or NOTEBOOKLM_HOME
---
## Troubleshooting
```bash
# Check API status
curl http://localhost:8000/health
# Check NotebookLM authentication
notebooklm auth check --test
# Verbose logging
LOG_LEVEL=DEBUG uv run fastapi dev src/notebooklm_agent/api/main.py
# List notebooks as a sanity check
curl http://localhost:8000/api/v1/notebooks -H "X-API-Key: your-key"
```
---
## LLM Providers
DocuMente supports multiple LLM providers, including local inference via **Ollama** and **LM Studio**.
### Cloud Providers
| Provider | API Key Required | Default Model |
|----------|------------------|---------------|
| **OpenAI** | ✅ `OPENAI_API_KEY` | gpt-4o-mini |
| **Anthropic** | ✅ `ANTHROPIC_API_KEY` | claude-3-sonnet |
| **Google** | ✅ `GOOGLE_API_KEY` | gemini-pro |
| **Mistral** | ✅ `MISTRAL_API_KEY` | mistral-medium |
| **Azure** | ✅ `AZURE_API_KEY` | gpt-4 |
| **OpenRouter** | ✅ `OPENROUTER_API_KEY` | openai/gpt-4o-mini |
| **Z.AI** | ✅ `ZAI_API_KEY` | zai-large |
| **OpenCode Zen** | ✅ `OPENCODE_ZEN_API_KEY` | zen-1 |
### Local Providers
| Provider | Default URL | Configuration |
|----------|-------------|----------------|
| **Ollama** | http://localhost:11434 | `OLLAMA_BASE_URL` |
| **LM Studio** | http://localhost:1234 | `LMSTUDIO_BASE_URL` |
#### Setup Ollama
```bash
# 1. Install Ollama
# macOS/Linux
curl -fsSL https://ollama.com/install.sh | sh
# 2. Pull a model
ollama pull llama3.2
ollama pull mistral
ollama pull qwen2.5
# 3. Start Ollama (in a separate terminal)
ollama serve
# 4. Verify it is running
curl http://localhost:11434/api/tags
```
**Usage with DocuMente:**
```bash
# Query via Ollama
curl -X POST http://localhost:8000/api/v1/query \
-H "Content-Type: application/json" \
-d '{
"question": "Explain artificial intelligence",
"provider": "ollama",
"model": "llama3.2"
}'
```
#### Setup LM Studio
```bash
# 1. Download LM Studio from https://lmstudio.ai/
# 2. Start LM Studio and load a model
# 3. Enable the local server (Settings > Local Server)
# Default URL: http://localhost:1234
# 4. Verify it is running
curl http://localhost:1234/v1/models
```
**Usage with DocuMente:**
```bash
# Query via LM Studio
curl -X POST http://localhost:8000/api/v1/query \
-H "Content-Type: application/json" \
-d '{
"question": "What are notebooks?",
"provider": "lmstudio",
"model": "local-model"
}'
```
#### Custom URL Configuration
To use Ollama/LM Studio on another machine on your network:
```env
# .env
OLLAMA_BASE_URL=http://192.168.1.100:11434
LMSTUDIO_BASE_URL=http://192.168.1.50:1234
```
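Resolving these URLs can follow the usual environment-with-default pattern. A sketch; the variable names match the configuration above, while the helper itself is illustrative of how the settings behave, not the project's actual code:

```python
import os

DEFAULT_BASE_URLS = {
    "ollama": "http://localhost:11434",
    "lmstudio": "http://localhost:1234",
}

def resolve_base_url(provider: str) -> str:
    """Prefer OLLAMA_BASE_URL / LMSTUDIO_BASE_URL, else fall back to the local default."""
    env_var = f"{provider.upper()}_BASE_URL"
    return os.environ.get(env_var, DEFAULT_BASE_URLS[provider])
```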
#### Advantages of Local Providers
- 🔒 **Privacy**: your data never leaves your machine or network
- 💰 **Free**: no per-call API costs
- ⚡ **Offline**: works without an internet connection
- 🔧 **Control**: you choose which models to run
#### Limitations
- Require adequate hardware (RAM; a GPU is recommended)
- Local models are generally less capable than GPT-4/Claude
- Slower responses on consumer hardware
---
**Skill Version:** 1.3.0
**API Version:** v1
**Last Updated:** 2026-04-06
---
## Changelog Sprint 1
### 2026-04-06 - Notebook Management CRUD
**Implemented:**
- ✅ `POST /api/v1/notebooks` - Create notebook
- ✅ `GET /api/v1/notebooks` - List notebooks with pagination
- ✅ `GET /api/v1/notebooks/{id}` - Get notebook by ID
- ✅ `PATCH /api/v1/notebooks/{id}` - Update notebook (partial)
- ✅ `DELETE /api/v1/notebooks/{id}` - Delete notebook
**Features:**
- Full CRUD operations for notebook management
- UUID validation for notebook IDs
- Pagination with limit/offset
- Sorting (created_at, updated_at, title)
- Error handling with standardized responses
- Comprehensive test coverage (97% services)
**Next Sprint:**
- Source management endpoints
- Chat functionality
- Content generation (audio, video, etc.)
- Webhook system
---
## Changelog Sprint 2
### 2026-04-06 - NotebookLM + RAG Integration
**Implemented:**
- ✅ `POST /api/v1/notebooklm/sync/{id}` - Sync notebook to RAG vector store
- ✅ `GET /api/v1/notebooklm/indexed` - List synced notebooks
- ✅ `DELETE /api/v1/notebooklm/sync/{id}` - Remove notebook from RAG
- ✅ `GET /api/v1/notebooklm/sync/{id}/status` - Check sync status
- ✅ `POST /api/v1/query/notebooks` - Query only notebook content
- ✅ Enhanced `POST /api/v1/query` - Filter by notebook_ids
**Features:**
- NotebookLMIndexerService for content extraction and indexing
- Vector store integration with Qdrant
- Metadata preservation (notebook_id, source_id, source_title)
- Multi-notebook queries
- Hybrid search (documents + notebooks)
- Support for all LLM providers in notebook queries
- Comprehensive test coverage (428 lines of tests)
**Architecture:**
- Service layer: NotebookLMIndexerService
- API routes: notebooklm_sync.py
- Enhanced RAGService with notebook filtering
- Extended VectorStoreService with filter support
**Documentation:**
- ✅ Updated README.md with integration overview
- ✅ Created docs/integration.md with full guide
- ✅ Updated SKILL.md with new capabilities
- ✅ API examples and best practices
---
## Changelog Sprint 3
### 2026-04-06 - Local LLM Providers (Ollama & LM Studio)
**Implemented:**
- ✅ `OllamaClient` - Support for Ollama local inference
- ✅ `LMStudioClient` - Support for LM Studio local inference
- ✅ Added `ollama` and `lmstudio` to `LLMProvider` enum
- ✅ Updated `LLMClientFactory` to create local provider clients
- ✅ Added configuration options `OLLAMA_BASE_URL` and `LMSTUDIO_BASE_URL`
- ✅ Local providers marked with `is_local: true` in provider list
**Features:**
- OpenAI-compatible API endpoints (/v1/chat/completions)
- Configurable base URLs for network deployments
- Longer timeouts (120s) for local inference
- No API key required for local providers
- Support for all Ollama models (llama3.2, mistral, qwen, etc.)
- Support for any model loaded in LM Studio
**Configuration:**
```env
# Optional: Custom URLs
OLLAMA_BASE_URL=http://localhost:11434
LMSTUDIO_BASE_URL=http://localhost:1234
```
**Usage:**
```bash
curl -X POST http://localhost:8000/api/v1/query \
-H "Content-Type: application/json" \
-d '{
"question": "Explain AI",
"provider": "ollama",
"model": "llama3.2"
}'
```
**Tests:**
- ✅ 250+ lines of tests for local providers
- ✅ Unit tests for OllamaClient and LMStudioClient
- ✅ Integration tests for factory creation
- ✅ Configuration tests