full upgrade to dev

commit 884d7f984b (parent e2c2585468)
2026-01-08 04:24:22 +01:00
15 changed files with 2371 additions and 369 deletions

docs/N8N_CHAT_SETUP.md (new file, 503 lines)

@@ -0,0 +1,503 @@
# n8n + Ollama Chat Setup Guide
This guide explains how to set up the chat feature on your portfolio website using n8n workflows and Ollama for AI responses.
## Overview
The chat system works as follows:
1. User sends a message via the chat widget on your website
2. Message is sent to your Next.js API route (`/api/n8n/chat`; a minimal proxy sketch follows this list)
3. API forwards the message to your n8n webhook
4. n8n processes the message and sends it to Ollama (local LLM)
5. Ollama generates a response
6. Response is returned through n8n back to the website
7. User sees the AI response
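For reference, here is a minimal sketch of what the `/api/n8n/chat` proxy route can look like (App Router style; the actual route in this repo may differ, and the `Authorization` header is only needed if you add Header Auth to the webhook later):
```javascript
// app/api/n8n/chat/route.js (hypothetical path) - forwards chat messages to the n8n webhook
export async function POST(request) {
  const { message, conversationId } = await request.json();

  // N8N_WEBHOOK_URL and N8N_SECRET_TOKEN come from .env (see Step 6)
  const res = await fetch(`${process.env.N8N_WEBHOOK_URL}/webhook/chat`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.N8N_SECRET_TOKEN ?? ''}`,
    },
    body: JSON.stringify({ message, conversationId }),
  });

  if (!res.ok) {
    return Response.json({ error: 'Chat service unavailable' }, { status: 502 });
  }

  // n8n responds with { reply, timestamp, success } (see Step 2.6)
  const data = await res.json();
  return Response.json(data);
}
```
Keeping the n8n URL and token server-side means they are never exposed to the browser.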
## Prerequisites
- ✅ n8n instance running (you have: https://n8n.dk0.dev)
- ✅ Ollama installed and running locally or on a server
- ✅ Environment variables configured in `.env`
## Step 1: Set Up Ollama
### Install Ollama
```bash
# macOS/Linux
curl -fsSL https://ollama.com/install.sh | sh
# Or download from https://ollama.com/download
```
### Pull a Model
```bash
# For general chat (recommended)
ollama pull llama3.2
# Or for faster responses (smaller model)
ollama pull llama3.2:1b
# Or for better quality (larger model; Llama 3.2 only ships 1B/3B text models, so use Llama 3.1 here)
ollama pull llama3.1:70b
```
### Run Ollama
```bash
# Start Ollama server
ollama serve
# Test it
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2",
"prompt": "Hello, who are you?",
"stream": false
}'
```
## Step 2: Create n8n Workflow
### 2.1 Create a New Workflow in n8n
1. Go to https://n8n.dk0.dev
2. Click "Create New Workflow"
3. Name it "Portfolio Chat Bot"
### 2.2 Add Webhook Trigger
1. Add a **Webhook** node (trigger)
2. Configure:
- **HTTP Method**: POST
- **Path**: `chat`
- **Authentication**: None (or Header Auth if you want to require a secret token)
- **Response Mode**: Using 'Respond to Webhook' Node (required for the Respond to Webhook node added in step 2.6)
Your webhook URL will be: `https://n8n.dk0.dev/webhook/chat`
### 2.3 Add Function Node (Message Processing)
Add a **Function** node to extract and format the message:
```javascript
// Extract the message from the webhook body
const userMessage = $json.body.message || $json.message;
// Get conversation context (if you want to maintain history)
const conversationId = $json.body.conversationId || 'default';
// Create context about Dennis
const systemPrompt = `You are a helpful AI assistant on Dennis Konkol's portfolio website.
About Dennis:
- Full-stack developer based in Osnabrück, Germany
- Student passionate about technology and self-hosting
- Skills: Next.js, React, Flutter, Docker, DevOps, TypeScript, Python
- Runs his own infrastructure with Docker Swarm and Traefik
- Projects include: Clarity (dyslexia app), self-hosted services, game servers
- Contact: contact@dk0.dev
- Website: https://dk0.dev
Be friendly, concise, and helpful. Answer questions about Dennis's skills, projects, or experience.
If asked about things unrelated to Dennis, politely redirect to his portfolio topics.`;
return {
json: {
userMessage,
conversationId,
systemPrompt,
timestamp: new Date().toISOString()
}
};
```
### 2.4 Add HTTP Request Node (Ollama)
Add an **HTTP Request** node to call Ollama:
**Configuration:**
- **Method**: POST
- **URL**: `http://localhost:11434/api/generate` (or your Ollama server URL)
- **Authentication**: None
- **Body Content Type**: JSON
- **Specify Body**: Using Fields Below
**Body (JSON):**
```json
{
"model": "llama3.2",
"prompt": "{{ $json.systemPrompt }}\n\nUser: {{ $json.userMessage }}\n\nAssistant:",
"stream": false,
"options": {
"temperature": 0.7,
"top_p": 0.9,
"max_tokens": 500
}
}
```
**Alternative: If Ollama is on a different server**
Replace `localhost` with your server IP/domain:
```
http://your-ollama-server:11434/api/generate
```
### 2.5 Add Function Node (Format Response)
Add another **Function** node to format the response:
```javascript
// Extract the response from Ollama
const ollamaResponse = $json.response || $json.text || '';
// Clean up the response
let reply = ollamaResponse.trim();
// Remove any system prompts that might leak through
reply = reply.replace(/^(System:|Assistant:|User:)/gi, '').trim();
// Limit length if too long
if (reply.length > 1000) {
reply = reply.substring(0, 1000) + '...';
}
return {
json: {
reply: reply,
timestamp: new Date().toISOString(),
model: 'llama3.2'
}
};
```
### 2.6 Add Respond to Webhook Node
Add a **Respond to Webhook** node:
**Configuration:**
- **Response Body**: JSON
- **Response Data**: Using Fields Below
**Body:**
```json
{
"reply": "={{ $json.reply }}",
"timestamp": "={{ $json.timestamp }}",
"success": true
}
```
### 2.7 Save and Activate
1. Click "Save" (top right)
2. Toggle "Active" switch to ON
3. Test the webhook:
```bash
curl -X POST https://n8n.dk0.dev/webhook/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hello, tell me about Dennis"}'
```
## Step 3: Advanced - Conversation Memory
To maintain conversation context across messages, add a **Redis** or **MongoDB** node:
### Option A: Using Redis (Recommended)
Use n8n's built-in **Redis** node instead of calling Redis from a Function node: add a **Redis (Get)** node before the Ollama call to load the stored conversation, and a **Redis (Set)** node (with an expiry of e.g. 3600 seconds) after the response is formatted to write it back. A **Function** node in between merges the new exchange into the history (assuming the Get node writes the stored value to a `conversation` property):
```javascript
// Build the Redis key for this user's conversation
const conversationKey = `chat:${$json.conversationId}`;

// Conversation loaded by the Redis Get node (empty on the first message)
const conversation = $json.conversation ? JSON.parse($json.conversation) : [];

// Append the new user/assistant exchange
conversation.push(
  { role: 'user', content: $json.userMessage },
  { role: 'assistant', content: $json.reply }
);

// Keep only the last 10 messages
const recentConversation = conversation.slice(-10);

// Hand key and value to the Redis Set node (configure its expiry to 3600 seconds there)
return {
  json: {
    key: conversationKey,
    value: JSON.stringify(recentConversation)
  }
};
```
### Option B: Using Session Storage (Simpler)
Store the conversation in n8n's workflow static data. Note that static data only persists for production executions of an active workflow, not for manual test runs:
```javascript
// Use n8n's static data for simple storage
const conversationKey = $json.conversationId;
const staticData = this.getWorkflowStaticData('global');
if (!staticData.conversations) {
staticData.conversations = {};
}
if (!staticData.conversations[conversationKey]) {
staticData.conversations[conversationKey] = [];
}
// Add message
staticData.conversations[conversationKey].push({
user: $json.userMessage,
assistant: $json.reply,
timestamp: new Date().toISOString()
});
// Keep only last 10
staticData.conversations[conversationKey] =
staticData.conversations[conversationKey].slice(-10);
```
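With either option, the stored history only affects answers if it is fed back into the prompt sent to Ollama. A minimal sketch for the Option B storage above (the extra `prompt` field and the node placement are assumptions, not part of the original workflow):
```javascript
// Assemble the prompt from saved turns so Ollama sees the earlier conversation.
// Assumed to run after the message-processing node and before the Ollama call.
const staticData = this.getWorkflowStaticData('global');
const history = (staticData.conversations || {})[$json.conversationId] || [];

// Flatten stored turns into a plain-text transcript
const transcript = history
  .map(turn => `User: ${turn.user}\nAssistant: ${turn.assistant}`)
  .join('\n');

// Combine system prompt, history, and the new message
const prompt = `${$json.systemPrompt}\n\n${transcript}\nUser: ${$json.userMessage}\n\nAssistant:`;

return { json: { ...$json, prompt } };
```
The Ollama HTTP Request node would then use `{{ $json.prompt }}` as its `prompt` value instead of concatenating the system prompt and message itself.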
## Step 4: Handle Multiple Users
The chat system automatically handles multiple users through:
1. **Session IDs**: Each user gets a unique `conversationId` generated client-side
2. **Stateless by default**: Each request is independent unless you add conversation memory
3. **Redis/Database**: Store conversations per user ID for persistent history
### Client-Side Session Management
The chat widget (created in next step) will generate a unique session ID:
```javascript
// Reuse the stored session ID if one exists, otherwise generate and persist a new one
let conversationId = localStorage.getItem('chatSessionId');
if (!conversationId) {
  conversationId = crypto.randomUUID();
  localStorage.setItem('chatSessionId', conversationId);
}
```
### Server-Side (n8n)
n8n processes each request independently. For multiple concurrent users:
- Each webhook call is a separate execution
- No shared state between users (unless you add it)
- Ollama can handle concurrent requests
- Use Redis for scalable conversation storage
## Step 5: Rate Limiting (Optional)
To prevent abuse, add rate limiting in n8n:
```javascript
// Add this as first function node
const ip = $json.headers['x-forwarded-for'] || $json.headers['x-real-ip'] || 'unknown';
const rateLimitKey = `ratelimit:${ip}`;
const staticData = this.getWorkflowStaticData('global');
if (!staticData.rateLimits) {
staticData.rateLimits = {};
}
const now = Date.now();
const limit = staticData.rateLimits[rateLimitKey] || { count: 0, resetAt: now + 60000 };
if (now > limit.resetAt) {
// Reset after 1 minute
limit.count = 0;
limit.resetAt = now + 60000;
}
if (limit.count >= 10) {
// Max 10 requests per minute per IP
throw new Error('Rate limit exceeded. Please wait a moment.');
}
limit.count++;
staticData.rateLimits[rateLimitKey] = limit;
```
## Step 6: Environment Variables
Update your `.env` file:
```bash
# n8n Configuration
N8N_WEBHOOK_URL=https://n8n.dk0.dev
N8N_SECRET_TOKEN=your-secret-token-here # Optional: for authentication
N8N_API_KEY=your-api-key-here # Optional: for API access
# Ollama Configuration (optional - stored in n8n workflow)
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
```
## Step 7: Test the Setup
```bash
# Test the chat endpoint
curl -X POST http://localhost:3000/api/n8n/chat \
-H "Content-Type: application/json" \
-d '{
"message": "What technologies does Dennis work with?"
}'
# Expected response:
{
"reply": "Dennis works with a variety of modern technologies including Next.js, React, Flutter for mobile development, Docker for containerization, and TypeScript. He's also experienced with DevOps practices, running his own infrastructure with Docker Swarm and Traefik as a reverse proxy."
}
```
## Troubleshooting
### Ollama Not Responding
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags
# If not, start it
ollama serve
# Check logs
journalctl -u ollama -f
```
### n8n Webhook Returns 404
- Make sure workflow is **Active** (toggle in top right)
- Check webhook path matches: `/webhook/chat`
- Test directly: `https://n8n.dk0.dev/webhook/chat`
### Slow Responses
- Use a smaller model: `ollama pull llama3.2:1b`
- Reduce `max_tokens` in Ollama request
- Add response caching for common questions
- Consider using streaming responses
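For the caching tip above, a minimal in-memory sketch that could live in the `/api/n8n/chat` route (the `getCachedReply`/`setCachedReply` helpers are hypothetical; use Redis instead if you run more than one instance):
```javascript
// Hypothetical in-memory cache for frequent questions (resets on server restart)
const cache = new Map();
const CACHE_TTL_MS = 10 * 60 * 1000; // keep answers for 10 minutes

export function getCachedReply(message) {
  const key = message.trim().toLowerCase();
  const hit = cache.get(key);
  if (hit && Date.now() - hit.storedAt < CACHE_TTL_MS) {
    return hit.reply; // serve the cached answer without calling n8n/Ollama
  }
  return null;
}

export function setCachedReply(message, reply) {
  cache.set(message.trim().toLowerCase(), { reply, storedAt: Date.now() });
}
```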
### CORS Issues
CORS only matters if the chat widget calls the n8n webhook directly from the browser; requests proxied through `/api/n8n/chat` are same-origin. If you do call the webhook directly, add CORS headers via the Respond to Webhook node's response headers option:
```json
{
"headers": {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type"
}
}
```
## Performance Tips
1. **Use GPU acceleration** for Ollama if available
2. **Cache common responses** in Redis
3. **Implement streaming** for real-time responses
4. **Use smaller models** for faster responses (llama3.2:1b)
5. **Add typing indicators** in the UI while waiting
## Security Considerations
1. **Add authentication** to n8n webhook (Bearer token)
2. **Implement rate limiting** (shown above)
3. **Sanitize user input** in the n8n function node (see the sketch after this list)
4. **Don't expose Ollama** directly to the internet
5. **Use HTTPS** for all communications
6. **Add CAPTCHA** to prevent bot abuse
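For the input-sanitization point, a minimal sketch that could run at the top of the first Function node (the 500-character cap is an arbitrary assumption):
```javascript
// Trim, strip control characters, and cap the length of the incoming message
let userMessage = String($json.body.message || $json.message || '');
userMessage = userMessage.replace(/[\u0000-\u001F\u007F]/g, ' ').trim();

if (!userMessage) {
  throw new Error('Empty message');
}
if (userMessage.length > 500) {
  userMessage = userMessage.slice(0, 500);
}

return { json: { ...$json, userMessage } };
```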
## Next Steps
1. ✅ Set up Ollama
2. ✅ Create n8n workflow
3. ✅ Test the API endpoint
4. 🔲 Create chat UI widget (see CHAT_WIDGET_SETUP.md)
5. 🔲 Add conversation memory
6. 🔲 Implement rate limiting
7. 🔲 Add analytics tracking
## Resources
- [Ollama Documentation](https://ollama.com/docs)
- [n8n Documentation](https://docs.n8n.io)
- [Llama 3.2 Model Card](https://ollama.com/library/llama3.2)
- [Next.js API Routes](https://nextjs.org/docs/api-routes/introduction)
## Example n8n Workflow JSON
Save this as `chat-workflow.json` and import into n8n:
```json
{
"name": "Portfolio Chat Bot",
"nodes": [
{
"parameters": {
"path": "chat",
"responseMode": "lastNode",
"options": {}
},
"name": "Webhook",
"type": "n8n-nodes-base.webhook",
"position": [250, 300],
"webhookId": "chat-webhook"
},
{
"parameters": {
"functionCode": "const userMessage = $json.body.message;\nconst systemPrompt = `You are a helpful AI assistant on Dennis Konkol's portfolio website.`;\nreturn { json: { userMessage, systemPrompt } };"
},
"name": "Process Message",
"type": "n8n-nodes-base.function",
"position": [450, 300]
},
{
"parameters": {
"method": "POST",
"url": "http://localhost:11434/api/generate",
"jsonParameters": true,
"options": {},
"bodyParametersJson": "={ \"model\": \"llama3.2\", \"prompt\": \"{{ $json.systemPrompt }}\\n\\nUser: {{ $json.userMessage }}\\n\\nAssistant:\", \"stream\": false }"
},
"name": "Call Ollama",
"type": "n8n-nodes-base.httpRequest",
"position": [650, 300]
},
{
"parameters": {
"functionCode": "const reply = $json.response || '';\nreturn { json: { reply: reply.trim() } };"
},
"name": "Format Response",
"type": "n8n-nodes-base.function",
"position": [850, 300]
},
{
"parameters": {
"respondWith": "json",
"options": {},
"responseBody": "={ \"reply\": \"{{ $json.reply }}\", \"success\": true }"
},
"name": "Respond to Webhook",
"type": "n8n-nodes-base.respondToWebhook",
"position": [1050, 300]
}
],
"connections": {
"Webhook": { "main": [[{ "node": "Process Message", "type": "main", "index": 0 }]] },
"Process Message": { "main": [[{ "node": "Call Ollama", "type": "main", "index": 0 }]] },
"Call Ollama": { "main": [[{ "node": "Format Response", "type": "main", "index": 0 }]] },
"Format Response": { "main": [[{ "node": "Respond to Webhook", "type": "main", "index": 0 }]] }
}
}
```
---
**Need help?** Check the troubleshooting section or reach out!