How to Run FREE AI Models in n8n Using Docker (Step-by-Step)
🔗 Video link: https://www.youtube.com/watch?v=YRLnTb4ce9k
🆔 Video ID: YRLnTb4ce9k
📅 Published: 2025-09-23T13:01:15Z
📺 Channel: Leon van Zyl
⏱️ Duration (ISO): PT6M13S
⏱️ Duration (formatted): 00:06:13
📊 Statistics:
– Views: 579
– Likes: 36
– Comments: 5
📝 Description:
Learn how to use Docker Model Runner with n8n instead of Ollama to run free, open-source AI models locally. This tutorial shows you how to set up Docker Desktop, download models like GPT-OSS, and integrate them with your n8n workflows using GPU acceleration. You'll discover how to configure the OpenAI-compatible API, set up embedding models for vector databases, and create a complete AI agent setup with Postgres database integration for knowledge base queries.
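
For readers who want a feel for what the integration boils down to, here is a minimal Python sketch of a chat request against Docker Model Runner's OpenAI-compatible API, the same style of endpoint the video points n8n at. It is an illustration, not the video's exact setup: the base URLs reflect Docker Model Runner's documented defaults, and the ai/gpt-oss model tag is an assumption based on the model mentioned above; substitute whatever "docker model list" reports on your machine.

from openai import OpenAI

# Sketch only (not the video's exact code): chat completion against Docker Model
# Runner's OpenAI-compatible API. Base URLs and the model tag are assumptions;
# adjust them to what your own Docker Desktop setup exposes.

# From the host, with TCP host access enabled in Docker Desktop's Model Runner settings:
BASE_URL = "http://localhost:12434/engines/v1"
# From inside another container (e.g. n8n itself running in Docker), use the internal
# hostname instead, which is the Docker-vs-local distinction covered at 3:58:
# BASE_URL = "http://model-runner.docker.internal/engines/v1"

client = OpenAI(base_url=BASE_URL, api_key="not-needed")  # the local runner ignores the key

response = client.chat.completions.create(
    model="ai/gpt-oss",  # assumed model tag; use the model you actually pulled
    messages=[{"role": "user", "content": "Say hello from a locally hosted model."}],
)
print(response.choices[0].message.content)

In n8n itself you wouldn't write this code; you point the OpenAI chat model node's credential at the same base URL, which is why the choice of URL above is the part that matters for the workflow.
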
⭐ Try n8n cloud for FREE:
https://n8n.partnerlinks.io/f7f19w3vrhin
🙏 SUPPORT THE CHANNEL:
☕ Buy me a coffee: https://www.buymeacoffee.com/leonvanzyl
💰 PayPal: https://www.paypal.com/ncp/payment/EKRQ8QSGV6CWW
👋 CONNECT:
🔔 Subscribe for weekly AI automation tutorials
🐦 Follow on Twitter: https://x.com/leonvz
⏰ TIMESTAMPS:
0:00 – Series recap: n8n with free models and Ollama setup
0:29 – Why use Docker Model Runner instead of Ollama
0:43 – Docker Model Runner benefits: GPU acceleration and easy integration
1:08 – Installing Docker Desktop and downloading models
1:37 – Accessing models through Docker Desktop interface
1:54 – Command line access to Docker models
2:20 – OpenAI-compatible API for external applications
2:52 – Enabling Docker Model Runner in settings
3:28 – Integrating Docker Model Runner with n8n
3:58 – URL configuration for Docker vs local n8n setup
4:38 – Testing the n8n integration with Docker models
5:03 – Setting up embedding models for the knowledge base (see the embedding sketch after the timestamps)
5:42 – Testing vector database queries with an Anthropic invoice example
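
The embedding step at 5:03 uses the same OpenAI-compatible endpoint, just with an embeddings call instead of chat. Below is a rough sketch under the same assumptions as the earlier snippet; ai/mxbai-embed-large is a placeholder tag, so use whichever embedding model you pulled for your knowledge base.

from openai import OpenAI

# Sketch only: generating an embedding through Docker Model Runner's OpenAI-compatible
# API, the kind of call n8n's embeddings node makes before storing vectors in Postgres.
client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed host-side default
    api_key="not-needed",
)

result = client.embeddings.create(
    model="ai/mxbai-embed-large",  # placeholder embedding model tag
    input=["What did the Anthropic invoice cover?"],
)
print(f"Vector length: {len(result.data[0].embedding)}")
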
#n8n #docker #ai