Generative AI

All the latest news and updates on the rapidly evolving Generative AI space: from cutting-edge research and developments in LLMs and text-to-image generators to real-world applications and the impact of generative AI on various industries.


How to Connect Open WebUI to an n8n AI Agent Workflow

A step-by-step guide to connecting Open WebUI to a custom AI Agent Workflow in n8n hosted in Docker and using Ollama on a Windows PC.

Jes Fink-Jensen
Published in Generative AI · 8 min read · Mar 22, 2025


In this article, I’ll show you how to set up Open WebUI and connect it to a Workflow in n8n.

Let’s continue from the previous article, in which we installed Ollama, Docker Desktop, n8n, and DeepSeek-R1. You can find the article here.

For this article, we’ll use Llama 3.1 instead of DeepSeek-R1 as our LLM. The reason is that Llama 3.1’s output is ‘cleaner’: it doesn’t include the ‘thought process’ that DeepSeek-R1 emits.

If you want to work with DeepSeek-R1 anyway, you’ll need to filter out this thought process, or at least process it so it displays nicely in a chat window. I won’t cover that here.

First, we’ll install Open WebUI in Docker and add some functionality to it so that it can communicate with n8n. Next, we’ll create a minimal Workflow in n8n that includes webhooks to allow connection from Open WebUI. Finally, we’ll update the settings in Open WebUI so that it will send and receive the chat data to and from the specific webhook we made in n8n.
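As a preview of the first step, the usual way to run Open WebUI in Docker looks roughly like the command below. This is a sketch based on Open WebUI’s documented Docker setup; the host port (3000) and volume name (open-webui) are common defaults you can change, and the --add-host flag lets the container reach services on your Windows host (such as Ollama) via host.docker.internal.

```shell
# Sketch: run Open WebUI in Docker (adjust port/volume names to taste).
# --add-host maps host.docker.internal to the host gateway so the
# container can reach Ollama and n8n running on the host machine.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

Once the container is up, Open WebUI should be reachable in your browser at http://localhost:3000.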

Of course, you can also chat directly with an LLM like Llama 3.1 or DeepSeek-R1 running on Ollama via Open WebUI. But here I’ll show you how to make Open WebUI communicate with n8n, so you can create more advanced Workflows yourself.

Prerequisites

As mentioned in the introduction, we’ll start with the setup described in a previous article. You can check it out here.

This setup includes the following:

  • Docker Desktop
  • Ollama
  • n8n running in Docker

We’ll also need the Llama 3.1 LLM. To download this model, open a command-line interface in Windows by right-clicking the desktop and selecting ‘Open in Terminal.’ Then run the following command:

ollama run llama3.1
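Once the download finishes, you can sanity-check that the model is available to Ollama. Both commands below are standard Ollama CLI/API calls: the first lists downloaded models, and the second queries Ollama’s local HTTP API (which listens on port 11434 by default) for the same information.

```shell
# List the models Ollama has downloaded; llama3.1 should appear.
ollama list

# Alternatively, ask the Ollama HTTP API directly (default port 11434).
curl http://localhost:11434/api/tags
```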

