{"workflow":{"id":13261,"name":"Multi-AI Council Research 🔍: GPT 5.2, Claude Opus 4.6 & Gemini 3 Pro Aggregation","views":287,"recentViews":1,"totalViews":287,"createdAt":"2026-02-08T18:02:24.570Z","description":"\nThis workflow implements a **multi-model AI orchestration** with the BEST models at now (**ChatGPT 5.2, Claude Opus 4.6, Gemini 3 Pro**) and response aggregation system designed to handle user chat inputs intelligently and reliably.\n\n---\n\n\n### Key Advantages\n\n#### 1. ✅ Higher Answer Quality\n\nBy combining multiple top-tier AI models, the workflow reduces blind spots and single-model bias, resulting in more accurate and nuanced answers.\n\n#### 2.✅  Built-in Reliability and Redundancy\n\nIf one model underperforms or misunderstands the query, the others compensate, improving robustness and consistency.\n\n#### 3. ✅ Intelligent Query Handling\n\nThe search classification and optimization layer ensures that:\n\n* research queries are handled with precision,\n* casual conversation is not over-processed,\n* model resources are used efficiently.\n\n#### 4. ✅ Balanced and Transparent Reasoning\n\nContradictions between models are not hidden. Instead, they are reconciled or clearly explained, increasing trust in the final output.\n\n#### 5. ✅ Scalability and Extensibility\n\nThe architecture makes it easy to:\n\n* add new models,\n* swap providers,\n* experiment with different aggregation strategies,\n  without redesigning the entire workflow.\n\n#### 6. ✅ Enterprise-Ready Design\n\nThis approach is well suited for:\n\n* research assistants,\n* decision-support tools,\n* knowledge management systems,\n* high-stakes professional use cases where answer quality matters more than speed alone.\n\n---\n\n### How it Works\n\n1. **Input Processing**: When a chat message is received, it's sent to a \"Search Query Optimizer\" that determines whether the input is a research query or general conversation. 
If it's a search query, it's optimized for better search results.\n\n2. **Multi-Model Query Execution**: If the input is classified as a research query, the workflow simultaneously sends the optimized query to three different AI models:\n   - ChatGPT 5.2 (OpenAI)\n   - Claude Opus 4.6 (Anthropic)\n   - Gemini 3 Pro (Google)\n\n3. **Response Aggregation**: Each model's response is collected separately, then all three responses are sent to a \"Multi-Response Aggregator\" which synthesizes them into a single comprehensive answer.\n\n4. **Fallback Handling**: If the input is not a research query, the workflow bypasses the multi-model execution and sends a default message asking the user to enter a research text.\n\n---\n\n\n### Set up Steps\n1. **Model Configuration**: Ensure you have valid API credentials set up for:\n   - OpenAI (for ChatGPT 5.2)\n   - Anthropic (for Claude Opus 4.6)\n   - Google Gemini (for both query optimization and Gemini 3 Pro)\n\n2. **Connection Verification**: Confirm all node connections are properly established in the workflow editor, particularly:\n   - Chat trigger to Search Query Optimizer\n   - Conditional branch routing based on query classification\n   - Parallel connections to the three AI models\n   - Response collection to the aggregator\n\n3. **Prompt Customization**: Review and adjust the system prompts in:\n   - Search Query Optimizer (for query classification rules)\n   - Multi-Response Aggregator (for synthesis guidelines)\n   - Each model's chain nodes (if specific formatting is required)\n\n4. **Testing**: Activate the workflow and test with various inputs to verify:\n   - Proper classification of research vs. non-research queries\n   - Simultaneous execution of all three AI models\n   - Correct aggregation of responses\n   - Appropriate fallback message for non-research inputs\n\n\n---\n\n👉 [Subscribe to my new **YouTube channel**](https://youtube.com/@n3witalia). 
Here I’ll share videos and Shorts with practical tutorials and **FREE templates for n8n**.\n\n[![image](https://n3wstorage.b-cdn.net/n3witalia/youtube-n8n-cover.jpg)](https://youtube.com/@n3witalia)\n\n\n---\n\n### **Need help customizing?**  \n[Contact me](mailto:info@n3w.it) for consulting and support or add me on [Linkedin](https://www.linkedin.com/in/davideboizza/).","workflow":{"id":"kmQtg1DenmcMryFvi9N5D","meta":{"instanceId":"a4bfc93e975ca233ac45ed7c9227d84cf5a2329310525917adaf3312e10d5462","templateCredsSetupCompleted":true},"name":"Multi-AI Council Research","tags":[],"nodes":[{"id":"4dd91365-c791-4252-8cd6-1f4e9ebcabd4","name":"When chat message received","type":"@n8n/n8n-nodes-langchain.chatTrigger","position":[-832,304],"webhookId":"20be4137-36b2-4d1e-b2d6-ae7bc2cb5026","parameters":{"options":{"responseMode":"responseNodes"}},"typeVersion":1.4},{"id":"d0ff1d3a-3108-4253-abd8-7b3b732d7a1d","name":"Chat","type":"@n8n/n8n-nodes-langchain.chat","position":[432,1040],"webhookId":"b3be76ab-1f12-4a06-81d6-b983f4a62408","parameters":{"message":"This is not a research text. 
Please enter one.","options":{"memoryConnection":false}},"typeVersion":1.1},{"id":"5af81c5e-1f52-4935-8573-0ce887d4ff9e","name":"Google Gemini Chat Model","type":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","position":[-432,496],"parameters":{"options":{}},"credentials":{"googlePalmApi":{"id":"AaNPKXAphyMzRgfA","name":"Google Gemini(PaLM) (Eure)"}},"typeVersion":1},{"id":"8b722808-c2c0-43b0-96ae-1ebb42ca18b6","name":"Structured Output Parser","type":"@n8n/n8n-nodes-langchain.outputParserStructured","position":[-224,496],"parameters":{"schemaType":"manual","inputSchema":"{\n\t\"type\": \"object\",\n\t\"properties\": {\n\t\t\"search\": {\n\t\t\t\"type\": \"string\"\n\t\t},\n\t\t\"query\": {\n\t\t\t\"type\": \"string\"\n\t\t}\n\t}\n}"},"typeVersion":1.3},{"id":"5c5f2305-ebd8-480d-9beb-8ae2854ab56f","name":"OpenAI Chat Model","type":"@n8n/n8n-nodes-langchain.lmChatOpenAi","position":[448,-208],"parameters":{"model":{"__rl":true,"mode":"list","value":"gpt-5.2-pro","cachedResultName":"gpt-5.2-pro"},"options":{},"builtInTools":{"webSearch":{"searchContextSize":"medium"}}},"credentials":{"openAiApi":{"id":"TefveNaDaMERl1hY","name":"OpenAi account (Eure)"}},"typeVersion":1.3},{"id":"70ae5af6-884b-49d1-ada9-8a48069b04c8","name":"ChatGPT 5.2","type":"@n8n/n8n-nodes-langchain.chainLlm","position":[416,-400],"parameters":{"text":"={{ $json.output.query }}","batching":{},"promptType":"define"},"typeVersion":1.9},{"id":"09593191-9431-4658-84fd-4db0dd14aaa2","name":"Claude Opus 4.6","type":"@n8n/n8n-nodes-langchain.chainLlm","position":[416,0],"parameters":{"text":"={{ $json.output.query }}","batching":{},"promptType":"define"},"typeVersion":1.9},{"id":"733d016c-8c21-4300-a547-eda75e06cc46","name":"Anthropic Chat Model","type":"@n8n/n8n-nodes-langchain.lmChatAnthropic","position":[448,176],"parameters":{"model":{"__rl":true,"mode":"list","value":"claude-opus-4-6","cachedResultName":"Claude Opus 
4.6"},"options":{"thinking":true,"thinkingBudget":1024}},"credentials":{"anthropicApi":{"id":"NNTZAD0Gmf7lcniq","name":"Anthropic account"}},"typeVersion":1.3},{"id":"fb07c440-97ce-4c61-b874-ab4460e40878","name":"Google Gemini Chat Model1","type":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","position":[432,560],"parameters":{"options":{"maxOutputTokens":2048},"modelName":"models/gemini-3-pro-preview"},"credentials":{"googlePalmApi":{"id":"AaNPKXAphyMzRgfA","name":"Google Gemini(PaLM) (Eure)"}},"typeVersion":1},{"id":"9afa4366-bc2f-4620-b9fe-ee07a21fdbdd","name":"Gemini 3 Pro","type":"@n8n/n8n-nodes-langchain.chainLlm","position":[432,400],"parameters":{"text":"={{ $json.output.query }}","batching":{},"promptType":"define"},"typeVersion":1.9},{"id":"2f4329a3-44ad-44e7-a359-37ea101cf7fc","name":"ChatGPT Result","type":"n8n-nodes-base.set","position":[768,-400],"parameters":{"options":{},"assignments":{"assignments":[{"id":"01d8253f-9878-428a-93d9-30b9bad4b6eb","name":"chatgpt","type":"string","value":"={{ $json.text }}"}]}},"typeVersion":3.4},{"id":"4960b6de-9f37-40a6-a3f5-9f2c07e81ce9","name":"Claude Result","type":"n8n-nodes-base.set","position":[768,0],"parameters":{"options":{},"assignments":{"assignments":[{"id":"01d8253f-9878-428a-93d9-30b9bad4b6eb","name":"claude","type":"string","value":"={{ $json.text }"}]}},"typeVersion":3.4},{"id":"5768118b-1126-4fb0-b162-c4f82abaaffb","name":"Gemini Result","type":"n8n-nodes-base.set","position":[768,400],"parameters":{"options":{},"assignments":{"assignments":[{"id":"01d8253f-9878-428a-93d9-30b9bad4b6eb","name":"gemini","type":"string","value":"={{ $json.text }}"}]}},"typeVersion":3.4},{"id":"a49e7108-99cb-4a69-8776-23ea713f60f9","name":"Google Gemini Chat Model2","type":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","position":[1264,192],"parameters":{"options":{}},"credentials":{"googlePalmApi":{"id":"0p34rXqIqy8WuoPg","name":"Google Gemini(PaLM) Api 
account"}},"typeVersion":1},{"id":"862e1bd6-181e-4b88-b20d-73d379edf982","name":"Multi-Response Aggregator","type":"@n8n/n8n-nodes-langchain.chainLlm","position":[1296,0],"parameters":{"text":"=Response 1: {{ $json.chatgpt }}\n\nResponse 2: {{ $json.claude }}\n\nResponse 3: {{ $json.gemini }}","batching":{},"messages":{"messageValues":[{"message":"You are an assistant specialized in intelligently aggregating responses from multiple sources or analyses.\n\n## Your Task\n\nYou will receive 3 separate responses that analyze the same question from different perspectives or using different methodologies. Your goal is to synthesize these responses into a single comprehensive, coherent, and well-structured answer.\n\n## Aggregation Guidelines\n\n### 1. Response Analysis\n- Identify common points across the 3 responses\n- Detect complementary information that integrates well together\n- Note any contradictions or divergences that require clarification\n\n### 2. Aggregated Response Structure\n- **Brief introduction**: Overview of the topic addressed\n- **Main body**: Integrate insights from all three sources, organizing by theme rather than by source\n- **Handling disagreements**: When responses conflict, present different perspectives transparently\n- **Conclusion**: Synthesize the key takeaways\n\n### 3. Quality Principles\n- **Completeness**: Include all relevant information from the 3 responses\n- **Coherence**: Create a fluid narrative, not a list of separate summaries\n- **Clarity**: Avoid redundancy while preserving nuance\n- **Attribution when relevant**: If sources have different confidence levels or expertise areas, note this\n- **Balanced perspective**: Don't favor one response unless there's clear reason to do so\n\n### 4. 
What to Avoid\n- Simply concatenating the three responses\n- Losing important details in over-summarization\n- Introducing information not present in the original responses\n- Creating false consensus when perspectives genuinely differ\n\n## Output Format\n\nProvide a single, unified response that reads as if written by one expert who has considered multiple analytical approaches. The reader should not be able to tell it came from 3 separate sources unless you explicitly note diverging viewpoints.\n\n---\n\n**Input Format Expected:**\n```\nResponse 1: [content]\nResponse 2: [content]\nResponse 3: [content]\n```\n\n**Your aggregated response should be comprehensive yet concise, professional, and directly useful to the end user.**"}]},"promptType":"define"},"typeVersion":1.9},{"id":"f2c333c9-c0bc-4e4b-8037-f19b8975e2ad","name":"Search Query Optimizer","type":"@n8n/n8n-nodes-langchain.chainLlm","position":[-400,304],"parameters":{"batching":{},"messages":{"messageValues":[{"message":"You are a specialized assistant that determines whether user input constitutes a search query or general conversational text, and optimizes search queries for AI agent interpretation.\n\n## Your Task\n\nAnalyze the user's input and:\n1. **Classify** whether it's a search query or general text\n2. **Optimize** the query if it's a search request\n3. 
**Return** a structured JSON response\n\n## Classification Criteria\n\n### It IS a Search Query when:\n- User explicitly asks to find, search, or look up information\n- Request implies information retrieval (e.g., \"What are the best...\", \"Show me...\", \"I need information about...\")\n- User asks about current events, facts, statistics, or data\n- Request requires external knowledge or recent information\n- Contains keywords like: \"search for\", \"find\", \"look up\", \"research\", \"what is\", \"how to\", \"where can I\"\n\n### It is NOT a Search Query when:\n- General conversation or greetings\n- Requests for explanations based on general knowledge\n- Creative writing requests\n- Coding or problem-solving tasks that don't require external data\n- Personal opinions or discussions\n- Task execution (e.g., \"write an email\", \"create a script\")\n\n## Query Optimization Process\n\nWhen you identify a search query, optimize it by:\n- **Removing conversational filler** (\"Can you please...\", \"I was wondering if...\")\n- **Extracting core keywords** and intent\n- **Adding specificity** if context allows\n- **Structuring for clarity** (e.g., \"best practices for X\" instead of \"what are some good ways to do X\")\n- **Preserving key constraints** (dates, locations, specific requirements)\n- **Making it concise** while retaining all important information\n\n## Output Format\n\nAlways respond with a JSON object in this exact format:\n\n```json\n{\n  \"search\": \"yes\",\n  \"query\": \"your optimized search query here\",\n  \"original_intent\": \"brief description of what user wants to 
find\"\n}\n```"}]},"hasOutputParser":true},"typeVersion":1.9},{"id":"11ab9c88-f235-4c80-8bce-85ebe4d3abca","name":"Search?","type":"n8n-nodes-base.if","position":[-48,304],"parameters":{"options":{},"conditions":{"options":{"version":3,"leftValue":"","caseSensitive":true,"typeValidation":"strict"},"combinator":"and","conditions":[{"id":"48940d10-fde9-4d87-97aa-32e61492194a","operator":{"type":"string","operation":"equals"},"leftValue":"={{ $json.output.search }}","rightValue":"yes"}]}},"typeVersion":2.3},{"id":"0a6a312d-2572-4176-8dc7-0db12a8c34b7","name":"Sticky Note","type":"n8n-nodes-base.stickyNote","position":[-928,-336],"parameters":{"width":1024,"height":464,"content":"## Multi-AI Council Research: GPT 5.2, Claude Opus 4.6 & Gemini 3 Pro Aggregation\nThis workflow implements a **multi-model AI orchestration** with the BEST models at now (**ChatGPT 5.2, Claude Opus 4.6, Gemini 3 Pro**) and response aggregation system designed to handle user chat inputs intelligently and reliably.\n\n### **How it works**\n\nThe workflow processes incoming chat messages through a Search Query Optimizer that classifies inputs as research queries or general conversation and optimizes research queries for accuracy. Research queries are then executed in parallel across three AI models—ChatGPT 5.2, Claude Opus 4.6, and Gemini 3 Pro—to reduce single-model bias and improve answer quality. Each model’s response is collected independently and passed to a Multi-Response Aggregator, which synthesizes them into one coherent, balanced output while reconciling or explaining contradictions. Non-research inputs bypass the multi-model flow and trigger a fallback message, ensuring efficient resource usage and predictable behavior.\n\n### **Setup steps**\n\nConfigure valid API credentials for OpenAI, Anthropic, and Google Gemini to enable all model nodes and query optimization. 
Verify workflow connections, including chat input to the Search Query Optimizer, conditional routing based on classification, parallel execution to the three models, and aggregation into the response synthesizer. Customize system prompts for the optimizer, aggregator, and individual model nodes to define classification logic, synthesis rules, and output formatting. Activate the workflow and test with multiple input types to confirm correct classification, parallel execution, aggregation behavior, and fallback handling for non-research queries.\n"},"typeVersion":1},{"id":"8fcb79e1-54af-46a2-960a-41e1f4be3a6f","name":"Sticky Note1","type":"n8n-nodes-base.stickyNote","position":[-512,160],"parameters":{"color":7,"width":608,"height":496,"content":"## STEP 1 - Input Processing\nWhen a chat message is received, it's sent to a \"Search Query Optimizer\" that determines whether the input is a research query or general conversation. "},"typeVersion":1},{"id":"0bf9532d-5a01-4868-b383-a6b6562ce231","name":"Sticky Note2","type":"n8n-nodes-base.stickyNote","position":[336,-528],"parameters":{"color":7,"width":704,"height":1360,"content":"## STEP 2 - Multi-Model Query Execution\nIf the input is classified as a research query, the workflow simultaneously sends the optimized query to three different AI models: ChatGPT 5.2 (OpenAI), Claude Opus 4.6 (Anthropic), Gemini 3 Pro (Google)"},"typeVersion":1},{"id":"3d176ee1-69d7-4793-be08-de7491bbc554","name":"Sticky Note3","type":"n8n-nodes-base.stickyNote","position":[336,864],"parameters":{"color":7,"width":704,"height":352,"content":"## STEP 3 - Fallback Handling\nIf the input is not a research query, the workflow bypasses the multi-model execution and sends a default message asking the user to enter a research text."},"typeVersion":1},{"id":"159ef060-a881-44c0-8145-d267808a7368","name":"Sticky Note4","type":"n8n-nodes-base.stickyNote","position":[1184,-112],"parameters":{"color":7,"width":704,"height":464,"content":"## STEP 4 - 
Response Aggregation\nEach model's response is collected separately, then all three responses are sent to a \"Multi-Response Aggregator\" which synthesizes them into a single comprehensive answer."},"typeVersion":1},{"id":"743dc0b7-e2c7-49b4-8f29-32ff130ec7ab","name":"Sticky Note8","type":"n8n-nodes-base.stickyNote","position":[-1696,-608],"parameters":{"color":7,"width":736,"height":736,"content":"## MY NEW YOUTUBE CHANNEL\n👉 [Subscribe to my new **YouTube channel**](https://youtube.com/@n3witalia). Here I’ll share videos and Shorts with practical tutorials and **FREE templates for n8n**.\n\n[![image](https://n3wstorage.b-cdn.net/n3witalia/youtube-n8n-cover.jpg)](https://youtube.com/@n3witalia)"},"typeVersion":1}],"active":false,"pinData":{},"settings":{"binaryMode":"separate","availableInMCP":false,"executionOrder":"v1"},"versionId":"8ed24e81-3b3a-4962-a3a9-74aa5d5408b9","connections":{"Search?":{"main":[[{"node":"ChatGPT 5.2","type":"main","index":0},{"node":"Claude Opus 4.6","type":"main","index":0},{"node":"Gemini 3 Pro","type":"main","index":0}],[{"node":"Chat","type":"main","index":0}]]},"ChatGPT 5.2":{"main":[[{"node":"ChatGPT Result","type":"main","index":0}]]},"Gemini 3 Pro":{"main":[[{"node":"Gemini Result","type":"main","index":0}]]},"Claude Result":{"main":[[{"node":"Multi-Response Aggregator","type":"main","index":0}]]},"Gemini Result":{"main":[[{"node":"Multi-Response Aggregator","type":"main","index":0}]]},"ChatGPT Result":{"main":[[{"node":"Multi-Response Aggregator","type":"main","index":0}]]},"Claude Opus 4.6":{"main":[[{"node":"Claude Result","type":"main","index":0}]]},"OpenAI Chat Model":{"ai_languageModel":[[{"node":"ChatGPT 5.2","type":"ai_languageModel","index":0}]]},"Anthropic Chat Model":{"ai_languageModel":[[{"node":"Claude Opus 4.6","type":"ai_languageModel","index":0}]]},"Search Query Optimizer":{"main":[[{"node":"Search?","type":"main","index":0}]]},"Google Gemini Chat Model":{"ai_languageModel":[[{"node":"Search Query 
Optimizer","type":"ai_languageModel","index":0}]]},"Structured Output Parser":{"ai_outputParser":[[{"node":"Search Query Optimizer","type":"ai_outputParser","index":0}]]},"Google Gemini Chat Model1":{"ai_languageModel":[[{"node":"Gemini 3 Pro","type":"ai_languageModel","index":0}]]},"Google Gemini Chat Model2":{"ai_languageModel":[[{"node":"Multi-Response Aggregator","type":"ai_languageModel","index":0}]]},"When chat message received":{"main":[[{"node":"Search Query Optimizer","type":"main","index":0}]]}}},"lastUpdatedBy":29,"workflowInfo":{"nodeCount":23,"nodeTypes":{"n8n-nodes-base.if":{"count":1},"n8n-nodes-base.set":{"count":3},"n8n-nodes-base.stickyNote":{"count":6},"@n8n/n8n-nodes-langchain.chat":{"count":1},"@n8n/n8n-nodes-langchain.chainLlm":{"count":5},"@n8n/n8n-nodes-langchain.chatTrigger":{"count":1},"@n8n/n8n-nodes-langchain.lmChatOpenAi":{"count":1},"@n8n/n8n-nodes-langchain.lmChatAnthropic":{"count":1},"@n8n/n8n-nodes-langchain.lmChatGoogleGemini":{"count":3},"@n8n/n8n-nodes-langchain.outputParserStructured":{"count":1}}},"status":"published","readyToDemo":null,"user":{"name":"Davide Boizza","username":"n3witalia","bio":"Full-stack Web Developer based in Italy specialising in Marketing & AI-powered automations. For business enquiries, send me an email at info@n3w.it or add me on Linkedin.com/in/davideboizza and Youtube.com/@n3witalia","verified":true,"links":["https://n3w.it"],"avatar":"https://gravatar.com/avatar/d41b8a0aa81139243509c58870f5b4be292824a507ab57d10ed066d8628ed8da?r=pg&d=retro&size=200"},"nodes":[{"id":20,"icon":"fa:map-signs","name":"n8n-nodes-base.if","codex":{"data":{"alias":["Router","Filter","Condition","Logic","Boolean","Branch"],"details":"The IF node can be used to implement binary conditional logic in your workflow. You can set up one-to-many conditions to evaluate each item of data being inputted into the node. 
That data will either evaluate to TRUE or FALSE and route out of the node accordingly.\n\nThis node has multiple types of conditions: Bool, String, Number, and Date & Time.","resources":{"generic":[{"url":"https://n8n.io/blog/learn-to-automate-your-factorys-incident-reporting-a-step-by-step-guide/","icon":"🏭","label":"Learn to Automate Your Factory's Incident Reporting: A Step by Step Guide"},{"url":"https://n8n.io/blog/2021-the-year-to-automate-the-new-you-with-n8n/","icon":"☀️","label":"2021: The Year to Automate the New You with n8n"},{"url":"https://n8n.io/blog/why-business-process-automation-with-n8n-can-change-your-daily-life/","icon":"🧬","label":"Why business process automation with n8n can change your daily life"},{"url":"https://n8n.io/blog/create-a-toxic-language-detector-for-telegram/","icon":"🤬","label":"Create a toxic language detector for Telegram in 4 step"},{"url":"https://n8n.io/blog/no-code-ecommerce-workflow-automations/","icon":"store","label":"6 e-commerce workflows to power up your Shopify s"},{"url":"https://n8n.io/blog/how-to-build-a-low-code-self-hosted-url-shortener/","icon":"🔗","label":"How to build a low-code, self-hosted URL shortener in 3 steps"},{"url":"https://n8n.io/blog/automate-your-data-processing-pipeline-in-9-steps-with-n8n/","icon":"⚙️","label":"Automate your data processing pipeline in 9 steps"},{"url":"https://n8n.io/blog/how-to-get-started-with-crm-automation-and-no-code-workflow-ideas/","icon":"👥","label":"How to get started with CRM automation (with 3 no-code workflow ideas"},{"url":"https://n8n.io/blog/5-tasks-you-can-automate-with-notion-api/","icon":"⚡️","label":"5 tasks you can automate with the new Notion API "},{"url":"https://n8n.io/blog/automate-google-apps-for-productivity/","icon":"💡","label":"15 Google apps you can combine and automate to increase productivity"},{"url":"https://n8n.io/blog/automation-for-maintainers-of-open-source-projects/","icon":"🏷️","label":"How to automatically manage contributions to 
open-source projects"},{"url":"https://n8n.io/blog/how-uproc-scraped-a-multi-page-website-with-a-low-code-workflow/","icon":" 🕸️","label":"How uProc scraped a multi-page website with a low-code workflow"},{"url":"https://n8n.io/blog/5-workflow-automations-for-mattermost-that-we-love-at-n8n/","icon":"🤖","label":"5 workflow automations for Mattermost that we love at n8n"},{"url":"https://n8n.io/blog/why-this-product-manager-loves-workflow-automation-with-n8n/","icon":"🧠","label":"Why this Product Manager loves workflow automation with n8n"},{"url":"https://n8n.io/blog/sending-automated-congratulations-with-google-sheets-twilio-and-n8n/","icon":"🙌","label":"Sending Automated Congratulations with Google Sheets, Twilio, and n8n "},{"url":"https://n8n.io/blog/how-to-set-up-a-ci-cd-pipeline-with-no-code/","icon":"🎡","label":"How to set up a no-code CI/CD pipeline with GitHub and TravisCI"},{"url":"https://n8n.io/blog/benefits-of-automation-and-n8n-an-interview-with-hubspots-hugh-durkin/","icon":"🎖","label":"Benefits of automation and n8n: An interview with HubSpot's Hugh Durkin"},{"url":"https://n8n.io/blog/aws-workflow-automation/","label":"7 no-code workflow automations for Amazon Web Services"}],"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.if/"}]},"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Flow"]}}},"group":"[\"transform\"]","defaults":{"name":"If","color":"#408000"},"iconData":{"icon":"map-signs","type":"icon"},"displayName":"If","typeVersion":2,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":38,"icon":"fa:pen","name":"n8n-nodes-base.set","codex":{"data":{"alias":["Set","JS","JSON","Filter","Transform","Map"],"resources":{"generic":[{"url":"https://n8n.io/blog/learn-to-automate-your-factorys-incident-reporting-a-step-by-step-guide/","icon":"🏭","label":"Learn to Automate Your Factory's Incident Reporting: A Step by Step 
Guide"},{"url":"https://n8n.io/blog/2021-the-year-to-automate-the-new-you-with-n8n/","icon":"☀️","label":"2021: The Year to Automate the New You with n8n"},{"url":"https://n8n.io/blog/automatically-pulling-and-visualizing-data-with-n8n/","icon":"📈","label":"Automatically pulling and visualizing data with n8n"},{"url":"https://n8n.io/blog/database-monitoring-and-alerting-with-n8n/","icon":"📡","label":"Database Monitoring and Alerting with n8n"},{"url":"https://n8n.io/blog/automatically-adding-expense-receipts-to-google-sheets-with-telegram-mindee-twilio-and-n8n/","icon":"🧾","label":"Automatically Adding Expense Receipts to Google Sheets with Telegram, Mindee, Twilio, and n8n"},{"url":"https://n8n.io/blog/no-code-ecommerce-workflow-automations/","icon":"store","label":"6 e-commerce workflows to power up your Shopify s"},{"url":"https://n8n.io/blog/how-to-build-a-low-code-self-hosted-url-shortener/","icon":"🔗","label":"How to build a low-code, self-hosted URL shortener in 3 steps"},{"url":"https://n8n.io/blog/automate-your-data-processing-pipeline-in-9-steps-with-n8n/","icon":"⚙️","label":"Automate your data processing pipeline in 9 steps"},{"url":"https://n8n.io/blog/how-to-get-started-with-crm-automation-and-no-code-workflow-ideas/","icon":"👥","label":"How to get started with CRM automation (with 3 no-code workflow ideas"},{"url":"https://n8n.io/blog/5-tasks-you-can-automate-with-notion-api/","icon":"⚡️","label":"5 tasks you can automate with the new Notion API "},{"url":"https://n8n.io/blog/automate-google-apps-for-productivity/","icon":"💡","label":"15 Google apps you can combine and automate to increase productivity"},{"url":"https://n8n.io/blog/how-uproc-scraped-a-multi-page-website-with-a-low-code-workflow/","icon":" 🕸️","label":"How uProc scraped a multi-page website with a low-code workflow"},{"url":"https://n8n.io/blog/building-an-expense-tracking-app-in-10-minutes/","icon":"📱","label":"Building an expense tracking app in 10 
minutes"},{"url":"https://n8n.io/blog/the-ultimate-guide-to-automate-your-video-collaboration-with-whereby-mattermost-and-n8n/","icon":"📹","label":"The ultimate guide to automate your video collaboration with Whereby, Mattermost, and n8n"},{"url":"https://n8n.io/blog/5-workflow-automations-for-mattermost-that-we-love-at-n8n/","icon":"🤖","label":"5 workflow automations for Mattermost that we love at n8n"},{"url":"https://n8n.io/blog/learn-to-build-powerful-api-endpoints-using-webhooks/","icon":"🧰","label":"Learn to Build Powerful API Endpoints Using Webhooks"},{"url":"https://n8n.io/blog/how-a-membership-development-manager-automates-his-work-and-investments/","icon":"📈","label":"How a Membership Development Manager automates his work and investments"},{"url":"https://n8n.io/blog/a-low-code-bitcoin-ticker-built-with-questdb-and-n8n-io/","icon":"📈","label":"A low-code bitcoin ticker built with QuestDB and n8n.io"},{"url":"https://n8n.io/blog/how-to-set-up-a-ci-cd-pipeline-with-no-code/","icon":"🎡","label":"How to set up a no-code CI/CD pipeline with GitHub and TravisCI"},{"url":"https://n8n.io/blog/benefits-of-automation-and-n8n-an-interview-with-hubspots-hugh-durkin/","icon":"🎖","label":"Benefits of automation and n8n: An interview with HubSpot's Hugh Durkin"},{"url":"https://n8n.io/blog/how-goomer-automated-their-operations-with-over-200-n8n-workflows/","icon":"🛵","label":"How Goomer automated their operations with over 200 n8n workflows"},{"url":"https://n8n.io/blog/aws-workflow-automation/","label":"7 no-code workflow automations for Amazon Web Services"}],"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.set/"}]},"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Data Transformation"]}}},"group":"[\"input\"]","defaults":{"name":"Edit Fields"},"iconData":{"icon":"pen","type":"icon"},"displayName":"Edit Fields 
(Set)","typeVersion":3,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":565,"icon":"fa:sticky-note","name":"n8n-nodes-base.stickyNote","codex":{"data":{"alias":["Comments","Notes","Sticky"],"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Helpers"]}}},"group":"[\"input\"]","defaults":{"name":"Sticky Note","color":"#FFD233"},"iconData":{"icon":"sticky-note","type":"icon"},"displayName":"Sticky Note","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":1123,"icon":"fa:link","name":"@n8n/n8n-nodes-langchain.chainLlm","codex":{"data":{"alias":["LangChain"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainllm/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Chains","Root Nodes"]}}},"group":"[\"transform\"]","defaults":{"name":"Basic LLM Chain","color":"#909298"},"iconData":{"icon":"link","type":"icon"},"displayName":"Basic LLM Chain","typeVersion":2,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1145,"icon":"file:anthropic.svg","name":"@n8n/n8n-nodes-langchain.lmChatAnthropic","codex":{"data":{"alias":["claude","sonnet","opus"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatanthropic/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"Anthropic Chat 
Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI0NiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSI+PHBhdGggZmlsbD0iIzdEN0Q4NyIgZD0iTTMyLjczIDBoLTYuOTQ1TDM4LjQ1IDMyaDYuOTQ1ek0xMi42NjUgMCAwIDMyaDcuMDgybDIuNTktNi43MmgxMy4yNWwyLjU5IDYuNzJoNy4wODJMMTkuOTI5IDB6bS0uNzAyIDE5LjMzNyA0LjMzNC0xMS4yNDYgNC4zMzQgMTEuMjQ2eiIvPjwvc3ZnPg=="},"displayName":"Anthropic Chat Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1153,"icon":"file:openAiLight.svg","name":"@n8n/n8n-nodes-langchain.lmChatOpenAi","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"OpenAI Chat Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAiIGhlaWdodD0iNDAiIHZpZXdCb3g9IjAgMCA0MCA0MCIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPHBhdGggZD0iTTM2Ljg2NzEgMTYuMzcxOEMzNy43NzQ2IDEzLjY0OCAzNy40NjIxIDEwLjY2NDIgMzYuMDEwOCA4LjE4NjYxQzMzLjgyODIgNC4zODY1MyAyOS40NDA3IDIuNDMxNDkgMjUuMTU1NiAzLjM1MTUxQzIzLjI0OTMgMS4yMDM5NiAyMC41MTA1IC0wLjAxNzMxNDggMTcuNjM5MiAwLjAwMDE4NTUzM0MxMy4yNTkxIC0wLjAwOTgxNDY4IDkuMzcyNzMgMi44MTAyNSA4LjAyNTIgNi45Nzc4M0M1LjIxMTM5IDcuNTU0MSAyLjc4MjU4IDkuMzE1MzggMS4zNjEzIDExLjgxMTdDLTAuODM3NDkzIDE1LjYwMTggLTAuMzM2MjMyIDIwLjM3OTQgMi42MDEzMyAyMy42Mjk0QzEuNjkzODEgMjYuMzUzMiAyLjAwNjMyIDI5LjMzNzEgMy40NTc2IDMxLjgxNDZDNS42NDAxNSAzNS42MTQ3IDEwLjAyNzcgMzcuNTY5NyAxNC4zMTI4IDM2LjY0OTdDMTYuMjE3OSAzOC43OTczIDE4Ljk1NzkgNDAuMDE4NSAyMS44MjkyIDM5Ljk5OThDMjYuMjExOCA0MC4wMTEgMzAuMDk5NCAzNy4xODg1IDMxLjQ0NjkgMzMuMDE3MUMzNC4yNjA4IDMyLjQ0MDkgMzYuNjg5NiAzMC42Nzk2IDM4LjExMDggMjguMTgzM0M0MC4zMDcxIDI0LjM5MzIgMzkuODA0NiAxOS42MTk0IDM2Ljg2ODMgMTYuMzY5M0wzNi44Njcx
IDE2LjM3MThaTTIxLjgzMTcgMzcuMzg2QzIwLjA3OCAzNy4zODg1IDE4LjM3OTIgMzYuNzc0NyAxNy4wMzI5IDM1LjY1MDlDMTcuMDk0MSAzNS42MTg0IDE3LjIwMDQgMzUuNTU5NyAxNy4yNjkxIDM1LjUxNzJMMjUuMjM0MyAzMC45MTcxQzI1LjY0MTggMzAuNjg1OCAyNS44OTE4IDMwLjI1MjEgMjUuODg5MyAyOS43ODMzVjE4LjU1NDNMMjkuMjU1NyAyMC40OTgxQzI5LjI5MTkgMjAuNTE1NiAyOS4zMTU3IDIwLjU1MDYgMjkuMzIwNyAyMC41OTA2VjI5Ljg4OTZDMjkuMzE1NyAzNC4wMjQ3IDI1Ljk2NjggMzcuMzc3MiAyMS44MzE3IDM3LjM4NlpNNS43MjY0IDMwLjUwNzFDNC44NDc2MyAyOC45ODk2IDQuNTMxMzcgMjcuMjEwOCA0LjgzMjYzIDI1LjQ4NDVDNC44OTEzOCAyNS41MTk1IDQuOTk1MTMgMjUuNTgzMiA1LjA2ODg4IDI1LjYyNTdMMTMuMDM0MSAzMC4yMjU4QzEzLjQzNzggMzAuNDYyMSAxMy45Mzc4IDMwLjQ2MjEgMTQuMzQyOCAzMC4yMjU4TDI0LjA2NjggMjQuNjEwN1YyOC40OTgzQzI0LjA2OTMgMjguNTM4MyAyNC4wNTA1IDI4LjU3NyAyNC4wMTkzIDI4LjYwMkwxNS45Njc5IDMzLjI1MDlDMTIuMzgxNSAzNS4zMTU5IDcuODAxNDQgMzQuMDg4NCA1LjcyNzY1IDMwLjUwNzFINS43MjY0Wk0zLjYzMDEgMTMuMTIwNUM0LjUwNTEyIDExLjYwMDQgNS44ODY0IDEwLjQzNzkgNy41MzE0NCA5LjgzNDE1QzcuNTMxNDQgOS45MDI5IDcuNTI3NjkgMTAuMDI0MiA3LjUyNzY5IDEwLjEwOTJWMTkuMzEwNkM3LjUyNTE5IDE5Ljc3ODEgNy43NzUxOSAyMC4yMTE5IDguMTgxNDUgMjAuNDQzMUwxNy45MDU0IDI2LjA1N0wxNC41MzkxIDI4LjAwMDhDMTQuNTA1MyAyOC4wMjMzIDE0LjQ2MjggMjguMDI3IDE0LjQyNTMgMjguMDEwOEw2LjM3MjY2IDIzLjM1ODJDMi43OTM4MyAyMS4yODU2IDEuNTY2MzEgMTYuNzA2OCAzLjYyODg1IDEzLjEyMTdMMy42MzAxIDEzLjEyMDVaTTMxLjI4ODIgMTkuNTU2OUwyMS41NjQyIDEzLjk0MTdMMjQuOTMwNiAxMS45OTkyQzI0Ljk2NDMgMTEuOTc2NyAyNS4wMDY4IDExLjk3MjkgMjUuMDQ0MyAxMS45ODkyTDMzLjA5NyAxNi42MzhDMzYuNjgyMSAxOC43MDkzIDM3LjkxMDggMjMuMjk1NyAzNS44Mzk1IDI2Ljg4MDhDMzQuOTYzMyAyOC4zOTgzIDMzLjU4MzIgMjkuNTYwOCAzMS45Mzk1IDMwLjE2NThWMjAuNjg5NEMzMS45NDMyIDIwLjIyMTkgMzEuNjk0NSAxOS43ODk0IDMxLjI4OTQgMTkuNTU2OUgzMS4yODgyWk0zNC42MzgzIDE0LjUxNDJDMzQuNTc5NSAxNC40NzggMzQuNDc1OCAxNC40MTU1IDM0LjQwMiAxNC4zNzNMMjYuNDM2OCA5Ljc3Mjg5QzI2LjAzMzEgOS41MzY2NCAyNS41MzMxIDkuNTM2NjQgMjUuMTI4MSA5Ljc3Mjg5TDE1LjQwNDEgMTUuMzg4VjExLjUwMDRDMTUuNDAxNiAxMS40NjA0IDE1LjQyMDQgMTEuNDIxNyAxNS40NTE2IDExLjM5NjdMMjMuNTAzIDYuNzUxNThDMjcuMDg5NCA0LjY4Mjc5IDMxLjY3NDUgNS45MTQwNiAzMy43NDIgOS41MDE2NEMzNC42MTU4IDExLjAxNjcgMzQu
OTMyIDEyLjc5MDUgMzQuNjM1OCAxNC41MTQySDM0LjYzODNaTTEzLjU3NDEgMjEuNDQzMUwxMC4yMDY1IDE5LjQ5OTRDMTAuMTcwMiAxOS40ODE5IDEwLjE0NjUgMTkuNDQ2OCAxMC4xNDE1IDE5LjQwNjhWMTAuMTA3OUMxMC4xNDQgNS45Njc4MSAxMy41MDI4IDIuNjEyNzQgMTcuNjQyOSAyLjYxNTI0QzE5LjM5NDIgMi42MTUyNCAyMS4wODkyIDMuMjMwMjUgMjIuNDM1NSA0LjM1MDI4QzIyLjM3NDMgNC4zODI3OCAyMi4yNjkzIDQuNDQxNTMgMjIuMTk5MiA0LjQ4NDAzTDE0LjIzNDEgOS4wODQxM0MxMy44MjY2IDkuMzE1MzggMTMuNTc2NiA5Ljc0Nzg5IDEzLjU3OTEgMTAuMjE2N0wxMy41NzQxIDIxLjQ0MDZWMjEuNDQzMVpNMTUuNDAyOSAxNy41MDA2TDE5LjczNDIgMTQuOTk5M0wyNC4wNjU1IDE3LjQ5OTNWMjIuNTAwN0wxOS43MzQyIDI1LjAwMDdMMTUuNDAyOSAyMi41MDA3VjE3LjUwMDZaIiBmaWxsPSIjN0Q3RDg3Ii8+Cjwvc3ZnPgo="},"displayName":"OpenAI Chat Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1179,"icon":"fa:code","name":"@n8n/n8n-nodes-langchain.outputParserStructured","codex":{"data":{"alias":["json","zod"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.outputparserstructured/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Output Parsers"]}}},"group":"[\"transform\"]","defaults":{"name":"Structured Output Parser"},"iconData":{"icon":"code","type":"icon"},"displayName":"Structured Output Parser","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1247,"icon":"fa:comments","name":"@n8n/n8n-nodes-langchain.chatTrigger","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/"}]},"categories":["Core Nodes","Langchain"]}},"group":"[\"trigger\"]","defaults":{"name":"When chat message received"},"iconData":{"icon":"comments","type":"icon"},"displayName":"Chat Trigger","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core 
Nodes"},{"id":26,"name":"Langchain"}]},{"id":1262,"icon":"file:google.svg","name":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatgooglegemini/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"Google Gemini Chat Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDggNDgiPjxkZWZzPjxwYXRoIGlkPSJhIiBkPSJNNDQuNSAyMEgyNHY4LjVoMTEuOEMzNC43IDMzLjkgMzAuMSAzNyAyNCAzN2MtNy4yIDAtMTMtNS44LTEzLTEzczUuOC0xMyAxMy0xM2MzLjEgMCA1LjkgMS4xIDguMSAyLjlsNi40LTYuNEMzNC42IDQuMSAyOS42IDIgMjQgMiAxMS44IDIgMiAxMS44IDIgMjRzOS44IDIyIDIyIDIyYzExIDAgMjEtOCAyMS0yMiAwLTEuMy0uMi0yLjctLjUtNCIvPjwvZGVmcz48Y2xpcFBhdGggaWQ9ImIiPjx1c2UgeGxpbms6aHJlZj0iI2EiIG92ZXJmbG93PSJ2aXNpYmxlIi8+PC9jbGlwUGF0aD48cGF0aCBmaWxsPSIjRkJCQzA1IiBkPSJNMCAzN1YxMWwxNyAxM3oiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiNFQTQzMzUiIGQ9Im0wIDExIDE3IDEzIDctNi4xTDQ4IDE0VjBIMHoiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiMzNEE4NTMiIGQ9Im0wIDM3IDMwLTIzIDcuOSAxTDQ4IDB2NDhIMHoiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiM0Mjg1RjQiIGQ9Ik00OCA0OCAxNyAyNGwtNC0zIDM1LTEweiIgY2xpcC1wYXRoPSJ1cmwoI2IpIi8+PC9zdmc+"},"displayName":"Google Gemini Chat Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1313,"icon":"fa:comments","name":"@n8n/n8n-nodes-langchain.chat","codex":{"data":{"alias":["human","wait","hitl","respond","approve","confirm","send","message"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.respondtochat/"}]},"categories":["Core 
Nodes","HITL","Langchain"],"subcategories":{"HITL":["Human in the Loop"]}}},"group":"[\"input\"]","defaults":{"name":"Chat"},"iconData":{"icon":"comments","type":"icon"},"displayName":"Chat","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"},{"id":26,"name":"Langchain"},{"id":28,"name":"HITL"}]}],"categories":[{"id":32,"name":"Market Research"},{"id":47,"name":"AI Chatbot"}],"image":[]}}