{"workflow":{"id":14497,"name":"Combine answers from OpenAI, Anthropic, Gemini and Groq into one consensus","views":2,"recentViews":1,"totalViews":2,"createdAt":"2026-03-30T18:30:35.709Z","description":"## Stop trusting one model. Let multiple LLMs show you where they agree and where they don't.\n \nAsk the same question to multiple LLMs and get one answer you can actually trust. Instead of hoping one model gets it right, this workflow sends your question to four models at once, compares what they say, and catches the ones that sound confident but are probably wrong.\n \nThis is not a \"chain models together\" template. Instead of trusting one model's answer, it makes multiple models prove they agree by checking every answer against the others and showing you exactly how much they align.\n \n### How it works\n \nThe workflow runs in four stages:\n \n1. **Ask in parallel:** Your question goes to four LLMs at the same time. Each model answers on its own and reports how confident it is. No model sees what the others said.\n2. **Compare answers:** A similarity engine checks how much the answers actually agree. It uses two different methods (Jaccard and Cosine) plus extra checks for short answers. So if one model says \"4\" and another says \"The answer is 4,\" both get credit for agreeing.\n3. **Calibrate confidence:** This is the key part. The system looks at what each model claims versus what the others actually said. A model saying it is 95% sure while everyone else disagrees? Its confidence gets cut. A model that is unsure but matches what the group said? Its confidence goes up. Overconfident outliers are usually the first sign of a hallucination.\n4. **Deliver the result:** If models agree, you get a single weighted answer with a visual bar showing how strong the agreement is. 
If they genuinely disagree, the system switches to peer review mode and shows every answer so you can decide for yourself.\n \n### Key Benefits\n- **Catches hallucinations with maths, not prompts.** An overconfident model that disagrees with the group gets its score reduced automatically.\n- **Three clear tiers.** Strong agreement gets a green label. Partial agreement gets yellow. Weak agreement gets orange. You always know how much to trust the response.\n- **Works with any LLM you want.** Default setup uses OpenAI, Anthropic, Gemini, and Groq. Swap any of them or add more.\n- **Tells you when a model fails.** If one provider is down or not set up yet, the response says so instead of breaking silently.\n\n### Setup\n\n- Add your API credentials.\n- Activate the workflow and open the production chat URL.\n- Type any question and wait for the consensus analysis.\n\n \n### Who this is for\n \n- AI engineers comparing model reliability across different providers\n- Product teams that need dependable AI answers for things users will see\n- Researchers looking at how different LLMs handle the same question\n- Anyone who has been burned by one model confidently making things up\n \n### Required APIs & Credentials\n\nAdd credentials for the LLM providers you want to use. The default setup includes OpenAI, Anthropic, Google Gemini, and Groq, but you can swap or remove any of them.\n\n### How to customise it\n \n- **Swap models:** Replace any LLM node with a different provider. 
Add more branches if you want and update the Merge node input count.\n- **Adjust the calibration:** Open the Confidence Calibration node and change what counts as overconfident, underconfident, or divergent.\n- **Change the agreement tiers:** In the Format Chat Message node, the defaults are green at 70%, yellow at 40%, orange below that.\n- **Use a different trigger:** Replace the chat trigger with a webhook, Slack command, or scheduled trigger.\n- **Send the output somewhere:** The structured JSON from Format Final Output works with Google Sheets, databases, dashboards, or any other workflow.\n \n### Known limitations\n \nThis workflow picks the answer most models agree on. That works well for factual questions. But if three models share the same wrong answer and one model gets it right, the correct answer gets penalised for being the outlier. For trick questions or topics where popular knowledge is wrong, keep that in mind.","workflow":{"id":"7BH4qa8EUakMmnlT","meta":{"instanceId":"d1dc073e8e3059a23e2730f69cb1b90065a2ac39039fea0727fdf9bee77a9131","templateCredsSetupCompleted":true},"name":"AI Consensus Engine: 4 Models, 1 Trusted Answer","tags":[],"nodes":[{"id":"f8513ac8-1f67-43b9-9d8f-1b4e0554f308","name":"Sticky Note","type":"n8n-nodes-base.stickyNote","position":[-1968,992],"parameters":{"width":496,"height":768,"content":"## AI Consensus Engine: 4 Models, 1 Trusted Answer\nThink of this as a panel of experts that actually checks each other's work. Instead of trusting one AI's answer blindly, the system cross-examines multiple models and calls out the ones that are bluffing.\n\n### How it works\n\nAsk: Your question goes to four LLMs in parallel, each self-reporting its confidence.\nCompare: Dual similarity analysis (Jaccard + Cosine) measures how much the answers actually agree.\nCalibrate: Overconfident outliers get penalized. Underconfident models matching consensus get boosted.\nDeliver: Strong agreement returns a single weighted answer. 
True disagreement switches to peer review mode showing every perspective.\n\n### Setup\n\n- [ ] Add API credentials for the four LLM providers ( OpenAI, Anthropic, Google Gemini, Groq)\n- [ ] Activate the workflow and open the chat window\n- [ ] Type any question and wait for the consensus analysis to come back\n\n### Customization\nSwap any LLM for another or add more parallel branches. Adjust similarity weights in the Similarity Analysis node. Change agreement tier thresholds in the Format Chat Message node. Replace the chat trigger with a webhook, Slack command, or any other entry point."},"typeVersion":1},{"id":"2fc46ee0-0a8b-4153-9967-fea815845acf","name":"Groq Chat Model3","type":"@n8n/n8n-nodes-langchain.lmChatGroq","onError":"continueRegularOutput","position":[-656,2160],"parameters":{"model":"llama-3.3-70b-versatile","options":{}},"credentials":{"groqApi":{"id":"TgbldqAabYDTWFNU","name":"Groq account"}},"typeVersion":1},{"id":"3a55a608-4a8c-49cd-920e-ba7cf9bbb4c1","name":"Sticky Note5","type":"n8n-nodes-base.stickyNote","position":[-1008,432],"parameters":{"color":7,"width":928,"height":1872,"content":"## Parallel LLM Generation\n\nEach model answers independently with confidence rating"},"typeVersion":1},{"id":"623b21cc-381b-4441-992e-c376c8b49962","name":"Sticky Note6","type":"n8n-nodes-base.stickyNote","position":[240,1104],"parameters":{"color":7,"width":768,"height":480,"content":"## Calibration Engine\n\nDetects overconfident outliers and underconfident consensus"},"typeVersion":1},{"id":"628254df-a7ba-4541-b08a-105896589d36","name":"Sticky Note7","type":"n8n-nodes-base.stickyNote","position":[1104,1056],"parameters":{"color":7,"width":656,"height":576,"content":"## Consensus or Fallback\n\nWeighted average if consensus exists, peer review if extreme divergence"},"typeVersion":1},{"id":"fc013140-4191-4c12-b901-aa244810c0b9","name":"When chat message 
received","type":"@n8n/n8n-nodes-langchain.chatTrigger","position":[-1376,1328],"webhookId":"ca263af5-1818-42b4-9f87-37dbede77081","parameters":{"public":true,"options":{"responseMode":"responseNodes"},"initialMessages":"Hi there! 👋\nAsk me any question and I'll analyze it using 4 AI models!"},"typeVersion":1.4},{"id":"ab784aaf-fac4-4eb3-af21-617704a5eaef","name":"Chat","type":"@n8n/n8n-nodes-langchain.chat","position":[2224,1328],"webhookId":"980b2c0c-d517-43d8-82db-88be08cdbe50","parameters":{"message":"={{ $json.chatResponse }}","options":{}},"typeVersion":1.3},{"id":"2fed4605-3342-4080-9d63-252aefb0e14b","name":"Parse & Validate Responses","type":"n8n-nodes-base.code","position":[336,1328],"parameters":{"jsCode":"// Extracts each LLM's answer and confidence from raw output, validates the JSON structure, and flags invalid responses.\n\nconst allItems = $input.all();\nconst parsedResponses = [];\n\nfor (let i = 0; i < allItems.length; i++) {\n  const item = allItems[i];\n\n  let modelName = `Model ${i + 1}`;\n\n  try {\n    const rawNodeName = item.json?.$node?.name || \"\";\n    const cleanedNodeName = rawNodeName\n      .replace(/^LLM \\d+ - /, \"\")\n      .replace(/\\d+$/, \"\")\n      .trim();\n\n    if (cleanedNodeName.length >= 3) {\n      modelName = cleanedNodeName;\n    }\n  } catch (nameError) {\n    modelName = `Model ${i + 1}`;\n  }\n\n  try {\n    const rawOutput = item.json.output || item.json.text || \"\";\n    let cleanedOutput = rawOutput.trim();\n\n    if (cleanedOutput.startsWith(\"```json\")) cleanedOutput = cleanedOutput.slice(7);\n    if (cleanedOutput.startsWith(\"```\")) cleanedOutput = cleanedOutput.slice(3);\n    if (cleanedOutput.endsWith(\"```\")) cleanedOutput = cleanedOutput.slice(0, -3);\n    cleanedOutput = cleanedOutput.trim();\n\n    const parsedJson = JSON.parse(cleanedOutput);\n\n    const hasValidAnswer = parsedJson.answer && typeof parsedJson.answer === \"string\";\n    const hasValidConfidence = typeof parsedJson.confidence 
=== \"number\";\n\n    if (!hasValidAnswer || !hasValidConfidence) {\n      throw new Error(\"Invalid response structure: missing answer or confidence\");\n    }\n\n    const clampedConfidence = Math.max(0, Math.min(1, parsedJson.confidence));\n\n    parsedResponses.push({\n      model: modelName,\n      answer: parsedJson.answer.trim(),\n      originalConfidence: clampedConfidence,\n      reasoning: parsedJson.reasoning || \"\",\n      valid: true,\n    });\n  } catch (parseError) {\n    parsedResponses.push({\n      model: modelName,\n      answer: \"Parse error\",\n      originalConfidence: 0.0,\n      reasoning: parseError.message,\n      valid: false,\n    });\n  }\n}\n\nconst validResponseCount = parsedResponses.filter((r) => r.valid).length;\n\nreturn {\n  json: {\n    responses: parsedResponses,\n    validCount: validResponseCount,\n  },\n};"},"typeVersion":2},{"id":"ef6b2f68-1d25-4cb0-98e3-8c001c1d667f","name":"Similarity Analysis","type":"n8n-nodes-base.code","position":[560,1328],"parameters":{"jsCode":"// Compares every valid LLM response using both Jaccard and Cosine similarity for accurate agreement detection, then attaches an average similarity score to each.\n\nconst allResponses = $input.first().json.responses;\nconst validResponses = allResponses.filter((r) => r.valid);\n\nif (validResponses.length < 2) {\n  return {\n    json: {\n      responses: validResponses,\n      similarityMatrix: [],\n      error: \"Not enough valid responses to compare\",\n    },\n  };\n}\n\nfunction normalizeAnswer(text) {\n  return text\n    .toLowerCase()\n    .replace(/[.,!?;:'\"()\\-]/g, \"\")\n    .replace(/\\s+/g, \" \")\n    .trim();\n}\n\nfunction getWordFrequency(text) {\n  const words = text.split(/\\s+/);\n  const frequency = {};\n\n  for (const word of words) {\n    frequency[word] = (frequency[word] || 0) + 1;\n  }\n\n  return frequency;\n}\n\nfunction getJaccardSimilarity(textA, textB) {\n  const wordsA = new Set(textA.split(/\\s+/));\n  const wordsB = new 
Set(textB.split(/\\s+/));\n\n  const sharedWords = new Set([...wordsA].filter((word) => wordsB.has(word)));\n  const allWords = new Set([...wordsA, ...wordsB]);\n\n  return sharedWords.size / allWords.size;\n}\n\nfunction getCosineSimilarity(textA, textB) {\n  const frequencyA = getWordFrequency(textA);\n  const frequencyB = getWordFrequency(textB);\n\n  const allUniqueWords = new Set([\n    ...Object.keys(frequencyA),\n    ...Object.keys(frequencyB),\n  ]);\n\n  let dotProduct = 0;\n  let magnitudeA = 0;\n  let magnitudeB = 0;\n\n  for (const word of allUniqueWords) {\n    const countA = frequencyA[word] || 0;\n    const countB = frequencyB[word] || 0;\n\n    dotProduct += countA * countB;\n    magnitudeA += countA * countA;\n    magnitudeB += countB * countB;\n  }\n\n  const magnitude = Math.sqrt(magnitudeA) * Math.sqrt(magnitudeB);\n\n  if (magnitude === 0) return 0;\n\n  return dotProduct / magnitude;\n}\n\nfunction getCombinedSimilarity(textA, textB) {\n  const normalizedA = normalizeAnswer(textA);\n  const normalizedB = normalizeAnswer(textB);\n\n  const shorter = normalizedA.length <= normalizedB.length ? normalizedA : normalizedB;\n  const longer = normalizedA.length > normalizedB.length ? 
normalizedA : normalizedB;\n\n  if (shorter.length > 0 && longer.includes(shorter)) {\n    return 0.95;\n  }\n\n  const firstSentenceA = normalizedA.split(/[.!?]/)[0].trim();\n  const firstSentenceB = normalizedB.split(/[.!?]/)[0].trim();\n\n  const firstSentenceSimilarity = getCosineSimilarity(firstSentenceA, firstSentenceB);\n\n  const jaccardScore = getJaccardSimilarity(normalizedA, normalizedB);\n  const cosineScore = getCosineSimilarity(normalizedA, normalizedB);\n\n  const JACCARD_WEIGHT = 0.25;\n  const COSINE_WEIGHT = 0.45;\n  const FIRST_SENTENCE_WEIGHT = 0.30;\n\n  return (jaccardScore * JACCARD_WEIGHT) + (cosineScore * COSINE_WEIGHT) + (firstSentenceSimilarity * FIRST_SENTENCE_WEIGHT);\n}\n\nconst similarityMatrix = [];\n\nfor (let i = 0; i < validResponses.length; i++) {\n  const rowScores = [];\n\n  for (let j = 0; j < validResponses.length; j++) {\n    if (i === j) {\n      rowScores.push(1.0);\n    } else {\n      const score = getCombinedSimilarity(\n        validResponses[i].answer,\n        validResponses[j].answer\n      );\n      rowScores.push(score);\n    }\n  }\n\n  similarityMatrix.push(rowScores);\n}\n\nconst averageSimilarityPerResponse = similarityMatrix.map((row) => {\n  const totalExcludingSelf = row.reduce((sum, value) => sum + value, 0) - 1;\n  const otherResponseCount = row.length - 1;\n  return totalExcludingSelf / otherResponseCount;\n});\n\nconst responsesWithSimilarity = validResponses.map((response, index) => ({\n  ...response,\n  avgSimilarityToOthers: averageSimilarityPerResponse[index],\n}));\n\nreturn {\n  json: {\n    responses: responsesWithSimilarity,\n    similarityMatrix: similarityMatrix,\n  },\n};"},"typeVersion":2},{"id":"8a00b00c-0930-4bab-9f03-5b03c8fcbca7","name":"Confidence Calibration","type":"n8n-nodes-base.code","position":[784,1328],"parameters":{"jsCode":"// Adjusts each model's confidence based on agreement patterns: penalizes overconfident outliers, boosts underconfident consensus matches, and moderately 
reduces mid-confidence outliers.\n\nconst responses = $input.first().json.responses;\n\nconst HIGH_CONFIDENCE_THRESHOLD = 0.80;\nconst LOW_AGREEMENT_THRESHOLD = 0.30;\nconst STRONG_AGREEMENT_THRESHOLD = 0.50;\nconst MODERATE_DISAGREEMENT_THRESHOLD = 0.30;\nconst PENALTY_MULTIPLIER = 0.70;\n\nconst calibratedResponses = responses.map((response) => {\n  let adjustedConfidence = response.originalConfidence;\n  let adjustmentReason = \"No calibration needed\";\n  let wasCalibrated = false;\n\n  const isHighConfidence = response.originalConfidence >= HIGH_CONFIDENCE_THRESHOLD;\n  const isLowAgreement = response.avgSimilarityToOthers < LOW_AGREEMENT_THRESHOLD;\n  const isLowConfidence = response.originalConfidence < 0.50;\n  const isStrongAgreement = response.avgSimilarityToOthers >= STRONG_AGREEMENT_THRESHOLD;\n  const isMidConfidence = response.originalConfidence >= 0.60;\n  const isModerateDisagreement = response.avgSimilarityToOthers < MODERATE_DISAGREEMENT_THRESHOLD;\n\n  if (isHighConfidence && isLowAgreement) {\n    adjustedConfidence = Math.min(response.originalConfidence, 0.40);\n    adjustmentReason = \"Overconfident outlier: High self-reported confidence but low agreement with other models. Likely hallucination, confidence reduced.\";\n    wasCalibrated = true;\n  } else if (isLowConfidence && isStrongAgreement) {\n    adjustedConfidence = Math.max(response.originalConfidence, 0.65);\n    adjustmentReason = \"Consensus boost: Low self-confidence but strong agreement with other models. Confidence increased.\";\n    wasCalibrated = true;\n  } else if (isMidConfidence && isModerateDisagreement) {\n    adjustedConfidence = response.originalConfidence * PENALTY_MULTIPLIER;\n    adjustmentReason = \"Moderate outlier: Disagreement with consensus detected. 
Confidence reduced moderately.\";\n    wasCalibrated = true;\n  }\n\n  return {\n    ...response,\n    calibratedConfidence: Math.round(adjustedConfidence * 100) / 100,\n    calibrated: wasCalibrated,\n    calibrationReason: adjustmentReason,\n  };\n});\n\nconst totalSimilarity = responses.reduce((sum, r) => sum + r.avgSimilarityToOthers, 0);\nconst averageSimilarity = totalSimilarity / responses.length;\nconst hasStrongConsensus = averageSimilarity >= STRONG_AGREEMENT_THRESHOLD;\nconst hasExtremeDivergence = averageSimilarity < 0.10;\n\nreturn {\n  json: {\n    responses: calibratedResponses,\n    consensusMetrics: {\n      avgSimilarity: Math.round(averageSimilarity * 100) / 100,\n      hasStrongConsensus: hasStrongConsensus,\n      hasExtremeDivergence: hasExtremeDivergence,\n    },\n  },\n};"},"typeVersion":2},{"id":"284a9008-3bd8-4130-842a-4e5cf6cb7b66","name":"Weighted Consensus","type":"n8n-nodes-base.code","position":[1392,1424],"parameters":{"jsCode":"// Ranks all responses by calibrated confidence weight and picks the top answer as primary consensus, collecting minority views above 15% weight.\n\nconst responses = $input.first().json.responses;\nconst consensusMetrics = $input.first().json.consensusMetrics;\n\nconst totalCalibratedConfidence = responses.reduce((sum, r) => sum + r.calibratedConfidence, 0);\n\nconst responsesWithWeights = responses.map((response) => ({\n  ...response,\n  weight: totalCalibratedConfidence > 0\n    ? 
response.calibratedConfidence / totalCalibratedConfidence\n    : 1 / responses.length,\n}));\n\nresponsesWithWeights.sort((a, b) => b.weight - a.weight);\n\nconst topWeightedResponse = responsesWithWeights[0];\nconst remainingResponses = responsesWithWeights.slice(1);\nconst significantMinorityViews = remainingResponses.filter((r) => r.weight > 0.15);\n\nconst calibratedModelCount = responsesWithWeights.filter((r) => r.calibrated).length;\n\nconst consensusSummary = {\n  primaryAnswer: topWeightedResponse.answer,\n  primaryWeight: Math.round(topWeightedResponse.weight * 100),\n  primaryModel: topWeightedResponse.model,\n  consensusStrength: consensusMetrics.avgSimilarity,\n  calibrationsApplied: calibratedModelCount,\n  minorityViews: significantMinorityViews.map((r) => ({\n    answer: r.answer,\n    weight: Math.round(r.weight * 100),\n    model: r.model,\n  })),\n};\n\nreturn {\n  json: {\n    consensusSummary: consensusSummary,\n    allResponses: responsesWithWeights,\n    mode: \"weighted_consensus\",\n  },\n};"},"typeVersion":2},{"id":"8c655fa2-d923-4eb9-b72b-3ec05f2ab071","name":"Peer Review Fallback","type":"n8n-nodes-base.code","position":[1392,1232],"parameters":{"jsCode":"// When models disagree too much for weighted consensus, sorts all responses by original confidence and returns them as individual perspectives for the user to review.\n\nconst responses = $input.first().json.responses;\n\nconst sortedByOriginalConfidence = [...responses].sort(\n  (a, b) => b.originalConfidence - a.originalConfidence\n);\n\nconst formattedPerspectives = sortedByOriginalConfidence.map((response) => ({\n  model: response.model,\n  answer: response.answer,\n  confidence: response.originalConfidence,\n  similarityToOthers: response.avgSimilarityToOthers,\n}));\n\nreturn {\n  json: {\n    peerReviewSummary: {\n      status: \"extreme_divergence_detected\",\n      explanation: \"Models disagree significantly. 
Showing all perspectives instead of weighted consensus.\",\n      responses: formattedPerspectives,\n    },\n    mode: \"peer_review_fallback\",\n  },\n};"},"typeVersion":2},{"id":"f256d865-3c18-4f82-9612-4e1f27bc5dec","name":"Format Output (chat message)","type":"n8n-nodes-base.code","position":[2000,1328],"parameters":{"jsCode":"// Formats the final JSON result into a readable chat message with tiered agreement display, calibration notes, and per-model breakdowns.\n\nconst inputData = $input.first().json;\nconst consensusMode = inputData.mode;\n\nlet chatMessage = \"\";\n\nif (consensusMode === \"weighted_consensus\") {\n  const answer = inputData.result.answer;\n  const agreementLevel = inputData.result.consensusStrength;\n\n  const agreementPercent = parseInt(agreementLevel);\n  const filledBlocks = Math.floor(agreementPercent / 10);\n  const emptyBlocks = 10 - filledBlocks;\n  const agreementBar = \"█\".repeat(filledBlocks) + \"░\".repeat(emptyBlocks);\n\n  let agreementLabel = \"\";\n\n  if (agreementPercent >= 70) {\n    agreementLabel = \"✅ **Strong consensus across all models**\";\n  } else if (agreementPercent >= 40) {\n    agreementLabel = \"🟡 **Models mostly agree, with some variation in detail**\";\n  } else {\n    agreementLabel = \"🟠 **Models loosely agree, but answers vary significantly**\";\n  }\n\n  chatMessage = `${agreementLabel}\\n\\n${answer}\\n\\n`;\n  chatMessage += `**Agreement level:** ${agreementBar} ${agreementLevel}\\n`;\n\n  const calibrationCount = inputData.calibrationReport?.calibrationsApplied || 0;\n\n  if (calibrationCount > 0) {\n    const plural = calibrationCount > 1 ? 
\"s\" : \"\";\n    chatMessage += `\\n⚙️ I adjusted ${calibrationCount} model${plural} that seemed overconfident\\n`;\n  }\n\n  const minorityOpinions = inputData.minorityOpinions || [];\n\n  if (minorityOpinions.length > 0) {\n    chatMessage += `\\n**💭 Other perspectives:**\\n`;\n\n    for (const opinion of minorityOpinions) {\n      const maxAnswerLength = 150;\n      const truncatedAnswer = opinion.answer.length > maxAnswerLength\n        ? opinion.answer.substring(0, maxAnswerLength) + \"...\"\n        : opinion.answer;\n      chatMessage += `• ${truncatedAnswer}\\n`;\n    }\n  }\n} else {\n  chatMessage = `⚠️ **The models don't agree on this one**\\n\\n`;\n  chatMessage += `This usually means the question is:\\n`;\n  chatMessage += `• Controversial or debatable\\n`;\n  chatMessage += `• Ambiguous (could mean different things)\\n`;\n  chatMessage += `• Based on outdated information\\n\\n`;\n  chatMessage += `Here's what each model thinks:\\n\\n`;\n\n  const allPerspectives = inputData.allPerspectives || [];\n\n  for (let i = 0; i < allPerspectives.length; i++) {\n    const perspective = allPerspectives[i];\n    const confidencePercent = Math.round(perspective.confidence * 100);\n\n    let confidenceEmoji = \"🤷\";\n    if (confidencePercent >= 90) confidenceEmoji = \"💪\";\n    else if (confidencePercent >= 70) confidenceEmoji = \"👍\";\n    else if (confidencePercent >= 50) confidenceEmoji = \"🤔\";\n\n    chatMessage += `${confidenceEmoji} **Model ${i + 1}** (${confidencePercent}% sure)\\n`;\n    chatMessage += `${perspective.answer}\\n\\n`;\n  }\n\n  chatMessage += `---\\n💡 **My recommendation:** Review all perspectives above and use your judgment.`;\n}\n\nconst failedModels = inputData.failedModels || [];\n\nif (failedModels.length > 0) {\n  const failedNames = failedModels.join(\", \");\n  const plural = failedModels.length > 1 ? 
\"s\" : \"\";\n  chatMessage += `\\n\\n⚠️ **Note:** ${failedNames} model${plural} failed to respond and was excluded from the analysis.`;\n}\n\nreturn {\n  json: {\n    chatResponse: chatMessage,\n  },\n};"},"typeVersion":2},{"id":"0dd53837-ab81-486a-be77-593b20203039","name":"Format Final Output","type":"n8n-nodes-base.code","position":[1616,1328],"parameters":{"jsCode":"// Builds the structured output object with result summary, calibration report, and full model breakdown depending on whether consensus or peer review mode was used.\n\nconst inputData = $input.first().json;\nconst consensusMode = inputData.mode;\n\nlet finalOutput = {\n  mode: consensusMode,\n  timestamp: new Date().toISOString(),\n};\n\nif (consensusMode === \"weighted_consensus\") {\n  const summary = inputData.consensusSummary;\n  const allModelResponses = inputData.allResponses;\n\n  finalOutput.result = {\n    answer: summary.primaryAnswer,\n    confidence: `${summary.primaryWeight}% (weighted)`,\n    consensusStrength: `${Math.round(summary.consensusStrength * 100)}%`,\n    source: `${summary.primaryModel} (${summary.primaryWeight}% weight)`,\n  };\n\n  finalOutput.minorityOpinions = summary.minorityViews;\n\n  const calibratedModels = allModelResponses.filter((r) => r.calibrated);\n\n  finalOutput.calibrationReport = {\n    calibrationsApplied: summary.calibrationsApplied,\n    details: calibratedModels.map((r) => ({\n      model: r.model,\n      originalConfidence: `${Math.round(r.originalConfidence * 100)}%`,\n      calibratedConfidence: `${Math.round(r.calibratedConfidence * 100)}%`,\n      reason: r.calibrationReason,\n    })),\n  };\n\n  finalOutput.fullBreakdown = allModelResponses.map((r) => ({\n    model: r.model,\n    answer: r.answer,\n    originalConfidence: `${Math.round(r.originalConfidence * 100)}%`,\n    calibratedConfidence: `${Math.round(r.calibratedConfidence * 100)}%`,\n    weight: `${Math.round(r.weight * 100)}%`,\n    similarityToOthers: 
`${Math.round(r.avgSimilarityToOthers * 100)}%`,\n    wasCalibrated: r.calibrated,\n  }));\n} else {\n  finalOutput.result = {\n    answer: \"Multiple conflicting perspectives - see all responses below\",\n    confidence: \"N/A (extreme divergence)\",\n    consensusStrength: \"< 10%\",\n    source: \"Peer review fallback mode\",\n  };\n\n  finalOutput.allPerspectives = inputData.peerReviewSummary.responses;\n  finalOutput.explanation = inputData.peerReviewSummary.explanation;\n}\n\nfinalOutput.failedModels = (inputData.allResponses || [])\n  .filter((r) => r.originalConfidence === 0 && r.answer === \"Parse error\")\n  .map((r) => r.model);\n\nreturn { json: finalOutput };"},"typeVersion":2},{"id":"bdbd9213-9c82-4a72-961f-f6b9f7f17682","name":"Sticky Note8","type":"n8n-nodes-base.stickyNote","position":[1904,1104],"parameters":{"color":7,"width":544,"height":464,"content":"## Chat Response\n\nFormats the report into a readable message and sends it to the user"},"typeVersion":1},{"id":"aa47a3a4-a663-440a-9847-d741df86ce7c","name":"Merge All 4 LLMs","type":"n8n-nodes-base.merge","position":[-224,1296],"parameters":{"numberInputs":4},"typeVersion":3},{"id":"6bd18a44-0e5b-4a5f-ac54-b346afe4ab4f","name":"Set User Prompt","type":"n8n-nodes-base.set","position":[-1152,1328],"parameters":{"options":{},"assignments":{"assignments":[{"id":"fb70c030-9f27-40d6-8805-41863a150088","name":"userPrompt","type":"string","value":"={{ $json.chatInput }}"}]}},"typeVersion":3.4},{"id":"07810333-b719-4393-a7e1-c15a553bb078","name":"Google Gemini Chat Model","type":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","onError":"continueRegularOutput","position":[-656,848],"parameters":{"options":{}},"credentials":{"googlePalmApi":{"id":"qQGrvqnSPqWFH6I6","name":"Google Gemini(PaLM) Api account 5"}},"typeVersion":1},{"id":"bdb00463-6a77-40ac-a7b8-68ff6b100828","name":"Anthropic Chat 
Model","type":"@n8n/n8n-nodes-langchain.lmChatAnthropic","onError":"continueRegularOutput","position":[-656,1248],"parameters":{"model":{"__rl":true,"mode":"list","value":"claude-sonnet-4-5-20250929","cachedResultName":"Claude Sonnet 4.5"},"options":{}},"credentials":{"anthropicApi":{"id":"9nzHwX0Ed87LaDrh","name":"Anthropic account"}},"typeVersion":1.3},{"id":"ead472f9-e7b4-4564-9e66-6c1c98316cf6","name":"OpenAI Chat Model","type":"@n8n/n8n-nodes-langchain.lmChatOpenAi","onError":"continueRegularOutput","position":[-656,1648],"parameters":{"model":{"__rl":true,"mode":"list","value":"gpt-5-mini"},"options":{},"builtInTools":{}},"credentials":{"openAiApi":{"id":"ZsG0nwc3tCaUdXpw","name":"OpenAi account"}},"retryOnFail":false,"typeVersion":1.3},{"id":"ddb55708-a6f5-4652-830d-c9269a49cc43","name":"LLM1 ","type":"@n8n/n8n-nodes-langchain.agent","onError":"continueRegularOutput","position":[-720,624],"parameters":{"text":"=Answer this question and rate your confidence:\n\nQuestion: {{ $json.userPrompt }}\n\nReturn ONLY valid JSON:\n{\n  \"answer\": \"your detailed answer here\",\n  \"confidence\": 0.75,\n  \"reasoning\": \"brief explanation of your confidence level\"\n}\n\nRules:\n- confidence: float between 0.0 and 1.0\n- Be honest about uncertainty\n- Lower confidence when speculating\n- Higher confidence only when certain","options":{"systemMessage":"You are a helpful AI assistant. 
Always return responses in valid JSON format as requested."},"promptType":"define"},"typeVersion":3.1},{"id":"93167b26-228d-4c8d-bd7a-2d3bff9a7093","name":"LLM2 ","type":"@n8n/n8n-nodes-langchain.agent","onError":"continueRegularOutput","position":[-720,1024],"parameters":{"text":"=Answer this question and rate your confidence:\n\nQuestion: {{ $json.userPrompt }}\n\nReturn ONLY valid JSON:\n{\n  \"answer\": \"your detailed answer here\",\n  \"confidence\": 0.75,\n  \"reasoning\": \"brief explanation of your confidence level\"\n}\n\nRules:\n- confidence: float between 0.0 and 1.0\n- Be honest about uncertainty\n- Lower confidence when speculating\n- Higher confidence only when certain","options":{"systemMessage":"You are a helpful AI assistant. Always return responses in valid JSON format as requested."},"promptType":"define"},"typeVersion":3.1},{"id":"d54b80a2-aabe-4886-a96d-5aa8ad4008c9","name":"LLM3","type":"@n8n/n8n-nodes-langchain.agent","position":[-720,1424],"parameters":{"text":"=Answer this question and rate your confidence:\n\nQuestion: {{ $json.userPrompt }}\n\nReturn ONLY valid JSON:\n{\n  \"answer\": \"your detailed answer here\",\n  \"confidence\": 0.75,\n  \"reasoning\": \"brief explanation of your confidence level\"\n}\n\nRules:\n- confidence: float between 0.0 and 1.0\n- Be honest about uncertainty\n- Lower confidence when speculating\n- Higher confidence only when certain","options":{"systemMessage":"You are a helpful AI assistant. 
Always return responses in valid JSON format as requested."},"promptType":"define"},"typeVersion":3.1},{"id":"05fa9b24-c695-4179-ab34-c9aec1a75b6a","name":"LLM4 ","type":"@n8n/n8n-nodes-langchain.agent","position":[-720,1936],"parameters":{"text":"=Answer this question and rate your confidence:\n\nQuestion: {{ $json.userPrompt }}\n\nReturn ONLY valid JSON:\n{\n  \"answer\": \"your detailed answer here\",\n  \"confidence\": 0.75,\n  \"reasoning\": \"brief explanation of your confidence level\"\n}\n\nRules:\n- confidence: float between 0.0 and 1.0\n- Be honest about uncertainty\n- Lower confidence when speculating\n- Higher confidence only when certain","options":{"systemMessage":"You are a helpful AI assistant. Always return responses in valid JSON format as requested."},"promptType":"define"},"typeVersion":3.1},{"id":"df756265-010b-461f-8152-04abf4420b9f","name":"Check for Extreme Divergence","type":"n8n-nodes-base.if","position":[1168,1328],"parameters":{"options":{},"conditions":{"options":{"version":1,"leftValue":"","caseSensitive":true,"typeValidation":"strict"},"combinator":"and","conditions":[{"id":"extreme-divergence-check","operator":{"type":"boolean","operation":"true"},"leftValue":"={{ $json.consensusMetrics.hasExtremeDivergence }}","rightValue":true}]}},"typeVersion":2}],"active":false,"pinData":{},"settings":{"binaryMode":"separate","executionOrder":"v1"},"versionId":"84e67570-876d-439b-a80d-3d74b2c42552","connections":{"LLM3":{"main":[[{"node":"Merge All 4 LLMs","type":"main","index":2}]]},"LLM1 ":{"main":[[{"node":"Merge All 4 LLMs","type":"main","index":0}]]},"LLM2 ":{"main":[[{"node":"Merge All 4 LLMs","type":"main","index":1}]]},"LLM4 ":{"main":[[{"node":"Merge All 4 LLMs","type":"main","index":3}]]},"Set User Prompt":{"main":[[{"node":"LLM4 ","type":"main","index":0},{"node":"LLM3","type":"main","index":0},{"node":"LLM2 ","type":"main","index":0},{"node":"LLM1 ","type":"main","index":0}]]},"Groq Chat Model3":{"ai_languageModel":[[{"node":"LLM4 
","type":"ai_languageModel","index":0}]]},"Merge All 4 LLMs":{"main":[[{"node":"Parse & Validate Responses","type":"main","index":0}]]},"OpenAI Chat Model":{"ai_languageModel":[[{"node":"LLM3","type":"ai_languageModel","index":0}]]},"Weighted Consensus":{"main":[[{"node":"Format Final Output","type":"main","index":0}]]},"Format Final Output":{"main":[[{"node":"Format Output (chat message)","type":"main","index":0}]]},"Similarity Analysis":{"main":[[{"node":"Confidence Calibration","type":"main","index":0}]]},"Anthropic Chat Model":{"ai_languageModel":[[{"node":"LLM2 ","type":"ai_languageModel","index":0}]]},"Peer Review Fallback":{"main":[[{"node":"Format Final Output","type":"main","index":0}]]},"Confidence Calibration":{"main":[[{"node":"Check for Extreme Divergence","type":"main","index":0}]]},"Google Gemini Chat Model":{"ai_languageModel":[[{"node":"LLM1 ","type":"ai_languageModel","index":0}]]},"Parse & Validate Responses":{"main":[[{"node":"Similarity Analysis","type":"main","index":0}]]},"When chat message received":{"main":[[{"node":"Set User Prompt","type":"main","index":0}]]},"Check for Extreme Divergence":{"main":[[{"node":"Peer Review Fallback","type":"main","index":0}],[{"node":"Weighted Consensus","type":"main","index":0}]]},"Format Output (chat 
message)":{"main":[[{"node":"Chat","type":"main","index":0}]]}}},"lastUpdatedBy":1,"workflowInfo":{"nodeCount":25,"nodeTypes":{"n8n-nodes-base.if":{"count":1},"n8n-nodes-base.set":{"count":1},"n8n-nodes-base.code":{"count":7},"n8n-nodes-base.merge":{"count":1},"n8n-nodes-base.stickyNote":{"count":5},"@n8n/n8n-nodes-langchain.chat":{"count":1},"@n8n/n8n-nodes-langchain.agent":{"count":4},"@n8n/n8n-nodes-langchain.lmChatGroq":{"count":1},"@n8n/n8n-nodes-langchain.chatTrigger":{"count":1},"@n8n/n8n-nodes-langchain.lmChatOpenAi":{"count":1},"@n8n/n8n-nodes-langchain.lmChatAnthropic":{"count":1},"@n8n/n8n-nodes-langchain.lmChatGoogleGemini":{"count":1}}},"status":"published","readyToDemo":null,"user":{"name":"Mychel Garzon","username":"mychel-garzon","bio":"n8n Verified Creator and Junction 2025 n8n Tech Challenge Winner based in Helsinki, Finland. Full Stack Engineer specializing in AI automation workflows, multi-agent systems, RAG pipelines, and automated incident triage. Node.js, TypeScript, React, LLMs (OpenAI, Anthropic, Gemini, Groq). 99.9% production uptime.\n\nCustom n8n workflows: mychel.garzon@gmail.com","verified":true,"links":["https://mychelgarzon.com/"],"avatar":"https://gravatar.com/avatar/8937dc435f1eb7cc47cfc0139be315f5e28add64bc872edc5e5315137ee12b75?r=pg&d=retro&size=200"},"nodes":[{"id":20,"icon":"fa:map-signs","name":"n8n-nodes-base.if","codex":{"data":{"alias":["Router","Filter","Condition","Logic","Boolean","Branch"],"details":"The IF node can be used to implement binary conditional logic in your workflow. You can set up one-to-many conditions to evaluate each item of data being inputted into the node. 
That data will either evaluate to TRUE or FALSE and route out of the node accordingly.\n\nThis node has multiple types of conditions: Bool, String, Number, and Date & Time.","resources":{"generic":[{"url":"https://n8n.io/blog/learn-to-automate-your-factorys-incident-reporting-a-step-by-step-guide/","icon":"🏭","label":"Learn to Automate Your Factory's Incident Reporting: A Step by Step Guide"},{"url":"https://n8n.io/blog/2021-the-year-to-automate-the-new-you-with-n8n/","icon":"☀️","label":"2021: The Year to Automate the New You with n8n"},{"url":"https://n8n.io/blog/why-business-process-automation-with-n8n-can-change-your-daily-life/","icon":"🧬","label":"Why business process automation with n8n can change your daily life"},{"url":"https://n8n.io/blog/create-a-toxic-language-detector-for-telegram/","icon":"🤬","label":"Create a toxic language detector for Telegram in 4 step"},{"url":"https://n8n.io/blog/no-code-ecommerce-workflow-automations/","icon":"store","label":"6 e-commerce workflows to power up your Shopify s"},{"url":"https://n8n.io/blog/how-to-build-a-low-code-self-hosted-url-shortener/","icon":"🔗","label":"How to build a low-code, self-hosted URL shortener in 3 steps"},{"url":"https://n8n.io/blog/automate-your-data-processing-pipeline-in-9-steps-with-n8n/","icon":"⚙️","label":"Automate your data processing pipeline in 9 steps"},{"url":"https://n8n.io/blog/how-to-get-started-with-crm-automation-and-no-code-workflow-ideas/","icon":"👥","label":"How to get started with CRM automation (with 3 no-code workflow ideas"},{"url":"https://n8n.io/blog/5-tasks-you-can-automate-with-notion-api/","icon":"⚡️","label":"5 tasks you can automate with the new Notion API "},{"url":"https://n8n.io/blog/automate-google-apps-for-productivity/","icon":"💡","label":"15 Google apps you can combine and automate to increase productivity"},{"url":"https://n8n.io/blog/automation-for-maintainers-of-open-source-projects/","icon":"🏷️","label":"How to automatically manage contributions to 
open-source projects"},{"url":"https://n8n.io/blog/how-uproc-scraped-a-multi-page-website-with-a-low-code-workflow/","icon":" 🕸️","label":"How uProc scraped a multi-page website with a low-code workflow"},{"url":"https://n8n.io/blog/5-workflow-automations-for-mattermost-that-we-love-at-n8n/","icon":"🤖","label":"5 workflow automations for Mattermost that we love at n8n"},{"url":"https://n8n.io/blog/why-this-product-manager-loves-workflow-automation-with-n8n/","icon":"🧠","label":"Why this Product Manager loves workflow automation with n8n"},{"url":"https://n8n.io/blog/sending-automated-congratulations-with-google-sheets-twilio-and-n8n/","icon":"🙌","label":"Sending Automated Congratulations with Google Sheets, Twilio, and n8n "},{"url":"https://n8n.io/blog/how-to-set-up-a-ci-cd-pipeline-with-no-code/","icon":"🎡","label":"How to set up a no-code CI/CD pipeline with GitHub and TravisCI"},{"url":"https://n8n.io/blog/benefits-of-automation-and-n8n-an-interview-with-hubspots-hugh-durkin/","icon":"🎖","label":"Benefits of automation and n8n: An interview with HubSpot's Hugh Durkin"},{"url":"https://n8n.io/blog/aws-workflow-automation/","label":"7 no-code workflow automations for Amazon Web Services"}],"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.if/"}]},"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Flow"]}}},"group":"[\"transform\"]","defaults":{"name":"If","color":"#408000"},"iconData":{"icon":"map-signs","type":"icon"},"displayName":"If","typeVersion":2,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":24,"icon":"file:merge.svg","name":"n8n-nodes-base.merge","codex":{"data":{"alias":["Join","Concatenate","Wait"],"resources":{"generic":[{"url":"https://n8n.io/blog/how-to-sync-data-between-two-systems/","icon":"🏬","label":"How to synchronize data between two systems (one-way vs. 
two-way sync"},{"url":"https://n8n.io/blog/supercharging-your-conference-registration-process-with-n8n/","icon":"🎫","label":"Supercharging your conference registration process with n8n"},{"url":"https://n8n.io/blog/migrating-community-metrics-to-orbit-using-n8n/","icon":"📈","label":"Migrating Community Metrics to Orbit using n8n"},{"url":"https://n8n.io/blog/build-your-own-virtual-assistant-with-n8n-a-step-by-step-guide/","icon":"👦","label":"Build your own virtual assistant with n8n: A step by step guide"},{"url":"https://n8n.io/blog/sending-automated-congratulations-with-google-sheets-twilio-and-n8n/","icon":"🙌","label":"Sending Automated Congratulations with Google Sheets, Twilio, and n8n "},{"url":"https://n8n.io/blog/aws-workflow-automation/","label":"7 no-code workflow automations for Amazon Web Services"}],"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.merge/"}]},"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Flow","Data 
Transformation"]}}},"group":"[\"transform\"]","defaults":{"name":"Merge"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxnIGNsaXAtcGF0aD0idXJsKCNjbGlwMF8xMTc3XzUxOCkiPgo8cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGNsaXAtcnVsZT0iZXZlbm9kZCIgZD0iTTAgNDhDMCAyMS40OTAzIDIxLjQ5MDMgMCA0OCAwSDExMkMxMzguNTEgMCAxNjAgMjEuNDkwMyAxNjAgNDhWNTZIMTk2LjI1MkMyNDAuNDM1IDU2IDI3Ni4yNTIgOTEuODE3MiAyNzYuMjUyIDEzNlYxOTJDMjc2LjI1MiAyMTQuMDkxIDI5NC4xNjEgMjMyIDMxNi4yNTIgMjMySDM1MlYyMjRDMzUyIDE5Ny40OSAzNzMuNDkgMTc2IDQwMCAxNzZINDY0QzQ5MC41MSAxNzYgNTEyIDE5Ny40OSA1MTIgMjI0VjI4OEM1MTIgMzE0LjUxIDQ5MC41MSAzMzYgNDY0IDMzNkg0MDBDMzczLjQ5IDMzNiAzNTIgMzE0LjUxIDM1MiAyODhWMjgwSDMxNi4yNTJDMjk0LjE2MSAyODAgMjc2LjI1MiAyOTcuOTA5IDI3Ni4yNTIgMzIwVjM3NkMyNzYuMjUyIDQyMC4xODMgMjQwLjQzNSA0NTYgMTk2LjI1MiA0NTZIMTYwVjQ2NEMxNjAgNDkwLjUxIDEzOC41MSA1MTIgMTEyIDUxMkg0OEMyMS40OTAzIDUxMiAwIDQ5MC41MSAwIDQ2NFY0MDBDMCAzNzMuNDkgMjEuNDkwMyAzNTIgNDggMzUySDExMkMxMzguNTEgMzUyIDE2MCAzNzMuNDkgMTYwIDQwMFY0MDhIMTk2LjI1MkMyMTMuOTI1IDQwOCAyMjguMjUyIDM5My42NzMgMjI4LjI1MiAzNzZWMzIwQzIyOC4yNTIgMjk0Ljc4NCAyMzguODU5IDI3Mi4wNDQgMjU1Ljg1MyAyNTZDMjM4Ljg1OSAyMzkuOTU2IDIyOC4yNTIgMjE3LjIxNiAyMjguMjUyIDE5MlYxMzZDMjI4LjI1MiAxMTguMzI3IDIxMy45MjUgMTA0IDE5Ni4yNTIgMTA0SDE2MFYxMTJDMTYwIDEzOC41MSAxMzguNTEgMTYwIDExMiAxNjBINDhDMjEuNDkwMyAxNjAgMCAxMzguNTEgMCAxMTJWNDhaTTEwNCA0OEMxMDguNDE4IDQ4IDExMiA1MS41ODE3IDExMiA1NlYxMDRDMTEyIDEwOC40MTggMTA4LjQxOCAxMTIgMTA0IDExMkg1NkM1MS41ODE3IDExMiA0OCAxMDguNDE4IDQ4IDEwNFY1NkM0OCA1MS41ODE3IDUxLjU4MTcgNDggNTYgNDhIMTA0Wk00NTYgMjI0QzQ2MC40MTggMjI0IDQ2NCAyMjcuNTgyIDQ2NCAyMzJWMjgwQzQ2NCAyODQuNDE4IDQ2MC40MTggMjg4IDQ1NiAyODhINDA4QzQwMy41ODIgMjg4IDQwMCAyODQuNDE4IDQwMCAyODBWMjMyQzQwMCAyMjcuNTgyIDQwMy41ODIgMjI0IDQwOCAyMjRINDU2Wk0xMTIgNDA4QzExMiA0MDMuNTgyIDEwOC40MTggNDAwIDEwNCA0MDBINTZDNTEuNTgxNyA0MDAgNDggNDAzLjU4MiA0OCA0MDhWNDU2QzQ4IDQ2MC40MTggNTEuNTgxNyA0NjQgNTYgNDY0SDEwNEMxMDguNDE4IDQ2NCAxMTIg
NDYwLjQxOCAxMTIgNDU2VjQwOFoiIGZpbGw9IiM1NEI4QzkiLz4KPC9nPgo8ZGVmcz4KPGNsaXBQYXRoIGlkPSJjbGlwMF8xMTc3XzUxOCI+CjxyZWN0IHdpZHRoPSI1MTIiIGhlaWdodD0iNTEyIiBmaWxsPSJ3aGl0ZSIvPgo8L2NsaXBQYXRoPgo8L2RlZnM+Cjwvc3ZnPgo="},"displayName":"Merge","typeVersion":3,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":38,"icon":"fa:pen","name":"n8n-nodes-base.set","codex":{"data":{"alias":["Set","JS","JSON","Filter","Transform","Map"],"resources":{"generic":[{"url":"https://n8n.io/blog/learn-to-automate-your-factorys-incident-reporting-a-step-by-step-guide/","icon":"🏭","label":"Learn to Automate Your Factory's Incident Reporting: A Step by Step Guide"},{"url":"https://n8n.io/blog/2021-the-year-to-automate-the-new-you-with-n8n/","icon":"☀️","label":"2021: The Year to Automate the New You with n8n"},{"url":"https://n8n.io/blog/automatically-pulling-and-visualizing-data-with-n8n/","icon":"📈","label":"Automatically pulling and visualizing data with n8n"},{"url":"https://n8n.io/blog/database-monitoring-and-alerting-with-n8n/","icon":"📡","label":"Database Monitoring and Alerting with n8n"},{"url":"https://n8n.io/blog/automatically-adding-expense-receipts-to-google-sheets-with-telegram-mindee-twilio-and-n8n/","icon":"🧾","label":"Automatically Adding Expense Receipts to Google Sheets with Telegram, Mindee, Twilio, and n8n"},{"url":"https://n8n.io/blog/no-code-ecommerce-workflow-automations/","icon":"store","label":"6 e-commerce workflows to power up your Shopify s"},{"url":"https://n8n.io/blog/how-to-build-a-low-code-self-hosted-url-shortener/","icon":"🔗","label":"How to build a low-code, self-hosted URL shortener in 3 steps"},{"url":"https://n8n.io/blog/automate-your-data-processing-pipeline-in-9-steps-with-n8n/","icon":"⚙️","label":"Automate your data processing pipeline in 9 steps"},{"url":"https://n8n.io/blog/how-to-get-started-with-crm-automation-and-no-code-workflow-ideas/","icon":"👥","label":"How to get started with CRM automation (with 3 no-code workflow 
ideas"},{"url":"https://n8n.io/blog/5-tasks-you-can-automate-with-notion-api/","icon":"⚡️","label":"5 tasks you can automate with the new Notion API "},{"url":"https://n8n.io/blog/automate-google-apps-for-productivity/","icon":"💡","label":"15 Google apps you can combine and automate to increase productivity"},{"url":"https://n8n.io/blog/how-uproc-scraped-a-multi-page-website-with-a-low-code-workflow/","icon":" 🕸️","label":"How uProc scraped a multi-page website with a low-code workflow"},{"url":"https://n8n.io/blog/building-an-expense-tracking-app-in-10-minutes/","icon":"📱","label":"Building an expense tracking app in 10 minutes"},{"url":"https://n8n.io/blog/the-ultimate-guide-to-automate-your-video-collaboration-with-whereby-mattermost-and-n8n/","icon":"📹","label":"The ultimate guide to automate your video collaboration with Whereby, Mattermost, and n8n"},{"url":"https://n8n.io/blog/5-workflow-automations-for-mattermost-that-we-love-at-n8n/","icon":"🤖","label":"5 workflow automations for Mattermost that we love at n8n"},{"url":"https://n8n.io/blog/learn-to-build-powerful-api-endpoints-using-webhooks/","icon":"🧰","label":"Learn to Build Powerful API Endpoints Using Webhooks"},{"url":"https://n8n.io/blog/how-a-membership-development-manager-automates-his-work-and-investments/","icon":"📈","label":"How a Membership Development Manager automates his work and investments"},{"url":"https://n8n.io/blog/a-low-code-bitcoin-ticker-built-with-questdb-and-n8n-io/","icon":"📈","label":"A low-code bitcoin ticker built with QuestDB and n8n.io"},{"url":"https://n8n.io/blog/how-to-set-up-a-ci-cd-pipeline-with-no-code/","icon":"🎡","label":"How to set up a no-code CI/CD pipeline with GitHub and TravisCI"},{"url":"https://n8n.io/blog/benefits-of-automation-and-n8n-an-interview-with-hubspots-hugh-durkin/","icon":"🎖","label":"Benefits of automation and n8n: An interview with HubSpot's Hugh 
Durkin"},{"url":"https://n8n.io/blog/how-goomer-automated-their-operations-with-over-200-n8n-workflows/","icon":"🛵","label":"How Goomer automated their operations with over 200 n8n workflows"},{"url":"https://n8n.io/blog/aws-workflow-automation/","label":"7 no-code workflow automations for Amazon Web Services"}],"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.set/"}]},"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Data Transformation"]}}},"group":"[\"input\"]","defaults":{"name":"Edit Fields"},"iconData":{"icon":"pen","type":"icon"},"displayName":"Edit Fields (Set)","typeVersion":3,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":565,"icon":"fa:sticky-note","name":"n8n-nodes-base.stickyNote","codex":{"data":{"alias":["Comments","Notes","Sticky"],"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Helpers"]}}},"group":"[\"input\"]","defaults":{"name":"Sticky Note","color":"#FFD233"},"iconData":{"icon":"sticky-note","type":"icon"},"displayName":"Sticky Note","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":834,"icon":"file:code.svg","name":"n8n-nodes-base.code","codex":{"data":{"alias":["cpde","Javascript","JS","Python","Script","Custom Code","Function"],"details":"The Code node allows you to execute JavaScript in your workflow.","resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.code/"}]},"categories":["Development","Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Helpers","Data 
Transformation"]}}},"group":"[\"transform\"]","defaults":{"name":"Code"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxnIGNsaXAtcGF0aD0idXJsKCNjbGlwMF8xMTcxXzQ0MSkiPgo8cGF0aCBkPSJNMTcwLjI4MyA0OEgxOTYuNUMyMDMuMTI3IDQ4IDIwOC41IDQyLjYyNzQgMjA4LjUgMzZWMTJDMjA4LjUgNS4zNzI1OCAyMDMuMTI3IDAgMTk2LjUgMEgxNzAuMjgzQzEyNi4xIDAgOTAuMjgzIDM1LjgxNzIgOTAuMjgzIDgwVjE3NkM5MC4yODMgMjA2LjkyOCA2NS4yMTA5IDIzMiAzNC4yODMgMjMySDIzQzE2LjM3MjYgMjMyIDExIDIzNy4zNzIgMTEgMjQ0VjI2OEMxMSAyNzQuNjI3IDE2LjM3MjQgMjgwIDIyLjk5OTYgMjgwTDM0LjI4MyAyODBDNjUuMjEwOSAyODAgOTAuMjgzIDMwNS4wNzIgOTAuMjgzIDMzNlY0NDBDOTAuMjgzIDQ3OS43NjQgMTIyLjUxOCA1MTIgMTYyLjI4MyA1MTJIMTk2LjVDMjAzLjEyNyA1MTIgMjA4LjUgNTA2LjYyNyAyMDguNSA1MDBWNDc2QzIwOC41IDQ2OS4zNzMgMjAzLjEyNyA0NjQgMTk2LjUgNDY0SDE2Mi4yODNDMTQ5LjAyOCA0NjQgMTM4LjI4MyA0NTMuMjU1IDEzOC4yODMgNDQwVjMzNkMxMzguMjgzIDMwOS4wMjIgMTI4LjAxMSAyODQuNDQzIDExMS4xNjQgMjY1Ljk2MUMxMDYuMTA5IDI2MC40MTYgMTA2LjEwOSAyNTEuNTg0IDExMS4xNjQgMjQ2LjAzOUMxMjguMDExIDIyNy41NTcgMTM4LjI4MyAyMDIuOTc4IDEzOC4yODMgMTc2VjgwQzEzOC4yODMgNjIuMzI2OSAxNTIuNjEgNDggMTcwLjI4MyA0OFoiIGZpbGw9IiNGRjk5MjIiLz4KPHBhdGggZD0iTTMwNSAzNkMzMDUgNDIuNjI3NCAzMTAuMzczIDQ4IDMxNyA0OEgzNDIuOTc5QzM2MC42NTIgNDggMzc0Ljk3OCA2Mi4zMjY5IDM3NC45NzggODBWMTc2QzM3NC45NzggMjAyLjk3OCAzODUuMjUxIDIyNy41NTcgNDAyLjA5OCAyNDYuMDM5QzQwNy4xNTMgMjUxLjU4NCA0MDcuMTUzIDI2MC40MTYgNDAyLjA5OCAyNjUuOTYxQzM4NS4yNTEgMjg0LjQ0MyAzNzQuOTc4IDMwOS4wMjIgMzc0Ljk3OCAzMzZWNDMyQzM3NC45NzggNDQ5LjY3MyAzNjAuNjUyIDQ2NCAzNDIuOTc5IDQ2NEgzMTdDMzEwLjM3MyA0NjQgMzA1IDQ2OS4zNzMgMzA1IDQ3NlY1MDBDMzA1IDUwNi42MjcgMzEwLjM3MyA1MTIgMzE3IDUxMkgzNDIuOTc5QzM4Ny4xNjEgNTEyIDQyMi45NzggNDc2LjE4MyA0MjIuOTc4IDQzMlYzMzZDNDIyLjk3OCAzMDUuMDcyIDQ0OC4wNTEgMjgwIDQ3OC45NzkgMjgwSDQ5MEM0OTYuNjI3IDI4MCA1MDIgMjc0LjYyOCA1MDIgMjY4VjI0NEM1MDIgMjM3LjM3MyA0OTYuNjI4IDIzMiA0OTAgMjMyTDQ3OC45NzkgMjMyQzQ0OC4wNTEgMjMyIDQyMi45NzggMjA2LjkyOCA0MjIuOTc4IDE3NlY4MEM0MjIuOTc4IDM1L
jgxNzIgMzg3LjE2MSAwIDM0Mi45NzkgMEgzMTdDMzEwLjM3MyAwIDMwNSA1LjM3MjU4IDMwNSAxMlYzNloiIGZpbGw9IiNGRjk5MjIiLz4KPC9nPgo8ZGVmcz4KPGNsaXBQYXRoIGlkPSJjbGlwMF8xMTcxXzQ0MSI+CjxyZWN0IHdpZHRoPSI1MTIiIGhlaWdodD0iNTEyIiBmaWxsPSJ3aGl0ZSIvPgo8L2NsaXBQYXRoPgo8L2RlZnM+Cjwvc3ZnPgo="},"displayName":"Code","typeVersion":2,"nodeCategories":[{"id":5,"name":"Development"},{"id":9,"name":"Core Nodes"}]},{"id":1119,"icon":"fa:robot","name":"@n8n/n8n-nodes-langchain.agent","codex":{"data":{"alias":["LangChain","Chat","Conversational","Plan and Execute","ReAct","Tools"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Agents","Root Nodes"]}}},"group":"[\"transform\"]","defaults":{"name":"AI Agent","color":"#404040"},"iconData":{"icon":"robot","type":"icon"},"displayName":"AI Agent","typeVersion":3,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1145,"icon":"file:anthropic.svg","name":"@n8n/n8n-nodes-langchain.lmChatAnthropic","codex":{"data":{"alias":["claude","sonnet","opus"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatanthropic/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"Anthropic Chat Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI0NiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSI+PHBhdGggZmlsbD0iIzdEN0Q4NyIgZD0iTTMyLjczIDBoLTYuOTQ1TDM4LjQ1IDMyaDYuOTQ1ek0xMi42NjUgMCAwIDMyaDcuMDgybDIuNTktNi43MmgxMy4yNWwyLjU5IDYuNzJoNy4wODJMMTkuOTI5IDB6bS0uNzAyIDE5LjMzNyA0LjMzNC0xMS4yNDYgNC4zMzQgMTEuMjQ2eiIvPjwvc3ZnPg=="},"displayName":"Anthropic Chat 
Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1153,"icon":"file:openAiLight.svg","name":"@n8n/n8n-nodes-langchain.lmChatOpenAi","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"OpenAI Chat Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAiIGhlaWdodD0iNDAiIHZpZXdCb3g9IjAgMCA0MCA0MCIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPHBhdGggZD0iTTM2Ljg2NzEgMTYuMzcxOEMzNy43NzQ2IDEzLjY0OCAzNy40NjIxIDEwLjY2NDIgMzYuMDEwOCA4LjE4NjYxQzMzLjgyODIgNC4zODY1MyAyOS40NDA3IDIuNDMxNDkgMjUuMTU1NiAzLjM1MTUxQzIzLjI0OTMgMS4yMDM5NiAyMC41MTA1IC0wLjAxNzMxNDggMTcuNjM5MiAwLjAwMDE4NTUzM0MxMy4yNTkxIC0wLjAwOTgxNDY4IDkuMzcyNzMgMi44MTAyNSA4LjAyNTIgNi45Nzc4M0M1LjIxMTM5IDcuNTU0MSAyLjc4MjU4IDkuMzE1MzggMS4zNjEzIDExLjgxMTdDLTAuODM3NDkzIDE1LjYwMTggLTAuMzM2MjMyIDIwLjM3OTQgMi42MDEzMyAyMy42Mjk0QzEuNjkzODEgMjYuMzUzMiAyLjAwNjMyIDI5LjMzNzEgMy40NTc2IDMxLjgxNDZDNS42NDAxNSAzNS42MTQ3IDEwLjAyNzcgMzcuNTY5NyAxNC4zMTI4IDM2LjY0OTdDMTYuMjE3OSAzOC43OTczIDE4Ljk1NzkgNDAuMDE4NSAyMS44MjkyIDM5Ljk5OThDMjYuMjExOCA0MC4wMTEgMzAuMDk5NCAzNy4xODg1IDMxLjQ0NjkgMzMuMDE3MUMzNC4yNjA4IDMyLjQ0MDkgMzYuNjg5NiAzMC42Nzk2IDM4LjExMDggMjguMTgzM0M0MC4zMDcxIDI0LjM5MzIgMzkuODA0NiAxOS42MTk0IDM2Ljg2ODMgMTYuMzY5M0wzNi44NjcxIDE2LjM3MThaTTIxLjgzMTcgMzcuMzg2QzIwLjA3OCAzNy4zODg1IDE4LjM3OTIgMzYuNzc0NyAxNy4wMzI5IDM1LjY1MDlDMTcuMDk0MSAzNS42MTg0IDE3LjIwMDQgMzUuNTU5NyAxNy4yNjkxIDM1LjUxNzJMMjUuMjM0MyAzMC45MTcxQzI1LjY0MTggMzAuNjg1OCAyNS44OTE4IDMwLjI1MjEgMjUuODg5MyAyOS43ODMzVjE4LjU1NDNMMjkuMjU1NyAyMC40OTgxQzI5LjI5MTkgMjAuNTE1NiAyOS4zMTU3IDIwLjU1MDYgMjkuMzIwNyAyMC41OTA2VjI5Ljg4OTZDMjkuMzE1NyAzNC4wMjQ3IDI1Ljk2NjggMzcuMzc3MiAyMS44MzE3IDM3LjM4NlpNNS43MjY0IDMwLjU
wNzFDNC44NDc2MyAyOC45ODk2IDQuNTMxMzcgMjcuMjEwOCA0LjgzMjYzIDI1LjQ4NDVDNC44OTEzOCAyNS41MTk1IDQuOTk1MTMgMjUuNTgzMiA1LjA2ODg4IDI1LjYyNTdMMTMuMDM0MSAzMC4yMjU4QzEzLjQzNzggMzAuNDYyMSAxMy45Mzc4IDMwLjQ2MjEgMTQuMzQyOCAzMC4yMjU4TDI0LjA2NjggMjQuNjEwN1YyOC40OTgzQzI0LjA2OTMgMjguNTM4MyAyNC4wNTA1IDI4LjU3NyAyNC4wMTkzIDI4LjYwMkwxNS45Njc5IDMzLjI1MDlDMTIuMzgxNSAzNS4zMTU5IDcuODAxNDQgMzQuMDg4NCA1LjcyNzY1IDMwLjUwNzFINS43MjY0Wk0zLjYzMDEgMTMuMTIwNUM0LjUwNTEyIDExLjYwMDQgNS44ODY0IDEwLjQzNzkgNy41MzE0NCA5LjgzNDE1QzcuNTMxNDQgOS45MDI5IDcuNTI3NjkgMTAuMDI0MiA3LjUyNzY5IDEwLjEwOTJWMTkuMzEwNkM3LjUyNTE5IDE5Ljc3ODEgNy43NzUxOSAyMC4yMTE5IDguMTgxNDUgMjAuNDQzMUwxNy45MDU0IDI2LjA1N0wxNC41MzkxIDI4LjAwMDhDMTQuNTA1MyAyOC4wMjMzIDE0LjQ2MjggMjguMDI3IDE0LjQyNTMgMjguMDEwOEw2LjM3MjY2IDIzLjM1ODJDMi43OTM4MyAyMS4yODU2IDEuNTY2MzEgMTYuNzA2OCAzLjYyODg1IDEzLjEyMTdMMy42MzAxIDEzLjEyMDVaTTMxLjI4ODIgMTkuNTU2OUwyMS41NjQyIDEzLjk0MTdMMjQuOTMwNiAxMS45OTkyQzI0Ljk2NDMgMTEuOTc2NyAyNS4wMDY4IDExLjk3MjkgMjUuMDQ0MyAxMS45ODkyTDMzLjA5NyAxNi42MzhDMzYuNjgyMSAxOC43MDkzIDM3LjkxMDggMjMuMjk1NyAzNS44Mzk1IDI2Ljg4MDhDMzQuOTYzMyAyOC4zOTgzIDMzLjU4MzIgMjkuNTYwOCAzMS45Mzk1IDMwLjE2NThWMjAuNjg5NEMzMS45NDMyIDIwLjIyMTkgMzEuNjk0NSAxOS43ODk0IDMxLjI4OTQgMTkuNTU2OUgzMS4yODgyWk0zNC42MzgzIDE0LjUxNDJDMzQuNTc5NSAxNC40NzggMzQuNDc1OCAxNC40MTU1IDM0LjQwMiAxNC4zNzNMMjYuNDM2OCA5Ljc3Mjg5QzI2LjAzMzEgOS41MzY2NCAyNS41MzMxIDkuNTM2NjQgMjUuMTI4MSA5Ljc3Mjg5TDE1LjQwNDEgMTUuMzg4VjExLjUwMDRDMTUuNDAxNiAxMS40NjA0IDE1LjQyMDQgMTEuNDIxNyAxNS40NTE2IDExLjM5NjdMMjMuNTAzIDYuNzUxNThDMjcuMDg5NCA0LjY4Mjc5IDMxLjY3NDUgNS45MTQwNiAzMy43NDIgOS41MDE2NEMzNC42MTU4IDExLjAxNjcgMzQuOTMyIDEyLjc5MDUgMzQuNjM1OCAxNC41MTQySDM0LjYzODNaTTEzLjU3NDEgMjEuNDQzMUwxMC4yMDY1IDE5LjQ5OTRDMTAuMTcwMiAxOS40ODE5IDEwLjE0NjUgMTkuNDQ2OCAxMC4xNDE1IDE5LjQwNjhWMTAuMTA3OUMxMC4xNDQgNS45Njc4MSAxMy41MDI4IDIuNjEyNzQgMTcuNjQyOSAyLjYxNTI0QzE5LjM5NDIgMi42MTUyNCAyMS4wODkyIDMuMjMwMjUgMjIuNDM1NSA0LjM1MDI4QzIyLjM3NDMgNC4zODI3OCAyMi4yNjkzIDQuNDQxNTMgMjIuMTk5MiA0LjQ4NDAzTDE0LjIzNDEgOS4wODQxM0MxMy44MjY2IDkuMzE1MzggMTMuNTc2NiA5Ljc0Nzg5IDEzLjU3OTE
gMTAuMjE2N0wxMy41NzQxIDIxLjQ0MDZWMjEuNDQzMVpNMTUuNDAyOSAxNy41MDA2TDE5LjczNDIgMTQuOTk5M0wyNC4wNjU1IDE3LjQ5OTNWMjIuNTAwN0wxOS43MzQyIDI1LjAwMDdMMTUuNDAyOSAyMi41MDA3VjE3LjUwMDZaIiBmaWxsPSIjN0Q3RDg3Ii8+Cjwvc3ZnPgo="},"displayName":"OpenAI Chat Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1247,"icon":"fa:comments","name":"@n8n/n8n-nodes-langchain.chatTrigger","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/"}]},"categories":["Core Nodes","Langchain"]}},"group":"[\"trigger\"]","defaults":{"name":"When chat message received"},"iconData":{"icon":"comments","type":"icon"},"displayName":"Chat Trigger","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"},{"id":26,"name":"Langchain"}]},{"id":1262,"icon":"file:google.svg","name":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatgooglegemini/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"Google Gemini Chat 
Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDggNDgiPjxkZWZzPjxwYXRoIGlkPSJhIiBkPSJNNDQuNSAyMEgyNHY4LjVoMTEuOEMzNC43IDMzLjkgMzAuMSAzNyAyNCAzN2MtNy4yIDAtMTMtNS44LTEzLTEzczUuOC0xMyAxMy0xM2MzLjEgMCA1LjkgMS4xIDguMSAyLjlsNi40LTYuNEMzNC42IDQuMSAyOS42IDIgMjQgMiAxMS44IDIgMiAxMS44IDIgMjRzOS44IDIyIDIyIDIyYzExIDAgMjEtOCAyMS0yMiAwLTEuMy0uMi0yLjctLjUtNCIvPjwvZGVmcz48Y2xpcFBhdGggaWQ9ImIiPjx1c2UgeGxpbms6aHJlZj0iI2EiIG92ZXJmbG93PSJ2aXNpYmxlIi8+PC9jbGlwUGF0aD48cGF0aCBmaWxsPSIjRkJCQzA1IiBkPSJNMCAzN1YxMWwxNyAxM3oiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiNFQTQzMzUiIGQ9Im0wIDExIDE3IDEzIDctNi4xTDQ4IDE0VjBIMHoiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiMzNEE4NTMiIGQ9Im0wIDM3IDMwLTIzIDcuOSAxTDQ4IDB2NDhIMHoiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiM0Mjg1RjQiIGQ9Ik00OCA0OCAxNyAyNGwtNC0zIDM1LTEweiIgY2xpcC1wYXRoPSJ1cmwoI2IpIi8+PC9zdmc+"},"displayName":"Google Gemini Chat Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1263,"icon":"file:groq.svg","name":"@n8n/n8n-nodes-langchain.lmChatGroq","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatgroq/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"Groq Chat 
Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxzdmcKICAgaWQ9IkxheWVyXzIiCiAgIHZpZXdCb3g9IjAgMCA0OTkuOTk5OTkgNDk5Ljk5OTk5IgogICB2ZXJzaW9uPSIxLjEiCiAgIHdpZHRoPSI1MDAiCiAgIGhlaWdodD0iNTAwIgogICB4bWw6c3BhY2U9InByZXNlcnZlIgogICB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciCiAgIHhtbG5zOnN2Zz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjxkZWZzCiAgICAgaWQ9ImRlZnM0IiAvPjxnCiAgICAgaWQ9IlBBR0VTIj48Y2lyY2xlCiAgICAgICBzdHlsZT0iZmlsbDojZjU0ZjM1O2ZpbGwtb3BhY2l0eToxO3N0cm9rZS13aWR0aDoxLjEzNjIyIgogICAgICAgaWQ9InBhdGg0IgogICAgICAgY3g9IjI1MCIKICAgICAgIGN5PSIyNTAiCiAgICAgICByPSIyNTAiIC8+PHBhdGgKICAgICAgIGQ9Ik0gMjUwLjUzNjY0LDk3LjEyMjk5NCBDIDE5Mi43MTkzMSw5Ni41ODg2MzggMTQ1LjQ4MjIyLDE0Mi45NzA3NSAxNDQuOTQ3ODYsMjAwLjc4ODA4IGMgLTAuNTM0MzQsNTcuODE3MzMgNDUuODQ3NzcsMTA1LjA1NDQyIDEwMy42NjUxLDEwNS41ODg3NyBoIDM2LjMzNjIxIHYgLTM5LjIyMTc0IGggLTM0LjQxMjUzIGMgLTM2LjEyMjQ4LDAuNDI3NSAtNjUuNzI1OCwtMjguNTM0NjIgLTY2LjE1MzI5LC02NC42NTcwOCAtMC40Mjc0OSwtMzYuMTIyNDggMjguNTM0NjMsLTY1LjcyNTgxIDY0LjY1NzA4LC02Ni4xNTMzIGggMS40OTYyMSBjIDM2LjEyMjQ4LDAgNjUuNDA1MiwyOS4yODI3MiA2NS41MTIwNyw2NS40MDUyIHYgMCA5Ni4zOTc4MyAwIGMgMCwzNS44MDE4NyAtMjkuMTc1ODUsNjQuOTc3NzMgLTY0Ljg3MDgzLDY1LjQwNTIxIC0xNy4wOTk0MSwtMC4xMDY4OCAtMzMuNDUwNzEsLTcuMDUzNTEgLTQ1LjUyNzE3LC0xOS4xMjk5NSBsIC0yNy43ODY1LDI3Ljc4NjUxIGMgMTkuMjM2ODEsMTkuMzQzNyA0NS4zMTMzOSwzMC4zNTE0MyA3Mi41NjU1NiwzMC42NzIwNSBoIDEuMzg5MzMgYyA1Ny4wNjkyNCwtMC44NTQ5NyAxMDIuOTE3LC00Ny4xMzAyMiAxMDMuMjM3NiwtMTA0LjE5OTQ1IFYgMTk5LjI5MTg5IEMgMzUzLjY2NzM5LDE0Mi40MzYzOSAzMDcuMjg1MjcsOTcuMTIyOTk0IDI1MC41MzY2NCw5Ny4xMjI5OTQgWiIKICAgICAgIHN0eWxlPSJmaWxsOiNmZmZmZmY7c3Ryb2tlLXdpZHRoOjBweCIKICAgICAgIGlkPSJwYXRoMS0zIiAvPjwvZz48L3N2Zz4K"},"displayName":"Groq Chat 
Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1313,"icon":"fa:comments","name":"@n8n/n8n-nodes-langchain.chat","codex":{"data":{"alias":["human","wait","hitl","respond","approve","confirm","send","message"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.respondtochat/"}]},"categories":["Core Nodes","HITL","Langchain"],"subcategories":{"HITL":["Human in the Loop"]}}},"group":"[\"input\"]","defaults":{"name":"Chat"},"iconData":{"icon":"comments","type":"icon"},"displayName":"Chat","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"},{"id":26,"name":"Langchain"},{"id":28,"name":"HITL"}]}],"categories":[{"id":5,"name":"Engineering"},{"id":49,"name":"AI Summarization"}],"image":[]}}