{"workflow":{"id":14551,"name":"Detect human vs AI text using stylometric metrics and multi‑agent LLM debate","views":3,"recentViews":1,"totalViews":3,"createdAt":"2026-04-01T10:27:33.859Z","description":" \n### Stop guessing if text came from ChatGPT. Let three AI agents argue about it using forensic data.\n \nPaste any text and get a verdict on whether it was written by a human, AI, or a hybrid mix. Instead of trusting one black-box score, this workflow runs your text through statistical analysis and a three-agent debate where each agent challenges the others using hard numbers.\n \nThis is not another \"detect AI with AI\" template. The workflow measures six forensic markers first, then makes three separate agents argue about what those numbers mean. You see the raw data, the debate, and the final verdict with confidence scores.\n \n### How it works\n \nThe workflow runs in five stages:\n \n1. **Extract forensic metrics:** A code node measures burstiness (sentence length variation), type-token ratio (vocabulary diversity), hapax rate (words appearing once), repetition score (repeated phrases), transition density (filler words like \"furthermore\"), and AI fingerprints (100+ known LLM phrases stored in a data table). Short texts under 150 words get recalibrated because metrics are less reliable.\n \n2. **Agent 1 - The Scanner:** Reads the text cold with zero metrics. Gives a gut impression (human/AI/hybrid) based purely on instinct. Acts like an editor who has read thousands of manuscripts.\n \n3. **Agent 2 - Forensic Analyst:** Gets the text, all metrics, and Agent 1's verdict. Writes a data-driven report that must cite specific numbers. Either agrees or disagrees with Agent 1 and explains why using the forensic evidence.\n \n4. **Agent 3 - Devil's Advocate:** Gets everything above and argues the opposite of whatever Agent 2 concluded. If Agent 2 said AI, Agent 3 must argue human. Finds holes in the logic and metrics that got ignored.\n \n5. 
**Weighted verdict:** A code node scores all three agents (35% Analyst, 15% Scanner, 15% Devil's Advocate, 35% raw metrics) and classifies as human (score under 0.35), AI (score over 0.60), or AI-augmented (in between). Confidence is calculated separately so you get verdicts like \"AI with 67% confidence.\"\n \n### Chat output format\n \nThe chat response shows:\n \n* **Verdict badge:** 🙎🏻 Human-Written, 🤖 AI-Generated, or 🦾 AI-Augmented\n* **Confidence bar:** Visual bar (████████░░ 85%) showing how certain the verdict is\n* **Metrics table:** Five forensic markers with 🟥 AI or 🟩 Human flags\n* **Agent debate:** Three verdicts with reasoning. Agent 1's gut check, Agent 2's forensic report, Agent 3's counter-argument. Each shows classification and confidence percentage.\n \nExample output for AI text:\n```\n🤖 Verdict: AI-Generated\nConfidence: ████████░░ 87%\n \n📊 Stylometric Metrics:\nBurstiness: 0.18 🟥 AI\nVocabulary Diversity: 0.36 🟥 AI\nHapax Rate: 0.32 🟥 AI\nRepetition: 0.21 🟥 AI\nTransition Density: 0.024 🟥 AI\n \n🔎 Agent 1 (Gut Check): AI (90%)\n\"Monotonous rhythm, corporate vocabulary, zero personality\"\n \n🔬 Agent 2 (Data): AI (95%)\n\"Five of six metrics flag AI. Burstiness of 0.18 well below human threshold...\"\n \n😈 Agent 3 (Critic): AI-AUGMENTED (65%)\n\"Could be human technical writing. Transition density alone not conclusive...\"\n```\n \n### Self-updating fingerprint database\n \nA separate workflow branch runs monthly to keep the AI phrase list current:\n \n1. **Check existing words:** Reads all fingerprint phrases from the data table\n2. **Find new AI tells:** Asks an LLM what phrases modern models currently overuse\n3. **Filter duplicates:** Removes words already in the database\n4. **Add to table:** Stores new phrases for future detection\n \n**Requires:** A data table (Google Sheets, Airtable, or n8n Data Table) to store fingerprint words. 
The workflow includes a starter list of 100+ phrases like \"delve into,\" \"it's worth noting,\" \"as of my last update.\"\n \nLLM writing patterns shift fast. What worked for GPT-3 detection does not work for GPT-4. This keeps the detector current without manual updates.\n \n### Key benefits\n \n* **Three classifications instead of binary.** Human, AI, or AI-augmented. Most real content is hybrid.\n* **You see the reasoning.** Full agent debate included. When verdicts are borderline, you can read which argument won.\n* **Transparent metrics.** Raw numbers exposed with red/green flags. No hidden scoring.\n* **Self-updating detection.** A monthly workflow finds new AI phrase patterns as models evolve.\n* **Error-resilient.** If one agent fails, the workflow continues and redistributes weights.\n \n### Who this is for\n \n* Content teams verifying contractor submissions are not AI-generated\n* Educators checking student essays for AI assistance\n* Publishers screening submissions to maintain editorial standards\n* SEO teams ensuring content meets Google's helpful content guidelines\n* Researchers analyzing hybrid human-AI writing patterns\n \n### Setup\n \n* Add API credentials for at least one LLM provider (Groq, OpenAI, Gemini, or Anthropic)\n* Create a data table for AI fingerprint phrases or use n8n's built-in Data Table node\n* Populate the table with the starter list (included in workflow documentation)\n* Activate the workflow and open the chat interface\n* Paste text and wait 30-60 seconds for forensic analysis\n \n### Required APIs & credentials\n* At least one LLM provider: OpenAI, Anthropic, Google Gemini, Groq, \n  or any other provider with JSON output support. Each agent can use \n  a different provider or all can use the same one.\n  \n* Data storage for fingerprint phrases: n8n Data Table (built-in), \n  Google Sheets, or Airtable. 
The workflow checks this table to \n  identify known AI phrases during analysis.\n \n### How to customise it\n \n* **Swap models:** Each agent node has a chat model sub-node. Replace with any provider. Scanner works with smaller models. Analyst needs strong reasoning. Devil's Advocate needs good instruction-following.\n* **Tune thresholds:** Open Extract Stylometric Metrics code. Burstiness under 0.3 flags AI. Type-token ratio under 0.4 flags AI. Adjust for stricter or looser detection.\n* **Change agent weights:** Open Final Verdict code. Default is 35% Analyst, 15% Scanner, 15% Devil's Advocate, 35% metrics. Increase metric weight to trust data more.\n* **Modify agent personas:** Edit system prompts. Make Scanner more skeptical. Make Analyst cite sources. Make Devil's Advocate more aggressive.\n* **Add quality gate:** Drop a Filter node after verdict. Only proceed if confidence exceeds 70%.\n* **Batch process:** Replace Chat Trigger with Schedule Trigger looping over a file list.\n \n### Known limitations\n \nThe workflow works best on long-form content (500+ words). Short texts under 150 words produce less reliable metrics because statistical patterns need more data to emerge. The recalibration helps but is not perfect.\n \nAI fingerprint phrases evolve as models improve. GPT-5 might not use \"delve into\" but will have new tells. The self-updating workflow helps but lags current releases by a few weeks.\n \nThe three-agent debate architecture assumes disagreement is meaningful. For extremely niche topics where only one agent has relevant training data, the minority opinion might be correct but gets outvoted. 
Review the individual agent reasoning when dealing with specialized content.","workflow":{"id":"CLMJfjbdtPnwTu6S","meta":{"instanceId":"d1dc073e8e3059a23e2730f69cb1b90065a2ac39039fea0727fdf9bee77a9131","templateCredsSetupCompleted":true},"name":"AI Lie Detector: Forensic Stylometry Engine","tags":[],"nodes":[{"id":"40de2d94-0384-40b7-a572-c920d9dac3d1","name":"Sticky Note","type":"n8n-nodes-base.stickyNote","position":[-528,-16],"parameters":{"width":496,"height":768,"content":"## AI Lie Detector: Forensic Stylometry Engine\nPaste any text and three specialist agents debate whether it was written by a human, AI, or a mix of both.\n\n### How it works\n\nExtract: A code node computes stylometric metrics (burstiness, vocabulary density, repetition, sentence variance) from the raw text.\nScan: Agent 1 reads the text cold with no metrics and gives a gut reaction.\nAnalyze: Agent 2 gets the text, the metrics, and Agent 1's impression. Writes a forensic report grounded in data.\nChallenge: Agent 3 (Devil's Advocate) gets everything and argues the opposite case.\nJudge: A code node weighs the debate and metrics to produce the final verdict.\n\n### Setup\n\n- [ ] Add API credentials for the LLM providers you want to use\n- [ ] Activate the workflow and open the chat window\n- [ ] Paste any text and wait for the forensic analysis\n\n### Customization\nSwap any LLM for another. Adjust metric thresholds in the Extract Stylometric Metrics node. 
Modify agent personas in their system prompts."},"typeVersion":1},{"id":"203720a1-8f08-42b0-afd5-c4aa326832f5","name":"Format Chat Message","type":"n8n-nodes-base.code","position":[3536,336],"parameters":{"jsCode":"// Formats the forensic debate into a readable chat message with verdict badge, metrics table, and the debate chain.\n\nconst reportData = $input.first().json;\nconst finalVerdict = reportData.finalVerdict;\nconst agentDebate = reportData.debate;\nconst textMetrics = reportData.metrics;\nconst metricSignals = reportData.metricSignals;\n\nlet verdictBadge = \"\";\nif (finalVerdict.classification === \"human\") verdictBadge = \"🙎🏻 **Verdict: Human-Written**\";\nelse if (finalVerdict.classification === \"ai\") verdictBadge = \"🤖 **Verdict: AI-Generated**\";\nelse verdictBadge = \"🦾 **Verdict: AI-Augmented (Hybrid)**\";\n\nconst confidencePercent = Math.round(finalVerdict.confidence * 100);\nconst filledBarBlocks = Math.floor(confidencePercent / 10);\nconst emptyBarBlocks = 10 - filledBarBlocks;\nconst confidenceBar = \"█\".repeat(filledBarBlocks) + \"░\".repeat(emptyBarBlocks);\n\nlet chatMessage = `${verdictBadge}\\n`;\nchatMessage += `**Confidence:** ${confidenceBar} ${confidencePercent}%\\n\\n`;\n\nchatMessage += `**📊 Stylometric Metrics:**\\n`;\nchatMessage += `Burstiness: ${textMetrics.burstiness} ${textMetrics.burstiness < 0.3 ? '🟥 AI' : '🟩 Human'}\\n`;\nchatMessage += `Vocabulary Diversity: ${textMetrics.typeTokenRatio} ${textMetrics.typeTokenRatio < 0.4 ? '🟥 AI' : '🟩 Human'}\\n`;\nchatMessage += `Hapax Rate: ${textMetrics.hapaxRate} ${textMetrics.hapaxRate < 0.4 ? '🟥 AI' : '🟩 Human'}\\n`;\nchatMessage += `Repetition: ${textMetrics.repetitionScore} ${textMetrics.repetitionScore > 0.15 ? '🟥 AI' : '🟩 Human'}\\n`;\nchatMessage += `Transition Density: ${textMetrics.transitionDensity} ${textMetrics.transitionDensity > 0.015 ? 
'🟥 AI' : '🟩 Human'}\\n\\n`;\n\nchatMessage += `---\\n\\n`;\n\nchatMessage += `**🔎 Agent 1 (Gut Check):** ${agentDebate.scanner.impression.toUpperCase()} (${Math.round(agentDebate.scanner.confidence * 100)}%)\\n`;\nconst scannerReasoningText = agentDebate.scanner.gut_reasoning || \"\";\n\nconst maxScannerLength = 200;\nchatMessage += `*\"${scannerReasoningText.length > maxScannerLength ? scannerReasoningText.substring(0, maxScannerLength) + '...' : scannerReasoningText}\"*\\n\\n`;\n\nchatMessage += `**🔬 Agent 2 (Data):** ${agentDebate.analyst.classification.toUpperCase()} (${Math.round(agentDebate.analyst.confidence * 100)}%)\\n`;\nconst analystReportText = agentDebate.analyst.forensic_report || \"\";\n\nconst maxAnalystLength = 200;\nchatMessage += `*\"${analystReportText.length > maxAnalystLength ? analystReportText.substring(0, maxAnalystLength) + '...' : analystReportText}\"*\\n\\n`;\n\nchatMessage += `**😈 Agent 3 (Critic):** ${agentDebate.devilsAdvocate.counter_classification.toUpperCase()} (${Math.round(agentDebate.devilsAdvocate.confidence * 100)}%)\\n`;\nconst counterArgumentText = agentDebate.devilsAdvocate.counter_argument || \"\";\n\nconst maxDevilLength = 200;\nchatMessage += `*\"${counterArgumentText.length > maxDevilLength ? counterArgumentText.substring(0, maxDevilLength) + '...' 
: counterArgumentText}\"*\\n`;\n\nconst biggestFlawText = agentDebate.devilsAdvocate.strongest_weakness || \"\";\nif (biggestFlawText) {\n  chatMessage += `**Flaw Found:** ${biggestFlawText}\\n`;\n}\n\nreturn [{\n  json: {\n    output: chatMessage,\n  }\n}];"},"typeVersion":2},{"id":"7100eabf-34dc-4e36-baf1-c6b909f39875","name":"Final Verdict","type":"n8n-nodes-base.code","position":[3248,336],"parameters":{"jsCode":"// Weighs the full debate chain and raw metrics to produce the final classification, with short-text adjustments and AI fingerprint boosting.\n\nconst analystPackageOutput = $('Package Analyst Output').first().json;\nconst devilsAdvocateRawOutput = $input.first().json.output || $input.first().json.text || \"\";\n\nlet parsedDevilsAdvocateVerdict = {};\n\ntry {\n  let cleanedDevilOutput = devilsAdvocateRawOutput.trim();\n  if (cleanedDevilOutput.startsWith(\"```json\")) cleanedDevilOutput = cleanedDevilOutput.slice(7);\n  if (cleanedDevilOutput.startsWith(\"```\")) cleanedDevilOutput = cleanedDevilOutput.slice(3);\n  if (cleanedDevilOutput.endsWith(\"```\")) cleanedDevilOutput = cleanedDevilOutput.slice(0, -3);\n  parsedDevilsAdvocateVerdict = JSON.parse(cleanedDevilOutput.trim());\n} catch (parseError) {\n  parsedDevilsAdvocateVerdict = {\n    counter_classification: \"unknown\",\n    confidence: 0,\n    counter_argument: \"Devil's Advocate failed: \" + parseError.message,\n    strongest_weakness: \"N/A\",\n  };\n}\n\nconst scannerVerdict = analystPackageOutput.scannerVerdict;\nconst analystVerdict = analystPackageOutput.analystVerdict;\nconst devilsAdvocateVerdict = parsedDevilsAdvocateVerdict;\nconst textMetrics = analystPackageOutput.metrics;\n\nlet aiSignalCount = 0;\nlet humanSignalCount = 0;\n\nif (textMetrics.burstiness < 0.3) aiSignalCount += 1;\nelse humanSignalCount += 1;\n\nif (textMetrics.typeTokenRatio < 0.4) aiSignalCount += 1;\nelse humanSignalCount += 1;\n\nif (textMetrics.hapaxRate < 0.4) aiSignalCount += 1;\nelse humanSignalCount += 
1;\n\nif (textMetrics.repetitionScore > 0.15) aiSignalCount += 1;\nelse humanSignalCount += 1;\n\nif (textMetrics.transitionDensity > 0.015) aiSignalCount += 1;\nelse humanSignalCount += 1;\n\nif (textMetrics.aiFingerprintCount >= 8) aiSignalCount += 4;\nelse if (textMetrics.aiFingerprintCount >= 5) aiSignalCount += 2;\nelse if (textMetrics.aiFingerprintCount >= 3) aiSignalCount += 1;\n\nif (textMetrics.transitionDensity >= 0.05) aiSignalCount += 3;\nelse if (textMetrics.transitionDensity >= 0.03) aiSignalCount += 2;\n\nif (textMetrics.totalWords < 150) {\n  if (textMetrics.burstiness < 0.2) aiSignalCount += 2;\n  if (textMetrics.transitionDensity > 0.02) aiSignalCount += 2;\n  if (textMetrics.aiFingerprintCount >= 3) aiSignalCount += 1;\n}\n\nconst totalSignalCount = aiSignalCount + humanSignalCount;\n\nconst isScannerValid = scannerVerdict.impression && scannerVerdict.impression !== \"unknown\";\nconst isAnalystValid = analystVerdict.classification && analystVerdict.classification !== \"unknown\";\nconst isDevilValid = devilsAdvocateVerdict.counter_classification && devilsAdvocateVerdict.counter_classification !== \"unknown\";\n\nlet analystWeight = 0.35;\nlet scannerWeight = 0.15;\nlet devilsAdvocateWeight = 0.15;\nlet metricsWeight = 0.35;\n\nif (!isScannerValid) {\n  metricsWeight += scannerWeight;\n  scannerWeight = 0;\n}\n\nconst metricAiRatio = totalSignalCount > 0 ? aiSignalCount / totalSignalCount : 0.5;\n\nif (metricAiRatio >= 0.70 || metricAiRatio <= 0.30) {\n  metricsWeight += 0.10;\n  devilsAdvocateWeight -= 0.10;\n}\n\nfunction convertClassificationToScore(classification) {\n  if (classification === \"ai\") return 1.0;\n  if (classification === \"ai-augmented\") return 0.5;\n  if (classification === \"human\") return 0.0;\n  return 0.5;\n}\n\nconst scannerNumericScore = isScannerValid\n  ? convertClassificationToScore(scannerVerdict.impression) * scannerVerdict.confidence\n  : 0;\n\nconst analystNumericScore = isAnalystValid\n  ? 
convertClassificationToScore(analystVerdict.classification) * analystVerdict.confidence\n  : 0.5;\n\nconst devilsAdvocateNumericScore = isDevilValid\n  ? convertClassificationToScore(devilsAdvocateVerdict.counter_classification) * devilsAdvocateVerdict.confidence\n  : 0.5;\n\nconst metricsNumericScore = totalSignalCount > 0 ? aiSignalCount / totalSignalCount : 0.5;\n\nconst combinedWeightedScore =\n  (analystNumericScore * analystWeight) +\n  (scannerNumericScore * scannerWeight) +\n  (devilsAdvocateNumericScore * devilsAdvocateWeight) +\n  (metricsNumericScore * metricsWeight);\n\nlet finalClassification = \"ai-augmented\";\nlet finalConfidenceScore = 0.5;\n\nif (combinedWeightedScore >= 0.60) {\n  finalClassification = \"ai\";\n  finalConfidenceScore = Math.min(0.95, 0.5 + combinedWeightedScore);\n} else if (combinedWeightedScore <= 0.35) {\n  finalClassification = \"human\";\n  finalConfidenceScore = Math.min(0.95, 0.5 + (1 - combinedWeightedScore));\n} else {\n  finalClassification = \"ai-augmented\";\n  finalConfidenceScore = 0.5 + Math.abs(combinedWeightedScore - 0.5);\n}\n\nfinalConfidenceScore = Math.round(finalConfidenceScore * 100) / 100;\n\nconst failedAgentNames = [];\nif (!isScannerValid) failedAgentNames.push(\"Scanner\");\nif (!isAnalystValid) failedAgentNames.push(\"Forensic Analyst\");\nif (!isDevilValid) failedAgentNames.push(\"Devil's Advocate\");\n\nconst allAgentVotes = [];\nif (isScannerValid) allAgentVotes.push(scannerVerdict.impression);\nif (isAnalystValid) allAgentVotes.push(analystVerdict.classification);\nif (isDevilValid) allAgentVotes.push(devilsAdvocateVerdict.counter_classification);\n\nconst voteCountsByClassification = {};\nfor (const vote of allAgentVotes) {\n  voteCountsByClassification[vote] = (voteCountsByClassification[vote] || 0) + 1;\n}\n\nreturn [{\n  json: {\n    finalVerdict: {\n      classification: finalClassification,\n      confidence: finalConfidenceScore,\n      weightedScore: Math.round(combinedWeightedScore * 100) 
/ 100,\n    },\n    debate: {\n      scanner: scannerVerdict,\n      analyst: analystVerdict,\n      devilsAdvocate: devilsAdvocateVerdict,\n    },\n    metrics: textMetrics,\n    metricSignals: { ai: aiSignalCount, human: humanSignalCount },\n    voteCounts: voteCountsByClassification,\n    originalText: analystPackageOutput.originalText,\n    failedAgents: failedAgentNames,\n  }\n}];"},"typeVersion":2},{"id":"a552c123-5e25-4997-9bab-1627281b285a","name":"Agent 3 - Devil's Advocate","type":"@n8n/n8n-nodes-langchain.agent","onError":"continueRegularOutput","position":[2736,336],"parameters":{"text":"==You are the Devil's Advocate. Two agents have already analyzed this text. Your ONLY job is to argue the OPPOSITE of the Forensic Analyst's conclusion. If they said AI, you argue human. If they said human, you argue AI. You must find real weaknesses in their reasoning.\n\n===START OF TEXT TO ANALYZE===\n{{ $json.originalText }}\n===END OF TEXT TO ANALYZE===\n\nSCANNER'S IMPRESSION:\nClassification: {{ $json.scannerVerdict.impression }} ({{ $json.scannerVerdict.confidence }} confidence)\nReasoning: {{ $json.scannerVerdict.gut_reasoning }}\n\nFORENSIC ANALYST'S VERDICT:\nClassification: {{ $json.analystVerdict.classification }} ({{ $json.analystVerdict.confidence }} confidence)\nReport: {{ $json.analystVerdict.forensic_report }}\nAgrees with Scanner: {{ $json.analystVerdict.agrees_with_scanner }}\n\nMETRICS:\n- Burstiness: {{ $json.metrics.burstiness }}\n- Type-Token Ratio: {{ $json.metrics.typeTokenRatio }}\n- Hapax Rate: {{ $json.metrics.hapaxRate }}\n- Repetition Score: {{ $json.metrics.repetitionScore }}\n- Transition Density: {{ $json.metrics.transitionDensity }}\n\nReturn ONLY valid JSON on a single line with no line breaks inside values:\n{\"counter_classification\": \"human or ai or ai-augmented\", \"confidence\": 0.60, \"counter_argument\": \"Aggressively argue why the Analyst is wrong. Find holes in their logic. 
Point to metrics they misinterpreted or ignored.\", \"strongest_weakness\": \"What is the single biggest flaw in the Analysts reasoning?\"}","options":{"systemMessage":"You are a hostile cross-examiner. Your job is to destroy the previous analysis. Do not agree with the Forensic Analyst under any circumstances. Find every possible flaw. Return valid JSON only."},"promptType":"define"},"typeVersion":3.1},{"id":"4fda3540-33e8-4bde-a126-acdf525d0001","name":"Package Analyst Output","type":"n8n-nodes-base.code","position":[2480,336],"parameters":{"jsCode":"// Packages the Forensic Analyst's raw output together with everything accumulated so far so the Devil's Advocate has the full debate history.\n\nconst scannerPackageOutput = $('Package Scanner Output').first().json;\nconst analystRawOutput = $input.first().json.output || $input.first().json.text || \"\";\n\nlet parsedAnalystVerdict = {};\n\ntry {\n  let cleanedAnalystOutput = analystRawOutput.trim();\n  if (cleanedAnalystOutput.startsWith(\"```json\")) cleanedAnalystOutput = cleanedAnalystOutput.slice(7);\n  if (cleanedAnalystOutput.startsWith(\"```\")) cleanedAnalystOutput = cleanedAnalystOutput.slice(3);\n  if (cleanedAnalystOutput.endsWith(\"```\")) cleanedAnalystOutput = cleanedAnalystOutput.slice(0, -3);\n  parsedAnalystVerdict = JSON.parse(cleanedAnalystOutput.trim());\n} catch (parseError) {\n  parsedAnalystVerdict = {\n    classification: \"unknown\",\n    confidence: 0,\n    forensic_report: \"Analyst failed to produce valid output: \" + parseError.message,\n    agrees_with_scanner: false,\n  };\n}\n\nreturn [{\n  json: {\n    originalText: scannerPackageOutput.originalText,\n    metrics: scannerPackageOutput.metrics,\n    scannerVerdict: scannerPackageOutput.scannerVerdict,\n    analystVerdict: parsedAnalystVerdict,\n  }\n}];"},"typeVersion":2},{"id":"10cd22d8-66b5-46e5-ac84-154c276b7f16","name":"Agent 2 - Forensic 
Analyst","type":"@n8n/n8n-nodes-langchain.agent","onError":"continueRegularOutput","position":[2144,336],"parameters":{"text":"=You are a forensic linguist. An initial scanner has already reviewed this text and given a gut impression. Your job is different: you have hard data. Use the stylometric metrics below to build a rigorous forensic case.\n\nTEXT:\n{{ $json.originalText }}\n\nSCANNER'S IMPRESSION:\nClassification: {{ $json.scannerVerdict.impression }}\nConfidence: {{ $json.scannerVerdict.confidence }}\nReasoning: {{ $json.scannerVerdict.gut_reasoning }}\n\nSTYLOMETRIC METRICS:\n- Burstiness: {{ $json.metrics.burstiness }} (sentence length variation, higher = more human-like)\n- Type-Token Ratio: {{ $json.metrics.typeTokenRatio }} (vocabulary diversity, lower = more AI-like)\n- Hapax Rate: {{ $json.metrics.hapaxRate }} (words used only once, lower = constrained vocabulary)\n- Repetition Score: {{ $json.metrics.repetitionScore }} (bigram repetition, higher = more repetitive)\n- Transition Density: {{ $json.metrics.transitionDensity }} (filler/transition words, higher = AI signal)\n- Avg Sentence Length: {{ $json.metrics.avgSentenceLength }}\n- Sentence Variance: {{ $json.metrics.sentenceLengthVariance }}\n- Paragraph Variance: {{ $json.metrics.paragraphVariance }}\n\nReturn ONLY valid JSON:\n{\n  \"classification\": \"human\" or \"ai\" or \"ai-augmented\",\n  \"confidence\": 0.85,\n  \"forensic_report\": \"Write a detailed forensic analysis. Reference specific metrics by number. Explain whether the data supports or contradicts the Scanner's impression. Identify the strongest signals.\",\n  \"agrees_with_scanner\": true or false\n}","options":{"systemMessage":"You are a forensic linguist who relies on data over instinct. Your analysis must reference the specific metric values provided. Do not speculate without data. 
Return valid JSON only."},"promptType":"define"},"typeVersion":3.1},{"id":"07cc2c22-851b-4525-9a41-d2364597b76b","name":"Package Scanner Output","type":"n8n-nodes-base.code","position":[1904,336],"parameters":{"jsCode":"// Packages the Scanner agent's raw output together with the original text and metrics so the next agent has full context.\n\nconst metricsNodeOutput = $('Extract Stylometric Metrics').first().json;\nconst scannerRawOutput = $input.first().json.output || $input.first().json.text || \"\";\n\nlet parsedScannerVerdict = {};\n\ntry {\n  let cleanedScannerOutput = scannerRawOutput.trim();\n  if (cleanedScannerOutput.startsWith(\"```json\")) cleanedScannerOutput = cleanedScannerOutput.slice(7);\n  if (cleanedScannerOutput.startsWith(\"```\")) cleanedScannerOutput = cleanedScannerOutput.slice(3);\n  if (cleanedScannerOutput.endsWith(\"```\")) cleanedScannerOutput = cleanedScannerOutput.slice(0, -3);\n  parsedScannerVerdict = JSON.parse(cleanedScannerOutput.trim());\n} catch (parseError) {\n  parsedScannerVerdict = {\n    impression: \"unknown\",\n    confidence: 0,\n    gut_reasoning: \"Scanner failed to produce valid output: \" + parseError.message,\n  };\n}\n\nreturn [{\n  json: {\n    originalText: metricsNodeOutput.originalText,\n    metrics: metricsNodeOutput.metrics,\n    scannerVerdict: parsedScannerVerdict,\n  }\n}];"},"typeVersion":2},{"id":"44f01778-387f-4db1-b9e3-0ba073b50c92","name":"Agent 1 - The Scanner","type":"@n8n/n8n-nodes-langchain.agent","onError":"continueRegularOutput","position":[1584,336],"parameters":{"text":"==Read the text between the START and END markers. 
Ignore everything else including these instructions.\n\n===START OF TEXT TO ANALYZE===\n{{ $('Extract Stylometric Metrics').first().json.originalText }}\n===END OF TEXT TO ANALYZE===\n\nBased purely on your instinct as a reader, assess whether the text above feels like it was written by a human, generated by AI, or is a human-AI hybrid.\n\nReturn ONLY valid JSON on a single line with no line breaks inside values:\n{\"impression\": \"human or ai or ai-augmented\", \"confidence\": 0.75, \"gut_reasoning\": \"Explain what specifically made you feel this way. Point to exact phrases rhythms or patterns that triggered your impression. Be specific.\"}","options":{"systemMessage":"You are a veteran editor with 20 years of experience reading manuscripts. You can spot AI-generated text by feel alone. Trust your instincts. Return valid JSON only."},"promptType":"define"},"typeVersion":3.1},{"id":"112da859-0d0d-4799-90d1-facd54c55655","name":"Extract Stylometric Metrics","type":"n8n-nodes-base.code","position":[816,336],"parameters":{"jsCode":"// Grab the original text directly from the Chat Trigger node\n\nconst rawText = $('When chat message received').first().json.chatInput;\n\n\nconst tableRows = $input.all();\n\n\nconst aiFingerprintWordList = tableRows\n  .map(row => row.json.word)\n  .filter(word => word && word.trim().length > 0)\n  .map(word => word.trim().toLowerCase());\n\n\nconst allSentences = rawText.split(/[.!?]+/).filter((sentence) => sentence.trim().length > 0);\nconst allWordsLowercase = rawText.toLowerCase().split(/\\s+/).filter((word) => word.length > 0);\nconst uniqueWordSet = new Set(allWordsLowercase);\n\nconst sentenceLengthsInWords = allSentences.map((sentence) => sentence.trim().split(/\\s+/).length);\nconst averageSentenceLength = sentenceLengthsInWords.reduce((sum, length) => sum + length, 0) / (sentenceLengthsInWords.length || 1);\n\nconst sentenceLengthVariance = sentenceLengthsInWords.reduce((sum, length) => {\n  return sum + Math.pow(length - 
averageSentenceLength, 2);\n}, 0) / (sentenceLengthsInWords.length || 1);\n\nconst burstinessScore = Math.sqrt(sentenceLengthVariance) / (averageSentenceLength || 1);\n\nconst typeTokenRatio = allWordsLowercase.length > 0 ? uniqueWordSet.size / allWordsLowercase.length : 0;\n\nconst wordFrequencyMap = {};\nfor (const word of allWordsLowercase) {\n  wordFrequencyMap[word] = (wordFrequencyMap[word] || 0) + 1;\n}\nconst wordsAppearingOnce = Object.values(wordFrequencyMap).filter((count) => count === 1).length;\nconst hapaxLegomenaRate = uniqueWordSet.size > 0 ? wordsAppearingOnce / uniqueWordSet.size : 0;\n\nconst allBigrams = [];\nfor (let i = 0; i < allWordsLowercase.length - 1; i++) {\n  allBigrams.push(allWordsLowercase[i] + \" \" + allWordsLowercase[i + 1]);\n}\nconst uniqueBigramSet = new Set(allBigrams);\nconst bigramRepetitionScore = allBigrams.length > 0 ? 1 - (uniqueBigramSet.size / allBigrams.length) : 0;\n\nconst allParagraphs = rawText.split(/\\n\\n+/).filter((paragraph) => paragraph.trim().length > 0);\nconst paragraphLengthsInWords = allParagraphs.map((paragraph) => paragraph.trim().split(/\\s+/).length);\nconst averageParagraphLength = paragraphLengthsInWords.reduce((sum, length) => sum + length, 0) / (paragraphLengthsInWords.length || 1);\nconst paragraphLengthVariance = paragraphLengthsInWords.reduce((sum, length) => {\n  return sum + Math.pow(length - averageParagraphLength, 2);\n}, 0) / (paragraphLengthsInWords.length || 1);\n\nconst detectedFingerprintWords = [];\nconst fingerprintMatchCount = allWordsLowercase.filter((word) => {\n  if (aiFingerprintWordList.includes(word)) {\n    detectedFingerprintWords.push(word);\n    return true;\n  }\n  return false;\n}).length;\nconst fingerprintDensity = allWordsLowercase.length > 0 ? 
fingerprintMatchCount / allWordsLowercase.length : 0;\n\nconst extractedMetrics = {\n  totalWords: allWordsLowercase.length,\n  totalSentences: allSentences.length,\n  totalParagraphs: allParagraphs.length,\n  avgSentenceLength: Math.round(averageSentenceLength * 100) / 100,\n  sentenceLengthVariance: Math.round(sentenceLengthVariance * 100) / 100,\n  burstiness: Math.round(burstinessScore * 100) / 100,\n  typeTokenRatio: Math.round(typeTokenRatio * 100) / 100,\n  hapaxRate: Math.round(hapaxLegomenaRate * 100) / 100,\n  repetitionScore: Math.round(bigramRepetitionScore * 100) / 100,\n  avgParagraphLength: Math.round(averageParagraphLength * 100) / 100,\n  paragraphVariance: Math.round(paragraphLengthVariance * 100) / 100,\n  transitionDensity: Math.round(fingerprintDensity * 1000) / 1000,\n  aiFingerprintsFound: detectedFingerprintWords,\n  aiFingerprintCount: detectedFingerprintWords.length,\n};\n\nreturn [{\n  json: {\n    originalText: rawText,\n    metrics: extractedMetrics,\n  }\n}];"},"typeVersion":2},{"id":"dab14166-da47-45d1-9d6a-ffc82387854d","name":"Sticky Note5","type":"n8n-nodes-base.stickyNote","disabled":true,"position":[3984,128],"parameters":{"color":7,"width":448,"height":384,"content":"## Chat Response\n\nFormats the forensic report and sends it to the user"},"typeVersion":1},{"id":"ba54ff47-b150-4190-802b-ea7d78fa2d4d","name":"Sticky Note4","type":"n8n-nodes-base.stickyNote","position":[3120,128],"parameters":{"color":7,"width":768,"height":384,"content":"## Final Verdict\n\nWeighs the full debate chain and metrics to produce the classification"},"typeVersion":1},{"id":"7a7b2ef3-c43e-410b-8ccc-52f995773656","name":"Sticky Note3","type":"n8n-nodes-base.stickyNote","position":[1200,64],"parameters":{"color":7,"width":1840,"height":640,"content":"## Sequential Forensic Debate\n\nThree specialists analyze the text in sequence, each building on the previous agent's output"},"typeVersion":1},{"id":"2b6e3a76-b40c-4d72-aaa1-75870810ca35","name":"Sticky 
Note2","type":"n8n-nodes-base.stickyNote","position":[624,160],"parameters":{"color":7,"width":496,"height":368,"content":"## Metrics Extraction\n\nComputes burstiness, vocabulary density, repetition, sentence variance from raw text"},"typeVersion":1},{"id":"bf5b6ba1-8999-4725-8fc7-59a764cd5063","name":"When chat message received","type":"@n8n/n8n-nodes-langchain.chatTrigger","position":[144,336],"webhookId":"e447f182-a406-40b9-aba0-a4f56b52b82d","parameters":{"public":true,"options":{"responseMode":"responseNodes"},"initialMessages":"Hi there! 👋\nPaste any text and I'll analyze whether it was written by a human, AI, or a mix of both."},"typeVersion":1.4},{"id":"ce245602-ee8c-4330-811e-86d94982efb0","name":"Schedule Trigger","type":"n8n-nodes-base.scheduleTrigger","position":[176,1104],"parameters":{"rule":{"interval":[{"field":"months"}]}},"typeVersion":1.2},{"id":"61a36f67-e9b7-4d9a-a4b9-fea4afb63168","name":"Format Existing List","type":"n8n-nodes-base.code","position":[624,1104],"parameters":{"jsCode":"// Collects all existing fingerprint words from the data table into a single comma-separated string for the LLM to reference.\n\nconst words = $input.all().map(item => item.json.word).filter(w => w);\nreturn [{ json: { existingWords: words.join(', ') } }];"},"typeVersion":2},{"id":"57624427-95ba-4da5-8877-1c2b69c57339","name":"Find New Words","type":"@n8n/n8n-nodes-langchain.chainLlm","position":[848,1104],"parameters":{"text":"=You are an expert forensic linguist tracking AI text generation trends. Identify 5 NEW vocabulary words that modern LLMs (like GPT-4 or Claude) currently overuse in their outputs (e.g., flowery, corporate, or repetitive filler words).\n\nCRITICAL: Do NOT include any of these words we already track:\n{{ $json.existingWords }}\n\nReturn ONLY a comma-separated list of the 5 new words in lowercase. 
No intro, no bullet points, no extra text.","promptType":"define"},"typeVersion":1.4},{"id":"266422c6-5a29-4651-8c5e-59da049372a3","name":"Split into Rows","type":"n8n-nodes-base.code","position":[1184,1104],"parameters":{"jsCode":"// Splits the LLM's comma-separated response into individual words and formats each as a separate row for saving to the data table.\n\nconst response = $input.first().json.text || $input.first().json.output || \"\";\nconst newWords = response.split(',').map(w => w.trim().toLowerCase()).filter(w => w.length > 0);\n\nreturn newWords.map(word => ({\n  json: { word: word }\n}));"},"typeVersion":2},{"id":"7343b1a2-ae04-4f91-b962-66854bf1e084","name":"Agent Orchestrator","type":"@n8n/n8n-nodes-langchain.chat","position":[1328,336],"webhookId":"30a9f8bc-058d-4050-9d98-98ee604fa9ad","parameters":{"message":"🔍 Analyzing your text... Three specialist agents are about to debate it. This takes 30-60 seconds.","options":{}},"typeVersion":1.3},{"id":"46a11b60-92a3-409d-916f-590f956e070b","name":"Load Fingerprint List","type":"n8n-nodes-base.dataTable","position":[368,336],"parameters":{"operation":"get","dataTableId":{"__rl":true,"mode":"list","value":"1uOX45z3usViNlZs","cachedResultUrl":"/projects/hDHhdcMr4jVn06kt/datatables/1uOX45z3usViNlZs","cachedResultName":"AIFingerprints"}},"typeVersion":1.1},{"id":"7b6f2eab-fd4a-48e0-89f5-ce034a5d118b","name":"Send Final Report","type":"@n8n/n8n-nodes-langchain.chat","position":[4080,336],"webhookId":"1f564643-1779-4463-beff-fef07b27395f","parameters":{"message":"={{ $json.output }}","options":{}},"typeVersion":1.3},{"id":"9b2a6106-ea4b-4aaf-b8d5-f740f851664f","name":"Check Existing 
Words","type":"n8n-nodes-base.dataTable","position":[384,1104],"parameters":{"operation":"get","dataTableId":{"__rl":true,"mode":"list","value":"1uOX45z3usViNlZs","cachedResultUrl":"/projects/hDHhdcMr4jVn06kt/datatables/1uOX45z3usViNlZs","cachedResultName":"AIFingerprints"}},"typeVersion":1.1},{"id":"54f505bd-79f5-4666-86f8-2116dbaca755","name":"Save New Fingerprints","type":"n8n-nodes-base.dataTable","position":[1392,1104],"parameters":{"columns":{"value":{},"schema":[{"id":"word","type":"string","display":true,"removed":false,"readOnly":false,"required":false,"displayName":"word","defaultMatch":false}],"mappingMode":"autoMapInputData","matchingColumns":["word"],"attemptToConvertTypes":false,"convertFieldsToString":false},"options":{},"dataTableId":{"__rl":true,"mode":"list","value":"1uOX45z3usViNlZs","cachedResultUrl":"/projects/hDHhdcMr4jVn06kt/datatables/1uOX45z3usViNlZs","cachedResultName":"AIFingerprints"}},"typeVersion":1.1},{"id":"3774aae6-d1f8-4626-9184-c997d755299b","name":"LLM - Generator","type":"@n8n/n8n-nodes-langchain.lmChatGroq","position":[784,1312],"parameters":{"model":"openai/gpt-oss-safeguard-20b","options":{}},"typeVersion":1},{"id":"bf8a5267-56eb-486c-84fd-761f6964ca53","name":"Sticky Note1","type":"n8n-nodes-base.stickyNote","position":[-496,928],"parameters":{"width":464,"height":528,"content":"##  Fingerprint Generator Section\n**Runs:** Monthly (1st at midnight) OR manual trigger\n\n**Purpose:** Keeps detection current by asking an LLM to generate fresh AI fingerprint words based on latest model patterns (GPT-4o, Claude 3.5+, Gemini 1.5+, Llama 3.3+).\n\n**Flow:**\n1. Schedule Trigger → Get current date\n2. Load existing fingerprints from data table\n3. Ask LLM: \"What new AI markers emerged this month?\"\n4. Compare with existing words (avoid duplicates)\n5. 
Insert new words into the data table\n\n**Customization:**\n- Change schedule: Edit the Schedule Trigger interval (default: monthly, 1st at midnight)\n- Adjust prompt: Edit \"Find New Words\" node\n- Different LLM: Swap \"LLM - Generator\" node\n\n**First-Time Setup:**\nRun this section manually several times to seed the data table (the prompt returns 5 new words per run) until it holds roughly 80-100 fingerprint words."},"typeVersion":1},{"id":"da2079a1-1c31-4e42-ba48-dfcc3c90717a","name":"Sticky Note11","type":"n8n-nodes-base.stickyNote","position":[80,928],"parameters":{"color":7,"width":1584,"height":528,"content":"## Generator (Monthly Auto-Update)\n\nRuns 1st of each month: Loads existing words → Asks LLM for new AI fingerprints → Saves to data table\n\n**First runs:** Seed the initial 80-100 fingerprint words (5 per run)"},"typeVersion":1},{"id":"0fa04de9-feec-40da-8804-52804d5ee7b7","name":"Sticky Note12","type":"n8n-nodes-base.stickyNote","position":[32,160],"parameters":{"color":7,"width":560,"height":368,"content":"## Input & Metrics Extraction\n\nChat trigger receives text → Loads fingerprints from data table → Extracts stylometric signals"},"typeVersion":1},{"id":"87687ccf-8cbc-4377-8a40-1070d8697dc4","name":"LLM - Devil's Advocate","type":"@n8n/n8n-nodes-langchain.lmChatOpenAi","onError":"continueRegularOutput","position":[2800,560],"parameters":{"model":{"__rl":true,"mode":"list","value":"gpt-5-mini"},"options":{},"builtInTools":{}},"typeVersion":1.3},{"id":"c65987f4-612d-4699-a982-99cde99d1158","name":"LLM Scanner","type":"@n8n/n8n-nodes-langchain.lmChatAnthropic","position":[1456,544],"parameters":{"model":{"__rl":true,"mode":"list","value":"claude-sonnet-4-5-20250929","cachedResultName":"Claude Sonnet 4.5"},"options":{}},"credentials":{"anthropicApi":{"id":"9nzHwX0Ed87LaDrh","name":"Anthropic account"}},"typeVersion":1.3},{"id":"92bec431-ff76-4ecb-a1a1-3c979e56f3cd","name":"LLM - 
Analyst","type":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","position":[2016,544],"parameters":{"options":{}},"credentials":{"googlePalmApi":{"id":"qQGrvqnSPqWFH6I6","name":"Google Gemini(PaLM) Api account 5"}},"typeVersion":1}],"active":false,"pinData":{},"settings":{"binaryMode":"separate","callerPolicy":"workflowsFromSameOwner","timeSavedMode":"fixed","availableInMCP":false,"executionOrder":"v1","executionTimeout":180},"versionId":"db08ce0d-43c2-416a-91af-a48086b05199","connections":{"LLM Scanner":{"ai_languageModel":[[{"node":"Agent 1 - The Scanner","type":"ai_languageModel","index":0}]]},"Final Verdict":{"main":[[{"node":"Format Chat Message","type":"main","index":0}]]},"LLM - Analyst":{"ai_languageModel":[[{"node":"Agent 2 - Forensic Analyst","type":"ai_languageModel","index":0}]]},"Find New Words":{"main":[[{"node":"Split into Rows","type":"main","index":0}]]},"LLM - Generator":{"ai_languageModel":[[{"node":"Find New Words","type":"ai_languageModel","index":0}]]},"Split into Rows":{"main":[[{"node":"Save New Fingerprints","type":"main","index":0}]]},"Schedule Trigger":{"main":[[{"node":"Check Existing Words","type":"main","index":0}]]},"Agent Orchestrator":{"main":[[{"node":"Agent 1 - The Scanner","type":"main","index":0}]]},"Format Chat Message":{"main":[[{"node":"Send Final Report","type":"main","index":0}]]},"Check Existing Words":{"main":[[{"node":"Format Existing List","type":"main","index":0}]]},"Format Existing List":{"main":[[{"node":"Find New Words","type":"main","index":0}]]},"Agent 1 - The Scanner":{"main":[[{"node":"Package Scanner Output","type":"main","index":0}]]},"Load Fingerprint List":{"main":[[{"node":"Extract Stylometric Metrics","type":"main","index":0}]]},"LLM - Devil's Advocate":{"ai_languageModel":[[{"node":"Agent 3 - Devil's Advocate","type":"ai_languageModel","index":0}]]},"Package Analyst Output":{"main":[[{"node":"Agent 3 - Devil's Advocate","type":"main","index":0}]]},"Package Scanner Output":{"main":[[{"node":"Agent 2 - 
Forensic Analyst","type":"main","index":0}]]},"Agent 2 - Forensic Analyst":{"main":[[{"node":"Package Analyst Output","type":"main","index":0}]]},"Agent 3 - Devil's Advocate":{"main":[[{"node":"Final Verdict","type":"main","index":0}]]},"When chat message received":{"main":[[{"node":"Load Fingerprint List","type":"main","index":0}]]},"Extract Stylometric Metrics":{"main":[[{"node":"Agent Orchestrator","type":"main","index":0}]]}}},"lastUpdatedBy":1,"workflowInfo":{"nodeCount":30,"nodeTypes":{"n8n-nodes-base.code":{"count":7},"n8n-nodes-base.dataTable":{"count":3},"n8n-nodes-base.stickyNote":{"count":8},"@n8n/n8n-nodes-langchain.chat":{"count":2},"@n8n/n8n-nodes-langchain.agent":{"count":3},"n8n-nodes-base.scheduleTrigger":{"count":1},"@n8n/n8n-nodes-langchain.chainLlm":{"count":1},"@n8n/n8n-nodes-langchain.lmChatGroq":{"count":1},"@n8n/n8n-nodes-langchain.chatTrigger":{"count":1},"@n8n/n8n-nodes-langchain.lmChatOpenAi":{"count":1},"@n8n/n8n-nodes-langchain.lmChatAnthropic":{"count":1},"@n8n/n8n-nodes-langchain.lmChatGoogleGemini":{"count":1}}},"status":"published","readyToDemo":null,"user":{"name":"Mychel Garzon","username":"mychel-garzon","bio":"n8n Verified Creator and Junction 2025 n8n Tech Challenge Winner based in Helsinki, Finland. Full Stack Engineer specializing in AI automation workflows, multi-agent systems, RAG pipelines, and automated incident triage. Node.js, TypeScript, React, LLMs (OpenAI, Anthropic, Gemini, Groq). 
99.9% production uptime.\n\nCustom n8n workflows: mychel.garzon@gmail.com","verified":true,"links":["https://mychelgarzon.com/"],"avatar":"https://gravatar.com/avatar/8937dc435f1eb7cc47cfc0139be315f5e28add64bc872edc5e5315137ee12b75?r=pg&d=retro&size=200"},"nodes":[{"id":565,"icon":"fa:sticky-note","name":"n8n-nodes-base.stickyNote","codex":{"data":{"alias":["Comments","Notes","Sticky"],"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Helpers"]}}},"group":"[\"input\"]","defaults":{"name":"Sticky Note","color":"#FFD233"},"iconData":{"icon":"sticky-note","type":"icon"},"displayName":"Sticky Note","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":834,"icon":"file:code.svg","name":"n8n-nodes-base.code","codex":{"data":{"alias":["cpde","Javascript","JS","Python","Script","Custom Code","Function"],"details":"The Code node allows you to execute JavaScript in your workflow.","resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.code/"}]},"categories":["Development","Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Helpers","Data 
Transformation"]}}},"group":"[\"transform\"]","defaults":{"name":"Code"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNTEyIiBoZWlnaHQ9IjUxMiIgdmlld0JveD0iMCAwIDUxMiA1MTIiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxnIGNsaXAtcGF0aD0idXJsKCNjbGlwMF8xMTcxXzQ0MSkiPgo8cGF0aCBkPSJNMTcwLjI4MyA0OEgxOTYuNUMyMDMuMTI3IDQ4IDIwOC41IDQyLjYyNzQgMjA4LjUgMzZWMTJDMjA4LjUgNS4zNzI1OCAyMDMuMTI3IDAgMTk2LjUgMEgxNzAuMjgzQzEyNi4xIDAgOTAuMjgzIDM1LjgxNzIgOTAuMjgzIDgwVjE3NkM5MC4yODMgMjA2LjkyOCA2NS4yMTA5IDIzMiAzNC4yODMgMjMySDIzQzE2LjM3MjYgMjMyIDExIDIzNy4zNzIgMTEgMjQ0VjI2OEMxMSAyNzQuNjI3IDE2LjM3MjQgMjgwIDIyLjk5OTYgMjgwTDM0LjI4MyAyODBDNjUuMjEwOSAyODAgOTAuMjgzIDMwNS4wNzIgOTAuMjgzIDMzNlY0NDBDOTAuMjgzIDQ3OS43NjQgMTIyLjUxOCA1MTIgMTYyLjI4MyA1MTJIMTk2LjVDMjAzLjEyNyA1MTIgMjA4LjUgNTA2LjYyNyAyMDguNSA1MDBWNDc2QzIwOC41IDQ2OS4zNzMgMjAzLjEyNyA0NjQgMTk2LjUgNDY0SDE2Mi4yODNDMTQ5LjAyOCA0NjQgMTM4LjI4MyA0NTMuMjU1IDEzOC4yODMgNDQwVjMzNkMxMzguMjgzIDMwOS4wMjIgMTI4LjAxMSAyODQuNDQzIDExMS4xNjQgMjY1Ljk2MUMxMDYuMTA5IDI2MC40MTYgMTA2LjEwOSAyNTEuNTg0IDExMS4xNjQgMjQ2LjAzOUMxMjguMDExIDIyNy41NTcgMTM4LjI4MyAyMDIuOTc4IDEzOC4yODMgMTc2VjgwQzEzOC4yODMgNjIuMzI2OSAxNTIuNjEgNDggMTcwLjI4MyA0OFoiIGZpbGw9IiNGRjk5MjIiLz4KPHBhdGggZD0iTTMwNSAzNkMzMDUgNDIuNjI3NCAzMTAuMzczIDQ4IDMxNyA0OEgzNDIuOTc5QzM2MC42NTIgNDggMzc0Ljk3OCA2Mi4zMjY5IDM3NC45NzggODBWMTc2QzM3NC45NzggMjAyLjk3OCAzODUuMjUxIDIyNy41NTcgNDAyLjA5OCAyNDYuMDM5QzQwNy4xNTMgMjUxLjU4NCA0MDcuMTUzIDI2MC40MTYgNDAyLjA5OCAyNjUuOTYxQzM4NS4yNTEgMjg0LjQ0MyAzNzQuOTc4IDMwOS4wMjIgMzc0Ljk3OCAzMzZWNDMyQzM3NC45NzggNDQ5LjY3MyAzNjAuNjUyIDQ2NCAzNDIuOTc5IDQ2NEgzMTdDMzEwLjM3MyA0NjQgMzA1IDQ2OS4zNzMgMzA1IDQ3NlY1MDBDMzA1IDUwNi42MjcgMzEwLjM3MyA1MTIgMzE3IDUxMkgzNDIuOTc5QzM4Ny4xNjEgNTEyIDQyMi45NzggNDc2LjE4MyA0MjIuOTc4IDQzMlYzMzZDNDIyLjk3OCAzMDUuMDcyIDQ0OC4wNTEgMjgwIDQ3OC45NzkgMjgwSDQ5MEM0OTYuNjI3IDI4MCA1MDIgMjc0LjYyOCA1MDIgMjY4VjI0NEM1MDIgMjM3LjM3MyA0OTYuNjI4IDIzMiA0OTAgMjMyTDQ3OC45NzkgMjMyQzQ0OC4wNTEgMjMyIDQyMi45NzggMjA2LjkyOCA0MjIuOTc4IDE3NlY4MEM0MjIuOTc4IDM1L
jgxNzIgMzg3LjE2MSAwIDM0Mi45NzkgMEgzMTdDMzEwLjM3MyAwIDMwNSA1LjM3MjU4IDMwNSAxMlYzNloiIGZpbGw9IiNGRjk5MjIiLz4KPC9nPgo8ZGVmcz4KPGNsaXBQYXRoIGlkPSJjbGlwMF8xMTcxXzQ0MSI+CjxyZWN0IHdpZHRoPSI1MTIiIGhlaWdodD0iNTEyIiBmaWxsPSJ3aGl0ZSIvPgo8L2NsaXBQYXRoPgo8L2RlZnM+Cjwvc3ZnPgo="},"displayName":"Code","typeVersion":2,"nodeCategories":[{"id":5,"name":"Development"},{"id":9,"name":"Core Nodes"}]},{"id":839,"icon":"fa:clock","name":"n8n-nodes-base.scheduleTrigger","codex":{"data":{"alias":["Time","Scheduler","Polling","Cron","Interval"],"resources":{"generic":[],"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.scheduletrigger/"}]},"categories":["Core Nodes"],"nodeVersion":"1.0","codexVersion":"1.0"}},"group":"[\"trigger\",\"schedule\"]","defaults":{"name":"Schedule Trigger","color":"#31C49F"},"iconData":{"icon":"clock","type":"icon"},"displayName":"Schedule Trigger","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"}]},{"id":1119,"icon":"fa:robot","name":"@n8n/n8n-nodes-langchain.agent","codex":{"data":{"alias":["LangChain","Chat","Conversational","Plan and Execute","ReAct","Tools"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Agents","Root Nodes"]}}},"group":"[\"transform\"]","defaults":{"name":"AI Agent","color":"#404040"},"iconData":{"icon":"robot","type":"icon"},"displayName":"AI Agent","typeVersion":3,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1123,"icon":"fa:link","name":"@n8n/n8n-nodes-langchain.chainLlm","codex":{"data":{"alias":["LangChain"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainllm/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Chains","Root Nodes"]}}},"group":"[\"transform\"]","defaults":{"name":"Basic LLM 
Chain","color":"#909298"},"iconData":{"icon":"link","type":"icon"},"displayName":"Basic LLM Chain","typeVersion":2,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1145,"icon":"file:anthropic.svg","name":"@n8n/n8n-nodes-langchain.lmChatAnthropic","codex":{"data":{"alias":["claude","sonnet","opus"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatanthropic/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"Anthropic Chat Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI0NiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSI+PHBhdGggZmlsbD0iIzdEN0Q4NyIgZD0iTTMyLjczIDBoLTYuOTQ1TDM4LjQ1IDMyaDYuOTQ1ek0xMi42NjUgMCAwIDMyaDcuMDgybDIuNTktNi43MmgxMy4yNWwyLjU5IDYuNzJoNy4wODJMMTkuOTI5IDB6bS0uNzAyIDE5LjMzNyA0LjMzNC0xMS4yNDYgNC4zMzQgMTEuMjQ2eiIvPjwvc3ZnPg=="},"displayName":"Anthropic Chat Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1153,"icon":"file:openAiLight.svg","name":"@n8n/n8n-nodes-langchain.lmChatOpenAi","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"OpenAI Chat 
Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDAiIGhlaWdodD0iNDAiIHZpZXdCb3g9IjAgMCA0MCA0MCIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPHBhdGggZD0iTTM2Ljg2NzEgMTYuMzcxOEMzNy43NzQ2IDEzLjY0OCAzNy40NjIxIDEwLjY2NDIgMzYuMDEwOCA4LjE4NjYxQzMzLjgyODIgNC4zODY1MyAyOS40NDA3IDIuNDMxNDkgMjUuMTU1NiAzLjM1MTUxQzIzLjI0OTMgMS4yMDM5NiAyMC41MTA1IC0wLjAxNzMxNDggMTcuNjM5MiAwLjAwMDE4NTUzM0MxMy4yNTkxIC0wLjAwOTgxNDY4IDkuMzcyNzMgMi44MTAyNSA4LjAyNTIgNi45Nzc4M0M1LjIxMTM5IDcuNTU0MSAyLjc4MjU4IDkuMzE1MzggMS4zNjEzIDExLjgxMTdDLTAuODM3NDkzIDE1LjYwMTggLTAuMzM2MjMyIDIwLjM3OTQgMi42MDEzMyAyMy42Mjk0QzEuNjkzODEgMjYuMzUzMiAyLjAwNjMyIDI5LjMzNzEgMy40NTc2IDMxLjgxNDZDNS42NDAxNSAzNS42MTQ3IDEwLjAyNzcgMzcuNTY5NyAxNC4zMTI4IDM2LjY0OTdDMTYuMjE3OSAzOC43OTczIDE4Ljk1NzkgNDAuMDE4NSAyMS44MjkyIDM5Ljk5OThDMjYuMjExOCA0MC4wMTEgMzAuMDk5NCAzNy4xODg1IDMxLjQ0NjkgMzMuMDE3MUMzNC4yNjA4IDMyLjQ0MDkgMzYuNjg5NiAzMC42Nzk2IDM4LjExMDggMjguMTgzM0M0MC4zMDcxIDI0LjM5MzIgMzkuODA0NiAxOS42MTk0IDM2Ljg2ODMgMTYuMzY5M0wzNi44NjcxIDE2LjM3MThaTTIxLjgzMTcgMzcuMzg2QzIwLjA3OCAzNy4zODg1IDE4LjM3OTIgMzYuNzc0NyAxNy4wMzI5IDM1LjY1MDlDMTcuMDk0MSAzNS42MTg0IDE3LjIwMDQgMzUuNTU5NyAxNy4yNjkxIDM1LjUxNzJMMjUuMjM0MyAzMC45MTcxQzI1LjY0MTggMzAuNjg1OCAyNS44OTE4IDMwLjI1MjEgMjUuODg5MyAyOS43ODMzVjE4LjU1NDNMMjkuMjU1NyAyMC40OTgxQzI5LjI5MTkgMjAuNTE1NiAyOS4zMTU3IDIwLjU1MDYgMjkuMzIwNyAyMC41OTA2VjI5Ljg4OTZDMjkuMzE1NyAzNC4wMjQ3IDI1Ljk2NjggMzcuMzc3MiAyMS44MzE3IDM3LjM4NlpNNS43MjY0IDMwLjUwNzFDNC44NDc2MyAyOC45ODk2IDQuNTMxMzcgMjcuMjEwOCA0LjgzMjYzIDI1LjQ4NDVDNC44OTEzOCAyNS41MTk1IDQuOTk1MTMgMjUuNTgzMiA1LjA2ODg4IDI1LjYyNTdMMTMuMDM0MSAzMC4yMjU4QzEzLjQzNzggMzAuNDYyMSAxMy45Mzc4IDMwLjQ2MjEgMTQuMzQyOCAzMC4yMjU4TDI0LjA2NjggMjQuNjEwN1YyOC40OTgzQzI0LjA2OTMgMjguNTM4MyAyNC4wNTA1IDI4LjU3NyAyNC4wMTkzIDI4LjYwMkwxNS45Njc5IDMzLjI1MDlDMTIuMzgxNSAzNS4zMTU5IDcuODAxNDQgMzQuMDg4NCA1LjcyNzY1IDMwLjUwNzFINS43MjY0Wk0zLjYzMDEgMTMuMTIwNUM0LjUwNTEyIDExLjYwMDQgNS44ODY0IDEwLjQzNzkgNy41MzE0NCA5LjgzNDE1QzcuNTMxNDQgOS45MDI5IDcuNTI3NjkgMTAuMDI0MiA3LjUyNzY5ID
EwLjEwOTJWMTkuMzEwNkM3LjUyNTE5IDE5Ljc3ODEgNy43NzUxOSAyMC4yMTE5IDguMTgxNDUgMjAuNDQzMUwxNy45MDU0IDI2LjA1N0wxNC41MzkxIDI4LjAwMDhDMTQuNTA1MyAyOC4wMjMzIDE0LjQ2MjggMjguMDI3IDE0LjQyNTMgMjguMDEwOEw2LjM3MjY2IDIzLjM1ODJDMi43OTM4MyAyMS4yODU2IDEuNTY2MzEgMTYuNzA2OCAzLjYyODg1IDEzLjEyMTdMMy42MzAxIDEzLjEyMDVaTTMxLjI4ODIgMTkuNTU2OUwyMS41NjQyIDEzLjk0MTdMMjQuOTMwNiAxMS45OTkyQzI0Ljk2NDMgMTEuOTc2NyAyNS4wMDY4IDExLjk3MjkgMjUuMDQ0MyAxMS45ODkyTDMzLjA5NyAxNi42MzhDMzYuNjgyMSAxOC43MDkzIDM3LjkxMDggMjMuMjk1NyAzNS44Mzk1IDI2Ljg4MDhDMzQuOTYzMyAyOC4zOTgzIDMzLjU4MzIgMjkuNTYwOCAzMS45Mzk1IDMwLjE2NThWMjAuNjg5NEMzMS45NDMyIDIwLjIyMTkgMzEuNjk0NSAxOS43ODk0IDMxLjI4OTQgMTkuNTU2OUgzMS4yODgyWk0zNC42MzgzIDE0LjUxNDJDMzQuNTc5NSAxNC40NzggMzQuNDc1OCAxNC40MTU1IDM0LjQwMiAxNC4zNzNMMjYuNDM2OCA5Ljc3Mjg5QzI2LjAzMzEgOS41MzY2NCAyNS41MzMxIDkuNTM2NjQgMjUuMTI4MSA5Ljc3Mjg5TDE1LjQwNDEgMTUuMzg4VjExLjUwMDRDMTUuNDAxNiAxMS40NjA0IDE1LjQyMDQgMTEuNDIxNyAxNS40NTE2IDExLjM5NjdMMjMuNTAzIDYuNzUxNThDMjcuMDg5NCA0LjY4Mjc5IDMxLjY3NDUgNS45MTQwNiAzMy43NDIgOS41MDE2NEMzNC42MTU4IDExLjAxNjcgMzQuOTMyIDEyLjc5MDUgMzQuNjM1OCAxNC41MTQySDM0LjYzODNaTTEzLjU3NDEgMjEuNDQzMUwxMC4yMDY1IDE5LjQ5OTRDMTAuMTcwMiAxOS40ODE5IDEwLjE0NjUgMTkuNDQ2OCAxMC4xNDE1IDE5LjQwNjhWMTAuMTA3OUMxMC4xNDQgNS45Njc4MSAxMy41MDI4IDIuNjEyNzQgMTcuNjQyOSAyLjYxNTI0QzE5LjM5NDIgMi42MTUyNCAyMS4wODkyIDMuMjMwMjUgMjIuNDM1NSA0LjM1MDI4QzIyLjM3NDMgNC4zODI3OCAyMi4yNjkzIDQuNDQxNTMgMjIuMTk5MiA0LjQ4NDAzTDE0LjIzNDEgOS4wODQxM0MxMy44MjY2IDkuMzE1MzggMTMuNTc2NiA5Ljc0Nzg5IDEzLjU3OTEgMTAuMjE2N0wxMy41NzQxIDIxLjQ0MDZWMjEuNDQzMVpNMTUuNDAyOSAxNy41MDA2TDE5LjczNDIgMTQuOTk5M0wyNC4wNjU1IDE3LjQ5OTNWMjIuNTAwN0wxOS43MzQyIDI1LjAwMDdMMTUuNDAyOSAyMi41MDA3VjE3LjUwMDZaIiBmaWxsPSIjN0Q3RDg3Ii8+Cjwvc3ZnPgo="},"displayName":"OpenAI Chat 
Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1247,"icon":"fa:comments","name":"@n8n/n8n-nodes-langchain.chatTrigger","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/"}]},"categories":["Core Nodes","Langchain"]}},"group":"[\"trigger\"]","defaults":{"name":"When chat message received"},"iconData":{"icon":"comments","type":"icon"},"displayName":"Chat Trigger","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"},{"id":26,"name":"Langchain"}]},{"id":1262,"icon":"file:google.svg","name":"@n8n/n8n-nodes-langchain.lmChatGoogleGemini","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatgooglegemini/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"Google Gemini Chat 
Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDggNDgiPjxkZWZzPjxwYXRoIGlkPSJhIiBkPSJNNDQuNSAyMEgyNHY4LjVoMTEuOEMzNC43IDMzLjkgMzAuMSAzNyAyNCAzN2MtNy4yIDAtMTMtNS44LTEzLTEzczUuOC0xMyAxMy0xM2MzLjEgMCA1LjkgMS4xIDguMSAyLjlsNi40LTYuNEMzNC42IDQuMSAyOS42IDIgMjQgMiAxMS44IDIgMiAxMS44IDIgMjRzOS44IDIyIDIyIDIyYzExIDAgMjEtOCAyMS0yMiAwLTEuMy0uMi0yLjctLjUtNCIvPjwvZGVmcz48Y2xpcFBhdGggaWQ9ImIiPjx1c2UgeGxpbms6aHJlZj0iI2EiIG92ZXJmbG93PSJ2aXNpYmxlIi8+PC9jbGlwUGF0aD48cGF0aCBmaWxsPSIjRkJCQzA1IiBkPSJNMCAzN1YxMWwxNyAxM3oiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiNFQTQzMzUiIGQ9Im0wIDExIDE3IDEzIDctNi4xTDQ4IDE0VjBIMHoiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiMzNEE4NTMiIGQ9Im0wIDM3IDMwLTIzIDcuOSAxTDQ4IDB2NDhIMHoiIGNsaXAtcGF0aD0idXJsKCNiKSIvPjxwYXRoIGZpbGw9IiM0Mjg1RjQiIGQ9Ik00OCA0OCAxNyAyNGwtNC0zIDM1LTEweiIgY2xpcC1wYXRoPSJ1cmwoI2IpIi8+PC9zdmc+"},"displayName":"Google Gemini Chat Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1263,"icon":"file:groq.svg","name":"@n8n/n8n-nodes-langchain.lmChatGroq","codex":{"data":{"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatgroq/"}]},"categories":["AI","Langchain"],"subcategories":{"AI":["Language Models","Root Nodes"],"Language Models":["Chat Models (Recommended)"]}}},"group":"[\"transform\"]","defaults":{"name":"Groq Chat 
Model"},"iconData":{"type":"file","fileBuffer":"data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+CjxzdmcKICAgaWQ9IkxheWVyXzIiCiAgIHZpZXdCb3g9IjAgMCA0OTkuOTk5OTkgNDk5Ljk5OTk5IgogICB2ZXJzaW9uPSIxLjEiCiAgIHdpZHRoPSI1MDAiCiAgIGhlaWdodD0iNTAwIgogICB4bWw6c3BhY2U9InByZXNlcnZlIgogICB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciCiAgIHhtbG5zOnN2Zz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjxkZWZzCiAgICAgaWQ9ImRlZnM0IiAvPjxnCiAgICAgaWQ9IlBBR0VTIj48Y2lyY2xlCiAgICAgICBzdHlsZT0iZmlsbDojZjU0ZjM1O2ZpbGwtb3BhY2l0eToxO3N0cm9rZS13aWR0aDoxLjEzNjIyIgogICAgICAgaWQ9InBhdGg0IgogICAgICAgY3g9IjI1MCIKICAgICAgIGN5PSIyNTAiCiAgICAgICByPSIyNTAiIC8+PHBhdGgKICAgICAgIGQ9Ik0gMjUwLjUzNjY0LDk3LjEyMjk5NCBDIDE5Mi43MTkzMSw5Ni41ODg2MzggMTQ1LjQ4MjIyLDE0Mi45NzA3NSAxNDQuOTQ3ODYsMjAwLjc4ODA4IGMgLTAuNTM0MzQsNTcuODE3MzMgNDUuODQ3NzcsMTA1LjA1NDQyIDEwMy42NjUxLDEwNS41ODg3NyBoIDM2LjMzNjIxIHYgLTM5LjIyMTc0IGggLTM0LjQxMjUzIGMgLTM2LjEyMjQ4LDAuNDI3NSAtNjUuNzI1OCwtMjguNTM0NjIgLTY2LjE1MzI5LC02NC42NTcwOCAtMC40Mjc0OSwtMzYuMTIyNDggMjguNTM0NjMsLTY1LjcyNTgxIDY0LjY1NzA4LC02Ni4xNTMzIGggMS40OTYyMSBjIDM2LjEyMjQ4LDAgNjUuNDA1MiwyOS4yODI3MiA2NS41MTIwNyw2NS40MDUyIHYgMCA5Ni4zOTc4MyAwIGMgMCwzNS44MDE4NyAtMjkuMTc1ODUsNjQuOTc3NzMgLTY0Ljg3MDgzLDY1LjQwNTIxIC0xNy4wOTk0MSwtMC4xMDY4OCAtMzMuNDUwNzEsLTcuMDUzNTEgLTQ1LjUyNzE3LC0xOS4xMjk5NSBsIC0yNy43ODY1LDI3Ljc4NjUxIGMgMTkuMjM2ODEsMTkuMzQzNyA0NS4zMTMzOSwzMC4zNTE0MyA3Mi41NjU1NiwzMC42NzIwNSBoIDEuMzg5MzMgYyA1Ny4wNjkyNCwtMC44NTQ5NyAxMDIuOTE3LC00Ny4xMzAyMiAxMDMuMjM3NiwtMTA0LjE5OTQ1IFYgMTk5LjI5MTg5IEMgMzUzLjY2NzM5LDE0Mi40MzYzOSAzMDcuMjg1MjcsOTcuMTIyOTk0IDI1MC41MzY2NCw5Ny4xMjI5OTQgWiIKICAgICAgIHN0eWxlPSJmaWxsOiNmZmZmZmY7c3Ryb2tlLXdpZHRoOjBweCIKICAgICAgIGlkPSJwYXRoMS0zIiAvPjwvZz48L3N2Zz4K"},"displayName":"Groq Chat 
Model","typeVersion":1,"nodeCategories":[{"id":25,"name":"AI"},{"id":26,"name":"Langchain"}]},{"id":1313,"icon":"fa:comments","name":"@n8n/n8n-nodes-langchain.chat","codex":{"data":{"alias":["human","wait","hitl","respond","approve","confirm","send","message"],"resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.respondtochat/"}]},"categories":["Core Nodes","HITL","Langchain"],"subcategories":{"HITL":["Human in the Loop"]}}},"group":"[\"input\"]","defaults":{"name":"Chat"},"iconData":{"icon":"comments","type":"icon"},"displayName":"Chat","typeVersion":1,"nodeCategories":[{"id":9,"name":"Core Nodes"},{"id":26,"name":"Langchain"},{"id":28,"name":"HITL"}]},{"id":1315,"icon":"fa:table","name":"n8n-nodes-base.dataTable","codex":{"data":{"alias":["data","table","knowledge","data table","table","sheet","database","data base","mysql","postgres","postgresql","airtable","supabase","noco","notion"],"details":"Data table","resources":{"primaryDocumentation":[{"url":"https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.datatable/"}]},"categories":["Core Nodes","Development"],"nodeVersion":"1.0","codexVersion":"1.0","subcategories":{"Core Nodes":["Helpers"]}}},"group":"[\"input\",\"transform\"]","defaults":{"name":"Data table"},"iconData":{"icon":"table","type":"icon"},"displayName":"Data table","typeVersion":1,"nodeCategories":[{"id":5,"name":"Development"},{"id":9,"name":"Core Nodes"}]}],"categories":[{"id":35,"name":"Document Extraction"},{"id":49,"name":"AI Summarization"}],"image":[]}}