
Extract personal data with self-hosted LLM Mistral NeMo


Important notice

This workflow is provided as-is. Please review and test before using in production.

Overview

This workflow shows how to use a self-hosted Large Language Model (LLM) with n8n's LangChain integration to extract personal information from user input. This is particularly useful for enterprise environments where data privacy is crucial, as it allows sensitive information to be processed locally.
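To make the idea concrete outside of n8n, here is a minimal Python sketch of calling a locally hosted model for extraction. It assumes a default Ollama install listening on `localhost:11434` with the `mistral-nemo` model pulled; the function names and prompt are illustrative, not part of the workflow itself:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint (assumption)
MODEL = "mistral-nemo"                          # model tag as pulled via `ollama pull`

def build_request(text: str) -> dict:
    """Build an Ollama chat request that asks for JSON-only output."""
    return {
        "model": MODEL,
        "format": "json",  # ask Ollama to constrain the reply to valid JSON
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "Extract the person's name, email and city from the "
                        "user message. Reply with a JSON object only."},
            {"role": "user", "content": text},
        ],
    }

def extract_personal_data(text: str) -> dict:
    """Send the request to the local model; the text never leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return json.loads(reply["message"]["content"])
```

Because the request only ever targets the local endpoint, the sensitive input stays on-premises, which is the core privacy argument of this workflow.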

📖 For a detailed explanation and more insights on using open-source LLMs with n8n, take a look at our comprehensive guide on open-source LLMs.

🔑 Key Features

  1. Local LLM

    • Connect Ollama to run Mistral NeMo LLM locally
    • Provide a foundation for compliant data processing, keeping sensitive information on-premises
  2. Data extraction

    • Convert unstructured text to a consistent JSON format
    • Adjust the JSON schema to meet your specific data extraction needs
  3. Error handling

    • Implement auto-fixing for LLM outputs
    • Include error output for further processing
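The auto-fixing and error-output ideas can be sketched in plain Python: try the raw reply first, then strip common wrappers (markdown code fences, surrounding prose) before parsing, and return an error object instead of raising so failures can be routed to a separate branch. The function name and fallback shape here are illustrative assumptions:

```python
import json
import re

def parse_llm_json(raw: str) -> dict:
    """Best-effort parse of an LLM reply that should contain one JSON object.

    Tries the raw string first, then the contents of a markdown code fence,
    then the first {...} span found. Returns an error dict instead of raising,
    so a downstream node can handle failures separately.
    """
    candidates = [raw]
    fenced = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    if fenced:
        candidates.append(fenced.group(1))
    braced = re.search(r"\{.*\}", raw, re.DOTALL)
    if braced:
        candidates.append(braced.group(0))
    for candidate in candidates:
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            continue
    return {"error": "unparseable LLM output", "raw": raw}
```

In n8n, the Auto-fixing Output Parser plays a similar role by re-prompting the model when its output fails validation; this sketch only shows the cheaper, purely local repairs.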

โš™๏ธ Setup and ัonfiguration

Prerequisites

  • Ollama installed and running locally, with the Mistral NeMo model pulled
  • An n8n instance with the LangChain integration (Basic LLM Chain, Ollama Chat Model, and Structured Output Parser nodes)

Configuration steps

  1. Add the Basic LLM Chain node with system prompts.
  2. Set up the Ollama Chat Model with parameters suited to extraction (e.g. a low temperature for more deterministic output).
  3. Define the JSON schema in the Structured Output Parser node.
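For step 3, the schema given to the Structured Output Parser node might look like the following; the field names are an assumed example, so adjust them to the data you actually need to extract. The schema is shown as a Python dict alongside a small helper for checking required fields:

```python
# An example JSON Schema for the Structured Output Parser node.
# Field names (name, email, city) are illustrative; adapt them to your needs.
PERSON_SCHEMA = {
    "type": "object",
    "properties": {
        "name":  {"type": "string"},
        "email": {"type": "string"},
        "city":  {"type": "string"},
    },
    "required": ["name", "email"],
}

def missing_required(record: dict, schema: dict = PERSON_SCHEMA) -> list:
    """Return the required schema fields absent from an extracted record."""
    return [field for field in schema["required"] if field not in record]
```

Keeping the schema small and flat tends to give more reliable structured output from mid-sized local models than deeply nested definitions.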

๐Ÿ” Further resources

Apply the power of self-hosted LLMs in your n8n workflows while maintaining control over your data processing pipeline!