Workflows by Alok Kumar
# Supabase storage tutorial: Upload, fetch, sign & list files
## Learn Supabase Storage Fundamentals with n8n

This template demonstrates how to integrate **Supabase Storage** with **n8n** for uploading, fetching, generating temporary signed URLs, and listing files. It’s a beginner-friendly workflow that helps you understand how to connect Supabase’s storage API with n8n automation.

---

## Who it’s for

- Developers and teams new to **Supabase** who want a hands-on learning workflow.
- Anyone looking to automate file uploads and retrieval from **Supabase Storage**.
- Educators or technical teams teaching Supabase fundamentals with **practical demos**.

---

## How it works

1. **Upload File** – A user uploads a file through an n8n form, which gets stored in a Supabase storage bucket.
2. **Fetch File** – Retrieve files by providing their filename.
3. **Temporary Access** – Generate **signed URLs** with custom expiry for secure file sharing.
4. **List Objects** – View all stored files in the chosen Supabase bucket.

---

## How to set up

- Create a **Supabase account** and set up a project.
- Create a **bucket** in Supabase (e.g., `test-n8n`).
- Get your **Project URL** and **Anon Key** from Supabase.
- In n8n, create a **Supabase API Credential** using your keys.
- Import this workflow and connect it with your credentials.
- Run the forms to test file upload, retrieval, and listing.

---

## Requirements

- A Supabase project with **storage enabled**.
- A configured **Supabase API Credential** in n8n.

---

## Customization

- Change the bucket name (`test-n8n`) to your own.
- Adjust signed URL **expiry times** for temporary file access.
- Replace Supabase with another S3-compatible storage service if needed.
- Extend the workflow with notifications (Slack, Email) after file upload.

---

## 📝 Lessons Included

- **Lesson 1** – Upload a file to Supabase storage.
- **Lesson 2** – Fetch a file from storage.
- **Lesson 3** – Create a temporary signed URL with expiry.
- **Lesson 4** – List all items in Supabase storage.
---

## 🔑 Prerequisites

- Supabase account + project.
- Project URL and API Key (Anon).
- Bucket created in Supabase.
- Policy created to allow read/write access.
- n8n with Supabase API credentials configured.
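The signed-URL lesson boils down to one HTTP call. Here is a minimal sketch of that request as it could be built in an n8n Code node, assuming Supabase's storage REST endpoint (`POST /storage/v1/object/sign/{bucket}/{path}`); the project URL, key, and filename are placeholders, not values from this workflow:

```javascript
// Build the request an HTTP Request node would send to create a signed URL.
// The endpoint shape follows Supabase's storage REST API; all values below
// are placeholders — substitute your own Project URL, Anon Key, and bucket.
function buildSignedUrlRequest(projectUrl, anonKey, bucket, filename, expiresIn) {
  return {
    method: "POST",
    url: `${projectUrl}/storage/v1/object/sign/${bucket}/${filename}`,
    headers: {
      Authorization: `Bearer ${anonKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ expiresIn }), // expiry in seconds
  };
}

const req = buildSignedUrlRequest(
  "https://your-project.supabase.co",
  "YOUR_ANON_KEY",
  "test-n8n",
  "report.pdf",
  3600 // one hour
);
```

Adjusting `expiresIn` here corresponds to the "expiry times" customization mentioned above.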
# Automate document approvals with multi-level workflows using Supabase & Gmail
## Multi-Level Document Approval & Audit Workflow

This workflow automates a **document approval process** using Supabase and Gmail.

---

### Who it’s for

- Teams that need structured multi-level document approvals.
- Companies managing policies, contracts, or proposals.
- Medical teams whose documents need multiple levels of review and approval.

---

### How it works

1. **Form Trigger** – A user submits a document via the form.
2. **Supabase Integration** – The document is saved in the `documents` table.
3. **Supabase Storage** – The document file is saved in the bucket.
4. **Workflow Levels** – Fetches the correct approval level from `workflow_levels`.
5. **Assign Approvers** – Matches approvers by role from the `users` table.
6. **Approval Record** – Creates an `approvals` record with a unique token and expiry.
7. **Email Notification** – Sends an email with **Approve / Reject links**.
8. **Audit Logs** – Records every approval request in `audit_logs`.
9. **Repeat** – The flow repeats until every approval level is completed.

---

### How to set up

- Configure your **Supabase credentials**.
- Create tables as per the data model given.
- Create a storage bucket in **Supabase Storage**.
- Connect your **Gmail account**.
- Adjust the approval expiry time (`48h` default).
- Deploy and test via the **Form Trigger**.

---

### Customization

- Add multiple approval levels by chaining `workflow_levels`.
- Replace Gmail with Slack, Teams, or another notification channel.
- Adjust audit logging for compliance needs.
- Update the endpoint `http://localhost:5678/webhook-test/` based on your instance and environment (remove `test` if you run in production).
- Update the bucket name.

---

## Important steps

### 1. **Form Submit**

- Triggered when a user submits the form.
- Captures form parameters:
  - `Title` (document title)
  - `Description` (document description)
  - `file` (document needing approval)

### 2. **Webhook Entry Point**

- Triggered when an approver clicks the **Approve** or **Reject** link in the email.
- Captures query parameters:
  - `token` (approval token)
  - `decision` (approved/rejected)

---

### 3. **Approval Data Retrieval & Update**

- Fetch the approval record from **Supabase (`approvals`)** using `token`.
- Update the approval status:
  - `Approved` → moves to the next workflow level or final approval.
  - `Rejected` → document marked as rejected immediately.
- Records the `acted_at` timestamp.

---

### 4. **Decision Check**

- An **IF Node** checks whether the decision is **approved** or **rejected**.
- **Reject Path** → Update document status to **Rejected** in `documents`.
- **Approve Path** → Continue workflow level progression.

---

### 5. **Workflow Level Progression**

- Fetch details of the current workflow level.
- Identify the **next level** (`workflow_levels`) based on `level_number`.

#### ✅ If Next Level Exists:

- Retrieve approvers by `role_id`.
- Generate unique approval tokens.
- Create new approval records in `approvals`.
- Send **email notifications** with approve/reject links.

#### ❌ If No Next Level (Last Level):

- Update document status to **Approved** in `documents`.

---

### 6. **Audit Logging**

- Every approval action is logged into the `audit_logs` table:
  - `document_id`
  - `action` (e.g., `approval_sent`, `approved`, `rejected`)
  - `actor_email` (system/approver)
  - `details` (workflow level, role info, etc.)

---

## 📨 Email Template

The approval request email includes **decision links**:

```html
<p>Please review the document:</p>
<p>
  <a href="http://localhost:5678/webhook-test/doc-approval?token={{$json.token}}&decision=approved">✅ Approve</a> |
  <a href="http://localhost:5678/webhook-test/doc-approval?token={{$json.token}}&decision=rejected">❌ Reject</a>
</p>
```

Happy Automating! 🚀
# Generate PRDs and test scenarios with GPT/Claude and PDF export
### 📒 Generate a **Product Requirements Document (PRD)** and **test scenarios** from form input to PDF with OpenRouter and APITemplate.io

This workflow generates a **Product Requirements Document (PRD)** and **test scenarios** from structured form inputs. It uses **OpenRouter LLMs (GPT/Claude)** for natural language generation and **APITemplate.io** for PDF export.

## Who’s it for

This template is designed for **product managers, business analysts, QA teams, and startup founders** who need to quickly create **Product Requirement Documents (PRDs)** and **test cases** from structured inputs.

## How it works

1. A **Form Trigger** collects key product details (name, overview, audience, goals, requirements).
2. The **LLM Chain (OpenRouter GPT/Claude)** generates a professional, structured **PRD in Markdown format**.
3. A second **LLM Chain** creates **test scenarios and Gherkin-style test cases** based on the PRD.
4. Data is cleaned and merged using a **Set node**.
5. The workflow sends the formatted document to **APITemplate.io** to generate a polished **PDF**.
6. Finally, the workflow returns the PDF via a **Form Completion node** for easy download.

## ⚡ Requirements

- OpenRouter API Key (or any LLM)
- APITemplate.io account

## 🎯 Use cases

- Rapid PRD drafting for startups.
- QA teams generating **test scenarios** automatically.
- Standardized documentation workflows.

👉 Customize by editing prompts, PDF templates, or extending with integrations (Slack, Notion, Confluence).

### Need Help?

Ask in the [n8n Forum](https://community.n8n.io/)!

Happy Automating with n8n! 🚀
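Step 4 (clean and merge the two LLM outputs) is worth sketching, since LLMs often wrap Markdown in code fences that would leak into the PDF. This is a rough illustration of what such a cleanup could look like — the field handling is an assumption, not the workflow's exact Set node logic:

```javascript
// Illustrative cleanup/merge step: strip any ``` fences the LLM wrapped
// around its Markdown, then combine the PRD and test scenarios into one
// document for APITemplate.io. The heading text is an example choice.
function stripFences(text) {
  return text
    .replace(/^```(?:markdown)?\s*\n?/, "") // opening fence, if any
    .replace(/\n?```\s*$/, "")              // closing fence, if any
    .trim();
}

function mergeDocs(prdMarkdown, testScenariosMarkdown) {
  return `${stripFences(prdMarkdown)}\n\n## Test Scenarios\n\n${stripFences(testScenariosMarkdown)}`;
}

const merged = mergeDocs("```markdown\n# My PRD\n```", "```\nScenario: login\n```");
// merged: "# My PRD\n\n## Test Scenarios\n\nScenario: login"
```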
# Website content chatbot with Pinecone, Airtable & OpenAI for RAG applications
### This n8n workflow shows how to **extract website content, index it in Pinecone, and leverage Airtable to power a chat agent for customer Q&A**.

Use cases include:

* Building a **knowledge base** from your website.
* Creating a **chatbot** that answers customer queries using your own site content.
* Powering **RAG workflows** for FAQs, support docs, or product knowledge.

---

### How it works

* The workflow starts with a **manual trigger** or chat message.
* Website content is fetched via **HTTP Request**.
* The **HTML body** is extracted and converted into clean **Markdown**.
* Text is split into chunks (~500 chars with 50 overlap) using the **Character Text Splitter**.
* **OpenAI embeddings** are generated for each chunk.
* Content and embeddings are stored in **Pinecone** with namespace separation.
* A **Chat Agent** (powered by OpenAI or OpenRouter) retrieves answers from Pinecone and Airtable.
* A **memory buffer** allows multi-turn conversations.
* A **billing tool** (Airtable) provides dynamic billing-related answers when needed.

---

### How to use

* Replace the sample website URL in the **HTTP Request** node with your own domain or content source.
* Update the Normalize code based on the Markdown content output to remove noise.
* Adjust the chunk size in the **Text Splitter** for your website's Markdown output.
* In this example, the **Character Text Splitter with separator `######`** worked really well.
* Always check the **Markdown output** to fine-tune your splitting logic.
* Update the **Pinecone namespace** to match your project.
* Customize the **Chat Agent system prompt** to fit your brand voice and response rules.
* Connect to your own **Airtable schema** if you want live billing/payment data access.

---

### Requirements

* **OpenAI account** (for embeddings + chat model).
* **Pinecone account** (vector DB for semantic search).
* **Airtable account** (if using the billing tool).
* (Optional) **OpenRouter account** (alternative chat model provider).
* n8n self-hosted or cloud.

---

### Need Help?

Ask in the [n8n Forum](https://community.n8n.io/)!

Happy Automating! 🚀
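To sanity-check how your Markdown will be cut before embedding, the ~500-char / 50-overlap splitting above can be approximated with a few lines of JavaScript. This is a simplified sketch, not n8n's exact Character Text Splitter implementation:

```javascript
// Approximation of fixed-size character chunking with overlap:
// each chunk is up to `chunkSize` chars, and consecutive chunks
// share the last `overlap` chars of the previous one.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk emitted
    start += chunkSize - overlap;                // step back by the overlap
  }
  return chunks;
}

// 1200 chars → three chunks: 0–500, 450–950, 900–1200
const chunks = chunkText("x".repeat(1200), 500, 50);
```

Printing the chunks for a real page is a quick way to decide whether the plain size-based split suffices or the `######` separator mentioned above fits your Markdown better.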
# Build a PDF Q&A system with LlamaIndex, OpenAI embeddings & Pinecone vector DB
## Parse, Normalize, Extract, and Store PDF Content for RAG in Pinecone

This workflow automates a full RAG pipeline for structured documents (like insurance policies).

### What it does

- Watches a Google Drive folder for new PDFs
- Uploads to LlamaIndex Cloud for parsing → returns clean Markdown
- Normalizes text (removes headers, footers, page numbers, formatting artifacts)
- Splits text into chunks (~1200 chars with 150 overlap)
- Generates embeddings with OpenAI
- Stores vectors in Pinecone with metadata
- Connects a Chat Agent that retrieves answers from Pinecone

### Who’s it for

- Developers building **chatbots or Q&A systems** for structured docs
- Teams working with **insurance, compliance, or legal PDFs**
- Anyone who needs to **normalize & store documents for semantic search**

### Requirements

- Google Drive connected (for source PDFs)
- LlamaIndex Cloud account (parsing API key)
- Pinecone account (vector DB)
- OpenAI account (LLM and embeddings)

### How to use and customize

* Update the folder name in the Google Drive trigger node.
* Place a PDF file in the same Google Drive folder.
* Customize the `Normalized Content` function node to adjust the regexes for headers/footers specific to your documents.
* Adjust the chunk size or metadata namespace in the Pinecone node to fit your project needs.

---
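As a starting point for customizing the `Normalized Content` function node, here is an illustrative sketch of header/footer/page-number stripping. The specific patterns (the "ACME Insurance Co." header in particular) are hypothetical examples — replace them with the artifacts that actually appear in your parsed PDFs:

```javascript
// Example normalization pass over LlamaIndex's Markdown output:
// remove page counters, a repeated document header, trailing whitespace,
// and runs of blank lines. All patterns here are illustrative.
function normalize(markdown) {
  return markdown
    .replace(/^Page \d+ of \d+$/gm, "")      // page counters, e.g. "Page 3 of 12"
    .replace(/^ACME Insurance Co\.$/gm, "")  // hypothetical repeated header line
    .replace(/[ \t]+$/gm, "")                // trailing whitespace
    .replace(/\n{3,}/g, "\n\n")              // collapse blank-line runs
    .trim();
}

const clean = normalize(
  "ACME Insurance Co.\n\nPage 1 of 10\n\n# Policy Terms\n\n\n\nCoverage details here."
);
// clean: "# Policy Terms\n\nCoverage details here."
```

Cleaner input at this stage directly improves embedding quality, since boilerplate lines would otherwise pollute every chunk stored in Pinecone.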