Workflows by Edisson Garcia
# Enhance Google Drive images with Gemini 2.5 Flash AI
## 🚀 Google Drive Image Enhancement with Gemini (nano banana)

**This workflow automates image enhancement by integrating Google Drive with Google Gemini.** It fetches unprocessed images from a source folder, applies AI-driven transformations based on a customizable prompt (e.g., clean, realistic product backgrounds), and uploads the enhanced results to a destination folder, streamlining e-commerce catalog preparation and creative pipelines.

---

### 🔑 Key Features

* **Customizable prompt node** → easily adjust the style/instructions for Gemini (e.g., backgrounds, lighting, focus).
* **Google Drive integration** → automatically fetches images from a source folder and uploads results to a target folder.
* **AI processing via Gemini** → converts original images to Base64, sends them with the prompt to Gemini, and returns enhanced versions.
* **Image filtering** → processes only files whose `mimeType` contains `"image"`.
* **Loop handling** → iterates over the source folder until all images are processed.

---

### ⚙️ Setup Instructions

1. **Configure the prompt**
   * Open the `promt` node.
   * Replace the text with your desired Gemini instructions (e.g., "Add a clean, realistic background for baby products").
2. **Set Google Drive folders**
   * In `origin_folder`, set **Search Query** to the name of the source folder (with unprocessed images).
   * In `destination_folder`, set **Search Query** to the name of the target folder (to save results).
3. **Credentials**
   * Provide valid **Google Drive OAuth2** credentials for both Drive nodes.
   * Provide a **Google Gemini API** credential for the `banana-request` node.
4. **Run the workflow**
   * Trigger from the `init` node.
   * The workflow will download → convert → send to Gemini → reconvert → upload results automatically.

---

### 🛠 Customization Guidance

* Modify the **prompt text** to change how Gemini processes the images (background, style, product focus).
* Swap **Search Query** for **folder IDs** in the Drive nodes if you need more precise targeting.
* Extend the workflow by chaining post-processing steps (e.g., watermarking, resizing, or tagging metadata).

---

© 2025 Innovatex • Automation & AI Solutions • innovatexiot.carrd.co • LinkedIn

---
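The `banana-request` step above amounts to a single `generateContent` call carrying the prompt text plus the Base64-encoded image as inline data. A minimal sketch of how such a request body can be assembled, assuming the Generative Language REST API's `contents`/`parts` shape (the helper name `buildGeminiRequest` and the model in the commented URL are illustrative assumptions, not nodes from this workflow):

```javascript
// Sketch: build the JSON body for a Gemini image-editing request.
// The request mixes a text part (the prompt) with an inline Base64
// image part, which is how Gemini receives the original picture.
function buildGeminiRequest(prompt, base64Image, mimeType) {
  return {
    contents: [
      {
        parts: [
          { text: prompt },
          { inline_data: { mime_type: mimeType, data: base64Image } },
        ],
      },
    ],
  };
}

// Hypothetical usage (model name and endpoint are placeholders;
// check the model configured in your own Gemini credential):
// const body = buildGeminiRequest(
//   'Add a clean, realistic background for baby products',
//   imageBase64,
//   'image/png'
// );
// await fetch(
//   'https://generativelanguage.googleapis.com/v1beta/models/<model>:generateContent',
//   {
//     method: 'POST',
//     headers: { 'x-goog-api-key': apiKey, 'Content-Type': 'application/json' },
//     body: JSON.stringify(body),
//   }
// );
```

The response contains the enhanced image as Base64, which the workflow then reconverts to binary before the Drive upload.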
# Message buffer system with Redis for efficient processing
## 🚀 Message-Batching Buffer Workflow (n8n)

**This workflow implements a lightweight message-batching buffer using Redis for temporary storage and a JavaScript consolidation function to merge messages.** It collects incoming user messages per session, waits for a configurable inactivity window or batch-size threshold, consolidates the buffered messages via custom code, then clears the buffer and returns the combined response, all without external LLM calls.

---

### 🔑 Key Features

* **Redis-backed buffer** queues incoming messages per `context_id`.
* **Centralized Config Parameters** node to adjust thresholds and timeouts in one place.
* **Dynamic wait time** based on message length (configurable `minWords`, `waitLong`, `waitShort`).
* **Batch trigger** fires on inactivity timeout or when `buffer_count` ≥ `batchThreshold`.
* **Zero-cost consolidation** via a built-in JavaScript Function (`consolidate buffer`), no GPT-4 or external API required.

---

### ⚙️ Setup Instructions

1. **Extract Session & Message**
   * Trigger: `When chat message received` (webhook) or `When clicking ‘Test workflow’` (manual).
   * Map inputs: set the variables `context_id` and `message` in a Set node named **Mock input data** (for testing) or a proper mapping node in production.

2. **Config Parameters**
   * Add a Set node **Config Parameters** with:

   ```
   minWords: 3        # Word threshold
   waitLong: 10       # Timeout (s) for long messages
   waitShort: 20      # Timeout (s) for short messages
   batchThreshold: 3  # Messages to trigger batch early
   ```

   * All downstream nodes reference these JSON values dynamically.

3. **Determine Wait Time**
   * Node: **get wait seconds** (Code)

   ```js
   // Count words in the incoming message and pick a timeout:
   // messages below the minWords threshold wait longer for follow-ups.
   const msg = $json.message || '';
   const wordCount = msg.split(/\s+/).filter(w => w).length;
   const { minWords, waitLong, waitShort } = items[0].json;
   const waitSeconds = wordCount < minWords ? waitShort : waitLong;
   return [{ json: { context_id: $json.context_id, message: msg, waitSeconds } }];
   ```

4. **Buffer Message in Redis**
   * **Buffer messages**: `LPUSH buffer_in:{{$json.context_id}}` with payload `{text, timestamp}`.
   * **Set buffer_count increment**: `INCR buffer_count:{{$json.context_id}}` with TTL `{{$json.waitSeconds + 60}}`.
   * **Set last_seen**: record a `last_seen:{{$json.context_id}}` timestamp with the same TTL.

5. **Check & Set Waiting Flag**
   * **Get waiting_reply**: if null, **Set waiting_reply** to `true` with TTL `{{$json.waitSeconds}}`; otherwise exit.

6. **Wait for Inactivity**
   * **WaitSeconds** (webhook): pauses for `{{$json.waitSeconds}}` seconds before batch evaluation.

7. **Check Batch Trigger**
   * **Get last_seen** and **Get buffer_count**.
   * IF `(now - last_seen) ≥ waitSeconds * 1000` OR `buffer_count ≥ batchThreshold`, proceed; otherwise use a **Wait** node to retry.

8. **Consolidate Buffer**
   * **consolidate buffer** (Code):

   ```js
   // Parse buffered entries (stored as JSON strings), drop malformed ones,
   // sort chronologically, de-duplicate, and join into a single message.
   const j = items[0].json;
   const raw = Array.isArray(j.buffer) ? j.buffer : [];
   const buffer = raw
     .map(x => { try { return typeof x === 'string' ? JSON.parse(x) : x; } catch { return null; } })
     .filter(Boolean);
   buffer.sort((a, b) => new Date(a.timestamp) - new Date(b.timestamp));
   const texts = buffer.map(e => e.text?.trim()).filter(Boolean);
   const unique = [...new Set(texts)];
   const message = unique.join(' ');
   return [{ json: { context_id: j.context_id, message } }];
   ```

9. **Cleanup & Respond**
   * **Delete** the Redis keys `buffer_in`, `buffer_count`, `waiting_reply`, and `last_seen` for the `context_id`.
   * Return the consolidated `message` to the user via your chat integration.

---

### 🛠 Customization Guidance

* **Adjust thresholds** by editing the **Config Parameters** node.
* **Change concatenation** (e.g., line breaks) by modifying the `join` separator in the consolidation code.
* **Add filters** (e.g., ignore empty or system messages) inside the consolidation Function.
* **Monitor performance**: for very high volume, consider sharding Redis keys by date or user segment.
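The batch-trigger check in step 7 reduces to a single boolean over two Redis reads. A pure-JavaScript sketch of that decision, using the default values from the **Config Parameters** node (the helper name `shouldFlush` is an illustrative assumption, not a node in this workflow):

```javascript
// Sketch of the step-7 decision: flush the buffer when the session has
// been idle for at least waitSeconds, OR when enough messages have
// accumulated to hit batchThreshold. Timestamps are in milliseconds.
function shouldFlush(nowMs, lastSeenMs, waitSeconds, bufferCount, batchThreshold) {
  const idleLongEnough = (nowMs - lastSeenMs) >= waitSeconds * 1000;
  const batchFull = bufferCount >= batchThreshold;
  return idleLongEnough || batchFull;
}

// With the defaults (waitLong = 10, batchThreshold = 3):
// idle for 12 s, 1 message buffered        → flush (inactivity)
// active 2 s ago, 3 messages buffered      → flush early (batch full)
// active 2 s ago, 1 message buffered       → keep waiting
```

When neither condition holds, the workflow loops back through the **Wait** node and re-evaluates on the next pass.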
--- © 2025 Innovatex • Automation & AI Solutions • [innovatexiot.carrd.co](https://innovatexiot.carrd.co/) • [LinkedIn](https://www.linkedin.com/in/edisson-andres-garcia-herrera-63a91517b/)