# Get Scaleway Server Info with Dynamic Filtering

## Description

This workflow is designed for developers, system administrators, and DevOps engineers who need to retrieve and filter Scaleway server information quickly and efficiently. It gathers data from Scaleway instances and baremetal servers across multiple zones and is ideal for:

- Quickly identifying servers by tags, names, public IPs, or zones.
- Automating server status checks in production, staging, or test environments.
- Integrating Scaleway data into broader monitoring or inventory systems.

## High-Level Steps

- **Webhook Trigger:** Receives an HTTP POST request (with basic authentication) containing the search criteria (`search_by` and `search`).
- **Server Data Collection:** Fetches server data from Scaleway's API endpoints for both instances and baremetal servers across the defined zones.
- **Data Processing:** Aggregates and normalizes the fetched data using a Code node with helper functions.
- **Dynamic Filtering:** Routes data to dedicated filtering routines (by tags, name, public_ip, or zone) based on the input criteria.
- **Response:** Returns the filtered data (or an error message) via a webhook response.

## Set Up Steps

1. **Insert Your Scaleway Token:** In the "Edit Fields" node, replace the placeholder `Your personal Scaleway X Auth Token` with your Scaleway API token.
2. **Configure Zones:** Review or update the zone lists (`ZONE_INSTANCE` and `ZONE_BAREMETAL`) to suit your environment.
3. **Send a Request:** Make a POST request to the workflow's webhook endpoint with a JSON payload, for example:

```json
{
  "search_by": "tags",
  "search": "Apiv1"
}
```

4. **View the Results:** The workflow returns a JSON array of servers matching your criteria, including details such as name, tags, public IP, type, state, zone, and user.
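The dynamic-filtering step can be sketched as follows. This is a minimal Python illustration of the routing logic described above, not the workflow's actual Code node (which runs JavaScript inside n8n); the function and field names here are hypothetical.

```python
# Hypothetical sketch of the workflow's dynamic filtering: route the
# normalized server list to a match rule based on `search_by`.

def filter_servers(servers, search_by, search):
    """Return the servers matching the criteria, or an error dict."""
    if search_by == "tags":
        # A server matches if the search term appears in its tag list.
        return [s for s in servers if search in s.get("tags", [])]
    if search_by in ("name", "public_ip", "zone"):
        # Exact match on a single normalized field.
        return [s for s in servers if s.get(search_by) == search]
    return {"error": f"Unsupported search_by value: {search_by}"}

# Example normalized records (fields mirror those listed in step 4).
servers = [
    {"name": "api-1", "tags": ["Apiv1"], "public_ip": "51.15.0.1", "zone": "fr-par-1"},
    {"name": "db-1", "tags": ["db"], "public_ip": "51.15.0.2", "zone": "nl-ams-1"},
]

print(filter_servers(servers, "tags", "Apiv1"))
```

As in the workflow, an unrecognized `search_by` value yields an error message rather than an empty result, so callers can distinguish "no match" from "bad request".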

*Pablo · DevOps · 16 Apr 2025*
# Ultimate scraper workflow for n8n
## What this template does

The Ultimate Scraper for n8n uses Selenium and AI to retrieve any information displayed on a webpage. You can also use session cookies to log in to the targeted webpage for more advanced scraping needs.

⚠️ Important: This project requires specific setup. Please follow the guidelines in the GitHub repository, n8n Ultimate Scraper Setup: https://github.com/Touxan/n8n-ultimate-scraper/tree/main. The workflow version on n8n and the GitHub project may differ; the most up-to-date version will always be the one in the GitHub repository.

## How to use

Deploy the project with all the requirements, then call your webhook.

**Example request:**

```bash
curl -X POST http://localhost:5678/webhook-test/yourwebhookid \
  -H "Content-Type: application/json" \
  -d '{
    "subject": "Hugging Face",
    "Url": "github.com",
    "Target data": [
      {
        "DataName": "Followers",
        "description": "The number of followers of the GitHub page"
      },
      {
        "DataName": "Total Stars",
        "description": "The total number of stars across the different repos"
      }
    ],
    "cookie": []
  }'
```

Or, to simply scrape a URL:

```bash
curl -X POST http://localhost:5678/webhook-test/67d77918-2d5b-48c1-ae73-2004b32125f0 \
  -H "Content-Type: application/json" \
  -d '{
    "Target Url": "https://github.com",
    "Target data": [
      {
        "DataName": "Followers",
        "description": "The number of followers of the GitHub page"
      },
      {
        "DataName": "Total Stars",
        "description": "The total number of stars across the different repos"
      }
    ],
    "cookies": []
  }'
```
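For callers who prefer Python over curl, the same request can be built with the standard library. This is a sketch assuming the local n8n webhook URL from the curl examples; the payload keys mirror those shown, and the placeholder webhook ID must be replaced with your own.

```python
# Build the scraper webhook request in Python (stdlib only).
# The URL and webhook ID are placeholders, as in the curl example.
import json
import urllib.request

payload = {
    "subject": "Hugging Face",
    "Url": "github.com",
    "Target data": [
        {
            "DataName": "Followers",
            "description": "The number of followers of the GitHub page",
        },
    ],
    "cookie": [],  # session cookies go here for logged-in scraping
}

req = urllib.request.Request(
    "http://localhost:5678/webhook-test/yourwebhookid",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment once the workflow is deployed
```

The `cookie` list accepts exported session cookies when the target page requires a login, matching the empty `"cookie": []` field in the first curl example.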

*Pablo · Market Research · 25 Sep 2024*