Automate GitHub trending data collection with FireCrawl, GPT and Supabase
GitHub Trending to Supabase (Daily, Weekly, Monthly)
Who is this for?
This workflow is for developers, researchers, founders, and data analysts who want a historical dataset of GitHub Trending repositories without manual scraping. It’s ideal for building dashboards, newsletters, or trend analytics on top of a clean Supabase table.
What problem is this workflow solving?
Checking GitHub Trending by hand (daily/weekly/monthly) is repetitive and error-prone. This workflow automates collection, parsing, and storage so you can reliably track changes over time and query them from Supabase.
What this workflow does
- Scrapes GitHub Trending across Daily, Weekly, and Monthly timeframes using FireCrawl.
- Extracts per-project fields: `name`, `url`, `description`, `language`, `stars`.
- Adds a `type` dimension (daily/weekly/monthly) to each row.
- Inserts structured results into a Supabase table for long-term storage.
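The extraction step above can be sketched in plain Python. This is a minimal, hypothetical parser: the regex assumes FireCrawl returns the trending page as markdown with `[owner/repo](url)`-style links, which depends entirely on your scrape options — adjust the pattern (or use GPT-based extraction, as the workflow title suggests) to match what your scrape actually returns.

```python
import re

def parse_trending(markdown: str, timeframe: str) -> list[dict]:
    """Parse repo entries out of scraped trending-page markdown.

    The pattern below is an assumption about the scraped markdown layout
    (link, description, language, and star count separated by pipes);
    adapt it to the real FireCrawl output.
    """
    pattern = re.compile(
        r"\[(?P<name>[\w.-]+/[\w.-]+)\]\((?P<url>https://github\.com/[^)]+)\)\s*"
        r"(?P<desc>[^|]*)\|\s*(?P<lang>[\w+#]*)\s*\|\s*(?P<stars>[\d,]+)"
    )
    rows = []
    for m in pattern.finditer(markdown):
        rows.append({
            "name": m.group("name"),
            "url": m.group("url"),
            "description": m.group("desc").strip(),
            "language": m.group("lang"),
            "stars": int(m.group("stars").replace(",", "")),
            "type": timeframe,  # the added daily/weekly/monthly dimension
        })
    return rows

# Illustrative input only — not real FireCrawl output.
sample = (
    "[vercel/next.js](https://github.com/vercel/next.js) "
    "The React Framework | TypeScript | 120,345"
)
print(parse_trending(sample, "daily"))
```

The same `type` tagging is what lets a single Supabase table hold all three timeframes side by side.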
Setup
Ensure you have an n8n instance (Cloud or self-hosted).
Create credentials:
- FireCrawl API credential (no hardcoded keys in nodes).
- Supabase credential (URL + Service Role / Insert-capable key).
Prepare a Supabase table (example):
```sql
CREATE TABLE public.githubtrending (
  id bigint GENERATED ALWAYS AS IDENTITY NOT NULL,
  created_at timestamp with time zone NOT NULL DEFAULT now(),
  data_date date DEFAULT now(),
  url text,
  project_id text,
  project_desc text,
  code_language text,
  stars bigint DEFAULT '0'::bigint,
  type text,
  CONSTRAINT githubtrending_pkey PRIMARY KEY (id)
);
```
Import this workflow JSON into n8n.
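Note that the table's column names differ from the extracted field names (`name` → `project_id`, `description` → `project_desc`, `language` → `code_language`). A small mapping sketch, using a hypothetical `to_supabase_row` helper, shows the payload shape the Supabase insert node expects; `id` and `created_at` are filled by the database defaults and are omitted:

```python
from datetime import date

def to_supabase_row(repo: dict) -> dict:
    """Map one extracted trending entry onto the githubtrending columns.

    `repo` is assumed to carry the fields the workflow extracts:
    name, url, description, language, stars, type.
    """
    return {
        "data_date": date.today().isoformat(),
        "url": repo["url"],
        "project_id": repo["name"],
        "project_desc": repo["description"],
        "code_language": repo["language"],
        "stars": repo["stars"],
        "type": repo["type"],
    }

row = to_supabase_row({
    "name": "vercel/next.js",
    "url": "https://github.com/vercel/next.js",
    "description": "The React Framework",
    "language": "TypeScript",
    "stars": 120345,
    "type": "daily",
})
print(row)
```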
Run once to validate, then schedule (e.g., daily at 08:00).