Executive Summary
Most Web3 projects talk about decentralisation but deploy to Vercel. This project ships the full stack: a static portfolio site that is built by GitHub Actions, tested with Vitest and Playwright, pinned to IPFS via Pinata, and made globally accessible through an ENS .eth domain — with an eth.limo HTTPS fallback for users without a Web3 browser. Netlify handles staging previews on every pull request; production publishes only on a merge to main, at which point the ENS contenthash record is updated on-chain. The result is a content pipeline with no single point of failure, no hosting bill, and a verifiable, immutable audit trail on Ethereum mainnet.
Problem → Solution → Outcome
Problem
Publishing a website typically means trusting a centralised host — Vercel, Netlify, AWS S3 — to keep your content available. That dependency is invisible until it isn’t: accounts get suspended, regions go down, providers change pricing. For a developer building in Web3, deploying to a Web2 CDN is an architectural contradiction.
Solution
Build a CI/CD pipeline where the production publish step is IPFS pinning and an ENS contenthash update, not an FTP upload or a git push to a platform. Every code change triggers automated tests. Every pull request gets a Netlify preview for human review. Every merge to main produces a deterministic build, pins it to IPFS, and updates the ENS record — so the domain always resolves to the latest content without anyone manually touching DNS or a dashboard.
Outcome
A fully automated, decentralised publishing workflow where:
- staging previews are live within 60 seconds of opening a pull request
- no single hosting provider can take production offline
- every published version is content-addressed, permanently retrievable by its CID, and independently verifiable
- the only irreversible action in the pipeline — the ENS `contenthash` update — requires an explicit merge to `main`, providing a natural gas-cost gate
Architecture
Pull Request / Staging Workflow
Every branch push triggers a GitHub Actions build job and unit + e2e test jobs. If tests pass, Netlify automatically generates a Deploy Preview at a unique URL. Pull request reviewers get a live, isolated preview environment with no manual deployment step. Nothing reaches IPFS at this stage — Netlify is used exclusively for ephemeral preview and is considered a convenience layer, not part of the production path.
Branch push
→ GitHub Actions: build + test
→ Netlify: Deploy Preview URL (staging only)
→ PR review on preview URL
Test Execution Before Deploy
Two test layers run in CI before any build artifact is used:
- Unit tests (Vitest): validate content collection frontmatter schemas — every Markdown file in `experience/`, `education/`, `projects/`, and `blog/` must pass typed field assertions before the build proceeds.
- E2E tests (Playwright): spin up a production preview server and assert that all routes return HTTP 200, the profile renders, nav links resolve, project case studies load, and the browser console is clean.
Production deploys are blocked if either layer fails.
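The frontmatter gate can be pictured with a minimal sketch — plain TypeScript rather than the actual Vitest suite, and with illustrative field names (`title`, `startDate`, `tags`) standing in for the real schemas:

```typescript
// Illustrative frontmatter check: field names are assumptions, not the
// site's actual schema. The real suite expresses these as Vitest assertions
// over Astro content collections.
function isProjectFrontmatter(data: Record<string, unknown>): boolean {
  return (
    typeof data.title === "string" &&
    data.title.length > 0 &&
    typeof data.startDate === "string" &&
    !Number.isNaN(Date.parse(data.startDate as string)) && // must be a parseable date
    Array.isArray(data.tags) &&
    (data.tags as unknown[]).every((t) => typeof t === "string")
  );
}
```

A file missing `title`, or with an unparseable `startDate`, fails this check and the build stops before any artifact is produced.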
Build Step
`astro build` generates a deterministic `dist/` directory of pure static HTML, CSS, and minimal JavaScript. No server-side rendering, no Node.js runtime dependency at serve time. The output is a flat directory tree suitable for direct IPFS upload.
IPFS Upload and Pinning
On merge to main, a GitHub Actions job uploads the dist/ directory to IPFS using the Pinata API. Pinata returns a CID (content identifier) — a SHA-256-based hash of the entire directory tree. The CID is deterministic: the same content always produces the same CID, making every deployment independently verifiable.
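The determinism claim is just content addressing: hash the same bytes, get the same digest. A toy illustration with Node's built-in SHA-256 (a real CID wraps this raw digest in multihash and dag-pb framing):

```typescript
import { createHash } from "node:crypto";

// Content addressing in miniature: identical bytes always yield the same
// digest, so an identical dist/ always yields the same CID.
function digestHex(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

// Same input → same digest; any changed byte → a completely different digest.
```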
Pinata holds a persistent pin, ensuring the content survives garbage collection across the IPFS network. A secondary pin can be added to a self-hosted IPFS node or Web3.Storage for redundancy with no additional workflow changes.
Merge to main
→ astro build → dist/
→ Pinata API: upload dist/ → CID
→ Pin confirmed
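A sketch of the pin step. The endpoint, Bearer-JWT auth, and the `IpfsHash` response field follow Pinata's public API, but treat them as assumptions to verify against current Pinata documentation:

```typescript
// Sketch of the pin step, assuming Pinata's pinFileToIPFS endpoint and a
// PINATA_JWT secret exposed to the workflow. Not the repository's actual script.
const PINATA_ENDPOINT = "https://api.pinata.cloud/pinning/pinFileToIPFS";

function pinataHeaders(jwt: string): Record<string, string> {
  // FormData sets the multipart boundary; only auth is needed here
  return { Authorization: `Bearer ${jwt}` };
}

async function pinDirectory(jwt: string, form: FormData): Promise<string> {
  const res = await fetch(PINATA_ENDPOINT, {
    method: "POST",
    headers: pinataHeaders(jwt),
    body: form,
  });
  if (!res.ok) throw new Error(`Pinata upload failed: HTTP ${res.status}`);
  const json = (await res.json()) as { IpfsHash: string };
  return json.IpfsHash; // the CID the ENS step publishes
}
```

In the workflow, the `dist/` files would typically be appended to the `FormData` with their relative paths so Pinata pins the directory as a single tree under one CID.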
ENS contenthash Update
With the CID in hand, the pipeline calls the ENS Public Resolver on Ethereum mainnet to update the contenthash record for wakqasahmed.eth to ipfs://<CID>. This is a single on-chain transaction signed by a deployer wallet whose private key is stored as a GitHub Actions secret.
This step is intentionally gated to main branch merges only — not to every commit, not to staging, not to tags. The reason is straightforward: each ENS update costs gas. Triggering it on every push would be wasteful and would expose the deployer key to more signing events than necessary. Merging to main is the deliberate release action; the gas cost is the economic signal that this is a real publish event.
CID confirmed
→ ENS Public Resolver: setContenthash(wakqasahmed.eth, ipfs://CID)
→ Transaction broadcast → confirmed on mainnet
→ wakqasahmed.eth resolves to new content globally
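Under the hood, `setContenthash` takes bytes, not a CID string, so the pipeline must encode `ipfs://<CID>` per EIP-1577. A hand-rolled sketch of the byte layout for a CIDv0 (`Qm…`) — production code would normally use a library such as `@ensdomains/content-hash` rather than this:

```typescript
// EIP-1577 contenthash for an IPFS CIDv0, hand-rolled only to show the byte
// layout. A CIDv0 is a base58btc-encoded multihash: 0x12 0x20 <32-byte digest>.
const B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

function base58Decode(s: string): Uint8Array {
  let n = 0n;
  for (const c of s) {
    const i = B58.indexOf(c);
    if (i < 0) throw new Error(`invalid base58 char: ${c}`);
    n = n * 58n + BigInt(i);
  }
  const bytes: number[] = [];
  while (n > 0n) {
    bytes.unshift(Number(n & 0xffn));
    n >>= 8n;
  }
  for (const c of s) { // leading '1' characters encode leading zero bytes
    if (c !== "1") break;
    bytes.unshift(0);
  }
  return Uint8Array.from(bytes);
}

function base58Encode(bytes: Uint8Array): string { // used for the roundtrip check
  let n = 0n;
  for (const b of bytes) n = n * 256n + BigInt(b);
  let s = "";
  while (n > 0n) {
    s = B58[Number(n % 58n)] + s;
    n /= 58n;
  }
  for (const b of bytes) {
    if (b !== 0) break;
    s = "1" + s;
  }
  return s;
}

// EIP-1577 wraps the CID as: varint(ipfs-ns) ++ CIDv1 version ++ dag-pb codec ++ multihash.
function cidV0ToContenthash(cid: string): string {
  const multihash = base58Decode(cid);
  const prefixed = [0xe3, 0x01, 0x01, 0x70, ...multihash];
  return "0x" + prefixed.map((b) => b.toString(16).padStart(2, "0")).join("");
}
```

The resulting hex string always begins `0xe3010170` for IPFS content; that prefix is what gateways like eth.limo look for when resolving the record.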
eth.limo Fallback
For users without a Web3-capable browser or IPFS extension, https://wakqasahmed.eth.limo acts as an HTTPS gateway that resolves the ENS name and serves the IPFS content over HTTP. This is a read-only fallback — it introduces no centralised write dependency and can be swapped for any other ENS gateway (e.g., eth.link, a self-hosted gateway) without changing the pipeline.
Why This Is Better Than Manual Publishing
| Manual publishing | This pipeline |
|---|---|
| Developer remembers to deploy | Merge to main triggers deploy automatically |
| Hosting provider can suspend account | Content is on IPFS — no account to suspend |
| Previous versions may not be retrievable | Every CID is permanently accessible |
| No audit trail | Every deployment is an on-chain transaction |
| “Works on my machine” staging | Isolated Netlify preview per PR |
| DNS propagation delay | ENS update effective within ~1 block (~12 seconds) |
Case Study
Challenge
Personal portfolio sites are typically hosted on Vercel or Netlify and forgotten. For a developer who works in Web3 and talks about decentralisation to clients, deploying to a Web2 CDN while claiming expertise in decentralised infrastructure is a credibility gap. The challenge was to build a pipeline that is genuinely decentralised in production — not as a proof of concept, but as the live, maintained deployment of a real site.
Requirements
- Automated builds on every push; no manual deployment steps
- Isolated staging environments per pull request for human review
- Test gate: unit and e2e tests must pass before any production publish
- Production content on IPFS, not on any centralised host
- `.eth` domain resolution without requiring visitors to install anything
- Gas cost control: ENS updates triggered only on deliberate release events
- Secrets management: deployer key never committed to source
Implementation
The pipeline is implemented as two GitHub Actions workflows plus a Netlify integration:
- `ci.yml` — runs on every push to `main` and `staging`, and on every pull request targeting those branches. Jobs: `build`, `unit`, `e2e`. The `e2e` job depends on `build` completing successfully. Pull requests cannot be merged if this workflow fails.
- `deploy-ipfs.yml` — runs only on push to `main` after `ci.yml` passes. Steps: build → upload to Pinata → extract CID → call ENS resolver.
- Netlify is connected to the repository via its GitHub App. Deploy Previews are generated automatically for every pull request with no workflow YAML required. Netlify is explicitly not used for production.
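One possible shape for the production gating: a `workflow_run` trigger ties the deploy workflow to a successful CI run on `main` only. Job, step, script, and secret names below are illustrative assumptions, not the repository's actual files:

```yaml
# deploy-ipfs.yml — illustrative trigger only
on:
  workflow_run:
    workflows: ["ci"]        # name of the CI workflow it waits on
    types: [completed]
    branches: [main]         # never fires for PRs or staging

jobs:
  publish:
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"              # pinned for deterministic builds / stable CIDs
      - run: npm ci && npm run build
      - run: node scripts/pin-and-publish.mjs   # hypothetical script name
        env:
          PINATA_JWT: ${{ secrets.PINATA_JWT }}
          DEPLOYER_KEY: ${{ secrets.DEPLOYER_KEY }}
```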
Content is authored as Markdown files with typed frontmatter, validated at build time by Astro’s content collections. Schema violations fail the build before any test runs.
Trade-offs
Netlify for staging vs. a self-hosted preview environment
Netlify was chosen for staging previews because it provides isolated, shareable URLs per pull request with no infrastructure overhead. The trade-off is a dependency on Netlify for the review workflow. This is an acceptable risk: Netlify going down delays reviews but does not affect production. The production path has no Netlify dependency.
Pinata for pinning vs. a self-hosted IPFS node
Pinata is a managed pinning service. Using it introduces a dependency on a third party for pin persistence. The mitigation is that IPFS content-addressing means the content itself is not controlled by Pinata — only the pin is. If Pinata were to disappear, re-pinning the content elsewhere using the known CID would restore availability. A secondary pin on a self-hosted node or Web3.Storage would eliminate this risk entirely.
ENS updates on mainnet vs. a cheaper L2
ENS on Ethereum mainnet is the most widely supported option for .eth resolution. L2 ENS resolvers exist but have inconsistent gateway support. For infrequent updates (once per release), mainnet gas costs are manageable and the trade-off of broader compatibility is worth it.
Deterministic builds
IPFS CIDs are sensitive to file metadata. Builds are pinned to a specific Node.js version in CI to prevent CID drift between local and CI environments. Non-deterministic builds would result in a different CID each time, requiring an ENS update even for identical content — wasting gas.
Security Considerations
- The deployer wallet private key is stored as an encrypted GitHub Actions secret. It is scoped to the minimum permission required: calling `setContenthash` on the ENS resolver for a single name.
- The wallet holds only enough ETH to cover gas for ENS updates. It is not a funds wallet.
- The ENS name itself is owned by a separate cold wallet. The deployer wallet is a hot key with no ownership rights — it can update content but cannot transfer the name.
- All CI jobs run in GitHub-hosted runners with no persistent state between runs.
Gas Fee Considerations
ENS contenthash updates cost approximately 45,000–65,000 gas, which at typical mainnet conditions amounts to a few cents to a few dollars depending on network congestion. This pipeline triggers the update exactly once per merge to main. A project with one release per week would spend well under $10/month on gas. Teams releasing multiple times per day should evaluate batching releases or using a cheaper resolver.
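The arithmetic behind that estimate, with illustrative snapshot values for gas price and ETH/USD (not live figures):

```typescript
// Per-release cost of a setContenthash transaction.
// gasPriceGwei and ethUsd are illustrative snapshots, not live values.
function ensUpdateCostUsd(
  gasUsed: number,
  gasPriceGwei: number,
  ethUsd: number
): number {
  const costEth = gasUsed * gasPriceGwei * 1e-9; // gwei → ETH
  return costEth * ethUsd;
}

// 60,000 gas at 10 gwei and $3,000/ETH:
// 60_000 * 10e-9 = 0.0006 ETH → $1.80 per release,
// i.e. under $10/month at one release per week.
const perRelease = ensUpdateCostUsd(60_000, 10, 3_000);
```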
Reliability Considerations
- IPFS availability: content pinned by Pinata is served by Pinata’s gateway nodes and replicated across any node that requests it. Cold content (not recently requested) may have slower initial load via public gateways. A dedicated gateway (self-hosted or via a provider like Cloudflare’s IPFS gateway) eliminates this.
- ENS propagation: ENS updates are confirmed on-chain in ~12 seconds (one block). Gateway caches (eth.limo, eth.link) may lag by a few minutes.
- GitHub Actions availability: if GitHub Actions is down, deploys are delayed but no production content is affected. The last pinned CID remains live indefinitely.
What I Built
- A static portfolio site in Astro with content collections backed entirely by local Markdown files — no CMS, no API calls at runtime
- Vitest unit test suite validating all content collection schemas (experience, education, projects, blog)
- Playwright e2e smoke test suite covering all routes, navigation, rendering correctness, and console cleanliness
- GitHub Actions CI pipeline with three jobs (`build`, `unit`, `e2e`) gating all merges to `main` and `staging`
- An IPFS publish workflow that uploads `dist/` to Pinata on merge to `main` and stores the resulting CID
- An ENS contenthash update step that writes `ipfs://<CID>` to the `wakqasahmed.eth` resolver on Ethereum mainnet
- An eth.limo HTTPS fallback for non-Web3 browsers, requiring no additional infrastructure
- A Netlify staging integration providing per-PR preview URLs for human review, explicitly decoupled from the production path
- Structured data (JSON-LD `Person`, `WebSite`, `SoftwareApplication`, `ItemList`) on every page for SEO and answer engine optimisation
Why This Matters
For technical leads and engineering teams: This demonstrates the ability to design and operate a deployment pipeline end to end — not just write application code. The pipeline handles test automation, artifact management, third-party API integration (Pinata), on-chain transactions (ENS), and secrets management, all wired together in a reproducible CI/CD workflow. Every component has a documented reason for being there and a documented trade-off for the alternative.
For founders building in Web3:
Most Web3 projects have a centralised deployment problem they haven’t addressed. This pipeline is a concrete reference implementation of how to publish without a hosting dependency — with a .eth domain that works in Web3 browsers natively and degrades gracefully to HTTPS for everyone else. It can be adapted to any static frontend in a weekend.
For recruiters:
The candidate built a real system with real constraints (gas costs, IPFS determinism, secret scoping, test gating) and shipped it. It is live. You can visit wakqasahmed.eth.limo and see the result. This is not a tutorial project. The pipeline is documented, version-controlled, and maintained.
Technologies Used
- Astro — static site generator
- Tailwind CSS — utility-first styling
- Vitest — unit testing
- Playwright — end-to-end testing
- GitHub Actions — CI/CD orchestration
- Netlify — PR staging previews (not production)
- IPFS — decentralised content storage
- Pinata — IPFS pinning service
- ENS (Ethereum Name Service) — decentralised domain
- eth.limo — ENS/IPFS HTTPS gateway for non-Web3 browsers
- Ethereum mainnet — on-chain contenthash resolution
- ethers.js / viem — ENS resolver interaction in deploy script
Results
| Metric | Value |
|---|---|
| Time from merge to IPFS pin confirmed | < 3 minutes |
| Time from IPFS pin to ENS resolution live | < 1 block (~12 seconds after tx confirm) |
| Test coverage (routes) | 6 / 6 routes covered by e2e |
| Unit test assertions | 9 content schema assertions |
| Gas cost per release (ENS update) | ~45,000–65,000 gas (cents to a few dollars, congestion-dependent) |
| Single points of failure in production path | 0 |
| Hosting providers that can take the site down | 0 |
| Previous versions retrievable by CID | All |
Next Steps
- Secondary IPFS pin on a self-hosted node or Web3.Storage to eliminate Pinata as a single dependency for pin persistence
- L2 ENS resolver evaluation for cost reduction on higher-frequency release schedules
- Subgraph indexing of ENS contenthash history to expose a public, queryable deployment log
- Chatbot integration — a Claude-powered assistant grounded on the site’s Markdown content, deployed as a Cloudflare Worker (serverless, compatible with the static/IPFS architecture)
- Lighthouse CI added to the GitHub Actions pipeline for automated performance regression detection