Frequently Asked Questions

Find answers to common questions about Crawlinker

Getting Started

How do I scan my website?

Simply go to the homepage, enter your website URL (e.g., https://example.com) in the scan form, and click "Start Scan". The scan takes 1-5 minutes depending on your site size. No signup or account is required.

Do I need to create an account?

No! Crawlinker is completely free and requires no signup. Just enter your URL and start scanning immediately.

How long does a scan take?

Scan time depends on your website size:
  • Small sites (10-50 pages): 30 seconds - 1 minute
  • Medium sites (50-200 pages): 1-3 minutes
  • Large sites (200-500 pages): 3-5 minutes

Is Crawlinker really free?

Yes! Crawlinker is a free hobby project. There are no hidden fees, premium tiers, or credit card requirements. The only limitation is the 500-page scan limit per session.

What happens after a scan completes?

You'll be taken to a dashboard showing all findings organized by category: broken links, redirects, SEO issues, and crawl results. You can sort, filter, and search the data, and export it as CSV.

Features & Limits

How many pages can I scan?

Free scans are limited to 500 pages per session. This covers most small to medium websites. If your site has more pages, the scan stops at 500 and shows results for the pages that were analyzed.

How deep does Crawlinker crawl?

Crawlinker scans up to 10 levels deep from your homepage, meaning it follows links up to 10 clicks away from your starting URL. This is more than enough for most website structures.

Does Crawlinker check external links?

Yes! Crawlinker validates all external links to ensure they're not broken. External links are checked with HEAD requests to verify they return successful HTTP status codes.
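
The HEAD-request technique described above can be sketched in a few lines of Python using only the standard library. This is an illustration of the general approach, not Crawlinker's actual code; the function name and timeout value are assumptions.

```python
# Sketch: validate a link with an HTTP HEAD request (illustrative,
# not Crawlinker's internals).
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def is_link_ok(url: str, timeout: float = 10.0) -> bool:
    """Return True if a HEAD request yields a 2xx/3xx status."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            # urlopen follows redirects, so most 3xx chains resolve here.
            return 200 <= resp.status < 400
    except HTTPError as err:
        # Server answered with an error status (4xx/5xx).
        return 200 <= err.code < 400
    except URLError:
        # DNS failure, refused connection, timeout, etc.
        return False
```

One caveat worth knowing: some servers reject HEAD requests even when the page is fine, so a production checker would typically retry with GET after a 405 response.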

What SEO issues does Crawlinker detect?

Crawlinker analyzes:
  • Missing or duplicate title tags
  • Missing or duplicate meta descriptions
  • Missing H1 tags
  • Images without alt text
  • Page load times
  • Word count per page
  • Duplicate pages via canonical URLs
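
As an illustration of how checks like the ones above can work, here is a small sketch built on Python's standard-library HTML parser. The class and function names are hypothetical, not Crawlinker's internals, and it covers only a few of the listed checks.

```python
# Hypothetical sketch of per-page SEO checks (not Crawlinker's code).
from html.parser import HTMLParser

class SEOAudit(HTMLParser):
    """Counts on-page elements relevant to the checks listed above."""
    def __init__(self):
        super().__init__()
        self.titles = 0
        self.h1s = 0
        self.meta_descriptions = 0
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.titles += 1
        elif tag == "h1":
            self.h1s += 1
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_descriptions += 1
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1

def audit(html: str) -> list[str]:
    """Return a list of human-readable issues found in the page."""
    parser = SEOAudit()
    parser.feed(html)
    issues = []
    if parser.titles == 0:
        issues.append("missing title tag")
    if parser.meta_descriptions == 0:
        issues.append("missing meta description")
    if parser.h1s == 0:
        issues.append("missing H1 tag")
    if parser.images_missing_alt:
        issues.append(f"{parser.images_missing_alt} image(s) without alt text")
    return issues
```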

Can I export the results?

Yes! You can export all results to CSV format, so you can save the data, share it with your team, or analyze it in Excel or Google Sheets.

Can I control which pages get scanned?

Advanced Options let you control what gets scanned:
  • Allowed Paths: Only scan specific sections (e.g., /blog/, /docs/)
  • Excluded Paths: Skip certain sections (e.g., /admin/, /tag/)
This helps focus scans on important content and speeds up the process.
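
The allow/exclude behavior described above can be illustrated with a short Python helper. The function name and the exact matching rule (simple path-prefix matching) are assumptions; the paths mirror the examples above.

```python
# Illustrative path filter for allowed/excluded sections (assumed logic,
# not Crawlinker's implementation).
from urllib.parse import urlparse

def should_crawl(url: str, allowed: list[str], excluded: list[str]) -> bool:
    """Excluded prefixes always win; if an allow-list is set,
    the path must match one of its prefixes."""
    path = urlparse(url).path
    if any(path.startswith(prefix) for prefix in excluded):
        return False
    if allowed and not any(path.startswith(prefix) for prefix in allowed):
        return False
    return True
```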

Technical Questions

Can Crawlinker scan pages behind a login?

No. Crawlinker can only scan publicly accessible pages. If your site requires login or authentication, those pages won't be crawled.

Does Crawlinker work with JavaScript-rendered sites?

Not fully. Crawlinker analyzes server-rendered HTML. If your site uses client-side rendering (React, Vue, or Angular without SSR), links and content generated by JavaScript won't be detected. It works best with traditional websites, WordPress sites, static sites, and server-side rendered applications.

Does Crawlinker respect robots.txt?

Yes! Crawlinker automatically respects your robots.txt file and won't scan disallowed paths.
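
For reference, Python's standard library ships a robots.txt parser that implements this kind of check. The snippet below is a generic illustration of how disallow rules are evaluated, not Crawlinker's implementation.

```python
# Evaluating robots.txt rules with the standard library (illustrative).
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
# Inline rules for demonstration; a crawler would fetch the live file.
rules.parse([
    "User-agent: *",
    "Disallow: /admin/",
])
rules.can_fetch("Crawlinker", "https://example.com/admin/")  # disallowed
rules.can_fetch("Crawlinker", "https://example.com/blog/")   # allowed
```

In a real crawler you would load the live file with rules.set_url("https://example.com/robots.txt") followed by rules.read(), then call can_fetch before requesting each page.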

How often should I scan my site?

We recommend:
  • Monthly scans for actively maintained sites
  • After major updates (redesigns, content migrations, URL changes)
  • Quarterly scans for stable, low-traffic sites
Regular scanning helps catch broken links before they impact SEO or user experience.

Ready to Fix Your Broken Links?

Start scanning now. No credit card required.

Start Your Free Scan