How We Test Website Status

Our enterprise-grade monitoring infrastructure performs comprehensive availability checks every 5-10 minutes, using the six-step validation process below to ensure accurate status reporting.

1. HTTP/HTTPS Protocol Testing

We establish TCP connections to ports 80 (HTTP) and 443 (HTTPS) to measure the full request-response cycle. Response codes are parsed (200 OK, 301/302 redirects, 404/500 errors) and latency is measured in milliseconds. Any request that exceeds the 10-second timeout triggers a DOWN status.
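The classification logic above can be sketched roughly as follows. This is a simplified Python illustration, not our production PHP implementation; the function names and the exact code-to-status mapping here are illustrative assumptions.

```python
import time
import urllib.error
import urllib.request

def classify_status(code, elapsed_ms, timeout_s=10.0):
    """Map an HTTP response code and latency to a coarse status label."""
    if elapsed_ms > timeout_s * 1000:
        return "DOWN"          # exceeded the 10-second budget
    if 200 <= code < 400:      # 2xx success and 3xx redirects count as reachable
        return "UP"
    return "DOWN"              # 4xx/5xx responses are treated as failures

def check_http(url, timeout_s=10.0):
    """Fetch a URL and return (status_code, latency_ms, label)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            code = resp.status
    except urllib.error.HTTPError as exc:
        code = exc.code        # 4xx/5xx still carry a response code
    except (urllib.error.URLError, OSError):
        return None, None, "DOWN"   # timeout, refused connection, DNS failure
    elapsed_ms = (time.monotonic() - start) * 1000
    return code, elapsed_ms, classify_status(code, elapsed_ms, timeout_s)
```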

2. DNS Resolution Validation

Before HTTP checks, we query authoritative nameservers for A/AAAA records to resolve domains to IP addresses. DNS failures (NXDOMAIN, SERVFAIL, no records) are logged separately from server issues. Resolution times under 200ms indicate healthy DNS infrastructure.
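In Python terms, this DNS step looks roughly like the sketch below, which uses the system resolver via the standard library; the 200 ms health threshold is taken from the text, while the function shape is an illustrative assumption.

```python
import socket
import time

def resolve(domain):
    """Resolve a domain to its A/AAAA addresses, timing the lookup.

    Returns (addresses, elapsed_ms, health). Resolution failures
    (NXDOMAIN, SERVFAIL, no records) surface as socket.gaierror and
    are reported separately from server-side issues.
    """
    start = time.monotonic()
    try:
        infos = socket.getaddrinfo(domain, None, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        return [], None, f"DNS_ERROR: {exc}"
    elapsed_ms = (time.monotonic() - start) * 1000
    addrs = sorted({info[4][0] for info in infos})   # dedupe A/AAAA answers
    health = "healthy" if elapsed_ms < 200 else "slow"
    return addrs, elapsed_ms, health
```

Note that `getaddrinfo` consults the OS resolver (including the hosts file), whereas a production probe would query authoritative nameservers directly.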

3. SSL/TLS Certificate Inspection

For HTTPS endpoints, we validate the entire certificate chain: root CA trust, intermediate certificates, expiration date, common name/SAN matching, and revocation status (OCSP). Expired or untrusted certificates trigger warnings even if HTTP responds with 200 OK.
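A minimal version of this inspection can be written with Python's `ssl` module, which verifies the chain and hostname against the platform trust store during the handshake (OCSP revocation checking is not shown). The helper names are illustrative assumptions.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after):
    """Parse OpenSSL's notAfter string (e.g. 'Dec 31 23:59:59 2049 GMT')."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    delta = expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return delta.days

def inspect_certificate(host, port=443, timeout_s=10.0):
    """Fetch the peer certificate and report its subject and days to expiry.

    An expired or untrusted chain, or a common name/SAN mismatch, raises
    ssl.SSLCertVerificationError during the handshake.
    """
    ctx = ssl.create_default_context()   # chain + hostname checks by default
    with socket.create_connection((host, port), timeout=timeout_s) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return cert.get("subject"), days_until_expiry(cert["notAfter"])
```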

4. Multi-Region Latency Testing

Distributed probes from 6 geographic locations (US East/West, EU West, Asia Pacific, South America, Africa) concurrently test domains to detect regional outages or CDN failures. Average latency is calculated, and any single region exceeding 5 seconds is flagged for performance degradation.
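Conceptually, the concurrent multi-region probe reduces to the sketch below; the 5-second degradation threshold comes from the text, while `probe_fn` stands in for whatever per-region latency measurement is used.

```python
from concurrent.futures import ThreadPoolExecutor

def probe_all_regions(probe_fn, regions):
    """Probe every region concurrently; probe_fn(region) -> latency in seconds.

    Returns the average latency and the list of regions flagged as degraded.
    """
    with ThreadPoolExecutor(max_workers=len(regions)) as pool:
        latencies = dict(zip(regions, pool.map(probe_fn, regions)))
    avg = sum(latencies.values()) / len(latencies)
    flagged = [r for r, secs in latencies.items() if secs > 5.0]  # > 5 s
    return avg, flagged
```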

5. Real-Time Status Aggregation

Results from all monitoring nodes are aggregated using a weighted consensus algorithm. If 3+ nodes report DOWN simultaneously, the site is marked as globally down. Response times > 2000ms trigger SLOW status. Intermittent failures are tracked to calculate 30-day uptime percentage (99.9%+ is industry standard).
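A stripped-down (unweighted) version of this aggregation rule can be sketched as follows; the 3-node quorum, the 2000 ms SLOW threshold, and the uptime percentage come from the text, while the data shapes are illustrative assumptions.

```python
def aggregate(node_reports, down_quorum=3, slow_ms=2000):
    """node_reports: list of (is_up, latency_ms) tuples, one per node."""
    down_votes = sum(1 for up, _ in node_reports if not up)
    if down_votes >= down_quorum:          # 3+ nodes agree: globally down
        return "DOWN"
    latencies = [ms for up, ms in node_reports if up and ms is not None]
    avg = sum(latencies) / len(latencies) if latencies else 0.0
    return "SLOW" if avg > slow_ms else "UP"

def uptime_pct(checks_total, checks_failed):
    """30-day uptime as a percentage of successful checks."""
    return 100.0 * (checks_total - checks_failed) / checks_total
```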

6. User Feedback Verification

Community-reported incidents are cross-validated against automated checks. User submissions with error screenshots, traceroutes, or specific HTTP codes are weighted higher. This crowdsourced data helps identify localized ISP routing issues or geo-blocked content that automated probes might miss.
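One way to express this evidence weighting is the toy scoring function below; the specific weights and confirmation threshold are illustrative assumptions, not our production values.

```python
# Hypothetical bonus per evidence type attached to a user report.
EVIDENCE_WEIGHTS = {"screenshot": 2.0, "traceroute": 3.0, "http_code": 2.5}

def report_weight(evidence):
    """Weight a single report: 1.0 baseline plus a bonus per evidence type."""
    return 1.0 + sum(EVIDENCE_WEIGHTS.get(e, 0.0) for e in set(evidence))

def corroborated(reports, automated_down, threshold=5.0):
    """An incident is confirmed if automated checks agree, or if the
    combined evidence-weighted score of user reports clears the threshold."""
    score = sum(report_weight(evidence) for evidence in reports)
    return automated_down or score >= threshold
```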

🔬 Technical Details: Our infrastructure uses asynchronous PHP cURL multi-handles with socket timeouts, OpenSSL peer verification, and database-backed result caching. Historical data is retained for 90 days to enable trend analysis and predictive outage detection. All checks respect robots.txt and implement exponential backoff for rate limiting.
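The exponential backoff mentioned above follows the usual capped pattern: each retry may wait up to twice as long as the last, with random jitter to avoid synchronized retries. A generic Python sketch (the base delay and cap are assumed values):

```python
import random

def backoff_delays(retries, base=1.0, cap=60.0):
    """Exponential backoff with full jitter.

    The i-th delay is drawn uniformly from [0, min(cap, base * 2**i)],
    so retries spread out instead of hammering a struggling server in sync.
    """
    return [random.uniform(0.0, min(cap, base * 2 ** i)) for i in range(retries)]
```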

Check a Website Status