The Challenge
When evaluating suspicious products online, it's common to encounter websites designed to lend legitimacy to questionable or knockoff goods. A polished landing page doesn't guarantee a trustworthy operation behind it.
I needed a repeatable, low-cost process to assess whether a site shows signs of malicious behavior — data harvesting, suspicious third-party tracking, or deceptive infrastructure — without installing specialized tools or relying on paid services.
The Questions I Wanted to Answer
- Do the domain and hosting configuration look consistent with a legitimate organization?
- Does the site behave in a way that suggests suspicious data collection or hidden third-party exfiltration?
Methodology
I used two complementary approaches that anyone with a browser can replicate in minutes.
1. Domain Intelligence via RDAP
RDAP (Registration Data Access Protocol) provides standardized registration data, including registrar, nameservers, registration dates, and DNS security posture. This is the modern replacement for WHOIS.
I focused on the following fields (a minimal lookup script is sketched after the list):
- Domain age and lifecycle events: Registration and expiration dates
- Registrar and nameserver alignment: Whether DNS points to expected providers
- DNSSEC status: Whether the domain uses signed DNS delegation
- Registry lock status: Transfer/update prohibitions that suggest an established, actively managed domain
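The lookup itself is easy to script. Below is a minimal sketch in Python, assuming the requests library and the public rdap.org redirector (any registry's own RDAP endpoint works the same way); the field names follow the standard RDAP JSON response format, and example.com is a placeholder for the domain under review.

```python
import requests

def rdap_summary(domain: str) -> dict:
    """Fetch RDAP data for a domain and pull out the fields used in this review.

    Uses the public rdap.org redirector, which forwards the query to the
    authoritative RDAP server for the domain's TLD.
    """
    resp = requests.get(f"https://rdap.org/domain/{domain}", timeout=10)
    resp.raise_for_status()
    data = resp.json()

    # RDAP lifecycle events: registration, expiration, last changed, etc.
    events = {e["eventAction"]: e.get("eventDate") for e in data.get("events", [])}

    return {
        "registered": events.get("registration"),
        "expires": events.get("expiration"),
        "nameservers": [ns.get("ldhName") for ns in data.get("nameservers", [])],
        "dnssec_signed": data.get("secureDNS", {}).get("delegationSigned"),
        # Status codes such as "client transfer prohibited" hint at lock posture.
        "status": data.get("status", []),
    }

if __name__ == "__main__":
    # "example.com" is a placeholder; substitute the domain under review.
    print(rdap_summary("example.com"))
```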
2. Runtime Behavior via Chrome DevTools
Next, I inspected the site's actual behavior during page load using Chrome DevTools, focusing on the Network panel: which hosts were contacted, which requests were Fetch/XHR or POST, and what the request and response headers showed.
This technique is valuable because many malicious sites look normal visually, but their network behavior reveals where data is actually being sent.
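I did this inspection manually in DevTools, but the same page-load capture can be scripted for repeat runs. Here is a rough sketch using Playwright for Python (my own substitution, not part of the manual workflow above; the URL is a placeholder) that records every request fired during load.

```python
from playwright.sync_api import sync_playwright

def capture_requests(url: str) -> list[dict]:
    """Load a page headlessly and record every network request fired during load."""
    captured = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Record method, URL, and resource type for each outgoing request.
        page.on("request", lambda req: captured.append(
            {"method": req.method, "url": req.url, "type": req.resource_type}
        ))
        page.goto(url, wait_until="networkidle")
        browser.close()
    return captured

if __name__ == "__main__":
    # Placeholder URL; substitute the site under review.
    for req in capture_requests("https://example.com"):
        # POST and XHR/fetch traffic are the high-signal entries.
        if req["method"] == "POST" or req["type"] in ("xhr", "fetch"):
            print(req["method"], req["url"])
```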
Findings
The infrastructure and network behavior were consistent with a site hosted on a mainstream website-building platform (Wix). That resulted in many requests to platform/CDN domains used for assets, performance, and runtime functionality.
During filtering, two categories stood out:
Browser Extension Interference
A host that appeared in the Network list was actually a Chrome extension ID — not traffic from the website itself. This is an important lesson: testing should be repeated in Incognito mode with extensions disabled; otherwise, extension-generated requests can be incorrectly attributed to the website.
Telemetry and Session Management
The site generated routine telemetry traffic (error monitoring) and first-party session-related requests. I examined a request labeled access-tokens closely. It was:
- A first-party request to the site's own domain
- Using common secure cookie patterns (Secure/HttpOnly flags)
- Containing platform-specific request IDs
On its own, this did not indicate credential harvesting — it was standard session management.
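Checking those cookie flags doesn't strictly require DevTools either. A small stdlib-only sketch (example.com is a placeholder, and some sites may require extra request headers) that lists the Secure/HttpOnly attributes on whatever Set-Cookie headers the front page returns:

```python
import urllib.request

def cookie_flags(url: str) -> list[tuple[str, bool, bool]]:
    """Return (cookie name, Secure flag, HttpOnly flag) for each Set-Cookie header."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        set_cookies = resp.headers.get_all("Set-Cookie") or []
    results = []
    for cookie in set_cookies:
        name = cookie.split("=", 1)[0].strip()
        # Attributes are semicolon-separated; Secure and HttpOnly are bare flags.
        attrs = [part.strip().lower() for part in cookie.split(";")[1:]]
        results.append((name, "secure" in attrs, "httponly" in attrs))
    return results

if __name__ == "__main__":
    # Placeholder URL; substitute the site under review.
    for name, secure, httponly in cookie_flags("https://example.com"):
        print(f"{name}: Secure={secure} HttpOnly={httponly}")
```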
What Would Have Been a Red Flag
The most meaningful indicators of risk in this kind of review would include the following (a small pattern-scanning sketch follows the list):
- Requests to unrelated third-party domains not attributable to common analytics or hosting
- Automatic POST requests sending payloads before any user interaction
- Endpoints with patterns like /collect, /beacon, /track, /fp, or /replay
- Redirect chains to unrelated hosts triggered by normal page interaction
- Permission prompts, forced downloads, or deceptive overlays
- Session replay tooling capturing keystrokes or form inputs
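Several of these checks can be automated against captured traffic. A rough sketch, assuming a request list in the same shape as the Playwright capture above (method/url/type dictionaries) and a naive hostname-suffix check to separate first-party from third-party traffic:

```python
import re
from urllib.parse import urlparse

# Endpoint path patterns commonly used by trackers, beacons, and session replay.
SUSPICIOUS_PATHS = re.compile(r"/(collect|beacon|track|fp|replay)(/|$)", re.IGNORECASE)

def flag_requests(captured: list[dict], site_host: str) -> list[str]:
    """Return human-readable warnings for requests matching the red-flag patterns."""
    warnings = []
    for req in captured:
        parsed = urlparse(req["url"])
        # Naive suffix check: treat anything not ending in the site's host as third party.
        third_party = not parsed.hostname or not parsed.hostname.endswith(site_host)
        if SUSPICIOUS_PATHS.search(parsed.path):
            warnings.append(f"suspicious endpoint pattern: {req['url']}")
        if req["method"] == "POST" and third_party:
            warnings.append(f"POST to third-party host: {req['url']}")
    return warnings

if __name__ == "__main__":
    # Toy input; in practice, feed the list produced by the capture step.
    sample = [
        {"method": "POST", "url": "https://tracker.example.net/collect?id=1", "type": "xhr"},
        {"method": "GET", "url": "https://example.com/about", "type": "document"},
    ]
    for warning in flag_requests(sample, "example.com"):
        print(warning)
```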
Outcome
This project reinforced an important principle: domain registration data and platform hosting details help establish context, but runtime inspection is what reveals whether the site is quietly sending data elsewhere.
The Repeatable Checklist
The final output is a process I can apply to any unknown website in a few minutes:
- Pull RDAP data — Review domain age, registrar, DNS posture
- Load with DevTools open — Capture network traffic during page load
- Sort by domain — Filter out known "platform noise" (see the HAR-grouping sketch after this list)
- Inspect high-signal requests — Focus on Fetch/XHR and POST, check destinations and payloads
- Repeat in clean context — Incognito mode with extensions disabled
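The sorting step is also easy to script if you export the capture from DevTools ("Save all as HAR"). A minimal sketch, assuming a standard HAR 1.2 file named capture.har (a placeholder filename), that counts requests per host so platform/CDN noise is easy to spot and set aside:

```python
import json
from collections import Counter
from urllib.parse import urlparse

def hosts_by_request_count(har_path: str) -> Counter:
    """Count network requests per host from a DevTools HAR export."""
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    # HAR 1.2 layout: log.entries[].request.url
    return Counter(
        urlparse(entry["request"]["url"]).hostname or "unknown"
        for entry in har["log"]["entries"]
    )

if __name__ == "__main__":
    # "capture.har" is a placeholder for the DevTools export filename.
    for host, count in hosts_by_request_count("capture.har").most_common():
        print(f"{count:4d}  {host}")
```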
Tools Used
- RDAP lookup (registry + registrar views)
- Google Chrome DevTools (Network and Headers inspection)
