MFranklin LLC
February 2026

Rapid Website Trust Assessment

RDAP · Chrome DevTools · OSINT · Threat Analysis

The Challenge

When evaluating suspicious products online, it's common to encounter websites designed to lend legitimacy to questionable or knockoff goods. A polished landing page doesn't guarantee a trustworthy operation behind it.

I needed a repeatable, low-cost process to assess whether a site shows signs of malicious behavior — data harvesting, suspicious third-party tracking, or deceptive infrastructure — without installing specialized tools or relying on paid services.

The Questions I Wanted to Answer

  • Do the domain and hosting configuration look consistent with a legitimate organization?
  • Does the site behave in a way that suggests suspicious data collection or hidden third-party exfiltration?
Core Principle: A website can look professional and still be used as a legitimacy tool for an untrustworthy product. Visual inspection alone isn't enough.

Methodology

I used two complementary approaches that anyone with a browser can replicate in minutes.

1. Domain Intelligence via RDAP

RDAP (Registration Data Access Protocol) provides standardized registration data, including registrar, nameservers, registration dates, and DNS security posture. This is the modern replacement for WHOIS.

I focused on:

  • Domain age and lifecycle events: Registration and expiration dates
  • Registrar and nameserver alignment: Whether DNS points to expected providers
  • DNSSEC status: Whether the domain uses signed DNS delegation
  • Registry lock status: Transfer/update prohibitions that indicate maturity
Important Note: New domains, privacy-shielded registrants, and "builder" hosting are not proof of fraud by themselves. They increase risk when paired with suspicious business behavior.
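
Because RDAP responses are plain JSON, the same lookup can be scripted as well as run in a browser. Below is a minimal sketch, assuming the public rdap.org redirector as the entry point (registries also publish their own RDAP base URLs via the IANA bootstrap files); it prints the lifecycle events, DNSSEC posture, status flags, and nameservers discussed above:

```python
# Minimal RDAP lookup sketch. Assumes the public rdap.org redirector, which
# forwards to the authoritative registry RDAP server for the domain's TLD.
import json
import urllib.request

def rdap_lookup(domain: str) -> dict:
    url = f"https://rdap.org/domain/{domain}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = rdap_lookup("example.com")  # hypothetical target domain
    # Registration/expiration dates live in the "events" array
    for event in data.get("events", []):
        print(event.get("eventAction"), event.get("eventDate"))
    # DNSSEC delegation, status flags, and nameservers round out the picture
    print("DNSSEC signed:", data.get("secureDNS", {}).get("delegationSigned"))
    print("Status:", data.get("status", []))
    print("Nameservers:", [ns.get("ldhName") for ns in data.get("nameservers", [])])
```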

2. Runtime Behavior via Chrome DevTools

Next, I inspected the site's actual behavior during page load using Chrome DevTools:

```
# DevTools Network Analysis Steps
1. Open DevTools (F12) → Network tab
2. Enable "Preserve log" checkbox
3. Reload the page (Ctrl+R)
4. Right-click column headers → Add "Domain" column
5. Sort by Domain to group traffic by host
6. Use negative filters to remove known noise:
   -domain:wix.com -domain:parastorage.com
7. Focus on: Fetch/XHR, POST requests, auth endpoints
```

This technique is valuable because many malicious sites look normal visually, but their network behavior reveals where data is actually being sent.
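
The same triage can also be repeated offline by exporting the capture from the Network panel as a HAR file. The sketch below is one way to group requests by host and drop the known platform noise; the file name and noise list are illustrative assumptions:

```python
# Sketch: summarize an exported HAR capture by host, skipping known platform/CDN noise.
import json
from collections import Counter
from urllib.parse import urlparse

PLATFORM_NOISE = {"wix.com", "parastorage.com"}  # hosts treated as builder/CDN noise

def host_summary(har_path: str) -> Counter:
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]
    counts = Counter()
    for entry in entries:
        host = urlparse(entry["request"]["url"]).hostname or ""
        # Skip hosts that match, or are subdomains of, the known noise list
        if any(host == noise or host.endswith("." + noise) for noise in PLATFORM_NOISE):
            continue
        counts[host] += 1
    return counts

if __name__ == "__main__":
    for host, count in host_summary("capture.har").most_common():  # assumed file name
        print(f"{count:4d}  {host}")
```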

Findings

The infrastructure and network behavior were consistent with a site hosted on a mainstream website-building platform (Wix). That resulted in many requests to platform/CDN domains used for assets, performance, and runtime functionality.

During filtering, two categories stood out:

Browser Extension Interference

A host that appeared in the Network list was actually a Chrome extension ID — not traffic from the website itself. This is an important lesson: testing should be repeated in Incognito with extensions disabled, otherwise extension-generated requests can be incorrectly attributed to the website.

Telemetry and Session Management

The site generated routine telemetry traffic (error monitoring) and first-party session-related requests. A request labeled access-tokens was examined closely. It was:

  • A first-party request to the site's own domain
  • Using common secure cookie patterns (Secure/HttpOnly flags)
  • Containing platform-specific request IDs

On its own, this did not indicate credential harvesting — it was standard session management.
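
For reference, those cookie flags can be read directly from the Set-Cookie response headers in the Headers panel, or pulled out of a HAR export as in the sketch below (some browsers redact cookie values in HAR exports unless sensitive data is explicitly included; the file name is again an assumption):

```python
# Sketch: list Set-Cookie flags per response in a HAR capture.
# Note: some browsers redact cookie data in HAR exports unless sensitive data is included.
import json

def cookie_flags(har_path: str) -> None:
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]
    for entry in entries:
        for header in entry["response"]["headers"]:
            if header["name"].lower() != "set-cookie":
                continue
            value = header["value"]
            cookie_name = value.split("=", 1)[0]
            flags = [flag for flag in ("Secure", "HttpOnly", "SameSite")
                     if flag.lower() in value.lower()]
            print(entry["request"]["url"], cookie_name, flags)

if __name__ == "__main__":
    cookie_flags("capture.har")
```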

What Would Have Been a Red Flag

The most meaningful indicators of risk in this kind of review would include:

High-Risk Indicators:
  • Requests to unrelated third-party domains not attributable to common analytics or hosting
  • Automatic POST requests sending payloads before any user interaction
  • Endpoints with paths matching patterns such as /collect, /beacon, /track, /fp, /replay (a rough scan for these is sketched after this list)
  • Redirect chains to unrelated hosts triggered by normal page interaction
  • Permission prompts, forced downloads, or deceptive overlays
  • Session replay tooling capturing keystrokes or form inputs
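
As a rough way to surface these automatically, the sketch below scans a HAR export for the endpoint patterns above and for POST requests that carry a payload. The pattern list and file name are illustrative assumptions, not a definitive ruleset:

```python
# Sketch: flag suspicious endpoint patterns and payload-bearing POSTs in a HAR capture.
import json
import re

SUSPECT_PATH = re.compile(r"/(collect|beacon|track|fp|replay)\b", re.IGNORECASE)

def flag_requests(har_path: str) -> list[str]:
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]
    hits = []
    for entry in entries:
        request = entry["request"]
        url, method = request["url"], request["method"]
        has_body = bool(request.get("postData", {}).get("text"))
        # Flag suspicious paths, plus any POST that sends a body during page load
        if SUSPECT_PATH.search(url) or (method == "POST" and has_body):
            hits.append(f"{method} {url}")
    return hits

if __name__ == "__main__":
    for hit in flag_requests("capture.har"):  # assumed file name
        print(hit)
```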

Outcome

This project reinforced an important principle: domain registration data and platform hosting details help establish context, but runtime inspection is what reveals whether the site is quietly sending data elsewhere.

The Repeatable Checklist

The final output is a process I can apply to any unknown website in a few minutes:

  1. Pull RDAP data — Review domain age, registrar, DNS posture
  2. Load with DevTools open — Capture network traffic during page load
  3. Sort by domain — Filter out known "platform noise"
  4. Inspect high-signal requests — Focus on Fetch/XHR and POST, check destinations and payloads
  5. Repeat in clean context — Incognito mode with extensions disabled

Tools Used

  • RDAP lookup (registry + registrar views)
  • Google Chrome DevTools (Network and Headers inspection)
Final Thought: The goal isn't to prove a site is malicious — it's to gather enough signal to make an informed decision about whether to proceed. When the risk/reward doesn't add up, walk away.