1. Why “Correct Rules” Still Miss HTTPS Flows
Classic HTTP proxying can read the Host header in cleartext. HTTPS wraps that conversation in TLS, so a passive observer on the local machine often sees only a destination IP and port 443. Your DOMAIN rules expect a hostname. When the dataplane has not yet mapped the IP back to a name, the matcher may fall through to broader rows such as GEOIP or a final MATCH, which feels like “the rule file lied” even though the engine did exactly what the data allowed.
Two mainstream paths can restore a hostname for matching. First, the resolver path: if the client asked the core for DNS and you use fake-ip, the core already associates synthetic addresses with real names. Second, the Sniffer path: inspect the first TLS ClientHello on the connection and extract SNI, then feed that string into the rule engine as if the flow had been named all along. Many “works for HTTP, fails for HTTPS” tickets are simply flows that never obtained a domain through either path at the moment of the first match.
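To make the fall-through concrete, here is a toy first-match engine in Python. The rule kinds mirror Clash syntax, but this is a sketch of the matching idea, not mihomo's implementation, and the policy names are illustrative:

```python
# Toy first-match rule engine (illustrative only; not mihomo's code).
# A flow that never obtained a hostname skips every DOMAIN row and
# falls through to the broad tail rules, as described above.

RULES = [
    ("DOMAIN-SUFFIX", "example.com", "Proxy"),
    ("GEOIP",         "CN",          "DIRECT"),   # broad row
    ("MATCH",         None,          "Fallback"), # final catch-all
]

def route(host, geo):
    """Return the policy of the first rule that matches this flow."""
    for kind, value, policy in RULES:
        if kind == "DOMAIN-SUFFIX" and host is not None and (
            host == value or host.endswith("." + value)
        ):
            return policy
        if kind == "GEOIP" and geo == value:
            return policy
        if kind == "MATCH":
            return policy
    return "DIRECT"

# Same destination, two outcomes, depending only on whether a name
# was recovered before the first match:
print(route("www.example.com", "CN"))  # hostname known  -> Proxy
print(route(None, "CN"))               # IP only         -> DIRECT (GEOIP)
```

The second call is the “the rule file lied” scenario: the DOMAIN-SUFFIX row is correct, but with no hostname available it can never fire.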
Users sometimes misdiagnose this as a broken policy group because speed tests and latency checks still succeed on unrelated domains. Remember that each outbound decision is per connection (and sometimes per datagram class). A fast YouTube node does not prove that a banking site received the same policy row; verify the specific hostname or IP under inspection instead of extrapolating from unrelated successes.
2. What Sniffer Does in Clash Meta / mihomo
In mihomo-family cores, Sniffer is not magic decryption. It does not break TLS; it reads the unencrypted metadata that clients still send during negotiation. For typical TLS 1.2 and 1.3 flows on port 443, the ClientHello carries SNI unless something highly unusual strips it. The sniffer module watches eligible packets, extracts that name, and hands it to routing so DOMAIN and DOMAIN-SUFFIX rows can fire before you resort to coarse IP rules.
Some builds also inspect certain application protocols or certificates where configured, but the day-to-day debugging story for browsers is overwhelmingly “did we see the right SNI string, and did the matcher consume it?” If the site uses a different TLS server name than the brand you typed in the address bar—common with CDNs—you might need to match the SNI you observe, not the marketing domain you remember. That single mismatch explains a surprising share of “only this one subdomain misbehaves” reports.
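To show how little “magic” is involved, the sketch below builds a schematic ClientHello and then walks its cleartext fields to pull out the server_name extension. This is a teaching sketch, not a production parser: real cores validate far more, and every non-SNI field in the builder is a placeholder:

```python
import struct

def build_client_hello(hostname: str) -> bytes:
    """Assemble a minimal, schematic TLS ClientHello carrying an SNI
    extension. All fields other than SNI are placeholder values."""
    name = hostname.encode()
    sni_entry = b"\x00" + struct.pack(">H", len(name)) + name   # type 0 = host_name
    sni_list = struct.pack(">H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack(">HH", 0, len(sni_list)) + sni_list   # ext type 0 = server_name
    extensions = struct.pack(">H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03"            # client_version (TLS 1.2 wire value)
        + b"\x00" * 32         # random (zeroed placeholder)
        + b"\x00"              # session_id length 0
        + b"\x00\x02\x13\x01"  # one cipher suite
        + b"\x01\x00"          # one compression method (null)
        + extensions
    )
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body   # type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack(">H", len(handshake)) + handshake

def extract_sni(record: bytes):
    """Walk the cleartext ClientHello and return the SNI host_name,
    or None if absent. No decryption happens anywhere."""
    if len(record) < 5 or record[0] != 0x16:        # 0x16 = handshake record
        return None
    p = 5 + 4                                       # skip record + handshake headers
    p += 2 + 32                                     # version + random
    p += 1 + record[p]                              # session_id
    cs_len = int.from_bytes(record[p:p + 2], "big")
    p += 2 + cs_len                                 # cipher suites
    p += 1 + record[p]                              # compression methods
    if p + 2 > len(record):
        return None                                 # no extensions block
    ext_end = p + 2 + int.from_bytes(record[p:p + 2], "big")
    p += 2
    while p + 4 <= ext_end:
        etype = int.from_bytes(record[p:p + 2], "big")
        elen = int.from_bytes(record[p + 2:p + 4], "big")
        p += 4
        if etype == 0:                              # server_name extension
            name_len = int.from_bytes(record[p + 3:p + 5], "big")
            return record[p + 5:p + 5 + name_len].decode()
        p += elen
    return None

print(extract_sni(build_client_hello("edge.example.net")))  # edge.example.net
```

The key observation is that everything `extract_sni` reads sits before any encryption starts, which is exactly why a sniffer can recover the name without breaking TLS.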
Performance-wise, sniffing adds a small amount of work on the hot path. For most desktop workflows the cost is negligible compared with RTT across an overseas node. If you run huge bulk transfers on constrained hardware, you can still scope sniffing with skip-domain lists, but do not disable the feature globally until you have log proof that it is the bottleneck rather than a red herring.
3. Enabling Sniffer and Proving It in Config
Start from an explicit sniffer block in your own profile rather than assuming your subscription author enabled it. A minimal pattern enables the module, declares which protocols to parse, and lists ports. GUI shells may mirror these fields in toggles; the on-disk profile remains the source of truth after a restart. After you save, reload the core and confirm that the running config view in your client actually shows Sniffer on, not just a grayed-out template that never merged into the active profile.
The following YAML skeleton is illustrative; exact keys evolve between releases, so align with the documentation shipped alongside your build. Treat it as a structural reminder, not a verbatim copy-paste contract.
sniffer:
  enable: true
  sniff:
    TLS:
      ports: [443, 8443]
    QUIC:
      ports: [443]
  # Optional tuning: skip-domain / force-domain lists per your threat model
If you run TUN mode, ensure the applications you care about actually forward into the tunnel. System proxy-only setups may leave some binaries on DIRECT sockets regardless of how beautiful the YAML looks. Confirm with a controlled test app before you spend an hour tuning SNI lists for traffic that never touched Clash in the first place. Our Windows 11 Verge Rev guide covers first-time TUN alignment if you are straddling modes.
4. Reading mihomo Logs for SNI and Sniff Results
Set log level to debug temporarily. Reproduce a single failing browse: open one tab, trigger one navigation, stop. Scroll backward from the failure and locate the connection that corresponds to that timestamp. You are hunting for lines that mention the destination tuple, any “sniff” or “metadata” phrasing, and the policy decision that followed. Different builds phrase messages differently, but the invariant is: you should be able to answer “what name did the core believe it was routing?” without guessing.
When things work, you will often see a hostname that matches your mental model of the site. When things fail, you may see only a raw IP repeated, a surprise CDN hostname, or a sniff timeout followed by a GEOIP decision you thought you had overridden. Capture that line verbatim (redact unrelated noise) before editing rules. If the log shows a CDN name, add or adjust suffix rows for that observed host, not only for the shorter brand domain you wish were true.
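A two-line helper makes the CDN mismatch concrete; the hostnames here are hypothetical stand-ins for what a log might show, not real rule rows:

```python
# Illustrative suffix matching, the same shape DOMAIN-SUFFIX rows use.
def suffix_match(host: str, suffix: str) -> bool:
    """True if host equals the suffix or is a subdomain of it."""
    return host == suffix or host.endswith("." + suffix)

observed_sni = "edge.cdnvendor.com"            # what the log actually showed

print(suffix_match(observed_sni, "brand.example"))   # the row you wrote: no hit
print(suffix_match(observed_sni, "cdnvendor.com"))   # the row you need: hit
```

If your rules only name the brand domain, the observed edge hostname sails past them, which is why the verbatim log line, not memory, should drive the edit.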
Certificate-related fields sometimes appear adjacent to sniffing diagnostics. They describe the server’s presented identity, which can diverge from SNI in multi-tenant edge setups. Use cert names as secondary hints: if SNI says edge.example.net but the cert carries *.cdnvendor.com, your rule list probably needs the edge hostname class, not a single static row left over from last year’s template.
If your GUI collapses logs into summaries, open the raw core log file when possible. Summary views love to hide the one line that proves whether sniffing ran. Treat hidden truncation as an enemy of debugging: you want the first decision boundary, not a polished badge that says “error.”
5. Sniffer vs fake-ip and Resolver Order
fake-ip mode hands clients synthetic addresses that map internally to real names. That mapping can make DOMAIN rules work even before Sniffer runs, provided the client actually asked Clash for DNS and the resolver cache is warm. When applications cache old answers, bypass the core’s resolver, or pin addresses from earlier sessions, you can end up with “mystery IPs” that neither fake-ip nor your mental model explains until you flush caches and retry.
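The bookkeeping behind that mapping can be sketched in a few lines. This is a simplification under one assumption: the 198.18.0.0/16 pool, which matches the fake-ip-range commonly seen in Clash profiles; the real core's state management is richer:

```python
import ipaddress

class FakeIPPool:
    """Minimal sketch of fake-ip bookkeeping (not mihomo's implementation)."""

    def __init__(self, cidr: str = "198.18.0.0/16"):
        self._hosts = ipaddress.ip_network(cidr).hosts()
        self._by_name, self._by_ip = {}, {}

    def resolve(self, name: str) -> str:
        """DNS answer handed to the client: a synthetic address."""
        if name not in self._by_name:
            ip = str(next(self._hosts))
            self._by_name[name] = ip
            self._by_ip[ip] = name
        return self._by_name[name]

    def reverse(self, ip: str):
        """What routing does on connect: map the fake IP back to a name."""
        return self._by_ip.get(ip)

pool = FakeIPPool()
ip = pool.resolve("www.example.com")
print(pool.reverse(ip))              # name restored, DOMAIN rules can fire
print(pool.reverse("93.184.216.34")) # None: the app bypassed the resolver
```

The second lookup is the “mystery IP” case from above: an address the core never handed out carries no name, and only Sniffer (or a cache flush and retry) can recover one.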
The interaction between Sniffer and fake-ip is cooperative, not redundant. Think of them as two bridges that can each supply a hostname when the other is silent. If you disable sniffing while fake-ip is misaligned, you may regress instantly. If you enable sniffing but your DNS stack leaks outside the tunnel, you may still see odd first-packet paths until resolver policy is tightened.
When debugging, ask two questions in order. First, did this app resolve through the core? Second, if not, did Sniffer still recover a usable SNI on the first TLS flight? A “no” to both explains most stubborn DIRECT HTTPS escapes that users blame on “bad nodes.”
6. QUIC, HTTP/3, and Encrypted Client Hello Surprises
Modern browsers love QUIC on UDP/443. If your tunnel or rule path handles TCP well but treats UDP differently, you can observe split-brain behavior: some assets load over HTTP/2 on TCP and obey rules, while others race over HTTP/3 and slip past the policy you tuned on the TCP assumption. Some profiles therefore extend sniff coverage to QUIC-like parsers where supported, or deliberately block QUIC to force TCP fallback during diagnosis.
This is not a recommendation to cripple protocols forever; it is a reminder that “HTTPS” is no longer one tidy TCP shape. When logs show parallel paths, compare UDP and TCP decisions explicitly instead of assuming a single flow. For a broader QUIC-oriented walkthrough, see Gemini, Google AI, and QUIC routing, which discusses similar class splits outside the narrow TLS-over-TCP mental model.
Looking ahead, ecosystem experiments around encrypted ClientHello material may further reduce cleartext metadata over time. Stay on maintained core releases so sniffing logic and fallbacks evolve with real-world traffic. Running a two-year-old binary is an unforced error when the web platform moves this quickly.
7. Rule Order, GEOIP, and “It Matched the Wrong Row”
Even perfect SNI data cannot save you from ordering mistakes. The first matching rule wins. A broad GEOIP row placed above your boutique DOMAIN-SUFFIX line will swallow traffic forever, and the log will cheerfully tell you it did exactly that. When triaging, read which rule index or label fired, not only the outbound name.
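The ordering trap is easy to reproduce. Below is another toy matcher, with the rule index returned so you can “read which rule fired”; rule contents and policy names are illustrative:

```python
# Illustrative only: first matching row wins, so a broad GEOIP row
# placed above a boutique DOMAIN-SUFFIX row swallows the flow.
def first_match(rules, host, geo):
    """Return (index, policy) of the first rule that matches."""
    for idx, (kind, value, policy) in enumerate(rules):
        hit = (
            (kind == "DOMAIN-SUFFIX" and host is not None and
             (host == value or host.endswith("." + value))) or
            (kind == "GEOIP" and geo == value) or
            kind == "MATCH"
        )
        if hit:
            return idx, policy
    return None

bad = [("GEOIP", "CN", "DIRECT"),
       ("DOMAIN-SUFFIX", "bank.example", "Proxy"),
       ("MATCH", None, "Fallback")]
good = [("DOMAIN-SUFFIX", "bank.example", "Proxy"),
        ("GEOIP", "CN", "DIRECT"),
        ("MATCH", None, "Fallback")]

print(first_match(bad,  "www.bank.example", "CN"))  # GEOIP fired first
print(first_match(good, "www.bank.example", "CN"))  # reorder fixes it
```

Note that the hostname was perfect in both runs; only the row order changed the outcome, which is why the fired rule index matters more than the outbound name when triaging.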
Another sneaky pattern is duplication: two similar suffix rows with different policy groups, where the earlier one sends traffic to a “default foreign” bucket you forgot about. The hostname is correct; the outcome is still wrong for your intent. Diff the active profile against your editor buffer—some GUIs merge subscription fragments you cannot see unless you export the effective config.
Finally, remember that some “direct” feel is actually a domestic node chosen by a balancer you did not scrutinize. Always map decisions back to the exact outbound label in the log, then open that group’s members. Half the confusion in support forums is vocabulary, not technology.
8. skip-domain, force-domain, and Override Pitfalls
Advanced sniff blocks let you bypass parsing for sensitive domains or force parsing when heuristics hesitate. Misused skip-domain lists are a self-inflicted wound: you told the core not to sniff banking flows, then wondered why hostname-based rules never triggered. Audit those lists with the same discipline you apply to firewall allow rules.
Features that rewrite destinations or override sniff results can fix weird NAT scenarios, but they can also mask the true endpoint until you read logs carefully. If you inherit a profile from a stranger on the internet, search for override knobs before you trust the comments at the top of the file. Comments lie; emitted log lines lie less often.
When experimenting, change one variable at a time. Toggle a skip list, reload, capture twenty log lines, revert. Braided edits make postmortems impossible and waste evenings.
9. Evidence-First Checklist Before You Rewrite the Profile
Walk this list once per incident; it is faster than rewriting half your policy groups on instinct. The goal is to move from “feels wrong” to a single falsified hypothesis backed by a log line you could paste into a notebook.
- Confirm the failing app's traffic actually enters the core (TUN or system proxy), not a DIRECT socket that never touched Clash.
- Set log level to debug, reproduce one navigation, and locate the connection for that timestamp in the raw core log.
- Answer “what name did the core believe it was routing?” — a sniffed SNI, a fake-ip mapping, or only a raw IP.
- If the log shows a CDN hostname, match that observed name, not the brand domain you typed in the address bar.
- Read which rule index or label fired and check whether a broader row above it swallowed the flow.
- Compare UDP/443 (QUIC) decisions separately from TCP before blaming the rule file.
- Audit skip-domain, force-domain, and override knobs that could have suppressed or rewritten sniff results.
Only after those answers are written down should you touch the profile.
10. Closing Thoughts
HTTPS did not break your rules; opacity did. Clash Meta and mihomo give you the tools to recover SNI and reason about decisions, but only if you read the log as ground truth instead of trusting how the site “should” behave. Compared with endlessly duplicating node lists, a sniff-aware triage is calmer, faster, and easier to teach the next time a CDN rotates hostnames without asking your permission.
Once the dataplane matches your intent, dial logging back to a sensible default so you are not drowning in noise. Keep a short note in your profile about any unusual CDN rows you added after observing real SNI strings; future-you will appreciate the context when the template updates again.
For day-to-day installs and updates, prefer a maintained client bundle from our download center so your core stays current as sniffers and protocol parsers evolve.