1. After Subscription Import: What Policy Groups Actually Do
Importing a remote profile gives you a bag of proxies—often dozens of regional servers with opaque names. The core does not magically know that you prefer Singapore for video and Japan for gaming unless you express that intent in two places: policy groups under proxy-groups, and the rules section that sends traffic to those groups. Think of groups as named strategies, and think of rules as the traffic classifier. If you only ever use a manual select group, you are the scheduler; if you switch the same slot to url-test or fallback, the core begins to automate parts of that scheduling based on measurements.
That automation is not free. Both url-test and fallback rely on periodic latency tests (HTTP checks against a url you specify) to decide whether a candidate is alive and how fast it responds. A poorly chosen test endpoint—blocked in your region, rate-limited, or slower than your real workloads—will produce misleading rankings. Likewise, an interval that is too aggressive wakes radios on laptops and phones more often than necessary, while an interval that is too relaxed means you might ride a degraded path for minutes. Getting comfortable with those knobs is the difference between “Clash feels smart” and “it keeps hopping nodes for no reason.”
Finally, remember that policy groups compose. A fallback group can list other groups as members, not only bare proxies, which is how advanced profiles chain “try my premium chain first, then budget nodes.” You do not have to build that complexity on day one, but recognizing composition helps when you read third-party templates that nest groups three levels deep.
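As a sketch of that composition (all group and node names below are hypothetical), a fallback group can list another group alongside bare proxies:

```yaml
proxy-groups:
  # 'Premium-Auto' is itself a url-test group over two premium nodes.
  - name: 'Premium-Auto'
    type: url-test
    proxies: ['Premium-HK', 'Premium-SG']
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
  # The fallback tries the whole premium group first, then a budget node,
  # then the built-in direct outbound.
  - name: 'Main-Chain'
    type: fallback
    proxies: ['Premium-Auto', 'Budget-US', DIRECT]
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
```

When 'Main-Chain' is active, the core first asks 'Premium-Auto' for its current winner; only if that entire group fails health checks does traffic move down the list.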
2. url-test vs fallback: Two Different Questions
At a high level, url-test answers: “Among these candidates, which one is currently the fastest according to our probe?” It performs a latency test across members and typically selects the proxy with the lowest measured delay, subject to tolerance rules that prevent oscillation when differences are tiny. That matches the user story “I do not care which country wins—I want the snappiest working hop for general browsing or downloads.” It is the closest built-in equivalent to an automatic “pick best ping” workflow without building external scripts.
fallback, by contrast, answers: “Walk this list from top to bottom and use the first member that passes the health check.” Speed between healthy entries is not the primary ranking signal; order is. That maps directly to operational intent: “Primary datacenter first, secondary budget node if the primary is down, direct exit as last resort.” You still get automatic switching, but the switch is driven by failure, not by milliseconds of advantage. If you expected fallback to pick the fastest server, you will feel surprised—use url-test instead.
Both types share the same fundamental requirement: every member must reference a proxy name already defined under proxies earlier in the file (or introduced via proxy-providers). Neither type invents credentials or protocols; they only orchestrate outbounds you already declared. If a member name typo slips in, the group either silently misbehaves or fails validation, depending on your client's strictness, so keep names aligned with what the subscription merge produced.
3. url-test: Lowest Latency and Health Checks
A url-test group tracks each candidate’s round-trip time to the configured probe URL. The core repeats this on a schedule controlled by interval (seconds). When a new winner appears, tolerance (milliseconds) tells the engine how much better the challenger must be before it is worth switching—an essential guardrail against flapping when two nodes ping 42 ms versus 44 ms back and forth across samples.
The url should be a small, cache-friendly HTTPS endpoint that returns quickly in the regions you care about. Many users default to a generic connectivity check URL; that is fine if it is reachable through every candidate. If some nodes block that host, those nodes look “dead” even when they work for your actual sites. When in doubt, align the probe with something representative of your traffic—without violating anyone’s terms—or use a minimal object on a major CDN that your nodes can all reach. If your provider documents a recommended test URL, prefer theirs because they tuned it against their fleet.
Optional flags such as lazy: true (when supported) defer testing for members until they are actually considered, which reduces background noise in huge lists. Whether you enable lazy mode depends on whether you prioritize battery and log quietness over immediate readiness after profile reload. For a compact set of five regional servers, eager testing is usually acceptable; for provider merges with hundreds of outbounds, lazy strategies become more attractive.
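In clients that support it (mihomo, for example), the flag sits directly on the group definition. A minimal sketch with hypothetical node names:

```yaml
proxy-groups:
  - name: 'Big-Pool-Auto'
    type: url-test
    lazy: true          # defer probing until the group is actually in use
    proxies: ['Node-01', 'Node-02', 'Node-03']
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
```

With lazy enabled, a freshly reloaded profile will not immediately probe all members; the first connection routed through the group may therefore see a brief selection delay.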
One subtle point: url-test optimizes the probe metric, not necessarily your application’s full page load. A node that wins on a tiny HEAD request might still route bulk TCP flows differently. Treat the result as a strong heuristic, not a guarantee of perfect streaming or gaming performance. If you need per-protocol nuance, you eventually layer smarter rules or split traffic across multiple groups—topics we touch on in the routing and rules guide.
4. fallback: Ordered Failover and When Order Beats Speed
The fallback group walks proxies sequentially. The first member that passes the health check becomes active. If it later fails checks, the group advances to the next working entry. That is ideal when you have a clear preference unrelated to raw ping—perhaps your provider marks certain servers as “premium,” and you only want to touch the noisy overcrowded hop when you must. It is also the natural representation of “main node, backup node, tertiary direct path” stories that operations teams write on whiteboards.
Because order dominates, spend time curating the list itself. Put the stable commercial relay you trust at the top, your friend’s homelab in the middle only if you accept their maintenance window, and a conservative public node near the bottom if at all. If you invert that stack accidentally, you will wonder why Clash “always picks the slow one”—it is doing what you told it to do. Combine fallback with honest naming in proxies so future edits do not scramble priorities.
Health checks still matter: fallback is not a blind static chain. If the top node passes tests but performs terribly for real workloads, you may need to adjust the probe, tighten timeouts, or split responsibilities—maybe url-test for general web and fallback only for a narrow rule that must prioritize a contractual endpoint. Document these decisions in your personal notes; six months later you will not remember why proxy C sits above proxy D.
5. Example YAML: Wire Real Proxy Names
The snippets below are illustrative. Replace the placeholder names (PROVIDER-A-SG, Office-Primary, and so on) with the exact names your merged profile defines, usually visible in your client's outbound list. DIRECT is a built-in outbound and needs no definition. Keep indentation strict; YAML is unforgiving about spaces.
url-test: pick the lowest latency among three candidates
proxy-groups:
  - name: 'Auto-Best'
    type: url-test
    proxies:
      - 'PROVIDER-A-SG'
      - 'PROVIDER-A-JP'
      - 'PROVIDER-A-US'
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
    tolerance: 50
fallback: try primary, then secondary, then direct
proxy-groups:
  - name: 'Work-VPN-Failover'
    type: fallback
    proxies:
      - 'Office-Primary'
      - 'Office-Backup'
      - DIRECT
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
Some templates nest a fallback inside a larger select group so humans can flip between “automatic” and “manual” strategies. That pattern works well in graphical clients: you expose one group to the UI while hiding implementation details. If you came from older Clash for Windows workflows, our CFW migration notes explain how modern forks present the same concepts with fewer surprises.
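One common shape for that pattern, assuming the Auto-Best and Work-VPN-Failover groups from the earlier snippets exist, is a select group whose members are strategies rather than servers:

```yaml
proxy-groups:
  # Exposed to the UI: the human flips between strategies, not nodes.
  - name: 'Main'
    type: select
    proxies:
      - 'Auto-Best'           # automatic, latency-driven
      - 'Work-VPN-Failover'   # ordered failover
      - DIRECT                # escape hatch
```

The dashboard then shows a single 'Main' group, and switching between automatic and manual behavior is one click instead of a config edit.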
6. Pointing Rules at Your New Group
Creating Auto-Best or Work-VPN-Failover does nothing until traffic is routed there. In the rules array, reference the group name exactly as declared. A minimal pattern sends all unmatched traffic to your automatic group, while more careful stacks front-load DOMAIN, GEOIP, and RULE-SET entries for split tunneling. Order matters: first match wins.
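A minimal rules stack wiring those groups in might look like the sketch below; example-corp.com is a hypothetical domain, and the GEOIP country should match your own locale:

```yaml
rules:
  # Specific matches first: corporate traffic takes the failover chain.
  - DOMAIN-SUFFIX,example-corp.com,Work-VPN-Failover
  # Keep domestic traffic off the proxy entirely.
  - GEOIP,CN,DIRECT
  # Everything unmatched rides the automatic group.
  # First match wins, so MATCH must stay last.
  - MATCH,Auto-Best
```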
If you already maintain specialized groups—say, a static select for a streaming region—leave them in place and only route generic web traffic to url-test. That avoids surprising region shifts on services that validate catalog country codes. Likewise, corporate VPN domains might belong in a dedicated fallback chain with conservative intervals so a brief probe blip does not yank the entire tunnel during a live call.
DNS mode interacts with rules more than newcomers expect. If you use fake-ip, domain-based rules behave differently than in redir-host modes, and misaligned DNS can make it look like a policy group “never activates.” When debugging, simplify: temporarily collapse to a short rule list, confirm the group activates, then reintroduce complexity. The same discipline appears in our AI-focused routing articles—see ChatGPT routing for a worked example of stable policy attachment.
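For reference, a typical fake-ip DNS block looks like the sketch below; the nameserver is a placeholder, so substitute resolvers you trust:

```yaml
dns:
  enable: true
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  nameserver:
    - https://1.1.1.1/dns-query
  # Names that must resolve to real IPs (captive portals, LAN hosts)
  # can be excluded from the fake-ip range.
  fake-ip-filter:
    - '*.lan'
```

In this mode the core answers domain lookups with addresses from the fake-ip range, which is why domain rules can match before any real resolution happens.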
7. url-test vs select: Manual Control Still Has a Role
The select group is still the right tool when human intent must override measurements—testing a specific exit, satisfying a streaming library’s region lock, or reproducing a support ticket on an exact server. Automatic groups shine for “always on” workloads where micromanaging nodes is tedious. Many power users keep both: a manual selector for edge cases and a url-test default for everything else.
Avoid duplicating long proxy lists between select and url-test unless you have automation to keep them synchronized. Instead, reference the same underlying proxies once in proxies and compose groups that reuse those names. If your subscription refresh renames nodes, expect to adjust group membership—another reason some users offload bulk nodes to proxy-providers with filters, which is outside this article’s scope but worth reading in upstream docs when you outgrow static lists.
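A sketch of that reuse, with hypothetical nodes declared once and referenced by both group types (server, cipher, and password values are placeholders):

```yaml
proxies:
  - { name: 'Node-SG', type: ss, server: sg.example.com, port: 443, cipher: aes-128-gcm, password: 'REPLACE-ME' }
  - { name: 'Node-JP', type: ss, server: jp.example.com, port: 443, cipher: aes-128-gcm, password: 'REPLACE-ME' }

proxy-groups:
  # Manual selector and automatic group share the same member names,
  # so a rename only needs fixing in one extra place per group.
  - name: 'Manual-Pick'
    type: select
    proxies: ['Node-SG', 'Node-JP']
  - name: 'Auto-Best'
    type: url-test
    proxies: ['Node-SG', 'Node-JP']
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
```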
From a UX standpoint, automatic switching should feel boring. If your dashboard shows constant changes every few seconds, tighten tolerance, lengthen interval, or reduce the candidate pool to geographically coherent options. The goal is responsive adaptation to real outages and meaningful latency differences, not a slot machine.
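Concretely, a calmer configuration for a dashboard that flips too often might look like the following; the values are starting points, not prescriptions:

```yaml
proxy-groups:
  - name: 'Auto-Best'
    type: url-test
    proxies: ['PROVIDER-A-SG', 'PROVIDER-A-JP']  # oscillating outlier removed
    url: 'https://www.gstatic.com/generate_204'
    interval: 600      # probe every 10 minutes instead of 5
    tolerance: 150     # require a 150 ms advantage before switching
```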
8. Troubleshooting Flapping, Timeouts, and Wrong URLs
Everyone looks offline. Verify the health-check URL is reachable through the proxies, not just in a bare browser on a direct connection. Some endpoints block data-center IPs or require SNI patterns your hop does not provide. Swap in a simpler probe temporarily, then bisect to find which members fail.
url-test keeps bouncing between two nodes. Raise tolerance, increase interval, or remove an outlier node that oscillates at the margin. Remember that Wi-Fi handoffs and laptop sleep also perturb measurements on mobile hardware.
fallback never leaves the first entry even when it feels slow. Slow is not failed. Fallback advances on failed checks, not subjective speed. Move to url-test if you need performance-based selection, or demote the slow proxy manually in the list.
Rules do not hit the new group. Double-check spelling, ensure the rule appears before a broad MATCH catch-all, and confirm your client reloaded the profile after edits. Cached connections may linger briefly; toggling the system proxy or restarting the core clears many stale states.
9. Closing Thoughts
Clash policy groups turn a flat list of subscription nodes into something you can reason about: url-test for latency-driven automatic node switching, fallback for disciplined failover when policy beats raw speed. Neither replaces thoughtful rules or DNS hygiene, but together they remove the daily chore of clicking the fastest flag on a map. Compared with opaque all-in-one VPN apps, a transparent core plus well-chosen probes tends to fail in understandable ways, and that debuggability matters when you are on a deadline and the network is not cooperating.
When you are ready to standardize on maintained clients and installers that track the ecosystem you just configured, using a single curated download surface keeps your toolchain aligned across platforms. Visit our download center for up-to-date builds, then reload your profile and watch the health-check logs confirm that automatic switching behaves the way your YAML describes.
Related Reading · topic cluster
Hand-picked deep-dives on the same topic: practical Clash routing guides in the same category.

Set Up Clash load-balance Groups: load-balance vs consistent-hashing Step by Step
Already use url-test or fallback? Learn Clash load-balance policy groups in YAML: round-robin spreads multi-connection downloads across nodes; consistent-hashin…

Stop IPv6 Leaks When Using Clash TUN on Dual-Stack Networks (2026)
TUN is on but leak tests still show your ISP IPv6, or apps split across regions? On IPv4/IPv6 dual-stack links, align OS IPv6 policy with mihomo ipv6 and DNS (A…

Telegram Won't Connect? Route MTProto and Domains in Clash (2026)
Telegram stuck on connecting, sync, or updates behind Clash? MTProto sessions hit DC IPs while web and CDN paths use telegram.org and t.me—route explicit mihomo…