1. The Problem load-balance Solves (vs url-test / fallback)
Url-test periodically probes a test URL, ranks members by measured latency, and then sends essentially all eligible traffic through the current “winner” proxy until new measurements justify a swap. That is excellent for general browsing when you want the lowest-latency hop. Fallback honors a fixed priority list, which is ideal when policy demands a primary node first and backups later. Neither design is aimed at saturating several nodes at once when an application opens many concurrent sockets to the same destination family.
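For reference, the two single-winner types look like this in YAML (a minimal sketch; the group names and relay tags are placeholders, and the probe URL is the commonly used gstatic endpoint):

proxy-groups:
  - name: 'Auto-Fastest'          # url-test: route through the lowest-latency member
    type: url-test
    proxies: ['RELAY-SG-01', 'RELAY-JP-01']
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
  - name: 'Primary-Backup'        # fallback: honor the listed order, skip dead members
    type: fallback
    proxies: ['RELAY-SG-01', 'RELAY-JP-01']
    url: 'https://www.gstatic.com/generate_204'
    interval: 300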
Large object downloads, package mirrors, and some APIs naturally open dozens of parallel connections. If every connection lands on the same outbound because your group type collapses to a single selected proxy, you may leave headroom unused on other subscription lines. A load-balance policy group tells the core to distribute those connections among members according to the chosen strategy. You still operate inside the same Clash rule pipeline—you are only changing how the group fans traffic across the proxies you listed.
The trade-off is predictability. Spreading flows can improve aggregate throughput yet break assumptions that “all tabs share one exit IP.” Some services fingerprint rapid IP churn as abuse; others expect a stable region. That is why Clash offers multiple strategies under the single type: load-balance umbrella: round-robin leans into distribution, while consistent-hashing pins each internal flow key to a node so repeated connections from the same vantage behave more steadily. For a precise contrast with automatic latency ranking, keep the url-test and fallback walkthrough open in another tab while you edit YAML.
2. Mental Model: Flows, Connections, and “One Winner” Groups
Modern proxies see traffic as streams of TCP (and sometimes QUIC) sessions. A web browser might multiplex many requests over one HTTP/2 connection, but installers and CLI downloaders frequently spawn separate TLS connections per range request or chunk. Clash must decide which proxy entry backs each new session according to the active policy group your rules matched.
With url-test, once the core ranks candidates, the practical effect for many workloads resembles “stick to this node until the scoreboard changes.” With load-balance, the core still honors member health, but assignment logic rotates or hashes so that not every session inherits the same proxy index. The important nuance is application-level: chat or banking sessions that need a single stable public IP should generally use select or a conservative fallback chain, while bulk transfers that tolerate per-connection variance benefit from distribution.
Document your intent in a YAML comment next to the group (English comments only in shared repos). Future edits are easier when the file says whether this group exists for throughput, resilience, or both. If you also maintain domain-specific rules, the routing and rules reference explains how first-match semantics interact with GEOIP and provider rule-sets; load-balance does not replace careful ordering, it only changes how a matched group schedules its outbounds.
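A comment convention as simple as this keeps the intent visible (illustrative only; the group reuses the placeholder relay names from the examples later in this guide):

proxy-groups:
  # Purpose: throughput for bulk downloads. Do NOT point login or banking rules here.
  - name: 'LB-Download'
    type: load-balance
    strategy: round-robin
    proxies: ['RELAY-SG-01', 'RELAY-SG-02']
    url: 'https://www.gstatic.com/generate_204'
    interval: 300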
3. Round-Robin: Spread Parallel Connections
Setting strategy: round-robin under type: load-balance asks the scheduler to walk member proxies in cyclic order as new connections arrive. In practice, a downloader that opens twelve sockets may see those connections land on different nodes in the pool, which is exactly what you want when each node carries its own bandwidth ceiling and you are not trying to preserve a sticky server-side session.
Round-robin shines for artifact mirrors, large ISO pulls, and multi-part CDN fetches where the remote side treats each connection anonymously. It is weaker when the remote application correlates cookies, device IDs, or account tokens with IP addresses observed on each session. If OAuth redirects bounce between countries because every click picked a fresh egress, switch the fragile service to a dedicated select group instead of a balancer.
Operationally, mind the size of your pool. A list with two durable nodes behaves calmly; a merged subscription with forty aliases can still be configured, but debugging becomes noisy when any member flaps. Trim the list to realistic candidates, then let health checks drop broken lines—otherwise your “effective” pool shrinks unpredictably during outages.
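If your members come from a large merged subscription, some cores (Clash Premium and mihomo builds) let a group pull from a proxy-provider and narrow it with a regex filter; a hedged sketch, assuming a provider named my-subscription and tags containing region codes:

proxy-groups:
  - name: 'LB-Download'
    type: load-balance
    strategy: round-robin
    use: ['my-subscription']   # members come from a proxy-provider instead of a hand-written list
    filter: 'SG|JP'            # keep only tags matching the regex, if your core supports filter
    url: 'https://www.gstatic.com/generate_204'
    interval: 300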
4. Consistent-Hashing: Sticky Mapping by Source
strategy: consistent-hashing changes the assignment question from “who is next in the carousel?” to “given this flow’s key, which slot in the ring should own it?” Implementations derive a deterministic mapping so flows that share the same hash inputs tend to land on the same outbound across reconnects, while different flows still distribute across members. Users often reach for this mode when they want node allocation that is spread across the pool but not wildly random on every new socket.
This is not a guarantee of human-readable “same city forever.” Hash inputs typically involve addresses and ports seen by the proxy engine, which means per-device traffic patterns can still migrate when NAT tables recycle or when applications rebind. Treat consistent-hashing as “gentler spreading with correlation,” not as a compliance-grade static IP solution. For providers that require a single residential fingerprint, you should still pin traffic to a manual select group.
One sweet spot is mixed workloads on a home LAN: several clients share one Clash instance, and you prefer each LAN host’s long-lived flows to settle on a predictable node while aggregate utilization still balances. Another is API clients that open pools of workers yet expect a modest level of session locality. Always validate with real logs rather than expectations—turn on your client’s logging pane, tag the group distinctly, and verify that the hostnames you care about actually pass through the balancer rule you think they do.
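Two config keys help with that verification (a minimal sketch; the controller address is a common local default, adjust to your setup):

log-level: debug                       # temporarily raise verbosity to see which rule and group each connection hit
external-controller: '127.0.0.1:9090'  # exposes the API a dashboard uses to list live connections per policy group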
5. Step-by-Step YAML: Declare Members and Strategy
Step 1: Confirm each member string under proxies already exists in your merged profile—either inline proxies: entries or names materialized from proxy-providers. Typos here fail silently in some GUIs and loudly in others, so copy names from the outbound list your client renders.
Step 2: Add an entry under proxy-groups with type: load-balance, the strategy you want, and the ordered proxies list. Step 3: Provide a reachable url for periodic checks unless your build documents otherwise; match intervals to how quickly you need to eject dead relays without draining laptop batteries. Step 4: Reload the profile and confirm the group appears with the right iconography in your dashboard.
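Step 1 assumes names such as 'RELAY-SG-01' are already defined, either by a provider or inline; a minimal inline sketch with placeholder server details looks like this:

proxies:
  - name: 'RELAY-SG-01'        # must match the group member string exactly
    type: ss
    server: sg01.example.net   # placeholder host
    port: 8388
    cipher: aes-128-gcm
    password: 'replace-me'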
Round-robin load-balance (spread connections)
proxy-groups:
  - name: 'LB-Download'
    type: load-balance
    strategy: round-robin
    proxies:
      - 'RELAY-SG-01'
      - 'RELAY-SG-02'
      - 'RELAY-JP-01'
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
Consistent-hashing load-balance (sticky flow keys)
proxy-groups:
  - name: 'LB-Sticky'
    type: load-balance
    strategy: consistent-hashing
    proxies:
      - 'RELAY-SG-01'
      - 'RELAY-SG-02'
      - 'RELAY-JP-01'
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
Replace relay labels with your subscription’s real tags. If your provider documents a bespoke health URL tuned to their backbone, prefer it over random third-party endpoints that might return 403 to datacenter ranges. Keep YAML indentation consistent—two spaces per level is the usual convention—and avoid mixing tabs, which break parsers in subtle ways.
6. Health Checks, Dead Proxies, and lazy Mode
Load balancing does not magically revive broken transports. Members that fail HTTP probes should drop out of rotation so usable relays receive traffic, a behavior that aligns with the other automatic group types in mihomo-family cores. If you see bursts of errors right after subscription refreshes, extend the interval slightly so transient TLS hiccups do not reshuffle the entire pool mid-transfer.
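If the members arrive through a proxy-provider, the provider can run its own probes as well, so dead relays are flagged before the balancer ever sees them; a hedged sketch with placeholder URL and path:

proxy-providers:
  my-subscription:
    type: http
    url: 'https://example.com/subscription.yaml'   # placeholder subscription URL
    path: './providers/my-subscription.yaml'
    interval: 3600
    health-check:
      enable: true
      url: 'https://www.gstatic.com/generate_204'
      interval: 300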
Some profiles add lazy: true (when supported) so the engine postpones probing until a proxy participates in live traffic. That quiets logs on enormous merged lists, but it can delay detection of a dead hop until something tries to use it. For small, curated pools you rely on for nightly downloads, eager checks remain simpler to reason about.
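In builds that support it, the flag sits directly on the group (a minimal sketch reusing the round-robin example from above):

proxy-groups:
  - name: 'LB-Download'
    type: load-balance
    strategy: round-robin
    proxies: ['RELAY-SG-01', 'RELAY-SG-02', 'RELAY-JP-01']
    url: 'https://www.gstatic.com/generate_204'
    interval: 300
    lazy: true   # postpone probing until the group actually handles traffic (if your core supports it)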
When multiple members fail simultaneously—upstream outage or regional block—there is nothing load-balance can do except converge on whoever still passes probes. This is where chaining a fallback group above or below the balancer in your architecture sometimes helps: policy composition is a separate design topic, but remember that simple YAML is easier to emergency-edit at 2 a.m. than intricate nests nobody remembers.
7. Point Rules at the Group (Without Breaking Split Tunnels)
A balancer group does nothing until rules send traffic to its name. A popular pattern dedicates DOMAIN-KEYWORD or PROCESS-NAME rows for heavy installers to LB-Download, while interactive apps stay on a manual select. Another pattern routes an entire region’s GEOIP catch to a balancer when you explicitly want to spread traffic across several exits targeting that geography.
Because first match wins, place specific rows ahead of broad MATCH fallbacks. Misordered rules are the leading cause of “I edited the group but nothing changed” reports: the traffic never reached your group at all. When debugging, temporarily insert a high-priority LOG or trace facility if your build offers one, or raise logging verbosity so you can see which rule terminates evaluation.
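Putting the two previous paragraphs together, a rule block might look like this (a hedged sketch; Manual-Select is a hypothetical select group and the domains and process names are placeholders):

rules:
  - PROCESS-NAME,aria2c,LB-Download                # CLI downloader goes to the balancer
  - DOMAIN-KEYWORD,releases,LB-Download            # heavy artifact hosts by keyword
  - DOMAIN-SUFFIX,bank.example.com,Manual-Select   # session-sensitive site stays on one stable exit
  - GEOIP,US,LB-Download                           # spread a whole region's traffic, if that is the intent
  - MATCH,Manual-Select                            # broad catch-all comes last because first match wins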
DNS mode intersects here as well. Under fake-ip setups, domain rules remain intuitive; under alternative DNS modes, be sure the engine still observes host-level distinctions for the destinations you target. If something resolves to a numeric IP before Clash classifies it, your careful domain list might never engage, and traffic flows to the wrong outbound regardless of balancer brilliance.
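A common fake-ip baseline looks like this (a minimal sketch; the upstream resolvers are examples, not recommendations):

dns:
  enable: true
  enhanced-mode: fake-ip
  fake-ip-range: 198.18.0.1/16
  nameserver:
    - 'https://1.1.1.1/dns-query'
    - 'https://8.8.8.8/dns-query'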
8. Pitfalls: Auth Sessions, Region Locks, and Log Noise
Session-based web apps remain the cautionary tale. When every refresh may ride a different egress IP, you can log yourself out or trigger risk scoring. Keep banking, email, and admin consoles on conservative groups; reserve multi-connection balancers for obvious bulk paths. The same discipline shows up in the AI routing articles on this site—sessions that hate drift need stable lanes, not clever randomization.
Region-sensitive streaming stacks may also disagree with aggressive spreading. Even if throughput rises, content licensing might care more about a coherent geo than raw megabits. If adaptive bitrate thrashes or DRM renegotiates endlessly, revert that traffic to a single-node strategy and leave load balancing to neutral CDNs or update channels.
Operationally noisy dashboards can fatigue you into ignoring real faults. Keep member counts proportional to intention, label groups with obvious strings like LB- prefixes, and snapshot working YAML before experimenting. Regression tests in production networks are simply “did tonight’s backup still finish?”—give yourself an easy rollback file.
9. Quick Comparison: url-test, fallback, load-balance
Url-test minimizes measured latency to the probe—best when you want a single “current best” hop and can tolerate occasional switches as conditions evolve. Fallback respects your explicit ordering—best when policy dictates primaries and backups. Load-balance maximizes distributed use of several healthy relays—best when workloads spawn many parallel connections and you accept per-flow diversity in exchange for aggregate capacity or gentler spreading via hashing.
None of these types fix bad rules, incomplete DNS, or a subscription that already exhausted quota. They orchestrate what you already have. If performance is mysteriously poor, capture logs, verify the health probe, then reconsider whether the group type matches how the application actually uses the network. Sometimes the winning move is fewer nodes with cleaner routing, not an additional layer of automation.
Advanced readers occasionally stack types: for example, each member of a fallback chain might itself be a selective strategy. That flexibility is powerful and easy to misconfigure. Until you need it, a handful of clearly named groups paired with tight rules beats an unmaintainable graph. You can always graduate to nested compositions after the basics behave predictably for a week of daily driving.
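As one example of such stacking, a fallback group can list the balancer itself as its first member (hedged sketch; Bulk-With-Backup is a hypothetical name):

proxy-groups:
  - name: 'Bulk-With-Backup'
    type: fallback
    proxies:
      - 'LB-Download'      # the balancer defined earlier handles normal operation
      - 'RELAY-JP-01'      # a single trusted relay as the last resort
    url: 'https://www.gstatic.com/generate_204'
    interval: 300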
10. Closing Thoughts
Clash load balancing answers a different question than url-test or fallback: how should I use several working proxies at the same time instead of crowding one? With strategy: round-robin, you bias toward raw parallelism; with consistent-hashing, you bias toward correlated node allocation without surrendering distribution entirely. Pair either mode with honest health probes and surgical rules, and you get throughput that scales with your pool instead of fighting it.
When you want installers aligned with the same stack you just configured, consolidated packages and cross-platform clients reduce the guesswork of mixing outdated cores with modern YAML keywords. The ecosystem has moved quickly; choosing maintained builds matters as much as clever policy group tricks. For a vetted starting point, browse our download hub to grab a current client, load your profile, and confirm in the UI that LB-* groups rotate or hash the way your benchmarks expect. Compared with opaque one-tap VPN products, explicit cores reward you with logs you can read when midnight debugging arrives—and that transparency is worth preserving. → Download Clash for free and experience the difference