GSLB (global server load balancing) operates across the WAN and is generally implemented on top of DNS. Some time ago I studied F5's BIG-IP GTM, a well-known commercial GSLB product: it combines very high query performance with health checks against each datacenter to resolve user queries to the nearest healthy site. Google's GSLB works along similar lines. The advantages of its authoritative servers include:
- Support for the EDNS0 client subnet extension, where the recursive resolver includes the user's real client subnet in the request it forwards to the authoritative server. This is not an official standard yet, and on the recursive side only large operators such as Google Public DNS and OpenDNS support it.
- Google tracks the IP address of each recursive resolver it sees, along with the approximate size of the user base behind it, so that when Google changes DNS data it can estimate, as accurately as possible, the impact on each segment of users.
- Google estimates the geographic distribution of the users behind each recursive resolver, so when a resolver sends a DNS query to Google's authoritative servers, they return the answer that is best for that resolver's users (this is very impressive).
- Google's authoritative servers naturally also factor in the health of datacenters in different regions when deciding how to answer.
Here is the original text:
The DNS middleman has three very important implications on traffic management:
- Recursive resolution of IP addresses
- Nondeterministic reply paths
- Additional caching complications
Recursive resolution of IP addresses is problematic, as the IP address seen by the
authoritative nameserver does not belong to a user; instead, it’s the recursive resolver’s. This is a serious limitation, because it only allows reply optimization for the
shortest distance between resolver and the nameserver. A possible solution is to use
the EDNS0 extension proposed in [Con15], which includes information about the
client’s subnet in the DNS query sent by a recursive resolver. This way, an authoritative nameserver returns a response that is optimal from the user’s perspective, rather
than the resolver’s perspective. While this is not yet the official standard, its obvious
advantages have led the biggest DNS resolvers (such as OpenDNS and Google) to
support it already.
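To make the extension concrete, here is a minimal sketch (my own illustration, not from the book) that packs an EDNS Client Subnet option as specified in RFC 7871 (option code 8): family, source prefix length, scope prefix length, then the address truncated to the prefix. Only the Python standard library is used.

```python
import struct
import ipaddress

def ecs_option(client_ip: str, prefix_len: int) -> bytes:
    """Build the wire form of an EDNS Client Subnet option (RFC 7871).

    The option carries only the client's subnet (prefix), not the full
    address, which limits how much user information the resolver leaks.
    """
    addr = ipaddress.ip_address(client_ip)
    family = 1 if addr.version == 4 else 2      # 1 = IPv4, 2 = IPv6
    n_bytes = (prefix_len + 7) // 8             # address truncated to prefix
    payload = struct.pack("!HBB", family, prefix_len, 0) + addr.packed[:n_bytes]
    # Option header: OPTION-CODE (8 = ECS), OPTION-LENGTH
    return struct.pack("!HH", 8, len(payload)) + payload
```

A resolver would attach this option to the OPT pseudo-record of its outgoing query; the authoritative server can then answer based on `192.0.2.0/24` rather than the resolver's own address.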
Not only is it difficult to find the optimal IP address to return to the nameserver for a
given user’s request, but that nameserver may be responsible for serving thousands or
millions of users, across regions varying from a single office to an entire continent.
For instance, a large national ISP might run nameservers for its entire network from
one datacenter, yet have network interconnects in each metropolitan area. The ISP’s nameservers would then return a response with the IP address best suited for their
datacenter, despite there being better network paths for all users!
Finally, recursive resolvers typically cache responses and forward those responses
within limits indicated by the time-to-live (TTL) field in the DNS record. The end
result is that estimating the impact of a given reply is difficult: a single authoritative
reply may reach a single user or multiple thousands of users. We solve this problem in
two ways:
- We analyze traffic changes and continuously update our list of known DNS resolvers with the approximate size of the user base behind a given resolver, which allows us to track the potential impact of any given resolver.
- We estimate the geographical distribution of the users behind each tracked resolver to increase the chance that we direct those users to the best location.
Estimating geographic distribution is particularly tricky if the user base is distributed
across large regions. In such cases, we make trade-offs to select the best location and
optimize the experience for the majority of users.
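The bookkeeping described above can be sketched as follows (a hypothetical illustration with made-up resolver addresses and counts, not Google's actual system): a table mapping each known resolver to its estimated user base, used to bound the blast radius of changing the answer served to some set of resolvers.

```python
# Hypothetical data: tracked resolvers and the approximate number of
# users behind each one (continuously updated from traffic analysis).
RESOLVER_USER_BASE = {
    "198.51.100.1": 250_000,  # large national ISP resolver
    "192.0.2.53": 40_000,     # regional resolver
    "203.0.113.7": 1_200,     # small office resolver
}

def estimated_impact(changed_resolvers) -> int:
    """Upper bound on users affected if the answers served to these
    resolvers change: each resolver fans one reply out to its whole
    user base, so impacts add up across resolvers."""
    return sum(RESOLVER_USER_BASE.get(r, 0) for r in changed_resolvers)
```

This is why a single authoritative reply can matter so much: redirecting one large ISP resolver moves a quarter-million users at once, while the same change to a small office resolver is nearly invisible.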
But what does “best location” really mean in the context of DNS load balancing? The
most obvious answer is the location closest to the user. However (as if determining
users’ locations isn’t difficult in and of itself ), there are additional criteria. The DNS
load balancer needs to make sure that the datacenter it selects has enough capacity to
serve requests from users that are likely to receive its reply. It also needs to know that
the selected datacenter and its network connectivity are in good shape, because
directing user requests to a datacenter that’s experiencing power or networking problems isn’t ideal. Fortunately, we can integrate the authoritative DNS server with our
global control systems that track traffic, capacity, and the state of our infrastructure.
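A simplified sketch of that decision (hypothetical types and field names; the real control systems are far richer): filter datacenters by health and spare capacity, then prefer the one closest to the users behind the querying resolver.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Datacenter:
    name: str
    healthy: bool          # power/network state from the control systems
    spare_capacity: int    # requests/sec the site can still absorb
    distance_km: float     # rough distance to the resolver's user base

def best_location(datacenters, expected_load: int) -> Optional[Datacenter]:
    """Pick the nearest datacenter that is healthy and has enough
    spare capacity for the load this reply is likely to attract."""
    candidates = [dc for dc in datacenters
                  if dc.healthy and dc.spare_capacity >= expected_load]
    if not candidates:
        return None
    return min(candidates, key=lambda dc: dc.distance_km)
```

Note that "closest" loses to "healthy and has capacity": a nearby site that is overloaded or degraded is skipped in favor of a farther one that can actually serve the users.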
The third implication of the DNS middleman is related to caching. Given that authoritative nameservers cannot flush resolvers’ caches, DNS records need a relatively low TTL. This effectively sets a lower bound on how quickly DNS changes can be propagated to users. Unfortunately, there is little we can do other than to keep this in mind
as we make load balancing decisions.
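As a back-of-the-envelope illustration (my own, with hypothetical numbers): a resolver that cached an answer just before a change may keep serving the stale record until the full TTL elapses, so the TTL is exactly the worst-case propagation delay for TTL-honoring caches.

```python
def worst_case_propagation_s(ttl_s: int, seconds_since_change: int) -> int:
    """Seconds remaining until every cache that honors the TTL has
    expired the old record (0 once the full TTL has elapsed)."""
    return max(0, ttl_s - seconds_since_change)
```

With a 300-second TTL, two minutes after a change some users may still be directed by the old record for up to another three minutes; lowering the TTL shrinks that window but increases query load on the authoritative servers.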