Why is ns_t_ns faster than ns_t_a when querying the root server?
I want to know the latency between the client and the local DNS server, so I send a query for the root DNS server (.) like this:
res_nquery(&res, ".", ns_c_in, ns_t_a, answer, sizeof(answer));
But if I change ns_t_a to ns_t_ns, the query becomes much faster. Why does this happen?
A recursive resolver needs to cache the ./IN/NS record set, and usually does so when the resolver is started. This is called priming and is covered in RFC 8109 ("Initializing a DNS Resolver with Priming Queries").
The set of root name servers also never expires from the cache (in a typical implementation). 根名称服务器集也永远不会从缓存中过期(在典型的实现中)。
A query for ./IN/A does not happen during regular operation, so the cache needs to be populated first. This resource record set will also expire eventually.
If both resource record sets are in the cache, typical resolver response times will be identical.