
HTTP2 with node.js behind nginx proxy

I have a node.js server running behind an nginx proxy. node.js is running an HTTP 1.1 (no SSL) server on port 3000. Both are running on the same server.

I recently set up nginx to use HTTP2 with SSL (h2). It seems that HTTP2 is indeed enabled and working.

However, I want to know whether the fact that the proxy connection (nginx <--> node.js) is using HTTP 1.1 affects performance. That is, am I missing the HTTP2 benefits in terms of speed because my internal connection is HTTP 1.1?

In general, the biggest immediate benefit of HTTP/2 is the speed increase offered by multiplexing for browser connections, which are often hampered by high latency (i.e. slow round-trip times). Multiplexing also reduces the need for (and expense of) multiple connections, which is a workaround used to achieve similar performance benefits in HTTP/1.1.

For internal connections (e.g. between a webserver acting as a reverse proxy and back-end app servers) the latency is typically very, very low, so the speed benefits of HTTP/2 are negligible. Additionally, each app server will typically already be a separate connection, so again there are no gains here.

So you will get most of your performance benefit from just supporting HTTP/2 at the edge. This is a fairly common setup - similar to the way HTTPS is often terminated on the reverse proxy/load balancer rather than going all the way through.

However, there are potential benefits to supporting HTTP/2 all the way through. For example, it could allow server push all the way from the application. There are also potential benefits from reduced packet size on that last hop, due to the binary nature of HTTP/2 and header compression. Though, like latency, bandwidth is typically less of an issue for internal connections, so the importance of this is arguable. Finally, some argue that a reverse proxy does less work connecting an HTTP/2 connection to an HTTP/2 connection than to an HTTP/1.1 connection, as there is no need to convert one protocol to the other, though I'm sceptical whether that's even noticeable since they are separate connections (unless it's acting simply as a TCP pass-through proxy). So, to me, the main reason for end-to-end HTTP/2 is to allow end-to-end server push, but even that is probably better handled with HTTP Link headers and 103 Early Hints, due to the complications of managing push across multiple connections.

For now, while servers are still adding support and server push usage is low (and still being experimented on to define best practice), I would recommend only having HTTP/2 at the end point. Nginx also doesn't, at the time of writing, support HTTP/2 for proxy_pass connections (though Apache does), and has no plans to add this, and they make an interesting point about whether a single HTTP/2 connection might introduce slowness (emphasis mine):

Is HTTP/2 proxy support planned for the near future?

Short answer:

No, there are no plans.

Long answer:

There is almost no sense to implement it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on number of simultaneous requests - and there is no such limit when talking to your own backends. Moreover, things may even become worse when using HTTP/2 to backends, due to single TCP connection being used instead of multiple ones.

On the other hand, implementing HTTP/2 protocol and request multiplexing within a single connection in the upstream module will require major changes to the upstream module.

Due to the above, there are no plans to implement HTTP/2 support in the upstream module, at least in the foreseeable future. If you still think that talking to backends via HTTP/2 is something needed - feel free to provide patches.

Finally, it should also be noted that, while browsers require HTTPS for HTTP/2 (h2), most servers don't, and so could support this final hop over HTTP (h2c). So there would be no need for end-to-end encryption if that is not present on the Node part (as it often isn't). Though, depending on where the backend server sits in relation to the front-end server, using HTTPS even for this connection is perhaps something that should be considered if traffic will be travelling across an unsecured network (e.g. CDN to origin server across the internet).
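If that last hop does cross an untrusted network, a minimal sketch of what proxying it over TLS could look like in nginx is below. The backend host name and CA bundle path are illustrative assumptions, not something from the question:

    # Sketch only: proxy the nginx -> backend hop over HTTPS when it crosses an untrusted network.
    # "backend.internal.example" and the certificate path are illustrative assumptions.
    location / {
        proxy_pass https://backend.internal.example:3000;
        proxy_ssl_verify on;                                      # verify the backend's certificate
        proxy_ssl_trusted_certificate /etc/nginx/backend-ca.pem;  # CA that signed the backend cert
        proxy_ssl_server_name on;                                 # send SNI to the backend
        proxy_http_version 1.1;                                   # this hop is still HTTP/1.1
    }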

NGINX now supports HTTP/2 Push for proxy_pass and it's awesome...

Here I am pushing favicon.ico, minified.css, minified.js, register.svg, purchase_litecoin.svg from my static subdomain too. It took me some time to realize I can push from a subdomain.

location / {
    http2_push_preload              on;
    add_header                      Link "<//static.yourdomain.io/css/minified.css>; as=style; rel=preload";
    add_header                      Link "<//static.yourdomain.io/js/minified.js>; as=script; rel=preload";
    add_header                      Link "<//static.yourdomain.io/favicon.ico>; as=image; rel=preload";
    add_header                      Link "<//static.yourdomain.io/images/register.svg>; as=image; rel=preload";
    add_header                      Link "<//static.yourdomain.io/images/purchase_litecoin.svg>; as=image; rel=preload";
    proxy_hide_header               X-Frame-Options;
    proxy_http_version              1.1;
    proxy_redirect                  off;
    proxy_set_header                Upgrade $http_upgrade;
    proxy_set_header                Connection "upgrade";
    proxy_set_header                X-Real-IP $remote_addr;
    proxy_set_header                Host $http_host;
    proxy_set_header                X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header                X-Forwarded-Proto $scheme;
    proxy_pass                      http://app_service;
}
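The proxy_pass above points at an upstream group called app_service that isn't shown in the answer. A minimal definition for it, assuming the node.js app from the question listening locally on port 3000, could be something like:

    # Sketch only: the upstream group referenced by proxy_pass above, assuming the
    # node.js app from the question is listening locally on port 3000.
    upstream app_service {
        server 127.0.0.1:3000;
    }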

In case someone is looking for a solution when it is not convenient to make your services HTTP/2 compatible, here is the basic NGINX configuration you can use to expose an HTTP/1.1 service over HTTP/2.

server {
  listen [::]:443 ssl http2;
  listen 443 ssl http2;

  server_name localhost;
  # "ssl on;" is not needed (and is deprecated in modern nginx): the "ssl" parameter on the listen directives already enables TLS
  ssl_certificate /Users/xxx/ssl/myssl.crt;
  ssl_certificate_key /Users/xxx/ssl/myssl.key;

  location / {
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
  }
}

NGINX does not support HTTP/2 as a client. As they're running on the same server and there is no latency or limited bandwidth, I don't think it would make a huge difference either way. I would make sure you are using keepalives between nginx and node.js, as in the sketch below.

https://www.nginx.com/blog/tuning-nginx/#keepalive
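A minimal sketch of that keepalive setup, assuming the node.js backend from the question on port 3000 (the upstream name is illustrative, and the ssl_certificate lines from the configs above are omitted):

    # Sketch only: reuse connections between nginx and the node.js backend.
    # The upstream name and port 3000 are assumptions based on the question.
    upstream node_backend {
        server 127.0.0.1:3000;
        keepalive 16;                        # keep up to 16 idle connections per worker process
    }

    server {
        listen 443 ssl http2;
        # ssl_certificate / ssl_certificate_key as in the configs above

        location / {
            proxy_pass http://node_backend;
            proxy_http_version 1.1;          # keepalive to upstreams requires HTTP/1.1
            proxy_set_header Connection "";  # clear the Connection header so connections are reused
        }
    }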

You are not losing performance in general, because nginx matches the request multiplexing the browser does over HTTP/2 by creating multiple simultaneous requests to your node backend. (One of the major performance improvements of HTTP/2 is allowing the browser to do multiple simultaneous requests over the same connection, whereas in HTTP 1.1 only one simultaneous request per connection is possible. And the browsers limit the number of connections, too.)
