Why does the nginx ingress controller return 400 Bad Request even though large_client_header_buffers is applied?
I use the k8s nginx ingress controller in front of my backend, which has two instances. When a web client sends a fairly large request through the ingress, nginx always returns 400 Bad Request for POST/PUT/GET requests. Besides that, there is no further explanation or detail for this error; if it is logged anywhere, I could not find it or identify what my problem is.
After googling I figured out the usual recipe to fix this: apply large_client_header_buffers with an increased value. That is exactly what I did; my buffer size is now 4 256k. But it had no effect, and I still get this error.
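For reference, the buffer change is applied through the ingress-nginx ConfigMap; a minimal sketch (the ConfigMap name and namespace below are the ingress-nginx defaults, assumed here rather than taken from the question):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Rendered into nginx.conf as: large_client_header_buffers 4 256k;
  large-client-header-buffers: "4 256k"
```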
Please give me any idea of how to proceed with this problem.
So the answer is: nginx is not guilty of the described behaviour. After a thorough investigation of the logs of the Java app that sits behind nginx, this exception was noticed:
[INFO ] 2021-05-10 16:20:56.354 --- [io-10104-exec-4] org.apache.juli.logging.DirectJDKLog : Error parsing HTTP request header
Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Request header is too large
And because of this detail - Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level. - the message was too fleeting to catch during a quick scan of the log.
Summing up, the solution was to increase the Spring Boot property server.max-http-header-size to a more appropriate value. The default value was 8 KB.
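The fix boils down to a single property (shown here for application.properties; the 64KB value is illustrative, not the exact value used in the original post):

```properties
# application.properties
# Raise the embedded Tomcat request-header limit from the 8KB default.
server.max-http-header-size=64KB
```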
Some additional information about the problem.
The large_client_header_buffers directive was changed for the http context via the ConfigMap; the server context was also changed, but by simply editing nginx.conf and reloading nginx. Neither helped.
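To capture a debug-level trace like the one below, the controller's log level can also be raised via the ConfigMap (a sketch; the ConfigMap name and namespace are the ingress-nginx defaults, assumed here):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  error-log-level: "debug"
```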
The problem is that the chain writer buf, visible in the debug-mode log below, reaches 8k, and 400 is returned:
2021/04/29 11:10:18 [debug] 805#805: *292613 http cleanup add: 0000555F4EFFC428
2021/04/29 11:10:18 [debug] 805#805: *292613 init keepalive peer
2021/04/29 11:10:18 [debug] 805#805: *292613 get keepalive peer
2021/04/29 11:10:18 [debug] 805#805: *292613 lua balancer peer, tries: 1
2021/04/29 11:10:18 [debug] 805#805: *292613 lua reset ctx
2021/04/29 11:10:18 [debug] 805#805: *292613 looking up Lua code cache with key 'balancer_by_luanhli_0f29762dfd828b8baa4d895affbc4b90'
2021/04/29 11:10:18 [debug] 805#805: *292613 stream socket 39
2021/04/29 11:10:18 [debug] 805#805: *292613 epoll add connection: fd:39 ev:80002005
2021/04/29 11:10:18 [debug] 805#805: *292613 connect to 172.18.112.41:10102, fd:39 #292619
2021/04/29 11:10:18 [debug] 805#805: *292613 http upstream connect: -2
2021/04/29 11:10:18 [debug] 805#805: *292613 posix_memalign: 0000555F4EFA1AC0:128 @16
2021/04/29 11:10:18 [debug] 805#805: *292613 event timer add: 39: 5000:5598957611
2021/04/29 11:10:18 [debug] 805#805: *292613 http finalize request: -4, "/api/assets/53f75d85-0528-434c-804f-922acb220c88?" a:1, c:2
2021/04/29 11:10:18 [debug] 805#805: *292613 http request count:2 blk:0
2021/04/29 11:10:18 [debug] 805#805: *292613 http run request: "/api/assets/53f75d85-0528-434c-804f-922acb220c88?"
2021/04/29 11:10:18 [debug] 805#805: *292613 http upstream check client, write event:1, "/api/assets/53f75d85-0528-434c-804f-922acb220c88"
2021/04/29 11:10:18 [debug] 805#805: *292613 http upstream request: "/api/assets/53f75d85-0528-434c-804f-922acb220c88?"
2021/04/29 11:10:18 [debug] 805#805: *292613 http upstream send request handler
2021/04/29 11:10:18 [debug] 805#805: *292613 http upstream send request
2021/04/29 11:10:18 [debug] 805#805: *292613 http upstream send request body
2021/04/29 11:10:18 [debug] 805#805: *292613 chain writer buf fl:1 s: 8222
2021/04/29 11:10:18 [debug] 805#805: *292613 chain writer in: 0000555F4EFB81A0
2021/04/29 11:10:18 [debug] 805#805: *292613 writev: 8222 of 8222
2021/04/29 11:10:18 [debug] 805#805: *292613 chain writer out: 0000000000000000
2021/04/29 11:10:18 [debug] 805#805: *292613 event timer del: 39: 5598957611
2021/04/29 11:10:18 [debug] 805#805: *292613 event timer add: 39: 60000:5599012614
2021/04/29 11:10:18 [debug] 805#805: *292613 http upstream request: "/api/assets/53f75d85-0528-434c-804f-922acb220c88?"
2021/04/29 11:10:18 [debug] 805#805: *292613 http upstream process header
2021/04/29 11:10:18 [debug] 805#805: *292613 malloc: 0000555F4EE529C0:4096
2021/04/29 11:10:18 [debug] 805#805: *292613 recv: eof:1, avail:-1
2021/04/29 11:10:18 [debug] 805#805: *292613 recv: fd:39 590 of 4096
2021/04/29 11:10:18 [debug] 805#805: *292613 http proxy status 400 "400 "
2021/04/29 11:10:18 [debug] 805#805: *292613 http proxy header: "Content-Type: text/html;charset=utf-8"
2021/04/29 11:10:18 [debug] 805#805: *292613 http proxy header: "Content-Language: en"
2021/04/29 11:10:18 [debug] 805#805: *292613 http proxy header: "Content-Length: 435"
2021/04/29 11:10:18 [debug] 805#805: *292613 http proxy header: "Date: Thu, 29 Apr 2021 11:10:18 GMT"
2021/04/29 11:10:18 [debug] 805#805: *292613 http proxy header: "Connection: close"
2021/04/29 11:10:18 [debug] 805#805: *292613 http proxy header done