I'm building an API that enables users to stream (push) large content over HTTP, and I would like to process the request on a Node.js Express server while the client is still pushing.
client.js
const request = require('request');

someReadableStream.pipe(request({
  method: "POST",
  // default base URL has no trailing slash, to avoid a double slash in the URI
  uri: `${process.env.SERVER_URL || "https://streaming.service.com"}/`,
  headers: {
    'Authorization': `Bearer ${API_TOKEN}`, // API_TOKEN defined elsewhere
  },
}));
server.js
const express = require('express');
const app = express();

// No body-parsing middleware is mounted, so req is the raw request stream
// and chunks can be processed while the client is still uploading.
app.post('/', function rootPostHandler(req, res) {
  console.log('request received');
  const tickInterval = setInterval(() => console.log('tick'), 1000);
  req.on('data', (chunk) => console.log('data received'));
  req.on('end', () => {
    console.log('request end received');
    clearInterval(tickInterval);
    res.end('done');
  });
});

app.listen(process.env.PORT);
Everything works as expected on my development server: the server receives data and can start processing it while the client is still pushing (see the per-second ticks in the output below).
dev server output:
request received
data received
data received
data received
tick
data received
data received
data received
tick
data received
data received
data received
tick
data received
data received
request end received
Now, when we deploy the same server code to our Kubernetes cluster, initial experiments suggest that the Nginx ingress (or some other Kubernetes component) waits for the request to complete before forwarding it to the underlying HTTP service (no ticks in the output below).
server pod logs:
request received
data received
data received
data received
data received
data received
data received
data received
data received
data received
data received
data received
request end received
kubernetes config
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  labels:
    tier: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "issuer"
    nginx.ingress.kubernetes.io/proxy-body-size: 30720m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "180"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "180"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_pass_request_headers on;
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - streaming.service.com
      secretName: streaming-tls
  rules:
    - host: streaming.service.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: streaming
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: streaming
  labels:
    app: streaming
spec:
  type: ClusterIP
  clusterIP: None
  sessionAffinity: ClientIP
  ports:
    - port: 80
  selector:
    app: streaming
Question: Is there a way to tell nginx-ingress to forward bytes to the underlying HTTP service as they arrive, for example through annotations or snippets?
Answering my own question since it may help others.
All you have to do is add the nginx.ingress.kubernetes.io/proxy-request-buffering: "off" annotation.
New Ingress configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-stream
  labels:
    name: ingress-stream
    tier: ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off" # here
spec:
  ingressClassName: nginx
  rules:
    - host: streaming.service.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: streaming
                port:
                  number: 80
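For reference, this annotation maps to the nginx proxy_request_buffering directive (on by default) in the generated location block, roughly like the following sketch (not the exact generated config):

location / {
    # stream request body bytes to the upstream as they arrive,
    # instead of buffering the whole body before proxying
    proxy_request_buffering off;
}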