
SSL_ERROR_SYSCALL with GKE Ingress and TLS termination

I have the following problem. I have a Deployment and a Service, a BackendConfig, a FrontendConfig, and an Ingress as follows (skipping the Deployment, as it is not really interesting):

---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: backend-config
spec:
  healthCheck:
    type: HTTP
    requestPath: /readiness
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/app-protocols: '{"http":"HTTP"}'
    cloud.google.com/backend-config: '{"default": "backend-config"}'
  name: app
  labels:
    app: app
spec:
  type: ClusterIP
  selector:
    app: app
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: frontend-config
spec:
  redirectToHttps:
    enabled: true
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 'app-ip'
    networking.gke.io/v1beta1.FrontendConfig: 'frontend-config'
    ingress.gcp.kubernetes.io/pre-shared-cert: 'ssl-certificate'
    kubernetes.io/ingress.allow-http: 'false'
  labels:
    app: app
spec:
#  tls:
#    - secretName: tls-secret
  rules:
    - host: myhost.com
      http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  name: http

As you can see, there is a static IP address and an SSL certificate (ssl-certificate), both of which I registered with GCP.

With this configuration I mostly get

OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to...

when using curl, and "Remote host terminated the handshake" with Java clients. But sporadically requests come through. It seems to have something to do with the change of replicas. As you can see above, I also tried using a Kubernetes secret before (the commented-out tls block); it had the same effect.
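
For context, the secret referenced by the commented-out tls block would be a standard TLS secret; a minimal sketch (the name matches the commented-out secretName, and the base64 payloads are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>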

Has anybody had the same problem, or does somebody have a clue about what I am doing wrong?

Just to preempt questions about SSL: as you can see, the certificate is added as a self-managed certificate and is valid. It also worked in another environment. The certificate part contains the complete chain up to the root certificate.

Thank you in advance.

UPDATE:

I am using a wildcard certificate, i.e. I have sub.domain.com covered by a *.domain.com certificate. I tried to change the configuration in the Ingress like this:

spec:
  tls:
    - hosts:
        - telematics.tranziit.com
      secretName: tranziit-tls-secret

Here is how the certificate looks in GCP: [screenshot: certificate in GCP]

with no success; the same effect. I used this certificate as a self-managed certificate when I was using VMs and a normal external load balancer, with no problem.

UPDATE 2:

In the meantime I completely removed the BackendConfig and the FrontendConfig, so the Service and Ingress look as follows:

---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  labels:
    app: app-service
spec:
  type: ClusterIP
  selector:
    app: app
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP 
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 'app-ip'
    kubernetes.io/ingress.allow-http: 'false'
  labels:
    app: app
spec:
  tls:
    - hosts:
        - sub.domain.com
      secretName: tls-secret
  rules:
    - host: sub.domain.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  name: http

The names are changed, of course. But the process was as follows: as soon as the Ingress was green, I ran curl -v https://mysubdomain-address several times. At first I got several answers as expected, with the expected responses, and in curl's verbose log I could see the handshakes completing. But then, after the third time or so, I got the issue again:

curl -v https://mydomain/path
*   Trying XX.XXX.XXX.XX:443...
* Connected to mydomain (34.149.251.22) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mydomain:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to mydomain:443

That is what I mean by an unstable result. I can also see that packets are coming through because this interface receives messages and pushes them to Pub/Sub, and on the other end I have a function. So I can see that the function gets invoked, as shown here:

[screenshot: Function Invocations]

I really do not know what else to try. The only thing left would be to switch to a managed, non-wildcard certificate from GCP; that is IMHO the only remaining option.
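
For reference, a Google-managed certificate would be declared roughly like this and referenced from the Ingress via the networking.gke.io/managed-certificates annotation (the name managed-cert is hypothetical; managed certificates do not support wildcards, so the subdomain has to be spelled out):

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert   # hypothetical name, referenced from the Ingress annotation
spec:
  domains:
    - sub.domain.com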

It looks like the answer was: use NodePort, as described in the documentation. Since I changed the service type to NodePort, the Ingress has been running stably (see the sketch below). Unfortunately, Google's statement that there is no limitation and that Ingress can work with either NodePort or ClusterIP through proxies is not really confirmed. In my particular case the Ingress was created with a ClusterIP backend, but it was unstable, in particular during the TLS handshake.
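
A minimal sketch of the working Service, assuming the same names as in UPDATE 2; the only change is the service type:

apiVersion: v1
kind: Service
metadata:
  name: app-service
  labels:
    app: app-service
spec:
  type: NodePort   # was ClusterIP; with NodePort the handshake errors disappeared
  selector:
    app: app
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP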

Thanks to @boredabdel for the mental support :)
