
Kibana deployed to a Kubernetes cluster returns 404

I have Kibana deployed to a Kubernetes cluster as a StatefulSet. However, when I point my browser at Kibana, it returns {"statusCode":404,"error":"Not Found","message":"Not Found"}. Any advice and insight is appreciated. Here is the log I see in the pod when I access the application in the browser at http://app.domain.io/kibana:

{"type":"response","@timestamp":"2019-01-29T04:18:50Z","tags":[],"pid":1,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"x-forwarded-for":"[IP]","x-forwarded-proto":"https","x-forwarded-port":"443","host":"[host]","x-amzn-trace-id":"Root=1-5c4fd42a-1261c1e0474144902a2d6840","cache-control":"max-age=0","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9,zh-CN;q=0.8,zh-TW;q=0.7,zh;q=0.6,ko;q=0.5"},"remoteAddress":"[IP]","userAgent":"10.0.2.185"},"res":{"statusCode":404,"responseTime":19,"contentLength":9},"message":"GET /kibana 404 19ms - 9.0B"}
apiVersion: v1
kind: Service 
metadata:
  name: svc-kibana
  labels: 
    app: app-kibana
spec:
  selector:
    app: app-kibana
#    tier: database
  ports:  
  - name: kibana
    protocol: TCP
    port: 8080
    targetPort: 5601
  clusterIP: None # Headless
---
apiVersion: apps/v1 
kind: StatefulSet
metadata:
  name: kibana
spec:
  serviceName: "svc-kibana"
  podManagementPolicy: "Parallel" # Default is OrderedReady
  replicas: 1 # Default is 1
  selector:
    matchLabels:
      app: app-kibana # Has to match .spec.template.metadata.labels
  template:
    metadata:
      labels: 
        app: app-kibana # Has to match .spec.selector.matchLabels
    spec:   
      terminationGracePeriodSeconds: 10
      containers:
      - name: kibana
        securityContext:
          capabilities:
            add:
              - IPC_LOCK
              - SYS_RESOURCE
        image: kibana:6.5.4
        imagePullPolicy: Always
        env:
        - name: ELASTICSEARCH_URL
          value: http://svc-elasticsearch:9200
        - name: SERVER_BASEPATH
          value: /api/v1/namespaces/default/services/svc-kibana/proxy
        ports:
        - containerPort: 5601
          name: kibana
          protocol: TCP
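Since Kibana exposes a status endpoint at /api/status, a readiness probe against it would let Kubernetes hold traffic back until the server is actually up. This is a hypothetical addition, not part of the original manifest:

```yaml
# Hypothetical addition under the kibana container in the StatefulSet above.
# Probes Kibana's /api/status endpoint; if server.basePath is set together
# with server.rewriteBasePath: true, the probed path must include the base
# path (e.g. /my-kibana/api/status).
        readinessProbe:
          httpGet:
            path: /api/status
            port: 5601
          initialDelaySeconds: 30
          periodSeconds: 10
```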

Here is the health check log from the AWS ALB:

{"type":"response","@timestamp":"2019-01-29T06:30:53Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/app/kibana","method":"get","headers":{"host":"[IP]:5601","connection":"close","user-agent":"ELB-HealthChecker/2.0","accept-encoding":"gzip, compressed"},"remoteAddress":"[IP]","userAgent":"[IP]"},"res":{"statusCode":200,"responseTime":27,"contentLength":9},"message":"GET /app/kibana 200 27ms - 9.0B"}

I tried removing the env values and using a ConfigMap mounted at /etc/kibana/kibana.yml with the following config, but to no avail:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-config
  namespace: default
data:
  kibana.yml: |+
    server.port: 5601
    server.host: "0.0.0.0"
    elasticsearch.url: "http://svc-elasticsearch:9200"
    kibana.index: ".kibana"
    logging.silent: false
    logging.quiet: false
    logging.verbose: true

It works now, after adding the following to the Kibana config:

    server.basePath: "/my-kibana"
    server.rewriteBasePath: true
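For reference, the full kibana.yml that results — a consolidation of the ConfigMap above plus the two new settings, using the values from this question:

```yaml
server.port: 5601
server.host: "0.0.0.0"
server.basePath: "/my-kibana"
server.rewriteBasePath: true
elasticsearch.url: "http://svc-elasticsearch:9200"
kibana.index: ".kibana"
```

With server.rewriteBasePath: true, Kibana itself strips the /my-kibana prefix from incoming requests, so the load balancer or ingress can forward request paths unmodified.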

Thanks to Matthew L Daniel, I have switched the health check to /my-kibana/api/status.

    - name: SERVER_BASEPATH
      value: /api/v1/namespaces/default/services/svc-kibana/proxy

is the erroneous setting causing your problems, since server.basePath is documented as follows:

Enables you to specify a path to mount Kibana at if you are running behind a proxy. Use the server.rewriteBasePath setting to tell Kibana if it should remove the basePath from requests it receives, and to prevent a deprecation warning at startup. This setting cannot end in a slash (/).

So you will have to use /api/v1/namespaces/default/services/svc-kibana/proxy/app/kibana, since you did not override server.defaultRoute: /app/kibana. I have no idea why the ELB health check is only getting back 9 bytes of content, but you'd likely want to use /api/status as its health check anyway.

I was facing the same issue in a Kubernetes cluster where Kibana was running behind an nginx proxy.

DNS: https://my-url/dashboard 

Nginx conf:

     location /dashboard/ {
          rewrite ^/dashboard/(.*) /$1 break;
          proxy_pass http://kibana:5601/;
     }
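The rewrite line strips the /dashboard prefix before proxy_pass forwards the request upstream. Its effect can be sanity-checked outside nginx with sed — a rough sketch, since nginx's regex engine differs slightly, but the substitution is the same:

```shell
# Simulate `rewrite ^/dashboard/(.*) /$1 break;` on a sample URI
echo "/dashboard/app/kibana" | sed -E 's|^/dashboard/(.*)$|/\1|'
# → /app/kibana (the prefix is gone before the request reaches Kibana)
```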

After adding the parameters suggested above by Kok How teh to kibana.yml:

    kibana.yml:
      server.name: kibana
      server.host: "0"
      elasticsearch.url: http://elasticsearch:9200
      server.basePath: "/dashboard"   # this line

I was able to resolve the redirection issue.
