
Error 503 Backend fetch failed Guru Meditation: XID: 45654 Varnish cache server

I have created a Helm chart for a Varnish cache server running in a Kubernetes cluster. While testing against the external IP generated by the service, it throws the error above.

Sharing varnish.vcl, values.yaml, and deployment.yaml below. Any suggestions on how to resolve this? I have hardcoded the backend/web server as .host = "www.varnish-cache.org" with port "80". My requirement is that on executing curl -IL I should get a response with cached values, not the error described above (straight from the backend server).

Any solutions/approaches would be welcome.

varnish.vcl:

# VCL version 5.0 is not supported, so this must be 4.0 or 4.1 even though the Varnish version actually in use is 6
vcl 4.1;

import std;
# The minimal Varnish version is 5.0
# For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https'



{{  .Values.varnishconfigData | indent 2 }}

sub vcl_recv {
#  set req.backend_hint = default;
 # unset req.http.cookie;

  if(req.url == "/healthcheck") {
    return(synth(200,"OK"));
  }
if(req.url == "/index.html") {
    return(synth(200,"OK"));
  }
}


probe index {
  .url = "/index.html";
  .timeout = 60ms;
  .interval = 2s;
  .window = 5;
  .threshold = 3;
}


backend website {
  .host = "www.varnish-cache.org";
  .port = "80";
  .probe = index;
  #.probe = {
   # .url = "/favicon.ico";
    #.timeout = 60ms;
    #.interval = 2s;
    #.window = 5;
    #.threshold = 3;
 # }
}


sub vcl_recv {
  if (req.url ~ "/index.html") {
    set req.backend_hint = website;
  } else {
    set req.backend_hint = default;
  }
}




#DAEMON_OPTS="-a :80 \
#-T localhost:6082 \
#-f /etc/varnish/default.vcl \
#-S /etc/varnish/secret \
#-s malloc,256m"
#-p http_resp_hdr_len=65536 \
#-p http_resp_size=98304 \





#sub vcl_recv {
 ##       # Remove the cookie header to enable caching
   #     unset req.http.cookie;
#}

#sub vcl_deliver {
 #    if (obj.hits > 0) {
  #       set resp.http.X-Cache = "HIT";
   #  } else {
    #     set resp.http.X-Cache = "MISS";
    # }
#}
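One thing worth noting: the first vcl_recv above returns synth(200) for /index.html, so that URL never reaches the cache or any backend. A sketch that lets /index.html flow through the cache, and that exposes hit/miss status by reusing the commented-out vcl_deliver logic above, could look like this (assuming the health check should remain synthetic):

```vcl
sub vcl_recv {
  if (req.url == "/healthcheck") {
    return (synth(200, "OK"));
  }
  # Do not synthesize a response for /index.html here;
  # let the request go through the cache to the backend.
}

sub vcl_deliver {
  # Expose whether the object came from cache, as in the
  # commented-out block above.
  if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT";
  } else {
    set resp.http.X-Cache = "MISS";
  }
}
```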

values.yaml:

# Default values for varnish.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: varnish
  tag: 6.3
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
 # type: ClusterIP
  type: LoadBalancer
  port: 80

varnishconfigData: |- 
      backend default {
         .host = "http://35.170.216.115/";
         .port = "80";
         .first_byte_timeout = 60s;
         .connect_timeout = 300s ;
         .probe = {
                .url = "/";
                .timeout = 1s;
                .interval = 5s;
                .window = 5;
                .threshold = 3;
           }
          }
         sub vcl_backend_response {
          set beresp.ttl = 5m;
         }





ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local


resources:
  limits:
    memory: 128Mi
  requests:
    memory: 64Mi




#resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
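Note that in VCL a backend's .host must be a bare hostname or IP address; a value like "http://35.170.216.115/" (with a scheme and trailing slash) will not resolve, the health probe then fails, and Varnish marks the backend unhealthy, which produces exactly this 503. A corrected varnishconfigData fragment might look like the following (a sketch, reusing the IP from the original file):

```yaml
varnishconfigData: |-
  backend default {
    .host = "35.170.216.115";   # bare IP, no scheme or path
    .port = "80";
    .first_byte_timeout = 60s;
    .connect_timeout = 5s;      # 300s is unusually long for a connect timeout
    .probe = {
      .url = "/";
      .timeout = 1s;
      .interval = 5s;
      .window = 5;
      .threshold = 3;
    }
  }

  sub vcl_backend_response {
    set beresp.ttl = 5m;
  }
```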

deployment.yaml:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ include "varnish.fullname" . }}
  labels:
    app: {{ include "varnish.name" . }}
    chart: {{ include "varnish.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ include "varnish.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ include "varnish.name" . }}
        release: {{ .Release.Name }}
    spec:
      volumes:
        - name: varnish-config
          configMap: 
             name: {{ include "varnish.fullname" . }}-varnish-config
             items: 
               - key: default.vcl
                 path: default.vcl

      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env: 
          - name: VARNISH_VCL
            value: /etc/varnish/default.vcl
          volumeMounts: 
           - name: varnish-config
             mountPath: /etc/varnish/

          ports:
            - name: http
              containerPort: 80
              protocol: TCP
              # targetPort belongs on the Service, not on a container port
          livenessProbe:
            httpGet:
              path: /healthcheck
             # port: http
              port: 80
            failureThreshold: 3
            initialDelaySeconds: 45
            timeoutSeconds: 10
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /healthcheck
              #port: http
              port: 80
            initialDelaySeconds: 10
            timeoutSeconds: 15
            periodSeconds: 5

          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }} 

I did check the Varnish logs: I executed varnishlog -c and got the following output.

*   << Request  >> 556807    
-   Begin          req 556806 rxreq
-   Timestamp      Start: 1584534974.251924 0.000000 0.000000
-   Timestamp      Req: 1584534974.251924 0.000000 0.000000
-   VCL_use        boot
-   ReqStart       100.115.128.0 26466 a0
-   ReqMethod      GET
-   ReqURL         /healthcheck
-   ReqProtocol    HTTP/1.1
-   ReqHeader      Host: 100.115.128.11:80
-   ReqHeader      User-Agent: kube-probe/1.14
-   ReqHeader      Accept-Encoding: gzip
-   ReqHeader      Connection: close
-   ReqHeader      X-Forwarded-For: 100.115.128.0
-   VCL_call       RECV
-   VCL_return     synth
-   VCL_call       HASH
-   VCL_return     lookup
-   Timestamp      Process: 1584534974.251966 0.000042 0.000042
-   RespHeader     Date: Wed, 18 Mar 2020 12:36:14 GMT
-   RespHeader     Server: Varnish
-   RespHeader     X-Varnish: 556807
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespReason     OK
-   VCL_call       SYNTH
-   RespHeader     Content-Type: text/html; charset=utf-8
-   RespHeader     Retry-After: 5
-   VCL_return     deliver
-   RespHeader     Content-Length: 229
-   Storage        malloc Transient
-   Filters        
-   RespHeader     Accept-Ranges: bytes
-   RespHeader     Connection: close
-   Timestamp      Resp: 1584534974.252121 0.000197 0.000155
-   ReqAcct        125 0 125 210 229 439
-   End      

I don't think this will work:

     .host = "www.varnish-cache.org";
     .host = "100.68.38.132"

It has two host declarations, and the second one is missing the ";". Please try changing it to:

     .host = "100.68.38.132";
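After fixing the backend definition, the backend's health state can be verified from inside the pod (a sketch; both tools ship with the standard Varnish image):

```shell
# List backends and their current probe health (should report healthy)
varnishadm backend.list

# Watch probe results as they arrive
varnishlog -g raw -i Backend_health
```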

Sharing the logs generated when running the command varnishlog -g request -q "ReqHeader:Host eq 'a2dc15095678711eaae260ae72bc140c-214951329.ap-southeast-1.elb.amazonaws.com'" -q "ReqUrl eq '/'" below; please look into it.

*   << Request  >> 1512355   
-   Begin          req 1512354 rxreq
-   Timestamp      Start: 1584707667.287292 0.000000 0.000000
-   Timestamp      Req: 1584707667.287292 0.000000 0.000000
-   VCL_use        boot
-   ReqStart       100.112.64.0 51532 a0
-   ReqMethod      GET
-   ReqURL         /
-   ReqProtocol    HTTP/1.1
-   ReqHeader      Host: 52.220.214.66
-   ReqHeader      User-Agent: Mozilla/5.0 zgrab/0.x
-   ReqHeader      Accept: */*
-   ReqHeader      Accept-Encoding: gzip
-   ReqHeader      X-Forwarded-For: 100.112.64.0
-   VCL_call       RECV
-   ReqUnset       Host: 52.220.214.66
-   ReqHeader      host: 52.220.214.66
-   VCL_return     hash
-   VCL_call       HASH
-   VCL_return     lookup
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 1512356 fetch
-   Timestamp      Fetch: 1584707667.287521 0.000228 0.000228
-   RespProtocol   HTTP/1.1
-   RespStatus     503
-   RespReason     Backend fetch failed
-   RespHeader     Date: Fri, 20 Mar 2020 12:34:27 GMT
-   RespHeader     Server: Varnish
-   RespHeader     Content-Type: text/html; charset=utf-8
-   RespHeader     Retry-After: 5
-   RespHeader     X-Varnish: 1512355
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish (Varnish/6.3)
-   VCL_call       DELIVER
-   RespHeader     X-Cache: uncached
-   VCL_return     deliver
-   Timestamp      Process: 1584707667.287542 0.000250 0.000021
-   Filters        
-   RespHeader     Content-Length: 284
-   RespHeader     Connection: keep-alive
-   Timestamp      Resp: 1584707667.287591 0.000299 0.000048
-   ReqAcct        110 0 110 271 284 555
-   End            
**  << BeReq    >> 1512356   
--  Begin          bereq 1512355 fetch
--  VCL_use        boot
--  Timestamp      Start: 1584707667.287401 0.000000 0.000000
--  BereqMethod    GET
--  BereqURL       /
--  BereqProtocol  HTTP/1.1
--  BereqHeader    User-Agent: Mozilla/5.0 zgrab/0.x
--  BereqHeader    Accept: */*
--  BereqHeader    Accept-Encoding: gzip
--  BereqHeader    X-Forwarded-For: 100.112.64.0
--  BereqHeader    host: 52.220.214.66
--  BereqHeader    X-Varnish: 1512356
--  VCL_call       BACKEND_FETCH
--  VCL_return     fetch
--  FetchError     backend default: unhealthy
--  Timestamp      Beresp: 1584707667.287429 0.000028 0.000028
--  Timestamp      Error: 1584707667.287432 0.000031 0.000002
--  BerespProtocol HTTP/1.1
--  BerespStatus   503
--  BerespReason   Service Unavailable
--  BerespReason   Backend fetch failed
--  BerespHeader   Date: Fri, 20 Mar 2020 12:34:27 GMT
--  BerespHeader   Server: Varnish
--  VCL_call       BACKEND_ERROR
--  BerespHeader   Content-Type: text/html; charset=utf-8
--  BerespHeader   Retry-After: 5
--  VCL_return     deliver
--  Storage        malloc Transient
--  Length         284
--  BereqAcct      0 0 0 0 0 0
--  End 
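The key line in this trace is `FetchError backend default: unhealthy`: Varnish never attempted the fetch, because the health probe against the default backend is failing, and answered 503 from the backend-error path instead. The probe in values.yaml behaves like this (same values as in the file above):

```vcl
# The probe polls "/" every 5 seconds; the backend must answer within
# 1 second for at least 3 of the last 5 polls, or Varnish marks it
# unhealthy and every cache miss returns 503 Backend fetch failed.
.probe = {
  .url = "/";
  .timeout = 1s;
  .interval = 5s;
  .window = 5;
  .threshold = 3;
}
```

So until the probe target is reachable and returns 200 (see the .host issue above), every miss will produce this 503 regardless of the rest of the configuration.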
