
Pgpool fails to start on Kubernetes as a pod

I have pgpool running in a container, and this is the container config for the Kubernetes deployment -

Mount paths -

- name: cgroup
  mountPath: /sys/fs/cgroup:ro
- name: var-run
  mountPath: /run

And the volumes backing those mount paths, including the cgroup one, are defined as below -

- name: cgroup
  hostPath:
    path: /sys/fs/cgroup
    type: Directory
- name: var-run
  emptyDir:
    medium: Memory

Also, in the Kubernetes deployment I have passed -

 securityContext:
    privileged: true

But when I exec into the pod to check the pgpool status, I get the below issue -

[root@app-pg-6448dfb58d-vzk67 /]# journalctl -xeu pgpool
-- Logs begin at Sat 2020-07-04 16:28:41 UTC, end at Sat 2020-07-04 16:29:13 UTC. --
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: Started Pgpool-II.
-- Subject: Unit pgpool.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit pgpool.service has finished starting up.
-- 
-- The start-up result is done.
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: [1-1] 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "statement_level_load_balance"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "statement_level_load_balance"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "auto_failback"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "auto_failback_interval"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "enable_consensus_with_half_votes"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "enable_shared_relcache"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "relcache_query_target"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: FATAL:  could not open pid file as /var/run/pgpool-II-11/pgpool.pid. reason: No such file or directory
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: pgpool.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: Unit pgpool.service entered failed state.
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: pgpool.service failed.

systemctl status pgpool inside the pod container -

➜  app-app kubectl exec -it app-pg-6448dfb58d-vzk67  -- bash
[root@app-pg-6448dfb58d-vzk67 /]# systemctl status pgpool
● pgpool.service - Pgpool-II
   Loaded: loaded (/usr/lib/systemd/system/pgpool.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2020-07-04 16:28:41 UTC; 1h 39min ago
  Process: 34 ExecStart=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf $OPTS (code=exited, status=3)
 Main PID: 34 (code=exited, status=3)

Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "stat...lance"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "auto...lback"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "auto...erval"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "enab...votes"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "enab...cache"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO:  unrecognized configuration parameter "relc...arget"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: FATAL:  could not open pid file as /var/run/pgpoo...ectory
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: pgpool.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: Unit pgpool.service entered failed state.
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: pgpool.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

If required, this is the whole deployment manifest -

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-pg
  labels:
    helm.sh/chart: app-pgpool-1.0.0
    app.kubernetes.io/name: app-pgpool
    app.kubernetes.io/instance: app-service
    app.kubernetes.io/version: "1.0.3"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: app-pgpool
      app.kubernetes.io/instance: app-service
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-pgpool
        app.kubernetes.io/instance: app-service
    spec:
      volumes:
        - name: "pgpool-config"
          persistentVolumeClaim:
            claimName: "pgpool-pvc"
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
            type: Directory
        - name: var-run
          emptyDir:
            # Tmpfs needed for systemd.
            medium: Memory
      # volumes:
      #   - name: pgpool-config
      #     configMap:
      #       name: pgpool-config
      # - name: pgpool-config
      #   azureFile:
      #     secretName: azure-fileshare-secret
      #     shareName: pgpool
      #     readOnly: false
      imagePullSecrets:
        - name: app-secret
      serviceAccountName: app-pg
      securityContext:
        {}
      containers:
        - name: app-pgpool
          securityContext:
            privileged: true
          image: "appacr.azurecr.io/pgpool:1.0.3"
          imagePullPolicy: IfNotPresent
          stdin: true
          tty: true
          ports:
            - name: http
              containerPort: 9999
              protocol: TCP
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          resources:
            {}
          volumeMounts:
            - name: "pgpool-config"
              mountPath: /etc/pgpool-II
            - name: cgroup
              mountPath: /sys/fs/cgroup:ro
            - name: var-run
              mountPath: /run
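Note that a Kubernetes volumeMounts entry does not parse a :ro suffix the way docker-compose does; read-only mounts are expressed through the separate readOnly field instead. A sketch of how the cgroup mount above would normally be written:

```yaml
volumeMounts:
  - name: cgroup
    mountPath: /sys/fs/cgroup
    readOnly: true
```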

UPDATE -

Running this same setup with docker-compose works perfectly, no issues at all -

version: '2'
services:

  pgpool:
    container_name: pgpool
    image: appacr.azurecr.io/pgpool:1.0.3
    logging:
      options:
        max-size: 100m
    ports:
      - "9999:9999"
    networks:
      vpcbr:
        ipv4_address: 10.5.0.2
    restart: unless-stopped
    volumes:
     - /sys/fs/cgroup:/sys/fs/cgroup:ro
     - $HOME/Documents/app/docker-compose/pgpool.conf:/etc/pgpool-II/pgpool.conf
     - $HOME/Documents/app/docker-compose/pool_passwd:/etc/pgpool-II/pool_passwd
    privileged: true
    stdin_open: true
    tty: true

I don't know what I am doing wrong; I am not able to start pgpool at all and cannot pinpoint the issue. What permission are we missing here? Is cgroups the culprit, or not?

Some direction would be appreciated.

While this might not be a direct answer to your question, I have seen some very cryptic errors when trying to run any PostgreSQL product from raw manifests. My recommendation would be to try leveraging the chart from Bitnami; they have put a lot of effort into ensuring that all of the security/permission pitfalls are taken care of properly.

https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha

Hopefully, this helps.
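As for the FATAL error itself: /var/run is a symlink to /run, and the emptyDir mounted at /run starts out empty, so the /var/run/pgpool-II-11 directory that the pgpool package pre-creates at install time no longer exists when the service starts. Since systemd is PID 1 in this image, one possible fix is a systemd-tmpfiles drop-in baked into the image that recreates the directory on every boot. A sketch, where the file path and the owner/group are assumptions (match the user pgpool actually runs as):

```
# Hypothetical drop-in, e.g. /etc/tmpfiles.d/pgpool.conf in the image.
# Recreates the runtime directory on every boot, since /run is a tmpfs.
d /run/pgpool-II-11 0755 postgres postgres -
```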

Also, if you do not want to use Helm to install, you can run the helm template command -

https://helm.sh/docs/helm/helm_template/

This will generate plain manifests from the chart's template files based on the provided values.yaml.
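For reference, a sketch of such an invocation (the repo URL is Bitnami's public chart repo; the release name and values file are examples, not taken from the question):

```shell
# Render the Bitnami postgresql-ha chart to plain manifests
# without installing anything into the cluster.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm template app-pg bitnami/postgresql-ha -f values.yaml > manifests.yaml
```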
