I have Pgpool-II running in a container (managed by systemd inside the container) and have given the container the following config in the Kubernetes deployment -
Mount paths -
- name: cgroup
  mountPath: /sys/fs/cgroup:ro
- name: var-run
  mountPath: /run
And the volumes backing those mount paths are defined as below -
- name: cgroup
  hostPath:
    path: /sys/fs/cgroup
    type: Directory
- name: var-run
  emptyDir:
    medium: Memory
Also, in the Kubernetes deployment I have passed -
securityContext:
  privileged: true
But when I exec into the pod to check the pgpool status, I see the issue below -
[root@app-pg-6448dfb58d-vzk67 /]# journalctl -xeu pgpool
-- Logs begin at Sat 2020-07-04 16:28:41 UTC, end at Sat 2020-07-04 16:29:13 UTC. --
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: Started Pgpool-II.
-- Subject: Unit pgpool.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit pgpool.service has finished starting up.
--
-- The start-up result is done.
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: [1-1] 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "statement_level_load_balance"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "statement_level_load_balance"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "auto_failback"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "auto_failback_interval"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "enable_consensus_with_half_votes"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "enable_shared_relcache"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "relcache_query_target"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: FATAL: could not open pid file as /var/run/pgpool-II-11/pgpool.pid. reason: No such file or directory
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: pgpool.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: Unit pgpool.service entered failed state.
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: pgpool.service failed.
systemctl status pgpool inside the pod container -
➜ app-app kubectl exec -it app-pg-6448dfb58d-vzk67 -- bash
[root@app-pg-6448dfb58d-vzk67 /]# systemctl status pgpool
● pgpool.service - Pgpool-II
   Loaded: loaded (/usr/lib/systemd/system/pgpool.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2020-07-04 16:28:41 UTC; 1h 39min ago
  Process: 34 ExecStart=/usr/bin/pgpool -f /etc/pgpool-II/pgpool.conf $OPTS (code=exited, status=3)
 Main PID: 34 (code=exited, status=3)
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "stat...lance"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "auto...lback"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "auto...erval"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "enab...votes"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "enab...cache"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: INFO: unrecognized configuration parameter "relc...arget"
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 pgpool[34]: 2020-07-04 16:28:41: pid 34: FATAL: could not open pid file as /var/run/pgpoo...ectory
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: pgpool.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: Unit pgpool.service entered failed state.
Jul 04 16:28:41 app-pg-6448dfb58d-vzk67 systemd[1]: pgpool.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
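From the FATAL line, it looks like the real failure is that /var/run/pgpool-II-11 does not exist: /run is mounted as a fresh tmpfs (emptyDir with medium: Memory), so any directory the package normally creates there is gone when the pod starts. As a sketch of one possible workaround (the drop-in file name and the postgres user/group are my assumptions, not something from my setup), a systemd drop-in could pre-create the directory before pgpool starts:

```ini
# Hypothetical drop-in: /etc/systemd/system/pgpool.service.d/pidfile.conf
# Pre-create the pid-file directory on every start, because /run is an
# empty in-memory tmpfs (emptyDir, medium: Memory) when the pod comes up.
[Service]
ExecStartPre=/usr/bin/mkdir -p /var/run/pgpool-II-11
ExecStartPre=/usr/bin/chown postgres:postgres /var/run/pgpool-II-11
```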
If required, this is the whole deployment sample -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-pg
  labels:
    helm.sh/chart: app-pgpool-1.0.0
    app.kubernetes.io/name: app-pgpool
    app.kubernetes.io/instance: app-service
    app.kubernetes.io/version: "1.0.3"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: app-pgpool
      app.kubernetes.io/instance: app-service
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-pgpool
        app.kubernetes.io/instance: app-service
    spec:
      volumes:
        - name: "pgpool-config"
          persistentVolumeClaim:
            claimName: "pgpool-pvc"
        - name: cgroup
          hostPath:
            path: /sys/fs/cgroup
            type: Directory
        - name: var-run
          emptyDir:
            # tmpfs needed for systemd
            medium: Memory
      # volumes:
      #   - name: pgpool-config
      #     configMap:
      #       name: pgpool-config
      #   - name: pgpool-config
      #     azureFile:
      #       secretName: azure-fileshare-secret
      #       shareName: pgpool
      #       readOnly: false
      imagePullSecrets:
        - name: app-secret
      serviceAccountName: app-pg
      securityContext:
        {}
      containers:
        - name: app-pgpool
          image: "appacr.azurecr.io/pgpool:1.0.3"
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          stdin: true
          tty: true
          ports:
            - name: http
              containerPort: 9999
              protocol: TCP
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          resources:
            {}
          volumeMounts:
            - name: "pgpool-config"
              mountPath: /etc/pgpool-II
            - name: cgroup
              mountPath: /sys/fs/cgroup:ro
            - name: var-run
              mountPath: /run
UPDATE -
Running this same setup with docker-compose works perfectly, no issues at all -
version: '2'
services:
  pgpool:
    container_name: pgpool
    image: appacr.azurecr.io/pgpool:1.0.3
    logging:
      options:
        max-size: 100m
    ports:
      - "9999:9999"
    networks:
      vpcbr:
        ipv4_address: 10.5.0.2
    restart: unless-stopped
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
      - $HOME/Documents/app/docker-compose/pgpool.conf:/etc/pgpool-II/pgpool.conf
      - $HOME/Documents/app/docker-compose/pool_passwd:/etc/pgpool-II/pool_passwd
    privileged: true
    stdin_open: true
    tty: true
I don't know what I am doing wrong. I am not able to start pgpool at all and cannot pinpoint the issue. What permission are we missing here, or is cgroups the culprit?
Some direction would be appreciated.
While this might not be a direct answer to your question, I have seen some very cryptic errors when trying to run any PostgreSQL product from raw manifests. My recommendation would be to try leveraging the chart from Bitnami; they have put a lot of effort into ensuring that all of the security/permission culprits are taken care of properly.
https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha
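For example, a minimal install of that chart could look like the following (the release name and password value are placeholders, not something from your setup):

```shell
# Add the Bitnami repo and install the postgresql-ha chart, which
# bundles Pgpool-II as a regular process, with no systemd needed
# inside the container.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-release bitnami/postgresql-ha \
  --set postgresql.password=secretpassword
```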
Hopefully this helps.
Also, if you do not want to use Helm, you can run the helm template command:
https://helm.sh/docs/helm/helm_template/
This will generate manifests from the chart's template files, based on the provided values.yaml.
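A minimal sketch of that workflow (release name and file names are placeholders):

```shell
# Render the chart locally into plain manifests; nothing is installed
# by helm itself, so the output can be reviewed and applied manually.
helm template my-release bitnami/postgresql-ha \
  --values values.yaml > manifests.yaml
kubectl apply -f manifests.yaml
```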