SolrCloud Persistent Volume Permission Problems Using Kubernetes
Per Taking Solr To Production ( https://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html ): "Running Solr as root is not recommended for security reasons, and the control script start command will refuse to do so."
The persistent volume was provisioned successfully. However, when we claim it and mount it into the folder structure of our pod, the mounted folder is writable only by root. As a result, the SolrCloud microservices can store neither their configuration files nor their core/collection data and backups on the persistent volume.
How should we address this permissions issue in Kubernetes, given that Solr's start script refuses to run as root?
Here is an excerpt from the running pod after mounting, showing the permissions problem (root ownership of the data folder):
Here is the Kubernetes server version information:
C:\Users\xxxx>kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.8+coreos.0", GitCommit:"fc34f797fe56c4ab78bdacc29f89a33ad8662f8c", GitTreeState:"clean", BuildDate:"2017-08-05T00:01:34Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
Please see the YAML, Dockerfile, and start script below.
YAML file:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
data:
  config-env: dev
  zookeeper-hosts: xxxx.com:2181
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "solrclouddemo1"
      version: "1.0.0"
  template:
    metadata:
      labels:
        app: "solrclouddemo1"
        version: "1.0.0"
        build: "252"
        developer: "XXX"
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8000'
    spec:
      serviceAccount: "default"
      containers:
      - env:
        - name: ENV
          valueFrom:
            configMapKeyRef:
              key: config-env
              name: "solrclouddemo1"
        - name: ZK_HOST
          valueFrom:
            configMapKeyRef:
              key: zookeeper-hosts
              name: "solrclouddemo1"
        - name: java_runtime_arguments
          value: ""
        image: "xxx.com:5100/com.xxx.cppseed/solrclouddemo1:1.0.0"
        imagePullPolicy: Always
        name: "solrclouddemo1"
        ports:
        - name: http
          containerPort: 8983
          protocol: TCP
        resources:
          requests:
            memory: "600Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
  labels:
    app: "solrclouddemo1"
    version: "1.0.0"
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8983
  selector:
    app: "solrclouddemo1"
    version: "1.0.0"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
spec:
  selector:
    matchLabels:
      app: "solrclouddemo1"
      version: "1.0.0"
  minAvailable: 1
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: "solrclouddemo1"
  namespace: "com-xxx-cppseed-dev"
spec:
  selector:
    matchLabels:
      app: "solrclouddemo1"
  serviceName: "solrclouddemo1"
  replicas: 1
  template:
    metadata:
      labels:
        app: "solrclouddemo1"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - "solrclouddemo1"
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: "solrclouddemo1"
        command:
        - "/bin/bash"
        - "-c"
        - "/opt/docker-solr/scripts/startService.sh"
        imagePullPolicy: Always
        image: "xxx.com:5100/com.xxx.cppseed/solrclouddemo1:1.0.0"
        resources:
          requests:
            memory: "600Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        ports:
        - containerPort: 8983
          name: http
        volumeMounts:
        - name: datadir
          mountPath: /opt/solr/server/data
      securityContext:
        runAsUser: 8983
        fsGroup: 8983
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      selector:
        matchLabels:
          app: cppseed-solr
Dockerfile:
FROM xxx.com:5100/com.xxx.public/solr:7.0.0
LABEL maintainer="xxx.com"
ENV SOLR_USER="solr" \
SOLR_GROUP="solr"
# AAF Authentication
ADD aaf/config/ /opt/solr/server/etc/
ADD aaf/etc/ /opt/solr/server/etc/
ADD aaf/jars/ /opt/solr/server/lib/
ADD aaf/security/ /opt/solr/
# Entrypoint
ADD docker/startService.sh /opt/docker-solr/scripts/
# Monitoring
VOLUME /etc
#ADD monitoring/monitoring.jar /monitoring.jar
ADD /etc/ /etc/
# Permissions
USER root
RUN apt-get install sudo -y && \
chown -R $SOLR_USER:$SOLR_GROUP /opt/solr && \
chown -R $SOLR_USER:$SOLR_GROUP /opt/docker-solr/scripts/ && \
chmod 777 /opt/docker-solr/scripts/startService.sh
# && \ chmod 777 /monitoring.jar
WORKDIR /opt/solr
ENTRYPOINT ["startService.sh"]
startService.sh:
#!/bin/bash
#
# docker-entrypoint for docker-solr

# Fail immediately if anything has a non-zero result status
set -e

# Optionally echo commands before running them for debugging.
if [[ "$VERBOSE" = "yes" ]]; then
  set -x
fi

# Execute the command passed in as arguments.
# The Dockerfile has specified the PATH to include /opt/solr/bin (for Solr)
# and /opt/docker-solr/scripts (for our scripts like solr-foreground,
# solr-create, solr-precreate, solr-demo).
# Note: if you specify "solr", you'll typically want to add -f to run it in
# the foreground.
echo "Invoking solr-foreground"

# Allow clients to pass in java_runtime_arguments to tune the Solr runtime.
if [[ -z "${java_runtime_arguments}" ]]; then
  echo "No java_runtime_arguments received, so using default values"
  exec solr-foreground -c -noprompt "$@"
else
  echo "Received custom java_runtime_arguments. The user is responsible for prefixing each value with -a so that SolrCloud accepts it, and for supplying the -a -javaagent:/monitoring.jar=8000-/etc/config/prometheus_jmx_config.yaml-/etc/config/prometheus_application_config.yaml-/metrics arguments used for Prometheus monitoring"
  exec solr-foreground -c -noprompt $java_runtime_arguments "$@"
fi
Workaround: Use initContainers
# Before the app container starts, this initContainer changes ownership
# of the mounted volume:
initContainers:
- name: volume-mount-hack
  image: busybox
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 600Mi
  command:
  - /bin/sh
  - -c
  # busybox has no "solr" user, so use the numeric UID/GID of the solr
  # user in the Solr image (8983)
  - "chown -R 8983:8983 /opt/solr/server/data"
  volumeMounts:
  - name: datadir
    mountPath: /opt/solr/server/data
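An alternative worth considering (assuming your volume plugin supports ownership management) is to skip the chown initContainer entirely and let Kubernetes set group ownership on mount via a pod-level securityContext. This is a minimal sketch; 8983 is the UID/GID of the solr user in the official image:

```yaml
# Sketch: pod-level securityContext instead of an initContainer.
# fsGroup only takes effect for volume types that support
# ownership management (e.g. most dynamically provisioned PVs).
spec:
  securityContext:
    runAsUser: 8983   # run the container process as the "solr" UID
    fsGroup: 8983     # volume files become group-owned by GID 8983
```

With fsGroup set, mounted volume contents are made group-writable for GID 8983, so Solr can write without ever running as root.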
Make sure to use the same volumeMounts details in the container spec, along with runAsUser:
containers:
- name: "${APP_NAME}"
  imagePullPolicy: Always
  image: "${IMAGE_NAME}"
  env:
  - name: ENV
    valueFrom:
      configMapKeyRef:
        key: config-env
        name: "${APP_NAME}"
  - name: ZK_HOST
    valueFrom:
      configMapKeyRef:
        key: zookeeper-hosts
        name: "${APP_NAME}"
  - name: ZK_CLIENT_TIMEOUT
    value: "30000"
  - name: java_runtime_arguments
    value: "${JAVA_RUNTIME_ARGUMENTS}"
  command:
  - "/bin/bash"
  - "-c"
  - "/opt/docker-solr/scripts/startService.sh"
  resources:
    requests:
      memory: "600Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  ports:
  - containerPort: 8983
    name: http
  volumeMounts:
  - name: datadir
    mountPath: /opt/solr/server/data
  securityContext:
    runAsUser: 8983
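To confirm the fix took effect, you can check ownership and writability of the data directory from inside the running pod (for example via `kubectl exec` against /opt/solr/server/data). The sketch below is illustrative: pass the mounted path as the first argument; without one it falls back to a temporary directory so it runs anywhere.

```shell
#!/bin/sh
# Sketch: report owner UID and writability of a data directory.
# Usage: check-perms.sh [/opt/solr/server/data]
DATA_DIR="${1:-$(mktemp -d)}"

echo "Directory: $DATA_DIR"
echo "Owner UID: $(stat -c '%u' "$DATA_DIR"), process UID: $(id -u)"

if [ -w "$DATA_DIR" ]; then
  echo "writable: yes"   # what you want after the chown/fsGroup fix
else
  echo "writable: no"    # the original symptom: root-owned, not writable
fi
```

If the output still shows owner UID 0 and "writable: no", the initContainer did not run against the same volumeMounts entry as the app container.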