I have a simple k8s deployment.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test-container
        image: centos:7
        command: ["/bin/sh"]
        args: ["-c", "tail -f /dev/null"]
```
This creates pods that look like this:

```
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-59bb6b8b4d-pnfg8   1/1     Running   0          11m
test-deployment-59bb6b8b4d-s7w8x   1/1     Running   0          11m
test-deployment-59bb6b8b4d-wvw7n   1/1     Running   0          11m
```
By default, the hostnames corresponding to each of these match the pod names.
```
>> kubectl exec test-deployment-59bb6b8b4d-pnfg8 -c test-container -- env | grep HOSTNAME
HOSTNAME=test-deployment-59bb6b8b4d-pnfg8
>> kubectl exec test-deployment-59bb6b8b4d-s7w8x -c test-container -- env | grep HOSTNAME
HOSTNAME=test-deployment-59bb6b8b4d-s7w8x
>> kubectl exec test-deployment-59bb6b8b4d-wvw7n -c test-container -- env | grep HOSTNAME
HOSTNAME=test-deployment-59bb6b8b4d-wvw7n
```
Here's my question: is there a way I could pre-configure the hostnames so that they look something like this?
```
>> kubectl exec test-deployment-59bb6b8b4d-pnfg8 -c test-container -- env | grep HOSTNAME
HOSTNAME=test-deployment-pod1
>> kubectl exec test-deployment-59bb6b8b4d-s7w8x -c test-container -- env | grep HOSTNAME
HOSTNAME=test-deployment-pod2
>> kubectl exec test-deployment-59bb6b8b4d-wvw7n -c test-container -- env | grep HOSTNAME
HOSTNAME=test-deployment-pod3
```
The expectation is also that when a pod dies and is replaced, the new pod binds to the same hostname that the old one was mapped to.
Thanks in advance!
Use a StatefulSet instead of a Deployment. StatefulSet pods get stable, predictable names with ordinal suffixes (e.g. `test-deployment-0`, `test-deployment-1`, `test-deployment-2`), the hostname matches the pod name, and when a pod is deleted its replacement is recreated with the same name and hostname.
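As a sketch, your Deployment translated into a StatefulSet could look like the manifest below. Note that a StatefulSet requires a `serviceName` pointing at a (usually headless) Service that governs its network identity; the Service named `test` here is an assumption and is included for completeness:

```yaml
# Headless Service governing the StatefulSet's network identity
# (name "test" is an assumption; any name works as long as
# serviceName below matches it).
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None       # headless: no load-balanced virtual IP
  selector:
    app: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  serviceName: test     # must match the headless Service above
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test-container
        image: centos:7
        command: ["/bin/sh"]
        args: ["-c", "tail -f /dev/null"]
```

With this, the pods are named `test-deployment-0` through `test-deployment-2`, `HOSTNAME` inside each container matches the pod name, and each pod is also reachable at a stable DNS name of the form `test-deployment-0.test.<namespace>.svc.cluster.local`.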