Unable to access workload at the IP address of the Kubernetes Service
I have created a Service of type LoadBalancer for my K8s workload and exposed the workload as a service; however, I am unable to access the resource at the IP address of the service:

35.193.34.113:80

My host port is 80 and my target port is 9000.
The following is the YAML configuration of my service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
  creationTimestamp: "2022-09-18T06:15:14Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app: food-for-worms
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:allocateLoadBalancerNodePorts: {}
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2022-09-18T06:15:14Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"service.kubernetes.io/load-balancer-cleanup": {}
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-09-18T06:15:49Z"
  name: food-for-worms-service
  namespace: default
  resourceVersion: "64162"
  uid: 2d541e31-0415-4583-a89f-7021d5984b50
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.44.5.70
  clusterIPs:
  - 10.44.5.70
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 31331
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: food-for-worms
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.193.34.113
The following is the YAML configuration of my workload:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2022-09-18T06:13:19Z"
  generation: 2
  labels:
    app: food-for-worms
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"node-app-1"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":9000,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:protocol: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: GoogleCloudConsole
    operation: Update
    time: "2022-09-19T06:26:34Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2022-09-19T06:26:38Z"
  name: food-for-worms
  namespace: default
  resourceVersion: "652865"
  uid: 4e085d08-433c-468b-8a4c-c11326594a2e
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: food-for-worms
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: food-for-worms
    spec:
      containers:
      - image: gcr.io/k8s-networking-test/node-app:v1.0
        imagePullPolicy: IfNotPresent
        name: node-app-1
        ports:
        - containerPort: 9000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2022-09-18T06:13:20Z"
    lastUpdateTime: "2022-09-18T06:13:20Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-09-18T06:13:19Z"
    lastUpdateTime: "2022-09-19T06:26:38Z"
    message: ReplicaSet "food-for-worms-76db78f674" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
The following is the output of kubectl describe service food-for-worms-service:
Name:                     food-for-worms-service
Namespace:                default
Labels:                   app=food-for-worms
Annotations:              cloud.google.com/neg: {"ingress":true}
Selector:                 app=food-for-worms
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.44.5.70
IPs:                      10.44.5.70
LoadBalancer Ingress:     35.193.34.113
Port:                     <unset>  80/TCP
TargetPort:               9000/TCP
NodePort:                 <unset>  31331/TCP
Endpoints:                10.40.6.3:9000,10.40.7.2:9000,10.40.8.2:9000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
The following is my app.js, listening on port 9000:
const http = require('http');
const hostname = '127.0.0.1';
const port = 9000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Do not have an inflated sense of yourself. You are just food for worms.');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
The following is my Dockerfile:
FROM node:16-alpine3.11
EXPOSE 9000
COPY app.js /
When I follow the link to 35.193.34.113:80, I get a "Page can't be reached" error. What am I missing here, please?
The problem is that your app is listening on localhost, which works fine when you run the app directly on a host (like your laptop). But when you run it in a container, binding to localhost inside the container means the app is not reachable from outside the container itself. So you need to change (or omit) the hostname in your app.
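If you want to confirm this before changing any code, here is a quick check (a sketch: it assumes the busybox wget applet present in your alpine-based image, and the pod IP is taken from the Endpoints list in your describe output, so substitute whatever your cluster currently shows):

# Inside a pod of the deployment, the app answers on the loopback interface:
kubectl exec deploy/food-for-worms -- wget -qO- http://127.0.0.1:9000

# But via the pod IP that the Service endpoints target, it is unreachable:
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- -T 5 http://10.40.6.3:9000

The first command should print the response text; the second should time out, which matches the symptom you see at the load balancer IP.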
You can set the hostname to 0.0.0.0 so that your app will listen on all container addresses:
const http = require('http');
const hostname = '0.0.0.0';
const port = 9000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Do not have an inflated sense of yourself. You are just food for worms.');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
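Binding to 0.0.0.0 tells the server to accept connections on every network interface in the container, including the pod IP that your Service endpoints (10.40.6.3:9000 and the others listed above) point at.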
Alternatively, you can omit the hostname altogether, and your app will listen on all container addresses:
const http = require('http');
const port = 9000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Do not have an inflated sense of yourself. You are just food for worms.');
});

server.listen(port, () => {
  console.log(`HTTP server listening on port ${port}`);
});
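Either way, the fix only takes effect once you rebuild the image and roll it out. Roughly (v1.1 is just an example tag, any new tag works):

# Rebuild with the fixed app.js and push under a new tag:
docker build -t gcr.io/k8s-networking-test/node-app:v1.1 .
docker push gcr.io/k8s-networking-test/node-app:v1.1

# Point the deployment at the new tag. A new tag matters here because your
# imagePullPolicy is IfNotPresent, so re-pushing v1.0 would not be re-pulled:
kubectl set image deployment/food-for-worms node-app-1=gcr.io/k8s-networking-test/node-app:v1.1

# Then test the load balancer again:
curl http://35.193.34.113:80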