Testing a k8s distributed system locally
I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawned for each user.

The main services are two Python applications, MothershipService and Ship. MothershipService's purpose is to keep track of ship-per-user, do health checks, etc. Ship is running some (untrusted) user code.
MothershipService         Ship-user1
|              |----------|          |---vol1
|..............|-----.    |----------|
                      \
                       \   Ship-user2
                        '-|          |---vol2
                          |----------|
I can manage fine to get the ship service up:
> kubectl get all -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
pod/ship-0   1/1     Running   0          7d    10.244.0.91   minikube   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/ship         ClusterIP   None         <none>        8000/TCP   7d    app=ship
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP    7d    <none>

NAME                    READY   AGE   CONTAINERS   IMAGES
statefulset.apps/ship   1/1     7d    ship         ship
My question is: how do I go about testing this via curl or a browser? These are all backend services, so NodePort seems like the wrong approach, since none of this should be accessible to the public. Eventually I will build a test suite for all of this and deploy it on GKE.
ship.yml (pseudo-spec)
kind: Service
metadata:
  name: ship
spec:
  ports:
  - port: 8000
    name: ship
  clusterIP: None # headless service
  ..
---
kind: StatefulSet
metadata:
  name: ship
spec:
  serviceName: "ship"
  replicas: 1
  template:
    spec:
      containers:
      - name: ship
        image: ship
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          name: ship
  ..
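Because ship is a headless service backed by a StatefulSet, each pod also gets a stable in-cluster DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, which is how MothershipService can address a specific user's ship. A quick sketch of how that name is composed (the "default" namespace is an assumption here):

```shell
# Each StatefulSet pod is reachable inside the cluster under a stable
# DNS name: <pod>.<service>.<namespace>.svc.cluster.local
# (namespace "default" is an assumption).
pod=ship-0
svc=ship
ns=default
echo "${pod}.${svc}.${ns}.svc.cluster.local"
```

Note that this name only resolves from inside the cluster, which is part of why testing from a local browser needs one of the approaches below.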
One possibility is to use the kubectl port-forward command to expose the pod port locally on your system. For example, if I use this deployment to run a simple web server listening on port 8000:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - args:
        - --port
        - "8000"
        image: docker.io/alpinelinux/darkhttpd
        name: web
        ports:
        - containerPort: 8000
          name: http
I can expose that on my local system by running:
kubectl port-forward deploy/example 8000:8000
As long as that port-forward command is running, I can point my browser (or curl) at http://localhost:8000 to access the service.
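That manual check can be wrapped in a tiny retry helper, which is handy once this grows into a test suite (port-forward takes a moment to become ready). This is only a sketch: wait_for_http is a made-up name, and it assumes the port-forward above is already running in another terminal:

```shell
# Poll a URL until it answers, giving up after 5 attempts.
# Assumes `kubectl port-forward deploy/example 8000:8000` is running.
wait_for_http() {
  url=$1
  for _ in 1 2 3 4 5; do
    if curl -fsS -o /dev/null "$url"; then
      echo "OK: $url"
      return 0
    fi
    sleep 1
  done
  echo "FAIL: $url" >&2
  return 1
}

# Example (with the port-forward running in another terminal):
# wait_for_http http://localhost:8000
```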
Alternately, I can use kubectl exec to run commands (like curl or wget) inside the pod:
kubectl exec -it deploy/example -- wget -O- http://127.0.0.1:8000
Here is an example process for creating a Kubernetes Service object that exposes an external IP address:
**Creating a service for an application running in five pods:**

Run a Hello World application in your cluster:
kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080

(Note: on kubectl v1.18 and later, kubectl run no longer creates Deployments or accepts --replicas; use kubectl create deployment instead.)
The preceding command creates a Deployment object and an associated ReplicaSet object. The ReplicaSet has five Pods, each of which runs the Hello World application.
Display information about the Deployment:
kubectl get deployments hello-world
kubectl describe deployments hello-world
Display information about your ReplicaSet objects:
kubectl get replicasets
kubectl describe replicasets
Create a Service object that exposes the deployment:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
Display information about the Service:
kubectl get services my-service
The output is similar to this:
NAME         CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   10.3.245.137   104.198.205.71   8080/TCP   54s
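Instead of eyeballing that table, the external IP and port can also be read programmatically with -o jsonpath, which is useful in a test suite. A sketch (service_url is a hypothetical helper name; the jsonpath fields follow the Service object schema):

```shell
# Build the service URL from the LoadBalancer status and the port spec.
service_url() {
  svc=$1
  ip=$(kubectl get service "$svc" -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  port=$(kubectl get service "$svc" -o jsonpath='{.spec.ports[0].port}')
  echo "http://${ip}:${port}"
}

# Example: curl "$(service_url my-service)"
```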
Note: If the external IP address is shown as <pending>, wait for a minute and enter the same command again.
Display detailed information about the Service:
kubectl describe services my-service

The output is similar to this:

Name:                   my-service
Namespace:              default
Labels:                 run=load-balancer-example
Selector:               run=load-balancer-example
Type:                   LoadBalancer
IP:                     10.3.245.137
LoadBalancer Ingress:   104.198.205.71
Port:                   <unset> 8080/TCP
NodePort:               <unset> 32377/TCP
Endpoints:              10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
Session Affinity:       None
Events:

Make a note of the external IP address exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port; in this example, the port is 8080.
In the preceding output, you can see that the service has several endpoints: 10.0.0.6:8080, 10.0.1.6:8080, 10.0.1.7:8080 + 2 more. These are internal addresses of the pods that are running the Hello World application. To verify that these are pod addresses, enter this command:
kubectl get pods --output=wide
The output is similar to this:
NAME                           ...   IP         NODE
hello-world-2895499144-1jaz9   ...   10.0.1.6   gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-2e5uh   ...   10.0.1.8   gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-9m4h1   ...   10.0.0.6   gke-cluster-1-default-pool-e0b8d269-5v7a
hello-world-2895499144-o4z13   ...   10.0.1.7   gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-segjf   ...   10.0.2.5   gke-cluster-1-default-pool-e0b8d269-cpuc
Use the external IP address to access the Hello World application:
curl http://<external-ip>:<port>
where <external-ip> is the external IP address of your Service, and <port> is the value of Port in your Service description.
The response to a successful request is a hello message:
Hello Kubernetes!
Please refer to How to Use External IP in GKE and Exposing an External IP Address to Access an Application in a Cluster for more information.