kubernetes pod port expose/forward

I'm trying to expose port 8080 on a pod so I can wget it directly from the server. With port-forward everything works fine ( kubectl --namespace jenkins port-forward pods/jenkins-6f8b486759-6vwkj 9000:8080 ); I'm able to connect to 127.0.0.1:9000.
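(Editor's note, not from the original question: kubectl port-forward binds to 127.0.0.1 by default; if the goal is to reach the forwarded port from other machines, the --address flag widens the bind, e.g.:

kubectl --namespace jenkins port-forward --address 0.0.0.0 pods/jenkins-6f8b486759-6vwkj 9000:8080

)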

But when I try to avoid port-forward and open the ports permanently ( kubectl expose deployment jenkins --type=LoadBalancer -n jenkins ), the service gets created and I can see it ( kubectl describe svc jenkins -n jenkins ):

Name:                     jenkins
Namespace:                jenkins
Labels:                   <none>
Annotations:              <none>
Selector:                 app=jenkins
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.111.244.192
IPs:                      10.111.244.192
Port:                     port-1  8080/TCP
TargetPort:               8080/TCP
NodePort:                 port-1  31461/TCP
Endpoints:                172.17.0.2:8080
Port:                     port-2  50000/TCP
TargetPort:               50000/TCP
NodePort:                 port-2  30578/TCP
Endpoints:                172.17.0.2:50000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

but the port is still not open; netstat does not show anything listening. How should this be done correctly?

I'm using minikube v1.20.0. The deployment YAML, just in case:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      securityContext: {}
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
          - name: http-port
            containerPort: 8080
            hostPort: 8080
          - name: jnlp-port
            containerPort: 50000
        volumeMounts:
          - name: task-pv-storage
            mountPath: /var/jenkins_home
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
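(For completeness: the Deployment above mounts a PersistentVolumeClaim named task-pv-claim that is not shown in the question. A minimal sketch of such a claim might look like this; the access mode and requested size are assumptions, and on minikube the default StorageClass provisions the volume automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  accessModes:
    - ReadWriteOnce   # assumption: single-node access is enough for minikube
  resources:
    requests:
      storage: 5Gi    # assumption: the size is not given in the question

)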

What is your environment? Are you running your local k8s cluster with Docker Desktop, minikube, or kubeadm?

Check the IPs assigned to your Pods with kubectl get pods -o=wide (note that these are cluster-internal Pod IPs, not external ones)

Load balancing is not supposed to be implemented on your single-node machine (with minikube); there is, however, something of a "hack"

If you deploy your cluster on a cloud provider, the LoadBalancer would be fully managed

For the "hack" I'm talking about, look at this tutorial video section where the Ingress component is explained: https://youtu.be/X48VuDVv0do?t=7312

You are expected to place a Pod running an nginx server in front of your Ingress, which sits in front of your load balancer, which sits in front of your Deployment's Pods, as sketched below
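(To make that concrete, here is a minimal sketch of an Ingress routing to the jenkins Service from the question; the hostname is hypothetical, and it assumes an ingress controller is running, e.g. via minikube addons enable ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
spec:
  rules:
  - host: jenkins.local          # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins        # the Service shown in the question
            port:
              number: 8080

Alternatively, minikube has a built-in workaround for LoadBalancer Services: running minikube tunnel in a separate terminal assigns them a reachable external IP.)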

I see that you are running your k8s cluster locally. In this case, the LoadBalancer Service type is not recommended, as it relies on a cloud provider's load balancer to expose services externally. You could use a self-hosted or hardware load balancer, but I suppose that's a bit of an overkill for a minikube cluster.

For your minikube deployment, I'd suggest using the NodePort Service type, as it uses the node's IP address to expose the service. Example YAML:

apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
spec:
  type: NodePort
  selector:
    app: jenkins
  ports:
    # when a Service has more than one port, each port must be named
    - name: http
      port: 8080
      targetPort: 8080
      # nodePort is optional; Kubernetes allocates one from the range
      # 30000-32767, but you can also choose one yourself
      nodePort: 30007
    - name: jnlp
      port: 50000
      targetPort: 50000
      nodePort: 30008

Then you can access your app at <NodeIP>:<nodePort>. If you want to read more about k8s Services, see the Kubernetes documentation on Services.
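(On minikube specifically, the full URL can also be printed directly, assuming the Service above is created in the jenkins namespace:

minikube service jenkins-service -n jenkins --url

)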

You exposed the application with a Service on port 8080, but that port is not reachable from outside the cluster, just like the IP addresses of the Service and the Pod.

The Service opened a NodePort that points at the Deployment's port:

[...]
NodePort:                 port-1  31461/TCP
[...]

Using curl against that node IP and port should work:

curl <cluster-node>:31461

The cluster node IP depends on how you have set up minikube.
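(With minikube, the node IP can be obtained with minikube ip, so the check, using the NodePort 31461 from the Service output above, would be:

curl $(minikube ip):31461

)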

The issue was with minikube itself. I found it while checking kubectl get events --all-namespaces: some strange things were happening, and it looks like the internal proxy component was damaged.
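(For anyone debugging something similar: the cluster's own components run in the kube-system namespace, so a quick, generic health check, not from the original post, is:

kubectl -n kube-system get pods

A crashed or constantly restarting kube-proxy Pod there would explain NodePorts not being routed.)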
