I have a VM on Hyper-V running Kubernetes. I set up the istio-ingressgateway as you can see below.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tech-ingressgateway
  namespace: tech-ingress-ns
spec:
  selector:
    istio: ingressgateway # default istio ingressgateway defined in istio-system namespace
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
    hosts:
    - '*'
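For reference, the selector above must match the labels on the ingress gateway pods in istio-system; assuming a default Istio installation, this can be confirmed with:
kubectl get pods -n istio-system -l istio=ingressgateway --show-labels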
I open two ports: one for HTTP and the second for HTTPS. And I have two backend services, whose VirtualService definitions are:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: activemq-vs
  namespace: tech-ingress-ns
spec:
  hosts:
  - '*'
  gateways:
  - tech-ingressgateway
  http:
  - match:
    - uri:
        prefix: '/activemq'
    route:
    - destination:
        host: activemq-svc.tech-ns.svc.cluster.local
        port:
          number: 8161
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: userprofile-vs
  namespace: tech-ingress-ns
spec:
  hosts:
  - '*'
  gateways:
  - tech-ingressgateway
  http:
  - match:
    - uri:
        prefix: '/userprofile'
    route:
    - destination:
        host: userprofile-svc.business-ns.svc.cluster.local
        port:
          number: 7002
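To check whether the gateway's Envoy actually received these route rules, the route table of the ingress gateway pod can be dumped with istioctl (the pod name placeholder is illustrative):
istioctl proxy-config routes <ingressgateway-pod> -n istio-system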
The istio-ingressgateway is hit successfully using curl as curl 192.168.xx, but when I try to hit the activemq backend service it fails, and I don't know why. I face the same issue while accessing the userprofile service. Here are my curl commands:
curl -i 192.168.x.x/activemq
curl -i 192.168.x.x/userprofile
curl -i 192.168.x.x/userprofile/getUserDetails
The userprofile service has three endpoints: getUserDetails, verifyNaturalOtp and updateProfile.
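One way to take the gateway out of the picture is to port-forward straight to the Service and replay the request. Note that the VirtualServices above configure no rewrite, so the full path including the /userprofile prefix reaches the backend; this sketch assumes the app itself serves under that prefix:
kubectl port-forward svc/userprofile-svc 7002:7002 -n business-ns
curl -i localhost:7002/userprofile/getUserDetails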
EDIT
Here are the Deployment and Service manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: userprofile-deployment
  namespace: business-ns
  labels:
    app: userprofile
spec:
  replicas: 1
  selector:
    matchLabels:
      app: userprofile
  template:
    metadata:
      labels:
        app: userprofile
    spec:
      containers:
      - env:
        - name: APPLICATION_PORT
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfilePort
        - name: APPLICATION_NAME
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfileName
        - name: IAM_URL
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfileIam
        - name: GRAPHQL_DAO_LAYER_URL
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfileGraphqlDao
        - name: EVENT_PUBLISHER_URL
          valueFrom:
            configMapKeyRef:
              name: configmap-cf
              key: userProfilePublisher
        name: userprofile
        image: 'abc/userprofilesvc:tag1'
        imagePullPolicy: IfNotPresent
        resources: {}
        ports:
        - containerPort: 7002
---
apiVersion: v1
kind: Service
metadata:
  name: userprofile-svc
  namespace: business-ns
  labels:
    app: userprofile
spec:
  selector:
    app: userprofile
  ports:
  - name: http
    protocol: TCP
    port: 7002
    targetPort: 7002
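To verify that this Service actually selects the pod, the endpoints can be listed; an empty ENDPOINTS column would indicate a label/selector mismatch:
kubectl get endpoints userprofile-svc -n business-ns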
EDIT: kubectl describe pod activemq-deployment-5fb57d5f7c-2v9x5 -n tech-ns
The output is:
Name:         activemq-deployment-5fb57d5f7c-2v9x5
Namespace:    tech-ns
Priority:     0
Node:         kworker-2/192.168.18.223
Start Time:   Fri, 27 May 2022 06:11:50 +0000
Labels:       app=activemq
              pod-template-hash=5fb57d5f7c
Annotations:  cni.projectcalico.org/containerID: e2a5a843ee02655ed3cfc4fa538abcccc3dae34590cc61dab341465aa78565fb
              cni.projectcalico.org/podIP: 10.233.107.107/32
              cni.projectcalico.org/podIPs: 10.233.107.107/32
              kubesphere.io/restartedAt: 2022-05-27T06:11:42.602Z
Status:       Running
IP:           10.233.107.107
IPs:
  IP:  10.233.107.107
Controlled By:  ReplicaSet/activemq-deployment-5fb57d5f7c
Containers:
  activemq:
    Container ID:   docker://94a1f07489f6d2db51d2fe3bfce0ed3654ea7150eb17223696363c1b7f355cd7
    Image:          vialogic/activemq:cluster1.0
    Image ID:       docker-pullable://vialogic/activemq@sha256:f3954187bf1ead0a0bc91ec5b1c654fb364bd2efaa5e84e07909d0a1ec062743
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 01 Jun 2022 06:51:19 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 30 May 2022 08:21:52 +0000
      Finished:     Wed, 01 Jun 2022 06:21:43 +0000
    Ready:          True
    Restart Count:  20
    Environment:    <none>
    Mounts:
      /home/alpine/apache-activemq-5.16.4/conf/jetty-realm.properties from active-creds (rw,path="jetty-realm.properties")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvbwv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  active-creds:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  creds
    Optional:    false
  kube-api-access-nvbwv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age                From     Message
  ----    ------          ----               ----     -------
  Normal  SandboxChanged  55m (x5 over 58m)  kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal  Pulled          51m                kubelet  Container image "vialogic/activemq:cluster1.0" already present on machine
  Normal  Created         51m                kubelet  Created container activemq
  Normal  Started         51m                kubelet  Started container activemq
As per the pod description shared, neither the istio-init nor the istio-proxy container is injected into the application pod, so routing is not happening from the gateway to the application pod. The namespace (or deployment) has to be Istio-enabled so that the Istio sidecar is injected when the application pod is created. After that, routing to the application will work.
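A quick way to confirm this from the pod spec (using the pod name from the describe output above) is to list its containers and look for istio-proxy:
kubectl get pod activemq-deployment-5fb57d5f7c-2v9x5 -n tech-ns -o jsonpath='{.spec.containers[*].name}'
For an injected pod this prints the application container alongside istio-proxy.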
Your Istio sidecar is not injected into the pod. For a pod that has been injected, you would see output like this in the Labels and Annotations sections:
Labels:       pod-template-hash=546859454c
              security.istio.io/tlsMode=istio
              service.istio.io/canonical-name=userprofile
              service.istio.io/canonical-revision=latest
Annotations:  kubectl.kubernetes.io/default-container: svc
              kubectl.kubernetes.io/default-logs-container: svc
              prometheus.io/path: /stats/prometheus
              prometheus.io/port: 15020
              prometheus.io/scrape: true
              sidecar.istio.io/status:
                {"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-data","istio-podinfo","istio-token","istiod-...
You can enforce the injection at the namespace level with the following command:
kubectl label namespace <namespace> istio-injection=enabled --overwrite
Since the injection takes place at the pod's creation, you will need to kill the running pods after the command is issued.
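For example, a rolling restart recreates the pods so the webhook can inject the sidecar (deployment names taken from the manifests above):
kubectl rollout restart deployment/userprofile-deployment -n business-ns
kubectl rollout restart deployment/activemq-deployment -n tech-ns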
Check out the Sidecar Injection Problems document as a reference.