K8s monitoring with Prometheus - POD metrics
Is there any tool/way to get CPU, MEM, NET metrics of PODs? Other than the links below, are there any tools available?

kube-state-metrics - Able to deploy, but no useful POD metrics. You can see the POD metrics here.

prometheus-kubernetes - But it gets stuck at registering the custom service forever. Check the container_cpu metrics here, but I don't see any metrics like that.

UPDATE 1

Tried launching the POD with the yaml file as mentioned in the blog. Installed Go with GOPATH & GOROOT set.
ubuntu@ip-172-:~$ kubectl create -f prometheus.yaml
panic: interface conversion: interface {} is []interface {}, not map[string]interface {}
goroutine 1 [running]:
k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation.getObjectKind(0x14dcb20, 0xc420c56480, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xffffffffffffff01, 0xc420f6bca0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation/validation.go:111 +0x539
k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation.(*SchemaValidation).ValidateBytes(0xc4207b01d0, 0xc420b3ca80, 0x16c, 0x180, 0xc420b51628, 0x4ed384)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/util/openapi/validation/validation.go:49 +0x8f
k8s.io/kubernetes/pkg/kubectl/validation.ConjunctiveSchema.ValidateBytes(0xc42073cba0, 0x2, 0x2, 0xc420b3ca80, 0x16c, 0x180, 0x4ed029, 0xc420b3ca80)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/validation/schema.go:130 +0x9a
k8s.io/kubernetes/pkg/kubectl/validation.(*ConjunctiveSchema).ValidateBytes(0xc42073cbc0, 0xc420b3ca80, 0x16c, 0x180, 0xc420b51700, 0x443693)
<autogenerated>:3 +0x7d
k8s.io/kubernetes/pkg/kubectl/resource.ValidateSchema(0xc420b3ca80, 0x16c, 0x180, 0x2183f80, 0xc42073cbc0, 0x20, 0xc420b51700)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:222 +0x68
k8s.io/kubernetes/pkg/kubectl/resource.(*StreamVisitor).Visit(0xc420c2eb00, 0xc420c3d440, 0x218a000, 0xc420c3d4a0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:543 +0x269
k8s.io/kubernetes/pkg/kubectl/resource.(*FileVisitor).Visit(0xc420c3d2c0, 0xc420c3d440, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:502 +0x181
k8s.io/kubernetes/pkg/kubectl/resource.EagerVisitorList.Visit(0xc420f6bc30, 0x1, 0x1, 0xc420903c50, 0x1, 0xc420903c50)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:211 +0x100
k8s.io/kubernetes/pkg/kubectl/resource.(*EagerVisitorList).Visit(0xc420c3d360, 0xc420903c50, 0x7ff854222000, 0x0)
<autogenerated>:115 +0x69
k8s.io/kubernetes/pkg/kubectl/resource.FlattenListVisitor.Visit(0x2183d00, 0xc420c3d360, 0xc420c2eac0, 0xc420c2eb40, 0xc420c3d401, 0xc420c2eb40)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:417 +0xa3
k8s.io/kubernetes/pkg/kubectl/resource.(*FlattenListVisitor).Visit(0xc420c3d380, 0xc420c2eb40, 0x18, 0x18)
<autogenerated>:130 +0x69
k8s.io/kubernetes/pkg/kubectl/resource.DecoratedVisitor.Visit(0x2183d80, 0xc420c3d380, 0xc420c3d3c0, 0x3, 0x4, 0xc420c3d400, 0xc420386901, 0xc420c3d400)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:325 +0xd8
k8s.io/kubernetes/pkg/kubectl/resource.(*DecoratedVisitor).Visit(0xc420903c20, 0xc420c3d400, 0x151b920, 0xc420f6bc60)
<autogenerated>:153 +0x73
k8s.io/kubernetes/pkg/kubectl/resource.ContinueOnErrorVisitor.Visit(0x2183c80, 0xc420903c20, 0xc420c370e0, 0x7ff854222000, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/visitor.go:352 +0xf1
k8s.io/kubernetes/pkg/kubectl/resource.(*ContinueOnErrorVisitor).Visit(0xc420f6bc50, 0xc420c370e0, 0x40f3f8, 0x60)
<autogenerated>:144 +0x60
k8s.io/kubernetes/pkg/kubectl/resource.(*Result).Visit(0xc4202c23f0, 0xc420c370e0, 0x6, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/resource/result.go:95 +0x62
k8s.io/kubernetes/pkg/kubectl/cmd.RunCreate(0x21acd60, 0xc420320e40, 0xc42029d440, 0x2182e40, 0xc42000c018, 0x2182e40, 0xc42000c020, 0xc420173000, 0x176f608, 0x4)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/create.go:187 +0x4a8
k8s.io/kubernetes/pkg/kubectl/cmd.NewCmdCreate.func1(0xc42029d440, 0xc4202aa580, 0x0, 0x2)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubectl/cmd/create.go:73 +0x17f
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc42029d440, 0xc4202aa080, 0x2, 0x2, 0xc42029d440, 0xc4202aa080)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:603 +0x22b
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420235b00, 0x8000102, 0x0, 0xffffffffffffffff)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:689 +0x339
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420235b00, 0xc420320e40, 0x2182e00)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:648 +0x2b
k8s.io/kubernetes/cmd/kubectl/app.Run(0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/app/kubectl.go:39 +0xd5
main.main()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:26 +0x22
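The panic above is most likely not an installation problem: kubectl create -f expects a Kubernetes object (a YAML mapping with apiVersion and kind), but the prometheus.yaml below is a Prometheus scrape configuration whose top level is a list, which is what trips the interface conversion in kubectl's schema validation ("is []interface {}, not map[string]interface {}"). A scrape config is normally delivered to the Prometheus POD via a ConfigMap; a hedged sketch (the ConfigMap name, namespace, and key here are assumptions, not from the original blog):

```yaml
# Sketch: wrap the Prometheus scrape config in a ConfigMap so it becomes
# a valid Kubernetes object that `kubectl create -f` will accept.
# The name "prometheus-config" and namespace "monitoring" are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    scrape_configs:
      - job_name: 'kubernetes-nodes'
        kubernetes_sd_configs:
          - role: node
```

The Prometheus deployment would then mount this ConfigMap as a volume (typically at /etc/prometheus/) rather than being created from the raw scrape config itself.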
prometheus.yaml
# This scrape config scrapes kubelets
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
    - role: node
  # couldn't get prometheus to validate the kubelet cert for scraping, so don't bother for now
  tls_config:
    insecure_skip_verify: true
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
    - target_label: __scheme__
      replacement: https
    - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname]
      target_label: instance
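Once the kubelet targets above are being scraped, the CPU, MEM, and NET numbers the question asks for come from PromQL queries over the cAdvisor metric families. A hedged sketch (the exact label names vary by version: older kubelets expose pod_name/container_name, newer ones pod/container):

```
# CPU: per-POD CPU usage in cores, averaged over 5 minutes
sum(rate(container_cpu_usage_seconds_total[5m])) by (pod_name)

# MEM: per-POD working-set memory in bytes
sum(container_memory_working_set_bytes) by (pod_name)

# NET: per-POD receive/transmit bandwidth in bytes per second
sum(rate(container_network_receive_bytes_total[5m])) by (pod_name)
sum(rate(container_network_transmit_bytes_total[5m])) by (pod_name)
```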
You are looking for cAdvisor.
I just want to give a complete answer to support @brian-brazil's answer.

cAdvisor supports Prometheus. Just launch the cAdvisor container as described in its Readme, then curl http://localhost:8080/metrics to check the metrics. You can configure that URL in the Prometheus server to pull from.
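The Readme launch step mentioned above boils down to running the cAdvisor image with the host filesystems mounted read-only. A sketch following the upstream Readme (the image tag and published port are assumptions and may differ in your environment):

```shell
# Run cAdvisor as a standalone container (flags follow the upstream Readme)
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest

# Then verify the Prometheus endpoint is serving metrics:
curl http://localhost:8080/metrics | head
```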
This is my setup and it gives you everything you asked for, and to install that stack you just need to run two lines. I have to mention that you also need a working metrics-server. See: Wrong CPU usage values in Grafana + Prometheus.
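The answer does not name the stack, but a common "two lines" install that bundles Prometheus, Grafana, node-exporter, and kube-state-metrics is the community Helm chart. A sketch only (the chart, repo, and release name here are assumptions, not necessarily what this answerer used):

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack
```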