
How to send Spark metrics to Graphite on a Standalone cluster?

I am trying to send Spark metrics to Graphite using the following configuration:

*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=85.10.206.170
*.sink.graphite.port=2003
*.sink.graphite.period=1
*.sink.graphite.unit=minutes

# Enable jvm source for instance master, worker, driver and executor
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
application.source.jvm.class=org.apache.spark.metrics.source.JvmSource

The file is saved at /data/configurations/metrics.properties.

I submit my application with these properties:

--files=/data/configuration/metrics.properties --conf spark.metrics.conf=metrics.properties

I get the following error:

com.test.MyApp: metrics.properties (No such file or directory)
 java.io.FileNotFoundException: metrics.properties (No such file or directory)
    at java.io.FileInputStream.open0(Native Method) ~[?:1.8.0_45]
    at java.io.FileInputStream.open(FileInputStream.java:195) ~[?:1.8.0_45]
    at java.io.FileInputStream.<init>(FileInputStream.java:138) ~[?:1.8.0_45]
    at java.io.FileInputStream.<init>(FileInputStream.java:93) ~[?:1.8.0_45]
    at org.apache.spark.metrics.MetricsConfig$$anonfun$1.apply(MetricsConfig.scala:50) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.metrics.MetricsConfig$$anonfun$1.apply(MetricsConfig.scala:50) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at scala.Option.map(Option.scala:145) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.metrics.MetricsConfig.initialize(MetricsConfig.scala:50) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.metrics.MetricsSystem.<init>(MetricsSystem.scala:93) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.metrics.MetricsSystem$.createMetricsSystem(MetricsSystem.scala:222) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:361) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:188) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:267) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:424) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:842) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:80) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]
    at org.apache.spark.streaming.api.java.JavaStreamingContext.<init>(JavaStreamingContext.scala:133) ~[spark-assembly-1.4.1-hadoop2.4.0.jar:1.4.1]

Where did I go wrong?

tl;dr spark.metrics.conf should be an absolute path.
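
For example, a minimal sketch of the corrected submit command (the master URL and application JAR below are placeholders, not taken from the question):

spark-submit \
  --master spark://<master-host>:7077 \
  --class com.test.MyApp \
  --files /data/configuration/metrics.properties \
  --conf spark.metrics.conf=/data/configuration/metrics.properties \
  my-app.jar

The driver reads spark.metrics.conf when the SparkContext is created, so a bare file name is resolved against the driver's working directory and fails. Note the absolute path assumes the file exists at that location on every node that starts an instance of the metrics system.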

Note: The asterisk (*) stands for any of the metrics instances available in Spark, which can be driver, executor, the external shuffleService, master, applications, worker, or mesos_cluster.
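
As an illustration, the same sink could be scoped to a single instance by replacing the asterisk with the instance name, e.g. to report only driver metrics:

driver.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
driver.sink.graphite.host=85.10.206.170
driver.sink.graphite.port=2003
driver.sink.graphite.period=1
driver.sink.graphite.unit=minutes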

Tip: You can access the metrics at the corresponding service's URL, e.g. port 4040 for the driver and 8080 for Spark Standalone's master and applications, using the http://localhost:[port]/metrics/json/ URL.
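
For example, to fetch the driver's metrics snapshot from the machine it runs on (assuming the default UI port 4040):

curl http://localhost:4040/metrics/json/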
