Flume not enough space error while data flow from Kafka to HDFS

We are struggling with a data flow from Kafka to HDFS managed by Flume. The data is not fully transferred to HDFS because of the exceptions below. However, this error seems misleading to us: we have enough space both in the channel's data directory and in HDFS. We thought it might be a problem with the channel configuration, but we have similar configurations for other sources and they work correctly. I would be grateful if anyone who has dealt with this issue could help.

17 Aug 2017 14:15:24,335 ERROR [Log-BackgroundWorker-channel1] (org.apache.flume.channel.file.Log$BackgroundWorker.run:1204)  - Error doing checkpoint
java.io.IOException: Usable space exhausted, only 0 bytes remaining, required 524288000 bytes
        at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:1003)
        at org.apache.flume.channel.file.Log.writeCheckpoint(Log.java:986)
        at org.apache.flume.channel.file.Log.access$200(Log.java:75)
        at org.apache.flume.channel.file.Log$BackgroundWorker.run(Log.java:1201)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
17 Aug 2017 14:15:27,552 ERROR [PollableSourceRunner-KafkaSource-kafkaSource] (org.apache.flume.source.kafka.KafkaSource.doProcess:305)  - KafkaSource EXCEPTION, {}
org.apache.flume.ChannelException: Commit failed due to IO error [channel=channel1]
        at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:639)
        at org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
        at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194)
        at org.apache.flume.source.kafka.KafkaSource.doProcess(KafkaSource.java:286)
        at org.apache.flume.source.AbstractPollableSource.process(AbstractPollableSource.java:58)
        at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:137)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Usable space exhausted, only 0 bytes remaining, required 524288026 bytes
        at org.apache.flume.channel.file.Log.rollback(Log.java:722)
        at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:637)
        ... 6 more

Flume configuration:

agent2.sources = kafkaSource

#sources defined
agent2.sources.kafkaSource.type = org.apache.flume.source.kafka.KafkaSource
agent2.sources.kafkaSource.kafka.bootstrap.servers = …
agent2.sources.kafkaSource.kafka.topics = pega-campaign-response
agent2.sources.kafkaSource.channels = channel1

# channels defined
agent2.channels = channel1

agent2.channels.channel1.type = file
agent2.channels.channel1.checkpointDir = /data/cloudera/.flume/filechannel/checkpointdirs/pega
agent2.channels.channel1.dataDirs = /data/cloudera/.flume/filechannel/datadirs/pega
agent2.channels.channel1.capacity = 10000
agent2.channels.channel1.transactionCapacity = 10000

#hdfs sinks

agent2.sinks = sink

agent2.sinks.sink.type = hdfs
agent2.sinks.sink.hdfs.fileType = DataStream
agent2.sinks.sink.hdfs.path = hdfs://bigdata-cls:8020/stage/data/pega/campaign-response/%d%m%Y
agent2.sinks.sink.hdfs.batchSize = 1000
agent2.sinks.sink.hdfs.rollCount = 0
agent2.sinks.sink.hdfs.rollSize = 0
agent2.sinks.sink.hdfs.rollInterval = 120
agent2.sinks.sink.hdfs.useLocalTimeStamp = true
agent2.sinks.sink.hdfs.filePrefix = pega-

Output of df -h:

Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   26G  6.8G   18G  28% /
devtmpfs               126G     0  126G   0% /dev
tmpfs                  126G  6.3M  126G   1% /dev/shm
tmpfs                  126G  2.9G  123G   3% /run
tmpfs                  126G     0  126G   0% /sys/fs/cgroup
/dev/sda1              477M  133M  315M  30% /boot
tmpfs                   26G     0   26G   0% /run/user/0
cm_processes           126G  1.9G  124G   2% /run/cloudera-scm-agent/process
/dev/scinib            2.0T   53G  1.9T   3% /data
tmpfs                   26G   20K   26G   1% /run/user/2000

Change the channel type to a memory channel and test with it, to isolate the disk-space problem:

agent2.channels.channel1.type = memory

Also, since you already have Kafka in your setup, you can use it as a Flume channel:

https://flume.apache.org/FlumeUserGuide.html#kafka-channel
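A minimal sketch of what replacing the file channel with a Kafka channel could look like. The topic and group names below are placeholders (pick a topic distinct from the source topic to avoid reconsuming your own events), and the broker list was elided in the question, so it must be filled in:

agent2.channels = channel1
agent2.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
agent2.channels.channel1.kafka.bootstrap.servers = <your-broker-list>
agent2.channels.channel1.kafka.topic = flume-channel-pega
agent2.channels.channel1.kafka.consumer.group.id = flume-agent2

With a Kafka channel, events are buffered in a Kafka topic instead of local files, so the local-disk space check that produced the error no longer applies.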

Your error does not point to free space in HDFS, but to free space on the local disk holding the files used by your channel. If you look at the file channel documentation, you will see that the default value is 524288000 bytes. Check that you have enough free local space (according to your error, it appears to be 0). You can also change the minimumRequiredSpace property.
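For example, to lower the free-space threshold the file channel enforces, set the property explicitly (value in bytes; 100 MB here is only an illustrative figure, the default being 524288000):

agent2.channels.channel1.minimumRequiredSpace = 104857600

Note that this only relaxes the safety check; if the partition holding checkpointDir and dataDirs is genuinely full, events will still fail to commit.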
