
Why is Cassandra crashing whenever I try to run DataStax Kafka Connector?

Goal: to send messages from Kafka to a Cassandra sink using Kafka Connect.

I have deployed Kafka and Cassandra and I can use each of them individually: I can send data to Kafka without issues, pass messages with a producer, and consume them with a consumer. I also have no problems creating tables and inserting data into them with cqlsh. However, whenever I try to deploy the DataStax Apache Kafka Connector, Cassandra appears to crash.
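
(For reference, a minimal version of those individual checks looks like the sketch below; the topic, keyspace and table names are taken from the connector mapping further down, and the exact names and flags on a given setup may differ:)

    # produce and consume a few test messages on the topic the connector will read
    bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic stocks_topic
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic stocks_topic --from-beginning

    # confirm Cassandra is reachable and the target table is queryable
    cqlsh -e "SELECT * FROM stocks_keyspace.stocks_table LIMIT 5;"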

I am trying to learn how to use Kafka Connect in standalone mode with just one Kafka producer, one broker, and one Cassandra keyspace. I configured connect-standalone.properties and cassandra-sink-standalone.properties following the instructions from DataStax: https://docs.datastax.com/en/kafka/doc/kafka/kafkaStringJson.html

connect-standalone.properties

bootstrap.servers=localhost:9092

key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

key.converter.schemas.enable=false
value.converter.schemas.enable=false

offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000

plugin.path= *install_location*/kafka-connect-cassandra-sink-1.4.0.jar

cassandra-sink-standalone.properties

name=stocks-sink
connector.class=com.datastax.kafkaconnector.DseSinkConnector
tasks.max=1
topics=stocks_topic
topic.stocks_topic.stocks_keyspace.stocks_table.mapping = symbol=value.symbol, ts=value.ts, exchange=value.exchange, industry=value.industry, name=key, value=value.value
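
(The mapping above assumes a target table roughly like the following; this is a sketch only, and the column types and primary key are my assumptions based on the field names, not the actual DDL:)

    CREATE KEYSPACE IF NOT EXISTS stocks_keyspace
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

    CREATE TABLE IF NOT EXISTS stocks_keyspace.stocks_table (
        symbol   text,        -- value.symbol
        ts       timestamp,   -- value.ts
        exchange text,        -- value.exchange
        industry text,        -- value.industry
        name     text,        -- record key
        value    double,      -- value.value
        PRIMARY KEY (symbol, ts)
    );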

Then, the Kafka connector is started with bin/connect-standalone.sh connect-standalone.properties cassandra-sink-standalone.properties.

About 95% of the time when I try to start the Kafka connector, Cassandra crashes. Running bin/nodetool status then shows:

nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.

In the system.log and debug.log files there is not even an indication that Cassandra crashed; the last line is still:

INFO [main] 2023-01-31 00:00:00,143 StorageService.java:2806 - Node localhost/127.0.0.1:7000 state jump to NORMAL

In the Kafka Connect logs, the error messages state:

[2023-01-31 15:24:47,803] INFO [plc-sink|task-0] DataStax Java driver for Apache Cassandra(R) (com.datastax.oss:java-driver-core) version 4.6.0 (com.datastax.oss.driver.internal.core.DefaultMavenCoordinates:37)
[2023-01-31 15:24:47,947] INFO [plc-sink|task-0] Could not register Graph extensions; this is normal if Tinkerpop was explicitly excluded from classpath (com.datastax.oss.driver.internal.core.context.InternalDriverContext:540)
[2023-01-31 15:24:47,948] INFO [plc-sink|task-0] Could not register Reactive extensions; this is normal if Reactive Streams was explicitly excluded from classpath (com.datastax.oss.driver.internal.core.context.InternalDriverContext:559)
[2023-01-31 15:24:47,997] INFO [plc-sink|task-0] Using native clock for microsecond precision (com.datastax.oss.driver.internal.core.time.Clock:40)
[2023-01-31 15:24:47,999] INFO [plc-sink|task-0] [s0] No contact points provided, defaulting to /127.0.0.1:9042 (com.datastax.oss.driver.internal.core.metadata.MetadataManager:134)
[2023-01-31 15:24:48,190] WARN [plc-sink|task-0] [s0] Error connecting to Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=3247c5e4), trying next node (ConnectionInitException: [s0|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (java.nio.channels.ClosedChannelException)) (com.datastax.oss.driver.internal.core.control.ControlConnection:34)
[2023-01-31 15:24:48,200] ERROR [plc-sink|task-0] WorkerSinkTask{id=plc-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:196)
com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=3247c5e4): [com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (java.nio.channels.ClosedChannelException)]
    at com.datastax.oss.driver.api.core.AllNodesFailedException.copy(AllNodesFailedException.java:141)
    at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:149)
    at com.datastax.oss.driver.api.core.session.SessionBuilder.build(SessionBuilder.java:612)
    at com.datastax.oss.kafka.sink.state.LifeCycleManager.buildCqlSession(LifeCycleManager.java:518)
    at com.datastax.oss.kafka.sink.state.LifeCycleManager.lambda$startTask$0(LifeCycleManager.java:113)
    at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
    at com.datastax.oss.kafka.sink.state.LifeCycleManager.startTask(LifeCycleManager.java:109)
    at com.datastax.oss.kafka.sink.CassandraSinkTask.start(CassandraSinkTask.java:83)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:312)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:187)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:244)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
    Suppressed: com.datastax.oss.driver.api.core.connection.ConnectionInitException: [s0|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (java.nio.channels.ClosedChannelException)
        at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler$InitRequest.fail(ProtocolInitHandler.java:342)
        at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.writeListener(ChannelHandlerRequest.java:87)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
        at io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
        at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:95)
        at io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:30)
        at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.send(ChannelHandlerRequest.java:76)
        at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler$InitRequest.send(ProtocolInitHandler.java:183)
        at com.datastax.oss.driver.internal.core.channel.ProtocolInitHandler.onRealConnect(ProtocolInitHandler.java:118)
        at com.datastax.oss.driver.internal.core.channel.ConnectInitHandler.lambda$connect$0(ConnectInitHandler.java:57)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
        at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
        at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:321)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:337)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        ... 1 more
        Suppressed: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9042
        Caused by: java.net.ConnectException: Connection refused
            at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
            at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
            at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
            at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
            at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
            at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
            at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
            at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
            at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
            at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
            at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
            at java.lang.Thread.run(Thread.java:750)
    Caused by: java.nio.channels.ClosedChannelException
        at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:921)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:897)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:748)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:740)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:726)
        at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:127)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:748)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:763)
        at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:788)
        at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:756)
        at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:806)
        at io.netty.channel.DefaultChannelPipeline.writeAndFlush(DefaultChannelPipeline.java:1025)
        at io.netty.channel.AbstractChannel.writeAndFlush(AbstractChannel.java:294)
        at com.datastax.oss.driver.internal.core.channel.ChannelHandlerRequest.send(ChannelHandlerRequest.java:75)
        ... 20 more

In the 5% of cases where Cassandra doesn't actually crash, the following messages show up in the Kafka Connect logs:

[2023-01-31 15:41:32,839] INFO [plc-sink|task-0] DataStax Java driver for Apache Cassandra(R) (com.datastax.oss:java-driver-core) version 4.6.0 (com.datastax.oss.driver.internal.core.DefaultMavenCoordinates:37)
[2023-01-31 15:41:32,981] INFO [plc-sink|task-0] Could not register Graph extensions; this is normal if Tinkerpop was explicitly excluded from classpath (com.datastax.oss.driver.internal.core.context.InternalDriverContext:540)
[2023-01-31 15:41:32,982] INFO [plc-sink|task-0] Could not register Reactive extensions; this is normal if Reactive Streams was explicitly excluded from classpath (com.datastax.oss.driver.internal.core.context.InternalDriverContext:559)
[2023-01-31 15:41:33,037] INFO [plc-sink|task-0] Using native clock for microsecond precision (com.datastax.oss.driver.internal.core.time.Clock:40)
[2023-01-31 15:41:33,040] INFO [plc-sink|task-0] [s0] No contact points provided, defaulting to /127.0.0.1:9042 (com.datastax.oss.driver.internal.core.metadata.MetadataManager:134)
[2023-01-31 15:41:33,254] INFO [plc-sink|task-0] [s0] Failed to connect with protocol DSE_V2, retrying with DSE_V1 (com.datastax.oss.driver.internal.core.channel.ChannelFactory:224)
[2023-01-31 15:41:33,263] INFO [plc-sink|task-0] [s0] Failed to connect with protocol DSE_V1, retrying with V4 (com.datastax.oss.driver.internal.core.channel.ChannelFactory:224)
[2023-01-31 15:41:34,091] INFO [plc-sink|task-0] WorkerSinkTask{id=plc-sink-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:313)
[2023-01-31 15:41:34,092] INFO [plc-sink|task-0] WorkerSinkTask{id=plc-sink-0} Executing sink task (org.apache.kafka.connect.runtime.WorkerSinkTask:198)
...

Versions:

  • Apache Cassandra 4.0.7
  • Apache Kafka 3.3.1
  • DataStax Apache Kafka Connector 1.4.0

I am currently running Ubuntu 20.04.5 on WSL2 on Windows 11, with the following specs:

  • CPU: 4 cores
  • Memory: 8 GB RAM
  • Disk (SSD): 250 GB

Seeing that it actually works 5% of the time, I suspect this is the OOM issue outlined in https://community.datastax.com/questions/6947/index.html (sometimes I just happen to have enough memory?). I have already tried the solution described there, but it didn't help. How do I configure Cassandra / Kafka Connect to avoid this issue? Or is it simply a matter of needing a machine with more memory?

I think you're on the right track in suggesting that memory is the problem.

I have a "small" Windows Surface Pro that I use to replicate issues like yours, and I also run Ubuntu 20.04 on WSL2 on that laptop.

By default, Windows allocates half of the system RAM to WSL2, so on my modest 8GB-RAM laptop my Ubuntu installation gets about 3.7GB of memory. A vanilla installation of Cassandra (out-of-the-box, zero configuration) starts with 1.3GB of memory allocated to it, which leaves only about 2.4GB for everything else.
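
(As an aside, you can check how much memory Ubuntu actually sees from inside WSL2, and optionally cap the allocation with a .wslconfig file on the Windows side; the 6GB value below is only illustrative:)

    # inside WSL2: how much memory does Ubuntu see?
    free -h

    # optional: %UserProfile%\.wslconfig on the Windows side, then run `wsl --shutdown` to apply
    # [wsl2]
    # memory=6GB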

I suspect what is happening is that when you start Kafka on the same node, Ubuntu runs out of memory and this triggers the Linux oom-killer. Although the end result is similar, the trigger is slightly different from what I described in the post you linked, so I suspect that explicitly setting disk_access_mode won't help in this case.
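
(You can confirm this on your machine by checking the kernel log for oom-killer activity right after Cassandra disappears, for example:)

    # look for evidence that the kernel killed the Cassandra JVM
    sudo dmesg | grep -iE "out of memory|oom-killer|killed process"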

As a workaround, configure Cassandra to allocate only 1GB of memory by setting MAX_HEAP_SIZE in conf/cassandra-env.sh:

MAX_HEAP_SIZE="1G"
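
(After restarting Cassandra, a quick way to confirm the smaller heap took effect, assuming nodetool can reach the node:)

    # the reported maximum heap should now be roughly 1GB
    nodetool info | grep -i heap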

Kafka is configured with a 1GB heap by default, but in case it isn't, set the following in bin/kafka-server-start.sh:

    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
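
(With both heaps capped, you can sanity-check the remaining headroom from inside WSL2 while Cassandra, Kafka and Kafka Connect are all running; a rough sketch:)

    # 'available' should stay well above zero once everything is up
    free -h

    # approximate resident memory of each running JVM, in kilobytes
    ps -C java -o rss,comm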

With both of those set, Ubuntu should have more than 1GB of memory to spare, which will hopefully allow you to run your tests. Cheers!
