简体   繁体   中英

How to write to Cassandra using foreachBatch() in Java Spark?

I have the following code and I would like to write into Cassandra using Spark 2.4 Structured Streaming's foreachBatch:

Dataset<Row> df = spark
    .readStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "topic1")
    .load();

Dataset<Row> values = df.selectExpr(
    "split(value,',')[0] as field1",
    "split(value,',')[1] as field2",
    "split(value,',')[2] as field3",
    "split(value,',')[3] as field4",
    "split(value,',')[4] as field5");

// TODO write into Cassandra
values.writeStream().foreachBatch(
    new VoidFunction2<Dataset<Row>, Long>() {
        public void call(Dataset<Row> dataset, Long batchId) {
            // Transform and write batchDF
        }
    }
).start();

When you use .foreachBatch, your code works just as it would with normal (batch) datasets. In Java, the code could look like the following:

.foreachBatch((VoidFunction2<Dataset<Row>, Long>) (df, batchId) ->
         df.write()
         .format("org.apache.spark.sql.cassandra")
         .options(ImmutableMap.of("table", "sttest", "keyspace", "test"))
         .mode(SaveMode.Append)
         .save()
)
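For reference, here is a fuller end-to-end sketch of this approach in Java. The keyspace/table names (test/sttest), the Cassandra host, and the checkpoint path are illustrative assumptions; note that Kafka's value column is binary and should be cast to a string before splitting:

import com.google.common.collect.ImmutableMap;
import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class KafkaToCassandra {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-to-cassandra")
                // tell the connector where Cassandra lives (assumed host)
                .config("spark.cassandra.connection.host", "localhost")
                .getOrCreate();

        Dataset<Row> df = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "topic1")
                .load();

        // Kafka's value column is binary, so cast it before splitting
        Dataset<Row> values = df.selectExpr(
                "split(cast(value as string), ',')[0] as field1",
                "split(cast(value as string), ',')[1] as field2",
                "split(cast(value as string), ',')[2] as field3",
                "split(cast(value as string), ',')[3] as field4",
                "split(cast(value as string), ',')[4] as field5");

        values.writeStream()
                .foreachBatch((VoidFunction2<Dataset<Row>, Long>) (batchDf, batchId) ->
                        batchDf.write()
                                .format("org.apache.spark.sql.cassandra")
                                .options(ImmutableMap.of("table", "sttest", "keyspace", "test"))
                                .mode(SaveMode.Append)
                                .save())
                // checkpointing lets the query recover after restarts (assumed path)
                .option("checkpointLocation", "/tmp/checkpoint")
                .start()
                .awaitTermination();
    }
}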

Update in September 2020: support for Spark Structured Streaming was added in the Spark Cassandra Connector 2.5.0.

Try adding it to your pom.xml:

<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.11</artifactId>
    <version>2.4.2</version>
</dependency>
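Note that the snippet above pins version 2.4.2. For the 2.5.0 release referenced in the update, the coordinates would presumably look like this (Scala 2.11 build assumed; pick the artifact suffix matching your Scala version):

<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.11</artifactId>
    <version>2.5.0</version>
</dependency>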

After adding the dependency, import the Cassandra implicits (Scala):

import org.apache.spark.sql.cassandra._

Then you can use the cassandraFormat method on your DataFrame:

dataset
      .write
      .cassandraFormat("table","keyspace")
      .save()
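With the streaming support added in 2.5.0 (see the update above), the stream can also be written to Cassandra directly, without foreachBatch. A minimal Java sketch, assuming the same test/sttest keyspace and table and an illustrative checkpoint path:

values.writeStream()
    .format("org.apache.spark.sql.cassandra")
    .option("keyspace", "test")
    .option("table", "sttest")
    // a checkpoint location is required for streaming sinks; the path is an assumption
    .option("checkpointLocation", "/tmp/checkpoint")
    .outputMode("append")
    .start();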
