I am using Spark 2.2 on Databricks and trying to implement a Kinesis sink to write from Spark to a Kinesis stream.
I am using the provided sample from here: https://docs.databricks.com/_static/notebooks/structured-streaming-kinesis-sink.html
import java.nio.ByteBuffer

import scala.collection.mutable.ArrayBuffer

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.services.kinesis.{AmazonKinesis, AmazonKinesisClientBuilder}
import com.amazonaws.services.kinesis.model.{PutRecordsRequest, PutRecordsRequestEntry}

import org.apache.spark.sql.ForeachWriter

/**
 * A simple Sink that writes to the given Amazon Kinesis `stream` in the given `region`. For authentication, users may provide
 * `awsAccessKey` and `awsSecretKey`, or use IAM Roles when launching their cluster.
 *
 * This Sink takes a two-column Dataset, with the columns being the `partitionKey` and the `data` respectively.
 * We will buffer data up to `maxBufferSize` before flushing to Kinesis in order to reduce cost.
 */
class KinesisSink(
    stream: String,
    region: String,
    awsAccessKey: Option[String] = None,
    awsSecretKey: Option[String] = None) extends ForeachWriter[(String, Array[Byte])] {

  // Configurations
  private val maxBufferSize = 500 * 1024 // 500 KB

  private var client: AmazonKinesis = _
  private val buffer = new ArrayBuffer[PutRecordsRequestEntry]()
  private var bufferSize: Long = 0L

  override def open(partitionId: Long, version: Long): Boolean = {
    client = createClient
    true
  }

  override def process(value: (String, Array[Byte])): Unit = {
    val (partitionKey, data) = value
    // Maximum of 500 records can be sent with a single `putRecords` request
    if ((data.length + bufferSize > maxBufferSize && buffer.nonEmpty) || buffer.length == 500) {
      flush()
    }
    buffer += new PutRecordsRequestEntry().withPartitionKey(partitionKey).withData(ByteBuffer.wrap(data))
    bufferSize += data.length
  }

  override def close(errorOrNull: Throwable): Unit = {
    if (buffer.nonEmpty) {
      flush()
    }
    client.shutdown()
  }

  /** Flush the buffer to Kinesis */
  private def flush(): Unit = {
    val recordRequest = new PutRecordsRequest()
      .withStreamName(stream)
      .withRecords(buffer: _*)
    client.putRecords(recordRequest)
    buffer.clear()
    bufferSize = 0
  }

  /** Create a Kinesis client. */
  private def createClient: AmazonKinesis = {
    val cli = if (awsAccessKey.isEmpty || awsSecretKey.isEmpty) {
      AmazonKinesisClientBuilder.standard()
        .withRegion(region)
        .build()
    } else {
      AmazonKinesisClientBuilder.standard()
        .withRegion(region)
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(awsAccessKey.get, awsSecretKey.get)))
        .build()
    }
    cli
  }
}
I then instantiate the KinesisSink class using
val kinesisSink = new KinesisSink("MyStream", "us-east-1", Option("xxx..."), Option("xxx..."))
Finally, I create a stream using this sink. This KinesisSink takes a two-column Dataset, with the columns being the partitionKey and the data respectively.
case class MyData(partitionKey: String, data: Array[Byte])

val newsDataDF = kinesisDF
  .selectExpr("apinewsseqid", "fullcontent").as[MyData]
  .writeStream
  .outputMode("append")
  .foreach(kinesisSink)
  .start
but I am still getting the following error:
error: type mismatch;
found : KinesisSink
required: org.apache.spark.sql.ForeachWriter[MyData]
.foreach(kinesisSink)
You need to change the signature of the KinesisSink.process method; it should take your custom MyData object and then extract partitionKey and data from there.
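A minimal sketch of that change, keeping everything else (constructor parameters, buffering, flush, and client creation) exactly as in the Databricks sample above:

// Only the changed parts are shown; open, close, flush and createClient
// stay as in the original sample.
class KinesisSink(
    stream: String,
    region: String,
    awsAccessKey: Option[String] = None,
    awsSecretKey: Option[String] = None) extends ForeachWriter[MyData] {

  override def process(value: MyData): Unit = {
    // Pull the fields out of the case class instead of destructuring a tuple
    val partitionKey = value.partitionKey
    val data = value.data
    if ((data.length + bufferSize > maxBufferSize && buffer.nonEmpty) || buffer.length == 500) {
      flush()
    }
    buffer += new PutRecordsRequestEntry().withPartitionKey(partitionKey).withData(ByteBuffer.wrap(data))
    bufferSize += data.length
  }
}

With that, the Dataset[MyData] produced by .as[MyData] and the ForeachWriter[MyData] agree, so .foreach(kinesisSink) type-checks.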
I used the exact same KinesisSink provided by Databricks and got it working by creating the dataset using
val dataset = df.selectExpr("CAST(rand() AS STRING) as partitionKey","message_bytes").as[(String, Array[Byte])]
And used the dataset to write to the Kinesis stream:
val query = dataset
  .writeStream
  .foreach(kinesisSink)
  .start()

query.awaitTermination()
When I used dataset.selectExpr("partitionKey", "message_bytes") instead, I got the same type mismatch error:
error: type mismatch;
found: KinesisSink
required: org.apache.spark.sql.ForeachWriter[(String, Array[Byte])]
.foreach(kinesisSink)
selectExpr is not needed in this case, since the typed dataset already drives the data type of the ForeachWriter; calling selectExpr again gives back an untyped DataFrame.
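A short sketch of the difference, assuming df and message_bytes are the placeholders from the answer above and that spark.implicits._ is in scope for the tuple encoder:

// Typed: the element type is (String, Array[Byte]), which matches
// the KinesisSink's ForeachWriter[(String, Array[Byte])].
val typed = df
  .selectExpr("CAST(rand() AS STRING) AS partitionKey", "message_bytes")
  .as[(String, Array[Byte])]

typed.writeStream.foreach(kinesisSink).start()

// Calling selectExpr again returns an untyped DataFrame (Dataset[Row]),
// so the element type no longer lines up with what the KinesisSink expects.
val untyped = typed.selectExpr("partitionKey", "message_bytes")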