
Sending Large CSV to Kafka using python Spark

I am trying to send a large CSV to Kafka. The basic structure is to read a line of the CSV and zip it with the header:

a = dict(zip(header, line.split(",")))

This then gets converted to JSON with:

message = json.dumps(a)

I then use the kafka-python library to send the message:

from kafka import SimpleProducer, KafkaClient
kafka = KafkaClient("localhost:9092")
producer = SimpleProducer(kafka)
producer.send_messages("topic", message)

Using PySpark I have easily created an RDD of messages from the CSV file:

import json
from pyspark import SparkContext

sc = SparkContext()
text = sc.textFile("file.csv")
header = text.first().split(',')

def remove_header(itr_index, itr):
    # drop the header row from the first partition only
    return iter(list(itr)[1:]) if itr_index == 0 else itr

noHeader = text.mapPartitionsWithIndex(remove_header)

messageRDD = noHeader.map(lambda x: json.dumps(dict(zip(header, x.split(",")))))

Now I want to send these messages, so I define a function:

def sendkafka(message):
    kafka = KafkaClient("localhost:9092")
    producer = SimpleProducer(kafka)
    return producer.send_messages('topic', message)

Then I create a new RDD to send the messages:

sentRDD = messageRDD.map(lambda x: sendkafka(x))

I then call sentRDD.count(), which starts churning through the data and sending messages.

Unfortunately this is very slow. It sends 1000 messages a second. This is on a 10-node cluster with 4 CPUs and 8 GB of memory per node.

In comparison, creating the messages takes about 7 seconds on a 10-million-row CSV (~2 GB).

I think the issue is that I am instantiating a Kafka producer inside the function. However, if I don't, Spark complains that the producer doesn't exist, even though I have tried defining it globally.

Perhaps someone can shed some light on how this problem may be approached.

Thank you,

You can create a single producer per partition and use either mapPartitions or foreachPartition:

def sendkafka(messages):
    # one client/producer per partition, reused for every message in it
    kafka = KafkaClient("localhost:9092")
    producer = SimpleProducer(kafka)
    for message in messages:
        yield producer.send_messages('topic', message)

sentRDD = messageRDD.mapPartitions(sendkafka)

If the above alone doesn't help, you can try to extend it using an asynchronous producer.
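For example, here is a minimal sketch using the newer KafkaProducer API from kafka-python (1.0+), which buffers records and sends them asynchronously from a background thread; the broker address, topic name, and tuning values are placeholders:

from kafka import KafkaProducer

def sendkafka_async(messages):
    # KafkaProducer batches records and ships them on a background thread
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        linger_ms=50,          # wait up to 50 ms to fill a batch
        batch_size=64 * 1024)  # 64 KB batches
    count = 0
    for message in messages:
        producer.send("topic", message.encode("utf-8"))
        count += 1
    producer.flush()  # block until all buffered records are delivered
    yield count

sentRDD = messageRDD.mapPartitions(sendkafka_async)
sentRDD.count()  # an action is still needed to trigger the sends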

In Spark 2.x it is also possible to use the Kafka data source. You'll have to include the spark-sql-kafka jar, matching your Spark and Scala versions (here 2.2.0 and 2.11 respectively):

spark.jars.packages  org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0
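Equivalently, the package can be set when building the session, assuming no SparkContext has been started yet (a sketch; the app name is a placeholder):

from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("csv-to-kafka")
    .config("spark.jars.packages",
            "org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0")
    .getOrCreate())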

Convert the data to a DataFrame (if it is not a DataFrame already):

messageDF = spark.createDataFrame(messageRDD, "string")

and write using DataFrameWriter:

(messageDF.write
    .format("kafka")
    .option("topic", topic_name)
    .option("kafka.bootstrap.servers", bootstrap_servers)
    .save())
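Note that the Kafka writer expects a string or binary column named value (with optional key and topic columns). createDataFrame(messageRDD, "string") already yields a single column named value, but if your DataFrame uses a different column name, a rename along these lines should work (a sketch; df and json_str are hypothetical names):

from pyspark.sql.functions import col

kafkaReadyDF = df.select(col("json_str").alias("value"))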
