
mongo-hadoop package upsert with spark doesn't seem to be working

I am trying to use the MongoDB Connector for Hadoop with Spark to query one collection in MongoDB and upsert all of the retrieved documents into another collection. The MongoUpdateWritable class is used as the value of the RDD to update a collection in MongoDB, and it has an upsert flag. Unfortunately, the upsert flag seems to have no effect on execution: the code runs without errors, as if the flag were set to false.

This (Scala) code connects to a mongod process on localhost, writes some data to a source collection with the mongo client, then reads that data back and writes it to another collection in the same database using Spark. After that write fails to produce a document, the code inserts a document with the same _id into the target collection via the mongo client and runs the same Spark job again, to show that the update half of the upsert works correctly.

Spark version: 1.6.0-cdh5.7.0

Hadoop version: 2.6.0-cdh5.4.7

mongo version: 3.2.0

mongo-hadoop-core version: 2.0.2

import com.mongodb.client.{FindIterable, MongoCollection, MongoDatabase}
import com.mongodb.{BasicDBObject, DBCollection, MongoClient}
import com.mongodb.hadoop.io.MongoUpdateWritable
import org.apache.hadoop.conf.Configuration
import org.bson.{BSONObject, BasicBSONObject, Document}
import com.mongodb.hadoop.{MongoInputFormat, MongoOutputFormat}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object sparkTest extends App {

  //setting up mongo
  val mongo: MongoDatabase = new MongoClient("localhost",27017).getDatabase("test")
  var source: MongoCollection[Document] = mongo.getCollection("source")
  val target: MongoCollection[Document] = mongo.getCollection("target")
  source.drop()
  target.drop()
  //inserting document
  val sourceDoc = new Document()
  sourceDoc.put("unchanged","this field should not be changed")
  sourceDoc.put("_id","1")
  source.insertOne(sourceDoc)

  //setting up spark
  val conf = new SparkConf().setAppName("test mongo with spark").setMaster("local")
  val mongoConfig = new Configuration()
  val sc = new SparkContext(conf)
  mongoConfig.set("mongo.input.uri",
    "mongodb://localhost:27017/test.source")
  mongoConfig.set("mongo.output.uri",
    "mongodb://localhost:27017/test.target")

  //setting up read
  val documents = sc.newAPIHadoopRDD(
    mongoConfig,                // Configuration
    classOf[MongoInputFormat],  // InputFormat
    classOf[Object],            // Key type
    classOf[BSONObject])        // Value type

  //building updates with no document matching the query in the target collection
  val upsert_insert_rdd: RDD[(Object, MongoUpdateWritable)] = documents.mapValues(
    (value: BSONObject) => {

      val query = new BasicBSONObject
      query.append("_id", value.get("_id").toString)

      val update = new BasicBSONObject(value.asInstanceOf[BasicBSONObject])
      update.append("added","this data will be added")

      println("val:"+value.toString)
      println("query:"+query.toString)
      println("update:"+update.toString)

      new MongoUpdateWritable(
        query,  // Query
        update,  // Update
        true,  // Upsert flag
        false,   // Update multiple documents flag
        true  // Replace flag
      )}
  )
  //saving updates
  upsert_insert_rdd.saveAsNewAPIHadoopFile(
    "",
    classOf[Object],
    classOf[MongoUpdateWritable],
    classOf[MongoOutputFormat[Object, MongoUpdateWritable]],
    mongoConfig)

  // At this point, there should be a new document in the target database, but there is not.
  val count = target.count()
  println("count after insert: "+count+", expected: 1")

  //adding doc to display working update. This code will throw an exception if there is a
  //document with a matching _id field in the collection, so if this breaks, that means the upsert worked!
  val targetDoc = new Document()
  targetDoc.put("overwritten","this field should not be changed")
  targetDoc.put("_id","1")
  target.insertOne(targetDoc)

  //building updates when a document matching the query exists in the target collection
  val upsert_update_rdd: RDD[(Object, MongoUpdateWritable)] = documents.mapValues(
    (value: BSONObject) => {

      val query = new BasicBSONObject
      query.append("_id", value.get("_id").toString)

      val update = new BasicBSONObject(value.asInstanceOf[BasicBSONObject])
      update.append("added","this data will be added")

      println("val:"+value.toString)
      println("query:"+query.toString)
      println("update:"+update.toString)

      new MongoUpdateWritable(
        query,  // Query
        update,  // Update
        true,  // Upsert flag
        false,   // Update multiple documents flag
        true  // Replace flag
      )}
  )
  //saving updates
  upsert_update_rdd.saveAsNewAPIHadoopFile(
    "",
    classOf[Object],
    classOf[MongoUpdateWritable],
    classOf[MongoOutputFormat[Object, MongoUpdateWritable]],
    mongoConfig)

  //checking that the update succeeded. should print:
  //contains new field:true, contains overwritten field:false
  val ret = target.find().first
  if (ret != null)
    println("contains new field:"+ret.containsKey("added")+", contains overwritten field:"+ret.containsKey("overwritten"))
  else
    println("no documents found in target")
}

Any insight into what I am missing would be helpful. I have tried changing the output format to MongoUpdateWritable, but that had no effect on the behavior. I am aware this could be a configuration problem, but it looks like a bug in the mongo-hadoop adapter, since plain reads and writes of documents through its input and output formats and the MongoUpdateWritable class do succeed.
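
For reference, a minimal workaround sketch (not part of the original job) that issues the same upserts directly through the MongoDB Java driver inside foreachPartition, bypassing MongoOutputFormat; it reuses the localhost URI and the test.target collection from above and assumes the BSONObject values convert cleanly to Document:

//hypothetical workaround: do the upserts with the Java driver instead of MongoOutputFormat
import com.mongodb.client.model.UpdateOptions

documents.foreachPartition { partition =>
  //one client per partition, created on the executor so nothing gets serialized
  val client = new MongoClient("localhost", 27017)
  val targetColl = client.getDatabase("test").getCollection("target")
  try {
    partition.foreach { case (_, value) =>
      val doc = new Document(value.toMap.asInstanceOf[java.util.Map[String, Object]])
      doc.put("added", "this data will be added")
      //replaceOne with upsert(true) inserts when nothing matches the _id, replaces otherwise
      targetColl.replaceOne(
        new Document("_id", value.get("_id").toString),
        doc,
        new UpdateOptions().upsert(true))
    }
  } finally {
    client.close()
  }
}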

The POM, for convenience:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>test</groupId>
    <artifactId>spark_mongo_upsert_test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <spark.version>1.6.0-cdh5.7.0</spark.version>
        <mongo.version>3.2.0</mongo.version>
        <mongo.hadoop.version>2.0.2</mongo.hadoop.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.mongodb.mongo-hadoop</groupId>
            <artifactId>mongo-hadoop-core</artifactId>
            <version>2.0.2</version>
        </dependency>
        <dependency>
            <groupId>org.mongodb.</groupId>
            <artifactId>mongo-java-driver</artifactId>
            <version>${mongo.version}</version>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <!-- Plugin to compile Scala code -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.1</version>
            </plugin>
        </plugins>
    </build>

</project>

Saving Datasets that contain an _id field to MongoDB will replace and upsert any existing documents.
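
That behavior belongs to the newer MongoDB Spark Connector (mongo-spark-connector) rather than mongo-hadoop. A minimal sketch of that approach, assuming the connector is on the classpath and reusing the test.source and test.target collections from the question (the spark.mongodb.*.uri settings are the connector's configuration keys):

import com.mongodb.spark.MongoSpark
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object connectorUpsertSketch extends App {
  val conf = new SparkConf()
    .setAppName("upsert via mongo-spark-connector")
    .setMaster("local")
    .set("spark.mongodb.input.uri", "mongodb://localhost:27017/test.source")
    .set("spark.mongodb.output.uri", "mongodb://localhost:27017/test.target")
  val sc = new SparkContext(conf)
  val sqlContext = new SQLContext(sc)

  //load the source collection as a DataFrame; _id comes along as a column
  val df = MongoSpark.load(sqlContext)

  //rows that carry an _id are replaced/upserted in test.target rather than inserted blindly
  MongoSpark.save(df)
}

Because every row keeps its _id, running this job against an already-populated target collection replaces the matching documents instead of failing on duplicate keys, which is the upsert behavior the mongo-hadoop job above was trying to get.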
