
Multiple $group in MongoDB Aggregation with Java

Here is my query:

db.product.aggregate([
    { "$match": { "categoryID": 4 } },
    { "$group": {
        "_id": {
            "productID": "$productID",
            "articleID": "$articleID",
            "colour": "$colour",
            "set&size": { "sku": "$skuID", "size": "$size" }
        }
    }},
    { "$group": {
        "_id": { "productID": "$_id.productID", "colour": "$_id.colour" },
        "size": { "$addToSet": { "sku": "$_id.set&size.sku", "size": "$_id.set&size.size" } }
    }},
    { "$project": {
        "_id": 0,
        "productID": "$_id.productID",
        "colour": "$_id.colour",
        "size": "$size"
    }}
]);

By executing this query on the mongo shell I get the expected output.

Output:

{
    "_id": {
        "productID": "PRD1523",
        "colour": "GREEN"
    },
    "size": [
        { "sku": "ALT50095", "size": "S" },
        { "sku": "ALT50096", "size": "XL" }
    ]
}
{
    "_id": {
        "productID": "PRD1523",
        "colour": "RED"
    },
    "size": [
        { "sku": "ALT50094", "size": "M" },
        { "sku": "ALT50093", "size": "S" }
    ]
}

But with my Java code it throws an exception.

Here is the Java code for the above query:

DBCollection table = mongoTemplate.getCollection(collection_name);

BasicDBObject matchTopics = new BasicDBObject();
matchTopics.put("categoryID", 4);

DBObject groupSameIdEntities = new BasicDBObject("_id", new BasicDBObject("productID", "$productID")
        .append("articleID", "$articleID").append("colour", "$colour")
        .append("set&size", new BasicDBObject("sku", "$skuID").append("size", "$size")));

DBObject secondGroup = new BasicDBObject("_id", new BasicDBObject("colour", "$_id.colour")
        .append("productID", "$_id.productID")
        .append("size", new BasicDBObject("$addToSet",
                new BasicDBObject("sku", "$_id.set&size.sku").append("size", "$_id.set&size.size"))));

AggregationOutput output = table.aggregate(new BasicDBObject("$match", matchTopics),
        new BasicDBObject("$group", groupSameIdEntities), new BasicDBObject("$group", secondGroup));

Exception:

HTTP Status 500 - Request processing failed; nested exception is com.mongodb.CommandFailureException: { "serverUsed" : "127.0.0.1:27017" , "errmsg" : "exception: invalid operator '$addToSet'" , "code" : 15999 , "ok" : 0.0}

I can't figure out how to solve this error.

It is usually best to define your complete aggregation pipeline separately from the invoking method, following the same structural and indentation conventions as the JSON samples you will find and have used here.

That way it becomes much easier to see where you deviate from the structure:

List<DBObject> pipeline = Arrays.<DBObject>asList(
    new BasicDBObject("$match",new BasicDBObject("categoryID", 4)),
    new BasicDBObject("$group",
        new BasicDBObject("_id",
            new BasicDBObject("productID","$productID")
                .append("articleID", "$articleID")
                .append("colour", "$colour")
                .append("size",
                    new BasicDBObject("sku","$skuID")
                        .append("size","$size")
                )
        )
    ),
    new BasicDBObject("$group",
        new BasicDBObject("_id",
            new BasicDBObject("productID","$_id.productID")
                .append("articleID", "$_id.articleID")
                .append("colour", "$_id.colour")
        )
        .append("size",new BasicDBObject("$push","$_id.size")
    ),
    new BasicDBObject("$project",
        new BasicDBObject("_id",0)
        .append("productID","$_id.productID")
        .append("colour","$_id.colour")
        .append("size",1)
    )
);

Also note some of the simplified naming here, and the use of $push rather than $addToSet. The latter is because the unique values have already been determined by including them in the first $group stage, so an $addToSet would add nothing of value here and would in fact discard any inherent order in the results coming from an earlier stage, or any order you applied deliberately.

By the same token, you can of course shorten this to a single $group, since $addToSet performs its own "distinct" operation:

List<DBObject> pipeline = Arrays.<DBObject>asList(
    new BasicDBObject("$match",new BasicDBObject("categoryID", 4)),
    new BasicDBObject("$group",
        new BasicDBObject("_id",
            new BasicDBObject("productID","$productID")
                .append("articleID", "$articleID")
                .append("colour", "$colour")
        )
        .append("size",new BasicDBObject("$addToSet",
            new BasicDBObject("sku","$skuID")
                .append("size","$size")
        )
        )
    )
);
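
Either pipeline list can then be handed to the same DBCollection used in the question. A minimal sketch, assuming a driver version (2.12 or later) whose DBCollection.aggregate accepts a List<DBObject>; on older drivers the varargs form from the question applies:

// Run the pipeline built above against the collection from the question (sketch).
AggregationOutput output = table.aggregate(pipeline);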

I would also recommend removing that last $project, as it essentially has to pass through all results and alter every document present. That just adds processing which would generally be better handled on the client.
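
Rather than reshaping with $project, here is a minimal sketch of doing the equivalent on the client, using the output obtained above; the grouped keys come back under "_id" and the "size" array as a list:

for (DBObject doc : output.results()) {
    DBObject id = (DBObject) doc.get("_id");          // grouped keys sit under _id
    String productID = (String) id.get("productID");
    String colour = (String) id.get("colour");
    List<?> sizes = (List<?>) doc.get("size");         // the accumulated { sku, size } documents

    System.out.println(productID + " / " + colour + " -> " + sizes);
}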

Generally speaking, the fewer aggregation pipeline stages the better, and unless something significant is happening, another software layer is probably better off handling it than the database server.
