Spark Dataset error with groupBy on Java POJO
I have a set of data that is not in any format Apache Spark can use directly. I created a class for this data, i.e.
public class TBHits {
    int status;
    int trkID;

    public TBHits(int trkID, int status) {
        this.status = status;
        this.trkID = trkID;
    }

    public int getStatus() {
        return status;
    }

    public void setStatus(int status) {
        this.status = status;
    }

    public int getTrkID() {
        return trkID;
    }

    public void setTrkID(int trkID) {
        this.trkID = trkID;
    }
}
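As an aside (not part of the original question): Spark's `Encoders.bean`, which appears in the accepted fix at the bottom of this post, generally expects a JavaBean with a public no-argument constructor in addition to the getter/setter pairs shown above. A minimal sketch of the same POJO in that shape, under the hypothetical name `TBHitsBean`:

```java
// Sketch only: the question's POJO reshaped to satisfy the usual
// JavaBean conventions (public no-arg constructor, getters/setters).
public class TBHitsBean implements java.io.Serializable {
    private int status;
    private int trkID;

    // No-arg constructor, typically required by bean-style encoders.
    public TBHitsBean() { }

    public TBHitsBean(int trkID, int status) {
        this.trkID = trkID;
        this.status = status;
    }

    public int getStatus() { return status; }
    public void setStatus(int status) { this.status = status; }
    public int getTrkID() { return trkID; }
    public void setTrkID(int trkID) { this.trkID = trkID; }
}
```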
In the script that processes the data, I create a List:
private List<TBHits> tbHitList = new ArrayList<TBHits>();
While processing the data, I create a TBHits object and add it to the list:
...
...
TBHits tbHits = new TBHits((bnkHits.getInt("trkID", i)), (bnkHits.getInt("status", i)));
tbHitList.add(tbHits);
...
After the processing I create the Dataset and do a basic show and a basic filter:
Dataset<Row> tbHitDf = spSession.createDataFrame(tbHitList, TBHits.class);
tbHitDf.show();
tbHitDf.filter(tbHitDf.col("trkID").gt(0)).show();
And all is OK:
+------+-----+
|status|trkID|
+------+-----+
| 1| 0|
| 1| 0|
...
...
+------+-----+
|status|trkID|
+------+-----+
| 1| 1|
| 1| 1|
| 1| 1|
...
...
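(Side note, not from the original question: the `filter` call above computes the same thing as a plain-Java stream filter over the `trkID` values. A self-contained sketch with illustrative sample values:)

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FilterSketch {
    public static void main(String[] args) {
        // Illustrative trkID column values, as in the first show() output.
        List<Integer> trkIDs = Arrays.asList(0, 0, 1, 1, 1);

        // Equivalent of col("trkID").gt(0): keep only rows with trkID > 0.
        List<Integer> kept = trkIDs.stream()
                .filter(id -> id > 0)
                .collect(Collectors.toList());

        System.out.println(kept); // [1, 1, 1]
    }
}
```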
But when I attempt to use groupBy and count:
tbHitDf.groupBy("trkID").count().show();
I get an incomprehensible error:
Exception in thread "main" java.lang.StackOverflowError
at java.io.ObjectStreamClass$WeakClassKey.<init>(ObjectStreamClass.java:2307)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:322)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
...
...
...
However, if I manually insert data:
TBHits tb1 = new TBHits(1, 1);
TBHits tb2 = new TBHits(1, 2);
tbHitList.add(tb1);
tbHitList.add(tb2);
then the groupBy function works properly. I cannot understand why.
For future users: the solution was to use an Encoder, i.e.
Encoder<TBHits> TBHitsEncoder = Encoders.bean(TBHits.class);
Dataset<TBHits> tbHitDf = spSession.createDataset(tbHitList, TBHitsEncoder);
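For anyone wondering what `groupBy("trkID").count()` actually computes: it is the same aggregation as a plain-Java `Collectors.groupingBy` with `counting()`. A self-contained sketch with illustrative values (the names and sample data here are not from the question):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class GroupByCountSketch {
    public static void main(String[] args) {
        // Illustrative trkID column values.
        List<Integer> trkIDs = Arrays.asList(1, 1, 1, 2, 2);

        // Equivalent of groupBy("trkID").count(): trkID -> number of rows.
        Map<Integer, Long> counts = trkIDs.stream()
                .collect(Collectors.groupingBy(Function.identity(),
                                               Collectors.counting()));

        System.out.println(counts); // {1=3, 2=2}
    }
}
```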