How to traverse/iterate a Dataset in Spark Java?
I am trying to traverse a Dataset to do some string similarity calculations such as Jaro-Winkler or cosine similarity. I currently convert my Dataset to a list of rows and then traverse it with a for statement, which is not an efficient Spark way to do it. So I am looking for a better approach in Spark.
import java.util.Arrays;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class Sample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("JavaTokenizerExample")
                .master("local[*]")
                .getOrCreate();

        List<Row> data = Arrays.asList(
                RowFactory.create("Mysore", "Mysuru"),
                RowFactory.create("Name", "FirstName"));

        StructType schema = new StructType(new StructField[] {
                new StructField("Word1", DataTypes.StringType, true, Metadata.empty()),
                new StructField("Word2", DataTypes.StringType, true, Metadata.empty()) });

        Dataset<Row> oldDF = spark.createDataFrame(data, schema);
        oldDF.show();

        // Collecting everything to the driver and looping is what I want to avoid:
        List<Row> rowsList = oldDF.collectAsList();
    }
}
I have found many JavaRDD examples which are not clear to me. An example for a Dataset would help me a lot.
You can use org.apache.spark.api.java.function.ForeachFunction as shown below.
oldDF.foreach((ForeachFunction<Row>) row -> System.out.println(row));
For older Java JDKs that do not support lambda expressions, you can use the following after importing:
import org.apache.spark.api.java.function.VoidFunction;
yourDataSet.toJavaRDD().foreach(new VoidFunction<Row>() {
    public void call(Row r) throws Exception {
        System.out.println(r.getAs("your column name here"));
    }
});
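The foreach examples above only print each row; to actually compute the Jaro-Winkler similarity mentioned in the question, you need a similarity function to call for each row. Below is a minimal plain-Java sketch of Jaro-Winkler (a hypothetical helper written for illustration, not a Spark or library API; in production you would more likely use a tested implementation such as Apache Commons Text's JaroWinklerSimilarity):

```java
// Minimal Jaro-Winkler sketch in plain Java (illustrative helper, not a library API).
public class StringSimilarity {

    /** Jaro similarity in [0, 1]. */
    static double jaro(String s1, String s2) {
        if (s1.equals(s2)) return 1.0;
        int len1 = s1.length(), len2 = s2.length();
        if (len1 == 0 || len2 == 0) return 0.0;

        // Characters count as matches when they agree within this window.
        int window = Math.max(Math.max(len1, len2) / 2 - 1, 0);
        boolean[] matched1 = new boolean[len1];
        boolean[] matched2 = new boolean[len2];

        int matches = 0;
        for (int i = 0; i < len1; i++) {
            int lo = Math.max(0, i - window);
            int hi = Math.min(len2 - 1, i + window);
            for (int j = lo; j <= hi; j++) {
                if (!matched2[j] && s1.charAt(i) == s2.charAt(j)) {
                    matched1[i] = true;
                    matched2[j] = true;
                    matches++;
                    break;
                }
            }
        }
        if (matches == 0) return 0.0;

        // Count half-transpositions among the matched characters.
        int transpositions = 0, k = 0;
        for (int i = 0; i < len1; i++) {
            if (!matched1[i]) continue;
            while (!matched2[k]) k++;
            if (s1.charAt(i) != s2.charAt(k)) transpositions++;
            k++;
        }
        double m = matches;
        return (m / len1 + m / len2 + (m - transpositions / 2.0) / m) / 3.0;
    }

    /** Jaro-Winkler: boosts the Jaro score for a common prefix (up to 4 chars). */
    public static double jaroWinkler(String s1, String s2) {
        double j = jaro(s1, s2);
        int prefix = 0;
        int limit = Math.min(4, Math.min(s1.length(), s2.length()));
        for (int i = 0; i < limit; i++) {
            if (s1.charAt(i) == s2.charAt(i)) prefix++;
            else break;
        }
        return j + prefix * 0.1 * (1.0 - j);
    }

    public static void main(String[] args) {
        System.out.println(jaroWinkler("Mysore", "Mysuru")); // ~0.8444
    }
}
```

A helper like this can be invoked inside the ForeachFunction (or inside a map with an Encoder) on each row's two columns, e.g. `jaroWinkler(row.getString(0), row.getString(1))`, so the work is distributed across executors instead of looping on the driver. For simple edit distance, note that Spark also ships a built-in column function, org.apache.spark.sql.functions.levenshtein, which avoids custom code entirely.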