
Working with non-English characters in columns of Spark Scala DataFrames

Here is part of a file I am trying to load into a dataframe:

alphabet|Sentence|Comment1

è|Small e|None

Ü|Capital U|None

ã|Small a|

Ç|Capital C|None

When I load this file into a DataFrame, all the non-English characters get converted into boxes. I tried setting option("encoding","UTF-8"), but there is no change.

val nonEnglishDF = spark.read.format("com.databricks.spark.csv").option("delimiter","|").option("header",true).option("encoding","UTF-8").load("<hdfs file path>")

Please let me know if there is any solution for this. I ultimately need to save the file with no change to the non-English characters. Currently, when the file is saved, it puts boxes or question marks in place of the non-English characters.

Use the decode function on that column:

import org.apache.spark.sql.functions.{col, decode}

decode(col("column_name"), "US-ASCII")

// It should work with one of these charsets: 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'
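For example, applied to the DataFrame above (a minimal sketch reusing the import above: the column name alphabet comes from the sample header, and the charset should be whichever one actually matches the source file):

// Hypothetical: fix the alphabet column in place; pick the charset
// that matches how the file was actually written.
val fixedDF = nonEnglishDF.withColumn("alphabet", decode(col("alphabet"), "ISO-8859-1"))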

It works with option("encoding","ISO-8859-1"), e.g.:

val nonEnglishDF = spark.read.format("com.databricks.spark.csv").option("delimiter","|").option("header",true).option("encoding","ISO-8859-1").load("<hdfs file path>")
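To save the file with the characters intact, the CSV writer accepts the same encoding option (a sketch, assuming a recent Spark version; the output path is a placeholder):

// Write the DataFrame back out with the same charset so the
// non-English characters survive the round trip.
nonEnglishDF.write.option("delimiter","|").option("header",true).option("encoding","ISO-8859-1").csv("<hdfs output path>")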
