
How to remove extra escape characters from a text column in a Spark DataFrame

My data in the JSON looks like -

{"text": "\"I have recently taken out a 12 month mobile phone contract with Virgin but despite two calls to customer help I still am getting a message on my phone indicating \\\"No Service\\\" although intermittently I do get connected.\"", "created_at": "\"2018-08-27 16:58:30\"", "service_id": "51870", "category_id": "249"}

I read this JSON using -

val complaintsSourceRaw = spark.read.json("file:///complaints.jsonl")

When I read the data into the dataframe, it looks like

|249        |"2018-08-27 16:58:30"|51870     |"I have recently taken out a 12 month mobile phone contract with Virgin but despite two calls to customer help I still am getting a message on my phone indicating **\"No Service\"** although intermittently I do get connected."  

The problem is that

 **\"No Service\"**  need to be like  **"No Service"** 
             

What I tried -

complaintsSourceRaw.withColumn("text_cleaned", functions.regexp_replace(complaintsSourceRaw.col("text"), "\", ""));

But the \ character breaks my " and the code. Any idea how to achieve this?

You need to escape the "\" character, so in your regexp_replace you should look for two backslash ("\\") characters instead of one.
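
For example, a minimal sketch in Scala, assuming the same complaintsSourceRaw DataFrame as above (note that regexp_replace takes a Java regex, so a literal backslash is written as "\\\\" in source code: the string literal reduces it to \\, which the regex engine reads as one escaped backslash):

import org.apache.spark.sql.functions.regexp_replace

// Strip literal backslashes from the "text" column.
// "\\\\" in source -> the two-character string \\ -> regex matches one literal \
val cleaned = complaintsSourceRaw.withColumn(
  "text_cleaned",
  regexp_replace(complaintsSourceRaw.col("text"), "\\\\", "")
)

cleaned.select("text_cleaned").show(false)

With this, \"No Service\" in the original text column should come out as "No Service" in text_cleaned.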


