
What's the delimiter equivalent of ^G when reading a CSV with Spark?

So, I really need help with a stupid thing, but apparently I can't manage to do it by myself.

I have a set of rows in a file with this format (reading with less on OSX):

XXXXXXXX^GT^XXXXXXXX^G\N^G0^GDL^G\N^G2018-09-14 13:57:00.0^G2018-09-16 00:00:00.0^GCompleted^G\N^G\N^G1^G2018-09-16 21:41:02.267^G1^G2018-09-16 21:41:02.267^GXXXXXXX^G\N
YYYYYYYY^GS^XXXXXXXX^G\N^G0^GDL^G\N^G2018-08-29 00:00:00.0^G2018-08-29 23:00:00.0^GCompleted^G\N^G\N^G1^G2018-09-16 21:41:03.797^G1^G2018-09-16 21:41:03.81^GXXXXXXX^G\N

So the delimiter is the BEL character, and I'm loading the CSV this way:

val df = sqlContext.read.format("csv")
  .option("header", "false")
  .option("inferSchema", "true")
  .option("delimiter", "\u2407")
  .option("nullValue", "\\N")
  .load("part0000")

But when I read it, every line comes back as a single column:

XXXXXXXXCXXXXXXXX\N0DL\N2018-09-15 00:00:00.02018-09-16 00:00:00.0Completed\N\N12018-09-16 21:41:03.25712018-09-16 21:41:03.263XXXXXXXX\N
XXXXXXXXSXXXXXXXX\N0DL\N2018-09-15 00:00:00.02018-09-15 23:00:00.0Completed\N\N12018-09-16 21:41:03.3712018-09-16 21:41:03.373XXXXXXXX\N

It seems there is an unknown character (you see nothing only because I formatted it here on Stack Overflow) in place of ^G.
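A detail worth checking first: \u2407 is the code point of ␇, the printable "SYMBOL FOR BEL" glyph, not the BEL control character itself, which is \u0007. A minimal sketch of the same read using the actual control character (assuming the file layout shown above):

// Sketch: pass the real BEL control character (U+0007) as the delimiter,
// not the printable "SYMBOL FOR BEL" glyph (U+2407)
val df = sqlContext.read.format("csv")
  .option("header", "false")
  .option("inferSchema", "true")
  .option("delimiter", "\u0007")
  .option("nullValue", "\\N")
  .load("part0000")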

UPDATE: could it be a limitation of Spark with Scala? If I run the code in Scala this way:

val df = sqlContext.read.format("csv")
  .option("header", "false")
  .option("inferSchema", "true")
  .option("delimiter", "\\a")
  .option("nullValue", "\\N")
  .load("part-m-00000")

display(df)

I get a big fat

java.lang.IllegalArgumentException: Unsupported special character for delimiter: \a

whereas if I run it with Python:

df = sqlContext.read.format('csv').options(header='false', inferSchema='true', delimiter = "\a", nullValue = '\\N').load('part-m-00000')

display(df)

everything is fine!
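The difference comes down to which layer interprets the escape. Python resolves \a inside its own string literal to the single BEL character (\x07) before Spark ever sees it, so the parser receives a one-character delimiter. Scala has no \a escape, so "\\a" reaches Spark as two literal characters, a backslash and an a, which the option parser rejects. A quick illustration in Scala:

println("\\a".length)    // 2: a backslash followed by the letter 'a'
println("\u0007".length) // 1: the actual BEL control character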

It looks like a limitation in these Spark versions on the Scala side. Here are the supported delimiters for CSV in the code:

apache/spark/sql/catalyst/csv/CSVOptions.scala

val delimiter = CSVExprUtils.toChar(
    parameters.getOrElse("sep", parameters.getOrElse("delimiter", ",")))
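Note from this snippet that "sep" takes precedence over "delimiter" when both are set; either option name funnels the same string into CSVExprUtils.toChar, shown next.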

--- CSVExprUtils.toChar

apache/spark/sql/catalyst/csv/CSVExprUtils.scala

  def toChar(str: String): Char = {
    (str: Seq[Char]) match {
      case Seq() => throw new IllegalArgumentException("Delimiter cannot be empty string")
      case Seq('\\') => throw new IllegalArgumentException("Single backslash is prohibited." +
        " It has special meaning as beginning of an escape sequence." +
        " To get the backslash character, pass a string with two backslashes as the delimiter.")
      case Seq(c) => c
      case Seq('\\', 't') => '\t'
      case Seq('\\', 'r') => '\r'
      case Seq('\\', 'b') => '\b'
      case Seq('\\', 'f') => '\f'
      // In case user changes quote char and uses \" as delimiter in options
      case Seq('\\', '\"') => '\"'
      case Seq('\\', '\'') => '\''
      case Seq('\\', '\\') => '\\'
      case _ if str == """\u0000""" => '\u0000'
      case Seq('\\', _) =>
        throw new IllegalArgumentException(s"Unsupported special character for delimiter: $str")
      case _ =>
        throw new IllegalArgumentException(s"Delimiter cannot be more than one character: $str")
    }
  }
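To trace the failure, here is a trimmed, self-contained copy of the match above (renamed toCharLocal here, since CSVExprUtils sits in Spark's internal catalyst package), run against the inputs from this question:

// Trimmed local copy of the match above; the full version also handles
// \r, \b, \f, quote characters, and \u0000
def toCharLocal(str: String): Char = (str: Seq[Char]) match {
  case Seq(c) => c            // any single character, e.g. the real BEL "\u0007"
  case Seq('\\', 't') => '\t' // one of the recognized two-character escapes
  case Seq('\\', _) =>
    throw new IllegalArgumentException(s"Unsupported special character for delimiter: $str")
  case _ =>
    throw new IllegalArgumentException(s"Delimiter cannot be more than one character: $str")
}

toCharLocal("\u0007") // OK: matches Seq(c), the single-character case
toCharLocal("\\t")    // OK: matches the recognized escape
toCharLocal("\\a")    // throws: Unsupported special character for delimiter: \a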
