
SqlServer Datatype to Hive Datatype using Spark Scala

Spark is used to fetch the schema of a table from a SQL Server database. I am facing issues while creating Hive tables with this schema because of datatype mismatches. How can we convert SQL Server datatypes to Hive datatypes in Spark Scala?

// Read the SQL Server table over JDBC and keep only its schema (a StructType).
val df = sqlContext.read.format("jdbc")
  .option("url", "jdbc:sqlserver://host:port;databaseName=DB")
  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
  .option("dbtable", "schema.tableName")
  .option("user", "Userid").option("password", "pswd")
  .load().schema
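For reference, the object returned by `.load().schema` is a Spark `StructType`, so its fields can be listed to see the column names and the Spark-side type names that any mapping has to handle. A minimal sketch, reusing the `df` value from above:

```scala
// Print each column with the Spark SQL type name the JDBC dialect assigned it.
df.fields.foreach { field =>
  println(s"${field.name}: ${field.dataType.typeName}")
}
```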

Thanks, I got the solution. I created a method to map the datatypes, as shown below.

// Map a source type name (lowercased) to the closest Hive type;
// anything unrecognized falls back to string.
def sqlToHiveDatatypeMapping(inputDatatype: String): String = inputDatatype match {
  case "numeric" => "int"
  case "bit" => "smallint"
  case "long" => "bigint"
  case "dec_float" => "double"
  case "money" => "double" 
  case "smallmoney" => "double"  
  case "real" => "double"
  case "char" => "string" 
  case "nchar" => "string"  
  case "varchar" => "string"
  case "nvarchar" => "string"
  case "text" => "string"
  case "ntext" => "string"
  case "binary" => "binary"
  case "varbinary" => "binary"
  case "image" => "binary"
  case "date" => "date"
  case "datetime" => "timestamp"
  case "datetime2" => "timestamp"
  case "smalldatetime" => "timestamp"
  case "datetimeoffset" => "timestamp"
  case "timestamp" => "timestamp"
  case "time" => "timestamp"
  case "clob" => "string"
  case "blob" => "binary"
  case _ => "string"
}
// Build a comma-separated "name type" column list for use in a Hive DDL statement.
val columns = df.fields
  .map(field => s"${field.name.toLowerCase} ${sqlToHiveDatatypeMapping(field.dataType.typeName.toLowerCase)}")
  .mkString(",")
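One way to use the generated column list is to issue the `CREATE TABLE` through the same context. A minimal sketch, assuming a Hive-enabled `sqlContext` and a hypothetical target table `mydb.my_table` (adjust both names to your environment):

```scala
// Hypothetical database and table names; the storage format is also an
// assumption -- pick whatever your Hive setup expects.
val createStmt =
  s"""CREATE TABLE IF NOT EXISTS mydb.my_table ($columns)
     |STORED AS PARQUET""".stripMargin

// Requires a HiveContext (or a Hive-enabled SparkSession) so the DDL
// runs against the Hive metastore.
sqlContext.sql(createStmt)
```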
