
Null Pointer Error while doing “alter table table_name drop partition(part_column < value)” on Hive CLI

I have a Spark job (Scala) that writes time-series data to Hadoop, and an external Hive table is defined on top of that data.

The table is partitioned by multiple columns, and one of the partition columns (circle) has spaces in its values (e.g. "Punjab and Rajasthan").
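For reference, the write side of such a job might look roughly like the sketch below. The partition columns and base path are taken from the directory layout in the error message further down; the source table, file format, and column types are assumptions. With a layout like this, a circle value such as "Punjab and Rajasthan" ends up verbatim as a directory name containing spaces.

import org.apache.spark.sql.SparkSession

object WriteTimeSeries {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("write-time-series")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source of the time-series records.
    val df = spark.table("staging_time_series")

    // Partition columns match the directory layout seen in the error message.
    df.write
      .mode("append")
      .partitionBy("version", "set_name", "creation_time", "compaction_flag", "si_lob", "circle")
      .parquet("/user/optimus/rohit/hive_dump/c360")
  }
}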

Within the Spark job, when I try to run

sparksession.sql(
  s"""
  alter table table_name
  drop if exists partition (creation_time < $latestcreationtime)
  """
)

I get an Illegal Character exception from the Hive metastore; the stack trace is attached below. I get the same error in the Hive CLI, so it looks as if Hive is unable to handle the spaces in the partition path.

User class threw exception: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Illegal character in path at index 131: /user/optimus/rohit/hive_dump/c360/version=v1.28/set_name=d_si/creation_time=1610994976/compaction_flag=U/si_lob=DTH/circle=Andaman and Nicobar Islands);
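For context, a self-contained version of the failing call might look like the sketch below (the table name, the cutoff value, and the creation_time type are assumptions; if the partition column were a string, the literal would need quotes). The "Illegal character in path" text is what java.net.URI reports for a raw space, which suggests the failure happens when the metastore tries to turn the location of a partition matched by the comparison, such as circle=Andaman and Nicobar Islands, into a URI.

import org.apache.spark.sql.SparkSession

object DropOldPartitions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("drop-old-partitions")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical cutoff; in the real job it would be computed from the data.
    val latestcreationtime = 1610994976L

    // The comparison makes the metastore enumerate every matching partition,
    // including those whose circle value contains spaces.
    spark.sql(
      s"""ALTER TABLE table_name
         |DROP IF EXISTS PARTITION (creation_time < $latestcreationtime)""".stripMargin)
  }
}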

1. Try dropping and recreating the table (a sketch of this follows below).
2. Your data might have some issue; what are the underlying data type and table type?
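A minimal sketch of suggestion 1 for an external table is shown below. Dropping an EXTERNAL table only removes the metastore entry and leaves the files on HDFS in place; the data columns, their types, and the storage format in the DDL are assumptions, while the partition columns and location come from the path in the error message. Whether MSCK REPAIR TABLE can then re-register the directories that contain spaces is exactly what would need to be verified.

import org.apache.spark.sql.SparkSession

object RecreateExternalTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("recreate-external-table")
      .enableHiveSupport()
      .getOrCreate()

    // Removes only the metastore entry for an EXTERNAL table; data stays on HDFS.
    spark.sql("DROP TABLE IF EXISTS table_name")

    // Hypothetical DDL: data columns, types, and format are assumptions;
    // partition columns and location are taken from the error path.
    spark.sql(
      """CREATE EXTERNAL TABLE table_name (
        |  metric_name STRING,
        |  metric_value DOUBLE
        |)
        |PARTITIONED BY (
        |  version STRING,
        |  set_name STRING,
        |  creation_time BIGINT,
        |  compaction_flag STRING,
        |  si_lob STRING,
        |  circle STRING
        |)
        |STORED AS PARQUET
        |LOCATION '/user/optimus/rohit/hive_dump/c360'""".stripMargin)

    // Ask the metastore to re-register the existing partition directories.
    spark.sql("MSCK REPAIR TABLE table_name")
  }
}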
