
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

I am new to Hadoop and am trying to run some join queries on Hive. I created two tables (table1 and table2). I executed a join query, but I get the following error message:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

However, when I run the same query from the Hive UI, it executes and I get the correct results. Can someone help explain what might be going wrong here?

I simply added the following before running my query, and it worked.

SET hive.auto.convert.join=false;

Just put this command before your query:

SET hive.auto.convert.join=false;

It absolutely works!
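
For context, hive.auto.convert.join controls whether Hive rewrites a shuffle join into a map-side join backed by a local in-memory task; when that local task cannot start or runs out of memory, it fails with exactly this MapredLocalTask return code 1. A minimal sketch of checking and changing the property for the current session only (both lines use standard Hive settings):

-- SET with no value echoes the current value of the property.
SET hive.auto.convert.join;

-- Disable the map-join rewrite for this session, forcing a regular
-- shuffle join that does not need the local task.
SET hive.auto.convert.join=false;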

I also ran into this issue on the Cloudera Quick Start VM 5.12 and solved it by executing the following statement at the hive prompt:

SET hive.auto.convert.join=false;

I hope the following information is more useful:

Step 1: Import all tables from the retail_db database in MySQL

sqoop import-all-tables \
--connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
--username retail_dba \
--password cloudera \
--num-mappers 1 \
--warehouse-dir /user/cloudera/sqoop/import-all-tables-text \
--as-textfile
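
An optional, hedged check: if the import succeeded, each table should have its own directory under the warehouse path above, and the Hive CLI can list them without leaving the prompt.

-- Run from the Hive prompt; dfs hands the rest of the line to HDFS.
dfs -ls /user/cloudera/sqoop/import-all-tables-text;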

Step 2: Create a database named retail_db in Hive, along with the required tables

create database retail_db;
use retail_db;

create external table categories(
  category_id int,
  category_department_id int,
  category_name string)
row format delimited 
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/categories';

create external table customers(
  customer_id int,
  customer_fname string,
  customer_lname string,
  customer_email string,
  customer_password string,
  customer_street string,
  customer_city string,
  customer_state string,
  customer_zipcode string)
row format delimited 
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/customers';

create external table departments(
  department_id int,
  department_name string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/departments';

create external table order_items(
  order_item_id int,
  order_item_order_id int,
  order_item_product_id int,
  order_item_quantity int,
  order_item_subtotal float,
  order_item_product_price float)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/order_items';

create external table orders(
  order_id int,
  order_date string,
  order_customer_id int,
  order_status string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/orders';

create external table products(
  product_id int,
  product_category_id int,
  product_name string,
  product_description string,
  product_price float,
  product_image string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/products';
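
As a quick sanity check (not part of the original walkthrough), confirm that the external tables actually see the imported files before running the join:

-- Counts should be non-zero if the Sqoop import landed where the
-- LOCATION clauses point.
show tables;
select count(*) from orders;
select count(*) from order_items;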

Step 3: Execute the JOIN query

SET hive.cli.print.current.db=true;

select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date 
limit 10;

The above query failed with the following error:

Query ID = cloudera_20171029182323_6eedd682-256b-466c-b2e5-58ea100715fb
Total jobs = 1
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

Step 4: Resolve the above issue by executing the following statement at the Hive prompt:

SET hive.auto.convert.join=false;

Step 5: Result of the query

select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date 
limit 10;

Query ID = cloudera_20171029182525_cfc70553-89d2-4c61-8a14-4bbeecadb3cf
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1509278183296_0005, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0005/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1509278183296_0005
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2017-10-29 18:25:19,861 Stage-1 map = 0%,  reduce = 0%
2017-10-29 18:25:26,181 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 2.72 sec
2017-10-29 18:25:27,240 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.42 sec
2017-10-29 18:25:32,479 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 8.01 sec
MapReduce Total cumulative CPU time: 8 seconds 10 msec
Ended Job = job_1509278183296_0005
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1509278183296_0006, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0006/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1509278183296_0006
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2017-10-29 18:25:38,676 Stage-2 map = 0%,  reduce = 0%
2017-10-29 18:25:43,925 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.85 sec
2017-10-29 18:25:49,142 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.13 sec
MapReduce Total cumulative CPU time: 2 seconds 130 msec
Ended Job = job_1509278183296_0006
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 8.01 sec   HDFS Read: 8422614 HDFS Write: 17364 SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 2.13 sec   HDFS Read: 22571 HDFS Write: 407 SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 140 msec
OK
2013-07-25 00:00:00.0   68153.83132743835
2013-07-26 00:00:00.0   136520.17266082764
2013-07-27 00:00:00.0   101074.34193611145
2013-07-28 00:00:00.0   87123.08192253113
2013-07-29 00:00:00.0   137287.09244918823
2013-07-30 00:00:00.0   102745.62186431885
2013-07-31 00:00:00.0   131878.06256484985
2013-08-01 00:00:00.0   129001.62241744995
2013-08-02 00:00:00.0   109347.00200462341
2013-08-03 00:00:00.0   95266.89186286926
Time taken: 35.721 seconds, Fetched: 10 row(s)
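
As an optional verification, EXPLAIN prints the plan without running the query; with hive.auto.convert.join=false the plan should show a plain Join Operator rather than a Map Join Operator:

-- Inspect the join strategy Hive will use for this query.
explain
select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date;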

Try setting the AuthMech parameter on the connection.

I set it to 2 and defined the username.

That solved my problem on cta.
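
For reference, a connection URL for the Cloudera Hive JDBC driver might look like the line below; the host, port, and user are placeholders, and AuthMech=2 selects user-name authentication in that driver:

jdbc:hive2://quickstart.cloudera:10000;AuthMech=2;UID=cloudera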

Regards, Okan

In my case, passing a configuration parameter to execute resolved the problem. The problem was caused by a write-access violation; you should use configuration to make sure you have write access.

In my case, the problem was that no queue had been set, so I did the following:

SET mapred.job.queue.name=Queue-Name;

That solved my problem. Hope this helps someone.
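
A hedged aside: on YARN-era Hadoop the same knob is spelled mapreduce.job.queuename, so depending on the cluster version either property applies (etl below is a hypothetical queue name):

SET mapred.job.queue.name=etl;    -- older MRv1-style property name
SET mapreduce.job.queuename=etl;  -- MRv2/YARN property name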

While facing the same problem through the Hue interface, the fix below worked for me: create a /user/admin directory in HDFS and change its ownership with the following commands:

[root@ip-10-0-0-163 ~]# su - hdfs

[hdfs@ip-10-0-0-163 ~]$ hadoop fs -mkdir /user/admin

[hdfs@ip-10-0-0-163 ~]$ hadoop fs -chown admin /user/admin

[hdfs@ip-10-0-0-163 ~]$ exit
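
An optional follow-up, assuming the same cluster: the new directory and its owner can also be checked from the Hive prompt.

-- Should list /user/admin owned by admin after the chown above.
dfs -ls /user;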
