
Hadoop in Cloudera path VMware

Hi, I have implemented my average word count in Java on the Cloudera VM 4.2.1. I converted it to a JAR file and ran the command: hadoop jar averagewordlength.jar stubs.AvgWordLength shakespeare wordleng

Next: the job ran correctly on the shakespeare input, but I am unable to run it on my own file (which I created: newfile). It throws an exception:

Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://0.0.0.0:8020/user/training/newfile
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1064)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1081)
    at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:993)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:946)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java)

Please guide me on which path to place newfile in so that I can check my solution.

It seems your Hadoop configuration is incorrect.

hdfs://0.0.0.0 is not a valid address.
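As a quick check, you can ask the client which NameNode address it is actually configured with. A sketch (available on Hadoop 2.x-based distributions; on CDH 4 the property may still be the older name fs.default.name rather than fs.defaultFS):

```shell
# Print the NameNode address the HDFS client will use
hdfs getconf -confKey fs.defaultFS

# On CDH 4, the deprecated property name may be the one that is set
hdfs getconf -confKey fs.default.name
```

If this prints hdfs://0.0.0.0:8020, edit fs.default.name / fs.defaultFS in core-site.xml to point at the VM's real NameNode host.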

Cloudera VM 4.2.1? Try downloading the newer CDH 5.x VM.

I got it working with the command

hadoop fs -put localpath
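To expand on that: the job reads its input from HDFS, not from the local Linux filesystem, so the file has to be uploaded first. A minimal sketch of the full sequence (the file and directory names below are examples, adjust them to your own):

```shell
# Create an input directory in your HDFS home (/user/training on the Cloudera VM)
hadoop fs -mkdir newfile

# Copy the local file into that HDFS directory
hadoop fs -put /home/training/newfile.txt newfile/

# Verify it arrived
hadoop fs -ls newfile

# Re-run the job against the HDFS path; the output directory must not already exist
hadoop jar averagewordlength.jar stubs.AvgWordLength newfile newfile_output
```

The exception in the question is exactly what FileInputFormat throws when the first job argument names an HDFS path that has not been created yet.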

