
Javac cannot find symbol for job class

My source file is:

package hadoop; 

import java.util.*; 

import java.io.IOException; 

import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.conf.*; 
import org.apache.hadoop.io.*; 
import org.apache.hadoop.mapred.*; 
import org.apache.hadoop.util.*; 
import javax.lang.model.util.Elements;
public class ProcessUnits 
{ 
   //Mapper class 
   public static class E_EMapper extends MapReduceBase implements 
   Mapper<LongWritable ,/*Input key Type */ 
   Text,                /*Input value Type*/ 
   Text,                /*Output key Type*/ 
   IntWritable>        /*Output value Type*/ 
   { 

      //Map function 
      public void map(LongWritable key, Text value, 
      OutputCollector<Text, IntWritable> output,   
      Reporter reporter) throws IOException 
      { 
         String line = value.toString(); 
         String lasttoken = null; 
         StringTokenizer s = new StringTokenizer(line,"\t"); 
         String year = s.nextToken(); 

         while(s.hasMoreTokens())
            {
               lasttoken=s.nextToken();
            } 

         int avgprice = Integer.parseInt(lasttoken); 
         output.collect(new Text(year), new IntWritable(avgprice)); 
      } 
   } 


   //Reducer class 
   public static class E_EReduce extends MapReduceBase implements 
   Reducer< Text, IntWritable, Text, IntWritable > 
   {  

      //Reduce function 
      public void reduce( Text key, Iterator <IntWritable> values, 
         OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException 
         { 
            int maxavg=30; 
            int val=Integer.MIN_VALUE; 

            while (values.hasNext()) 
            { 
               if((val=values.next().get())>maxavg) 
               { 
                  output.collect(key, new IntWritable(val)); 
               } 
            } 

         } 
   }  


   //Main function 
   public static void main(String args[])throws Exception 
   { 
      JobConf conf = new JobConf(Eleunits.class); 

      conf.setJobName("max_eletricityunits"); 
      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(IntWritable.class); 
      conf.setMapperClass(E_EMapper.class); 
      conf.setCombinerClass(E_EReduce.class); 
      conf.setReducerClass(E_EReduce.class); 
      conf.setInputFormat(TextInputFormat.class); 
      conf.setOutputFormat(TextOutputFormat.class); 

      FileInputFormat.setInputPaths(conf, new Path(args[0])); 
      FileOutputFormat.setOutputPath(conf, new Path(args[1])); 

      JobClient.runJob(conf); 
   } 
} 

When I compile with:

javac -classpath /home/javier/entrada/hadoop-core-1.2.1.jar -d /home/javier/units /home/javier/entrada/ProcessUnits.java

I get the following error:

javac -classpath /home/javier/entrada/hadoop-core-1.2.1.jar -d /home/javier/units /home/javier/entrada/ProcessUnits.java
/home/javier/entrada/ProcessUnits.java:72: error: cannot find symbol
      JobConf conf = new JobConf(Eleunits.class); 
                                 ^
  symbol:   class Eleunits
  location: class ProcessUnits
1 error

My Hadoop version is 2.9.2 and my Java version is 1.8.0_191.

When I open it in Eclipse and look at it, I can't find where the import for Eleunits.class would come from.

"My Hadoop version is 2.9.2 and my Java version is 1.8.0_191."

First of all, hadoop-core-1.2.1.jar was built long before Hadoop 2.9.2 was even a thought, so you will need a newer JAR.
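One way to avoid hunting for the matching JAR by hand (assuming the Hadoop 2.9.2 `hadoop` launcher is on your PATH, and reusing the asker's paths) is to let the installation report its own classpath:

```shell
# `hadoop classpath` prints the classpath of the installed Hadoop distribution,
# so javac compiles against the same library versions the cluster runs.
# Paths below are the asker's; adjust to your own layout.
javac -classpath "$(hadoop classpath)" \
      -d /home/javier/units \
      /home/javier/entrada/ProcessUnits.java
```

This keeps the compile-time and run-time Hadoop versions in sync, which a hard-coded hadoop-core-1.2.1.jar cannot.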

"When I open it in Eclipse and look at it, I can't find where the import for Eleunits.class would come from."

It's not clear why you weren't using Eclipse all along! Even not using Maven or Gradle to pull the correct library versions for Hadoop scares me... But Eclipse may not be lying. You've only shown one class, and it isn't called Eleunits, so unless you copied Eleunits from somewhere else, I'm not sure how you ended up with that value.

Plus, the main class should extends Configured implements Tool, as you will find in other MapReduce examples.
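As a sketch only: here is how the driver from the question could look rewritten with the Tool interface, assuming the E_EMapper/E_EReduce inner classes stay exactly as posted. The key change for the compile error is that JobConf receives the enclosing class, ProcessUnits, rather than the nonexistent Eleunits. (This still needs the Hadoop JARs on the classpath, so it is not compilable stand-alone.)

```java
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ProcessUnits extends Configured implements Tool {

   // E_EMapper and E_EReduce inner classes unchanged from the question...

   @Override
   public int run(String[] args) throws Exception {
      // Pass the enclosing class itself -- not Eleunits, which does not exist here.
      JobConf conf = new JobConf(getConf(), ProcessUnits.class);

      conf.setJobName("max_eletricityunits");
      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(IntWritable.class);
      conf.setMapperClass(E_EMapper.class);
      conf.setCombinerClass(E_EReduce.class);
      conf.setReducerClass(E_EReduce.class);
      conf.setInputFormat(TextInputFormat.class);
      conf.setOutputFormat(TextOutputFormat.class);

      FileInputFormat.setInputPaths(conf, new Path(args[0]));
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));

      JobClient.runJob(conf);
      return 0;
   }

   public static void main(String[] args) throws Exception {
      // ToolRunner parses generic options (-D, -files, etc.) before run() is called.
      System.exit(ToolRunner.run(new ProcessUnits(), args));
   }
}
```

The Tool/ToolRunner pattern also means standard flags like -D mapred.job.queue.name=... are handled for free instead of being silently ignored.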
