
Eclipse - Run on Hadoop does not prompt anything

I'm trying to build the simple WordCount Hadoop project from the Yahoo tutorial ( https://developer.yahoo.com/hadoop/tutorial/module3.html#running ), but when I click "Run on Hadoop" there is no action at all. In fact, nothing is displayed in the console.

Here is my project structure -

(screenshot: Eclipse project structure)

Here is my WordCount job file:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCount {

    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.addResource(new Path("/HADOOP_HOME/conf/hadoop-default.xml"));
        config.addResource(new Path("/HADOOP_HOME/conf/hadoop-site.xml"));
        JobClient client = new JobClient();
        JobConf conf = new JobConf(WordCount.class);

        // specify output types
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // specify input and output dirs
        FileInputFormat.addInputPath(conf, new Path("input"));
        FileOutputFormat.setOutputPath(conf, new Path("output"));

        // specify a mapper
        conf.setMapperClass(WordCountMapper.class);

        // specify a reducer
        conf.setReducerClass(WordCountReducer.class);
        conf.setCombinerClass(WordCountReducer.class);

        client.setConf(conf);
        try {
            JobClient.runJob(conf);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

I think the problem is with the jar files you used for the Hadoop client-to-server connection. What happens is that it gets stuck on the following line while trying to find the server:

Configuration config = new Configuration();

Try to debug it and let us know if you face any more problems.
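As a quick debugging check (a minimal sketch, not part of the original answer; the class name and file path are placeholders), you can print which filesystem the Configuration actually resolves, to see whether it picked up your cluster settings or silently fell back to the local defaults:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class PrintDefaultFs {
    public static void main(String[] args) {
        Configuration config = new Configuration();
        // Same resource the question loads; adjust the path to your installation.
        config.addResource(new Path("/HADOOP_HOME/conf/hadoop-site.xml"));
        // Hadoop 1.x calls this property fs.default.name; newer releases use fs.defaultFS.
        // If this prints file:/// (the built-in default), the client never sees your cluster.
        System.out.println("fs.default.name = " + config.get("fs.default.name"));
    }
}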

If not, try the following.

Have you tried running the program in Eclipse by pointing it at your core-site.xml and hdfs-site.xml?

Configuration.addResource(new Path("path-to-your-core-site.xml file"));
Configuration.addResource(new Path("path-to-your-hdfs-site.xml file"));

and

FileInputFormat.addInputPath(conf, new Path("hdfs path to your input file"));
FileOutputFormat.setOutputPath(conf, new Path("hdfs path to your output directory"));

See whether it works and get back to us.
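Putting those two suggestions together, a minimal sketch of the driver might look like the following. The config file locations and the hdfs://localhost:9000 address are assumptions to be replaced with your own cluster settings; WordCountMapper and WordCountReducer are the classes already present in the question's project:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WordCountDriver {

    public static void main(String[] args) throws Exception {
        // Load the cluster configuration explicitly so the client does not
        // silently fall back to the local defaults bundled with the Hadoop jars.
        Configuration config = new Configuration();
        config.addResource(new Path("/path/to/core-site.xml")); // assumption: adjust to your install
        config.addResource(new Path("/path/to/hdfs-site.xml")); // assumption: adjust to your install

        // JobConf(Configuration, Class) copies the loaded settings into the job.
        JobConf conf = new JobConf(config, WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(WordCountMapper.class);
        conf.setCombinerClass(WordCountReducer.class);
        conf.setReducerClass(WordCountReducer.class);

        // Fully qualified HDFS paths; hdfs://localhost:9000 is an assumption.
        FileInputFormat.addInputPath(conf, new Path("hdfs://localhost:9000/user/input"));
        FileOutputFormat.setOutputPath(conf, new Path("hdfs://localhost:9000/user/output"));

        JobClient.runJob(conf);
    }
}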

Try this:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;


public class WordCount  {

    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {

            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            System.out.println(line);
            while (tokenizer.hasMoreTokens()) {
                value.set(tokenizer.nextToken());
                output.collect(value, new IntWritable(1));
            }

        }
    }

    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }

            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {

        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("WordCount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path("/home/user17/test.txt"));
        FileOutputFormat.setOutputPath(conf, new Path("hdfs://localhost:9000/out2"));

        JobClient.runJob(conf);

    }
}

I had the exact same problem, and I have just figured it out.

  1. Add program arguments in the run configuration.
  2. Right-click WordCount.java > Run As > Run Configurations > Java Application > Word Count > Arguments.
  3. Enter the input and output URIs, e.g. hdfs://hadoop:9000/ hdfs://hadoop:9000/
  4. Apply, then run again.

After running, refresh the project and the result is in the output folder.
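For those arguments to have any effect, main() has to read them instead of hard-coding the paths. Here is a minimal sketch of such a main(), intended as a drop-in replacement for the one in the WordCount class from the answer above (the hdfs://hadoop:9000 URIs in the comment are just the example values from the steps):

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: WordCount <input path> <output path>");
            System.exit(2);
        }

        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("WordCount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        // e.g. args[0] = hdfs://hadoop:9000/input and args[1] = hdfs://hadoop:9000/out2
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }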
