
Map-reduce Instantiation Exception

Hi, I have the following MapReduce code, with which I am trying to parse my XML file and produce a CSV as output.

import java.io.IOException;

import javax.xml.bind.JAXBException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class XMLParseMR {

  public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    public static String outputFile = null;
    private Text word = new Text();
    private JAXBC jax = new JAXBC();  // custom JAXB helper class

    @Override
    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String document = value.toString();
      System.out.println("XML : " + document);
      try {
        // unmarshal the XML chunk and append it to the CSV output
        ConnectHome ch = jax.convertJAXB(document);
        jax.convertCSV(ch, outputFile);
      } catch (JAXBException e) {
        e.printStackTrace();
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setMapperClass(Map.class);
    conf.set("xmlinput.start", "<ConnectHome>");
    conf.set("xmlinput.end", "</ConnectHome>");
    job.setInputFormatClass(XMLInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    Map.outputFile = args[1];
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.waitForCompletion(true);
  }
}

I have one more class, called Connect_Home, where I parse the data, extracting it using JAXB. But when I run the code, I get the following errors:

WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
INFO input.FileInputFormat: Total input paths to process : 1
INFO util.NativeCodeLoader: Loaded the native-hadoop library
WARN snappy.LoadSnappy: Snappy native library not loaded
INFO mapred.JobClient: Running job: job_201303121556_0011
INFO mapred.JobClient:  map 0% reduce 0%
INFO mapred.JobClient: Task Id : attempt_201303121556_0011_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: XMLParseMR$Map
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
        at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:217)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
        at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.lang.ClassNotFoundException: XMLParseMR$Map
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at sun.misc.Launcher$AppClassLoader.loadCl
INFO mapred.JobClient: Task Id : attempt_201303121556_0011_m_000000_1, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: XMLParseMR$Map
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
        at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:217)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:602)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
        at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.lang.ClassNotFoundException: XMLParseMR$Map
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.misc.Launcher$AppClassLoader.loadCl
INFO mapred.JobClient: Task Id : attempt_201303121556_0011_m_000000_2, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: XMLParseMR$Map
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1004)
        at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:217)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:602)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
        at org.apache.hadoop.mapred.Child.main(Child.java:260)
Caused by: java.lang.ClassNotFoundException: XMLParseMR$Map
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at sun.misc.Launcher$AppClassLoader.loadCl
INFO mapred.JobClient: Job complete: job_201303121556_0011
INFO mapred.JobClient: Counters: 7
INFO mapred.JobClient:   Job Counters
INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=20097
INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
INFO mapred.JobClient:     Launched map tasks=4
INFO mapred.JobClient:     Data-local map tasks=4
INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
INFO mapred.JobClient:     Failed map tasks=1

The error message:

WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).

tells you that the job is not being set up properly: no JAR containing your classes is attached to the job, so the task JVMs on the cluster cannot load them. Instead of setting the JAR file explicitly, you can let Hadoop determine it for you by calling setJarByClass() when setting up the job:

Job job = new Job(conf, "wordcount");
job.setJarByClass(XMLParseMR.class);

This sets the job's JAR based on the class you pass in, so Hadoop can ship your user classes to the task nodes. Afterwards the warning disappears and your job should run correctly.
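As a side note on reading the error: the $ in the missing class name XMLParseMR$Map is the JVM's binary-name separator for nested classes, which is why the ClassNotFoundException refers to your static inner Mapper class rather than the outer driver. A quick plain-Java illustration (no Hadoop required; NestedNameDemo is just a stand-in class for this sketch, not part of your code):

```java
// Shows how the JVM names a static nested class: the binary name
// joins the outer and inner class with '$' -- exactly the form
// that appears in the ClassNotFoundException stack traces above.
public class NestedNameDemo {

    // Mirrors the Mapper nested inside XMLParseMR in the question.
    public static class Map { }

    public static void main(String[] args) {
        // Binary name, as class loaders see it.
        System.out.println(Map.class.getName());       // NestedNameDemo$Map
        // Simple name, as you write it in source code.
        System.out.println(Map.class.getSimpleName()); // Map
    }
}
```

This is why the task JVMs complain about XMLParseMR$Map: the class exists in your source, but without a job JAR the remote class loader has nothing to load it from.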
