
How to initialize instance variables in setup()?

This should be a simple question, but I'm struggling with it. Everything in my code works except for initializing the parameter values "train_rows" and "cols", which are read from the configuration.

I set up logging to display the values of "train_rows" and "cols" in the setup() method, and there the values are correct. However, when I try the same thing in the map() method, both values show as 0. What am I doing wrong?
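For reference, here is a minimal sketch of the driver side that would make these values visible in the mapper (the driver class, the paths taken from args, and the concrete numbers are hypothetical; the property keys "rows" and "columns" match the conf.getInt() calls in the code below). One detail worth knowing: Job.getInstance(conf) takes a copy of the Configuration, so any values set after the Job is created never reach the mappers:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class KNNDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // set BEFORE Job.getInstance(conf): the Job copies the Configuration,
        // so values set afterwards are not seen by setup()/map()
        conf.setInt("rows", 1000);    // hypothetical training-set row count
        conf.setInt("columns", 785);  // hypothetical column count

        Job job = Job.getInstance(conf, "knn");
        job.setJarByClass(KNNDriver.class);
        job.setMapperClass(KNNMapper.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}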

import java.io.File;
import java.io.IOException;
import java.io.FileNotFoundException;
import java.util.Scanner;
import org.apache.log4j.Logger;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class KNNMapper extends Mapper<LongWritable, Text, IntWritable, IntWritable> {
  private static final Logger sLogger = Logger.getLogger(KNNMapper.class);
  private int[][] train_vals;
  private int[] train_label_vals;
  private int train_rows;
  private int test_rows;
  private int cols;

  @Override
  public void setup(Context context) throws IOException, InterruptedException {
      Configuration conf = context.getConfiguration();

      train_rows = conf.getInt("rows", -1);
      cols = conf.getInt("columns", -1);

      //just changed this
      //int[][] train_vals = new int[train_rows][cols];
      //int[] train_label_vals = new int[train_rows];

      train_vals = new int[train_rows][cols];
      train_label_vals = new int[train_rows];

      // read train csv, parse, and store into 2d int array
      Scanner myScan;
        try {
            File trainfile = new File("train_sample.csv");
            if (!trainfile.exists()) {
                throw new IllegalArgumentException("train file didn't load");
            }
            myScan = new Scanner(trainfile);

            //Set the delimiter used in file
            myScan.useDelimiter("[,\r\n]+");

            //Get all tokens and store them in some data structure
            //I am just printing them

            for(int row = 0; row < train_rows; row++) {
                for(int col = 0; col < cols; col++) {
                    train_vals[row][col] = Integer.parseInt(myScan.next());
                }
            }

            myScan.close();

        } catch (FileNotFoundException e) {
            System.out.print("Error: Train file execution did not work.");
        }

    // read train_labels csv, parse, and store into 2d int array
        try {
            File trainlabels = new File("train_labels.csv");
            if (!trainlabels.exists()) {
                throw new IllegalArgumentException("train labels didn't load");
            }

            myScan = new Scanner(trainlabels);

            //Set the delimiter used in file
            myScan.useDelimiter("[,\r\n]+");

            //Get all tokens and store them in some data structure
            //I am just printing them

            for(int row = 0; row < train_rows; row++) {
                    train_label_vals[row] = Integer.parseInt(myScan.next());
                    if(row < 10) {
                        System.out.println(train_label_vals[row]);
                    }
            }

            myScan.close();

        } catch (FileNotFoundException e) {
            System.out.print("Error: Train Labels file not found.");
        }
  }

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {

        // setup() gave us train_vals & train_label_vals.
        // Each line in map() represents a test observation.  We iterate
        // through every train_val row to find the nearest L2 match, then
        // emit a key/value pair of <observation #, best matching digit>.

        // convert from Text to String

        System.out.println("I'm in the map!");
        String line = value.toString();
        double distance;
        double best_distance = Double.POSITIVE_INFINITY;

        int best_digit = -1;
        IntWritable rowId;
        int i;
        String[] pixels;

        System.out.println("Number of train rows:" + train_rows);
        System.out.println("Number of columns:" + cols);
        // comma delimited files, split on commas
        // first we find the # of rows

        pixels = line.split(",");
        rowId = new IntWritable(Integer.parseInt(pixels[0]));
        System.out.println("working on row " + rowId);
        best_distance = Double.POSITIVE_INFINITY;

        for (i = 0; i < train_rows; i++) {
            distance = 0.0;

            for (int j = 1; j < cols; j++) {
                // note: '^' is bitwise XOR in Java, not exponentiation
                int diff = Integer.parseInt(pixels[j]) - train_vals[i][j-1];
                distance += diff * diff;
            }

            if (distance < best_distance) {
                best_distance = distance;
                best_digit = train_label_vals[i];
            }
        }
        System.out.println("And we're out of the loop baby yeah!");
        context.write(rowId, new IntWritable(best_digit));
        System.out.println("Mapper done!");
  }
}

I'm suspicious of one thing here, assuming you want to scan a file that lives in HDFS.

You used:

import java.io.File;

File trainfile = new File("train_sample.csv");

In Hadoop, this is how we check for a file in HDFS:

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

try {
    FileSystem fs = FileSystem.get(context.getConfiguration());

    if (fs.exists(new Path("/user/username/path/of/file/inhdfs"))) {
        System.out.println("File exists");
    }
} catch (IOException e) {
    e.printStackTrace();
}
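Following on from that, here is a minimal sketch of how setup() could actually read the training file through the HDFS FileSystem API instead of java.io.File (the path is the same placeholder as above; train_rows, cols, and train_vals are the mapper fields from the question):

import java.util.Scanner;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// inside KNNMapper.setup(Context context):
FileSystem fs = FileSystem.get(context.getConfiguration());
Path trainPath = new Path("/user/username/path/of/file/inhdfs");  // placeholder path

if (!fs.exists(trainPath)) {
    throw new IllegalArgumentException("train file didn't load");
}

// fs.open returns an FSDataInputStream, which Scanner can consume directly
try (FSDataInputStream in = fs.open(trainPath);
     Scanner myScan = new Scanner(in)) {
    myScan.useDelimiter("[,\r\n]+");
    for (int row = 0; row < train_rows; row++) {
        for (int col = 0; col < cols; col++) {
            train_vals[row][col] = Integer.parseInt(myScan.next());
        }
    }
}

Alternatively, small side files like this are commonly shipped to the mappers via the distributed cache (job.addCacheFile in the driver), which makes them available as local files on each node.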
