
Write data from Hadoop MapReduce job into MySQL

I've been parsing log files using MapReduce, but it always writes its results to a text file named "part-00000", which I then have to import into MySQL manually.

Is there an easy way to store MapReduce results directly in MySQL? For example, how might I store the results of the classic "Word Count" MapReduce program in MySQL directly?

I'm using Hadoop 1.2.1 and the mapred libraries (i.e., org.apache.hadoop.mapred.* rather than org.apache.hadoop.mapreduce.*; as far as I'm aware, the two are not compatible). I don't have access to Sqoop.

By using DBOutputFormat, you can write MapReduce output directly to a database.

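To make this concrete, here is a minimal sketch of word count with the old mapred API, writing results to MySQL via DBOutputFormat. It assumes a table named word_counts (created as, say, `CREATE TABLE word_counts (word VARCHAR(255), count INT)`), a placeholder JDBC URL and credentials, and that the MySQL Connector/J jar is available on the job's classpath; adjust all of those to your environment.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBOutputFormat;
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class WordCountToMySQL {

  // One row in the (hypothetical) word_counts(word, count) table.
  // Must implement Writable (for the shuffle) and DBWritable (for JDBC).
  public static class WordCountRecord implements Writable, DBWritable {
    String word;
    int count;

    public WordCountRecord() {}
    public WordCountRecord(String word, int count) { this.word = word; this.count = count; }

    public void write(DataOutput out) throws IOException {
      Text.writeString(out, word);
      out.writeInt(count);
    }
    public void readFields(DataInput in) throws IOException {
      word = Text.readString(in);
      count = in.readInt();
    }
    // Called by DBOutputFormat to fill one parameterized INSERT statement.
    public void write(PreparedStatement stmt) throws SQLException {
      stmt.setString(1, word);
      stmt.setInt(2, count);
    }
    public void readFields(ResultSet rs) throws SQLException {
      word = rs.getString(1);
      count = rs.getInt(2);
    }
  }

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> out, Reporter reporter)
        throws IOException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          out.collect(word, ONE);
        }
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, WordCountRecord, NullWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<WordCountRecord, NullWritable> out, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) sum += values.next().get();
      // DBOutputFormat writes the key; the value is ignored.
      out.collect(new WordCountRecord(key.toString(), sum), NullWritable.get());
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCountToMySQL.class);
    conf.setJobName("wordcount-to-mysql");

    // Placeholder JDBC settings; change to match your database.
    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost:3306/mydb", "user", "password");

    conf.setInputFormat(TextInputFormat.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));

    // Route output to the word_counts table, columns (word, count).
    DBOutputFormat.setOutput(conf, "word_counts", "word", "count");

    conf.setMapperClass(Map.class);
    conf.setReducerClass(Reduce.class);

    // Map output types differ from the job's final output types,
    // so they must be set explicitly.
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(IntWritable.class);
    conf.setOutputKeyClass(WordCountRecord.class);
    conf.setOutputValueClass(NullWritable.class);

    JobClient.runJob(conf);
  }
}
```

Note that no combiner is set: the reducer's output types differ from the map output types, so the standard trick of reusing the reducer as a combiner does not type-check here.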

Personally, I suggest Sqoop for data imports (from a database to HDFS) and exports (from HDFS to a database).

