How to save data from hadoop to database using python
I am using hadoop to process an xml file, so I wrote a mapper file and a reducer file in python.
Suppose the input to process is test.xml:
<report>
 <report-name name="ALL_TIME_KEYWORDS_PERFORMANCE_REPORT"/>
 <date-range date="All Time"/>
 <table>
  <columns>
   <column name="campaignID" display="Campaign ID"/>
   <column name="adGroupID" display="Ad group ID"/>
  </columns>
  <row campaignID="79057390" adGroupID="3451305670"/>
  <row campaignID="79057390" adGroupID="3451305670"/>
 </table>
</report>
The mapper.py file:
import sys
import cStringIO
import xml.etree.ElementTree as xml

if __name__ == '__main__':
    buff = None
    intext = False
    for line in sys.stdin:
        line = line.strip()
        if line.find("<row") != -1:
            .............
            .............
            .............
            print '%s\t%s' % (campaignID, adGroupID)
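For illustration, one way the elided parsing could look, using ElementTree as the imports suggest (a sketch, not the actual code from the question; each <row .../> in the sample fits on a single line, so it can be parsed on its own):

#!/usr/bin/env python
# Sketch only: parse each self-contained <row .../> line and emit the two IDs.
import sys
import xml.etree.ElementTree as xml

for line in sys.stdin:
    line = line.strip()
    if line.find("<row") != -1:
        elem = xml.fromstring(line)            # parse the single-line element
        campaignID = elem.get('campaignID')    # attribute lookup by name
        adGroupID = elem.get('adGroupID')
        print '%s\t%s' % (campaignID, adGroupID)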
The reducer.py file:
import sys

if __name__ == '__main__':
    for line in sys.stdin:
        print line.strip()
I ran hadoop with the following command:
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar
-file /path/to/mapper.py -mapper /path/to/mapper.py
-file /path/to/reducer.py -reducer /path/to/reducer.py
-input /path/to/input_file/test.xml
-output /path/to/output_folder/to/store/file
When I run the above command, hadoop creates an output file at the output path, containing the required data in the format produced by the reducer.py file.
What I want to do now is save the data to a MySQL database instead of the text file that hadoop creates by default.
So I wrote some python code in the reducer.py file that writes the data directly to the MySQL database, and tried to run the command again with the output path removed, like this:
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar
-file /path/to/mapper.py -mapper /path/to/mapper.py
-file /path/to/reducer.py -reducer /path/to/reducer.py
-input /path/to/input_file/test.xml
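For illustration, a minimal sketch of the kind of direct-write reducer meant here (an assumption, not the actual code from the question; the database and table names are taken from the sqoop command later in the question, the connection details are placeholders, and the MySQLdb driver would have to be installed on every task node):

#!/usr/bin/env python
# Sketch only: reducer that inserts each mapper record straight into MySQL.
# Host/user/password are placeholders.
import sys
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='', db='Xml_Data')
cur = conn.cursor()
for line in sys.stdin:
    # the mapper emits campaignID<TAB>adGroupID
    campaignID, adGroupID = line.strip().split('\t')
    cur.execute('INSERT INTO PerformaceReport (campaignID, adGroupID) VALUES (%s, %s)',
                (campaignID, adGroupID))
conn.commit()
conn.close()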
The error I got is as follows:
12/11/08 15:20:49 ERROR streaming.StreamJob: Missing required option: output
Usage: $HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/hadoop-streaming.jar [options]
Options:
-input <path> DFS input file(s) for the Map step
-output <path> DFS output directory for the Reduce step
-mapper <cmd|JavaClassName> The streaming command to run
-combiner <cmd|JavaClassName> The streaming command to run
-reducer <cmd|JavaClassName> The streaming command to run
-file <file> File/dir to be shipped in the Job jar file
-inputformat TextInputFormat(default)|SequenceFileAsTextInputFormat|JavaClassName Optional.
-outputformat TextOutputFormat(default)|JavaClassName Optional.
.........................
.........................
So how can I save the output data to the MySQL database instead? Can anyone help me solve the above problem?
EDIT
The process I followed:
Created the above mapper and reducer files, which read the xml file and create a text file in a folder through the hadoop command.
For example, the path of the text file (the result of processing the xml file with the hadoop command) looks like this:
/home/local/user/Hadoop/xml_processing/xml_output/part-00000
The xml file here is 1.3 GB in size, and after processing with hadoop the text file created is 345 MB.
Now all I want to do is read the text file at the above path and save the data to the mysql database.
I tried this with basic python, but it takes 350 sec to process the text file and save it to the mysql database.
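As an aside, a plain-python load of this kind is usually dominated by per-row INSERTs; batching them with executemany tends to be much faster. A minimal sketch, assuming the MySQLdb driver and the table/column names from the sqoop command below:

#!/usr/bin/env python
# Sketch only: load the tab-separated part-00000 file into MySQL in batches.
# Connection details and column names are assumptions.
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='', db='Xml_Data')
cur = conn.cursor()
sql = 'INSERT INTO PerformaceReport (campaignID, adGroupID) VALUES (%s, %s)'

batch = []
with open('/home/local/user/Hadoop/xml_processing/xml_output/part-00000') as f:
    for line in f:
        batch.append(tuple(line.strip().split('\t')))
        if len(batch) >= 10000:      # flush every 10,000 rows
            cur.executemany(sql, batch)
            batch = []
if batch:
    cur.executemany(sql, batch)

conn.commit()
conn.close()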
Now, as nichole suggested, I downloaded sqoop and unzipped it at
/home/local/user/sqoop-1.4.2.bin__hadoop-0.20
then entered the bin folder, typed ./sqoop, and got the following error:
sh-4.2$ ./sqoop
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: $HADOOP_HOME is deprecated.
Try 'sqoop help' for usage.
I also tried the following:
./sqoop export --connect jdbc:mysql://localhost/Xml_Data --username root --table PerformaceReport --export-dir /home/local/user/Hadoop/xml_processing/xml_output/part-00000 --input-fields-terminated-by '\t'
The result:
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: $HADOOP_HOME is deprecated.
12/11/27 11:54:57 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/11/27 11:54:57 INFO tool.CodeGenTool: Beginning code generation
12/11/27 11:54:57 ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: Could not load db driver class: com.mysql.jdbc.Driver
java.lang.RuntimeException: Could not load db driver class: com.mysql.jdbc.Driver
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:636)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:525)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:548)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:191)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:175)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:262)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1235)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1060)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:82)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:64)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:97)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Is the above sqoop command useful for reading the text file and saving it to the database? We need to process the text file and insert its contents into the database!
I have written all my hadoop MR jobs in python, so let me just say that you don't need python to move the data. Use Sqoop: http://sqoop.apache.org/
Sqoop is an open source tool that lets users extract data from a relational database into Hadoop for further processing. It is very simple to use; all you need to do is follow the steps below.
Read this for more information: http://archive.cloudera.com/cdh/3/sqoop/SqoopUserGuide.html
The advantage of using sqoop is that we can now move our HDFS data into any kind of relational database (MySQL, Derby, Hive, etc.), and vice versa, with a single-line command.
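For example, pulling a table from MySQL back into HDFS is an import of the same one-line shape (the table name and target directory here are placeholders):

./sqoop import --connect jdbc:mysql://localhost/test --username root --table testtable --target-dir /user/imported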
For your use case, make the necessary changes:
mapper.py
#!/usr/bin/env python
import sys

for line in sys.stdin:
    line = line.strip()
    if line.find("<row") != -1:
        # each <row .../> line carries the IDs as the 2nd and 3rd
        # space-separated tokens; the value sits between the double quotes
        words = line.split(' ')
        campaignID = words[1].split('"')[1]
        adGroupID = words[2].split('"')[1]
        # ':' is the field separator that sqoop is told about below
        print "%s:%s:" % (campaignID, adGroupID)
The streaming command:
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar -file /path/to/mapper.py -mapper /path/to/mapper.py -file /path/to/reducer.py -reducer /path/to/reducer.py -input /user/input -output /user/output
MySQL:
create database test;
use test;
create table testtable (a varchar(100), b varchar(100));
sqoop
./sqoop export --connect jdbc:mysql://localhost/test --username root --table testtable --export-dir /user/output --input-fields-terminated-by ':'
(If sqoop fails with "Could not load db driver class: com.mysql.jdbc.Driver", as in the stack trace above, the MySQL JDBC driver JAR, mysql-connector-java, usually needs to be copied into sqoop's lib directory.)
Note:
The best place to write data out to a database is an OutputFormat. Reducer-level writes can be done, but they are not the best approach.
If you had written your mapper and reducer in Java, you could have used DBOutputFormat.
So you could write a custom OutputFormat that matches the reducer's (key, value) output format to sink the data into MySQL.
Read the tutorial on the Yahoo Developer Network on how to write a custom output format.