
How to save a file in Hadoop with Python

I am trying to save a file to Hadoop with Python 2.7. I searched on the internet and found some code that saves a file to Hadoop, but it uploads the entire folder (all the files in the folder get saved to Hadoop). I need to save only a specific file.

Here is the link for saving a folder to Hadoop: http://www.hadoopy.com/en/latest/tutorial.html#putting-data-on-hdfs

What I need now is to save a particular file in Hadoop, such as abc.txt.

Here is my code:

import hadoopy
hdfs_path = 'hdfs://192.168.x.xxx:xxxx/video/py5'
def main():
    local_path = open('abc.txt').read()
   hadoopy.writetb(hdfs_path, local_path)


if __name__ == '__main__':
    main()

Here I am getting the error: need more than one value to unpack

Any help would be appreciated.

hadoopy.writetb seems to expect an iterable of two-element tuples (key-value pairs) as its second argument. Try:

hadoopy.writetb(hdfs_path, [("abc.txt", open("abc.txt").read())])

http://www.hadoopy.com/en/latest/api.html?highlight=hadoopy.writetb#hadoopy.writetb

writetb requires its second argument to be kvs – an iterator of (key, value) pairs.
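
Putting it together, here is a minimal sketch of the corrected script, assuming the HDFS URL and file name from the question (adjust them for your cluster):

import hadoopy

hdfs_path = 'hdfs://192.168.x.xxx:xxxx/video/py5'

def main():
    # writetb expects an iterable of (key, value) tuples, so wrap the
    # single file as one pair: key = file name, value = file contents
    contents = open('abc.txt').read()
    hadoopy.writetb(hdfs_path, [('abc.txt', contents)])

if __name__ == '__main__':
    main()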

As per the link you have given, you forgot to copy the function read_local_dir into your code.
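
For reference, a sketch of that helper based on the tutorial (reconstructed, so details may differ): it yields a (path, contents) pair for every file in a local directory, which is exactly why the original code uploads the whole folder.

import os

def read_local_dir(local_path):
    # Yield a (file path, file contents) pair for each file in the
    # directory; feeding this generator to writetb copies the entire folder.
    for fn in os.listdir(local_path):
        path = os.path.join(local_path, fn)
        if os.path.isfile(path):
            yield path, open(path).read()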
