
Installing Snappy on an HDP Cluster

I have an HBase cluster built with Hortonworks Data Platform (HDP) 2.6.1. Now I need to apply Snappy compression to HBase tables.

Without installing Snappy myself, I executed the CompressionTest and it succeeded. I used the commands below.

hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy

hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://hbase.primary.namenode:8020/tmp/test1.txt snappy

I got the response below for both commands.

2017-10-30 11:25:18,454 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2017-10-30 11:25:18,671 INFO  [main] compress.CodecPool: Got brand-new compressor [.snappy]
2017-10-30 11:25:18,679 INFO  [main] compress.CodecPool: Got brand-new compressor [.snappy]
2017-10-30 11:25:21,560 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2017-10-30 11:25:22,366 INFO  [main] compress.CodecPool: Got brand-new decompressor [.snappy]
SUCCESS

I also see the libraries below in the path /usr/hdp/2.6.1.0-129/hadoop/lib/native/.

libhadoop.a 
libhadooppipes.a 
libhadoop.so 
libhadoop.so.1.0.0 
libhadooputils.a 
libhdfs.a 
libsnappy.so 
libsnappy.so.1 
libsnappy.so.1.1.4
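Besides listing the files, you can ask Hadoop itself whether it can load the native Snappy library with `hadoop checknative` (the library path shown in the comment is from this HDP 2.6.1 layout; yours may differ):

```shell
# Report which native codecs this Hadoop build can actually load.
# On a working setup, the output includes a line like:
#   snappy: true /usr/hdp/2.6.1.0-129/hadoop/lib/native/libsnappy.so.1
hadoop checknative -a
```

If the `snappy` line reports `false`, the .so files exist but are not on Hadoop's native library path.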

Does HDP support snappy compression by default?

If so, can I compress HBase tables without installing Snappy?

Without installing Snappy, I executed the Compression Test and I got a success output.

Ambari installed Snappy during cluster installation, so yes, those commands work.

Does HDP support snappy compression by default?

Yes, the HDP-UTILS repository provides the snappy libraries.

Can I compress the HBase tables without installing Snappy?

HBase also provides other compression algorithms (such as GZ and LZ4), so yes.
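Since the libraries are already in place, enabling Snappy is just a table-schema change. A minimal sketch from the HBase shell, assuming an existing table `my_table` with column family `cf` (both placeholder names):

```shell
# Apply Snappy compression to a column family of an existing table.
# 'my_table' and 'cf' are placeholders for your table and family names.
hbase shell <<'EOF'
disable 'my_table'
alter 'my_table', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
enable 'my_table'
EOF
```

Note that compression applies to newly written HFiles; running a major compaction (`major_compact 'my_table'` in the shell) rewrites existing HFiles with the new compression setting.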
