hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "UsingLzoCompression" by DougMeil
Date Fri, 17 Jun 2011 19:27:49 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "UsingLzoCompression" page has been changed by DougMeil:
http://wiki.apache.org/hadoop/UsingLzoCompression?action=diff&rev1=24&rev2=25

Comment:
Per stack, changing the repo to Todd's version of LZO

  
  This distro doesn't contain all bug fixes (such as when the LZO header or block header data falls on a read boundary).
  
- Please get the latest distro with all fixes from http://github.com/kevinweil/hadoop-lzo
+ Please get the latest distro with all fixes from https://github.com/toddlipcon/hadoop-lzo
  
  == Why compression? ==
  When compression is enabled, the store file (HFile) applies a compression algorithm to blocks as they are written (during flushes and compactions), and those blocks must therefore be decompressed when they are read.
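  
  The snippet below is a minimal sketch, not part of the wiki page itself, showing how LZO could be selected for a column family through the HBase Java client API (roughly the 0.90-era classes HBaseAdmin, HTableDescriptor and HColumnDescriptor); the table and family names are placeholders, and the LZO codec must already be installed on every region server.
  
  {{{
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.hfile.Compression;

public class CreateLzoTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // "mytable" and "cf" are placeholder names for this sketch.
    HTableDescriptor table = new HTableDescriptor("mytable");
    HColumnDescriptor family = new HColumnDescriptor("cf");

    // Blocks of this family's HFiles will be LZO-compressed as they are
    // written during flushes and compactions, and decompressed on read.
    family.setCompressionType(Compression.Algorithm.LZO);
    table.addFamily(family);

    admin.createTable(table);
  }
}
  }}}
  
  The HBase shell offers an equivalent by passing COMPRESSION => 'LZO' in the column family specification when creating the table.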
