hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "UsingLzoCompression" by TedYu
Date Tue, 03 Aug 2010 01:11:34 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "UsingLzoCompression" page has been changed by TedYu.
http://wiki.apache.org/hadoop/UsingLzoCompression?action=diff&rev1=21&rev2=22

--------------------------------------------------

  
  This distro doesn't contain all bug fixes (such as when the LZO header or block header data falls on a read boundary).
  
- Please get latest from http://github.com/kevinweil/hadoop-lzo
+ Please get the latest distro with all fixes from http://github.com/kevinweil/hadoop-lzo
  
  == Why compression? ==
  When compression is enabled, the store file (HFile) applies a compression algorithm to blocks as they are written (during flushes and compactions); those blocks must then be decompressed when they are read.
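  
  As an illustration of the above (not part of the wiki diff itself), here is a minimal sketch of enabling LZO on an HBase column family through the 0.20-era Java client API; the table and column family names are hypothetical, and exact class names may differ between HBase versions:
  
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.client.HBaseAdmin;
  import org.apache.hadoop.hbase.io.hfile.Compression;
  
  public class CreateLzoTable {
    public static void main(String[] args) throws Exception {
      // Picks up hbase-site.xml from the classpath.
      HBaseConfiguration conf = new HBaseConfiguration();
  
      // Hypothetical table and column family names, for illustration only.
      HTableDescriptor table = new HTableDescriptor("mytable");
      HColumnDescriptor family = new HColumnDescriptor("mycf");
  
      // HFile blocks for this family are written with the LZO codec
      // (compressed during flushes and compactions, decompressed on read).
      family.setCompressionType(Compression.Algorithm.LZO);
      table.addFamily(family);
  
      new HBaseAdmin(conf).createTable(table);
    }
  }
  
  The same effect can be achieved from the HBase shell by specifying COMPRESSION => 'LZO' on the column family, provided the hadoop-lzo jar and native libraries are on HBase's library path.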
