hadoop-common-issues mailing list archives

From "Michele Catasta (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5793) High speed compression algorithm like BMDiff
Date Mon, 07 Jun 2010 17:49:52 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12876332#action_12876332 ]

Michele Catasta commented on HADOOP-5793:
-----------------------------------------

@Luke: great to see you here - I was planning to bug you personally and ask whether I made any
blatant mistakes while integrating your library!

Regarding composition: as far as I understand, BMZ supports bm_pack/unpack + LZO out of the box -
so we're not crossing the JNI chasm, and copies and allocations should be as few as possible.
I think your point was that it would be nice to also support external libraries other than LZO.
+1 from me.
There are already a few other Hadoop issues about FastLZ and other codecs, so I'd say I'll wait
a while and see which one the community prefers.
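
To make the wiring a bit more concrete, here is a minimal sketch of how I picture enabling such a
codec in an old-API JobConf. The org.example.compress.BmzCodec class name is just a placeholder
for illustration, not the class shipped in the patch:

import org.apache.hadoop.mapred.JobConf;

public class EnableBmzCompression {
  public static void main(String[] args) {
    JobConf job = new JobConf();

    // Make the codec discoverable alongside the built-in ones.
    // "org.example.compress.BmzCodec" is a placeholder class name.
    job.set("io.compression.codecs",
        "org.apache.hadoop.io.compress.DefaultCodec,"
      + "org.apache.hadoop.io.compress.GzipCodec,"
      + "org.example.compress.BmzCodec");

    // Compress intermediate map output (the shuffle) with it.
    job.setBoolean("mapred.compress.map.output", true);
    job.set("mapred.map.output.compression.codec",
        "org.example.compress.BmzCodec");

    // Compress the final job output as well.
    job.setBoolean("mapred.output.compress", true);
    job.set("mapred.output.compression.codec",
        "org.example.compress.BmzCodec");

    // From the Java side this stays a single CompressionCodec: the
    // bm_pack/unpack pre-pass and the LZO stage compose inside the
    // native library, so no extra JNI round trips between the stages.
  }
}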

W.r.t. the code updates, are they available somewhere? If you like, I can integrate them.
Otherwise, feel free to fork my GitHub repo.

Stability-wise, I have compressed and shuffled around a few TBs on Hadoop, plus I've had a couple
of HBase tables using BMZ for a few weeks. So far, everything has worked smoothly with no hiccups.
I hope someone will test the lib soon and let us know.
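
If it helps whoever wants to give the lib a spin, this is roughly the round-trip smoke test I'd
start from. DefaultCodec is only a stand-in so the snippet compiles on its own; pass the BMZ
codec class name as the argument once the jar and the native library are in place:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionInputStream;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.util.ReflectionUtils;

public class CodecRoundTrip {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // DefaultCodec is only a stand-in; pass the BMZ codec class name
    // as args[0] to exercise the real thing.
    String codecClass = args.length > 0
        ? args[0] : "org.apache.hadoop.io.compress.DefaultCodec";
    CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(
        conf.getClassByName(codecClass), conf);

    // 1 MB payload whose second half repeats the first half - the kind of
    // long-range redundancy a BMDiff-style pre-pass targets (and a 32 KB
    // zlib window cannot see).
    byte[] original = new byte[1 << 20];
    new Random(42).nextBytes(original);
    System.arraycopy(original, 0, original, original.length / 2,
        original.length / 2);

    // Compress into memory.
    ByteArrayOutputStream compressedBytes = new ByteArrayOutputStream();
    CompressionOutputStream out = codec.createOutputStream(compressedBytes);
    out.write(original);
    out.close();

    // Decompress and compare with the original buffer.
    CompressionInputStream in = codec.createInputStream(
        new ByteArrayInputStream(compressedBytes.toByteArray()));
    ByteArrayOutputStream decompressed = new ByteArrayOutputStream();
    IOUtils.copyBytes(in, decompressed, 4096, true);

    System.out.println("compressed size: " + compressedBytes.size());
    System.out.println("round trip ok:   "
        + Arrays.equals(original, decompressed.toByteArray()));
  }
}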

> High speed compression algorithm like BMDiff
> --------------------------------------------
>
>                 Key: HADOOP-5793
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5793
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: elhoim gibor
>            Assignee: Michele Catasta
>            Priority: Minor
>
> Add a high speed compression algorithm like BMDiff.
> It gives speeds of ~100MB/s for writes and ~1000MB/s for reads, compressing 2.1 billion
> web pages from 45.1TB to 4.2TB.
> Reference:
> http://norfolk.cs.washington.edu/htbin-post/unrestricted/colloq/details.cgi?id=437
> 2005 Jeff Dean talk about Google architecture - around 46:00.
> http://feedblog.org/2008/10/12/google-bigtable-compression-zippy-and-bmdiff/
> http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=755678
> A reference implementation exists in HyperTable.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

