hbase-user mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: TTL performance
Date Mon, 25 Jun 2012 17:10:25 GMT
On Mon, Jun 25, 2012 at 1:34 AM, Frédéric Fondement
<frederic.fondement@uha.fr> wrote:
> My question was actually: given a table with millions, billions or whatever
> number of rows, how fast is the TTL handling process ? How are rows scanned
> during major compaction ? Are they all scanned in order to know whether they
> should be removed from the filesystem (be it HDFS or whatever else) ? Or is
> there any optimization making sure it can fatly finds those parts to be
> deleted ?

All rows in the region(s) are processed during a major compaction. The
process is a streaming merge sort of existing HFiles into a new HFile.
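To illustrate the shape of that process, here is a minimal sketch (not HBase's actual code; the `Cell` record and `compact` method are hypothetical) of a streaming merge over several sorted "HFiles", dropping each cell whose age exceeds the TTL as it passes through. The point is that expired data is filtered out cell by cell during the merge, so every cell is visited once; there is no index of expired regions to jump to.

```java
import java.util.*;

// Sketch only: models a major compaction as a k-way streaming merge of
// sorted files, with TTL-expired cells dropped as they stream by.
public class CompactionSketch {
    // Hypothetical minimal cell: a row key and a write timestamp (ms).
    record Cell(String row, long ts) {}

    static List<Cell> compact(List<List<Cell>> hfiles, long ttl, long now) {
        Comparator<Cell> order = Comparator.comparing(Cell::row)
                                           .thenComparingLong(Cell::ts);
        // Each heap entry is a cursor {fileIndex, position}, ordered by the
        // cell it currently points at, so output comes out in sorted order.
        PriorityQueue<int[]> heap = new PriorityQueue<>(
            (a, b) -> order.compare(hfiles.get(a[0]).get(a[1]),
                                    hfiles.get(b[0]).get(b[1])));
        for (int i = 0; i < hfiles.size(); i++)
            if (!hfiles.get(i).isEmpty()) heap.add(new int[]{i, 0});

        List<Cell> merged = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] cur = heap.poll();
            Cell c = hfiles.get(cur[0]).get(cur[1]);
            // The TTL check happens here, per cell: expired cells are
            // simply not written to the new file.
            if (now - c.ts() <= ttl) merged.add(c);
            if (cur[1] + 1 < hfiles.get(cur[0]).size())
                heap.add(new int[]{cur[0], cur[1] + 1});
        }
        return merged;
    }

    public static void main(String[] args) {
        long now = 1_000_000L, ttl = 500L;
        List<List<Cell>> files = List.of(
            List.of(new Cell("a", 999_900L), new Cell("c", 999_000L)),
            List.of(new Cell("b", 999_800L)));
        // "c" is older than the TTL, so it is dropped during the merge.
        System.out.println(compact(files, ttl, now));
    }
}
```

Because the merge is streaming, the cost is linear in the total number of cells in the region's HFiles, regardless of how many of them are expired.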

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet
Hein (via Tom White)
