accumulo-user mailing list archives

From Jeff Kubina <>
Subject Re: How does Accumulo process r-files for bulk ingesting?
Date Wed, 07 Oct 2015 13:50:54 GMT
So if HDFS has a replication factor of m and an r-file has a range that
intersects n tablets, then data locality will never be achieved for at least
max(0, n-m) of those tablets; that is, the file will not be on the same node
as their tablet servers until a compaction rewrites it, correct?

Jeff Kubina
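
The arithmetic behind the question above can be sketched as follows. This is an illustrative lower bound, not anything from Accumulo itself: with HDFS replication factor m, an r-file's blocks live on at most m datanodes, so at most m of the n intersecting tablets can be served from a node holding a local copy.

```python
def min_non_local_tablets(n_tablets: int, replication: int) -> int:
    """Lower bound on the number of tablets that cannot read the
    r-file from a local replica before compaction rewrites it."""
    # At most `replication` tablet servers can coincide with a datanode
    # that holds a block replica of the file.
    return max(0, n_tablets - replication)

# e.g. a file spanning 10 tablets with replication factor 3:
# at least 7 tablets read the file remotely.
print(min_non_local_tablets(10, 3))
```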

On Wed, Oct 7, 2015 at 9:35 AM, Josh Elser <> wrote:

> On Oct 7, 2015 8:47 AM, "Jeff Kubina" <> wrote:
> >
> > How does Accumulo process an r-file for bulk ingesting when the key
> range of an r-file is within one tablet's key range and when the key range
> of an r-file spans two or more tablets?
> >
> > If the r-file is within one tablet's range I thought the file was "just
> renamed" and added to the tablet's list of r-files. Is that correct?
> Bingo
> > If the key range of the r-file spans two or more tablets, is the r-file
> partitioned into separate r-files for each appropriate tablet, or are the
> records "batch-written" to each appropriate tablet in memory?
> They're logically partitioned if memory serves (the files are not
> rewritten). So you would see multiple entries in the metadata table for a
> single file with certain offsets. No replaying of mutations by batch
> writers.
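
The logical partitioning Josh describes can be sketched roughly as below. This is a simplified illustration, not Accumulo's actual code: tablets are modeled by sorted split points, and a bulk-imported file covering a key range is assigned, without being rewritten, to every tablet whose range it intersects, producing one metadata-style entry per tablet that all reference the same file.

```python
import bisect

def assign_file(splits, first_key, last_key, filename):
    """Return one (tablet_end_row, filename) entry per tablet whose
    range intersects the file's [first_key, last_key] key range.

    Tablet i holds keys in (splits[i-1], splits[i]]; the tablet after
    the last split holds (splits[-1], +inf), modeled as end row None.
    """
    lo = bisect.bisect_left(splits, first_key)
    hi = bisect.bisect_left(splits, last_key)
    entries = []
    for i in range(lo, hi + 1):
        end_row = splits[i] if i < len(splits) else None  # None = +inf
        entries.append((end_row, filename))  # same file, multiple tablets
    return entries

# A file spanning "c".."p" over splits g, m, t lands in three tablets,
# yet only one copy of the file exists; each entry just points at it.
print(assign_file(["g", "m", "t"], "c", "p", "I0001.rf"))
```

A file whose range falls inside a single tablet yields exactly one entry, which matches the "just renamed and added to the tablet's file list" case in the thread.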
