accumulo-user mailing list archives

From ameet kini <>
Subject Re: tablet distribution
Date Fri, 13 Jul 2012 21:08:13 GMT
Thanks, I was looking for something equivalent.

The use case is common enough that there may be some other way to do what I
want. I have a table that doesn't have its tablets distributed evenly. At
the system (instance?) level, they are evenly distributed, but not at this
particular table level.

The docs seem to suggest that pre-splitting a table would automatically
guarantee that the tablets would be distributed evenly across the nodes of
the cluster. I see that pre-splitting guarantees that you have that many
tablets, but not necessarily that they are evenly distributed. Maybe if the
instance had only one table, then it would be forced to distribute its
tablets evenly.


Accumulo will balance and distribute tables across servers. Before a table
gets large, it will be maintained as a single tablet on a single server.
This limits the speed at which data can be added or queried to the speed of
a single node. To improve performance when a table is new, or small,
you can add split points and generate new tablets.

In the shell:

root@myinstance> createtable newTable
root@myinstance> addsplits -t newTable g n t

This will create a new table with 4 tablets. The table will be split on the
letters "g", "n", and "t", which will work nicely if the row data start
with lower-case alphabetic characters. If your row data include binary
or numeric information, or if the distribution of the row
information is not flat, then you would pick different split points. Now
ingest and query can proceed on 4 nodes, which can improve performance.
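One common way to pick split points for skewed, numeric, or binary row keys
is to sample existing keys and take evenly spaced quantiles, so each tablet
ends up holding roughly the same number of rows. A minimal sketch of that
idea (illustrative only; the helper name and sample data are made up, and
this is not part of the Accumulo API):

```python
# Pick split points from a sample of row keys by taking evenly spaced
# quantiles. With a skewed key distribution, quantile-based splits give
# tablets of roughly equal row counts, unlike fixed alphabetic splits.

def pick_split_points(sampled_keys, num_tablets):
    """Return num_tablets - 1 split points drawn from sampled_keys."""
    keys = sorted(set(sampled_keys))
    if num_tablets < 2 or len(keys) < num_tablets:
        return []
    step = len(keys) / num_tablets
    # Take the key at each internal quantile boundary.
    return [keys[int(round(i * step))] for i in range(1, num_tablets)]

# Example: skewed numeric keys stored as zero-padded strings.
sample = ["%08d" % (i * i) for i in range(1000)]
splits = pick_split_points(sample, 4)
print(splits)
```

The resulting strings could then be passed to the shell's addsplits command
(or written to a file for its -sf option) instead of hand-picked letters.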

On Fri, Jul 13, 2012 at 3:04 PM, Eric Newton <> wrote:

> Yes, you need to write your own tablet balancer.
> -Eric
> On Fri, Jul 13, 2012 at 2:48 PM, ameet kini <> wrote:
> >
> > Hi,
> >
> > Is there a way to force a tablet to move to a particular tablet server?
> >
> > Thanks,
> > Ameet
