From: John Vines
Date: Mon, 16 Sep 2013 17:01:10 -0400
Subject: Re: Rebalance table over all tablet servers
To: Accumulo Dev List

Wait, are you trying to balance your tablets over your tablet servers, or
your data over your tablets? If the former, my aforementioned suggestion
should help. If the latter, it's more complicated.

When you create a table, it defaults to one tablet, or more if you supply
split points. As you ingest, once a tablet's size on disk exceeds the split
threshold, it divides in half as evenly as it can. It will not divide
mid-row, though. So if you have a few giant tablets and a lot of tiny
tablets, it is most likely due to high-cardinality rows. The only real way
to deal with that is to change your key format and reingest.
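For illustration only, here is a minimal sketch (not part of the original
reply) of supplying split points and adjusting the split threshold through
the Accumulo Java client API. The instance, ZooKeeper host, credentials,
table name, and split values below are all placeholders:

  import java.util.SortedSet;
  import java.util.TreeSet;

  import org.apache.accumulo.core.client.Connector;
  import org.apache.accumulo.core.client.ZooKeeperInstance;
  import org.apache.accumulo.core.client.security.tokens.PasswordToken;
  import org.apache.hadoop.io.Text;

  public class PreSplitExample {
    public static void main(String[] args) throws Exception {
      // Placeholder connection details - substitute your own instance,
      // ZooKeeper quorum, user, and password.
      Connector conn = new ZooKeeperInstance("myInstance", "zkhost:2181")
          .getConnector("root", new PasswordToken("secret"));

      // Supply split points so ingest is spread over many tablets instead
      // of piling into the single initial tablet.
      SortedSet<Text> splits = new TreeSet<Text>();
      for (char c = 'b'; c <= 'y'; c++)
        splits.add(new Text(String.valueOf(c)));
      conn.tableOperations().addSplits("myTable", splits);

      // Lower the per-tablet split threshold (1G by default) so oversized
      // tablets divide sooner.
      conn.tableOperations().setProperty("myTable",
          "table.split.threshold", "256M");
    }
  }

Note that neither split points nor a lower threshold can divide within a
single row, so a table dominated by a handful of very large rows will stay
skewed until the key format changes.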
On Mon, Sep 16, 2013 at 4:54 PM, Mastergeek wrote:

> Okay, thank you! Is there a means of checking the load of a given, if not
> all, tablet(s)? I have an uneven distribution over my tablets where a few
> have hundreds of GBs and the rest only have a few hundred MBs, leading me
> to believe that some of the tablets are being under-used, hence the
> question about re-balancing.
>
> Thanks again,
> Jeff
>
> --
> View this message in context:
> http://apache-accumulo.1065345.n5.nabble.com/Rebalance-table-over-all-tablet-servers-tp5393p5396.html
> Sent from the Developers mailing list archive at Nabble.com.