From: John Vines <vines@apache.org>
To: user@accumulo.apache.org
Date: Mon, 4 Mar 2013 14:51:39 -0500
Subject: Re: Remove table

Yes, there is a special system user which is used internally for the !METADATA writes, as well as for internal system calls such as the scans the Master does to keep the metadata table consistent.

As for zookeeper, we also have ACLs in place to prevent external sources from tampering with or reading the data there.
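
For example, with a stock ZooKeeper install you can look at those ACLs yourself from the ZooKeeper CLI (getAcl is a standard zkCli command; localhost:2181 and the instance id below are placeholders, substitute your own):

$ zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 0] getAcl /accumulo/<instanceId>/tables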


On Mon, Mar 4, 2013 at 2:44 PM, <roshanp@gmail.com> wrote:
Thanks Eric, Corey got it to work easily with just the WRITE permission granted to root.

So if root does not have write permission to METADATA, then how does the accumulo server write to the METADATA table? Is there a server-specific user that is not visible to us?

Also, one thing we tried to do was to remove the link in zookeeper from /accumulo/<instanceId>/tables/<tableId>, and there was an authorization issue. Is there a specific zookeeper permission that accumulo uses when connecting to the zookeepers?

Just curious :)

Roshan

On Mar 4, 2013, at 11:21 AM, Eric Newton <eric.newton@gmail.com> wrote:

Offline the table. It may take some time for it to settle down. Start any failed tservers.
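
For example, if the broken table were named eventTable (substitute your real table name), taking it offline from the shell is just:

shell> offline eventTable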

Shut down the accumulo garbage collector:

$ pkill -f =gc
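
Once it's down, the same pattern can be reused with pgrep to confirm nothing matching it is still running (this should print nothing):

$ pgrep -lf =gc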

Grant the root user write permission on the !METADATA table:

shell> grant -u root -t !METADATA Table.WRITE
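
As a quick sanity check that the grant took effect, the shell can list the user's permissions:

shell> userpermissions -u root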

Find your table id:

shell> tables -l
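
The output maps table names to ids, roughly like this (the event table's id here is made up; !METADATA's id really is !0):

!METADATA     =>    !0
eventTable    =>    1f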

Now use the id to construct a delete command:

shell> deletemany -c file -s id; -e id<

It's important that you use your table id, and not "id" in the above command.
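
That range works because each tablet's entry in !METADATA is keyed by a row of the form <tableId>;<endRow>, with <tableId>< for the last tablet, so everything for one table falls between "id;" and "id<". With a made-up table id of 1f, and remembering that deletemany operates on the current table, the concrete commands would look like:

shell> table !METADATA
shell> deletemany -c file -s 1f; -e 1f<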
When you get tired of typing yes to each file, stop the shell and re-run it like this:

shell> deletemany -f -c file -s id; -e id<

Now, go and move the directory in hdfs:

$ hadoop fs -mv /accumulo/tables/id /files-from-dead-table
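
A quick listing afterwards confirms everything actually landed in the new location:

$ hadoop fs -ls /files-from-dead-table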

You can bulk import the directories in /files-from-dead-table after you bring the table back online with some appropriate splits.
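
A rough sketch of that last step, assuming 1.4-era shell syntax, a made-up table name and splits file, and an empty failures directory for importdirectory to use:

$ hadoop fs -mkdir /tmp/bulk-failures
shell> createtable events
shell> addsplits -t events -sf /tmp/splits.txt
shell> table events
shell> importdirectory /files-from-dead-table /tmp/bulk-failures false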

The accumulo garbage collector will complain about missing files, so expect those as warnings.

-Eric



On Mon, Mar 4, 2013 at 10:53 AM, Corey Nolet <cnolet@texeltek.com> wrote:
We have a sharded-event table that failed miserably when we accidentally tried to merge all of the tablets together. When starting accumulo, the monitor page says the event table (once having 43k tablets) now has 5 tablets and 1.05B rows. There are 14.5k unassigned tablets, and the tablet servers each have response times ranging from 10s to 1m until eventually they all die. We thought it may have been our ulimit on the accumulo master being set to 1024, but raising it to 65535 didn't seem to have any immediate effect.
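
One thing worth checking there: raising ulimit in a shell only affects processes started afterwards, so the servers need a restart before the new limit applies. On Linux you can read the limit a running tablet server is actually using (the =tserver pattern assumes the same command-line naming the pkill above relies on):

$ cat /proc/$(pgrep -f =tserver | head -1)/limits | grep 'Max open files'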

The Accumulo shell freezes when we try to drop the event table, and we've gotten a little experimental (trying to remove the references to the RFiles in the !METADATA table manually, removing the reference to the event table in zookeeper, etc.). Our experiments have mostly ended in permissions issues.

With that, do you guys have any good techniques/tools for unlinking a busted and unresponsive table? All of the other tables/tablets seem to be doing just fine.


Thanks in advance!


