Subject: Re: Waiting for accumulo to be initialized
From: Eric Newton <eric.newton@gmail.com>
Date: Wed, 27 Mar 2013 11:23:11 -0400
To: user@accumulo.apache.org
Reply-To: user@accumulo.apache.org

"0 live nodes": that will continue to be a problem. Check the datanode logs.

-Eric

On Wed, Mar 27, 2013 at 11:20 AM, Aji Janis wrote:
>
> I removed everything under /opt/hadoop-data/hadoop/hdfs/data/current/
> because it seemed like old files were hanging around and I had to remove
> them before I could re-initialize.
>
> I didn't move anything to /tmp or reboot.
> My old Accumulo instance had everything under /accumulo (in HDFS), and it's
> still there, but I'm guessing that deleting things from hadoop-data has
> deleted a bunch of its files.
>
> I tried to restart ZooKeeper and Hadoop and they came up fine, but now my
> NameNode URL says there are 0 live nodes (instead of the 5 in my cluster).
> Doing a "ps -ef | grep hadoop" on each node in the cluster, however, shows
> that Hadoop is running, so I am not sure what I messed up. Suggestions?
>
> Have I lost Accumulo for good? Should I just recreate the instance?
>
> On Wed, Mar 27, 2013 at 10:52 AM, Eric Newton wrote:
>>
>> Your DataNode has not started and reported blocks to the NameNode.
>>
>> Did you store things (zookeeper, hadoop) in /tmp and reboot? It's a
>> common thing to do, and it commonly deletes everything in /tmp. If that's
>> the case, you will need to shut down HDFS and run:
>>
>> $ hadoop namenode -format
>>
>> and then start HDFS again.
>>
>> -Eric
>>
>> On Wed, Mar 27, 2013 at 10:47 AM, Aji Janis wrote:
>>>
>>> I see, thank you. When I bring up HDFS (start-all from the node with the
>>> jobtracker) I see the following message at
>>> http://mynode:50070/dfshealth.jsp:
>>>
>>> "Safe mode is ON. The ratio of reported blocks 0.0000 has not reached
>>> the threshold 0.9990. Safe mode will be turned off automatically.
>>> 2352 files and directories, 2179 blocks = 4531 total. Heap Size is
>>> 54 MB / 888.94 MB (6%)"
>>>
>>> What's going on here?
>>>
>>> On Wed, Mar 27, 2013 at 10:44 AM, Eric Newton wrote:
>>>>
>>>> This will (eventually) delete everything created by Accumulo in HDFS:
>>>>
>>>> $ hadoop fs -rmr /accumulo
>>>>
>>>> Accumulo will create a new area to hold your configuration and will
>>>> basically abandon the old one. There's a class that can be used to
>>>> clean up old Accumulo instances in ZooKeeper:
>>>>
>>>> $ ./bin/accumulo org.apache.accumulo.server.util.CleanZookeeper hostname:port
>>>>
>>>> where "hostname:port" is one of your ZooKeeper hosts.
>>>>
>>>> -Eric
>>>>
>>>> On Wed, Mar 27, 2013 at 10:29 AM, Aji Janis wrote:
>>>>>
>>>>> Thanks, Eric. But shouldn't I be cleaning up something in the
>>>>> hadoop-data directory too? And in ZooKeeper?
>>>>>
>>>>> On Wed, Mar 27, 2013 at 10:27 AM, Eric Newton wrote:
>>>>>>
>>>>>> To re-initialize Accumulo, bring up ZooKeeper and HDFS, then:
>>>>>>
>>>>>> $ hadoop fs -rmr /accumulo
>>>>>> $ ./bin/accumulo init
>>>>>>
>>>>>> I do this about 100 times a day on my dev box. :-)
>>>>>>
>>>>>> -Eric
>>>>>>
>>>>>> On Wed, Mar 27, 2013 at 10:10 AM, Aji Janis wrote:
>>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> We have the following setup:
>>>>>>>
>>>>>>> zookeeper - 3.3.3-1073969
>>>>>>> hadoop - 0.20.203.0
>>>>>>> accumulo - 1.4.2
>>>>>>>
>>>>>>> Our ZooKeeper crashed for some reason. I did a clean stop of
>>>>>>> everything and then brought up (in order) ZooKeeper and Hadoop
>>>>>>> (the cluster). But when I try a start-all on Accumulo, the
>>>>>>> following message is printed to the screen indefinitely:
>>>>>>>
>>>>>>> "26 12:45:43,551 [server.Accumulo] INFO : Waiting for accumulo to
>>>>>>> be initialized"
>>>>>>>
>>>>>>> Some digging on the web suggests that Accumulo is hosed and needs
>>>>>>> to be re-initialized, and that I may need to clean things out of
>>>>>>> ZooKeeper and Hadoop before re-initializing. Has anyone done this
>>>>>>> before? Can someone please give me some directions on what to do
>>>>>>> (or not to do)? I'd really appreciate help on this. Thanks.
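[Editor's note: the checks Eric suggests above (datanode registration, safe-mode status) can be run from the command line with dfsadmin. A hedged sketch for a Hadoop 0.20-era install; the wrapper function name and the log path are illustrative, not from the thread:]

```shell
# Hedged sketch (Hadoop 0.20 era): confirm what the namenode actually sees.
check_hdfs_health() {
  # How many datanodes have registered with the namenode?
  hadoop dfsadmin -report | grep "Datanodes available"
  # Safe mode stays ON until the reported-block ratio crosses the threshold.
  hadoop dfsadmin -safemode get
}
# If a datanode shows up in `ps` but not in the report, its own log usually
# says why (for example an "Incompatible namespaceIDs" error after editing
# dfs.data.dir by hand):
#   tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```

On the cluster described above, a report showing 0 datanodes available matches the "0 live nodes" page, and safe mode will not lift until the datanodes re-register and report their blocks.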
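[Editor's note: pulling Eric's steps from this thread into one place. Once HDFS is healthy again, the re-initialization sequence is roughly the following; the wrapper function and the ZK_HOST placeholder are ours, so substitute one of your own zookeeper hosts and run it from the accumulo install directory:]

```shell
# Hedged summary of the recovery steps from this thread (Accumulo 1.4 / Hadoop 0.20 era).
ZK_HOST="zkhost:2181"   # placeholder: one of your zookeeper hosts
reinit_accumulo() {
  # 1. Remove the old instance's data in HDFS (eventually deletes everything under /accumulo).
  hadoop fs -rmr /accumulo
  # 2. Remove the stale instance entries from zookeeper.
  ./bin/accumulo org.apache.accumulo.server.util.CleanZookeeper "$ZK_HOST"
  # 3. Create a fresh instance.
  ./bin/accumulo init
}
```

Note that `accumulo init` is interactive in 1.4: it prompts for an instance name and a root password before creating the new instance.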