accumulo-user mailing list archives

From Josh Elser <josh.el...@gmail.com>
Subject Re: Waiting for accumulo to be initialized
Date Wed, 27 Mar 2013 15:50:14 GMT
Just remove the directories configured for dfs.name.dir and dfs.data.dir 
and run `hadoop namenode -format` again.
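
A minimal sketch of that, assuming dfs.data.dir is the
/opt/hadoop-data/hadoop/hdfs/data path from your log and using a placeholder
for dfs.name.dir (check conf/hdfs-site.xml for the real values):

$ ./bin/stop-all.sh                             # stop hadoop first
$ rm -rf /path/to/dfs.name.dir/*                # on the namenode
$ rm -rf /opt/hadoop-data/hadoop/hdfs/data/*    # on each datanode
$ hadoop namenode -format
$ ./bin/start-all.sh

Reformatting assigns a fresh namespaceID, so the namenode and datanodes will
agree again.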

On 3/27/13 11:31 AM, Aji Janis wrote:
> well... I found this in the datanode log
>
>  ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: 
> java.io.IOException: Incompatible namespaceIDs in 
> /opt/hadoop-data/hadoop/hdfs/data: namenode namespaceID = 2089335599; 
> datanode namespaceID = 1868050007
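>
> (For reference, those IDs live in the VERSION files under each directory's
> current/ subdirectory; a sketch, using the data path from the log and a
> placeholder for the name dir:
>
> $ cat /opt/hadoop-data/hadoop/hdfs/data/current/VERSION   # datanode
> $ cat /path/to/dfs.name.dir/current/VERSION               # namenode
>
> Reformatting, as suggested above, wipes both and assigns a fresh
> namespaceID.)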
>
>
>
>
> On Wed, Mar 27, 2013 at 11:23 AM, Eric Newton <eric.newton@gmail.com 
> <mailto:eric.newton@gmail.com>> wrote:
>
>     "0 live nodes"  that will continue to be a problem.
>
>     Check the datanode logs.
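>
>     A sketch, assuming the default log location under $HADOOP_HOME/logs
>     (run this on each datanode):
>
>     $ tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log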
>
>     -Eric
>
>
>     On Wed, Mar 27, 2013 at 11:20 AM, Aji Janis <aji1705@gmail.com
>     <mailto:aji1705@gmail.com>> wrote:
>
>
>         I removed everything
>         under /opt/hadoop-data/hadoop/hdfs/data/current/ because it
>         seemed like old files were hanging around and I had to remove
>         them before I could start re-initialization.
>
>
>         I didn't move anything to /tmp or try a reboot.
>         My old accumulo instance had everything under /accumulo (in
>         hdfs) and it's still there, but I'm guessing that deleting
>         stuff from hadoop-data has deleted a bunch of its data.
>
>         I tried to restart zookeeper and hadoop and they came up fine,
>         but now my namenode URL says there are 0 live nodes (instead
>         of the 5 in my cluster). Doing a ps -ef | grep hadoop on each
>         node in the cluster, however, shows that hadoop is running...
>         so I am not sure what I messed up. Suggestions?
>
>         Have I lost accumulo for good? Should I just recreate the
>         instance?
>
>
>         On Wed, Mar 27, 2013 at 10:52 AM, Eric Newton
>         <eric.newton@gmail.com <mailto:eric.newton@gmail.com>> wrote:
>
>             Your DataNode has not started and reported blocks to the
>             NameNode.
>
>             Did you store things (zookeeper, hadoop) in /tmp and
>             reboot?  It's a common thing to do, and it commonly
>             deletes everything in /tmp.  If that's the case, you will
>             need to shutdown hdfs and run:
>
>             $ hadoop namenode -format
>
>             And then start hdfs again.
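>
>             To check whether that is what happened here, a sketch assuming
>             the usual config file locations (they may differ in your
>             install):
>
>             $ grep -A 1 'dfs.name.dir\|dfs.data.dir' conf/hdfs-site.xml
>             $ grep dataDir $ZOOKEEPER_HOME/conf/zoo.cfg
>
>             If any of these point into /tmp, move them somewhere persistent
>             before reformatting.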
>
>             -Eric
>
>
>             On Wed, Mar 27, 2013 at 10:47 AM, Aji Janis
>             <aji1705@gmail.com <mailto:aji1705@gmail.com>> wrote:
>
>                 I see, thank you. When I bring up hdfs (start-all from
>                 the node with the jobtracker) I see the following
>                 message at http://mynode:50070/dfshealth.jsp:
>
>                 Safe mode is ON. The ratio of reported blocks 0.0000
>                 has not reached the threshold 0.9990. Safe mode will
>                 be turned off automatically.
>                 2352 files and directories, 2179 blocks = 4531 total.
>                 Heap Size is 54 MB / 888.94 MB (6%)
>
>                 What's going on here?
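>
>                 (Safe mode status can also be queried from the command
>                 line, and cleared once the datanodes are reporting
>                 blocks; a sketch:
>
>                 $ hadoop dfsadmin -safemode get
>                 $ hadoop dfsadmin -safemode leave
>
>                 With 0 live datanodes, though, leaving safe mode won't
>                 fix anything by itself.)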
>
>
>
>                 On Wed, Mar 27, 2013 at 10:44 AM, Eric Newton
>                 <eric.newton@gmail.com <mailto:eric.newton@gmail.com>>
>                 wrote:
>
>                     This will (eventually) delete everything created
>                     by accumulo in hdfs:
>
>                     $ hadoop fs -rmr /accumulo
>
>                     Accumulo will create a new area to hold your
>                     configurations.  Accumulo will basically abandon
>                     that old configuration.  There's a class that can
>                     be used to clean up old accumulo instances in
>                     zookeeper:
>
>                     $ ./bin/accumulo
>                     org.apache.accumulo.server.util.CleanZookeeper
>                     hostname:port
>
>                     Where "hostname:port" is the name of one of your
>                     zookeeper hosts.
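>
>                     For example, with a hypothetical zookeeper host zk1
>                     on the default client port:
>
>                     $ ./bin/accumulo \
>                         org.apache.accumulo.server.util.CleanZookeeper \
>                         zk1:2181
>
>                     You can see what is left behind by connecting with
>                     zookeeper's own bin/zkCli.sh and running "ls /accumulo"
>                     at its prompt.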
>
>                     -Eric
>
>
>
>                     On Wed, Mar 27, 2013 at 10:29 AM, Aji Janis
>                     <aji1705@gmail.com <mailto:aji1705@gmail.com>> wrote:
>
>                         Thanks Eric. But shouldn't I be cleaning up
>                         something in the hadoop-data directory too?
>                         And zookeeper?
>
>
>
>                         On Wed, Mar 27, 2013 at 10:27 AM, Eric Newton
>                         <eric.newton@gmail.com
>                         <mailto:eric.newton@gmail.com>> wrote:
>
>                             To re-initialize accumulo, bring up
>                             zookeeper and hdfs.
>
>                             $ hadoop fs -rmr /accumulo
>                             $ ./bin/accumulo init
>
>                             I do this about 100 times a day on my dev
>                             box. :-)
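>
>                             A sketch of the full sequence, assuming
>                             zookeeper and hdfs are already up (init
>                             will prompt for a new instance name and a
>                             root password):
>
>                             $ hadoop fs -rmr /accumulo
>                             $ ./bin/accumulo init
>                             $ ./bin/start-all.sh    # accumulo's start-all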
>
>                             -Eric
>
>
>                             On Wed, Mar 27, 2013 at 10:10 AM, Aji
>                             Janis <aji1705@gmail.com
>                             <mailto:aji1705@gmail.com>> wrote:
>
>                                 Hello,
>
>                                 We have the following setup:
>
>                                 zookeeper - 3.3.3-1073969
>                                 hadoop - 0.20.203.0
>                                 accumulo - 1.4.2
>
>                                 Our zookeeper crashed for some reason.
>                                 I tried doing a clean stop of
>                                 everything and then brought up (in
>                                 order) zookeeper and hadoop (cluster).
>                                 But when trying to do a start-all on
>                                 accumulo, the following message gets
>                                 printed to the screen indefinitely:
>
>                                 “26 12:45:43,551 [server.Accumulo]
>                                 INFO : Waiting for accumulo to be
>                                 initialized”
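>
>                                 This message usually means accumulo
>                                 cannot find its instance data in
>                                 zookeeper or hdfs. As a quick sanity
>                                 check that zookeeper really came back
>                                 up (assuming nc is installed, the
>                                 default client port 2181, and zkhost
>                                 as a placeholder for one of the
>                                 servers):
>
>                                 $ echo ruok | nc zkhost 2181
>                                 imok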
>
>
>
>                                 Doing some digging on the web, it
>                                 seems that accumulo is hosed and needs
>                                 to be re-initialized. It also appears
>                                 that maybe I need to clean out things
>                                 from zookeeper and hadoop prior to
>                                 re-initialization. Has anyone done
>                                 this before? Can someone please
>                                 provide some directions on what to do
>                                 (or not to do)? I'd really appreciate
>                                 help with this. Thanks.
>
>
>
>
>
>
>
>
>

