Subject: Re: Keep Tables on Shutdown
From: John Vines
To: user@accumulo.apache.org
Date: Fri, 27 Jul 2012 11:38:37 -0400

Your HDFS isn't online. Specifically, you have no running datanodes. Check
the logs to figure out why it's not coming online and remedy that before
initializing Accumulo.

Sent from my phone, so pardon the typos and brevity.
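(A quick way to verify that diagnosis before re-running accumulo init; a
minimal sketch, assuming the /opt/hadoop layout quoted below and a
Hadoop 0.20-era pseudo-distributed install:)

  # List the running Java daemons; a healthy single-node HDFS shows
  # NameNode, DataNode, and SecondaryNameNode among them.
  jps

  # Ask the namenode how many datanodes have registered. "Datanodes
  # available: 0" lines up with the "could only be replicated to 0
  # nodes" error quoted below.
  /opt/hadoop/bin/hadoop dfsadmin -report

  # If the datanode is missing, its log usually says why (for example,
  # a namespaceID mismatch after the namenode was reformatted).
  tail -n 50 /opt/hadoop/logs/hadoop-*-datanode-*.log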
On Jul 27, 2012 11:32 AM, "Jonathan Hsu" wrote:

> I tried again, reformatting the namenode first, and I got this error while
> trying to start Accumulo:
>
> 27 11:29:08,295 [util.NativeCodeLoader] WARN : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 27 11:29:08,352 [hdfs.DFSClient] WARN : DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> 27 11:29:08,352 [hdfs.DFSClient] WARN : Error Recovery for block null bad datanode[0] nodes == null
> 27 11:29:08,352 [hdfs.DFSClient] WARN : Could not get block locations. Source file "/accumulo/tables/!0/root_tablet/00000_00000.rf" - Aborting...
> 27 11:29:08,353 [util.Initialize] FATAL: Failed to initialize filesystem
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> 27 11:29:08,369 [hdfs.DFSClient] ERROR: Exception closing file /accumulo/tables/!0/root_tablet/00000_00000.rf : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> On Fri, Jul 27, 2012 at 11:18 AM, Marc Parisi wrote:
>
>> If you change the name dir I think you need to reformat the namenode.
>>
>> On Fri, Jul 27, 2012 at 10:54 AM, Jonathan Hsu wrote:
>>
>>> Yes, I ran "/opt/hadoop/bin/start-all.sh"
>>>
>>> On Fri, Jul 27, 2012 at 10:51 AM, Marc Parisi wrote:
>>>
>>>> Is HDFS running?
>>>>
>>>> On Fri, Jul 27, 2012 at 10:49 AM, Jonathan Hsu wrote:
>>>>
>>>>> So I changed the dfs.data.dir and dfs.name.dir and tried to restart
>>>>> Accumulo.
>>>>>
>>>>> On running this command: "/opt/accumulo/bin/accumulo init" I get the
>>>>> following error:
>>>>>
>>>>> 27 10:46:29,041 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
>>>>> 27 10:46:30,043 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
>>>>> 27 10:46:31,045 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
>>>>> 27 10:46:32,047 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
>>>>> 27 10:46:33,048 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
>>>>> 27 10:46:34,050 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
>>>>> 27 10:46:35,052 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
>>>>> 27 10:46:36,054 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
>>>>> 27 10:46:37,056 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
>>>>> 27 10:46:38,057 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
>>>>> 27 10:46:38,060 [util.Initialize] FATAL: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>>> java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>>>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>>>>         at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>>>         at $Proxy0.getProtocolVersion(Unknown Source)
>>>>>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>>>>         at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>>>>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>>>>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>>>>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>>>>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>>>>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>>>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>>>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>>>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>>>>         at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
>>>>>         at org.apache.accumulo.server.util.Initialize.main(Initialize.java:426)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>>         at java.lang.Thread.run(Thread.java:680)
>>>>> Caused by: java.net.ConnectException: Connection refused
>>>>>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>>>         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>>>>         at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>>>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>>>         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>>>>         at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>>>>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>>>>         at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>>>>         ... 20 more
>>>>> Thread "init" died null
>>>>> java.lang.reflect.InvocationTargetException
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>>         at java.lang.Thread.run(Thread.java:680)
>>>>> Caused by: java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>>>         at org.apache.accumulo.server.util.Initialize.main(Initialize.java:436)
>>>>>         ... 6 more
>>>>> Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>>>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>>>>         at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>>>         at $Proxy0.getProtocolVersion(Unknown Source)
>>>>>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>>>>         at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>>>>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>>>>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>>>>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>>>>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>>>>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>>>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>>>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>>>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>>>>         at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
>>>>>         at org.apache.accumulo.server.util.Initialize.main(Initialize.java:426)
>>>>>         ... 6 more
>>>>> Caused by: java.net.ConnectException: Connection refused
>>>>>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>>>         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>>>>         at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>>>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>>>         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>>>>         at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>>>>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>>>>         at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>>>>         ... 20 more
>>>>>
>>>>> On Fri, Jul 27, 2012 at 10:45 AM, Marc Parisi wrote:
>>>>>
>>>>>> accumulo init is used to initialize the instance. Are you running
>>>>>> that every time?
>>>>>>
>>>>>> Though it should error out because you already have an instance;
>>>>>> perhaps not setting dfs.data.dir AND then initializing might cause
>>>>>> the error.
>>>>>>
>>>>>> On Fri, Jul 27, 2012 at 10:42 AM, Jonathan Hsu wrote:
>>>>>>
>>>>>>> I'm running these commands to start:
>>>>>>>
>>>>>>> /opt/hadoop/bin/start-all.sh
>>>>>>> /opt/zookeeper/bin/zkServer.sh start
>>>>>>> /opt/accumulo/bin/accumulo init
>>>>>>> /opt/accumulo/bin/start-all.sh
>>>>>>> /opt/accumulo/bin/accumulo shell -u root
>>>>>>>
>>>>>>> and these commands to stop:
>>>>>>>
>>>>>>> /opt/hadoop/bin/stop-all.sh
>>>>>>> /opt/zookeeper/bin/zkServer.sh stop
>>>>>>> /opt/accumulo/bin/stop-all.sh
>>>>>>>
>>>>>>> On Fri, Jul 27, 2012 at 10:39 AM, John Vines wrote:
>>>>>>>
>>>>>>>> Are you just doing stop-all.sh and then start-all.sh? Or are you
>>>>>>>> running other commands?
>>>>>>>>
>>>>>>>> On Fri, Jul 27, 2012 at 10:35 AM, Jonathan Hsu wrote:
>>>>>>>>
>>>>>>>>> I don't get any errors. The tables just don't exist anymore, as
>>>>>>>>> if I were starting Accumulo for the first time.
>>>>>>>>>
>>>>>>>>> On Fri, Jul 27, 2012 at 10:32 AM, John Vines wrote:
>>>>>>>>>
>>>>>>>>>> Can you elaborate on how they don't exist? Do you mean you have
>>>>>>>>>> errors about files not being found for your table, or every
>>>>>>>>>> time you start Accumulo it's like the first time?
>>>>>>>>>>
>>>>>>>>>> Sent from my phone, so pardon the typos and brevity.
>>>>>>>>>>
>>>>>>>>>> On Jul 27, 2012 10:29 AM, "Jonathan Hsu" wrote:
>>>>>>>>>>
>>>>>>>>>>> Hey all,
>>>>>>>>>>>
>>>>>>>>>>> I have a problem with my Accumulo tables being deleted on
>>>>>>>>>>> shutdown. I currently have Accumulo, Zookeeper, and Hadoop in
>>>>>>>>>>> my /opt directory. I'm assuming that somehow my tables are
>>>>>>>>>>> being placed in a tmp directory that gets wiped when I shut my
>>>>>>>>>>> computer off. I'm trying to develop and test on my local
>>>>>>>>>>> machine.
>>>>>>>>>>>
>>>>>>>>>>> What should I change in the conf files or otherwise in order
>>>>>>>>>>> to ensure that the tables are not destroyed on shutdown?
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> - Jonathan Hsu
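(For the original question in this thread, tables vanishing after a
reboot: in Hadoop of this vintage, dfs.name.dir and dfs.data.dir default
to directories under hadoop.tmp.dir, which defaults to /tmp and is
typically wiped on shutdown, so both the HDFS namespace and Accumulo's
tables disappear. A sketch of the usual remedy; the /opt/hadoop-data
paths are illustrative, not from this thread:)

  # Stop everything before touching HDFS's storage directories.
  /opt/accumulo/bin/stop-all.sh
  /opt/hadoop/bin/stop-all.sh

  # Create persistent storage, then point dfs.name.dir and dfs.data.dir
  # at it with properties along these lines in
  # /opt/hadoop/conf/hdfs-site.xml (hypothetical paths; any non-/tmp
  # location works):
  #   <property><name>dfs.name.dir</name><value>/opt/hadoop-data/name</value></property>
  #   <property><name>dfs.data.dir</name><value>/opt/hadoop-data/data</value></property>
  mkdir -p /opt/hadoop-data/name /opt/hadoop-data/data

  # A datanode records the namespaceID of the namenode that formatted it,
  # so after reformatting the namenode, a stale data dir must be cleared
  # or the datanode refuses to start (one common cause of the
  # "replicated to 0 nodes" error above).
  rm -rf /opt/hadoop-data/data/*

  # Format the new name dir once, restart HDFS, and confirm a datanode
  # has registered before initializing Accumulo again.
  /opt/hadoop/bin/hadoop namenode -format
  /opt/hadoop/bin/start-all.sh
  /opt/hadoop/bin/hadoop dfsadmin -report
  /opt/accumulo/bin/accumulo init

(Once the storage directories persist, accumulo init is a one-time step
per instance, as Marc notes above; day-to-day restarts are just the
start-all.sh and stop-all.sh pairs, with no re-init.)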