Subject: Re: Keep Tables on Shutdown
From: Marc Parisi <marc@accumulo.net>
To: user@accumulo.apache.org
Date: Fri, 27 Jul 2012 11:18:46 -0400

If you change the name dir I think you need to reformat the namenode.

On Fri, Jul 27, 2012 at 10:54 AM, Jonathan Hsu wrote:
> Yes, I ran "/opt/hadoop/bin/start-all.sh"
>
> On Fri, Jul 27, 2012 at 10:51 AM, Marc Parisi wrote:
>> Is HDFS running?
>>
>> On Fri, Jul 27, 2012 at 10:49 AM, Jonathan Hsu wrote:
>>> So I changed dfs.data.dir and dfs.name.dir and tried to restart
>>> Accumulo.
>>>
>>> On running this command: "/opt/accumulo/bin/accumulo init" I get the
>>> following error:
>>>
>>> 27 10:46:29,041 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
>>> 27 10:46:30,043 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
>>> 27 10:46:31,045 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
>>> 27 10:46:32,047 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
>>> 27 10:46:33,048 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
>>> 27 10:46:34,050 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
>>> 27 10:46:35,052 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
>>> 27 10:46:36,054 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
>>> 27 10:46:37,056 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
>>> 27 10:46:38,057 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
>>> 27 10:46:38,060 [util.Initialize] FATAL: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>> java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>     at $Proxy0.getProtocolVersion(Unknown Source)
>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>>     at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
>>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:426)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>     at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>     at java.lang.Thread.run(Thread.java:680)
>>> Caused by: java.net.ConnectException: Connection refused
>>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>>     at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>>     at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>>     ... 20 more
>>> Thread "init" died null
>>> java.lang.reflect.InvocationTargetException
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>     at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>     at java.lang.Thread.run(Thread.java:680)
>>> Caused by: java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:436)
>>>     ... 6 more
>>> Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>     at $Proxy0.getProtocolVersion(Unknown Source)
>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>>     at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
>>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:426)
>>>     ... 6 more
>>> Caused by: java.net.ConnectException: Connection refused
>>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>>     at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>>     at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>>     ... 20 more
>>>
>>> On Fri, Jul 27, 2012 at 10:45 AM, Marc Parisi wrote:
>>>> accumulo init is used to initialize the instance. Are you running
>>>> that every time?
>>>>
>>>> It should error out because you already have an instance, though;
>>>> perhaps not setting dfs.data.dir AND initializing might cause the
>>>> error.
>>>>
>>>> On Fri, Jul 27, 2012 at 10:42 AM, Jonathan Hsu wrote:
>>>>> I'm running these commands to start:
>>>>>
>>>>> /opt/hadoop/bin/start-all.sh
>>>>> /opt/zookeeper/bin/zkServer.sh start
>>>>> /opt/accumulo/bin/accumulo init
>>>>> /opt/accumulo/bin/start-all.sh
>>>>> /opt/accumulo/bin/accumulo shell -u root
>>>>>
>>>>> and these commands to stop:
>>>>>
>>>>> /opt/hadoop/bin/stop-all.sh
>>>>> /opt/zookeeper/bin/zkServer.sh stop
>>>>> /opt/accumulo/bin/stop-all.sh
>>>>>
>>>>> On Fri, Jul 27, 2012 at 10:39 AM, John Vines wrote:
>>>>>> Are you just doing stop-all.sh and then start-all.sh? Or are you
>>>>>> running other commands?
>>>>>>
>>>>>> On Fri, Jul 27, 2012 at 10:35 AM, Jonathan Hsu wrote:
>>>>>>> I don't get any errors. The tables just don't exist anymore, as
>>>>>>> if I were starting Accumulo for the first time.
>>>>>>>
>>>>>>> On Fri, Jul 27, 2012 at 10:32 AM, John Vines wrote:
>>>>>>>> Can you elaborate on how they don't exist? Do you mean you have
>>>>>>>> errors about files not being found for your table, or every time
>>>>>>>> you start Accumulo it's like the first time?
>>>>>>>>
>>>>>>>> Sent from my phone, so pardon the typos and brevity.
>>>>>>>>
>>>>>>>> On Jul 27, 2012 10:29 AM, "Jonathan Hsu" wrote:
>>>>>>>>> Hey all,
>>>>>>>>>
>>>>>>>>> I have a problem with my Accumulo tables deleting upon
>>>>>>>>> shutdown. I currently have Accumulo, Zookeeper, and Hadoop in
>>>>>>>>> my /opt directory. I'm assuming that somehow my tables are
>>>>>>>>> being placed in a tmp directory that gets wiped when I shut my
>>>>>>>>> computer off. I'm trying to develop and test on my local
>>>>>>>>> machine.
>>>>>>>>>
>>>>>>>>> What should I change in the conf files or otherwise in order
>>>>>>>>> to ensure that the tables are not destroyed on shutdown?
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> - Jonathan Hsu
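[Editorial note] The thread's diagnosis is that dfs.name.dir and dfs.data.dir were left at their defaults under hadoop.tmp.dir (typically in /tmp), which the OS clears on shutdown. A minimal hdfs-site.xml sketch pointing HDFS at persistent directories follows; the /opt/hadoop-data paths are example placeholders, not paths from the thread, and these property names are the Hadoop 0.20/1.x-era ones matching the stack traces above. As Marc notes, changing dfs.name.dir requires reformatting the namenode (which erases HDFS contents), so this must be done before data you care about exists.

```xml
<!-- hdfs-site.xml: keep HDFS metadata and blocks out of /tmp so they
     survive a reboot. Example paths; any persistent, writable location
     owned by the Hadoop user works. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoop-data/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/hadoop-data/data</value>
  </property>
</configuration>
```

After editing, reformat once with `hadoop namenode -format`, start HDFS, and only then run `accumulo init` (once).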
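[Editorial note] John and Marc point out that `accumulo init` belongs only in the very first start, not in every restart (re-running it against an existing instance should normally error out). A hedged shell sketch of a run-once guard follows; the marker-file convention and the `maybe_init` name are this sketch's own, not an Accumulo feature.

```shell
# maybe_init MARKER CMD...
# Run CMD only if MARKER does not exist yet; create MARKER on success,
# so CMD (e.g. a one-time init) is skipped on later restarts.
maybe_init() {
    marker="$1"
    shift
    if [ ! -e "$marker" ]; then
        "$@" && touch "$marker"
    fi
}

# Intended use with the thread's layout (paths from the thread):
#   maybe_init "$HOME/.accumulo_initialized" /opt/accumulo/bin/accumulo init
```

A start script would then call `maybe_init` between starting ZooKeeper and `/opt/accumulo/bin/start-all.sh`, leaving the stop sequence unchanged.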