Subject: Re: Keep Tables on Shutdown
From: Jonathan Hsu <jreucypoda@gmail.com>
To: user@accumulo.apache.org
Reply-To: user@accumulo.apache.org
Date: Fri, 27 Jul 2012 10:54:08 -0400
Yes, I ran "/opt/hadoop/bin/start-all.sh"

On Fri, Jul 27, 2012 at 10:51 AM, Marc Parisi wrote:

> Is HDFS running?
>
> On Fri, Jul 27, 2012 at 10:49 AM, Jonathan Hsu wrote:
>
>> So I changed dfs.data.dir and dfs.name.dir and tried to restart
>> Accumulo.
>>
>> On running this command, "/opt/accumulo/bin/accumulo init", I get the
>> following error:
>>
>> 27 10:46:29,041 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
>> 27 10:46:30,043 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
>> 27 10:46:31,045 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
>> 27 10:46:32,047 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
>> 27 10:46:33,048 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
>> 27 10:46:34,050 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
>> 27 10:46:35,052 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
>> 27 10:46:36,054 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
>> 27 10:46:37,056 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
>> 27 10:46:38,057 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
>> 27 10:46:38,060 [util.Initialize] FATAL: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>> java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>     at $Proxy0.getProtocolVersion(Unknown Source)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>     at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:426)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>     at java.lang.Thread.run(Thread.java:680)
>> Caused by: java.net.ConnectException: Connection refused
>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>     at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>     ... 20 more
>> Thread "init" died null
>> java.lang.reflect.InvocationTargetException
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>     at java.lang.Thread.run(Thread.java:680)
>> Caused by: java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:436)
>>     ... 6 more
>> Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>     at $Proxy0.getProtocolVersion(Unknown Source)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>     at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:426)
>>     ... 6 more
>> Caused by: java.net.ConnectException: Connection refused
>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>     at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>     ... 20 more
>>
>> On Fri, Jul 27, 2012 at 10:45 AM, Marc Parisi wrote:
>>
>>> accumulo init is used to initialize the instance. Are you running that
>>> every time?
>>>
>>> Though it should error because you already have an instance; perhaps
>>> not setting dfs.data.dir AND initializing it might cause the error.
>>>
>>> On Fri, Jul 27, 2012 at 10:42 AM, Jonathan Hsu wrote:
>>>
>>>> I'm running these commands to start:
>>>>
>>>> /opt/hadoop/bin/start-all.sh
>>>> /opt/zookeeper/bin/zkServer.sh start
>>>> /opt/accumulo/bin/accumulo init
>>>> /opt/accumulo/bin/start-all.sh
>>>> /opt/accumulo/bin/accumulo shell -u root
>>>>
>>>> and these commands to stop:
>>>>
>>>> /opt/hadoop/bin/stop-all.sh
>>>> /opt/zookeeper/bin/zkServer.sh stop
>>>> /opt/accumulo/bin/stop-all.sh
>>>>
>>>> On Fri, Jul 27, 2012 at 10:39 AM, John Vines wrote:
>>>>
>>>>> Are you just doing stop-all.sh and then start-all.sh? Or are you
>>>>> running other commands?
>>>>>
>>>>> On Fri, Jul 27, 2012 at 10:35 AM, Jonathan Hsu wrote:
>>>>>
>>>>>> I don't get any errors. The tables just don't exist anymore, as if
>>>>>> I were starting Accumulo for the first time.
>>>>>>
>>>>>> On Fri, Jul 27, 2012 at 10:32 AM, John Vines wrote:
>>>>>>
>>>>>>> Can you elaborate on how they don't exist? Do you mean you have
>>>>>>> errors about files not being found for your table, or that every
>>>>>>> time you start Accumulo it's like the first time?
>>>>>>>
>>>>>>> Sent from my phone, so pardon the typos and brevity.
>>>>>>>
>>>>>>> On Jul 27, 2012 10:29 AM, "Jonathan Hsu" wrote:
>>>>>>>
>>>>>>>> Hey all,
>>>>>>>>
>>>>>>>> I have a problem with my Accumulo tables deleting upon shutdown.
>>>>>>>> I currently have Accumulo, Zookeeper, and Hadoop in my /opt
>>>>>>>> directory. I'm assuming that somehow my tables are being placed
>>>>>>>> in a tmp directory that gets wiped when I shut my computer off.
>>>>>>>> I'm trying to develop and test on my local machine.
>>>>>>>>
>>>>>>>> What should I change in the conf files or otherwise in order to
>>>>>>>> ensure that the tables are not destroyed on shutdown?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>> --
>>>>>>>> - Jonathan Hsu

--
- Jonathan Hsu
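[Editor's note] The symptom at the bottom of the thread (tables vanish after a reboot) matches Hadoop's defaults in this era: dfs.name.dir and dfs.data.dir derive from hadoop.tmp.dir, which defaults to /tmp/hadoop-${user.name}, and /tmp is wiped on reboot on many systems. A minimal hdfs-site.xml sketch pointing both at persistent locations; the /srv/hadoop paths here are illustrative, not from the thread:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Example persistent locations; adjust for your machine. -->
  <property>
    <name>dfs.name.dir</name>
    <value>/srv/hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/srv/hadoop/data</value>
  </property>
</configuration>
```

After changing dfs.name.dir, the NameNode must be formatted once ("hadoop namenode -format") and HDFS restarted before "accumulo init" can connect, which is the step that failed in the log above.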
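[Editor's note] Marc's "Is HDFS running?" is the right first question for the "Connection refused" trace above: nothing was listening on the NameNode RPC port (9000 in this cluster's fs.default.name; the port is site-specific). A small, generic sketch of that check, independent of Hadoop:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A False result means exactly what the stack trace above shows:
    the ipc.Client would get ConnectException: Connection refused.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage here would be port_open("localhost", 9000) before running "accumulo init"; a False result means to start (or debug) HDFS first rather than retry init.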
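[Editor's note] Marc's other point stands on its own: "accumulo init" belongs in first-time setup only, not in the routine start sequence listed in the thread. A sketch of a guarded one-time init; the marker file stands in for a real check (such as looking for the instance directory in HDFS), and the paths are illustrative:

```shell
# Run the one-time initialization step only on first start.
# In the thread's setup, the guarded command would be
# /opt/accumulo/bin/accumulo init (hypothetical; shown as echo here).
run_once_init() {
    marker="$1"
    if [ -f "$marker" ]; then
        echo "instance already initialized; skipping init"
    else
        echo "running accumulo init (first start only)"
        touch "$marker"
    fi
}

run_once_init /tmp/demo_accumulo_initialized
```

On every start after the first, the guard skips init, so an existing instance (and its tables) is reused instead of being re-created.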