accumulo-user mailing list archives

From: Jonathan Hsu <jreucyp...@gmail.com>
Subject: Re: Keep Tables on Shutdown
Date: Fri, 27 Jul 2012 15:32:12 GMT
I tried again, reformatting the namenode first, and I got this error while
trying to start Accumulo:


27 11:29:08,295 [util.NativeCodeLoader] WARN : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
27 11:29:08,352 [hdfs.DFSClient] WARN : DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:740)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

27 11:29:08,352 [hdfs.DFSClient] WARN : Error Recovery for block null bad datanode[0] nodes == null
27 11:29:08,352 [hdfs.DFSClient] WARN : Could not get block locations. Source file "/accumulo/tables/!0/root_tablet/00000_00000.rf" - Aborting...
27 11:29:08,353 [util.Initialize] FATAL: Failed to initialize filesystem
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:740)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
27 11:29:08,369 [hdfs.DFSClient] ERROR: Exception closing file /accumulo/tables/!0/root_tablet/00000_00000.rf :
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:740)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
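
The "could only be replicated to 0 nodes, instead of 1" error above means the
namenode is running but sees no live datanodes. A common cause right after
reformatting a namenode is a namespaceID mismatch: a datanode whose
dfs.data.dir still holds blocks from the old filesystem refuses to register
with the freshly formatted namenode. A rough way to check and (destructively)
recover on a Hadoop 0.20-era single-node setup; the data directory path is an
example and should match whatever dfs.data.dir points at:

    # Is a DataNode process actually running?
    jps

    # How many live datanodes does the namenode report?
    /opt/hadoop/bin/hadoop dfsadmin -report

    # If the datanode log shows "Incompatible namespaceIDs", wipe the stale
    # data directory (this deletes all HDFS block data) and restart HDFS:
    /opt/hadoop/bin/stop-all.sh
    rm -rf /opt/hadoop-data/data        # example path; use your dfs.data.dir
    /opt/hadoop/bin/start-all.sh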

On Fri, Jul 27, 2012 at 11:18 AM, Marc Parisi <marc@accumulo.net> wrote:

> If you change the name dir, I think you need to reformat the namenode.
>
> On Fri, Jul 27, 2012 at 10:54 AM, Jonathan Hsu <jreucypoda@gmail.com> wrote:
>
>> Yes, I ran "/opt/hadoop/bin/start-all.sh"
>>
>>
>> On Fri, Jul 27, 2012 at 10:51 AM, Marc Parisi <marc@accumulo.net> wrote:
>>
>>> Is HDFS running?
>>>
>>>
>>> On Fri, Jul 27, 2012 at 10:49 AM, Jonathan Hsu <jreucypoda@gmail.com> wrote:
>>>
>>>> So I changed dfs.data.dir and dfs.name.dir and tried to restart
>>>> Accumulo.
>>>>
>>>> On running this command: "/opt/accumulo/bin/accumulo init", I get the
>>>> following error:
>>>>
>>>>
>>>> 27 10:46:29,041 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
>>>> 27 10:46:30,043 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
>>>> 27 10:46:31,045 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
>>>> 27 10:46:32,047 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
>>>> 27 10:46:33,048 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
>>>> 27 10:46:34,050 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
>>>> 27 10:46:35,052 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
>>>> 27 10:46:36,054 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
>>>> 27 10:46:37,056 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
>>>> 27 10:46:38,057 [ipc.Client] INFO : Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
>>>> 27 10:46:38,060 [util.Initialize] FATAL: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>> java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>>     at $Proxy0.getProtocolVersion(Unknown Source)
>>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>>>     at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
>>>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:426)
>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>>     at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>     at java.lang.Thread.run(Thread.java:680)
>>>> Caused by: java.net.ConnectException: Connection refused
>>>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>>>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>>>     at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>>>     at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>>>     ... 20 more
>>>> Thread "init" died null
>>>> java.lang.reflect.InvocationTargetException
>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>>     at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>     at java.lang.Thread.run(Thread.java:680)
>>>> Caused by: java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:436)
>>>>     ... 6 more
>>>> Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
>>>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>>>>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>>>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>>     at $Proxy0.getProtocolVersion(Unknown Source)
>>>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>>>>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>>>>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>>>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>>>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>>>     at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:554)
>>>>     at org.apache.accumulo.server.util.Initialize.main(Initialize.java:426)
>>>>     ... 6 more
>>>> Caused by: java.net.ConnectException: Connection refused
>>>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>>>     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>>>>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>>     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
>>>>     at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>>>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
>>>>     at org.apache.hadoop.ipc.Client.call(Client.java:720)
>>>>     ... 20 more
>>>>
>>>>
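
The "Connection refused" on localhost/127.0.0.1:9000 above simply means no
namenode was listening there: after dfs.name.dir is pointed at a new, empty
location, the namenode will not start until that directory has been formatted,
which is what Marc suggests earlier in the thread. A minimal sketch of the
sequence (formatting creates a brand-new, empty HDFS, so it is destructive):

    /opt/hadoop/bin/hadoop namenode -format
    /opt/hadoop/bin/start-all.sh
    # Confirm HDFS answers before running accumulo init:
    /opt/hadoop/bin/hadoop fs -ls /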
>>>> On Fri, Jul 27, 2012 at 10:45 AM, Marc Parisi <marc@accumulo.net> wrote:
>>>>
>>>>> accumulo init is used to initialize the instance. Are you running that
>>>>> every time?
>>>>>
>>>>> Though it should error out because you already have an instance; perhaps
>>>>> not setting dfs.data.dir AND then initializing it caused the error.
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Jul 27, 2012 at 10:42 AM, Jonathan Hsu <jreucypoda@gmail.com> wrote:
>>>>>
>>>>>> I'm running these commands to start:
>>>>>>
>>>>>> /opt/hadoop/bin/start-all.sh
>>>>>> /opt/zookeeper/bin/zkServer.sh start
>>>>>> /opt/accumulo/bin/accumulo init
>>>>>> /opt/accumulo/bin/start-all.sh
>>>>>> /opt/accumulo/bin/accumulo shell -u root
>>>>>>
>>>>>> and these commands to stop:
>>>>>>
>>>>>> /opt/hadoop/bin/stop-all.sh
>>>>>> /opt/zookeeper/bin/zkServer.sh stop
>>>>>> /opt/accumulo/bin/stop-all.sh
>>>>>>
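Two things in the sequence above are worth flagging, and both bear on Marc's
question. First, "accumulo init" creates a brand-new instance and is meant to
be run exactly once, not on every start; re-running it after HDFS has lost its
data just produces a fresh, empty instance, which is exactly the "tables are
gone" symptom. Second, the stop order is inverted: Accumulo needs HDFS and
Zookeeper alive to shut down cleanly, so it should be stopped first. A sketch
of the intended usage, reusing the paths from this thread:

    # one-time setup, when the instance is first created
    /opt/hadoop/bin/start-all.sh
    /opt/zookeeper/bin/zkServer.sh start
    /opt/accumulo/bin/accumulo init

    # routine start
    /opt/hadoop/bin/start-all.sh
    /opt/zookeeper/bin/zkServer.sh start
    /opt/accumulo/bin/start-all.sh

    # routine stop: reverse order, Accumulo first, HDFS last
    /opt/accumulo/bin/stop-all.sh
    /opt/zookeeper/bin/zkServer.sh stop
    /opt/hadoop/bin/stop-all.sh
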
>>>>>> On Fri, Jul 27, 2012 at 10:39 AM, John Vines <john.w.vines@ugov.gov> wrote:
>>>>>>
>>>>>>> Are you just doing stop-all.sh and then start-all.sh? Or are you
>>>>>>> running other commands?
>>>>>>>
>>>>>>> On Fri, Jul 27, 2012 at 10:35 AM, Jonathan Hsu <jreucypoda@gmail.com> wrote:
>>>>>>>
>>>>>>>> I don't get any errors.  The tables just don't exist anymore, as if
>>>>>>>> I were starting Accumulo for the first time.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jul 27, 2012 at 10:32 AM, John Vines <john.w.vines@ugov.gov> wrote:
>>>>>>>>
>>>>>>>>> Can you elaborate on how they don't exist? Do you mean you have
>>>>>>>>> errors about files not being found for your table, or every time you
>>>>>>>>> start Accumulo it's like the first time?
>>>>>>>>>
>>>>>>>>> Sent from my phone, so pardon the typos and brevity.
>>>>>>>>> On Jul 27, 2012 10:29 AM, "Jonathan Hsu" <jreucypoda@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hey all,
>>>>>>>>>>
>>>>>>>>>> I have a problem with my Accumulo tables being deleted upon shutdown.
>>>>>>>>>> I currently have Accumulo, Zookeeper, and Hadoop in my /opt directory.
>>>>>>>>>> I'm assuming that somehow my tables are being placed in a tmp directory
>>>>>>>>>> that gets wiped when I shut my computer off.  I'm trying to develop and
>>>>>>>>>> test on my local machine.
>>>>>>>>>>
>>>>>>>>>> What should I change in the conf files or otherwise in order to ensure
>>>>>>>>>> that the tables are not destroyed on shutdown?
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> - Jonathan Hsu
>>>>>>>>>>
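
For the record, the root cause matches the guess in the original message: by
default, dfs.name.dir and dfs.data.dir live under hadoop.tmp.dir, i.e. under
/tmp, which many systems wipe on reboot, taking the HDFS blocks that back the
Accumulo tables with them. Pointing both at persistent directories in
conf/hdfs-site.xml avoids that; the paths below are examples, not required
values:

    <?xml version="1.0"?>
    <!-- hdfs-site.xml: keep HDFS metadata and block data out of /tmp -->
    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/opt/hadoop-data/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/opt/hadoop-data/data</value>
      </property>
      <!-- single-node dev setup: one copy of each block -->
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

After changing these, the namenode has to be formatted once against the new
directory, and accumulo init run once against the resulting empty HDFS.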
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> - Jonathan Hsu
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> - Jonathan Hsu
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> - Jonathan Hsu
>>>>
>>>
>>>
>>
>>
>> --
>> - Jonathan Hsu
>>
>
>


-- 
- Jonathan Hsu
