accumulo-dev mailing list archives

From Jim Klucar <klu...@gmail.com>
Subject Re: [External] Re: Need help getting Accumulo running.
Date Fri, 06 Jul 2012 01:47:44 GMT
Looks like Hadoop isn't running correctly: "could only be replicated to 0
nodes" means the NameNode has no live DataNodes to write to. I'd back up and
start poking around there, making sure you can put files in HDFS, see all the
Hadoop monitoring pages, etc.
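A minimal sanity check along those lines might look like this (a sketch, not official tooling; the port and paths assume a default pseudo-distributed Hadoop 0.20.x setup, and the guard makes it safe to paste on a box without Hadoop):

```shell
#!/bin/sh
# Sketch of the HDFS sanity checks suggested above.
check_hdfs() {
  if ! command -v hadoop >/dev/null 2>&1; then
    echo "hadoop not on PATH; run this on the Hadoop machine"
    return 0
  fi

  # 1. Does the NameNode see any live DataNodes? "replicated to 0 nodes"
  #    usually means the answer is no.
  hadoop dfsadmin -report

  # 2. Can we round-trip a file through HDFS?
  echo "hdfs smoke test" > /tmp/hdfs-smoke.txt
  hadoop fs -put /tmp/hdfs-smoke.txt /tmp/hdfs-smoke.txt &&
    hadoop fs -cat /tmp/hdfs-smoke.txt &&
    hadoop fs -rm /tmp/hdfs-smoke.txt

  # 3. The NameNode web UI (http://localhost:50070 by default) should
  #    load and report a nonzero live-node count.
}

check_hdfs
```

If the report shows 0 live DataNodes, check the DataNode log under $HADOOP_HOME/logs; a frequent culprit at this stage is an "Incompatible namespaceIDs" error left over from reformatting the NameNode.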

On Thu, Jul 5, 2012 at 9:44 PM, Park, Jee [USA] <Park_Jee@bah.com> wrote:
> I just tried to initialize an accumulo instance, and this is what I got
> after typing the instance name and password:
>
> 05 18:43:41,085 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
> 05 18:43:41,247 [hdfs.DFSClient] WARN : DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> 05 18:43:41,264 [hdfs.DFSClient] WARN : Error Recovery for block null bad datanode[0] nodes == null
> 05 18:43:41,264 [hdfs.DFSClient] WARN : Could not get block locations. Source file "/accumulo/tables/!0/root_tablet/00000_00000.rf" - Aborting...
> 05 18:43:41,265 [util.Initialize] FATAL: Failed to initialize filesystem
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> 05 18:43:41,309 [hdfs.DFSClient] ERROR: Exception closing file /accumulo/tables/!0/root_tablet/00000_00000.rf : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> -----Original Message-----
> From: Jim Klucar [mailto:klucar@gmail.com]
> Sent: Thursday, July 05, 2012 9:41 PM
> To: dev@accumulo.apache.org
> Subject: Re: [External] Re: Need help getting Accumulo running.
>
> jps -m will show more. The mains are typically the Accumulo processes.
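For what it's worth, pairing `-l` with `-m` turns those anonymous "Main" entries into readable Accumulo roles; a guarded sketch (the sample output below is illustrative, not from a real run):

```shell
# Plain jps prints a bare "Main" for every Accumulo process; -l adds the
# fully-qualified main class and -m the arguments passed to main(), so
# the Accumulo roles (tserver, master, monitor, ...) become visible,
# e.g. lines like "5848 org.apache.accumulo.start.Main tserver".
if command -v jps >/dev/null 2>&1; then
  jps -lm
else
  echo "jps not found (it ships with the JDK)"
fi
```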
>
> On Thu, Jul 5, 2012 at 9:37 PM, Park, Jee [USA] <Park_Jee@bah.com> wrote:
>> No, I didn't use any non-ASCII characters. Also, when I use the command
>> jps, the DataNode often disappears and there happen to be 5-6 "Main"
>> processes, like so:
>>
>> ~$ jps
>> 6283 Main
>> 4369 Main
>> 6910 Jps
>> 2781 NameNode
>> 3193 SecondaryNameNode
>> 5848 Main
>> 5425 Main
>> 3465 TaskTracker
>> 4366 Main
>> 4990 Main
>> 3260 JobTracker
>>
>> -----Original Message-----
>> From: Jim Klucar [mailto:klucar@gmail.com]
>> Sent: Thursday, July 05, 2012 9:36 PM
>> To: dev@accumulo.apache.org
>> Subject: Re: [External] Re: Need help getting Accumulo running.
>>
>> That's a weird one then. You can shut down Accumulo and re-run the
>> init to see if you just mistyped the password or something. By any
>> chance did you use non-ASCII characters in your password?
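That shutdown-and-re-init can be sketched as below (assuming ACCUMULO_HOME is set and Accumulo's HDFS directory is the default /accumulo; note that re-initializing discards anything already stored, and the guard lets it no-op off-cluster):

```shell
#!/bin/sh
# Hypothetical re-init sequence for a fresh start.
reinit_accumulo() {
  if command -v hadoop >/dev/null 2>&1 && [ -n "$ACCUMULO_HOME" ]; then
    "$ACCUMULO_HOME/bin/stop-all.sh"    # stop tservers, master, etc.
    hadoop fs -rmr /accumulo            # init refuses to run while this dir exists
    "$ACCUMULO_HOME/bin/accumulo" init  # prompts for instance name and a new root password
    "$ACCUMULO_HOME/bin/start-all.sh"
  else
    echo "skipping: hadoop and/or ACCUMULO_HOME not available here"
  fi
}

reinit_accumulo
```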
>>
>> On Thu, Jul 5, 2012 at 9:31 PM, Park, Jee [USA] <Park_Jee@bah.com> wrote:
>>> Yes, I can
>>>
>>> -----Original Message-----
>>> From: Jim Klucar [mailto:klucar@gmail.com]
>>> Sent: Thursday, July 05, 2012 9:31 PM
>>> To: dev@accumulo.apache.org
>>> Subject: Re: [External] Re: Need help getting Accumulo running.
>>>
>>> Can you see the monitor page at http://localhost:50095 ?
>>>
>>>
>>> On Thu, Jul 5, 2012 at 9:26 PM, Park, Jee [USA] <Park_Jee@bah.com> wrote:
>>>> That's exactly what I typed in for the password, however it is still
>>>> giving me that error. Also, if it makes any difference, I have not
>>>> changed anything in the accumulo-site.xml file.
>>>>
>>>> -----Original Message-----
>>>> From: Jim Klucar [mailto:klucar@gmail.com]
>>>> Sent: Thursday, July 05, 2012 9:25 PM
>>>> To: dev@accumulo.apache.org
>>>> Subject: Re: [External] Re: Need help getting Accumulo running.
>>>>
>>>> It is asking for the password you set up for the Accumulo root user,
>>>> not the machine root user. It's whatever you typed in when you ran
>>>> the $ACCUMULO_HOME/bin/accumulo init command.
>>>>
>>>> On Thu, Jul 5, 2012 at 9:21 PM, Park, Jee [USA] <Park_Jee@bah.com> wrote:
>>>>> Hello,
>>>>> I currently have Hadoop, ZooKeeper, and Accumulo running; however, I
>>>>> keep getting the following error when trying to start the Accumulo
>>>>> shell:
>>>>>
>>>>> ~$ accumulo/bin/accumulo shell -u root
>>>>> Enter current password for 'root'@'accumulo': ********
>>>>> 05 18:18:26,233 [shell.Shell] ERROR:
>>>>> org.apache.accumulo.core.client.AccumuloSecurityException: Error
>>>>> BAD_CREDENTIALS - Username or Password is Invalid
>>>>>
>>>>> Thanks in advance.
>>>>> -----Original Message-----
>>>>> From: David Medinets [mailto:david.medinets@gmail.com]
>>>>> Sent: Friday, June 29, 2012 7:37 PM
>>>>> To: dev@accumulo.apache.org
>>>>> Subject: [External] Re: Need help getting Accumulo running.
>>>>>
>>>>> oh... I think you missed a few steps from the gist:
>>>>>
>>>>> $ cd ~
>>>>> $ export TAR_DIR=~/workspace/accumulo/src/assemble/target
>>>>> $ tar xvzf $TAR_DIR/accumulo-1.5.0-incubating-SNAPSHOT-dist.tar.gz
>>>>>
>>>>> # Add the following to your .bashrc file.
>>>>> $ export ACCUMULO_HOME=~/accumulo-1.5.0-incubating-SNAPSHOT
>>>>>
>>>>> $ cd $ACCUMULO_HOME/conf
>>>>>
>>>>> These are the steps where you unpack the newly-created gz file into
>>>>> your home directory. It seems like you are running Accumulo from
>>>>> the source code directory. Also notice that I wrote those steps for
>>>>> v1.5.0, which might be different from v1.4.0.
>>>>>
>>>>> On Fri, Jun 29, 2012 at 2:58 PM, Miguel Pereira
>>>>> <miguelapereira1@gmail.com>
>>>>> wrote:
>>>>>> Hi Jee,
>>>>>>
>>>>>> I used that same guide to install Accumulo, but I used this guide
>>>>>> to install Hadoop:
>>>>>>
>>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>>
>>>>>> Furthermore, here are the steps I took to install Accumulo, where I
>>>>>> used version 1.4.0 and the standalone conf. Please note you also
>>>>>> need to install a Java JDK and set your JAVA_HOME; I used JDK 1.7.
>>>>>>
>>>>>> Setting up Accumulo
>>>>>>
>>>>>>
>>>>>>   - git clone git://github.com/apache/accumulo.git
>>>>>>   - cd accumulo
>>>>>>   - git checkout tags/1.4.0 -b 1.4.0
>>>>>>   - mvn package && mvn assembly:single -N   // this can take a while
>>>>>>   - cp conf/examples/512MB/standalone/* conf
>>>>>>   - vi accumulo-env.sh
>>>>>>
>>>>>>
>>>>>> test -z "$JAVA_HOME" && export
>>>>>> JAVA_HOME=/home/hduser/pkg/jdk1.7.0_04
>>>>>> test -z "$HADOOP_HOME" && export
>>>>>> HADOOP_HOME=/home/hduser/developer/workspace/hadoop
>>>>>> test -z "$ZOOKEEPER_HOME" && export
>>>>>> ZOOKEEPER_HOME=/home/hduser/developer/workspace/zookeeper-3.3.5
>>>>>>
>>>>>>   - vi accumulo-site.xml
>>>>>>
>>>>>>
>>>>>>    modify user, password, secret, memory
>>>>>>
>>>>>>
>>>>>>   - bin/accumulo init
>>>>>>   - bin/start-all.sh
>>>>>>   - bin/accumulo shell -u root
>>>>>>
>>>>>> If you get the shell up, you know you're good.
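The accumulo-site.xml edits mentioned above look roughly like this (property names per the 1.4 example configs; the values here are placeholders, and your copy of the file may differ):

```xml
<!-- Sketch of the accumulo-site.xml edits; values are placeholders. -->
<property>
  <name>instance.secret</name>
  <value>CHANGE_ME</value>  <!-- shared secret; must match on all servers -->
</property>
<property>
  <name>trace.user</name>
  <value>root</value>
</property>
<property>
  <name>trace.password</name>
  <value>CHANGE_ME</value>  <!-- the root password you give to accumulo init -->
</property>
```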
>>>>>>
>>>>>>
>>>>>> On Fri, Jun 29, 2012 at 2:49 PM, John Vines
>>>>>> <john.w.vines@ugov.gov> wrote:
>>>>>>
>>>>>>> We currently don't really support running on Windows. I'm sure
>>>>>>> there are ways to get it running with Cygwin, but our efforts are
>>>>>>> better spent in other directions for now.
>>>>>>>
>>>>>>> As for getting it going in Ubuntu, I haven't seen that guide before.
>>>>>>> Can you let me know where it broke?
>>>>>>>
>>>>>>> For the record, when I was developing ACCUMULO-404, I was working
>>>>>>> in Ubuntu VMs and I used Apache Bigtop and our debians to
>>>>>>> facilitate installation.
>>>>>>> They don't do everything for you, but I think if you use 1.4.1
>>>>>>> (not sure if I got the debs into 1.4.0), it should diminish the
>>>>>>> installation work you must do to some minor configuration.
>>>>>>>
>>>>>>> John
>>>>>>>
>>>>>>> On Fri, Jun 29, 2012 at 2:28 PM, Park, Jee [USA]
>>>>>>> <Park_Jee@bah.com> wrote:
>>>>>>>
>>>>>>> > Hi,
>>>>>>> >
>>>>>>> > I had trouble getting Accumulo to work on a VM instance of
>>>>>>> > Ubuntu (11.04) using this guide: https://gist.github.com/1535657.
>>>>>>> >
>>>>>>> > Does anyone have a step-by-step guide to get it running on
>>>>>>> > either Ubuntu or Windows 7?
>>>>>>> >
>>>>>>> > Thanks!
>>>>>>> >
>>>>>>>
