accumulo-dev mailing list archives

From John Vines <vi...@apache.org>
Subject Re: File X could only be replicated to 0 nodes instead of 1
Date Sun, 12 May 2013 03:54:53 GMT
Do you mind explicitly pointing out what was wrong and how you fixed it so
when people search for this issue they can easily find the resolution?

Sent from my phone, please pardon the typos and brevity.
On May 11, 2013 11:08 PM, "David Medinets" <david.medinets@gmail.com> wrote:

> Resolution: I had some part of the installation out of order. A working
> installation script for v1.4.3 is in the v1.4.3 directory at
> https://github.com/medined/accumulo-at-home
> (https://github.com/medined/accumulo-at-home/tree/master/1.4.3).
>
>
> On Sat, May 11, 2013 at 11:12 AM, Eric Newton <eric.newton@gmail.com>
> wrote:
>
> > Check your datanode logs... it's probably not running.
> >
> > -Eric
> >
> >
> > On Fri, May 10, 2013 at 1:53 PM, David Medinets <
> david.medinets@gmail.com
> > >wrote:
> >
> > > I tried an install of 1.4.3 and am seeing the following message when I
> > > run 'accumulo init', without any logs being generated. Both hadoop and
> > > zookeeper seem to be running OK. Any ideas where I should look to
> > > resolve this?
> > >
> > > 2013-05-10 13:43:54,894 [hdfs.DFSClient] WARN : DataStreamer Exception:
> > > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> > > /user/accumulo/accumulo/tables/!0/root_tablet/00000_00000.rf could only
> > > be replicated to 0 nodes, instead of 1
> > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
> > >     at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
> > >     at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >     at java.lang.reflect.Method.invoke(Method.java:616)
> > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> > >     at java.security.AccessController.doPrivileged(Native Method)
> > >     at javax.security.auth.Subject.doAs(Subject.java:416)
> > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> > >
> > >     at org.apache.hadoop.ipc.Client.call(Client.java:1070)
> > >     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> > >     at sun.proxy.$Proxy1.addBlock(Unknown Source)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >     at java.lang.reflect.Method.invoke(Method.java:616)
> > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> > >     at sun.proxy.$Proxy1.addBlock(Unknown Source)
> > >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
> > >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
> > >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
> > >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
> >
>
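The "could only be replicated to 0 nodes, instead of 1" error means the NameNode could not find any live DataNode to place a block replica on, which is why Eric's first suggestion is to check whether the DataNode is running. One quick confirmation is the live-node count printed by `hadoop dfsadmin -report` (the Hadoop 1.x command in use at the time). The sketch below parses that count from a report; the sample report text is a hypothetical fragment, not output captured from this thread's cluster:

```python
import re

def live_datanodes(report: str) -> int:
    """Extract the live-DataNode count from `hadoop dfsadmin -report` output.

    Hadoop 1.x prints a summary line of the form:
        Datanodes available: 0 (1 total, 1 dead)
    """
    m = re.search(r"Datanodes available:\s*(\d+)", report)
    if m is None:
        raise ValueError("no 'Datanodes available' line found in report")
    return int(m.group(1))

# Hypothetical report fragment for a cluster whose lone DataNode is down;
# a real report contains many more fields.
sample = """Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
Datanodes available: 0 (1 total, 1 dead)
"""

if live_datanodes(sample) == 0:
    # Zero live DataNodes is exactly the condition that produces the
    # "replicated to 0 nodes, instead of 1" exception during 'accumulo init'.
    print("No live DataNodes: check the datanode logs and restart it")
```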
