accumulo-dev mailing list archives

From Todd Lipcon <t...@cloudera.com>
Subject Re: WAL issues in 1.5.0
Date Tue, 13 Aug 2013 22:09:52 GMT
Out of disk space? HDFS won't write to a volume if you don't have 5x the
block size available.
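
For reference, a minimal sketch of checking that condition yourself (this is not
HDFS or Accumulo code; the data directory path and block size below are
assumptions, 64 MB being the Hadoop 1.x default dfs.block.size):

import java.io.File;

public class DataDirSpaceCheck {
    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;   // assumed dfs.block.size (64 MB default in Hadoop 1.x)
        long required = 5 * blockSize;        // the 5x threshold described above
        String[] dataDirs = { "/tmp/hadoop/dfs/data" };  // assumed dfs.data.dir value

        for (String dir : dataDirs) {
            long freeMb = new File(dir).getUsableSpace() / (1024 * 1024);
            System.out.println(dir + ": " + freeMb + " MB free, need at least "
                + (required / (1024 * 1024)) + " MB");
        }
    }
}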

-Todd

On Tue, Aug 13, 2013 at 3:06 PM, John Vines <vines@apache.org> wrote:

> I had a few instances of it before, but I was never able to reliably
> reproduce it in a non-virtual environment. Except today: I had a fresh, clean
> checkout of first 1.5.1-SNAPSHOT and then 1.5.0 from git, with a fresh hdfs
> directory, and I got a never-ending stream of
>
> "       java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException:
> java.io.IOException: File
> /accumulo/wal/127.0.0.1+9997/86c798b2-2de3-4860-ba84-645cc9d38cc7
> could only be replicated to 0 nodes, instead of 1"
>
>
> Normally when this happens, restarting the namenode is all I need to do to
> fix it, but not this time. I'm willing to bet it will be fine when I restart
> my computer. But while this is happening, I'm seeing the number of files in
> hdfs under the wal directory grow without bound. I'm wondering if we have an
> overly time-sensitive constraint, or if there is a check we should do before
> giving up. I am seeing that error echoed in the namenode, so I'm not quite
> sure. This is on hadoop 1.0.4.
>
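
As a sketch of the kind of bounded retry the question above gets at: this is
purely illustrative, not Accumulo's actual logger code, and the WAL path,
attempt limit, and backoff values are assumptions.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WalCreateRetrySketch {
    // Retry WAL file creation a few times with increasing backoff instead of
    // looping tightly, so a transient "replicated to 0 nodes" condition has a
    // chance to clear before the writer gives up.
    public static FSDataOutputStream createWithRetry(FileSystem fs, Path walPath)
            throws IOException, InterruptedException {
        int maxAttempts = 5;     // assumed limit
        long backoffMs = 1000;   // assumed initial backoff
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return fs.create(walPath);
            } catch (IOException e) {  // e.g. the RemoteException quoted above
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMs);
                    backoffMs *= 2;    // back off exponentially between attempts
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        createWithRetry(fs, new Path("/accumulo/wal/example")).close();
    }
}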



-- 
Todd Lipcon
Software Engineer, Cloudera
