hadoop-common-user mailing list archives

From Alex Loddengaard <a...@cloudera.com>
Subject Re: HDFS out of space
Date Mon, 22 Jun 2009 20:30:09 GMT
Are you seeing any exceptions because of the disk being at 99% capacity?

Hadoop should do something sane here and write new blocks to the disk with
more free capacity.  That said, keeping the disks balanced is ideal.  As far
as I know, there is no way to rebalance an individual DataNode's drives:
Hadoop simply round-robins block writes across the configured data
directories.
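For reference, the round-robin writes happen across whatever directories are listed in dfs.data.dir (the property name used in Hadoop of this era).  A typical hdfs-site.xml entry for the two mounts described below might look like this; the subdirectory paths are illustrative, not the poster's actual layout:

```xml
<!-- hdfs-site.xml: the DataNode stripes new block writes across these
     directories in round-robin order; it does not rebalance existing
     blocks between them. -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt/hdfs/data,/mnt2/hdfs/data</value>
</property>
```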


On Mon, Jun 22, 2009 at 10:12 AM, Kris Jirapinyo <kjirapinyo@biz360.com> wrote:

> Hi all,
>    How does one handle a mount running out of space for HDFS?  We have two
> disks mounted on /mnt and /mnt2 respectively on one of the machines that
> are
> used for HDFS, and /mnt is at 99% while /mnt2 is at 30%.  Is there a way to
> tell the machine to balance itself out?  I know for the cluster, you can
> balance it using start-balancer.sh but I don't think that it will tell the
> individual machine to balance itself out.  Our "hack" right now would be
> just to delete the data on /mnt, since we have replication of 3x, we should
> be OK.  But I'd prefer not to do that.  Any thoughts?
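The start-balancer.sh script mentioned above works across DataNodes, not within one: each node's utilization is compared to the cluster-wide average, and nodes deviating by more than a threshold (10% by default) have blocks moved.  A rough sketch of that per-node check in Python (function and variable names are ours, not Hadoop's):

```python
def node_needs_balancing(node_used, node_capacity,
                         cluster_used, cluster_capacity,
                         threshold_pct=10.0):
    """Return True if this node's utilization deviates from the
    cluster-wide average by more than threshold_pct percentage points.
    This mirrors the balancer's node-level criterion, not Hadoop code."""
    node_util = 100.0 * node_used / node_capacity
    cluster_util = 100.0 * cluster_used / cluster_capacity
    return abs(node_util - cluster_util) > threshold_pct

# A 99%-full node trips the check when the rest of the cluster sits
# around 30% utilization:
print(node_needs_balancing(99, 100, 130, 400))  # → True
```

Note that this logic runs only at the node level; nothing analogous existed per-disk in Hadoop at the time, which is why the options in this thread come down to deleting replicated data or rewriting it.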
