From: hadoop hive <hadoophive@gmail.com>
To: user@hadoop.apache.org
Date: Fri, 7 Nov 2014 08:06:18 +0530
Subject: Re: How can I be able to balance the disk?

Hey,

1. Stop the datanode.
2. Copy blocks from one disk to another, keeping the same directory path.
3. Run fsck so the change is reflected in the NameNode metadata.

You can find the steps at www.bigdataboard.in

Thanks,
Vikas Srivastava

On Nov 7, 2014 7:56 AM, "cho ju il" <tjstory@kgrid.co.kr> wrote:

> My Hadoop cluster versions: Hadoop 1.1.2 and Hadoop 2.4.1.
>
> The disk usage of a datanode is unbalanced.
> I guess the cause is one of the disks being at 100%, for example:
>
> /disk01 100%
> /disk02 45%
> /disk03 70%
>
> Is my guess correct? If so, how can I balance the disks?
>
> **** upload application, hadoop client log
>
> java.io.IOException: All datanodes [server:port] are bad. Aborting...
>
> **** datanode log
>
> 2014-11-01 17:47:02,820 DataStreamer Exception: java.io.IOException: Connection reset by peer
>         at sun.nio.ch.FileDispatcher.write0(Native Method)
>         at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
>         at sun.nio.ch.IOUtil.write(IOUtil.java:40)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
>         at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>         at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>         at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3083)
>
> 2014-11-01 17:47:02,821 Error Recovery for blk_-7118739414552476963_15341530 bad datanode[0] [server:port]
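The manual move in step 2 of the reply above can be sketched as a shell session. This is a hypothetical illustration only: real block directories live under whatever dfs.data.dir (Hadoop 1.x) / dfs.datanode.data.dir (2.x) is configured to, so the /tmp paths, subdir name, and block IDs below are dummy placeholders that simulate the layout.

```shell
# Simulated layout: /tmp/disk01 stands in for the full disk,
# /tmp/disk02 for the emptier one. Real paths come from dfs.data.dir.
rm -rf /tmp/disk01 /tmp/disk02
mkdir -p /tmp/disk01/dfs/data/current/subdir12
mkdir -p /tmp/disk02/dfs/data/current

# A block file and its checksum .meta file must always move together.
echo data > /tmp/disk01/dfs/data/current/subdir12/blk_123
echo meta > /tmp/disk01/dfs/data/current/subdir12/blk_123_1001.meta

# With the datanode STOPPED, move the whole subdirectory to the same
# relative path on the other disk, so the datanode's block scan finds
# the blocks again on restart.
mv /tmp/disk01/dfs/data/current/subdir12 /tmp/disk02/dfs/data/current/

ls /tmp/disk02/dfs/data/current/subdir12
```

After restarting the datanode, running `hadoop fsck /` against the cluster can confirm that no blocks were lost in the move.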