Mailing-List: core-user@hadoop.apache.org
Subject: All datanodes getting marked as dead
Date: Sun, 15 Jun 2008 06:47:10 -0700
From: "Murali Krishna"
To: core-user@hadoop.apache.org

Hi,

I was running an M/R job on a 90+ node cluster. While the job was running, all of the data nodes seem to have become dead. The only major error I saw in the name node log is 'java.io.IOException: Too many open files'. The job may try to open thousands of files. After some time, there are a lot of exceptions saying 'could only be replicated to 0 nodes instead of 1'. So it looks like none of the data nodes are responding now; the job failed since it couldn't write. I can see the following in the data node logs:

    2008-06-15 02:38:28,477 WARN org.apache.hadoop.dfs.DataNode: java.net.SocketTimeoutException: timed out waiting for rpc response
        at org.apache.hadoop.ipc.Client.call(Client.java:484)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
        at org.apache.hadoop.dfs.$Proxy0.sendHeartbeat(Unknown Source)

All processes (datanodes + namenode) are still running, but the dfs health status page shows all nodes as dead.

Some questions:
* Is this kind of behavior expected when the name node runs out of file handles?
* Why are the data nodes not able to send the heartbeat (is it related to the name node not having enough handles)?
* What happens to the data in HDFS when all the data nodes fail to send the heartbeat and the name node is in this state?
* Is the solution just to increase the number of file handles and restart the cluster?

Thanks,
Murali
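[For the last question, a common first step is to check the per-process open-file limit for the user running the namenode. The commands below are a minimal sketch, not from the original post; the `hadoop` user name, the limit value, and the `pgrep` pattern are assumptions to adapt for a specific cluster.]

```shell
#!/bin/sh
# Show the soft open-file limit for the current shell; on many
# 2008-era Linux defaults this is 1024, which a busy namenode
# can exhaust.
ulimit -n

# Count file descriptors currently held by the namenode process,
# if one is running (the pgrep pattern is an assumption; 'jps'
# also lists Hadoop daemon pids).
NN_PID=$(pgrep -f 'org.apache.hadoop.dfs.NameNode' | head -n 1)
if [ -n "$NN_PID" ]; then
    ls "/proc/$NN_PID/fd" | wc -l
fi

# To raise the limit persistently, lines like these would go in
# /etc/security/limits.conf (illustrative values; the daemon user
# must log in again for them to take effect):
#   hadoop  soft  nofile  16384
#   hadoop  hard  nofile  16384
```

After raising the limit, the namenode would need to be restarted under the new limit for it to take effect.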