From: Marcos Ortiz <mlortiz@uci.cu>
Date: Fri, 17 Jun 2011 09:12:22 -0430
To: mapreduce-user@hadoop.apache.org
CC: Lemon Cheng
Subject: Re: Query about "hadoop dfs -cat" in hadoop-0.20.2

On 06/17/2011 07:41 AM, Lemon Cheng wrote:
> Hi,
>
> I am using hadoop-0.20.2. After running ./start-all.sh, I can run
> "hadoop dfs -ls" successfully.
> However, when I run "hadoop dfs -cat /usr/lemon/wordcount/input/file01",
> I get the error shown below.
> I have searched the web for this problem, but I can't find a solution.
> Can anyone give a suggestion?
> Many thanks.
>
>
>
> 11/06/17 19:27:12 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
> 11/06/17 19:27:12 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node:  java.io.IOException: No live nodes contain current block
> 11/06/17 19:27:15 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
> 11/06/17 19:27:15 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node:  java.io.IOException: No live nodes contain current block
> 11/06/17 19:27:18 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
> 11/06/17 19:27:18 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node:  java.io.IOException: No live nodes contain current block
> 11/06/17 19:27:21 WARN hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>         at java.io.DataInputStream.read(DataInputStream.java:83)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
>         at org.apache.hadoop.fs.FsShell.printToStdout(FsShell.java:114)
>         at org.apache.hadoop.fs.FsShell.access$100(FsShell.java:49)
>         at org.apache.hadoop.fs.FsShell$1.process(FsShell.java:352)
>         at org.apache.hadoop.fs.FsShell$DelayedExceptionThrowing.globAndProcess(FsShell.java:1898)
>         at org.apache.hadoop.fs.FsShell.cat(FsShell.java:346)
>         at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1543)
>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:1761)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:1880)
>
>
> Regards,
> Lemon

Are you sure that all your DataNodes are online?
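
You can check quickly from the NameNode machine. A sketch, assuming the stock 0.20.2 layout and that you run the commands from the top of your Hadoop install (the directory above bin/):

    # Ask the NameNode how many DataNodes it considers live or dead
    bin/hadoop dfsadmin -report

    # On each DataNode machine, confirm the DataNode JVM is actually running
    jps

If the report shows no live DataNodes, look at the DataNode logs under logs/; on 0.20.x a common cause is an "Incompatible namespaceIDs" error after the NameNode has been reformatted.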

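If the DataNodes are up, fsck can tell you whether HDFS still knows of a replica for that block (the path is the file from your log):

    bin/hadoop fsck /usr/lemon/wordcount/input/file01 -files -blocks -locations

If fsck reports the file as CORRUPT with missing blocks, no live DataNode holds that data any more, and the simplest fix is to upload the file again with "hadoop dfs -put".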

-- 
Marcos Luís Ortíz Valmaseda
 Software Engineer (UCI)
 http://marcosluis2186.posterous.com
 http://twitter.com/marcosluis2186
