Subject: Re: Query about "hadoop dfs -cat" in hadoop-0.20.2
From: Lemon Cheng <lemoncmf@gmail.com>
To: Marcos Ortiz <mlortiz@uci.cu>
Cc: mapreduce-user@hadoop.apache.org
Date: Fri, 17 Jun 2011 23:59:25 +0800

The slaves command shows nothing. Am I missing something?

Background: when I installed Hadoop for the first time last month, I
followed the instructions for the MapReduce wordcount example and it
worked. This is the second time I am using it: the computer was
restarted, I ran bin/start-all.sh, and now "hadoop dfs -cat" fails.
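(A quick way to see what bin/start-all.sh actually brought up: on a stock
0.20.2 pseudo-distributed setup it starts five daemons, so a plain

    # list the running Hadoop JVMs on this host
    jps | sort

should show NameNode, DataNode, SecondaryNameNode, JobTracker and
TaskTracker. This is essentially the same check the slaves.sh invocation
below performs per slave host, just without going through ssh.)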
[appuser@localhost hadoop-0.20.2]$ ./bin/slaves.sh jps | grep Datanode | sort
appuser@localhost's password:
[appuser@localhost hadoop-0.20.2]$ ./bin/hadoop dfsadmin -report
Safe mode is ON
Configured Capacity: 470117756928 (437.83 GB)
Present Capacity: 98024734720 (91.29 GB)
DFS Remaining: 98024710144 (91.29 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 470117756928 (437.83 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 372093022208 (346.54 GB)
DFS Remaining: 98024710144 (91.29 GB)
DFS Used%: 0%
DFS Remaining%: 20.85%
Last contact: Fri Jun 17 23:50:27 HKT 2011

-------------------------------------
NameNode 'localhost.localdomain:9000'

NameNode Storage:
Storage Directory              Type             State
/tmp/hadoop-appuser/dfs/name   IMAGE_AND_EDITS  Active
--------------------------------------
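(A note on the "Safe mode is ON" line above: the NameNode keeps HDFS in
safe mode until enough DataNode block reports cover its block map, which
never happens while blocks are missing. A minimal way to inspect and, if
need be, override it; this is a sketch assuming the stock 0.20.2 dfsadmin
options and the same working directory as in the transcript:

    # query the current safe mode state
    ./bin/hadoop dfsadmin -safemode get

    # force the NameNode out of safe mode; note this does NOT bring
    # back missing blocks, it only re-enables namespace changes
    ./bin/hadoop dfsadmin -safemode leave

Leaving safe mode by hand will not fix "hadoop dfs -cat" if no live
DataNode actually holds the file's blocks.)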

Regards,
Lemon


On Sat, Jun 18, 2011 at 12:09 AM, Marcos Ortiz wrote:

> On 06/17/2011 09:51 AM, Lemon Cheng wrote:
>
> Hi,
>
> Thanks for your reply.
> I am not sure about that. How can I verify it?
>
> What are your dfs.tmp.dir and dfs.data.dir values?
>
> You can check the DataNodes' health with bin/slaves.sh jps | grep
> Datanode | sort
>
> What is the output of bin/hadoop dfsadmin -report?
>
> One recommendation I can give you: have at least one NameNode and
> two DataNodes.
>
> Regards
>
>
> I checked localhost:50070; it shows 1 live node and 0 dead nodes.
> And the log "hadoop-appuser-datanode-localhost.localdomain.log" shows:
>
> ************************************************************/
> 2011-06-17 19:59:38,658 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2011-06-17 19:59:46,738 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
> 2011-06-17 19:59:46,749 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
> 2011-06-17 19:59:46,752 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> 2011-06-17 19:59:46,812 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2011-06-17 19:59:46,870 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
> 2011-06-17 19:59:46,871 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
> 2011-06-17 19:59:46,871 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
> 2011-06-17 19:59:46,875 INFO org.mortbay.log: jetty-6.1.14
> 2011-06-17 20:01:45,702 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
> 2011-06-17 20:01:45,709 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2011-06-17 20:01:45,743 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
> 2011-06-17 20:01:45,751 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(localhost.localdomain:50010, storageID=DS-993704729-127.0.0.1-50010-1308296320968, infoPort=50075, ipcPort=50020)
> 2011-06-17 20:01:45,751 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
> 2011-06-17 20:01:45,753 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
> 2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
> 2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
> 2011-06-17 20:01:45,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-993704729-127.0.0.1-50010-1308296320968, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-appuser/dfs/data/current'}
>
> 2011-06-17 20:01:45,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
> 2011-06-17 20:01:45,828 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 11 msecs
> 2011-06-17 20:01:45,833 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
> 2011-06-17 20:56:02,945 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 1 msecs
> 2011-06-17 21:56:02,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 1 msecs
>
>
> On Fri, Jun 17, 2011 at 9:42 PM, Marcos Ortiz wrote:
>
>> On 06/17/2011 07:41 AM, Lemon Cheng wrote:
>>
>> Hi,
>>
>> I am using hadoop-0.20.2. After calling ./start-all.sh, I can run
>> "hadoop dfs -ls".
>> However, when I run "hadoop dfs -cat /usr/lemon/wordcount/input/file01",
>> the error below is shown.
>> I have searched for this problem on the web, but I can't find a
>> solution. Can anyone give a suggestion?
>> Many thanks.
>>
>>
>> 11/06/17 19:27:12 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
>> 11/06/17 19:27:12 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node: java.io.IOException: No live nodes contain current block
>> 11/06/17 19:27:15 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
>> 11/06/17 19:27:15 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node: java.io.IOException: No live nodes contain current block
>> 11/06/17 19:27:18 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
>> 11/06/17 19:27:18 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node: java.io.IOException: No live nodes contain current block
>> 11/06/17 19:27:21 WARN hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
>>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>>         at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>>         at java.io.DataInputStream.read(DataInputStream.java:83)
>>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
>>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
>>         at org.apache.hadoop.fs.FsShell.printToStdout(FsShell.java:114)
>>         at org.apache.hadoop.fs.FsShell.access$100(FsShell.java:49)
>>         at org.apache.hadoop.fs.FsShell$1.process(FsShell.java:352)
>>         at org.apache.hadoop.fs.FsShell$DelayedExceptionThrowing.globAndProcess(FsShell.java:1898)
>>         at org.apache.hadoop.fs.FsShell.cat(FsShell.java:346)
>>         at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1543)
>>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:1761)
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:1880)
>>
>>
>> Regards,
>> Lemon
>>
>> Are you sure that all your DataNodes are online?
>>
>>
>> --
>> Marcos Luís Ortíz Valmaseda
>>  Software Engineer (UCI)
>>  http://marcosluis2186.posterous.com
>>  http://twitter.com/marcosluis2186
>
>
> --
> Marcos Luís Ortíz Valmaseda
>  Software Engineer (UCI)
>  http://marcosluis2186.posterous.com
>  http://twitter.com/marcosluis2186
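(Regarding Marcos's question above about the dfs.tmp.dir and dfs.data.dir
values: a quick way to check them, and whether any block files survived
the reboot, could look like the sketch below. The data-directory path is
taken from the FSDataset line in the quoted DataNode log; the grep assumes
the properties were set explicitly in the conf files at all, since 0.20.2
otherwise defaults dfs.data.dir to ${hadoop.tmp.dir}/dfs/data, i.e. under
/tmp.

    # show any explicit dfs.* directory settings
    grep -B 1 -A 2 'dfs.*dir' conf/hdfs-site.xml conf/core-site.xml

    # check whether block files are still on disk after the reboot
    ls -lR /tmp/hadoop-appuser/dfs/data/current/

Note that the NameNode storage directory in the report above is also under
/tmp (/tmp/hadoop-appuser/dfs/name). If the OS cleans /tmp at boot, both
the HDFS metadata and the blocks go with it, which would match the
symptoms appearing right after a restart.)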
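(For the "Could not obtain block ... No live nodes contain current block"
errors in the quoted trace, fsck can confirm whether the block still has a
live replica anywhere. A sketch, assuming the stock 0.20.2 fsck command
and the file path from the trace:

    # report the file's blocks and which DataNodes hold them
    ./bin/hadoop fsck /usr/lemon/wordcount/input/file01 -files -blocks -locations

If fsck marks the file CORRUPT with a missing block, no registered
DataNode holds the data any more, and re-uploading the input file is the
usual way out.)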