Subject: Re: Can not upload local file to HDFS
From: He Chen
To: common-user@hadoop.apache.org
Date: Sun, 26 Sep 2010 06:29:03 -0500

The problem is that every datanode may be listed in the error report. Does that mean all my datanodes are bad?

One thing I forgot to mention: I cannot use start-all.sh and stop-all.sh to start and stop the dfs and mapred processes on my cluster, but the jobtracker and namenode web interfaces still work.

I think I can solve this by ssh-ing to every node, killing the current hadoop processes, and restarting them. That should also fix the previous problem (in my opinion). But I really want to know why HDFS reports these errors in the first place.

On Sat, Sep 25, 2010 at 11:20 PM, Nan Zhu wrote:

> Hi Chen,
>
> It seems that you have a bad datanode? Maybe you should reformat it?
>
> Nan
>
> On Sun, Sep 26, 2010 at 10:42 AM, He Chen wrote:
>
> > Hello Neil
> >
> > No matter how big the file is, it always reports this error. The file
> > size ranges from 10KB to 100MB.
> >
> > On Sat, Sep 25, 2010 at 6:08 PM, Neil Ghosh wrote:
> >
> > > How big is the file? Did you try formatting the namenode and datanode?
> > >
> > > On Sun, Sep 26, 2010 at 2:12 AM, He Chen wrote:
> > >
> > > > Hello everyone
> > > >
> > > > I cannot load a local file to HDFS.
> > > > It gave the following errors:
> > > >
> > > > WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for
> > > > block blk_-236192853234282209_419415 java.io.EOFException
> > > >     at java.io.DataInputStream.readFully(DataInputStream.java:197)
> > > >     at java.io.DataInputStream.readLong(DataInputStream.java:416)
> > > >     at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2397)
> > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block
> > > > blk_-236192853234282209_419415 bad datanode[0] 192.168.0.23:50010
> > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block
> > > > blk_-236192853234282209_419415 in pipeline 192.168.0.23:50010,
> > > > 192.168.0.39:50010: bad datanode 192.168.0.23:50010
> > > >
> > > > Any response will be appreciated!
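[Archive note] The per-node kill-and-restart workaround described at the top of this message can be sketched as a small shell loop. This is a minimal sketch, not the poster's actual script: the node addresses are taken from the error log above, HADOOP_HOME is a hypothetical placeholder, and it is shown in dry-run form so nothing is actually killed.

```shell
#!/bin/sh
# Dry-run sketch of restarting Hadoop daemons node by node.
# NODES and HADOOP_HOME are placeholder values, not real cluster config.
HADOOP_HOME=/opt/hadoop
NODES="192.168.0.23 192.168.0.39"
DRY_RUN=1

for node in $NODES; do
  # Kill any running Hadoop JVMs on the node, then restart the datanode
  # with the per-daemon control script shipped in bin/.
  cmd="ssh $node \"pkill -f org.apache.hadoop; $HADOOP_HOME/bin/hadoop-daemon.sh start datanode\""
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "$cmd"   # print the command instead of executing it
  else
    eval "$cmd"
  fi
done
```

Before and after such a restart, `hadoop dfsadmin -report` can be used to check which datanodes the namenode actually considers live.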