hadoop-common-user mailing list archives

From He Chen <airb...@gmail.com>
Subject Re: Can not upload local file to HDFS
Date Mon, 27 Sep 2010 15:34:02 GMT
Thanks, but I think that goes a bit too far from the problem itself.

On Sun, Sep 26, 2010 at 11:43 AM, Nan Zhu <zhunansjtu@gmail.com> wrote:

> Have you checked the log files in the log directory?
>
> I usually find important information there.
>
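> For reference, the daemon logs normally land under $HADOOP_HOME/logs
> (a minimal sketch, assuming the default log directory and file naming;
> adjust the paths for your install):
>
>     # list the per-daemon logs (hadoop-<user>-<daemon>-<host>.log)
>     ls $HADOOP_HOME/logs/
>     # look at the tail of the datanode log on a suspect node
>     tail -n 200 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
>     # search all daemon logs for errors and exceptions
>     grep -iE "error|exception" $HADOOP_HOME/logs/*.log
>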
> I suggest you recompile Hadoop with ant, since the mapred daemons don't
> work either.
>
> Nan
>
> On Sun, Sep 26, 2010 at 7:29 PM, He Chen <airbots@gmail.com> wrote:
>
> > The problem is that every datanode may be listed in the error report.
> > Does that mean all my datanodes are bad?
> >
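> > One way to see how the namenode currently views the datanodes (a
> > sketch, assuming the hadoop binary is on the PATH):
> >
> >     # summary of live/dead datanodes and per-node capacity
> >     hadoop dfsadmin -report
> >     # walk the namespace and report block replicas and their locations
> >     hadoop fsck / -files -blocks -locations
> >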
> > One thing I forgot to mention: I cannot use start-all.sh and
> > stop-all.sh to start and stop the dfs and mapred processes on my
> > cluster, but the jobtracker and namenode web interfaces still work.
> >
> > I think I can solve this by ssh-ing to every node, killing the current
> > Hadoop processes, and restarting them. That should also solve the
> > previous problem (in my opinion). But I really want to know why HDFS
> > reports these errors.
> >
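> > For the record, the per-node restart might look like this (a sketch;
> > "node1" is a hypothetical hostname, and it assumes HADOOP_HOME is set
> > in the remote shell):
> >
> >     # stop the worker daemons on one node
> >     ssh node1 '$HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker'
> >     ssh node1 '$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode'
> >     # confirm no Hadoop JVMs are left, then start them again
> >     ssh node1 jps
> >     ssh node1 '$HADOOP_HOME/bin/hadoop-daemon.sh start datanode'
> >     ssh node1 '$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker'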
> >
> > On Sat, Sep 25, 2010 at 11:20 PM, Nan Zhu <zhunansjtu@gmail.com> wrote:
> >
> > > Hi Chen,
> > >
> > > It seems that you have a bad datanode? Maybe you should reformat it?
> > >
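> > > If you do reformat, it would be something like this (a sketch; note
> > > that formatting wipes all data in HDFS, so only do it as a last
> > > resort, with HDFS stopped first):
> > >
> > >     # reformat the namenode metadata, destroying the filesystem
> > >     hadoop namenode -format
> > >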
> > > Nan
> > >
> > > On Sun, Sep 26, 2010 at 10:42 AM, He Chen <airbots@gmail.com> wrote:
> > >
> > > > Hello Neil,
> > > >
> > > > It happens no matter how big the file is; it always reports this
> > > > error. The file sizes range from 10 KB to 100 MB.
> > > >
> > > > On Sat, Sep 25, 2010 at 6:08 PM, Neil Ghosh <neil.ghosh@gmail.com> wrote:
> > > >
> > > > > How big is the file? Did you try formatting the namenode and datanode?
> > > > >
> > > > > On Sun, Sep 26, 2010 at 2:12 AM, He Chen <airbots@gmail.com> wrote:
> > > > >
> > > > > > Hello everyone,
> > > > > >
> > > > > > I cannot upload a local file to HDFS.
> > > > > >
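> > > > > > The upload command is something like this (a sketch; the local
> > > > > > and HDFS paths here are hypothetical):
> > > > > >
> > > > > >     # copy a local file into HDFS
> > > > > >     hadoop fs -put /tmp/sample.txt /user/chen/sample.txt
> > > > > >
> > > > > > It gives the following errors:
> > > > > >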
> > > > > > WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block
> > > > > > blk_-236192853234282209_419415 java.io.EOFException
> > > > > >         at java.io.DataInputStream.readFully(DataInputStream.java:197)
> > > > > >         at java.io.DataInputStream.readLong(DataInputStream.java:416)
> > > > > >         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2397)
> > > > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block
> > > > > > blk_-236192853234282209_419415 bad datanode[0] 192.168.0.23:50010
> > > > > > 10/09/25 15:38:25 WARN hdfs.DFSClient: Error Recovery for block
> > > > > > blk_-236192853234282209_419415 in pipeline 192.168.0.23:50010,
> > > > > > 192.168.0.39:50010: bad datanode 192.168.0.23:50010
> > > > > > Any response will be appreciated!
> > > > > >
> > > > > >



-- 
Best Wishes!
With best regards!

--
Chen He
(402)613-9298
Ph.D. student, CSE Dept.
Research Assistant, Holland Computing Center
University of Nebraska-Lincoln
Lincoln NE 68588
