hadoop-common-user mailing list archives

From lohit <lohit...@yahoo.com>
Subject Re: hadoop file system error
Date Thu, 26 Jun 2008 15:44:42 GMT
Hi Roman,

Which version of Hadoop are you running? And do you see any errors or stack traces in the logs?
Can you check $HADOOP_LOG_DIR/*-datanode-*.log and $HADOOP_LOG_DIR/*-namenode-*.log?

Can you also make sure the NameNode and DataNode are running?
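[A quick way to do the log check suggested above is to grep the DataNode and NameNode logs for errors. This is a sketch only; the log directory and file-name pattern are assumptions based on the default 0.1x layout, so adjust HADOOP_LOG_DIR for your install.]

```shell
# Assumption: logs live under $HADOOP_LOG_DIR with the default
# hadoop-<user>-<daemon>-<host>.log naming scheme.
HADOOP_LOG_DIR="${HADOOP_LOG_DIR:-/var/log/hadoop}"

# Show the most recent errors/exceptions from the DataNode and NameNode logs.
grep -iE 'exception|error' \
    "$HADOOP_LOG_DIR"/*-datanode-*.log \
    "$HADOOP_LOG_DIR"/*-namenode-*.log 2>/dev/null | tail -n 20
```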


----- Original Message ----
From: brainstorm <braincode@gmail.com>
To: core-user@hadoop.apache.org
Sent: Thursday, June 26, 2008 8:24:49 AM
Subject: Re: hadoop file system error

I'm having a similar problem, but with the hadoop CLI tool (not
programmatically), and it's driving me nuts:

hadoop@escerttop:~/nutch/trunk$ cat urls/urls.txt

hadoop@escerttop:~/nutch/trunk$ bin/hadoop dfs -ls
Found 0 items
hadoop@escerttop:~/nutch/trunk$ bin/hadoop dfs -put urls urls

hadoop@escerttop:~/nutch/trunk$ bin/hadoop dfs -ls
Found 1 items
/user/hadoop/urls    <dir>        2008-06-26 17:20    rwxr-xr-x    hadoop    supergroup
hadoop@escerttop:~/nutch/trunk$ bin/hadoop dfs -ls urls
Found 1 items
/user/hadoop/urls/urls.txt    <r 1>    0    2008-06-26 17:20    rw-r--r--    hadoop

hadoop@escerttop:~/nutch/trunk$ bin/hadoop dfs -cat urls/urls.txt
hadoop@escerttop:~/nutch/trunk$ bin/hadoop dfs -get urls/urls.txt .
hadoop@escerttop:~/nutch/trunk$ cat urls.txt

As you can see, I put a local text file containing one line onto HDFS,
but afterwards the file is empty... am I missing some "close",
"flush", or "commit" command?
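[One way to narrow this down is to compare the original file with the copy fetched back via `bin/hadoop dfs -get`, byte for byte. The sketch below simulates that comparison locally with `cmp`, since the hadoop CLI isn't assumed here; in the session above, the second file would be the fetched `urls.txt`.]

```shell
# Simulate: the original local file vs. a (possibly truncated) fetched copy.
printf 'http://example.org/\n' > urls-orig.txt   # original one-line file
: > urls-fetched.txt                             # empty file, as -get produced

# cmp -s exits non-zero when the files differ.
if cmp -s urls-orig.txt urls-fetched.txt; then
    echo "round-trip OK"
else
    echo "round-trip LOST DATA"
fi
```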

Thanks in advance,

On Thu, Jun 19, 2008 at 7:23 PM, Mori Bellamy <mbellamy@apple.com> wrote:
> might it be a synchronization problem? i don't know if Hadoop's DFS magically
> takes care of that, but if it doesn't, you might have a problem with
> multiple processes trying to write to the same file.
> perhaps as a control experiment you could run your process on some small
> input, making sure that each reduce task outputs to a different filename (i
> just use Math.random()*Integer.MAX_VALUE and cross my fingers).
> On Jun 18, 2008, at 6:01 PM, 晋光峰 wrote:
>> I'm sure I close all the files in the reduce step. Are there any other
>> reasons this could happen?
>> 2008/6/18 Konstantin Shvachko <shv@yahoo-inc.com>:
>>> Did you close those files?
>>> If not they may be empty.
>>> 晋光峰 wrote:
>>>> Dears,
>>>> I use hadoop-0.16.4 to do some work and found an error whose cause I
>>>> can't figure out.
>>>> The scenario is like this: in the reduce step, instead of using
>>>> OutputCollector to write results, I use FSDataOutputStream to write
>>>> results to files on HDFS (because I want to split the results by some
>>>> rules). After the job finished, I found that *some* files (but not all)
>>>> are empty on HDFS. But I'm sure the files are not empty during the
>>>> reduce step, since I added some logs to read the generated files. It
>>>> seems that some files' contents are lost after the reduce step. Has
>>>> anyone else hit such errors, or is it a hadoop bug?
>>>> Please help me find the reason if any of you know it.
>>>> Thanks & Regards
>>>> Guangfeng
>> --
>> Guangfeng Jin
>> Software Engineer
>> iZENEsoft (Shanghai) Co., Ltd
>> Room 601 Marine Tower, No. 1 Pudong Ave.
>> Tel:86-21-68860698
>> Fax:86-21-68860699
>> Mobile: 86-13621906422
>> Company Website:www.izenesoft.com
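[Konstantin's question above points at the usual cause: bytes written to an output stream are only buffered until the stream is closed, so a stream that is never closed can leave an empty file behind. A minimal sketch of the safe try/finally pattern, shown with java.io for self-containment; the same shape applies to Hadoop's FSDataOutputStream obtained from FileSystem.create(path), whose close() flushes the data out to HDFS.]

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CloseDemo {
    // Write one result line to its own file, always closing the stream.
    // With HDFS, the writer would wrap fs.create(new Path(path)) instead;
    // skipping close() there can leave the HDFS file empty.
    static void writeResult(String path, String line) throws IOException {
        BufferedWriter out = new BufferedWriter(new FileWriter(path));
        try {
            out.write(line);
            out.newLine();
        } finally {
            out.close(); // without this, buffered bytes may never reach the file
        }
    }

    public static void main(String[] args) throws IOException {
        writeResult("part-0.txt", "hello");
        // Read the file back to confirm the write was flushed.
        System.out.println(Files.readAllLines(Paths.get("part-0.txt")).get(0));
    }
}
```

If a reduce task opens one file per split rule, every one of those streams needs its own close() (typically in the reducer's close() method) before the task exits.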
