hadoop-common-user mailing list archives

From Adarsh Sharma <adarsh.sha...@orkash.com>
Subject Re: No locks available
Date Tue, 18 Jan 2011 07:26:59 GMT
Edward Capriolo wrote:
> On Mon, Jan 17, 2011 at 8:13 AM, Adarsh Sharma <adarsh.sharma@orkash.com> wrote:
>
>> Harsh J wrote:
>>
>>> Could you re-check your permissions on the $(dfs.data.dir)s for your
>>> failing DataNode versus the user that runs it?
>>>
>>> On Mon, Jan 17, 2011 at 6:33 PM, Adarsh Sharma <adarsh.sharma@orkash.com>
>>> wrote:
>>>
>>>> Can I know why it occurs?
>>>>
>>>
>> Thanx Harsh, I know this issue and I have cross-checked the permissions
>> of all dirs (dfs.name.dir, dfs.data.dir, mapred.local.dir) several times.
>>
>> They are 755 and owned by the hadoop user and group.
>>
>> I found that the failing datanode is unable to create files in its
>> dfs.data.dir, whereas the successful datanode creates the following files:
>>
>> current
>> tmp
>> storage
>> in_use.lock
>>
>> Does it help?
>>
>> Thanx
>>
>
> "No locks available" can mean that you are trying to run Hadoop on a
> filesystem that does not support file-level locking. Are you trying to
> keep your NameNode storage on NFS?
>
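To test Edward's hypothesis, one can try to take an exclusive lock on a file in the storage directory, which is essentially what the DataNode does when it creates in_use.lock. A minimal sketch (the /tmp path is a placeholder; point DATA_DIR at your actual dfs.data.dir):

```shell
# Placeholder path -- set this to your real dfs.data.dir.
DATA_DIR="${DATA_DIR:-/tmp/dfs-lock-test}"
mkdir -p "$DATA_DIR"

# Report the filesystem type; "nfs" is the usual suspect for
# "No locks available" (lockd/statd not running, or a nolock mount option).
df -T "$DATA_DIR" | awk 'NR==2 {print "fs type:", $2}'

# Try a non-blocking exclusive lock, like the DataNode's in_use.lock.
if flock -n "$DATA_DIR/in_use.lock.test" -c true; then
    echo "file locking works here"
else
    echo "file locking FAILED -- this would break the DataNode"
fi
rm -f "$DATA_DIR/in_use.lock.test"
```

If the flock call fails only on the problem VM, the storage path is on a filesystem (or NFS mount) without working lock support, regardless of what the permissions say.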
I am sorry, but my Namenode is on a separate machine outside the cloud.

The path is /home/hadoop/project/hadoop-0.20.2/name

It is running properly.

I find it confusing because I followed the same steps on the other 2
VMs and they are running fine.

How can I debug this one exceptional case where it is failing?
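One way to narrow down a single failing node is to compare its storage directory and its mount against a known-good VM. A rough checklist sketch, assuming default-style paths (the directories and log location below are illustrative, not taken from the thread):

```shell
# Illustrative path -- use the dfs.data.dir from hdfs-site.xml on the failing VM.
DATA_DIR="${DATA_DIR:-/tmp/dfs-data-check}"
mkdir -p "$DATA_DIR"

# 1. Owner, group, and mode: should match the working VMs (e.g. hadoop:hadoop, 755).
stat -c '%U:%G %a %n' "$DATA_DIR"

# 2. Device and filesystem type the directory lives on: compare with a good VM.
df -T "$DATA_DIR" | awk 'NR==2 {print $1, $2}'

# 3. The datanode log usually names the exact file it could not lock
#    (log path is an assumption; adjust for your install).
grep -i "no locks available" \
  "${HADOOP_HOME:-/home/hadoop/project/hadoop-0.20.2}"/logs/*datanode*.log \
  2>/dev/null | tail -n 3
```

Run the same three checks on a working VM and on the failing one; any difference in step 1 or step 2 is the first thing to chase.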


Thanks & Regards

Adarsh Sharma
