hadoop-common-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: configuring hadoop
Date Mon, 31 May 2010 15:28:16 GMT
Was NFS involved?
Did the cluster admin mount the NFS with the nolock option?
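[For context, the "No locks available" IOException in the quoted stack trace comes from the FileChannel.tryLock() call the NameNode makes on its storage directory. A minimal standalone sketch of that same lock call — the in_use.lock file name follows Hadoop's convention, but the directory and class name here are invented for illustration:]

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockCheck {
    public static void main(String[] args) throws Exception {
        // The NameNode takes an exclusive lock on <dfs.name.dir>/in_use.lock.
        // On an NFS mount with the "nolock" option this very call throws
        // java.io.IOException: No locks available.
        File dir = new File(args.length > 0 ? args[0]
                                            : System.getProperty("java.io.tmpdir"));
        File lockFile = new File(dir, "in_use.lock");
        RandomAccessFile raf = new RandomAccessFile(lockFile, "rws");
        FileChannel channel = raf.getChannel();
        FileLock lock = channel.tryLock();   // null if another process holds it
        System.out.println(lock != null ? "lock acquired" : "lock held elsewhere");
        if (lock != null) {
            lock.release();
        }
        channel.close();
        raf.close();
        lockFile.delete();
    }
}
```

[On a local filesystem this should print "lock acquired"; on an NFS mount with nolock, tryLock() itself throws the IOException seen in the trace.]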

On Mon, May 31, 2010 at 8:18 AM, Khaled BEN BAHRI <
Khaled.Ben_bahri@it-sudparis.eu> wrote:

> When i try to start the namenode by start-dfs.sh
> it gives this error
>
>
> 2010-05-31 17:02:19,674 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException: No locks available
>        at sun.nio.ch.FileChannelImpl.lock0(Native Method)
>        at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:881)
>        at java.nio.channels.FileChannel.tryLock(FileChannel.java:962)
>        at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:527)
>        at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:505)
>        at
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:363)
>        at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:285)
>        at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>
>
>
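[A common workaround — an editor's sketch, not from the thread — is to keep the NameNode's metadata directory on a local filesystem that supports POSIX locking rather than on NFS, via dfs.name.dir in hdfs-site.xml; the path below is an invented example:]

```xml
<!-- hdfs-site.xml: keep NameNode metadata on a local disk that
     supports file locking (path is an example) -->
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop/dfs/name</value>
</property>
```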
>
> Quoting Khaled BEN BAHRI <Khaled.Ben_bahri@it-sudparis.eu>:
>
>  Hi
>>
>> The same error happens with the new command.
>>
>> Bad connection to FS. command aborted.
>> Thanks for your help :))))
>>
>> Khaled
>>
>> Quoting Pierre ANCELOT <pierreact@gmail.com>:
>>
>>  Maybe you want this?
>>> bin/hadoop fs -copyFromLocal /home/khaled-b/myfile.tar.gz
>>> /khaled-b/hadoop/exple
>>>
>>> Pierre.
>>>
>>>
>>> On Mon, May 31, 2010 at 11:42 AM, Khaled BEN BAHRI <
>>> Khaled.Ben_bahri@it-sudparis.eu> wrote:
>>>
>>>  Thank you for your help :)
>>>>
>>>> I will try to get access to 6 nodes, but at this time I must store
>>>> some data
>>>> in HDFS.
>>>>
>>>> When I try to copy a file with this command, the operation fails:
>>>>
>>>> bin/hadoop fs -put /home/khaled-b/myfile.tar.gz /khaled-b/hadoop/exple
>>>>
>>>> and I get this error:
>>>>
>>>> Bad connection to FS. Command aborted.
>>>>
>>>> The namenode and the secondarynamenode are running.
>>>>
>>>> I don't know what's wrong.
>>>>
>>>> thanks in advance
>>>>
>>>>
>>>>
>>>> Quoting Pierre ANCELOT <pierreact@gmail.com>:
>>>>
>>>> Hi,
>>>>
>>>>>
>>>>> I think you should consider running it on a few more nodes...
>>>>> Here's our test configuration:
>>>>> 1 node as namenode
>>>>> 1 node as jobtracker
>>>>> 4 nodes as datanode/tasktracker (those who really handle the work
>>>>> done...)
>>>>>
>>>>> That means you need a 2+n configuration...
>>>>> For production use, you'll have a third one with the secondary
>>>>> namenode.
>>>>> I think 6 machines for a test cluster is a good deal.
>>>>>
>>>>>
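[For reference, a Hadoop 0.20-era layout like the one Pierre describes is typically wired up through the conf/masters and conf/slaves files (plus fs.default.name and mapred.job.tracker pointing at the namenode and jobtracker hosts); the hostnames below are invented for illustration:]

```text
# conf/masters  (host for the secondary namenode)
master2
# conf/slaves   (datanode/tasktracker hosts)
worker1
worker2
worker3
worker4
```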
>>>>> On Mon, May 31, 2010 at 10:34 AM, Khaled BEN BAHRI <
>>>>> Khaled.Ben_bahri@it-sudparis.eu> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>>>
>>>>>> I'm a novice at Hadoop, and I want to install it on 3 nodes. I am
>>>>>> trying to configure it by editing core-site.xml, hdfs-site.xml and
>>>>>> mapred-site.xml so that the first node is the namenode, the second is
>>>>>> the jobtracker, and the third is both the datanode and the
>>>>>> tasktracker.
>>>>>>
>>>>>> My question is: how can I store data in my HDFS (node 3)?
>>>>>> And how can I retrieve and manipulate this data?
>>>>>>
>>>>>> Thanks for any help.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>> --
>>>>> http://www.neko-consulting.com
>>>>> Ego sum quis ego servo
>>>>> "Je suis ce que je protège"
>>>>> "I am what I protect"
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>
>
>
