hadoop-user mailing list archives

From Ulul <had...@ulul.org>
Subject Re: adding another namenode to an existing hadoop instance
Date Wed, 18 Feb 2015 21:25:14 GMT
Erratum: dfs.namenode.name.dir is not unique; it can be a 
comma-separated list, so that more than one directory contains the 
fsimage, preferably with one pointing to an NFS mount or at least with 
the directories on different physical disks.
Sorry about that

Anyway, having more than one dir won't create a standby node; for that 
you need to configure HA.
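
For reference, a multiplexed configuration would look like the sketch 
below (using the two directories from Mich's mail; the key point is that 
both URIs go in a single comma-separated <value>, not in two <value> 
elements):

```xml
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- comma-separated list: the namenode writes a full copy of the
       fsimage and edits to every directory listed here -->
  <value>file:/work/hadoop/hadoop_store/hdfs/namenode,file:/ssddata6/hadoop/hadoop_store/hdfs/namenode</value>
</property>
```

After changing this, the new directory must be formatted/populated 
before the namenode will accept it; test on a throwaway cluster first.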

Ulul

On 18/02/2015 21:14, Ulul wrote:
> Hi
>
> What you displayed is not your Hadoop version but your Java version.
> To get the Hadoop version, remove the dash: hdfs version
>
> The dfs.namenode.name.dir is the directory where the namenode process 
> stores the filesystem image. It is unique.
>
> What you seem to be looking for is Namenode HA: how to install a 
> standby NN (and not a secondary NN, which is the poorly named process 
> that merges FS changes into the fsimage).
> This exists in Hadoop v2.x
>
> You can find the doc here: 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> Since you seem to already have a running cluster, here is a blog post 
> about configuring HA on an existing cluster (I haven't tried it): 
> http://johnjianfang.blogspot.fr/2014/09/how-to-configure-hadoop-ha-for-running.html
>
> But you need to test thoroughly in a test environment before doing 
> anything on a cluster holding production data.
>
> Cheers
> Ulul
>
> On 18/02/2015 10:50, Mich Talebzadeh wrote:
>>
>> Hi,
>>
>> I have a Hadoop instance (single node) installed on RHES5 running OK.
>>
>> The version of Hadoop is
>>
>> hdfs -version
>>
>> java version "1.7.0_25"
>>
>> Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
>>
>> Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
>>
>> I would like to add another directory to the namenode in 
>> hdfs-site.xml, effectively building resiliency by multiplexing it.
>>
>> The new directory will be on a solid-state disk. I tried adding it 
>> to the hdfs-site.xml file by having
>>
>> <property>
>>    <name>dfs.namenode.name.dir</name>
>>    <value>file:/work/hadoop/hadoop_store/hdfs/namenode</value>
>> </property>
>>
>> and changing it to
>>
>> <property>
>>    <name>dfs.namenode.name.dir</name>
>>    <value>file:/work/hadoop/hadoop_store/hdfs/namenode</value>
>>    <value>file:/ssddata6/hadoop/hadoop_store/hdfs/namenode</value>
>> </property>
>>
>> But it does not work!
>>
>> Any ideas if one can add the secondary namenode directory without 
>> losing data, etc.?
>>
>> Thanks
>>
>> Mich Talebzadeh
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>

