hadoop-common-user mailing list archives

From Adarsh Sharma <adarsh.sha...@orkash.com>
Subject Re: When applying a patch, which attachment should I use?
Date Thu, 13 Jan 2011 09:20:11 GMT
Thanks Edward,

Can you describe the architecture used in your configuration?

For example, I have a cluster of 10 servers:

1 node acts as (Namenode, Jobtracker, HMaster).
The remaining 9 nodes act as slaves (Datanodes, Tasktrackers, HRegionServers).
Among these 9 nodes I have also listed 3 nodes in the ZooKeeper quorum property
(hbase.zookeeper.quorum).

I want to know whether it is necessary to configure ZooKeeper separately with
the zookeeper-3.2.2 package, or whether it is enough to list some IPs in
hbase.zookeeper.quorum and let HBase take care of it.

Can we reuse the IPs of the HRegionServers as the ZooKeeper servers
(HQuorumPeer), or do we need separate servers for it?

My problem arises in running ZooKeeper. My HBase is up and running in fully
distributed mode.
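
To make the question concrete, this is roughly what I have in mind, a minimal
sketch assuming HBase manages its own quorum (the hostnames below are
placeholders for 3 of my slave nodes):

In conf/hbase-env.sh:

  export HBASE_MANAGES_ZK=true

In conf/hbase-site.xml:

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>slave1,slave2,slave3</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>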




With Best Regards

Adarsh Sharma
 






edward choi wrote:
> Dear Adarsh,
>
> My situation is somewhat different from yours as I am only running Hadoop
> and HBase (as opposed to Hadoop/Hive/HBase).
>
> But I hope my experience could be of help to you somehow.
>
> I applied the "hdfs-630-0.20-append.patch" to every single Hadoop node
> (including master and slaves).
> Then I followed exactly what they told me to do on
> http://hbase.apache.org/docs/current/api/overview-summary.html#overview_description
>
> I didn't get a single error message and successfully started HBase in a
> fully distributed mode.
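>
> For what it's worth, this is roughly what I did on each node, a rough sketch
> assuming the patch is applied from the Hadoop install directory with GNU patch
> and the core jar is rebuilt with ant (the install path is a placeholder):
>
>   cd /usr/local/hadoop-0.20.2
>   patch -p0 --dry-run < hdfs-630-0.20-append.patch   # check that it applies cleanly
>   patch -p0 < hdfs-630-0.20-append.patch
>   ant jar                                            # rebuild the hadoop core jar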
>
> I am not using Hive so I can't tell what caused the
> MasterNotRunningException, but the patch above is meant to allow DFSClients
> to pass the NameNode a list of known dead Datanodes.
> I doubt that the patch has anything to do with the MasterNotRunningException.
>
> Hope this helps.
>
> Regards,
> Ed
>
> 2011/1/13 Adarsh Sharma <adarsh.sharma@orkash.com>
>
>> I am also facing some issues and I think applying
>> hdfs-630-0.20-append.patch
>> <https://issues.apache.org/jira/secure/attachment/12446812/hdfs-630-0.20-append.patch>
>> would solve my problem.
>>
>> I am trying to run Hadoop/Hive/HBase integration in fully distributed mode.
>>
>> But I am facing the MasterNotRunningException mentioned at
>>
>> http://wiki.apache.org/hadoop/Hive/HBaseIntegration
>>
>> My versions: Hadoop 0.20.2, Hive 0.6.0, HBase 0.20.6.
>>
>> What do you think, Edward?
>>
>>
>> Thanks, Adarsh
>>
>>
>>
>>
>>
>>
>> edward choi wrote:
>>
>>> I am not familiar with this whole svn and patch stuff, so please bear with
>>> my asking.
>>>
>>> I was going to apply hdfs-630-0.20-append.patch
>>> <https://issues.apache.org/jira/secure/attachment/12446812/hdfs-630-0.20-append.patch>
>>> only because I wanted to install HBase and the installation guide told me to.
>>> The append branch you mentioned, does that include
>>> hdfs-630-0.20-append.patch as well?
>>> Is it like the latest patch with all the good stuff packed in one?
>>>
>>> Regards,
>>> Ed
>>>
>>> 2011/1/12 Ted Dunning <tdunning@maprtech.com>
>>>
>>>
>>>> You may also be interested in the append branch:
>>>>
>>>> http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/
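>>>>
>>>> e.g., a sketch of checking it out (the repos/asf path is my guess at the
>>>> checkout URL behind that viewvc page):
>>>>
>>>>   svn checkout http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append/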
>>>>
>>>> On Tue, Jan 11, 2011 at 3:12 AM, edward choi <mp2893@gmail.com> wrote:
>>>>
>>>>> Thanks for the info.
>>>>> I am currently using Hadoop 0.20.2, so I guess I only need to apply
>>>>> hdfs-630-0.20-append.patch
>>>>> <https://issues.apache.org/jira/secure/attachment/12446812/hdfs-630-0.20-append.patch>.
>>>>> I wasn't familiar with the term "trunk". I guess it means "the latest
>>>>> development".
>>>>> Thanks again.
>>>>>
>>>>> Best Regards,
>>>>> Ed
>>>>>
>>>>> 2011/1/11 Konstantin Boudnik <cos@apache.org>
>>>>>
>>>>>> Yeah, that's pretty crazy all right. In your case it looks like the 3
>>>>>> patches on top are the latest for the 0.20-append branch, the 0.21 branch,
>>>>>> and trunk (which is perhaps the 0.22 branch at the moment). It doesn't look
>>>>>> like you need to apply all of them - just try the latest for your
>>>>>> particular branch.
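>>>>>>
>>>>>> For instance, a sketch of what I mean (the attachment below is just the
>>>>>> 0.20-append one already linked in this thread; pick whichever matches what
>>>>>> `hadoop version` reports for you):
>>>>>>
>>>>>>   hadoop version   # e.g. 0.20.2 -> take the 0.20-append attachment
>>>>>>   wget https://issues.apache.org/jira/secure/attachment/12446812/hdfs-630-0.20-append.patch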
>>>>>>
>>>>>> The mess is caused by the fact that people are using different names for
>>>>>> successive patches (as in file.1.patch, file.2.patch, etc.). This is
>>>>>> _very_ confusing indeed, especially when different contributors work
>>>>>> on the same fix/feature.
>>>>>> --
>>>>>>  Take care,
>>>>>> Konstantin (Cos) Boudnik
>>>>>>
>>>>>>
>>>>>> On Mon, Jan 10, 2011 at 01:10, edward choi <mp2893@gmail.com> wrote:
>>>>>>
>>>>>>
>>>>>>             
>>>>>>> Hi,
>>>>>>> For the first time I am about to apply a patch to HDFS.
>>>>>>>
>>>>>>> https://issues.apache.org/jira/browse/HDFS-630
>>>>>>>
>>>>>>> Above is the one that I am trying to apply.
>>>>>>> But there are like 15 patches and I don't know which one to use.
>>>>>>>
>>>>>>> Could anyone tell me if I need to apply them all or just the one at the
>>>>>>> top?
>>>>>>> The whole patching process is just so confusing :-(
>>>>>>>
>>>>>>> Ed

