hadoop-common-user mailing list archives

From Uma Maheswara Rao G 72686 <mahesw...@huawei.com>
Subject Re: RE: risks of using Hadoop
Date Thu, 22 Sep 2011 05:03:32 GMT
Absolutely agree with you.
Mainly we should consider the SPOF and minimize the problem through careful design 
(there are many ways to mitigate this issue, as we have seen in this thread).

Regards,
Uma
----- Original Message -----
From: Bill Habermaas <bill.habermaas@oracle.com>
Date: Thursday, September 22, 2011 10:04 am
Subject: RE: risks of using Hadoop
To: common-user@hadoop.apache.org

> Amen to that. I haven't heard a good rant in a long time; I am 
> definitely amused and entertained. 
> 
> As a veteran of 3 years with Hadoop, I will say that the SPOF issue 
> is whatever you want to make it. But it has not, nor will it ever, 
> deter me from using this great system. Every system has its risks, 
> and they can be minimized by careful architectural crafting and 
> intelligent usage. 
> 
> Bill
> 
> -----Original Message-----
> From: Michael Segel [mailto:michael_segel@hotmail.com] 
> Sent: Wednesday, September 21, 2011 1:48 PM
> To: common-user@hadoop.apache.org
> Subject: RE: risks of using Hadoop
> 
> 
> Kobina
> 
> The points 1 and 2 are definitely real risks. SPOF is not.
> 
> As I pointed out in my mini-rant to Tom, your end users / 
> developers who use the cluster can do more harm to your cluster 
> than a SPOF machine failure.
> 
> I don't know what one would consider a 'long learning curve'. With 
> the adoption of any new technology, you're talking at least 3-6 
> months based on the individual and the overall complexity of the 
> environment. 
> 
> Take anyone who is a strong developer, put them through Cloudera's 
> training, plus some play time, and you've shortened the learning 
> curve. The better the Java developer, the easier it is for them to 
> pick up Hadoop.
> 
> I would also suggest taking the approach of hiring a senior person 
> who can cross train and mentor your staff. This too will shorten 
> the runway.
> 
> HTH
> 
> -Mike
> 
> 
> > Date: Wed, 21 Sep 2011 17:02:45 +0100
> > Subject: Re: risks of using Hadoop
> > From: kobina.kwarko@gmail.com
> > To: common-user@hadoop.apache.org
> > 
> > Jignesh,
> > 
> > Will your point 2 still be valid if we hire very experienced Java
> > programmers?
> > 
> > Kobina.
> > 
> > On 20 September 2011 21:07, Jignesh Patel <jignesh@websoft.com>
> > wrote:
> > >
> > > @Kobina
> > > 1. Lack of skill set
> > > 2. Longer learning curve
> > > 3. Single point of failure
> > >
> > >
> > > @Uma
> > > I am curious to know about 0.20.2: is it stable? Is it the same as
> > > the one you mention in your email (the Federation changes)? If I
> > > need a scalable NameNode and append support, which version should
> > > I choose?
> > >
> > > Regarding the single point of failure, I believe Hortonworks (a.k.a.
> > > Yahoo) is updating the Hadoop API. When will that be integrated with
> > > Hadoop?
> > >
> > > If I need
> > >
> > >
> > > -Jignesh
> > >
> > > On Sep 17, 2011, at 12:08 AM, Uma Maheswara Rao G 72686 wrote:
> > >
> > > > Hi Kobina,
> > > >
> > > > Some experiences which may be helpful for you with respect
> > > > to DFS.
> > > >
> > > > 1. Selecting the correct version.
> > > >    I would recommend using a 0.20.x version. It is a pretty
> > > > stable version, many other organizations prefer it, and it is
> > > > well tested as well. Don't go for the 0.21 version; it is not a
> > > > stable release, and using it is a risk.
> > > >
> > > > 2. You should perform thorough tests with your customer
> > > > operations (of course you will do this :-)).
> > > >
> > > > 3. 0.20.x versions have the problem of a SPOF.
> > > >    If the NameNode goes down, you will lose the data. One way of
> > > > recovering is by using the SecondaryNameNode: you can recover the
> > > > data up to the last checkpoint, but manual intervention is
> > > > required. In the latest trunk, the SPOF will be addressed by
> > > > HDFS-1623.
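
For example, recovering from the last checkpoint on a 0.20.x cluster looks roughly like this (a sketch only, not a full procedure; the checkpoint copy step and directory layout are assumptions about your setup):

```shell
# On the replacement NameNode machine (0.20.x sketch; paths are illustrative).
# 1. Copy the SecondaryNameNode's checkpoint directory over from the
#    secondary's machine, and point fs.checkpoint.dir at it.
# 2. Start the NameNode with -importCheckpoint so it loads the last
#    checkpoint image instead of an empty (or lost) dfs.name.dir:
hadoop namenode -importCheckpoint
# Note: edits made after the last checkpoint are lost, which is the
# manual, lossy recovery described above.
```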
> > > >
> > > > 4. 0.20.x NameNodes cannot scale. Federation changes are included
> > > > in later versions (I think in 0.22). This may not be a problem for
> > > > your cluster, but please consider this aspect as well.
> > > >
> > > > 5. Please select the Hadoop version depending on your security
> > > > requirements. There are versions with security support available
> > > > in the 0.20.x line as well.
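
In the security-enabled 0.20.20x releases, Kerberos authentication is switched on with a core-site.xml fragment along these lines (a sketch only; a real deployment also needs principals, keytabs, and per-daemon settings configured):

```xml
<!-- core-site.xml: enable Kerberos security (sketch; defaults are "simple"/false) -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```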
> > > >
> > > > 6. If you plan to use HBase, it requires append support. The
> > > > 0.20-append branch has support for append. The 0.20.205 release
> > > > will also have append support, but it is not yet released. Choose
> > > > the correct version to avoid sudden surprises.
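
On a branch that ships the append code, HBase also expects it to be enabled in hdfs-site.xml, roughly as follows (a sketch; the flag only takes effect on builds that actually include the append patches):

```xml
<!-- hdfs-site.xml: enable append/sync support for HBase
     (0.20-append / 0.20.205-style builds) -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```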
> > > >
> > > >
> > > > Regards,
> > > > Uma
> > > > ----- Original Message -----
> > > > From: Kobina Kwarko <kobina.kwarko@gmail.com>
> > > > Date: Saturday, September 17, 2011 3:42 am
> > > > Subject: Re: risks of using Hadoop
> > > > To: common-user@hadoop.apache.org
> > > >
> > > >> We are planning to use Hadoop in my organisation for quality of
> > > >> service analysis of CDR records from mobile operators. We are
> > > >> thinking of having a small cluster of maybe 10-15 nodes, and I'm
> > > >> preparing the proposal. My office requires that I provide some
> > > >> risk analysis in the proposal.
> > > >> thank you.
> > > >>
> > > >> On 16 September 2011 20:34, Uma Maheswara Rao G 72686
> > > >> <maheswara@huawei.com> wrote:
> > > >>
> > > >>> Hello,
> > > >>>
> > > >>> First of all, where are you planning to use Hadoop?
> > > >>>
> > > >>> Regards,
> > > >>> Uma
> > > >>> ----- Original Message -----
> > > >>> From: Kobina Kwarko <kobina.kwarko@gmail.com>
> > > >>> Date: Saturday, September 17, 2011 0:41 am
> > > >>> Subject: risks of using Hadoop
> > > >>> To: common-user <common-user@hadoop.apache.org>
> > > >>>
> > > >>>> Hello,
> > > >>>>
> > > >>>> Please can someone point out some of the risks we may incur
> > > >>>> if we decide to implement Hadoop?
> > > >>>>
> > > >>>> BR,
> > > >>>>
> > > >>>> Isaac.
> > > >>>>
> > > >>>
> > > >>
> > >
> > >
> 
