hadoop-hdfs-dev mailing list archives

From Cristian Ivascu <civa...@adobe.com>
Subject Re: Commit hdfs-630 to 0.21?
Date Tue, 15 Dec 2009 17:08:05 GMT
+1

Cristian

On Dec 15, 2009, at 6:59 PM, Cosmin Lehene wrote:

> +1 
> 
> Cosmin
> 
> 
> On 12/15/09 10:44 AM, "Lars George" <lars.george@gmail.com> wrote:
> 
>> +1
>> 
>> Lars
>> 
>> On Tue, Dec 15, 2009 at 8:53 AM, Jean-Daniel Cryans
>> <jdcryans@apache.org>wrote:
>> 
>>> +1 for 0.21.0
>>> 
>>> J-D
>>> 
>>> On Mon, Dec 14, 2009 at 11:30 PM, Andrew Purtell <apurtell@apache.org>
>>> wrote:
>>>> +1
>>>> 
>>>> 
>>>> On Sat, Dec 12, 2009 at 3:54 PM, stack <stack@duboce.net> wrote:
>>>> 
>>>>> HDFS-630 is kinda critical to us over in hbase.  We'd like to get it
>>> into
>>>>> 0.21 (It's been committed to TRUNK).  It's probably hard to argue it's a
>>>>> blocker for 0.21.  We could run a vote.  Or should we just file it
>>> against
>>>>> 0.21.1 hdfs and commit it after 0.21 goes out?  What would folks
>>> suggest?
>>>>> 
>>>>> Without it, a node crash (datanode+regionserver) will bring down a
>>> second
>>>>> regionserver, particularly if the cluster is small (See HBASE-1876 for
>>>>> description of the play-by-play where NN keeps giving out dead DN as
>>> place
>>>>> to locate new blocks).  Since the bulk of hbase clusters are small --
>>>>> whether evaluations, tests, or just small productions -- this issue is an
>>>>> important fix for us.  If the cluster is 5 or fewer nodes, we'll
>>> probably
>>>>> recover but there'll be a period of churn.  At a minimum mapreduce jobs
>>>>> running against the cluster will fail (usually some kind of bulk
>>> upload).
>>>>> 
>>>>> St.Ack
>>>>> 
>>>> 
>>>> 
>>>> 
>>> 
> 
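For readers unfamiliar with the fix being voted on: the core idea of HDFS-630 is that the client tells the namenode which datanodes it has already found to be dead, so the namenode stops handing back the same dead node as a block location. The toy model below sketches that retry-with-exclusion loop; all class and method names here are illustrative, not the real HDFS client/namenode API.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of the HDFS-630 idea: the client passes datanodes it already
// knows are dead when asking for a new block location, so the namenode
// stops handing back the same dead node. Names are illustrative only.
public class ExcludeDeadNodes {

    // "Namenode" side: pick the first candidate not on the exclude list.
    static String allocateBlock(List<String> datanodes, Set<String> excluded) {
        for (String dn : datanodes) {
            if (!excluded.contains(dn)) {
                return dn;
            }
        }
        return null; // no candidates left
    }

    // "Client" side: retry allocation, adding each failed datanode to the
    // exclude set instead of spinning on the same dead node forever.
    static String writeBlock(List<String> datanodes, Set<String> actuallyDead) {
        Set<String> excluded = new HashSet<>();
        while (true) {
            String dn = allocateBlock(datanodes, excluded);
            if (dn == null) {
                return null;               // cluster exhausted
            }
            if (!actuallyDead.contains(dn)) {
                return dn;                 // write would succeed here
            }
            excluded.add(dn);              // remember the dead node and retry
        }
    }

    public static void main(String[] args) {
        List<String> cluster = List.of("dn1", "dn2", "dn3");
        Set<String> dead = Set.of("dn1");  // dn1 crashed (datanode+regionserver)
        System.out.println(writeBlock(cluster, dead));
    }
}
```

Without the exclude set, the loop above would keep receiving the dead node and churn, which is the small-cluster failure mode described in HBASE-1876.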

