hadoop-common-user mailing list archives

From aakash_j j_shah <aakash_j_s...@yahoo.com>
Subject Re: Architecture question.
Date Sat, 20 Dec 2008 01:15:30 GMT
Hello Edwin,
 
  Thanks for the answer. The records are very small: the key is about 64 bytes (ASCII) and
each update touches 10 integer values, so the record size including the key works out to
about 104 bytes.
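
For a rough back-of-envelope check of those numbers, here is a minimal sketch (assuming
4-byte integers; the class and variable names are illustrative only):

    // Rough sizing check based on the figures quoted in this thread.
    public class SizingEstimate {
        public static void main(String[] args) {
            int keyBytes = 64;                        // ASCII key
            int valueBytes = 10 * 4;                  // 10 integer fields, 4 bytes each
            int recordBytes = keyBytes + valueBytes;  // 104 bytes per record
            long totalRecords = 10_000_000L;          // 10 million records
            long datasetBytes = totalRecords * recordBytes;
            double updatesPerSec = 1_000_000.0 / 60;  // 1 million updates per minute
            System.out.printf("record = %d B, dataset ~ %.2f GB, ~%.0f updates/s%n",
                    recordBytes, datasetBytes / 1e9, updatesPerSec);
        }
    }

That works out to roughly 1 GB of raw data and on the order of 17,000 updates per second,
before any replication or per-entry overhead.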

Sid.


--- On Fri, 12/19/08, Edwin Gonzalez <gonzalez@zenbe.com> wrote:
From: Edwin Gonzalez <gonzalez@zenbe.com>
Subject: Re: Architecture question.
To: core-user@hadoop.apache.org, aakash_j_shah@yahoo.com
Date: Friday, December 19, 2008, 5:13 PM

How large are the records?
1 mil updates / min . . . do you mind sharing the complexity of the updates?

On Fri, Dec 19, 2008 at 8:05 PM "aakash_j j_shah"
<aakash_j_shah@yahoo.com> wrote:
> Hello All,
> 
>    I am designing an architecture which should support a storage capacity of 10 million
> records and 1 million updates / minute. Data persistence is not that important, as I will
> be purging this data every day.
> 
> I am familiar with memcache but not with hadoop. It would be great if I could get some
> pointers from the group on designing this architecture.
> 
> Thanks,
> Aakash.
> 
> 
> 
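
For what it is worth, a minimal single-JVM sketch of the access pattern described above
(a 64-byte ASCII key mapping to 10 integer fields, with a daily purge). This is only an
illustration; the class name and layout are made up, and it says nothing about whether
memcache or hadoop is the better fit:

    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical in-memory store: one record = 64-byte ASCII key -> 10 integer fields.
    public class InMemoryRecordStore {
        private final ConcurrentHashMap<String, int[]> records = new ConcurrentHashMap<>();

        // Apply an update to one integer field of a record, creating the record if absent.
        public void update(String key, int fieldIndex, int delta) {
            records.compute(key, (k, fields) -> {
                if (fields == null) {
                    fields = new int[10];
                }
                fields[fieldIndex] += delta;
                return fields;
            });
        }

        public int[] get(String key) {
            return records.get(key);
        }

        // Daily purge: persistence is not required, so the map can simply be cleared.
        public void purgeAll() {
            records.clear();
        }
    }

Usage would be along the lines of new InMemoryRecordStore().update(someKey, 3, 42); whether
one node is enough or the keys need to be partitioned across several depends largely on the
update complexity Edwin asked about.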



      