hbase-user mailing list archives

From Wilm Schumacher <wilm.schumac...@cawoom.com>
Subject Re: What is the best database to handle large volume of data
Date Fri, 23 May 2014 21:08:50 GMT
Hi,

your question is very general and hard to answer given the lack of
essential information.

However, based on my assumptions about what you are trying to do, I
would recommend Cassandra with materialized views for your portal (if
the queries are pre-computable) and indices (if the queries are
foreseeable).
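
To make the "materialized view" idea concrete: in Cassandra this
usually means one denormalized table per question the portal asks,
maintained by the application at write time (Cassandra itself has no
server-side view mechanism). Here is a minimal sketch using the
DataStax Java driver; the keyspace, table, and column names are
invented for illustration, not a schema recommendation:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class PortalWriter {
    public static void main(String[] args) {
        // Contact point is an assumption; point it at one of your nodes.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        session.execute("CREATE KEYSPACE IF NOT EXISTS portal WITH replication = "
            + "{'class': 'SimpleStrategy', 'replication_factor': 1}");

        // One table per pre-computed question the portal asks, keyed so
        // that the answer is a single partition read.
        session.execute("CREATE TABLE IF NOT EXISTS portal.txns_by_account_day ("
            + "account_id text, day text, txn_id timeuuid, amount decimal, "
            + "PRIMARY KEY ((account_id, day), txn_id))");

        // At write time the application inserts into every such query
        // table; that duplication is the "materialized view".
        session.execute("INSERT INTO portal.txns_by_account_day "
            + "(account_id, day, txn_id, amount) "
            + "VALUES ('acct-42', '2014-05-23', now(), 19.99)");

        cluster.close();
    }
}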

On the other hand: if the queries against your data (i.e. from your
portal) are more complex and user-driven, HBase would be the method of
choice (MapReduce).
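
For the MapReduce route, a minimal row-counting job over a
hypothetical "transactions" table looks like the sketch below (modeled
on the stock RowCounter that ships with HBase; the table name and
counter are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class TxnRowCount {
    static enum Counters { ROWS }

    // The mapper sees one (row key, Result) pair per HBase row and counts it.
    static class CountMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context ctx) {
            ctx.getCounter(Counters.ROWS).increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "txn-row-count");
        job.setJarByClass(TxnRowCount.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // bigger scanner batches for a full scan
        scan.setCacheBlocks(false);  // don't churn the block cache from MR

        TableMapReduceUtil.initTableMapperJob("transactions", scan,
            CountMapper.class, ImmutableBytesWritable.class, Result.class, job);
        job.setNumReduceTasks(0);                         // map-only
        job.setOutputFormatClass(NullOutputFormat.class); // result lives in the counter
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}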

Cassandra is said to be faster on writes. But 100 million insertions
of reasonably sized rows per day should be easy for either system to
manage, even if your cluster is very small.
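
(Back of the envelope: 100,000,000 writes spread over 86,400 seconds
is roughly 1,160 writes per second on average. Even if your peak rate
is several times that, a handful of nodes in either system should
keep up.)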

However, if your data stream (and the matching deletions) is constant
(so the database size isn't growing), and your data is very complex,
CouchDB (with the BigCouch extension) could be fine for you.

But my guess for you would be Cassandra.

Best wishes

Wilm

ps: I hope I will not get slapped for recommending something other
than HBase on this list ;)

pps: @Ramasubramanian: My answers should be reviewed critically. I'm
not a "NoSQL" expert. I have run very small HBase and Cassandra
clusters and a very small MongoDB instance. So if an expert gives
another answer ... go with it!

On 23.05.2014 22:44, Ramasubramanian wrote:
> Hi,
> 
> Just to add: there will be heavy writes and updates.
> 
> Regards, Rams
> 
> 
>> On 24-May-2014, at 1:37 am, Ramasubramanian
>> <ramasubramanian.narayanan@gmail.com> wrote:
>> 
>> Hi,
>> 
>> Request your advice and suggestions on deciding what database we
>> can consider, other than Oracle, to store a huge volume of
>> transactional data. We expect around 100 million records a day,
>> and the data needs to stay in the database, open to updates, for
>> not less than 3 months. There is a portal which displays details
>> from these data.
>> 
>> So here the volume is too large for Oracle to handle.
>> 
>> Please suggest what database we should consider next. It would be
>> helpful if you could state rough write & read speeds.
>> 
>> Note: after 'n' months the data will be moved to Hadoop (any other
>> options?) for analytics, with Tableau as the BI tool.
>> 
>> Regards, Rams
>> 
>> 
> 
