hbase-user mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Please help me overcome HBase's weaknesses
Date Sun, 05 Sep 2010 10:50:29 GMT
The class count is a bit of a red herring - most users never need to
delve into the internal implementation of Hadoop, which is exactly
that: implementation.

Back to the node thing, one thing people forget is that in a DHT each
node has an 'id', and if that node goes down it must come back and
rejoin the DHT system... if it is permanently unavailable, you will
need to do some work to make a new node take its place.  I.e., not
automatic recovery.  I'm sure the automatic tools will come in time,
but the fact remains: each node in a Cassandra cluster has a unique
identity, and the nodes are not really interchangeable.

I've been running a Hadoop/HBase cluster for over a year now, and I
really don't understand the 'multiple nodes have multiple roles' junk.
You really only have 2 kinds of nodes: the namenode/master node and
the worker nodes.  The worker nodes are all exactly the same.  The
master node is somewhat special, but at StumbleUpon we run the same
OS/disk config on it.  With the default .tar.gz mechanism, updating is
easy: you install the update on the master node, copy the config files
over, then rsync to all the other machines.  The scripts like
start-hbase/start-dfs/etc. all work from the master node, and you can
bring your cluster up/down with little or no trouble.

As for the counter side of things... I'd love to hear more about this
vector counter setup.  Right now I am focused on providing counters
that can be incremented tens or hundreds of millions of times a day.
Yes, that is 2000+ updates/second to 1 value (2000/sec works out to
about 173 million/day), and HBase is kicking ass at that right now.
From what I know about vector clocks, they are nontrivial structures
to update, and you have about 0.5ms to do the work in :-)
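
For what it's worth, a single-cell increment is one client call.
Here's a rough sketch against the 0.20/0.89-era client API (the table
and column names are invented for the example):

  import java.io.IOException;

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.util.Bytes;

  public class CounterBump {
    public static void main(String[] args) throws IOException {
      HTable table = new HTable(new HBaseConfiguration(), "counters");

      // Atomically adds 1 to a single cell and returns the new total.
      // The add happens server-side in the region server that owns the
      // row, so there is exactly one copy to update - no client-side
      // read-modify-write, and no distributed merge afterwards.
      long hits = table.incrementColumnValue(
          Bytes.toBytes("page:/index.html"),  // row key
          Bytes.toBytes("stats"),             // column family
          Bytes.toBytes("hits"),              // qualifier
          1L);                                // delta

      System.out.println("hits = " + hits);
    }
  }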

Finally, you will ALWAYS need a backup plan for when your data store
goes down.  Because it will eventually, even if only for upgrades.
Caching, 503s, write buffers, write logs, etc. are all standard
mitigations, and even with Cassandra you'll need them.  I heard that
reddit had some kind of Cassandra downtime... nothing is perfect, and
even with "single points of failure" you still have the biggest SPOF
of all - the algorithms and implementation, of which there is only 1.
Distributed cascading failures are horrible to figure out (doubly so
at 3am), and having a central master to coordinate everything doesn't
seem so bad anymore.  But as Jon said, the HBase master is _not_ part
of the query path, so even if it goes down your cluster does not
choke.

The practical reality on the ground is that HBase gives you some
really, really great performance (sub-millisecond cached reads are
great), good tools, features, and API, and it is stable in production
for practical purposes.  We run it at StumbleUpon and its role is only
going to grow.  With the latest updates - CDH3/hdfs-append and 0.89 -
HBase is on a solid road to being a durable, performant datastore.

Stay tuned, lots of cool things coming up.  Come to Hadoop World if
you can, and to the HBase meetup the night before at StumbleUpon's
office in Manhattan.

-ryan

On Sat, Sep 4, 2010 at 9:55 PM, Edward Capriolo <edlinuxguru@gmail.com> wrote:
> On Sun, Sep 5, 2010 at 12:07 AM, Jonathan Gray <jgray@facebook.com> wrote:
>>> > But your boss seems rather to be criticizing the fact that our system
>>> > is made of components.  In software engineering, this is usually
>>> > considered a strength.  As to 'roles', one of the Bigtable authors
>>> > argues that a cluster of master and slaves makes for simpler
>>> > systems [1].
>>>
>>> I definitely agree with you.  However, my boss considers the
>>> simplicity from the users' viewpoint.  More components make the
>>> system more complex for users.
>>
>> Who are the users?  Are they deploying the software and responsible
>> for maintaining backend databases?
>>
>> Or are there backend developers, frontend developers, operations, etc?
>>
>> In my experience, the "users" are generally writing the applications
>> and not maintaining databases.  And in the case of HBase, as has been
>> said already on this thread, users generally have an easier time with
>> the data and consistency models.
>>
>> Above all, I think the point made by Stack earlier is extremely
>> relevant.  Are you using HDFS already?  Do you have needs for ZK?
>> When you do, HBase is an additional piece on this stack and generally
>> fits in nicely.  From an admin/ops POV, the learning curve is minimal
>> once you are familiar with these other systems.  And even if you
>> aren't already using Hadoop, might you be in the future?
>>
>> If you don't and never will, then the single-component nature of
>> Cassandra may be more appealing.
>>
>> Also, vector clocks are nice, but they are still a distributed
>> algorithm.  We've been doing lots of work benchmarking and optimizing
>> increments recently, pushing extremely high throughput on relatively
>> small clusters.  I would not expect to be able to achieve this level
>> of performance or concurrency with any kind of per-counter
>> distribution.  Certainly not while providing the strict atomicity and
>> consistency guarantees that HBase provides.
>>
>> I've never implemented counters w/ vector clocks so I could be wrong.
>> But I do know that I could explain how we implement counters in a
>> performant, consistent, atomic way and you wouldn't have to reach for
>> Wikipedia once ;)
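>>
>> (For a rough idea of what a per-counter vector clock would carry - a
>> toy sketch, not taken from Cassandra or any real system: each node
>> keeps its own logical counter, and a merge takes the element-wise
>> max.  All of this bookkeeping replaces what is a single server-side
>> add in HBase.)
>>
>>   import java.util.HashMap;
>>   import java.util.Map;
>>
>>   class VectorClock {
>>     private final Map<String, Long> counters = new HashMap<String, Long>();
>>
>>     // Advance this node's entry by one on each local update.
>>     void tick(String nodeId) {
>>       Long c = counters.get(nodeId);
>>       counters.put(nodeId, c == null ? 1L : c + 1L);
>>     }
>>
>>     // Reconcile with another replica: element-wise max per node.
>>     void merge(VectorClock other) {
>>       for (Map.Entry<String, Long> e : other.counters.entrySet()) {
>>         Long mine = counters.get(e.getKey());
>>         if (mine == null || mine < e.getValue()) {
>>           counters.put(e.getKey(), e.getValue());
>>         }
>>       }
>>     }
>>   }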
>>
>> JG
>>
>
> (I agree with just about everything on this thread except point 1)
>
> If this were a black and white issue, you would be either a user or a
> developer.  At this stage, both Cassandra and HBase are at the point
> where very few people are pure users.  I feel that if you are checking
> out beta versions, applying patches to a source tree, watching issues,
> or upgrading 3 times a year, you are more of a developer than a user.
>
> Modular software is great.  But if two programs do roughly the same
> job, and one is 7 pieces while the other is 1, it is hard to make the
> case that modular is better.
>
> cd /home/edward/hadoop/hadoop-0.20.2/src/
> [edward@ec src]$ find . | wc -l
> 2683
>
> [edward@ec apache-cassandra-0.6.3-src]$ find . | wc -l
> 609
>
> I have been working with Hadoop for a while now.  There is a ticket I
> wanted to work on: reading the Hadoop configuration from LDAP.  I
> figured this would be a relatively quick thing.  After all, the Hadoop
> conf is just a simple XML file with name/value pairs.....
>
> [edward@ec core]$ cd org/apache/hadoop/conf/
> [edward@ec conf]$ ls
> Configurable.java  Configuration.java  Configured.java  package.html
>
> [edward@ec conf]$ wc -l Configuration.java
> 1301 Configuration.java
>
> Holy crud!  Now, a good portion of this file is comments, but still:
> 1301 lines to read and write XML files!  The Hadoop conf has tons of
> stuff to do: variable interpolation, XInclude support, the ability to
> read configurations as streams, handling of deprecated config file
> names.
>
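> For example (a hedged sketch - the property names are made up, but the
> ${...} expansion is what Configuration.get() actually does):
>
>   import org.apache.hadoop.conf.Configuration;
>
>   public class ConfDemo {
>     public static void main(String[] args) {
>       Configuration conf = new Configuration(false); // skip default resources
>       conf.set("base.dir", "/data");
>       conf.set("work.dir", "${base.dir}/work");
>
>       // get() expands ${...} against other properties (and system
>       // properties), so this prints "/data/work".
>       System.out.println(conf.get("work.dir"));
>     }
>   }
>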
> There is a method in Configuration with this signature:
>
>   public <U> Class<? extends U> getClass(String name,
>                                          Class<? extends U> defaultValue,
>                                          Class<U> xface) {
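>
> In calling code that reads fairly naturally, though (a hedged sketch -
> the config key is invented; the codec classes are real Hadoop ones):
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.io.compress.CompressionCodec;
>   import org.apache.hadoop.io.compress.DefaultCodec;
>
>   public class CodecLookup {
>     // Resolve a class name from the config, fall back to a default,
>     // and verify it is assignable to the given interface.
>     static Class<? extends CompressionCodec> pick(Configuration conf) {
>       return conf.getClass("my.compression.codec",  // hypothetical key
>                            DefaultCodec.class,      // default if unset
>                            CompressionCodec.class); // required supertype
>     }
>   }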
>
> My point is that all the modularity and flexibility do not translate
> into much for end users, and for developers who just want to jump in,
> I would rather jump into 600 files than 2600 (by the way, that is NOT
> including hbase).
>
