hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Trivial Update of "Hbase/PoweredBy" by stack
Date Tue, 06 Aug 2013 16:56:58 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "Hbase/PoweredBy" page has been changed by stack:
https://wiki.apache.org/hadoop/Hbase/PoweredBy?action=diff&rev1=83&rev2=84

Comment:
Add note on how to edit

+ This page is a roughly alphabetical list of institutions that are using HBase. Please include details about your cluster hardware and size; entries without these details may be mistaken for spam and deleted.
+ 
+ To add an entry you need write permission on the wiki, which you can get by subscribing to the dev@hbase.apache.org mailing list and asking for permission for the wiki username you registered. If you are using HBase in production, you ought to consider getting involved in the development process anyway: file bugs, test beta releases, review the code, and turn your notes into shared documentation. Your participation will help ensure your needs are met.
+ 
  [[http://www.adobe.com|Adobe]] - We currently have about 30 nodes running HDFS, Hadoop, and HBase in clusters ranging from 5 to 14 nodes, in both production and development. We plan a deployment on an 80-node cluster. We are using HBase in several areas, from social services to structured data and processing for internal use. We constantly write data to HBase and run MapReduce jobs to process it, then store it back to HBase or to external systems. Our production cluster has been running since October 2008.
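
The read-process-write loop described above is what HBase's stock MapReduce integration (the org.apache.hadoop.hbase.mapreduce classes of the 0.9x era) is built for. Below is a minimal sketch of such a job; the table names, column family, qualifiers, and the process() step are all hypothetical placeholders, not anything from the entry above.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class ProcessAndStore {

  // Reads each row of the source table, derives a new value, and emits a
  // Put that TableOutputFormat writes to the sink table.
  static class ProcessMapper extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      byte[] raw = value.getValue(Bytes.toBytes("data"), Bytes.toBytes("raw"));
      if (raw == null) return;  // skip rows without the input column
      Put put = new Put(row.get());
      put.add(Bytes.toBytes("data"), Bytes.toBytes("processed"), process(raw));
      context.write(row, put);
    }

    private byte[] process(byte[] raw) {
      return Bytes.toBytes(raw.length);  // placeholder computation
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "process-and-store");
    job.setJarByClass(ProcessAndStore.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // larger scan batches for MapReduce throughput
    scan.setCacheBlocks(false);  // a full scan shouldn't pollute the block cache

    TableMapReduceUtil.initTableMapperJob("source_table", scan,
        ProcessMapper.class, ImmutableBytesWritable.class, Put.class, job);
    TableMapReduceUtil.initTableReducerJob("sink_table", null, job);
    job.setNumReduceTasks(0);  // map-only: Puts go straight to the sink table

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Making the job map-only (no reducer, zero reduce tasks) is the usual choice when each output row depends only on one input row, since it avoids the shuffle entirely.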
  
  [[http://www.benipaltechnologies.com|Benipal Technologies]] - We have a 35-node cluster used for HBase and MapReduce, with Lucene / SOLR and Katta integration to create and fine-tune our search databases. Currently, our HBase installation has over 10 billion rows with hundreds of data points per row. We compute over 10¹⁸ calculations daily using MapReduce directly on HBase. We heart HBase.
