hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "Hbase/PoweredBy" by stack
Date Tue, 03 Jul 2012 12:15:31 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "Hbase/PoweredBy" page has been changed by stack:

Added oclc for ron buckley

  [[http://www.mendeley.com|Mendeley]] We are creating a platform for researchers to collaborate
and share their research online. HBase is helping us create the world's largest research
paper collection and is used to store all our raw imported data. We run many MapReduce
jobs to process these papers into the pages displayed on the site. We also use HBase with
Pig to do analytics and produce the article statistics shown on the web site. You can find
out more about how we use HBase in [[http://www.slideshare.net/danharvey/hbase-at-mendeley|these slides]].
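The map/reduce analytics flow described above can be sketched as follows. This is a minimal
plain-Python illustration of the pattern (map events to keyed counts, then reduce by summing),
with hypothetical data standing in for rows scanned out of an HBase table; it is not
Mendeley's actual pipeline.

```python
from collections import defaultdict

# Hypothetical event log: (article_id, event_type) pairs, standing in for
# rows read from an HBase table.
events = [
    ("paper-1", "view"),
    ("paper-2", "view"),
    ("paper-1", "download"),
    ("paper-1", "view"),
]

# Map phase: emit a ((article, event_type), 1) pair per event.
mapped = [((article, kind), 1) for article, kind in events]

# Reduce phase: sum the counts for each key, yielding per-article
# statistics of the kind shown on an article page.
stats = defaultdict(int)
for key, count in mapped:
    stats[key] += count

print(dict(stats))
```

In a real deployment the map and reduce phases run as separate distributed stages over the
HBase-backed dataset, but the keyed-aggregation shape is the same.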
  [[http://ning.com|Ning]] uses HBase to store and serve the results of processing user events
and log files, which allows us to provide near-real-time analytics and reporting. We use a
small cluster of commodity machines, each with 4 cores and 16GB of RAM, to handle all
our analytics and reporting needs.
+ [[http://www.worldcat.org|OCLC]] uses HBase as the main data store for WorldCat, a union
catalog that aggregates the collections of 72,000 libraries in 112 countries and territories.
WorldCat currently comprises nearly 1 billion records with nearly 2 billion library
ownership indications. We're running a 50-node HBase cluster and a separate offline map-reduce
  [[http://olex.openlogic.com|OpenLogic]] stores all the world's Open Source packages, versions,
files, and lines of code in HBase for both near-real-time access and analytical purposes.
The production cluster has well over 100TB of disk spread across nodes with 32GB+ RAM and
dual-quad or dual-hex core CPUs.
