hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "PoweredBy" by prosch
Date Thu, 07 Apr 2011 09:17:08 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "PoweredBy" page has been changed by prosch.
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=273&rev2=274

--------------------------------------------------

    * Our clusters vary from 10 to 500 nodes
    * Hypertable is also supported by Baidu
  
-  * [[http://www.beebler.com|Beebler]]
+  * [[http://www.beebler.com|Beebler]] [[http://technischeuebersetzung.mojblog.hr/|Blog]]
    * 14 node cluster (each node has: 2 dual core CPUs, 2TB storage, 8GB RAM)
    * We use hadoop for matching dating profiles
  
@@ -511, +511 @@

  
  = U =
   * [[http://glud.udistrital.edu.co|Universidad Distrital Francisco Jose de Caldas (Grupo GICOGE/Grupo Linux UD GLUD/Grupo GIGA)]]
 -   . 5 node low-profile cluster. We use Hadoop to support the research project: Territorial Intelligence System of Bogota City.
 +   . 5 node low-profile cluster. We use Hadoop to support the research project: Territorial Intelligence System of Bogota City.[[http://prosch.blog.de/|Übersetzung]]
  
   * [[http://ir.dcs.gla.ac.uk/terrier/|University of Glasgow - Terrier Team]]
    * 30 nodes cluster (Xeon Quad Core 2.4GHz, 4GB RAM, 1TB/node storage).
@@ -543, +544 @@

   * [[http://www.web-alliance.fr|Web Alliance]]
   * We use Hadoop for our internal search engine optimization (SEO) tools. It allows us to store, index, and search data in a much faster way.
    * We also use it for logs analysis and trends prediction.
-  * [[http://www.worldlingo.com/|WorldLingo]]
+  * [[http://www.worldlingo.com/|WorldLingo]] [[http://uebersetzer1.wordpress.com/|Wordpress]]
    * Hardware: 44 servers (each server has: 2 dual core CPUs, 2TB storage, 8GB RAM)
   * Each server runs Xen with one Hadoop/HBase instance and another instance with web or application servers, giving us 88 usable virtual machines.
    * We run two separate Hadoop/HBase clusters with 22 nodes each.
@@ -562, +563 @@

    * >60% of Hadoop Jobs within Yahoo are Pig jobs.
  
  = Z =
-  * [[http://www.zvents.com/|Zvents]]
 +  * [[http://www.zvents.com/|Zvents]]
    * 10 node cluster (Dual-Core AMD Opteron 2210, 4GB RAM, 1TB/node storage)
    * Run Naive Bayes classifiers in parallel over crawl data to discover event information
  
