hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "PoweredBy" by messing
Date Tue, 06 Dec 2011 08:48:08 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "PoweredBy" page has been changed by messing:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=379&rev2=380

    * ''We use Hadoop/MapReduce and Hive for data management, analysis, log aggregation, reporting, ETL into Hive, and loading data into distributed K/V stores''
    * ''Our primary cluster has 10 nodes; each node has 2x4 cores, 24 GB RAM, and 6x1TB SATA disks.''
    * ''We also use AWS EMR clusters for additional reporting capacity on 10 TB of data stored in S3. We usually use 60-100 m1.xlarge nodes.''
- 
  
   * ''[[http://www.brockmann-consult.de/|Brockmann Consult GmbH]] - Environmental informatics and geoinformation services ''
    * ''We use Hadoop to develop the [[http://www.brockmann-consult.de/calvalus/|Calvalus]] system - parallel processing of large amounts of satellite data.''
@@ -200, +199 @@

    * ''Image content based advertising and auto-tagging for social media. ''
    * ''Image based video copyright protection. ''
  
- 
   * ''[[http://www.explore.to/|Explore.To Yellow Pages]] - Explore To Yellow Pages ''
    * ''We use Hadoop for our internal search, filtering and indexing''
    * ''Elastic cluster with 5-80 nodes''
@@ -229, +227 @@

    * ''Machine learning ''
  
   * ''[[http://freestylers.jp/|Freestylers]] - Image retrieval engine ''
-   * ''[[http://www.kralarabaoyunlari.com|Araba oyunları]] - Araba oyunları
+   * ''[[http://www.kralarabaoyunlari.com|Araba oyunları]] - Araba oyunları ''
-   * ''[[http://www.pepe-izle.gen.tr/|Pepe izle]] - Pepe izle
+   * [[http://www.pepe-izle.gen.tr/|Pepe izle]] - Pepe izle
    * ''We, the Japanese company Freestylers, have used Hadoop since April 2009 to build the image-processing environment for our image-based product recommendation system, mainly on Amazon EC2. ''
    * ''Our Hadoop environment produces the original database for fast access from our web application. ''
    * ''We also use Hadoop to analyze similarities in user behavior. ''
@@ -252, +250 @@

   * ''[[http://www.gewinnspiele-agent.de|Gewinnspiele]] ''
    * ''6-node cluster (each node: 4 dual-core CPUs, 1.5TB storage, 4GB RAM, RedHat OS) ''
    * ''Using Hadoop for our high-speed data mining applications in cooperation with [[http://www.twilight-szene.de|Twilight]] ''
- 
  
   * ''[[http://gumgum.com|GumGum]] ''
    * ''9 node cluster (Amazon EC2 c1.xlarge) ''
@@ -408, +405 @@

    * ''Focus is on social graph analysis and ad optimization. ''
    * ''Use a mix of Java, Pig and Hive. ''
  
- 
   * ''[[http://www.memonews.com/en//|MeMo News - Online and Social Media Monitoring]] ''
    * ''We use Hadoop ''
     * ''as a platform for distributed crawling ''
     * ''to store and process unstructured data, such as news and social media (Hadoop, Pig, MapReduce and HBase) ''
     * ''log file aggregation and processing (Flume) ''
- 
  
   * ''[[http://www.mercadolibre.com//|Mercadolibre.com]] ''
    * ''20-node cluster (12 * 20 cores, 32GB, 53.3TB) ''
@@ -551, +546 @@

    * ''We use Hadoop for log and usage analysis ''
    * ''We predominantly leverage Hive and HUE for data access ''
  
-  * ''[[http://www.rummblelabs.com|RummbleLabs]] ''
+  * ''[[http://www.rubbelloselotto.de/|Rubbellose]] ''
    * ''We use AWS EMR with Cascading to create personalization and recommendation job flows ''
  
  = S =
