From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "PoweredBy" by SteveLoughran
Date Tue, 25 Oct 2011 09:22:39 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "PoweredBy" page has been changed by SteveLoughran:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=360&rev2=361

Comment:
roll back last spam links

   * [[http://atbrox.com/|Atbrox]]
    * We use Hadoop for information extraction & search, and for data analysis consulting
    * Cluster: we primarily use Amazon's Elastic MapReduce
- 
-  * [[http://www.ABC-Online-Shops.de/|ABC Online Shops]]
-   * Shop the Internet search engine
-  * [[http://www.aflam-3araby.com/|Aflam]]
-   * Biggest Aflam Directory on Web
  
  = B =
   * [[http://www.babacar.org/|BabaCar]]
@@ -144, +139 @@

   * We also use Hadoop for executing long-running offline [[http://en.wikipedia.org/wiki/SPARQL|SPARQL]] queries for clients.
    * We use Amazon S3 and Cassandra to store input RDF datasets and output files.
   * We've developed [[http://rdfgrid.rubyforge.org/|RDFgrid]], a Ruby framework for map/reduce-based processing of RDF data.
-   * We primarily use Ruby,  and RDFgrid to process RDF data with Hadoop Streaming.
+   * We primarily use Ruby, [[http://rdf.rubyforge.org/|RDF.rb]] and RDFgrid to process RDF data with Hadoop Streaming.
   * We primarily run Hadoop jobs on Amazon Elastic MapReduce, with cluster sizes of 1 to 20 nodes depending on the size of the dataset (hundreds of millions to billions of RDF statements).
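
For readers who haven't seen Hadoop Streaming, here is a minimal sketch of the style of job the entry above describes. It is illustrative only (not RDFgrid code) and assumes N-Triples input, i.e. one whitespace-separated subject/predicate/object statement per line; it counts how often each predicate occurs in the dataset.

  #!/usr/bin/env python
  # mapper.py: illustrative Streaming mapper (not RDFgrid code).
  # Emits one (predicate, 1) pair per N-Triples statement on stdin.
  import sys

  for line in sys.stdin:
      line = line.strip()
      if not line or line.startswith('#'):
          continue
      parts = line.split(None, 2)   # subject, predicate, rest of statement
      if len(parts) == 3:
          print('%s\t1' % parts[1])

  #!/usr/bin/env python
  # reducer.py: sums the counts per predicate. Streaming delivers all
  # values for a key contiguously, sorted by key.
  import sys

  current, count = None, 0
  for line in sys.stdin:
      key, _, value = line.rstrip('\n').partition('\t')
      if key != current:
          if current is not None:
              print('%s\t%d' % (current, count))
          current, count = key, 0
      count += int(value)
  if current is not None:
      print('%s\t%d' % (current, count))

Both scripts run unchanged as a streaming step on Elastic MapReduce; locally the equivalent invocation uses the stock streaming jar with hypothetical paths, e.g. hadoop jar hadoop-streaming.jar -input triples/ -output counts/ -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py.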

  
   * [[http://www.deepdyve.com|Deepdyve]]
@@ -421, +416 @@

  = N =
   * [[http://www.navteqmedia.com|NAVTEQ Media Solutions]]
   * We use Hadoop/Mahout to process user interactions with advertisements to optimize ad selection.
+ 
   * [[http://www.openneptune.com|Neptune]]
   * Another Bigtable-cloning project, using Hadoop to store large structured data sets.
   * 200 nodes (each with two dual-core CPUs, 2 TB of storage, and 4 GB of RAM)
@@ -439, +435 @@

    * We use Hadoop to store and process our log files
   * We rely on Apache Pig for reporting and analytics, on Cascading for machine learning, and on a proprietary JavaScript API for ad-hoc queries
    * We use commodity hardware, with 8 cores and 16 GB of RAM per machine
-  * [[http://www.nlptechniquez.com/|nlp technique]]
-   * Free NLP Techniques in Life
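
A hypothetical illustration of the kind of report the log-processing entry above describes: in Pig this is a LOAD / GROUP BY / COUNT pipeline, and the same aggregation is sketched here in Python (the site's actual scripts are not shown on this page), assuming Apache combined-format access logs.

  #!/usr/bin/env python
  # status_report.py: tally requests per HTTP status code, the moral
  # equivalent of Pig's GROUP-by-status followed by COUNT.
  import sys
  from collections import Counter

  counts = Counter()
  for line in sys.stdin:
      fields = line.split()
      # In the combined log format the status code is the 9th field, e.g.
      # 127.0.0.1 - - [10/Oct/2011:13:55:36 -0700] "GET / HTTP/1.0" 200 2326
      if len(fields) > 8 and fields[8].isdigit():
          counts[fields[8]] += 1

  for status, n in sorted(counts.items()):
      print('%s\t%d' % (status, n))

On a cluster the same grouping would be expressed declaratively in Pig Latin and executed as a MapReduce job rather than on a single machine.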
  
  = O =
   * [[http://www.optivo.com|optivo]] - Email marketing software
@@ -645, +639 @@

   . Our goal is to develop techniques for the Semantic Web that take advantage of MapReduce (Hadoop) and its scaling behavior to keep up with the growing proliferation of semantic data.
   * [[http://dbis.informatik.uni-freiburg.de/?project=DiPoS/RDFPath.html|RDFPath]] is an expressive RDF path language for querying large RDF graphs with MapReduce.
   * [[http://dbis.informatik.uni-freiburg.de/?project=DiPoS/PigSPARQL.html|PigSPARQL]] is a translation from SPARQL to Pig Latin, allowing SPARQL queries to be executed on large RDF graphs with MapReduce.
-   * [[http://www.ultra-mind.com/|UltraMind]]
-   * If you Believe in Money, Mind Control and Power
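
To illustrate the idea behind such a translation (a conceptual analogue in Python, not PigSPARQL's generated code): each SPARQL triple pattern becomes a selection over a relation of (subject, predicate, object) tuples, and a variable shared between two patterns becomes a join, which are precisely the operators Pig Latin provides as a MapReduce dataflow.

  #!/usr/bin/env python
  # sparql_as_dataflow.py: toy demonstration of pattern matching as
  # selection and shared variables as joins (not PigSPARQL output).
  triples = [              # toy RDF graph of (subject, predicate, object)
      ('alice', 'knows', 'bob'),
      ('bob',   'knows', 'carol'),
      ('bob',   'age',   '42'),
  ]

  # SPARQL: SELECT ?x ?age WHERE { ?x knows ?y . ?y age ?age }
  knows = [(s, o) for (s, p, o) in triples if p == 'knows']   # ?x knows ?y
  ages  = [(s, o) for (s, p, o) in triples if p == 'age']     # ?y age ?age
  # shared variable ?y: join the two relations on it
  result = [(x, age) for (x, y) in knows for (y2, age) in ages if y == y2]

  print(result)   # [('alice', '42')]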
  
  = V =
   * [[http://www.veoh.com|Veoh]]
