hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "PoweredBy" by DjoerdHiemstra
Date Thu, 26 Jan 2012 08:16:51 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "PoweredBy" page has been changed by DjoerdHiemstra:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=391&rev2=392

Comment:
Added: University of Twente (put things under 'U' in alphabetical order)

    * ''We have 94 nodes (752 cores) in our clusters, as of July 2010, but the number grows
regularly. ''
  
  = U =
-  * ''[[http://glud.udistrital.edu.co|Universidad Distrital Francisco Jose de Caldas (Grupo
GICOGE/Grupo Linux UD GLUD/Grupo GIGA]] ''
+  * ''[[http://glud.udistrital.edu.co|Universidad Distrital Francisco Jose de Caldas (Grupo
GICOGE/Grupo Linux UD GLUD/Grupo GIGA)]] ''
   . ''5-node low-profile cluster. We use Hadoop to support the research project: Territorial
Intelligence System of Bogota City. ''
- 
-  * ''[[http://ir.dcs.gla.ac.uk/terrier/|University of Glasgow - Terrier Team]] ''
-   * ''30-node cluster (Xeon Quad Core 2.4GHz, 4GB RAM, 1TB/node storage). We use Hadoop
to facilitate information retrieval research & experimentation, particularly for TREC,
using the Terrier IR platform. The open source release of [[http://ir.dcs.gla.ac.uk/terrier/|Terrier]]
includes large-scale distributed indexing using Hadoop MapReduce. ''
- 
-  * ''[[http://www.umiacs.umd.edu/~jimmylin/cloud-computing/index.html|University of Maryland]]
''
-   . ''We are one of six universities participating in IBM/Google's academic cloud computing
initiative. Ongoing research and teaching efforts include projects in machine translation,
language modeling, bioinformatics, email analysis, and image processing. ''
- 
-  * ''[[http://hcc.unl.edu|University of Nebraska Lincoln, Holland Computing Center]] ''
-   . ''We currently run one medium-sized Hadoop cluster (1.6PB) to store and serve up physics
data for the computing portion of the Compact Muon Solenoid (CMS) experiment. This requires
a filesystem which can download data at multiple Gbps and process data at an even higher rate
locally. Additionally, several of our students are involved in research projects on Hadoop.
''
  
   * ''[[http://dbis.informatik.uni-freiburg.de/index.php?project=DiPoS|University of Freiburg
- Databases and Information Systems]] ''
   . ''10-node cluster (Dell PowerEdge R200 with Xeon Dual Core 3.16GHz, 4GB RAM, 3TB/node
storage). ''
    . ''Our goal is to develop techniques for the Semantic Web that take advantage of MapReduce
(Hadoop) and its scaling-behavior to keep up with the growing proliferation of semantic data.
''
    * ''[[http://dbis.informatik.uni-freiburg.de/?project=DiPoS/RDFPath.html|RDFPath]] is
an expressive RDF path language for querying large RDF graphs with MapReduce. ''
   * ''[[http://dbis.informatik.uni-freiburg.de/?project=DiPoS/PigSPARQL.html|PigSPARQL]]
is a translation from SPARQL to Pig Latin that allows SPARQL queries to be executed on large RDF
graphs with MapReduce. ''
+ 
+  * ''[[http://ir.dcs.gla.ac.uk/terrier/|University of Glasgow - Terrier Team]] ''
+   * ''30-node cluster (Xeon Quad Core 2.4GHz, 4GB RAM, 1TB/node storage). We use Hadoop
to facilitate information retrieval research & experimentation, particularly for TREC,
using the Terrier IR platform. The open source release of [[http://ir.dcs.gla.ac.uk/terrier/|Terrier]]
includes large-scale distributed indexing using Hadoop MapReduce. ''
+ 
+  * ''[[http://www.umiacs.umd.edu/~jimmylin/cloud-computing/index.html|University of Maryland]]
''
+   . ''We are one of six universities participating in IBM/Google's academic cloud computing
initiative. Ongoing research and teaching efforts include projects in machine translation,
language modeling, bioinformatics, email analysis, and image processing. ''
+ 
+  * ''[[http://hcc.unl.edu|University of Nebraska Lincoln, Holland Computing Center]] ''
+   . ''We currently run one medium-sized Hadoop cluster (1.6PB) to store and serve up physics
data for the computing portion of the Compact Muon Solenoid (CMS) experiment. This requires
a filesystem which can download data at multiple Gbps and process data at an even higher rate
locally. Additionally, several of our students are involved in research projects on Hadoop.
''
+ 
+  * ''[[http://db.cs.utwente.nl|University of Twente, Database Group]] ''
+   . ''We run a 16-node cluster (dual-core Xeon E3110 64-bit processors with 6MB cache, 8GB
main memory, 1TB disk) as of December 2008. We teach MapReduce and use Hadoop in our computer
science master's program and for information retrieval research. For more information, see
http://mirex.sourceforge.net/ ''
  
  = V =
   * ''[[http://www.veoh.com|Veoh]] ''
