hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "Hama" by udanax
Date Mon, 25 Feb 2008 03:43:07 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by udanax:
http://wiki.apache.org/hadoop/Hama

------------------------------------------------------------------------------
+ [http://wiki.apache.org/hadoop-data/attachments/Hama/attachments/hama-medium.png]
+ 
  == Introduction ==
- '''Hama''' is a parallel matrix computational package based on Hadoop Map/Reduce. It will
be useful for a massively large-scale ''Numerical Analysis'' and ''Data Mining'', which need
the intensive computation power of matrix inversion, e.g. linear regression, PCA, SVM and
etc. It will be also useful for many scientific applications, e.g. physics computations, linear
algebra, computational fluid dynamics, statistics, graphic rendering and many more.
+ '''Hama''' is a parallel matrix computation package based on Hadoop Map/Reduce. In Korean,
Hama (하마) means hippo. It will be useful for massively large-scale ''Numerical Analysis''
and ''Data Mining'', which need the intensive computational power of matrix operations such
as matrix inversion, e.g. linear regression, PCA, SVM, etc. It will also be useful for many
scientific applications, e.g. physics computations, linear algebra, computational fluid
dynamics, statistics, graphics rendering, and many more.
  
  Currently, several shared-memory based parallel matrix solutions can provide scalable,
high-performance matrix operations, but their matrix resources do not scale in terms of
complexity. The '''Hama''' approach proposes the use of the 2-dimensional Row and Column (Qualifier)
space and the multi-dimensional Columnfamilies of Hbase, which can store large, sparse matrices
of various types (e.g. triangular matrices, 3D matrices, etc.). In addition, the auto-partitioned
sparsity sub-structure will be efficiently managed and served by Hbase. Row and Column operations
can be done in linear time, while several algorithms such as structured Gaussian elimination
and iterative methods run in O(~-number of non-zero elements in the matrix-~ / ~-number
of mappers (processors/cores)-~) time on Hadoop Map/Reduce. 
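  The row/column layout described above can be sketched in plain Java. This is a minimal
in-memory illustration, not Hama's actual API: the `SparseMatrixSketch` class is hypothetical,
and a `HashMap` stands in for the Hbase table that Hama would actually use.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the row-oriented sparse layout described above.
// In Hama, each matrix row would live in an Hbase row keyed by row index,
// with one column qualifier per non-zero entry; a HashMap stands in here.
public class SparseMatrixSketch {
    // row index -> (column index -> value); absent entries are implicit zeros
    private final Map<Integer, Map<Integer, Double>> rows = new HashMap<>();

    public void set(int row, int col, double value) {
        rows.computeIfAbsent(row, r -> new HashMap<>()).put(col, value);
    }

    public double get(int row, int col) {
        return rows.getOrDefault(row, Map.of()).getOrDefault(col, 0.0);
    }

    // A row operation (here: scaling) touches only the stored non-zeros, so it
    // runs in time linear in the number of non-zeros of that row -- the
    // per-mapper cost behind the O(non-zeros / mappers) estimate, since
    // Map/Reduce would assign disjoint row ranges to different mappers.
    public void scaleRow(int row, double factor) {
        Map<Integer, Double> r = rows.get(row);
        if (r != null) {
            r.replaceAll((col, v) -> v * factor);
        }
    }

    public static void main(String[] args) {
        SparseMatrixSketch m = new SparseMatrixSketch();
        m.set(0, 0, 2.0);
        m.set(0, 3, 5.0);   // columns 1 and 2 of row 0 stay implicit zeros
        m.scaleRow(0, 3.0);
        System.out.println(m.get(0, 3)); // 15.0
        System.out.println(m.get(0, 1)); // 0.0
    }
}
```

  The same row-at-a-time access pattern is what lets a mapper stream over its share of the
rows independently of the others.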
  === Initial Contributors ===
