Dear Wiki user,
You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.
The following page has been changed by udanax:
http://wiki.apache.org/lucenehadoop/Hbase/ShellPlans

Altools provides automatic parallelization of the most time-consuming relational/matrix/vector
operations, and will ensure that the iterative solvers are scalable.
+ * Survey List
 * Parallel Processing of Relational Data
 * Parallel Algorithms of Multi-Dimensional Matrix Operations
 * Parallel Gaussian Elimination Algorithm

@@ -169, +170 @@
}}}
= Example Of Altools Use =
+ == Latent Semantic Analysis by SVD ==
+ Latent semantic analysis (LSA) is a technique in natural language processing, in particular
in vectorial semantics, of analyzing relationships between a set of documents and the terms
they contain by producing a set of concepts related to the documents and terms. LSA was patented
in 1988 by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer,
Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval,
it is sometimes called latent semantic indexing (LSI). -- wikipedia

+ This LSA example used SVD decomposition with k=3.
+
+ ''(The SVD is typically computed using large matrix methods (for example, Lanczos methods)
but may also be computed incrementally and with greatly reduced resources via a neural network-like
approach which does not require the large, full-rank matrix to be held in memory.)'' -- wikipedia
* ~'''NOTATION'''~
* ~''T,,0,,'' : orthogonal, unit-length columns~
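The rank-k truncation described above can be sketched in a few lines of numpy. This is only an illustration, not Altools code: the term-document matrix `A` below is made-up count data, and `T0`, `S0`, `D0` follow the notation in the list above (`T0` with orthogonal, unit-length columns).

```python
# Minimal LSA sketch: rank-3 truncated SVD of a term-document matrix.
# The matrix values are illustrative term counts, not from the wiki page.
import numpy as np

# term-document matrix A (rows = terms, columns = documents)
A = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1],
    [0, 1, 0, 1, 1],
], dtype=float)

# Full SVD: A = T0 @ diag(S0) @ D0.T, where T0 has orthogonal,
# unit-length columns (the notation used above).
T0, S0, D0t = np.linalg.svd(A, full_matrices=False)

k = 3  # keep the k largest singular values, as in the example (k=3)
A_k = T0[:, :k] @ np.diag(S0[:k]) @ D0t[:k, :]

# A_k is the best rank-3 approximation of A in the least-squares sense;
# similarities computed in this reduced space expose "latent" concepts.
print(np.round(A_k, 2))
```

Computing the full SVD is fine for a toy matrix like this, but for large corpora one would use the large-matrix or incremental methods the quoted passage mentions.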
