mxnet-dev mailing list archives

From Haibin Lin <>
Subject Status of Sparse Tensor Support in MXNet
Date Wed, 27 Sep 2017 17:12:24 GMT
(It looks like the previous email didn’t go through. Resending it)

Hi everyone,

I’ve been working on sparse tensor support in MXNet. I’d like to share a
bit about what I’ve been working on and gather input/feature requests from
the community.

Recently, sparse tensor CPU support has been merged into MXNet master with:

   - Two sparse data formats: Compressed Sparse Row (CSR, for sparse
   inputs) and Row Sparse (for sparse gradients)
   - Two data iterators for sparse data input: NDArrayIter and LibSVMIter
   - Three optimizers for sparse gradient updates: Ftrl (@CNevd), SGD,
   and Adam
   - Sparse storage conversion, matrix-matrix product, matrix-vector
   product, and sparse gradient aggregation (CPU @reminisce, GPU
   @stefanhenneking)
   - Many sparse element-wise CPU operators, including arithmetic (e.g.
   elemwise_add), rounding, trigonometric, hyperbolic, exponent,
   logarithm, and power operators (mainly implemented for Row Sparse, not
   yet for CSR @cjolivier01)
   - Distributed kv-store with sparse push (push only; 64-bit hashed keys
   are not supported for distributed training)
   - Distributed linear regression with sparse data
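
To illustrate the two storage formats above, here is a minimal plain-Python sketch (not MXNet's actual implementation, which lives in NDArray C++ code). CSR keeps three flat arrays (`data`, `indices`, `indptr`), which also makes matrix-vector products cheap; Row Sparse keeps only the non-zero rows plus their row indices:

```python
def csr_matvec(data, indices, indptr, x):
    """Multiply a CSR matrix by a dense vector x (a sketch of SpMV)."""
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        # row i occupies data[indptr[i]:indptr[i+1]]
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# The 3x4 matrix [[1, 0, 2, 0],
#                 [0, 0, 0, 0],
#                 [0, 3, 0, 4]] in CSR form:
data    = [1.0, 2.0, 3.0, 4.0]   # non-zero values
indices = [0, 2, 1, 3]           # column index of each value
indptr  = [0, 2, 2, 4]           # row boundaries into data/indices

print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0, 1.0]))  # → [3.0, 0.0, 7.0]

# The same matrix in Row Sparse form: only the non-zero rows are stored.
rsp_indices = [0, 2]                      # which rows are non-zero
rsp_data    = [[1.0, 0.0, 2.0, 0.0],
               [0.0, 3.0, 0.0, 4.0]]     # those rows, stored densely
```

CSR suits wide, sparse inputs (e.g. bag-of-words features from LibSVMIter), while Row Sparse suits gradients where entire rows are either fully present or fully zero.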
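
The payoff of Row Sparse gradients for the optimizers listed above is that an update only has to touch the non-zero rows. A hypothetical sketch of such a "lazy" SGD update (the function name and list-of-lists weight layout are illustrative, not MXNet's API):

```python
def sparse_sgd_update(weight, grad_rows, grad_data, lr=0.1):
    """Apply SGD only to the rows present in a row-sparse gradient.

    weight:    dense weight matrix as a list of rows (mutated in place)
    grad_rows: row indices with non-zero gradient
    grad_data: the corresponding gradient rows
    """
    for row, g in zip(grad_rows, grad_data):
        weight[row] = [w - lr * gj for w, gj in zip(weight[row], g)]
    return weight

w = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
sparse_sgd_update(w, grad_rows=[2], grad_data=[[10.0, 10.0]])
# Only row 2 changes: [[1.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
```

For a large embedding matrix where each batch touches a handful of rows, this turns an O(vocab) update into an O(batch) one.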

There are also ongoing benchmarking efforts for matrix multiplication,
memory usage, and distributed training within MXNet (@anirudh2290), as
well as tutorials on basic sparse operations (work in progress; comments
are welcome).

The future work I have in mind includes:

   - Update the documentation to reflect available sparse operators and
   benchmarks
   - Sparse embedding operator
   - Adagrad optimizer for sparse gradient updates
   - Reduce sum operator for CSR
   - Gluon interface support
   - Factorization machine example
   - Noise contrastive estimation example
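
On the sparse embedding item: the backward pass of an embedding lookup is a natural fit for Row Sparse, since only the rows for the looked-up token ids receive gradient. A hypothetical sketch of that accumulation (names and the (rows, data) return shape are illustrative):

```python
def embedding_backward(out_grad, token_ids):
    """Accumulate an embedding lookup's gradient as a row-sparse pair.

    out_grad:  gradient per looked-up token, one dense row each
    token_ids: which embedding row each lookup hit (repeats accumulate)
    Returns (rows, data): sorted non-zero row indices and their rows.
    """
    acc = {}
    for tid, g in zip(token_ids, out_grad):
        row = acc.setdefault(tid, [0.0] * len(g))
        for j, gj in enumerate(g):
            row[j] += gj
    rows = sorted(acc)
    return rows, [acc[r] for r in rows]

# Two lookups of token 3 accumulate into a single non-zero row:
rows, data = embedding_backward([[1.0, 2.0], [0.5, 0.5]], token_ids=[3, 3])
# rows == [3], data == [[1.5, 2.5]]
```

Combined with a row-sparse-aware optimizer, this keeps both the gradient and the update proportional to the batch rather than the vocabulary size.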

*What sparse-related features and operator support would you need, and
what would you use them for? Would you like any item in the future-work
list to become available sooner? Any feedback is welcome. Thanks a lot.*


