singa-commits mailing list archives

From: wang...@apache.org
Subject: svn commit: r1742242 - /incubator/singa/site/trunk/content/markdown/develop/schedule.md
Date: Wed, 04 May 2016 09:59:15 GMT
Author: wangwei
Date: Wed May  4 09:59:15 2016
New Revision: 1742242

URL: http://svn.apache.org/viewvc?rev=1742242&view=rev
Log:
update schedule for v1.0

Modified:
    incubator/singa/site/trunk/content/markdown/develop/schedule.md

Modified: incubator/singa/site/trunk/content/markdown/develop/schedule.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/develop/schedule.md?rev=1742242&r1=1742241&r2=1742242&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/develop/schedule.md (original)
+++ incubator/singa/site/trunk/content/markdown/develop/schedule.md Wed May  4 09:59:15 2016
@@ -3,31 +3,38 @@
 
 | Release | Module| Feature | Status |
 |---------|---------|-------------|--------|
-| 0.1 Sep 2015     | Neural Network |1.1. Feed forward neural network, including CNN, MLP | done|
-|         |          |1.2. RBM-like model, including RBM | done|
-|         |                |1.3. Recurrent neural network, including standard RNN | done|
-|         | Architecture   |1.4. One worker group on single node (with data partition)| done|
-|         |                |1.5. Multi worker groups on single node using [Hogwild](http://www.eecs.berkeley.edu/~brecht/papers/hogwildTR.pdf)|done|
-|         |                |1.6. Distributed Hogwild|done|
-|         |                |1.7. Multi groups across nodes, like [Downpour](http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks)|done|
-|         |                |1.8. All-Reduce training architecture like [DeepImage](http://arxiv.org/abs/1501.02876)|done|
-|         |                |1.9. Load-balance among servers | done|
-|         | Failure recovery|1.10. Checkpoint and restore |done|
-|         | Tools|1.11. Installation with GNU auto tools| done|
-|0.2 Jan 2016 | Neural Network |2.1. Feed forward neural network, including AlexNet, cuDNN layers, etc.| done |
-|         |                |2.2. Recurrent neural network, including GRULayer and BPTT|done |
-|         | |2.3. Model partition and hybrid partition|done|
-|         | Tools |2.4. Integration with Mesos for resource management|done|
-|         |               |2.5. Prepare Docker images for deployment|done|
-|         |               |2.6. Visualization of neural net and debug information |done|
-|         | Binding        |2.7. Python binding for major components |done|
-|         | GPU            |2.8. Single node with multiple GPUs |done|
-|0.3 April 2016 | GPU | 3.1 Multiple nodes, each with multiple GPUs|done|
-|               |     | 3.2 Heterogeneous training using both GPU and CPU [CcT](http://arxiv.org/abs/1504.04343)|done|
-|               |     | 3.3 Support cuDNN v4 | done|
-|               | Installation| 3.4 Remove dependency on ZeroMQ, CZMQ, Zookeeper for single node training|done|
-|               | Updater| 3.5 Add new SGD updaters including Adam, AdamMax and AdaDelta|done|
-|               | Binding| 3.6 Enhance Python binding for training|done|
-|0.4 July 2016  | Rafiki | 4.1 Deep learning as a service| |
-|               |        | 4.2 Product search using Rafiki| |
-
+| 0.1 Sep 2015     | Neural Network | Feed forward neural network, including CNN, MLP | done|
+|         |          | RBM-like model, including RBM | done|
+|         |                | Recurrent neural network, including standard RNN | done|
+|         | Architecture   | One worker group on single node (with data partition)| done|
+|         |                | Multi worker groups on single node using [Hogwild](http://www.eecs.berkeley.edu/~brecht/papers/hogwildTR.pdf)|done|
+|         |                | Distributed Hogwild|done|
+|         |                | Multi groups across nodes, like [Downpour](http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks)|done|
+|         |                | All-Reduce training architecture like [DeepImage](http://arxiv.org/abs/1501.02876)|done|
+|         |                | Load-balance among servers | done|
+|         | Failure recovery| Checkpoint and restore |done|
+|         | Tools| Installation with GNU auto tools| done|
+|0.2 Jan 2016 | Neural Network | Feed forward neural network, including AlexNet, cuDNN layers, etc.| done |
+|         |                | Recurrent neural network, including GRULayer and BPTT|done |
+|         | | Model partition and hybrid partition|done|
+|         | Tools | Integration with Mesos for resource management|done|
+|         |               | Prepare Docker images for deployment|done|
+|         |               | Visualization of neural net and debug information |done|
+|         | Binding        | Python binding for major components |done|
+|         | GPU            | Single node with multiple GPUs |done|
+|0.3 April 2016 | GPU | Multiple nodes, each with multiple GPUs|done|
+|               |     | Heterogeneous training using both GPU and CPU [CcT](http://arxiv.org/abs/1504.04343)|done|
+|               |     | Support cuDNN v4 | done|
+|               | Installation| Remove dependency on ZeroMQ, CZMQ, Zookeeper for single node training|done|
+|               | Updater| Add new SGD updaters including Adam, AdamMax and AdaDelta|done|
+|               | Binding| Enhance Python binding for training|done|
+|0.4 June 2016  | Rafiki | Deep learning as a service| |
+|               |        | Product search using Rafiki| |
+|1.0 July 2016  | Programming abstraction|Tensor with linear algebra, neural net and random operations| |
+|               |                        |Updater for distributed parameter updating||
+|               | Optimization       | Execution and memory optimization||
+|               | Hardware           | Use Cuda and Cudnn for Nvidia GPU||
+|               |                    | Use OpenCL for AMD GPU or other devices||
+|               | Cross-platform | To extend from Linux to MacOS and Windows||
+|               | Examples | Speech recognition example||
+|               | |Large image models, e.g., [GoogLeNet](http://arxiv.org/abs/1409.4842), [VGG](https://arxiv.org/pdf/1409.1556.pdf) and [Residual Net](http://arxiv.org/abs/1512.03385)||


