singa-commits mailing list archives

From wang...@apache.org
Subject svn commit: r1692292 - in /incubator/singa/site/trunk/content: markdown/docs/installation.md markdown/docs/program-model.md markdown/docs/user-guide.md markdown/introduction.md markdown/quick-start.md site.xml
Date Wed, 22 Jul 2015 15:42:43 GMT
Author: wangwei
Date: Wed Jul 22 15:42:43 2015
New Revision: 1692292

URL: http://svn.apache.org/r1692292
Log:
Update quick-start with the latest code, i.e., using workspace as a command-line argument

Added:
    incubator/singa/site/trunk/content/markdown/docs/user-guide.md
Removed:
    incubator/singa/site/trunk/content/markdown/docs/program-model.md
Modified:
    incubator/singa/site/trunk/content/markdown/docs/installation.md
    incubator/singa/site/trunk/content/markdown/introduction.md
    incubator/singa/site/trunk/content/markdown/quick-start.md
    incubator/singa/site/trunk/content/site.xml

Modified: incubator/singa/site/trunk/content/markdown/docs/installation.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/installation.md?rev=1692292&r1=1692291&r2=1692292&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/installation.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/installation.md Wed Jul 22 15:42:43 2015
@@ -6,24 +6,32 @@ ___
 
 SINGA is developed and tested on Linux platforms with the following external libraries.
 
+The following dependencies are required:
+
   * gflags version 2.1.1, use the default setting for namespace (i.e., gflags).
 
   * glog version 0.3.3.
 
-  * gtest version 1.7.0.
-
   * google-protobuf version 2.6.0.
 
   * openblas version >= 0.2.10.
 
-  * opencv version 2.4.9.
-
   * zeromq version >= 3.2
 
   * czmq version >= 3
 
   * zookeeper version 3.4.6
 
+
+Optional dependencies include:
+
+  * gtest version 1.7.0.
+
+  * opencv version 2.4.9.
+
+  * lmdb version 0.9.10
+
+
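On a Debian/Ubuntu system, most of these libraries can typically be installed through the package manager. The command below is only a rough sketch: the package names are assumptions, and the packaged versions may differ from those listed above, in which case you need to build the libraries from source.

    # package names are assumptions; versions shipped by your distribution may differ
    $ sudo apt-get install libgflags-dev libgoogle-glog-dev libprotobuf-dev \
        protobuf-compiler libopenblas-dev libzmq3-dev libczmq-dev zookeeperd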
 Tips:
 For libraries like openblas, opencv, older versions may also work, because we do not use any newly added features.
 
@@ -95,6 +103,6 @@ You may install glog individually and tr
 
 #### While compiling Singa, I get error "SSE2 instruction set not enabled".
 You can try following command:
-    
+
     $ make CFLAGS='-msse2' CXXFLAGS='-msse2'
 

Added: incubator/singa/site/trunk/content/markdown/docs/user-guide.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/user-guide.md?rev=1692292&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/user-guide.md (added)
+++ incubator/singa/site/trunk/content/markdown/docs/user-guide.md Wed Jul 22 15:42:43 2015
@@ -0,0 +1,101 @@
+## Programming Model
+
+We describe the programming model of SINGA to give users instructions for
+implementing a new model and submitting a training job. The programming model
+is made almost independent of the underlying distributed environment, so
+users do not need to worry much about the communication and synchronization of
+nodes, which is discussed in detail in [architecture](architecture.html).
+
+### Deep learning training
+
+Deep learning is regarded as a feature learning technique, and a deep model
+usually consists of multiple layers. Each layer is associated with a feature
+transformation function. After going through all layers, the raw input feature
+(e.g., the pixels of an image) is converted into a high-level feature that is
+easier to use for tasks like classification.
+
+Training a deep learning model means finding the optimal parameters of the
+transformation functions, i.e., those that generate good features for specific tasks.
+The goodness of a set of parameters is measured by a loss function, e.g.,
+[Cross-Entropy Loss](https://en.wikipedia.org/wiki/Cross_entropy). Since the
+loss functions are usually non-linear and non-convex, it is difficult to get a
+closed-form solution. Normally, people use the stochastic gradient descent (SGD)
+algorithm, which randomly initializes the parameters and then iteratively updates
+them to reduce the loss.
+
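To make the last point concrete, the core of each SGD iteration is simply moving every parameter a small step against its gradient. The following minimal C++ sketch (not SINGA code) illustrates the update rule:

    #include <vector>

    // One SGD step: param <- param - learning_rate * gradient.
    void SgdStep(std::vector<float>* params, const std::vector<float>& grads,
                 float learning_rate) {
      for (size_t i = 0; i < params->size(); ++i)
        (*params)[i] -= learning_rate * grads[i];
    }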
+
+### Steps to submit a training job
+
+SINGA uses the SGD algorithm to train the parameters
+of deep learning models. In each SGD iteration, there is a
+[Worker](architecture.html) computing gradients of parameters from the
+NeuralNet and an [Updater]() updating parameter values based on the gradients. SINGA
+implements three algorithms for gradient calculation, namely back-propagation
+for feed-forward models, back-propagation through time for recurrent neural
+networks, and contrastive divergence for energy models like RBM and DBM.
+Several SGD updater variants are also provided, including
+[AdaDelta](http://arxiv.org/pdf/1212.5701v1.pdf),
+[AdaGrad](http://www.magicbroom.info/Papers/DuchiHaSi10.pdf),
+[RMSProp](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf),
+[Nesterov](http://scholar.google.com/citations?view_op=view_citation&hl=en&user=DJ8Ep8YAAAAJ&citation_for_view=DJ8Ep8YAAAAJ:hkOj_22Ku90C).
+
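To illustrate how such an updater variant differs from plain SGD, here is a sketch of the standard AdaGrad rule (not SINGA's implementation), which scales each parameter's step by the history of its squared gradients:

    #include <cmath>
    #include <vector>

    // AdaGrad step: accumulate squared gradients and scale the learning rate
    // per parameter; eps avoids division by zero.
    void AdaGradStep(std::vector<float>* params, std::vector<float>* sq_grad_sum,
                     const std::vector<float>& grads, float lr, float eps = 1e-8f) {
      for (size_t i = 0; i < params->size(); ++i) {
        (*sq_grad_sum)[i] += grads[i] * grads[i];
        (*params)[i] -= lr * grads[i] / (std::sqrt((*sq_grad_sum)[i]) + eps);
      }
    }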
+Consequently, what a user needs to do to submit a training job is
+
+  1. [Prepare the data](data.html) for training, validation and test.
+
+  2. [Implement the new Layers](layer.html) to support specific feature transformations
+  required in the new model.
+
+  3. Configure the training job including the [cluster setting](architecture.html)
+  and [model configuration](model-config.html)
+
+### Driver program
+
+Each training job has a driver program that
+
+  * registers the layers implemented by the user, and
+
+  * starts the [Trainer](https://github.com/apache/incubator-singa/blob/master/include/trainer/trainer.h)
+  by providing the job configuration.
+
+An example driver program looks like this:
+
+    #include "singa.h"
+    #include "user-layer.h"  // header for user defined layers
+
+    DEFINE_int32(job, -1, "Job ID");  // job ID generated by the SINGA script
+    DEFINE_string(workspace, "examples/mnist/", "workspace of the training job");
+    DEFINE_bool(resume, false, "resume from checkpoint");
+
+    int main(int argc, char** argv) {
+      google::InitGoogleLogging(argv[0]);
+      gflags::ParseCommandLineFlags(&argc, &argv, true);
+
+      // register all user defined layers in user-layer.h
+      Register(kFooLayer, FooLayer);
+      ...
+
+      JobProto jobConf;
+      // read job configuration from text conf file
+      ReadProtoFromTextFile(&jobConf, FLAGS_workspace + "/job.conf");
+      Trainer trainer;
+      trainer.Start(FLAGS_job, jobConf, FLAGS_resume);
+    }
+
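The `Register(kFooLayer, FooLayer)` call above assumes a user-defined layer declared in `user-layer.h`. The sketch below is purely hypothetical: the base-class name and the overridden method names are assumptions, and the real layer API is described in [layer](layer.html).

    // user-layer.h -- hypothetical sketch only; the base class and the
    // overridden method names are assumptions, see layer.html for the real API.
    #include "singa.h"

    const int kFooLayer = 101;  // assumed ID for the user-defined layer type

    class FooLayer : public Layer {
     public:
      void Setup(const LayerProto& proto) {
        // parse layer-specific configuration and allocate feature blobs
      }
      void ComputeFeature(Phase phase) {
        // forward pass: transform features from the source layers
      }
      void ComputeGradient() {
        // backward pass: compute gradients w.r.t. parameters and source layers
      }
    };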
+Users can also configure the job in the driver program instead of writing the
+configuration file:
+
+
+      JobProto jobConf;
+      jobConf.set_job_name("my singa job");
+      ... // configure cluster and model
+      Trainer trainer;
+      trainer.Start(FLAGS_job, jobConf, FLAGS_resume);
+
+We will provide helper functions to make the configuration easier in the
+future, like [keras](https://github.com/fchollet/keras).
+
+Compile and link the driver program with the SINGA library to generate an
+executable, e.g., named `mysinga`. To submit the job, just pass the path of
+the executable and the workspace to the SINGA job submission script:
+
+    ./bin/singa-run.sh <path to mysinga> -workspace=<my job workspace>
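For the compile-and-link step mentioned above, a hypothetical command is sketched below; the actual include paths, library locations, required libraries, and flags depend on how SINGA was built on your machine:

    # hypothetical compile line; adjust paths and libraries to your installation
    $ g++ -std=c++11 my-driver.cc -I./include -L./.libs \
        -lsinga -lprotobuf -lglog -lgflags -lzmq -o mysinga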

Modified: incubator/singa/site/trunk/content/markdown/introduction.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/introduction.md?rev=1692292&r1=1692291&r2=1692292&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/introduction.md (original)
+++ incubator/singa/site/trunk/content/markdown/introduction.md Wed Jul 22 15:42:43 2015
@@ -11,68 +11,80 @@ existing systems, e.g. Hogwild used by C
 algorithm proposed by Google Brain and used at Microsoft Adam. SINGA provides users the chance to
 select the one that is most scalable for their model and data.
 
-To provide good usability, SINGA provides a simple programming model based on the layer structure
-that is common in deep learning models. Users override the base layer class to implement their own
-layer logics for feature transformation. A model is constructed by configuring each layer and their
-connections like Caffe. SINGA takes care of the data and model partitioning, and makes the underlying
-distributed communication (almost) transparent to users. A set of built-in layers and example models
+To provide good usability, SINGA provides a simple programming model based on the layer structure
+that is common in deep learning models. Users override the base layer class to implement their own
+layer logics for feature transformation. A model is constructed by configuring each layer and their
+connections like Caffe. SINGA takes care of the data and model partitioning, and makes the underlying
+distributed communication (almost) transparent to users. A set of built-in layers and example models
 are provided.
 
-SINGA is an [Apache incubator project](http://singa.incubator.apache.org/), released under Apache
-License 2. It is mainly developed by the DBSystem group of National University of Singapore.
-A diverse community is being constructed to welcome open-source contribution.
+SINGA is an [Apache incubator project](http://singa.incubator.apache.org/), released under Apache
+License 2. It is mainly developed by the DBSystem group of National University of Singapore.
+A diverse community is being constructed to welcome open-source contribution.
 
 ### Goals and Principles
 
 #### Goals
 * Scalability: A distributed platform that can scale to a large model and training dataset.
-* Usability: To provide abstraction and easy to use interface 
+* Usability: To provide abstraction and easy to use interface
 	so that users can implement their deep learning model/algorithm
 	without much awareness of the underlying distributed platform.
 * Extensibility: to make SINGA extensible for implementing different consistency models,
 	training algorithms and deep learning models.
 
 #### Principles
-Scalability is a challenge research problem for distributed deep learning training.
-SINGA provides a general architecture to exploit the scalability of different training algorithms.
+Scalability is a challenge research problem for distributed deep learning training.
+SINGA provides a general architecture to exploit the scalability of different training algorithms.
 Different parallelism approaches are also supported:
 
-* Model Partition---one model replica spreads across multiple machines to handle large models,
-	which have too many parameters to be kept in the memory of a single machine. Overhead:
+* Model Partition---one model replica spreads across multiple machines to handle large models,
+	which have too many parameters to be kept in the memory of a single machine. Overhead:
 	synchronize layer data across machines within one model replica Partition.
 * Data Partition---one model replica trains against a partition of the whole training dataset.
 	This approach can handle large training dataset.
 	Overhead: synchronize parameters among model replicas.
-* Hybrid Partition---exploit a cost model to find optimal model and data partitions 
+* Hybrid Partition---exploit a cost model to find optimal model and data partitions
 	which would reduce both overheads.
 
-To achieve the usability goal, we propose our programming model with the following 
+To achieve the usability goal, we propose our programming model with the following
 two major considerations:
 
-* Extract common data structures and operations for deep learning training algorithms, i.e.,
-	Back Propagation and Contrastive Divergence. Users implement their models by inheriting
+* Extract common data structures and operations for deep learning training algorithms, i.e.,
+	Back Propagation and Contrastive Divergence. Users implement their models by inheriting
 	these data structures and overriding the operations.
 * Make model partition and data partition automatically almost transparent to users.
 
 Considering extensibility, we make our core data structures (e.g., Layer) and operations
 general enough for programmers to override.
 
+### Where to go from here
+
+  * SINGA [User guide](user-guide.html) describes how to submit a
+  training job for your own deep learning model.
+
+  * SINGA [architecture](architecture.html) illustrates how different training frameworks are
+   supported using a general system architecture.
+
+  * [Training examples](examples.html) are provided to help users get started with SINGA.
+
+<!---
 ### System Architecture
 
 <img src="images/arch.png" alt="SINGA Logical Architecture" style="width: 500px"/>
 <p><strong>SINGA Logical Architecture</strong></p>
 
 The logical system architecture is shown in the above figure. There are two types of execution units,
-namely workers and servers. They are grouped according to the cluster configuration. Each worker
-group runs against a partition of the training dataset to compute the updates (e.g., the gradients)
-of parameters on one model replica, denoted as ParamShard. Worker groups run asynchronously, while
-workers within one group run synchronously with each worker computing (partial) updates for a subset
-of model parameters. Each server group also maintains one replica of the model parameters
-(i.e., ParamShard). It receives and handles requests (e.g., Get/Put/Update) from workers. Every server
+namely workers and servers. They are grouped according to the cluster configuration. Each worker
+group runs against a partition of the training dataset to compute the updates (e.g., the gradients)
+of parameters on one model replica, denoted as ParamShard. Worker groups run asynchronously, while
+workers within one group run synchronously with each worker computing (partial) updates for a subset
+of model parameters. Each server group also maintains one replica of the model parameters
+(i.e., ParamShard). It receives and handles requests (e.g., Get/Put/Update) from workers. Every server
 group synchronizes with neighboring server groups periodically or according to some specified rules.
 
-SINGA starts by parsing the cluster and model configurations. The first worker group initializes model
-parameters and sends Put requests to put them into the ParamShards of servers. Then every worker group
-runs the training algorithm by iterating over its training data in mini-batch. Each worker collects the
-fresh parameters from servers before computing the updates (e.g., gradients) for them. Once it finishes
+SINGA starts by parsing the cluster and model configurations. The first worker group initializes model
+parameters and sends Put requests to put them into the ParamShards of servers. Then every worker group
+runs the training algorithm by iterating over its training data in mini-batch. Each worker collects the
+fresh parameters from servers before computing the updates (e.g., gradients) for them. Once it finishes
 the computation, it issues update requests to the servers.
+-->

Modified: incubator/singa/site/trunk/content/markdown/quick-start.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/quick-start.md?rev=1692292&r1=1692291&r2=1692292&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/quick-start.md (original)
+++ incubator/singa/site/trunk/content/markdown/quick-start.md Wed Jul 22 15:42:43 2015
@@ -28,6 +28,7 @@ If there are dependent libraries missing
 [installation](docs/installation.html) page
 for guidance on installing them.
 
+<!---
 ### Run in standalone mode
 
 Running SINGA in standalone mode is on the contrary of running it on Mesos or
@@ -35,8 +36,9 @@ YARN. For standalone mode, users have to
 instance, they have to prepare a host file containing all running nodes.
 There is no management on CPU and memory resources, hence SINGA consumes as much
 CPU and memory resources as it needs.
+-->
 
-#### Training on a single node
+### Training on a single node
 
 For single node training, one process will be launched to run the SINGA code on
 the node where SINGA is started. We train the [CNN model](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) over the
@@ -45,7 +47,7 @@ The hyper-parameters are set following
 [cuda-convnet](https://code.google.com/p/cuda-convnet/).
 
 
-##### Data and model preparation
+#### Data and model preparation
 
 Download the dataset and create the data shards for training and testing.
 
@@ -62,15 +64,55 @@ model configuration file (*model.conf*)
 training data shard, test data shard and the mean file.-->
 
 Since all modules used for training this CNN model are provided by SINGA as
-built-in modules, there is no need to write any code. Instead, you just
-executable the running script (*../../bin/singa-run.sh*) by providing the model configuration file
-(*model.conf*).  If you want to implement your own modules, e.g., layer,
-then you have to register your modules in the driver code. After compiling the
-driver code, link it with the SINGA library to generate the executable. More
-details are described in [Code your own models]().
+built-in modules, there is no need to write any code. You just execute the
+script (*../../bin/singa-run.sh*) by providing the workspace which includes the
+job configuration file (*job.conf*).  If you want to implement your own
+modules, e.g., layer, then you have to register your modules in the [driver
+program](user-guide.html).
 
-##### Training without partitioning
+Start the training by running:
+
+    #goto top level folder
+    cd ../..
+    ./bin/singa-run.sh -workspace=examples/cifar10
+
+Note: we have changed the command-line arguments from `-cluster... -model=...`
+to `-workspace`. The workspace folder must contain a job.conf file which
+specifies the cluster settings (number of workers, number of servers, etc.) and the
+model configuration.
+
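For reference, a minimal job.conf for this single-node run could look roughly like the sketch below; the cluster field names follow the distributed example later on this page, and the model part (layers, updater, etc.) is omitted here:

    // job.conf (sketch; model configuration omitted)
    cluster {
      nworker_groups: 1
      nworkers_per_group: 1
      nserver_groups: 1
      nservers_per_group: 1
    }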
+Some training information will be shown on the screen like:
+
+    Starting zookeeper ... already running as process 21660.
+    Generate host list to /home/singa/wangwei/incubator-singa/examples/cifar10/job.hosts
+    Generate job id to /home/singa/wangwei/incubator-singa/examples/cifar10/job.id [job_id = 1]
+    Executing : ./singa -workspace=/home/singa/wangwei/incubator-singa/examples/cifar10 -job=1
+    proc #0 -> 10.10.10.14:49152 (pid = 26724)
+    Server (group = 0, id = 0) start
+    Worker (group = 0, id = 0) start
+    Generate pid list to /home/singa/wangwei/incubator-singa/examples/cifar10/job.pids
+    Test step-0, loss : 2.302607, accuracy : 0.090100
+    Train step-0, loss : 2.302614, accuracy : 0.062500
+    Train step-30, loss : 2.302403, accuracy : 0.141129
+    Train step-60, loss : 2.301960, accuracy : 0.155738
+    Train step-90, loss : 2.301470, accuracy : 0.159341
+    Train step-120, loss : 2.301048, accuracy : 0.160640
+    Train step-150, loss : 2.300414, accuracy : 0.161424
+    Train step-180, loss : 2.299842, accuracy : 0.160912
+    Train step-210, loss : 2.298510, accuracy : 0.163211
+    Train step-240, loss : 2.297058, accuracy : 0.163641
+    Train step-270, loss : 2.295308, accuracy : 0.163745
+    Test step-300, loss : 2.256824, accuracy : 0.193500
+    Train step-300, loss : 2.292490, accuracy : 0.165282
+
+
+You can find more logs under the `/tmp` folder. Once the training is finished,
+the learned model parameters will be dumped into the $workspace/checkpoint folder.
+The dumped file can be used to continue the training or as initialization
+for other similar models. [Checkpoint and Resume](checkpoint.html) discusses
+this in more detail.
 
+<!---
 To train the model without any partitioning, you just set the numbers
 in the cluster configuration file (*cluster.conf*) as :
 
@@ -84,28 +126,109 @@ One worker group trains against one part
 runs over a partition of the model. If *nworkers_per_group* is set to 1, then
 there is no model partitioning. More details on the cluster configuration are
 described in the [System Architecture](docs/architecture.html) page.
+-->
 
-Start the training by running:
+#### Distributed Training
 
-    #goto top level folder
-    cd ../..
-    ./bin/singa-run.sh -model=examples/cifar10/model.conf -cluster=examples/cifar10/cluster.conf
-
-##### Training with data Partitioning
-
-    nworker_groups: 2
-    nserver_groups: 1
-    nservers_per_group: 1
-    nworkers_per_group: 1
-    nworkers_per_procs: 2
-    workspace: "examples/cifar10/"
-
-The above cluster configuration file specifies two worker groups and one server group. 
-Worker groups run asynchronously but share the memory space for parameter values. In other words,
-it runs as the Hogwild algorithm. Since it is running in a single node, we can avoid partitioning the
-dataset explicitly. In specific, a random start offset is assigned to each worker group such that they
-would not work on the same mini-batch for every iteration. Consequently, they run like on different data
-partitions. The running command is the same:
+To train the model in a distributed environment, we first change the job
+configuration to use 2 worker groups (one worker per group) and 2 servers (from
+the same server group).
+
+    // job.conf
+    cluster {
+      nworker_groups: 2
+      nserver_groups: 1
+      nservers_per_group: 2
+    }
+
+This configuration runs SINGA using the Downpour training framework.
+Specifically, the 2 worker groups run asynchronously to compute the parameter
+gradients, and each server maintains a subset of the parameters, updating them
+based on the gradients passed by the workers.
+
+To run SINGA in a cluster,
+
+  1. A hostfile should be prepared under conf/ folder, e.g.,
+
+        // hostfile
+        logbase-a04
+        logbase-a05
+        logbase-a06
+        ...
+
+  2. The zookeeper location must be configured in conf/singa.conf, e.g.,
+
+        zookeeper_host: "logbase-a04:2181"
+
+  3. Make your ssh command password-free
+
+Currently, we assume the data files are on NFS, i.e., visible to all nodes.
+To start the training, run
+
+    ./bin/singa-run.sh -workspace=examples/cifar10
+
+The `singa-run.sh` script will calculate the number of nodes (i.e., processes) to
+launch and will generate a job.hosts file under the workspace by looping over the
+nodes in conf/hostfile. Hence, if there are fewer nodes in the hostfile than
+processes to launch, multiple processes will be started on one node.
+
+You can get some job information like job ID and running processes using the
+singa-console.sh script:
+
+    ./bin/singa-console.sh list
+    JOB ID    |NUM PROCS
+    ----------|-----------
+    job-4     |2
+
+Sample training output is
+
+    Generate job id to /home/singa/wangwei/incubator-singa/examples/cifar10/job.id [job_id = 4]
+    Executing @ logbase-a04 : cd /home/singa/wangwei/incubator-singa; ./singa -workspace=/home/singa/wangwei/incubator-singa/examples/cifar10 -job=4
+    Executing @ logbase-a05 : cd /home/singa/wangwei/incubator-singa; ./singa -workspace=/home/singa/wangwei/incubator-singa/examples/cifar10 -job=4
+    proc #0 -> 10.10.10.15:49152 (pid = 3504)
+    proc #1 -> 10.10.10.14:49152 (pid = 27119)
+    Server (group = 0, id = 1) start
+    Worker (group = 1, id = 0) start
+    Server (group = 0, id = 0) start
+    Worker (group = 0, id = 0) start
+    Generate pid list to
+    /home/singa/wangwei/incubator-singa/examples/cifar10/job.pids
+    Test step-0, loss : 2.297355, accuracy : 0.101700
+    Train step-0, loss : 2.274724, accuracy : 0.062500
+    Train step-30, loss : 2.263850, accuracy : 0.131048
+    Train step-60, loss : 2.249972, accuracy : 0.133197
+    Train step-90, loss : 2.235008, accuracy : 0.151786
+    Train step-120, loss : 2.228674, accuracy : 0.154959
+    Train step-150, loss : 2.215979, accuracy : 0.165149
+    Train step-180, loss : 2.198111, accuracy : 0.180249
+    Train step-210, loss : 2.175717, accuracy : 0.188389
+    Train step-240, loss : 2.160980, accuracy : 0.197095
+    Train step-270, loss : 2.145763, accuracy : 0.202030
+    Test step-300, loss : 1.921962, accuracy : 0.299100
+    Train step-300, loss : 2.129271, accuracy : 0.208056
+
+
+We can see that the accuracy (resp. loss) of distributed training increases
+(resp. decreases) faster than that of single-node training.
+
+You can stop the training by running singa-stop.sh:
+
+    ./bin/singa-stop.sh
+    Kill singa @ logbase-a04 ...
+    Kill singa @ logbase-a05 ...
+    bash: line 1: 27119 Killed                  ./singa -workspace=/home/singa/wangwei/incubator-singa/examples/cifar10 -job=4
+    Kill singa @ logbase-a06 ...
+    bash: line 1:  3504 Killed                  ./singa -workspace=/home/singa/wangwei/incubator-singa/examples/cifar10 -job=4
+    Cleanning metadata in zookeeper ...
+
+
+<!---
+In other words,
+it runs as the Hogwild algorithm. Since it is running in a single node, we can avoid partitioning the
+dataset explicitly. In specific, a random start offset is assigned to each worker group such that they
+would not work on the same mini-batch for every iteration. Consequently, they run like on different data
+partitions.
+The running command is the same:
 
     ./bin/singa-run.sh -model=examples/cifar10/model.conf -cluster=examples/cifar10/cluster.conf
 
@@ -119,14 +242,14 @@ partitions. The running command is the s
     nworkers_per_procs: 2
     workspace: "examples/cifar10/"
 
-The above cluster configuration specifies one worker group with two workers. 
+The above cluster configuration specifies one worker group with two workers.
 The workers run synchronously, i.e., they are synchronized after one iteration.
 The model is partitioned among the two workers. In specific, each layer is
 sliced such that every worker is assigned one sliced layer. The sliced layer is
 the same as the original layer except that it only has B/g feature instances,
 where B is the size of instances in a mini-batch, g is the number of workers in
-a group. 
- 
+a group.
+
 All other settings are the same as running without partitioning
 
     ./bin/singa-run.sh -model=examples/cifar10/model.conf -cluster=examples/cifar10/cluster.conf
@@ -141,18 +264,19 @@ To run the distributed Hogwild framework
     nserver_groups: 2
 
 and start one process as,
-    
+
     ./bin/singa-run.sh -model=examples/cifar10/model.conf -cluster=examples/cifar10/cluster.conf
 
 and then start another process as,
- 
+
     ./singa -model=examples/cifar10/model.conf -cluster=examples/cifar10/cluster.conf
 
-Note that the two commands are different! The first one will start the zookeeper. Currently we assume
-that the example/cifar10 folder is in NFS.
+Note that the two commands are different! The first one will start the zookeeper. Currently we assume
+that the example/cifar10 folder is in NFS.
 
 ### Run with Mesos
 
 *in working*...
 
 ### Run with YARN
+-->

Modified: incubator/singa/site/trunk/content/site.xml
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/site.xml?rev=1692292&r1=1692291&r2=1692292&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/site.xml (original)
+++ incubator/singa/site/trunk/content/site.xml Wed Jul 22 15:42:43 2015
@@ -55,7 +55,7 @@
 
     <menu name="Documentaion">
       <item name="Installation" href="docs/installation.html"/>
-      <item name="Programming Model" href="docs/program-model.html">
+      <item name="User Guide" href="docs/user-guide.html">
         <item name ="Model Configuration" href="docs/model-config.html"/>
         <item name="Neural Network" href="docs/neuralnet.html"/>
         <item name="Layer" href="docs/layer.html"/>


