Subject: svn commit: r985457 [4/35] - in /websites/staging/singa/trunk/content: ./ community/ develop/ docs/ docs/jp/ docs/kr/ docs/zh/ releases/ v0.1.0/ v0.2.0/ v0.2.0/jp/ v0.2.0/kr/ v0.2.0/zh/
Date: Tue, 12 Apr 2016 06:24:54 -0000
From: buildbot@apache.org
To: commits@singa.incubator.apache.org
Message-Id: <20160412062456.88B4F3A0591@svn01-us-west.apache.org>

Modified: websites/staging/singa/trunk/content/docs/layer.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/layer.html (original)
+++ websites/staging/singa/trunk/content/docs/layer.html Tue Apr 12 06:24:50 2016
(navigation menu updated: v0.3.0 added as the current release link; v0.2.0 and v0.1.0 retained)

@@ -374,7 +384,8 @@ layer{
    store_conf {
       backend: # "kvfile" or "textfile"
       path: # path to the data store
-  batchsize:
+  batchsize: 32
+  prefetching: true  # default value is false
       ...
     }
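For orientation, a data layer using these store_conf fields might be configured as in the sketch below (the kRecordInput type value and the path are assumptions based on SINGA's CIFAR-10 example, not part of this page):

layer {
  name: "data"                               # illustrative layer name
  type: kRecordInput                         # assumed input-layer type
  store_conf {
    backend: "kvfile"                        # or "textfile"
    path: "examples/cifar10/train_data.bin"  # hypothetical path to the data store
    batchsize: 32
    prefetching: true                        # overlap I/O with computation
  }
}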
     
    @@ -551,7 +562,16 @@ rgbimage_conf {

    Neuron Layers

    -

    Neuron layers conduct feature transformations.

    +

    Neuron layers conduct feature transformations.

    +
    +

    ActivationLayer

    + +
    +
    type: kActivation
    +activation_conf {
    +  type: {RELU, SIGMOID, TANH, STANH}
    +}
    +
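As an illustration, a complete layer block using this new configuration might look like the following sketch (the layer and source-layer names are hypothetical):

layer {
  name: "relu1"            # hypothetical name
  type: kActivation
  srclayers: "conv1"       # hypothetical source layer
  activation_conf {
    type: RELU             # one of RELU, SIGMOID, TANH, STANH
  }
}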
    ConvolutionLayer

ConvolutionLayer conducts the convolution transformation.
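A typical configuration, in the same style as the other layers on this page, might look like this sketch (the type value and convolution_conf field names follow SINGA's bundled CIFAR-10 example; the names are illustrative):

layer {
  name: "conv1"                # illustrative
  type: kConvolution
  srclayers: "data"
  convolution_conf {
    num_filters: 32            # number of output feature maps
    kernel: 5                  # 5x5 filter
    stride: 1
    pad: 2
  }
  param { name: "weight" }     # parameter names are illustrative
  param { name: "bias" }
}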

@@ -632,7 +652,24 @@ lrn_conf {
   beta: float // exponential number
 }
    -

    local_size specifies the quantity of the adjoining channels which will be summed up. For WITHIN_CHANNEL, it means the side length of the space region which will be summed up.

    +

local_size specifies the number of adjoining channels to sum over. For WITHIN_CHANNEL, it means the side length of the spatial region to sum over.

    +
    +
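To make local_size concrete, a normalization over 5 adjoining channels could be configured as in this sketch (the kLRN type value and the layer names are assumptions based on SINGA's CIFAR-10 example):

layer {
  name: "norm1"            # hypothetical
  type: kLRN
  srclayers: "pool1"       # hypothetical source layer
  lrn_conf {
    local_size: 5          # sum over 5 adjoining channels
    alpha: 0.0001          # scaling factor
    beta: 0.75             # exponential number
  }
}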

    CuDNN layers

    +

SINGA supports CuDNN v3 and v4, which provide the following layers:

• CudnnActivationLayer (activation functions are SIGMOID, TANH, RELU)
• CudnnConvLayer
• CudnnLRNLayer
• CudnnPoolLayer
• CudnnSoftmaxLayer

These layers have the same configuration as the corresponding CPU layers. CuDNN v4 additionally provides a batch normalization layer, which is named CudnnBMLayer.

    Loss Layers

Loss layers measure the objective training loss.
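For instance, a softmax loss layer might be configured as in the sketch below (the kSoftmaxLoss type and topk field follow the pattern of SINGA's bundled examples; the layer names are illustrative):

layer {
  name: "loss"             # illustrative
  type: kSoftmaxLoss
  softmaxloss_conf {
    topk: 1                # a prediction counts as correct if the label is in the top 1
  }
  srclayers: "ip1"         # prediction layer (illustrative)
  srclayers: "label"       # layer providing the ground-truth labels
}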

    @@ -696,10 +733,10 @@ split_conf {
    LayerProto layer_conf_;
    -Blob<float> data_, grad_;
    +vector<Blob<float>> datavec_, gradvec_;
     vector<AuxType> aux_data_;
     
    -

The base layer class keeps the user configuration in layer_conf_. Almost all layers have b (the mini-batch size) feature vectors, which are stored in the data_ Blob (a Blob is a chunk of memory space, proposed in Caffe). There are layers without feature vectors; instead, they share the data from source layers. The grad_ Blob stores the gradients of the objective loss w.r.t. the data_ Blob. It is needed by the BP algorithm, hence we put it in the base class. For the CD algorithm, the grad_ field is not used; instead, the layers for the RBM model may have one Blob for the positive-phase feature and one Blob for the negative-phase feature. For a recurrent layer in an RNN, one row of the feature blob corresponds to the feature of one internal layer. The aux_data_ stores the auxiliary data, e.g., the image label (set AuxType to int). If images have a varying number of labels, AuxType can be defined as vector<int>. Currently, we hard-code AuxType to int. It will be added as a template argument of the Layer class later.

    +

The base layer class keeps the user configuration in layer_conf_. datavec_ stores the features associated with this layer. There are layers without feature vectors; instead, they share the data from source layers. The gradvec_ stores the gradients of the objective loss w.r.t. the datavec_. The aux_data_ stores the auxiliary data, e.g., the image label (set AuxType to int). If images have a varying number of labels, AuxType can be defined as vector<int>. Currently, we hard-code AuxType to int. It will be added as a template argument of the Layer class later.

    If a layer has parameters, these parameters are declared using type Param. Since some layers do not have parameters, we do not declare any Param in the base layer class.

    Functions

@@ -709,16 +746,7 @@ vector<AuxType> aux_data_;
 virtual void ComputeFeature(int flag, const vector<Layer*>& srclayers) = 0;
 virtual void ComputeGradient(int flag, const vector<Layer*>& srclayers) = 0;
    -

    The Setup function reads user configuration, i.e. conf, and information from source layers, e.g., mini-batch size, to set the shape of the data_ (and grad_) field as well as some other layer specific fields. Memory will not be allocated until computation over the data structure happens.

    +

    The Setup function reads user configuration, i.e. conf, and information from source layers, e.g., mini-batch size, to set the shape of the data_ (and grad_) field as well as some other layer specific fields. Memory will not be allocated until computation over the data structure happens.

The ComputeFeature function evaluates the feature blob by transforming (e.g., convolution and pooling) features from the source layers. ComputeGradient computes the gradients of parameters associated with this layer. These two functions are invoked by the TrainOneBatch function during training; hence, they should be consistent with the TrainOneBatch function. Particularly, feed-forward and RNN models are trained using the BP algorithm, which requires each layer's ComputeFeature function to compute data_ based on source layers, and requires each layer's ComputeGradient to compute gradients of parameters and source layers' grad_. Energy models, e.g., RBM, are trained by the CD algorithm, which requires each layer's ComputeFeature function to compute the feature vectors for the positive phase or negative phase depending on the phase argument, and requires the ComputeGradient function to only compute parameter gradients. Some layers, e.g., loss or output layers, can put the loss or prediction result into the metric argument, which will be averaged and displayed periodically.

    Implementing a new Layer subclass

Modified: websites/staging/singa/trunk/content/docs/lmdb.html
Modified: websites/staging/singa/trunk/content/docs/mesos.html
Modified: websites/staging/singa/trunk/content/docs/mlp.html
Modified: websites/staging/singa/trunk/content/docs/model-config.html
Modified: websites/staging/singa/trunk/content/docs/neural-net.html
Modified: websites/staging/singa/trunk/content/docs/neuralnet-partition.html
Modified: websites/staging/singa/trunk/content/docs/overview.html
Modified: websites/staging/singa/trunk/content/docs/param.html
Modified: websites/staging/singa/trunk/content/docs/programming-guide.html
Modified: websites/staging/singa/trunk/content/docs/python.html
==============================================================================
(each page above receives the same navigation-menu update as layer.html: v0.3.0 added as the current release link; v0.2.0 and v0.1.0 retained)

Modified: websites/staging/singa/trunk/content/docs/quick-start.html
==============================================================================
--- websites/staging/singa/trunk/content/docs/quick-start.html (original)
+++ websites/staging/singa/trunk/content/docs/quick-start.html Tue Apr 12 06:24:50 2016
@@ -313,25 +323,6 @@

    SINGA setup

    Please refer to the installation page for guidance on installing SINGA.

    -

    Starting Zookeeper

    -

    SINGA uses zookeeper to coordinate the training. Please make sure the zookeeper service is started before running SINGA.

    -

    If you installed the zookeeper using our thirdparty script, you can simply start it by:

    - -
    -
    #goto top level folder
    -cd  SINGA_ROOT
    -./bin/zk-service.sh start
    -
    -

    (./bin/zk-service.sh stop stops the zookeeper).

    -

Otherwise, if you launched zookeeper yourself and did not use the default port, please edit conf/singa.conf:

    - -
    -
    zookeeper_host: "localhost:YOUR_PORT"
    -
    -
    -

    Running in standalone mode

    -

Running SINGA in standalone mode means running it without cluster managers such as Mesos or YARN.

    -

    Training on a single node

For single-node training, one process is launched to run SINGA on the local host. We train a CNN model over the CIFAR-10 dataset as an example. The hyper-parameters are set following cuda-convnet. More details are available in the CNN example.

@@ -344,8 +335,8 @@
 cp Makefile.example Makefile
 make download
 make create
    -

A training dataset and a test dataset are created under the cifar10-train-shard and cifar10-test-shard folders, respectively. An image_mean.bin file is also generated, which contains the feature mean of all images.

    -

    Since all code used for training this CNN model is provided by SINGA as built-in implementation, there is no need to write any code. Instead, users just execute the running script (../../bin/singa-run.sh) by providing the job configuration file (job.conf). To code in SINGA, please refer to the programming guide.

    +

A training dataset and a test dataset are created. An image_mean.bin file is also generated, which contains the feature mean of all images.

    +

Since all code used for training this CNN model is provided by SINGA as built-in implementations, there is no need to write any code. Instead, users just execute the running script by providing the job configuration file (job.conf). To code in SINGA, please refer to the programming guide.

    Training without parallelism

    By default, the cluster topology has a single worker and a single server. In other words, neither the training data nor the neural net is partitioned.
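Spelled out, this default topology corresponds to a cluster block like the following sketch (field names follow SINGA's ClusterProto; the values shown are the assumed defaults):

cluster {
  nworker_groups: 1      # one worker group
  nserver_groups: 1      # one server group
  nworkers_per_group: 1  # a single worker
  nservers_per_group: 1  # a single server
}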

    @@ -354,23 +345,8 @@ make create
    # goto top level folder
     cd ../../
    -./bin/singa-run.sh -conf examples/cifar10/job.conf
    -
    -

    You can list the current running jobs by,

    - -
    -
    ./bin/singa-console.sh list
    -
    -JOB ID    |NUM PROCS
    -----------|-----------
    -24        |1
    -
    -

    Jobs can be killed by,

    - -
    -
    ./bin/singa-console.sh kill JOB_ID
    -
    -

    Logs and job information are available in /tmp/singa-log folder, which can be changed to other folders by setting log-dir in conf/singa.conf.

    +./singa -conf examples/cifar10/job.conf +

    Asynchronous parallel training

@@ -391,7 +367,7 @@ cluster {
 neuralnet {
   layer {
     ...
-    sharddata_conf {
+    store_conf {
       random_skip: 5000
     }
   }
@@ -401,7 +377,7 @@
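For orientation, asynchronous training in SINGA runs multiple worker groups; a minimal cluster sketch of such a setup (ClusterProto field names assumed, not taken from this diff):

cluster {
  nworker_groups: 2      # multiple groups update the model asynchronously
  nworkers_per_group: 1
}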

    The running command is:

    -
    ./bin/singa-run.sh -conf examples/cifar10/job.conf
    +
    ./singa -conf examples/cifar10/job.conf
     

    Synchronous parallel training

    @@ -418,10 +394,26 @@ cluster {

In SINGA, synchronous training is enabled by launching multiple workers within one worker group. For instance, we can change the original job.conf to have two workers in one worker group, as sketched below. The workers will run synchronously as they are from the same worker group. This framework is the in-memory sandblaster. The model is partitioned among the two workers: specifically, each layer is sliced over the two workers. The sliced layer is the same as the original layer except that it only has B/g feature instances, where B is the number of instances in a mini-batch and g is the number of workers in a group. It is also possible to partition the layer (or neural net) using other schemes. All other settings are the same as running without partitioning.
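That two-worker configuration might look like the following minimal sketch (ClusterProto field names assumed; not part of this diff):

cluster {
  nworker_groups: 1      # one group, so the workers run synchronously
  nworkers_per_group: 2  # two workers share (partition) the model
}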

    -
    ./bin/singa-run.sh -conf examples/cifar10/job.conf
    +
    ./singa -conf examples/cifar10/job.conf
     

    Training in a cluster

    +
    +

    Starting Zookeeper

    +

    SINGA uses zookeeper to coordinate the training, and uses ZeroMQ for transferring messages. After installing zookeeper and ZeroMQ, you need to configure SINGA with --enable-dist before compiling. Please make sure the zookeeper service is started before running SINGA.

    +

    If you installed the zookeeper using our thirdparty script, you can simply start it by:

    + +
    +
    #goto top level folder
    +cd  SINGA_ROOT
    +./bin/zk-service.sh start
    +
    +

    (./bin/zk-service.sh stop stops the zookeeper).

    +

Otherwise, if you launched zookeeper yourself and did not use the default port, please edit conf/singa.conf:

    + +
    +
    zookeeper_host: "localhost:YOUR_PORT"
    +

    We can extend the above two training frameworks to a cluster by updating the cluster configuration with:

    @@ -430,8 +422,8 @@ cluster {

    Every process would then create only one worker thread. Consequently, the workers would be created in different processes (i.e., nodes). The hostfile must be provided under SINGA_ROOT/conf/ specifying the nodes in the cluster, e.g.,

    -
    logbase-a01
    -logbase-a02
    +
    192.168.0.1
    +192.168.0.2
     

    And the zookeeper location must be configured correctly, e.g.,

    @@ -439,14 +431,29 @@ logbase-a02
    #conf/singa.conf
     zookeeper_host: "logbase-a01"
     
    -

    The running command is the same as for single node training:

    +

The running command is:

    ./bin/singa-run.sh -conf examples/cifar10/job.conf
    -
    +
    +

    You can list the current running jobs by,

    + +
    +
    ./bin/singa-console.sh list
    +
    +JOB ID    |NUM PROCS
    +----------|-----------
    +24        |2
    +
    +

    Jobs can be killed by,

    + +
    +
    ./bin/singa-console.sh kill JOB_ID
    +
    +

Logs and job information are available in the /tmp/singa-log folder, which can be changed to another folder by setting log-dir in conf/singa.conf.

    -

    Running with Mesos

    -

    working

    +

    Training with GPUs

    +

Please refer to the GPU page (gpu.html) for details on training using GPUs.

    Where to go next

    The programming guide pages will describe how to submit a training job in SINGA.

Modified: websites/staging/singa/trunk/content/docs/rbm.html
Modified: websites/staging/singa/trunk/content/docs/rnn.html
Modified: websites/staging/singa/trunk/content/docs/test.html
Modified: websites/staging/singa/trunk/content/docs/train-one-batch.html
Modified: websites/staging/singa/trunk/content/docs/updater.html
Modified: websites/staging/singa/trunk/content/docs/zh/checkpoint.html
Modified: websites/staging/singa/trunk/content/docs/zh/cnn.html
Modified: websites/staging/singa/trunk/content/docs/zh/data.html
Modified: websites/staging/singa/trunk/content/docs/zh/distributed-training.html
Modified: websites/staging/singa/trunk/content/docs/zh/index.html
Modified: websites/staging/singa/trunk/content/docs/zh/installation_source.html
Modified: websites/staging/singa/trunk/content/docs/zh/mlp.html
Modified: websites/staging/singa/trunk/content/docs/zh/neural-net.html
Modified: websites/staging/singa/trunk/content/docs/zh/overview.html
Modified: websites/staging/singa/trunk/content/docs/zh/programming-guide.html
==============================================================================
(each page above receives the same navigation-menu update: v0.3.0 added as the current release link; v0.2.0 and v0.1.0 retained)