singa-commits mailing list archives

From wan...@apache.org
Subject svn commit: r1678931 - in /incubator/singa/site/trunk: ./ content/ content/markdown/ content/markdown/docs/
Date Tue, 12 May 2015 13:05:09 GMT
Author: wangsh
Date: Tue May 12 13:05:08 2015
New Revision: 1678931

URL: http://svn.apache.org/r1678931
Log:
add documentation pages

Added:
    incubator/singa/site/trunk/content/markdown/docs/code-structure.md
    incubator/singa/site/trunk/content/markdown/docs/neuralnet-partition.md
    incubator/singa/site/trunk/content/markdown/docs/programming-model.md
    incubator/singa/site/trunk/content/markdown/introduction.md   (with props)
Modified:
    incubator/singa/site/trunk/content/markdown/community.md
    incubator/singa/site/trunk/content/markdown/docs.md
    incubator/singa/site/trunk/content/markdown/docs/installation.md
    incubator/singa/site/trunk/content/markdown/index.md
    incubator/singa/site/trunk/content/markdown/quick-start.md
    incubator/singa/site/trunk/content/site.xml
    incubator/singa/site/trunk/pom.xml

Modified: incubator/singa/site/trunk/content/markdown/community.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/community.md?rev=1678931&r1=1678930&r2=1678931&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/community.md (original)
+++ incubator/singa/site/trunk/content/markdown/community.md Tue May 12 13:05:08 2015
@@ -1,3 +1,3 @@
-# Community
+## Community
 
 ___

Modified: incubator/singa/site/trunk/content/markdown/docs.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs.md?rev=1678931&r1=1678930&r2=1678931&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs.md Tue May 12 13:05:08 2015
@@ -5,3 +5,6 @@ ___
 * [Installation](docs/installation.html)
 * [System Architecture](docs/architecture.html)
 * [Communication](docs/communication.html)
+* [Code Structure](docs/code-structure.html)
+* [Neural Network Partition](docs/neuralnet-partition.html)
+* [Programming Model](docs/programming-model.html)

Added: incubator/singa/site/trunk/content/markdown/docs/code-structure.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/code-structure.md?rev=1678931&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/code-structure.md (added)
+++ incubator/singa/site/trunk/content/markdown/docs/code-structure.md Tue May 12 13:05:08 2015
@@ -0,0 +1,75 @@
+## Code Structure
+
+___
+
+### Worker Side
+
+#### Main Classes
+
+<img src="../images/code-structure/main.jpg" style="width:70%;" align="center"/>
+
+* **Worker**: starts the solver to conduct training, or resumes from previous training snapshots.
+* **Solver**: constructs the neural network and runs training algorithms over it. Validation and testing are also done by the solver during training.
+* **TableDelegate**: delegate for the parameter table physically stored in parameter servers.
+    It runs a thread that communicates with table servers to transfer parameters.
+* **Net**: the neural network, consisting of multiple layers constructed from the input configuration file.
+* **Layer**: the core abstraction. A layer reads data (neurons) from connected layers and computes
+    its own data according to its layer-specific ComputeFeature function. Data from the bottom
+    layer is forwarded layer by layer to the top. A sketch of how these classes interact follows the list.
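+
+To make the control flow concrete, here is a minimal sketch of the worker-side training loop. The class and method names only mirror the descriptions above; they are assumptions for illustration, not the actual SINGA interfaces.
+
+    // Illustrative sketch of the worker-side control flow; not SINGA's actual API.
+    #include <vector>
+
+    struct Layer {
+      virtual void ComputeFeature(bool training) = 0;  // fills this layer's data
+      virtual ~Layer() {}
+    };
+
+    struct Net {
+      std::vector<Layer*> layers;  // ordered bottom-up, built from the configuration file
+    };
+
+    struct Solver {
+      void Train(Net& net, int steps) {
+        for (int step = 0; step < steps; ++step) {
+          for (Layer* layer : net.layers)            // forward pass, bottom layer to top
+            layer->ComputeFeature(/*training=*/true);
+          // backward pass and parameter exchange via the TableDelegate would follow here
+        }
+      }
+    };
+
+    struct Worker {
+      void Start(Net& net) { Solver().Train(net, /*steps=*/1000); }  // or resume from a snapshot
+    };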
+
+#### Data types
+
+<img src="../images/code-structure/layer.jpg" style="width:90%;" align="center"/>
+
+* **ComputeFeature**: reads data (neurons) from incoming layers and computes the layer's
+    own data according to the layer type. This function can be overridden to implement
+    different types of layers.
+* **ComputeGradient**: reads gradients (and data) from incoming layers and computes the
+    gradients of parameters and data w.r.t. the learning objective (loss).
+
+We adapt the implementations of **PoolingLayer**, **Im2colLayer** and **LRNLayer** from [Caffe](http://caffe.berkeleyvision.org/).
+
+
+<img src="../images/code-structure/darray.jpg" style="width:55%;" align="center"/>
+
+* **DArray**: provides the abstraction of a distributed array over multiple nodes,
+    supporting array/matrix operations and element-wise operations. Users can use it as a local structure.
+* **LArray**: the local part of a DArray. Each LArray is treated as an
+    independent array and supports all array-related operations.
+* **MemSpace**: manages the memory used by DArray. Distributed memory is allocated
+    and managed by ARMCI. Multiple DArrays can share the same MemSpace; the memory
+    is released when no DArray uses it anymore.
+* **Partition**: maintains both the global shape and the local partition information.
+    It is used when two DArrays interact (a small sketch of this mapping follows the list).
+* **Shape**: basic class representing the scope of a DArray/LArray.
+* **Range**: basic class representing the scope of a Partition.
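+
+As a self-contained illustration of the global-to-local mapping that a Partition maintains (the numbers and the even-split policy are assumptions for the sketch, not SINGA's actual scheme):
+
+    // Sketch: split an 8-row global array over 3 nodes and print each node's local range.
+    #include <cstdio>
+
+    int main() {
+      const int global_rows = 8, nodes = 3;
+      int start = 0;
+      for (int node = 0; node < nodes; ++node) {
+        // as even a split as possible: 3, 3, 2 rows
+        int local = global_rows / nodes + (node < global_rows % nodes ? 1 : 0);
+        std::printf("node %d owns global rows [%d, %d)\n", node, start, start + local);
+        start += local;
+      }
+      return 0;
+    }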
+
+### Parameter Server
+
+#### Main classes
+
+<img src="../images/code-structure/uml.jpg" style="width:90%;" align="center"/>
+
+* **NetworkService**: provides access to the network (sending and receiving messages). It maintains a queue for received messages, implemented by NetworkQueue.
+* **RequestDispatcher**: picks up the next message (request) from the queue and invokes a method (callback) to process it.
+* **TableServer**: provides access to the data table (parameters). It registers callbacks for the different request types with the RequestDispatcher (see the sketch after this list).
+* **GlobalTable**: implements the table. Data is partitioned into multiple Shard objects per table. User-defined consistency models are supported by extending TableServerHandler for each table.
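+
+A minimal sketch of the callback registration described above; the request types, signatures and handler names are made up for illustration and do not match SINGA's actual interfaces:
+
+    #include <functional>
+    #include <map>
+
+    enum class RequestType { kGet, kPut, kUpdate };        // illustrative request types
+
+    struct RequestDispatcher {
+      // maps a request type to the callback that processes it
+      std::map<RequestType, std::function<void(int /*key*/)>> callbacks;
+      void Register(RequestType t, std::function<void(int)> cb) { callbacks[t] = std::move(cb); }
+      void Dispatch(RequestType t, int key) { callbacks.at(t)(key); }
+    };
+
+    struct TableServer {
+      void Setup(RequestDispatcher& d) {
+        d.Register(RequestType::kGet,    [this](int key) { HandleGet(key); });
+        d.Register(RequestType::kPut,    [this](int key) { HandlePut(key); });
+        d.Register(RequestType::kUpdate, [this](int key) { HandleUpdate(key); });
+      }
+      void HandleGet(int) {}     // look up the parameter shard and send a reply
+      void HandlePut(int) {}     // insert a parameter row
+      void HandleUpdate(int) {}  // apply a gradient according to the consistency model
+    };
+
+Setup wires the handlers once; the dispatcher then routes each incoming request to the matching callback.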
+
+#### Data types
+
+<img src="../images/code-structure/type.jpg" style="width:400px;" align="middle"/>
+
+Table-related messages are either of type **RequestBase**, which contains the different types
+of requests, or of type **TableData**, which contains a key-value tuple.
+
+#### Control flow and thread model
+
+![uml](../images/code-structure/threads.jpg)
+
+The figure above shows how a GET request sent from a worker is processed by the
+table server. The control flow for other types of requests is similar. On
+the server side, there are at least 3 threads running at any time: two run by
+NetworkService for sending and receiving messages, and at least one run by the
+RequestDispatcher for dispatching requests.
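+
+A generic sketch of that thread layout (plain C++ threads and a blocking queue; this only illustrates the description above and is not SINGA's implementation):
+
+    #include <condition_variable>
+    #include <mutex>
+    #include <queue>
+    #include <thread>
+
+    struct BlockingQueue {                 // plays the role of NetworkQueue
+      std::queue<int> q;                   // a request is just an int in this sketch
+      std::mutex m;
+      std::condition_variable cv;
+      void Push(int r) { { std::lock_guard<std::mutex> l(m); q.push(r); } cv.notify_one(); }
+      int Pop() {
+        std::unique_lock<std::mutex> l(m);
+        cv.wait(l, [this] { return !q.empty(); });
+        int r = q.front(); q.pop(); return r;
+      }
+    };
+
+    int main() {
+      BlockingQueue received;
+      std::thread recv([&] { for (int i = 0; i < 3; ++i) received.Push(i); });  // NetworkService: receive
+      std::thread send([] { /* NetworkService: send responses */ });
+      std::thread dispatch([&] {                                                // RequestDispatcher
+        for (int i = 0; i < 3; ++i) { int req = received.Pop(); (void)req; /* invoke callback */ }
+      });
+      recv.join(); send.join(); dispatch.join();
+      return 0;
+    }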
+
+
+

Modified: incubator/singa/site/trunk/content/markdown/docs/installation.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/installation.md?rev=1678931&r1=1678930&r2=1678931&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/installation.md (original)
+++ incubator/singa/site/trunk/content/markdown/docs/installation.md Tue May 12 13:05:08 2015
@@ -89,13 +89,13 @@ After the execution, czmq will be instal
 
 ### FAQ
 
-Q1:While compiling Singa and installing glog on max OS X, I get fatal error "'ext/slist' file not found"
-A1:You may install glog individually and try command :
+#### While compiling Singa and installing glog on Mac OS X, I get fatal error "'ext/slist' file not found".
+You may install glog individually and try the following command:
 
     $ make CFLAGS='-stdlib=libstdc++' CXXFLAGS='-stdlib=libstdc++'
 
-Q2:While compiling Singa, I get error "SSE2 instruction set not enabled"
-A2:You can try following command:
+#### While compiling Singa, I get error "SSE2 instruction set not enabled".
+You can try the following command:
     
     $ make CFLAGS='-msse2' CXXFLAGS='-msse2'
 

Added: incubator/singa/site/trunk/content/markdown/docs/neuralnet-partition.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/neuralnet-partition.md?rev=1678931&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/neuralnet-partition.md (added)
+++ incubator/singa/site/trunk/content/markdown/docs/neuralnet-partition.md Tue May 12 13:05:08 2015
@@ -0,0 +1,54 @@
+## Neural Network Partition
+
+___
+
+The purpose of partitioning a neural network is to distribute the partitions onto
+different working units (e.g., threads or nodes, called workers in this article)
+and parallelize the processing.
+Another reason for partitioning is to handle large neural networks that cannot be
+held on a single node. For instance, to train models against high-resolution images
+we need large neural networks (in terms of training parameters).
+
+Since *Layer* is the first-class citizen in SINGA, we do the partitioning against
+layers. Specifically, we support partitioning at two levels. First, users can configure
+the location (i.e., worker ID) of each layer; in this way, users assign one worker
+to each layer. Second, for one layer, we can partition its neurons or partition
+the instances (e.g., images). These are called layer partition and data partition
+respectively. We illustrate the two types of partition using a simple convolutional neural network.
+
+<img src="../images/conv-mnist.png" align="center" width="200px"/>
+
+The above figure shows a convolutional neural network without any partition. It
+has 8 layers in total (one rectangle represents one layer). The first layer is a
+DataLayer (data) which reads data from local disk files/databases (or HDFS). The second layer
+is a MnistLayer which parses the records from the MNIST data to get the pixels of a batch
+of 8 images (each image is of size 28x28). The LabelLayer (label) parses the records to get the label
+of each image in the batch. The ConvolutionalLayer (conv1) transforms the input image to the
+shape of 8x27x27. The ReLULayer (relu1) conducts element-wise transformations. The PoolingLayer (pool1)
+sub-samples the images. The fc1 layer is fully connected with the pool1 layer. It
+multiplies each image with a weight matrix to generate a 10-dimensional hidden feature which
+is then normalized by a SoftmaxLossLayer to get the prediction.
+
+<img src="../images/conv-mnist-datap.png" align="center" width="400px"/>
+
+The above figure shows the convolutional neural network after partitioning all layers,
+except the DataLayer and ParserLayers, into 3 partitions using data partition.
+The red layers process 4 images of the batch, while the black and blue layers process 2 images
+each. Some helper layers, i.e., SliceLayer, ConcateLayer, BridgeSrcLayer,
+BridgeDstLayer and SplitLayer, are added automatically by our partition algorithm.
+Layers of the same color reside in the same worker. Data is transferred
+across workers at the boundary layers (i.e., BridgeSrcLayer and BridgeDstLayer),
+e.g., between s-slice-mnist-conv1 and d-slice-mnist-conv1.
+
+<img src="../images/conv-mnist-layerp.png" align="center" width="400px"/>
+
+The above figure shows the convolutional neural network after partitioning all layers,
+except the DataLayer and ParserLayers, into 2 partitions using layer partition. We can
+see that each layer processes all 8 images from the batch, but different partitions process
+different parts of one image. For instance, the layer conv1-00 processes only 4 channels. The other
+4 channels are processed by conv1-01, which resides in another worker.
+
+
+Since the partitioning is done at the layer level, we can apply different partitions to
+different layers to get a hybrid partition for the whole neural network. Moreover,
+we can also specify layer locations to place different layers on different workers.

Added: incubator/singa/site/trunk/content/markdown/docs/programming-model.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/docs/programming-model.md?rev=1678931&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/docs/programming-model.md (added)
+++ incubator/singa/site/trunk/content/markdown/docs/programming-model.md Tue May 12 13:05:08 2015
@@ -0,0 +1,125 @@
+## Programming Model
+
+We describe the programming model of SINGA in this article.
+Base data structures are introduced first, and then we show examples for
+users with different levels of deep learning background.
+
+### Base Data Structures
+
+#### Layer
+
+Layer is the first-class citizen in SINGA. Users construct their deep learning
+models by creating layer objects and combining them. SINGA
+takes care of running BackPropagation (or Contrastive Divergence) algorithms
+to calculate the gradients for parameters and calling [Updaters](#updater) to
+update them.
+
+    class Layer{
+      /**
+       * Setup layer properties.
+       * Setup the shapes for data and parameters, also setup some properties
+       * based on the layer configuration and connected src layers.
+       * @param conf user defined layer configuration of type [LayerProto](#netproto)
+       * @param srclayers layers connecting to this layer
+       */
+      Setup(conf, srclayers);
+      /**
+       * Setup the layer properties.
+       * This function is called if the model is partitioned due to distributed
+       * training. Shape of the layer is already set by the partition algorithm,
+       * and is passed in to set other properties.
+       * @param conf user defined layer configuration of type [LayerProto](#netproto)
+       * @param shape shape set by partition algorithm (for distributed training).
+       * @param srclayers layers connecting to this layer
+       */
+      SetupAfterPartition(conf, shape, srclayers);
+      /**
+       * Compute features of this layer based on connected layers.
+       * BP and CD will call this to calculate gradients
+       * @param training boolean phase indicator for training or test
+       * @param srclayers layers connecting to this layer
+       */
+      ComputeFeature(training, srclayers);
+      /**
+       * Compute gradients for parameters and connected layers.
+       * BP and CD will call this to calculate gradients
+       * @param srclayers layers connecting to this layer.
+       */
+      ComputeGradient(srclayers)=0;
+    }
+
+The above pseudo code shows the base Layer class. Users override these
+methods to implement their own layer classes. For example, we have implemented
+popular layers like ConvolutionLayer and InnerProductLayer. We also provide a
+DataLayer, which is a base layer for loading (and prefetching) data from disk or HDFS. A base ParserLayer
+is created for parsing the raw data and converting it into records that are recognizable by SINGA.
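+
+For instance, a rectified-linear layer could be written roughly as below. This is only a sketch: the containers, the way the gradient arrives from the layer above, and the omitted base class are assumptions, not the actual SINGA signatures.
+
+    #include <algorithm>
+    #include <vector>
+
+    // Sketch of a user-defined ReLU-style layer overriding the two core methods.
+    class ReluLayer /* : public Layer */ {
+     public:
+      // forward: read data from the (single) source layer and compute this layer's data
+      void ComputeFeature(bool training, const std::vector<float>& src_data) {
+        data_.resize(src_data.size());
+        for (size_t i = 0; i < src_data.size(); ++i)
+          data_[i] = std::max(0.0f, src_data[i]);          // element-wise ReLU
+      }
+      // backward: compute the gradient w.r.t. the source data; in SINGA the gradient
+      // from the layer above would be read from the connected layers rather than passed in
+      void ComputeGradient(const std::vector<float>& src_data,
+                           const std::vector<float>& grad_from_above) {
+        grad_.resize(src_data.size());
+        for (size_t i = 0; i < src_data.size(); ++i)
+          grad_[i] = src_data[i] > 0.0f ? grad_from_above[i] : 0.0f;
+      }
+     private:
+      std::vector<float> data_, grad_;   // this layer's feature and input gradient
+    };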
+
+#### NetProto
+
+Since deep learning models consist of multiple layers, the model structure includes
+the properties of each layer and the connections between layers. SINGA uses
+Google Protocol Buffers for users to configure the model structure. The protocol
+buffer message for the model structure is defined as:
+
+    NetProto{
+      repeated LayerProto layer;
+    }
+
+    LayerProto{
+      string name; // user defined layer name for displaying
+      string type; // One layer class has a unique type.
+      repeated string srclayer_name; // connected layer names;
+      repeated ParamProto param; // parameter configurations
+      ...
+    }
+
+Users can create a plain text file and fill it with the configurations. SINGA
+parses it from the user-provided path.
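+
+As a rough sketch of what the C++ API generated from the message above would look like (the header name and the layer types here are illustrative assumptions), a two-layer structure could also be built programmatically:
+
+    #include "net.pb.h"   // hypothetical header generated from the NetProto definition
+
+    NetProto BuildTwoLayerNet() {
+      NetProto net;
+
+      LayerProto* data = net.add_layer();
+      data->set_name("data");
+      data->set_type("kData");
+
+      LayerProto* fc1 = net.add_layer();
+      fc1->set_name("fc1");
+      fc1->set_type("kInnerProduct");
+      fc1->add_srclayer_name("data");    // connect fc1 to the data layer
+
+      return net;
+    }
+
+The plain-text file mentioned above would presumably carry the same fields in protobuf's text format.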
+
+#### Param
+
+The Param class is shown below. Users do not need to extend the Param class for
+most cases. We make it a base class just for future extension. For example,
+if a new initialization trick is proposed in the future, we can override the `Init`
+method to implement it.
+
+    Param{
+      /**
+       * Set properties of the parameter.
+       * @param conf user defined parameter configuration of type ParamProto
+       * @param shape shape of the parameter
+       */
+      Setup(conf, shape);
+      /**
+       * Initialize the data of the parameter.
+       */
+      Init();
+      ...// methods to handle synchronization with parameter servers and other workers
+    }
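+
+As an illustration of such an extension (the class name, the data member and the fixed size are assumptions made for the sketch), a Gaussian initializer could override `Init` like this:
+
+    #include <random>
+    #include <vector>
+
+    // Sketch only: a Param-like class whose Init() fills the data with Gaussian noise.
+    class GaussianParam /* : public Param */ {
+     public:
+      void Init() {
+        std::mt19937 gen(seed_);
+        std::normal_distribution<float> dist(0.0f, 0.01f);   // mean 0, std 0.01
+        for (float& v : data_) v = dist(gen);
+      }
+     private:
+      std::vector<float> data_ = std::vector<float>(1024);   // set up from the shape in practice
+      unsigned seed_ = 42;
+    };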
+
+#### Updater
+
+There are many SGD extensions for updating parameters,
+like [AdaDelta](http://arxiv.org/pdf/1212.5701v1.pdf),
+[AdaGrad](http://www.magicbroom.info/Papers/DuchiHaSi10.pdf),
+[RMSProp](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf),
+[Nesterov](http://scholar.google.com/citations?view_op=view_citation&amp;hl=en&amp;user=DJ8Ep8YAAAAJ&amp;citation_for_view=DJ8Ep8YAAAAJ:hkOj_22Ku90C)
+and SGD with momentum. We provide a base Updater to deal with these algorithms.
+New parameter updating algorithms can be added by extending the base Updater.
+
+    Updater{
+      /**
+      * @param conf user configuration for the updater.
+      */
+      Init(conf);
+      /**
+      * Update parameter based on its gradient
+      * @param step training step
+      * @param param the Param object
+      */
+      Update(step, param);
+    }
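+
+As a sketch of such an extension (the signatures below are simplified assumptions; the real Update would operate on a Param object), SGD with momentum could look like:
+
+    #include <vector>
+
+    // Sketch of an Updater subclass implementing SGD with momentum:
+    //   v = momentum * v - lr * grad;   w = w + v
+    class MomentumUpdater /* : public Updater */ {
+     public:
+      void Init(float lr, float momentum) { lr_ = lr; momentum_ = momentum; }
+      // step could drive a learning-rate schedule; param/grad stand in for the Param object
+      void Update(int step, std::vector<float>& param, const std::vector<float>& grad) {
+        if (velocity_.size() != param.size()) velocity_.assign(param.size(), 0.0f);
+        for (size_t i = 0; i < param.size(); ++i) {
+          velocity_[i] = momentum_ * velocity_[i] - lr_ * grad[i];
+          param[i] += velocity_[i];
+        }
+      }
+     private:
+      float lr_ = 0.01f, momentum_ = 0.9f;
+      std::vector<float> velocity_;
+    };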
+
+### Examples
+
+The [MLP example](..)
+shows how to configure the model through Google Protocol Buffers.
+

Modified: incubator/singa/site/trunk/content/markdown/index.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/index.md?rev=1678931&r1=1678930&r2=1678931&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/index.md (original)
+++ incubator/singa/site/trunk/content/markdown/index.md Tue May 12 13:05:08 2015
@@ -13,7 +13,7 @@ guide to download, install and run SINGA
 ### Contribute
 
 * Please subscribe to our development mailing list dev@singa.incubator.apache.org.
-* If you find any issues using SIGNA, please report it to the
+* If you find any issues using SINGA, please report it to the
 [Issue Tracker](https://issues.apache.org/jira/browse/singa).
 
 More details on contributing to SINGA are described [here](community.html).

Added: incubator/singa/site/trunk/content/markdown/introduction.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/introduction.md?rev=1678931&view=auto
==============================================================================
--- incubator/singa/site/trunk/content/markdown/introduction.md (added)
+++ incubator/singa/site/trunk/content/markdown/introduction.md Tue May 12 13:05:08 2015
@@ -0,0 +1,94 @@
+## Introduction
+
+___
+
+SINGA is a distributed deep learning platform for training large-scale deep
+learning models. Our design is driven by two key observations. First, the
+structures and training algorithms of deep learning models can be expressed
+using simple abstractions, e.g., the layer. SINGA allows users
+to write their own training algorithms by exposing intuitive programming abstractions
+and hiding the complex details pertaining to the distributed execution of the training.
+Specifically, our programming model consists of data objects (layer and network)
+that define the model, and of computation functions over the data objects. Our
+second observation is that there are multiple approaches to partitioning the model
+and the training data onto multiple machines to achieve model parallelism, data
+parallelism or both. Each approach incurs different communication and synchronization
+overhead, which directly affects the system’s scalability. We analyze the fundamental
+trade-offs of existing parallelism approaches, and propose an optimization algorithm
+that generates the parallelism scheme with minimal overhead.
+
+### Goals and Principles
+
+#### Goals
+* Scalability: A distributed platform that can scale to a large model and training
+    dataset, e.g., 1 billion parameters and 10M images.
+* Usability: To provide abstractions and an easy-to-use interface so that users can
+    implement their deep learning model/algorithm without much awareness of the
+    underlying distributed platform.
+* Extensibility: We try to make SINGA extensible for implementing different consistency
+    models, training algorithms and deep learning models.
+
+#### Principles
+To achieve the scalability goal, we parallelize the computation across a cluster
+of nodes by the following partitioning approaches:
+
+* Model Partition---one model replica spreads across multiple machines to handle large
+    models, which have too many parameters to be kept in the memory of a single machine.
+    Overhead: synchronizing layer data across machines within one model replica partition.
+* Data Partition---one model replica trains against a partition of the whole training dataset.
+    This approach can handle large training datasets.
+    Overhead: synchronizing parameters among model replicas.
+* Hybrid Partition---exploit a cost model to find optimal model and data partitions that
+    reduce both overheads.
+
+To achieve the usability goal, we propose our programming model with the following
+two major considerations:
+
+* Extract common data structures and operations for deep learning training algorithms, i.e.,
+    Back Propagation and Contrastive Divergence. Users implement their models by
+    inheriting these data structures and overriding the operations.
+* Manage model partition and data partition automatically through a distributed array.
+    Users write code against the distributed array without much awareness of the array partition
+    (which part is stored on which machine).
+
+Considering extensibility, we make our core data structures (e.g., Layer) and operations general enough
+for programmers to override.
+
+### System Overview
+![SINGA software stack](images/software_stack.jpg)
+
+Three goals are considered in designing SINGA, namely ease of use, scalability and extensibility.
+We will introduce them together with the software stack as shown in the above figure.
+Algorithms for deep learning models are complex to code and hard to train. To make
+SINGA easy to use, we provide a simple concept, ‘Layer’, to construct complex deep models.
+Built-in Layer implementations include common layers, e.g., the convolution layer
+and the fully connected layer. Users can configure their models by combining these
+built-in layers through a web interface or configuration files. Once the model and
+training data are configured, we start SINGA to conduct the training using a
+standard training algorithm (Back-Propagation, BP, or Contrastive Divergence, CD)
+on a cluster of nodes and visualize the training performance to users (e.g.,
+through a web interface). Advanced users can also implement their own layers by
+overriding the base Layer class through Python, Matlab, etc. wrappers. DistributedArray
+is proposed for easy array operations that are heavily used for realizing layer
+logic. SINGA manages the distributed arrays (stored across multiple nodes)
+automatically and efficiently based on MPI. Training scalability is achieved by
+partitioning the training data and model onto multiple computing nodes and parallelizing
+the computation. A logically centralized parameter server maintains the model
+parameters in a ParameterTable. Computing nodes work according to the consistency
+policy and send information to the parameter server, which updates the parameters
+based on SGD (stochastic gradient descent) algorithms. Besides the Layer class,
+other components like the SGD algorithms and the consistency module are also extensible.
+<!---
+The above figure shows the basic components of SINGA. It starts training a deep
+learning model by parsing a model configuration, which specifies the layer and
+network structure at every worker node. After that, it initializes the table servers and starts
+workers to run their tasks. Each table server maintains a partition (i.e., a set
+of rows) of a distributed parameter table where model parameters are stored.
+Worker groups consisting of one or more worker nodes run in parallel to compute the
+gradients of parameters. In one iteration, every group fetches fresh parameters
+from the table servers, runs BP or CD algorithm to compute gradients against a
+mini-batch from the local data shard (a partition of the training dataset), and
+then sends gradients to the table servers. The data shard is created by loading
+training data from HDFS off-line. The master monitors the training progress and
+stops the workers and table servers once the model has converged to a given loss.
+-->

Propchange: incubator/singa/site/trunk/content/markdown/introduction.md
------------------------------------------------------------------------------
    svn:executable = *

Modified: incubator/singa/site/trunk/content/markdown/quick-start.md
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/markdown/quick-start.md?rev=1678931&r1=1678930&r2=1678931&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/markdown/quick-start.md (original)
+++ incubator/singa/site/trunk/content/markdown/quick-start.md Tue May 12 13:05:08 2015
@@ -18,7 +18,7 @@ Compile SINGA:
     make
 
 If there are dependent libraries missing, please refer to
-[installation]({{ BASE_PATH }}{% post_url /docs/2015-01-20-installation %}) page
+[installation](docs/installation.html) page
 for guidance on installing them. After successful compilation, the libsinga.so
 and singa executable will be built into the build folder.
 
@@ -76,7 +76,7 @@ One worker group trains against one part
 *nworker_groups* is set to 1, then there is no data partitioning. One worker
 runs over a partition of the model. If *nworkers_per_group* is set to 1, then
 there is no model partitioning. More details on the cluster configuration are
-described in the [System Architecture]() page.
+described in the [System Architecture](docs/architecture.html) page.
 
 Start the training by running:
 

Modified: incubator/singa/site/trunk/content/site.xml
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/content/site.xml?rev=1678931&r1=1678930&r2=1678931&view=diff
==============================================================================
--- incubator/singa/site/trunk/content/site.xml (original)
+++ incubator/singa/site/trunk/content/site.xml Tue May 12 13:05:08 2015
@@ -34,7 +34,7 @@
   <version position="none"/>
   
   <poweredBy>
-    <logo name="apache-incubator" alt="Apache Incubator" img="http://incubator.apache.org/images/egg-logo.png" href="http://incubator.apache.org" width="150" />
+    <logo name="apache-incubator" alt="Apache Incubator" img="http://incubator.apache.org/images/egg-logo.png" href="http://incubator.apache.org"/>
   </poweredBy>
 
   <skin>
@@ -50,7 +50,8 @@
     </breadcrumbs>
 
     <menu name="Apache SINGA">
-      <item name="Introduction" href="index.html"/>
+      <item name="Welcome" href="index.html"/>
+      <item name="Introduction" href="introduction.html"/>
       <item name="Quick Start" href="quick-start.html"/>
     </menu>
 
@@ -58,10 +59,13 @@
       <item name="Installation" href="docs/installation.html"/>
       <item name="System Architecture" href="docs/architecture.html"/>
       <item name="Communication" href="docs/communication.html"/>
+      <item name="Code Structure" href="docs/code-structure.html"/>
+      <item name="Neural Network Partition" href="docs/neuralnet-partition.html"/>
+      <item name="Programming Model" href="docs/programming-model.html"/>
     </menu>
 
     <menu name="External Links">
-      <item name="Apache Software Foundation" href="http://www.apache.org/" />
+      <item name="Apache Software Foundation" href="http://www.apache.org/"/>
     </menu>
 
   </body>

Modified: incubator/singa/site/trunk/pom.xml
URL: http://svn.apache.org/viewvc/incubator/singa/site/trunk/pom.xml?rev=1678931&r1=1678930&r2=1678931&view=diff
==============================================================================
--- incubator/singa/site/trunk/pom.xml (original)
+++ incubator/singa/site/trunk/pom.xml Tue May 12 13:05:08 2015
@@ -22,7 +22,7 @@
 
   <groupId>org.apache.singa</groupId>
   <artifactId>singa.site</artifactId>
-  <version>0.1</version>
+  <version>1.0</version>
   <packaging>pom</packaging>
   <name>Apache SINGA site</name>
   <url>http://singa.incubator.apache.org</url>
@@ -41,6 +41,10 @@
     </license>
   </licenses>
 
+  <properties>
+    <site.output>${project.build.directory}/site</site.output>
+  </properties>
+
   <build>
     <plugins>
       <plugin>


