From: wangwei@apache.org
To: commits@singa.incubator.apache.org
Date: Wed, 17 Jun 2015 05:38:55 -0000
Message-Id: <47b8dc7a27bc4b7fa3d77922d4bb01b3@git.apache.org>
Subject: [1/2] incubator-singa git commit: SINGA-14 Update layer API to be general for training different models

Repository: incubator-singa
Updated Branches:
  refs/heads/master 6019905e7 -> 9d07f3c1b


SINGA-14 Update layer API to be general for training different models

Replace the boolean variable "training" in the ComputeFeature function (and
other functions) with a variable "phase" of the Phase type. This update makes
it possible to train different kinds of models, e.g., RBMs as well as
feed-forward models like CNNs. Phase can be kTrain (for the training phase),
kTest (for the test phase), kPositive (for the positive phase of the
contrastive divergence algorithm), etc.
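
For readers skimming the diff below, the heart of the change is that every
layer override now receives a Phase value instead of a bool. The following
stand-alone sketch is not taken from the SINGA sources; the simplified Layer
and DropoutLikeLayer classes are illustrative assumptions that only mirror the
new signature, showing how an implementation can branch on the phase argument:

    #include <cstdio>
    #include <vector>

    // Simplified stand-ins for the types touched by this commit
    // (assumptions for illustration, not the real SINGA declarations).
    enum Phase { kTrain = 0, kValidation = 1, kTest = 2, kPositive = 3, kNegative = 4 };

    struct Layer {
      virtual ~Layer() {}
      // The old API took "bool training"; the new one takes the Phase enum.
      virtual void ComputeFeature(Phase phase, const std::vector<Layer*>& srclayers) = 0;
    };

    struct DropoutLikeLayer : public Layer {
      void ComputeFeature(Phase phase, const std::vector<Layer*>& srclayers) override {
        if (phase != kTrain) {
          // Outside training, just forward the input unchanged.
          std::printf("phase %d: copy input, no dropout\n", static_cast<int>(phase));
          return;
        }
        std::printf("phase %d: apply dropout mask\n", static_cast<int>(phase));
      }
    };

    int main() {
      DropoutLikeLayer layer;
      std::vector<Layer*> src;
      layer.ComputeFeature(kTrain, src);  // training behaviour
      layer.ComputeFeature(kTest, src);   // inference behaviour
      return 0;
    }

A single enum argument keeps the signature stable as new phases are added,
which is what lets kPositive/kNegative be introduced below without touching
the feed-forward layers again.
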
Project: http://git-wip-us.apache.org/repos/asf/incubator-singa/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-singa/commit/ceaa962e
Tree: http://git-wip-us.apache.org/repos/asf/incubator-singa/tree/ceaa962e
Diff: http://git-wip-us.apache.org/repos/asf/incubator-singa/diff/ceaa962e

Branch: refs/heads/master
Commit: ceaa962e5e2354a874e267ed1ee761cf784f245b
Parents: 6019905
Author: zhaojing
Authored: Wed Jun 17 11:29:50 2015 +0800
Committer: zhaojing
Committed: Wed Jun 17 11:39:43 2015 +0800

----------------------------------------------------------------------
 examples/cifar10/cluster.conf  |  4 ++--
 include/neuralnet/base_layer.h | 26 +++++++++++++-------------
 include/neuralnet/layer.h      | 26 +++++++++++++-------------
 src/neuralnet/base_layer.cc    | 26 +++++++++++++-------------
 src/neuralnet/layer.cc         | 32 ++++++++++++++++----------------
 src/proto/model.proto          |  2 ++
 src/trainer/worker.cc          |  2 +-
 7 files changed, 60 insertions(+), 58 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/ceaa962e/examples/cifar10/cluster.conf
----------------------------------------------------------------------
diff --git a/examples/cifar10/cluster.conf b/examples/cifar10/cluster.conf
index 6f6d963..88c3d4b 100644
--- a/examples/cifar10/cluster.conf
+++ b/examples/cifar10/cluster.conf
@@ -1,6 +1,6 @@
-nworker_groups: 2
+nworker_groups: 1
 nserver_groups: 1
 nservers_per_group: 1
-nworkers_per_group: 1
+nworkers_per_group: 2
 nworkers_per_procs: 2
 workspace: "examples/cifar10/"


http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/ceaa962e/include/neuralnet/base_layer.h
----------------------------------------------------------------------
diff --git a/include/neuralnet/base_layer.h b/include/neuralnet/base_layer.h
index 777c2cb..d7c4c3a 100644
--- a/include/neuralnet/base_layer.h
+++ b/include/neuralnet/base_layer.h
@@ -109,11 +109,11 @@ class Layer {
    * @param training true if in training phase
    * @param srclayers layers connecting to this layer
    */
-  virtual void ComputeFeature(bool training, const vector& srclayers)=0;
+  virtual void ComputeFeature(Phase phase, const vector& srclayers)=0;
   /**
    * \copybrief ComputeFeature(const vector& srclayers)
   */
-  virtual void ComputeFeature(bool training);
+  virtual void ComputeFeature(Phase phase);
   /**
   * Compute gradients for parameters and connecting layers.
  *
@@ -286,7 +286,7 @@ class BridgeSrcLayer: public Layer {
       const vector &shape,
       const vector& srclayers){}
 
-  virtual void ComputeFeature(bool training, const vector& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector& srclayers);
   virtual void ComputeGradient(const vector& srclayers);
   virtual const Blob& data(const Layer* from) const {
     return srclayers_[0]->data(this);
@@ -330,7 +330,7 @@ class BridgeDstLayer: public Layer {
       const vector &shape,
       const vector& srclayers){}
 
-  virtual void ComputeFeature(bool training, const vector& srclayers){
+  virtual void ComputeFeature(Phase phase, const vector& srclayers){
     ready_=false;
   }
   virtual void ComputeGradient(const vector& srclayers){}
@@ -362,7 +362,7 @@ class ConcateLayer: public Layer {
       const vector &shape,
       const vector& srclayers){}
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
 };
 
@@ -378,7 +378,7 @@ class DataLayer: public Layer{
   using Layer::ComputeFeature;
   using Layer::ComputeGradient;
 
-  virtual void ComputeFeature(bool training, const vector& srclayers)=0;
+  virtual void ComputeFeature(Phase phase, const vector& srclayers)=0;
   virtual void Setup(const LayerProto& proto, const vector& srclayers)=0;
   virtual bool is_datalayer() const {
     return true;
@@ -440,7 +440,7 @@ class PrefetchLayer : public Layer {
   virtual ~PrefetchLayer();
   virtual void Setup(const LayerProto& proto, const vector& srclayers);
 
-  virtual void ComputeFeature(bool training, const vector& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector& srclayers);
   virtual void ComputeGradient(const vector& srclayers){};
   virtual void SetupAfterPartition(const LayerProto& proto,
       const vector &shape,
@@ -460,7 +460,7 @@ class PrefetchLayer : public Layer {
     return kNone;
   }
 
-  void Prefetch(bool training);
+  void Prefetch(Phase phase);
  protected:
   vector> sublayers_;
   map> datablobs_;
@@ -476,7 +476,7 @@ class SliceLayer: public Layer {
   using Layer::ComputeFeature;
   using Layer::ComputeGradient;
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
   virtual void Setup(const LayerProto& proto, const vector& srclayers);
   virtual void SetupAfterPartition();
@@ -510,7 +510,7 @@ class SplitLayer: public Layer {
       const vector &shape,
       const vector& srclayers){}
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
 };
 
@@ -560,16 +560,16 @@ class ParserLayer: public Layer {
   /**
    * Parse records from DataLayer into blob.
    * This function is called by
-   * ComputeFeature(bool, const vector& srclayers) or Prefetch(bool).
+   * ComputeFeature(Phase, const vector& srclayers) or Prefetch(Phase).
    */
-  virtual void ParseRecords(bool training, const vector& records,
+  virtual void ParseRecords(Phase phase, const vector& records,
       Blob* blob)=0;
 
   virtual bool is_parserlayer() const {
     return true;
   }
-  virtual void ComputeFeature(bool training, const vector& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector& srclayers);
   /**
    * Dummy function. ParserLayer does not compute gradients.
   */


http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/ceaa962e/include/neuralnet/layer.h
----------------------------------------------------------------------
diff --git a/include/neuralnet/layer.h b/include/neuralnet/layer.h
index 4a4c307..bfbee8f 100644
--- a/include/neuralnet/layer.h
+++ b/include/neuralnet/layer.h
@@ -42,7 +42,7 @@ class ConvolutionLayer: public Layer {
       const vector &shape,
       const vector& srclayers);
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
   virtual vector> GetParams() {
     return vector>{weight_, bias_};
@@ -73,7 +73,7 @@ class DropoutLayer: public Layer {
       const vector &shape,
       const vector& srclayers);
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
 protected:
   // drop probability
@@ -108,7 +108,7 @@ class InnerProductLayer: public Layer {
     return kOneToAll;
   }
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
   //virtual void ToProto(LayerProto *layer_proto, bool copyData);
   virtual vector> GetParams() {
@@ -129,7 +129,7 @@ class LabelLayer: public ParserLayer {
   using ParserLayer::Setup;
 
   virtual void Setup(const LayerProto& proto, const vector& srclayers);
-  virtual void ParseRecords(bool training, const vector& records,
+  virtual void ParseRecords(Phase phase, const vector& records,
       Blob* blob);
 };
 
@@ -156,7 +156,7 @@ class LRNLayer: public Layer {
       const vector& srclayers);
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
 protected:
  //! shape of the bottom layer feature
@@ -173,7 +173,7 @@ class MnistImageLayer: public ParserLayer {
   using Layer::Setup;
 
   virtual void Setup(const LayerProto& proto, const vector& srclayers);
-  virtual void ParseRecords(bool training, const vector& records,
+  virtual void ParseRecords(Phase phase, const vector& records,
      Blob* blob);
 
 protected:
@@ -199,7 +199,7 @@ class PoolingLayer: public Layer {
   virtual void SetupAfterPartition(const LayerProto& proto,
       const vector &shape,
       const vector& srclayers);
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
 protected:
   int kernel_, pad_, stride_;
@@ -221,7 +221,7 @@ class ReLULayer: public Layer {
       const vector &shape,
       const vector& srclayers);
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
 };
 
@@ -257,7 +257,7 @@ class SoftmaxLossLayer: public LossLayer {
     return kOneToAll;
   }
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
 private:
   int batchsize_;
@@ -271,7 +271,7 @@ class RGBImageLayer: public ParserLayer {
   using Layer::Setup;
 
   virtual void Setup(const LayerProto& proto, const vector& srclayers);
-  virtual void ParseRecords(bool training, const vector& records,
+  virtual void ParseRecords(Phase phase, const vector& records,
       Blob* blob);
 
 private:
@@ -287,7 +287,7 @@ class ShardDataLayer: public DataLayer{
   using Layer::ComputeFeature;
   using Layer::ComputeGradient;
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers){};
   virtual void Setup(const LayerProto& proto, const vector& srclayers);
 private:
@@ -299,7 +299,7 @@ class LMDBDataLayer: public DataLayer{
   using Layer::ComputeFeature;
   using Layer::ComputeGradient;
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers){};
   virtual void Setup(const LayerProto& proto, const vector& srclayers);
   void ConvertDatumToSingleLableImageRecord(const Datum& datum,
@@ -333,7 +333,7 @@ class TanhLayer: public Layer {
       const vector& srclayers);
 
-  virtual void ComputeFeature(bool training, const vector>& srclayers);
+  virtual void ComputeFeature(Phase phase, const vector>& srclayers);
   virtual void ComputeGradient(const vector>& srclayers);
 private:
   float outer_scale_, inner_scale_;


http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/ceaa962e/src/neuralnet/base_layer.cc
----------------------------------------------------------------------
diff --git a/src/neuralnet/base_layer.cc b/src/neuralnet/base_layer.cc
index d3ff24b..63ac7a0 100644
--- a/src/neuralnet/base_layer.cc
+++ b/src/neuralnet/base_layer.cc
@@ -29,8 +29,8 @@ void Layer::SetupAfterPartition(){
   CHECK(std::equal(shape.begin(), shape.end(), data_.shape().begin()))<& srclayers){
 }
 void BridgeSrcLayer::ComputeGradient(const vector& srclayers){
@@ -94,38 +94,38 @@ void ConcateLayer::SetupAfterPartition(){
   // LOG(ERROR)<& srclayers){}
+void ConcateLayer::ComputeFeature(Phase phase, const vector& srclayers){}
 void ConcateLayer::ComputeGradient(const vector>& srclayers){}
 
 /************* Implementation for ParserLayer ***********/
-void ParserLayer::ComputeFeature(bool training, const vector& srclayers){
+void ParserLayer::ComputeFeature(Phase phase, const vector& srclayers){
   CHECK_EQ(srclayers.size(),1);
   auto datalayer=static_cast(srclayers.begin()->get());
-  ParseRecords(training, datalayer->records(), &data_);
+  ParseRecords(phase, datalayer->records(), &data_);
 }
 
 /************* Implementation for PrefetchLayer ***********/
-void PrefetchLayer::Prefetch(bool training){
+void PrefetchLayer::Prefetch(Phase phase){
   //clock_t s=clock();
   for(auto layer: sublayers_)
-    layer->ComputeFeature(training);
+    layer->ComputeFeature(phase);
   //LOG(ERROR)<<(clock()-s)*1.0/CLOCKS_PER_SEC;
 }
 
-void PrefetchLayer::ComputeFeature(bool training,
+void PrefetchLayer::ComputeFeature(Phase phase,
     const vector& srclayers){
   if(thread_.joinable())
     thread_.join();
   else{
-    Prefetch(training);
+    Prefetch(phase);
   }
   for(auto layer: sublayers_){
     if(layer->is_parserlayer())
       // TODO replace CopyFrom with Swap?
       datablobs_.at(layer->name()).CopyFrom(layer->data(this));
   }
-  thread_=std::thread(&PrefetchLayer::Prefetch, this, training);
+  thread_=std::thread(&PrefetchLayer::Prefetch, this, phase);
 }
 
 void PrefetchLayer::Setup(const LayerProto& proto,
@@ -237,7 +237,7 @@ Blob* SliceLayer::mutable_grad(const Layer* layer){
     return &grad_;
   return &gradvec_[SliceID(layer)];
 }
-void SliceLayer::ComputeFeature(bool training,
+void SliceLayer::ComputeFeature(Phase phase,
     const vector>& srclayers){
   CHECK_EQ(srclayers.size(),1);
   if(slice_dim_==0){
@@ -266,7 +266,7 @@ void SplitLayer::SetupAfterPartition(){
   Setup(layer_proto_, srclayers_);
   //LOG(ERROR)<>& srclayers){
+void SplitLayer::ComputeFeature(Phase phase, const vector>& srclayers){
 }
 void SplitLayer::ComputeGradient(const vector>& srclayers){


http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/ceaa962e/src/neuralnet/layer.cc
----------------------------------------------------------------------
diff --git a/src/neuralnet/layer.cc b/src/neuralnet/layer.cc
index a374511..de13ba7 100644
--- a/src/neuralnet/layer.cc
+++ b/src/neuralnet/layer.cc
@@ -60,7 +60,7 @@ void ConvolutionLayer::SetupAfterPartition(const LayerProto& proto,
   Setup(newproto, srclayers);
 }
 
-void ConvolutionLayer::ComputeFeature(bool training, const vector& srclayers){
+void ConvolutionLayer::ComputeFeature(Phase phase, const vector& srclayers){
   Tensor src(srclayers[0]->mutable_data(this)->mutable_cpu_data(),
       Shape4(batchsize_, channels_, height_, width_));
   Tensor data(data_.mutable_cpu_data(),
@@ -137,9 +137,9 @@ void DropoutLayer::SetupAfterPartition(const LayerProto& proto,
   Setup(proto, srclayers);
 }
 
-void DropoutLayer::ComputeFeature(bool training, const vector& srclayers) {
+void DropoutLayer::ComputeFeature(Phase phase, const vector& srclayers) {
   // check training
-  if(!training){
+  if(phase!= kTrain){//!training){
     data_.CopyFrom(srclayers[0]->data(this));
     return;
   }
@@ -185,7 +185,7 @@ void InnerProductLayer::SetupAfterPartition(const LayerProto& proto,
   Setup(newproto, srclayers);
 }
 
-void InnerProductLayer::ComputeFeature(bool training, const vector& srclayers) {
+void InnerProductLayer::ComputeFeature(Phase phase, const vector& srclayers) {
   Tensor data(data_.mutable_cpu_data(), Shape2(batchsize_,hdim_));
   CHECK_EQ(srclayers[0]->data(this).count(), batchsize_*vdim_);
   Tensor src(srclayers[0]->mutable_data(this)->mutable_cpu_data(),
@@ -223,7 +223,7 @@ void LabelLayer::Setup(const LayerProto& proto,
   data_.Reshape(vector{batchsize});
 }
 
-void LabelLayer::ParseRecords(bool training, const vector& records,
+void LabelLayer::ParseRecords(Phase phase, const vector& records,
     Blob* blob){
   int rid=0;
   float *label= blob->mutable_cpu_data() ;
@@ -236,7 +236,7 @@ void LabelLayer::ParseRecords(bool training, const vector& records,
 
 /*********************LMDBDataLayer**********************************/
-void LMDBDataLayer::ComputeFeature(bool training, const vector& srclayers){
+void LMDBDataLayer::ComputeFeature(Phase phase, const vector& srclayers){
   if(random_skip_){
     int nskip=rand()%random_skip_;
     int n=0;
@@ -355,7 +355,7 @@ void LRNLayer::SetupAfterPartition(const LayerProto& proto,
   Setup(proto, srclayers);
 }
 
-void LRNLayer::ComputeFeature(bool training, const vector& srclayers){
+void LRNLayer::ComputeFeature(Phase phase, const vector& srclayers){
   const float salpha = alpha_ / lsize_;
   Shape<4> s=Shape4(batchsize_,channels_, height_, width_);
   Tensor src(srclayers[0]->mutable_data(this)->mutable_cpu_data(), s);
@@ -381,7 +381,7 @@ void LRNLayer::ComputeGradient(const vector& srclayers) {
 
 /**************** Implementation for MnistImageLayer******************/
-void MnistImageLayer::ParseRecords(bool training,
+void MnistImageLayer::ParseRecords(Phase phase,
     const vector& records, Blob* blob){
   LOG_IF(ERROR, records.size()==0)<<"Empty records to parse";
   int ndim=records.at(0).image().shape_size();
@@ -509,7 +509,7 @@ void PoolingLayer::SetupAfterPartition(const LayerProto& proto,
   Setup(proto, srclayers);
 }
 
-void PoolingLayer::ComputeFeature(bool training, const vector& srclayers){
+void PoolingLayer::ComputeFeature(Phase phase, const vector& srclayers){
   Tensor src(srclayers[0]->mutable_data(this)->mutable_cpu_data(),
       Shape4(batchsize_, channels_, height_, width_));
   Tensor data(data_.mutable_cpu_data(),
@@ -553,7 +553,7 @@ void ReLULayer::SetupAfterPartition(const LayerProto& proto,
   Setup(proto, srclayers);
 }
 
-void ReLULayer::ComputeFeature(bool training, const vector& srclayers){
+void ReLULayer::ComputeFeature(Phase phase, const vector& srclayers){
   Tensor data(data_.mutable_cpu_data(), Shape1(data_.count()));
   Tensor src(srclayers[0]->mutable_data(this)->mutable_cpu_data(),
       Shape1(data_.count()));
@@ -570,7 +570,7 @@ void ReLULayer::ComputeGradient(const vector& srclayers) {
 
 /*************** Implementation for RGBImageLayer *************************/
-void RGBImageLayer::ParseRecords(bool training,
+void RGBImageLayer::ParseRecords(Phase phase,
     const vector& records, Blob* blob){
   const vector& s=blob->shape();
   Tensor images(data_.mutable_cpu_data(), Shape4(s[0],s[1],s[2],s[3]));
@@ -585,8 +585,8 @@ void RGBImageLayer::ParseRecords(bool training,
   const float* meandptr=mean_.cpu_data();
   for(const Record& record: records){
     auto image=images[rid];
-    bool do_crop=cropsize_>0&&training;
-    bool do_mirror=mirror_&&rand()%2&&training;
+    bool do_crop=cropsize_>0&&(phase == kTrain);
+    bool do_mirror=mirror_&&rand()%2&&(phase == kTrain);
     float* dptr=nullptr;
     if(do_crop||do_mirror)
       dptr=raw_image.dptr;
@@ -663,7 +663,7 @@ void RGBImageLayer::Setup(const LayerProto& proto,
 }
 
 /***************Implementation for ShardDataLayer**************************/
-void ShardDataLayer::ComputeFeature(bool training, const vector& srclayers){
+void ShardDataLayer::ComputeFeature(Phase phase, const vector& srclayers){
   if(random_skip_){
     int nskip=rand()%random_skip_;
     LOG(INFO)<<"Random Skip "<Count()
@@ -708,7 +708,7 @@ void TanhLayer::SetupAfterPartition(const LayerProto& proto,
 }
 
-void TanhLayer::ComputeFeature(bool training,
-    const vector& srclayers){
+void TanhLayer::ComputeFeature(Phase phase, const vector& srclayers){
   Tensor data(data_.mutable_cpu_data(), Shape1(data_.count()));
   Tensor src(srclayers[0]->mutable_data(this)->mutable_cpu_data(),
       Shape1(data_.count()));
@@ -738,7 +738,7 @@ void SoftmaxLossLayer::SetupAfterPartition(const LayerProto& proto,
     const vector& srclayers){
   Setup(proto, srclayers);
 }
-void SoftmaxLossLayer::ComputeFeature(bool training, const vector& srclayers) {
+void SoftmaxLossLayer::ComputeFeature(Phase phase, const vector& srclayers) {
   Shape<2> s=Shape2(batchsize_, dim_);
   Tensor prob(data_.mutable_cpu_data(), s);
   Tensor src(srclayers[0]->mutable_data(this)->mutable_cpu_data(), s);


http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/ceaa962e/src/proto/model.proto
----------------------------------------------------------------------
diff --git a/src/proto/model.proto b/src/proto/model.proto
index 59c1a52..c6e3495 100644
--- a/src/proto/model.proto
+++ b/src/proto/model.proto
@@ -25,6 +25,8 @@ enum Phase {
   kTrain = 0;
   kValidation=1;
   kTest= 2;
+  kPositive = 3;
+  kNegative = 4;
 }
 enum ShareOption{
   kValueOnly=0;


http://git-wip-us.apache.org/repos/asf/incubator-singa/blob/ceaa962e/src/trainer/worker.cc
----------------------------------------------------------------------
diff --git a/src/trainer/worker.cc b/src/trainer/worker.cc
index 52798ad..b308c4e 100644
--- a/src/trainer/worker.cc
+++ b/src/trainer/worker.cc
@@ -263,7 +263,7 @@ void BPWorker::Forward(int step, Phase phase, shared_ptr net){
       }
     }
     //clock_t s=clock();
-    layer->ComputeFeature(phase==kTrain);
+    layer->ComputeFeature(phase);
     //LOG(ERROR)<name()<<":"<<(clock()-s)*1.0/CLOCKS_PER_SEC;
     if(layer->is_bridgesrclayer()){
       auto dst=layer->dstlayers().at(0);
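
As a closing illustration of why the Phase enum was extended, the sketch below
(hypothetical code written for this mail, not part of the commit or of the
SINGA sources) shows how an RBM-style layer could dispatch on the new
kPositive/kNegative values inside ComputeFeature, while the worker simply
forwards whatever phase it was given, as in the worker.cc hunk above:

    #include <cstdio>

    // Illustrative copy of the extended enum from src/proto/model.proto.
    enum Phase { kTrain = 0, kValidation = 1, kTest = 2, kPositive = 3, kNegative = 4 };

    // Hypothetical RBM-style layer; the name and behaviour are assumptions.
    struct RBMLikeLayer {
      void ComputeFeature(Phase phase) {
        switch (phase) {
          case kPositive:
            std::printf("positive phase: compute hidden activations from the data\n");
            break;
          case kNegative:
            std::printf("negative phase: reconstruct visible units from hidden samples\n");
            break;
          default:
            std::printf("ordinary forward pass\n");
            break;
        }
      }
    };

    int main() {
      RBMLikeLayer rbm;
      // One contrastive-divergence step runs the two phases back to back.
      rbm.ComputeFeature(kPositive);
      rbm.ComputeFeature(kNegative);
      return 0;
    }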