singa-dev mailing list archives

From "ASF subversion and git services (JIRA)" <>
Subject [jira] [Commented] (SINGA-126) Improve Python Binding for interactive training
Date Wed, 06 Apr 2016 08:38:25 GMT


ASF subversion and git services commented on SINGA-126:

Commit 1c8e0dc03e1fc06c2e6460892fc9a91e482e5434 in incubator-singa's branch refs/heads/master
from [~flytosky]

SINGA-126 Python Binding for Interactive Training

1. Replace 'x != None' with 'x is not None'.
2. Fix a type-mismatch bug: debug must be set to False before it is passed to SINGA's
loss layer.
3. Set a default value for SingaProto's zookeeper endpoint, so that '-singa_conf xxx'
can be omitted. SINGA then assumes the default zookeeper endpoint ('localhost:2181'),
and glog uses its default logging directory.
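The first change is the standard Python idiom: `!=` dispatches to the overloadable `__ne__`, and objects such as numpy arrays overload it elementwise, so `x != None` may not return a plain bool. `x is not None` is an identity check and always does. A minimal illustration:

```python
import numpy as np

x = np.array([1.0, 2.0])

# Comparison operators are overloaded elementwise on arrays, so this
# yields a boolean array, not a bool (ambiguous if used in an `if`):
result = x != None

# Identity comparison always yields a single bool:
check = x is not None
```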

> Improve Python Binding for interactive training
> -----------------------------------------------
>                 Key: SINGA-126
>                 URL:
>             Project: Singa
>          Issue Type: Improvement
>            Reporter: wangwei
>            Assignee: Lee Chonho
>              Labels: binding, debugging, interactive, python
> Currently, the Python APIs only configure the layer and model. All objects are created after
> the JobProto is passed to Driver. Hence, users cannot query the layer object returned
> {code}
> conv1 = Convolution2D()
> {code}
> to get its internal data (e.g., feature and param values). This internal data is useful
> for debugging.
> To support this feature, we need to create the SINGA::Layer object and store it in conv1.
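One way to support this, sketched below under stated assumptions: `create_cpp_layer` and the attribute names are hypothetical, not the actual SINGA binding. The idea is that the Python constructor creates and holds the native layer object immediately, so the instance can be queried before a full JobProto is built:

```python
class Convolution2D:
    def __init__(self, nb_filter=32, kernel=3):
        self.conf = {"nb_filter": nb_filter, "kernel": kernel}
        # In the proposed design this would wrap a singa::Layer created
        # through the C++ binding, e.g.:
        #   self.singa_layer = create_cpp_layer(self.conf)   # hypothetical
        self.singa_layer = None  # placeholder for the native handle

    def get_params(self):
        # Would forward to the native layer's GetParams(); stubbed here
        # to return the configuration instead.
        return self.conf
```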
> Users can write their own BP algorithm like this,
> {code}
> data = numpy.loadtxt("csv.txt")
> x, y = data[:, 1:], data[:, 0]
> input = Dummy()  # dummy layer to feed input data
> label = Dummy()  # dummy layer to feed labels
> conv = Convolution2D(...)
> pool = Pool2D()
> inner = Dense()
> loss = ...
> for i in range(x.shape[0] // batchsize):
>    xb, yb = ...
>    input.SetData(xb)
>    label.SetData(yb)
>    conv.ComputeFeature(input)
>    pool.ComputeFeature(conv)
>    inner.ComputeFeature(pool)
>    loss.ComputeGradient(inner, label)
>    ....
> {code}
> In this way, users know exactly how the training is conducted, and can access the internal
> data of each layer directly, e.g., conv.GetParams().
> We may also learn from chainer to call the ComputeGradient functions automatically for
the backward pass.
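The chainer-style idea can be sketched with a tape: each forward call records the layer, and a single backward() call replays them in reverse so users never invoke the gradient functions by hand. The Layer interface below is hypothetical, for illustration only:

```python
class Tape:
    """Records layers during the forward pass, replays them in backward."""
    def __init__(self):
        self.layers = []

    def forward(self, layer, x):
        y = layer.forward(x)
        self.layers.append(layer)
        return y

    def backward(self, grad):
        # Call each recorded layer's backward in reverse order, so the
        # user does not chain ComputeGradient calls manually.
        for layer in reversed(self.layers):
            grad = layer.backward(grad)
        return grad

class Scale:
    """Toy layer: y = s * x, so dy/dx = s."""
    def __init__(self, s):
        self.s = s
    def forward(self, x):
        return self.s * x
    def backward(self, grad):
        return self.s * grad

tape = Tape()
y = tape.forward(Scale(2.0), tape.forward(Scale(3.0), 1.0))  # y = 2 * 3 * 1 = 6.0
g = tape.backward(1.0)                                       # g = 2 * 3 = 6.0
```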
> This feature requires the python APIs for singa::Layer.
> It is easy for training with a single worker. For multiple workers, we need to think

This message was sent by Atlassian JIRA
