singa-dev mailing list archives

From "wangwei (JIRA)" <j...@apache.org>
Subject [jira] [Created] (SINGA-131) Implement and optimize hybrid training using both CPU and GPU
Date Tue, 12 Jan 2016 02:30:39 GMT
wangwei created SINGA-131:
-----------------------------

             Summary: Implement and optimize hybrid training using both CPU and GPU
                 Key: SINGA-131
                 URL: https://issues.apache.org/jira/browse/SINGA-131
             Project: Singa
          Issue Type: Improvement
            Reporter: wangwei


We previously discussed implementing hybrid training with researchers from Stanford:
http://mail-archives.apache.org/mod_mbox/singa-dev/201507.mbox/%3CCAJz0iLsd5iSCqqVU4QHLKzMO2o%2BFt-40kN8RgWkYhDn%3D6Qqqbw%40mail.gmail.com%3E.
Now that GPU training is supported, we can move on to this feature.

The distributed training framework is a natural fit for hybrid training with CPU and GPU. The first
n workers would be assigned GPU cards (where n is the number of cards configured by the user),
and the remaining workers would run on the CPU.
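
A minimal sketch of this assignment rule (illustrative C++ only; the struct and function names
are assumptions, not SINGA's actual configuration API):

{code:cpp}
#include <string>
#include <vector>

// Hypothetical worker configuration; fields are illustrative.
struct WorkerConfig {
  int worker_id;
  std::string device;  // "gpu:<card>" or "cpu"
};

// Assign the first num_gpu_cards workers to GPU cards, the rest to CPU.
std::vector<WorkerConfig> AssignDevices(int num_workers, int num_gpu_cards) {
  std::vector<WorkerConfig> configs;
  for (int i = 0; i < num_workers; ++i) {
    if (i < num_gpu_cards)
      configs.push_back({i, "gpu:" + std::to_string(i)});
    else
      configs.push_back({i, "cpu"});
  }
  return configs;
}
{code}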

Some code may need updates and optimization to handle the memory transfer between GPU
workers and CPU workers. Most of it is in worker.cc, param.cc and stub.cc.
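
For example, a GPU worker's gradients must be copied into host memory before a CPU worker or
the stub can read them. A rough sketch using the CUDA runtime (the function and parameter names
below are assumptions, not the actual code in param.cc):

{code:cpp}
#include <cstddef>
#include <cuda_runtime.h>

// Illustrative only: copy a parameter's gradient from GPU device memory
// into host memory so CPU-side code (a CPU worker or the stub) can read it.
void CopyGradToHost(const float* dev_grad, float* host_grad, std::size_t count) {
  // A real implementation would likely use an asynchronous copy on a
  // stream and overlap the transfer with computation.
  cudaMemcpy(host_grad, dev_grad, count * sizeof(float),
             cudaMemcpyDeviceToHost);
}
{code}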

Automatic tuning of the workload between GPU and CPU could be designed and implemented in this
ticket or in a new one.
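
One possible scheme is to split each mini-batch across workers in proportion to their measured
throughput. A minimal sketch under that assumption (names are illustrative, not a committed design):

{code:cpp}
#include <cstddef>
#include <numeric>
#include <vector>

// Split batch_size across workers proportionally to their measured
// throughput (e.g., images/sec); the last worker takes the remainder
// so the shares always sum to batch_size.
std::vector<int> SplitBatch(int batch_size,
                            const std::vector<double>& throughput) {
  double total = std::accumulate(throughput.begin(), throughput.end(), 0.0);
  std::vector<int> shares;
  int assigned = 0;
  for (std::size_t i = 0; i + 1 < throughput.size(); ++i) {
    int s = static_cast<int>(batch_size * throughput[i] / total);
    shares.push_back(s);
    assigned += s;
  }
  shares.push_back(batch_size - assigned);
  return shares;
}
{code}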



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
