singa-dev mailing list archives

From "wangwei (JIRA)" <>
Subject [jira] [Created] (SINGA-7) Implement shared memory Hogwild algorithm
Date Tue, 09 Jun 2015 12:49:02 GMT
wangwei created SINGA-7:

             Summary: Implement shared memory Hogwild algorithm
                 Key: SINGA-7
             Project: Singa
          Issue Type: New Feature
            Reporter: wangwei

The original Hogwild [1] algorithm works on a multi-core machine with shared memory. There
are two ways to implement it in SINGA:
1. Follow the worker-server architecture: launch multiple worker groups and one server
group, and share the memory space of parameter values among the worker groups and the server
group. Worker groups compute gradients, and the server group updates parameter values.

2. Use a worker-only architecture, as Caffe does: share the memory space of parameter values
among worker groups. Workers compute gradients and update parameters locally.

To simplify the implementation, we can first restrict the group size to 1.

There are also two choices for the frequency of reporting the training/test performance:
1. based on training iterations
2. based on training time (e.g., seconds)
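The two reporting triggers could look like the following sketch (names such as
`should_report_by_iteration` and `TimeReporter` are hypothetical, chosen only to illustrate
the two options):

```python
import time

def should_report_by_iteration(step, every_n=100):
    """Option 1: report every fixed number of training iterations."""
    return step % every_n == 0

class TimeReporter:
    """Option 2: report when a fixed wall-clock interval has elapsed."""
    def __init__(self, every_secs=10.0):
        self.every_secs = every_secs
        self.last = time.monotonic()

    def should_report(self):
        now = time.monotonic()
        if now - self.last >= self.every_secs:
            self.last = now
            return True
        return False
```

Iteration-based reporting is deterministic and easy to compare across runs; time-based
reporting keeps the logging overhead bounded regardless of how fast workers iterate.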

Once the shared memory version is finished, we will extend it to a distributed environment.

[1] B. Recht, C. Re, S. J. Wright, and F. Niu. Hogwild!: A lock-free approach to parallelizing
stochastic gradient descent. In NIPS, pages 693-701, 2011.

This message was sent by Atlassian JIRA
