singa-dev mailing list archives

From "wangwei (JIRA)" <>
Subject [jira] [Updated] (SINGA-7) Implement shared memory Hogwild algorithm
Date Sat, 13 Jun 2015 03:41:00 GMT


wangwei updated SINGA-7:
    Assignee: wangwei

> Implement shared memory Hogwild algorithm
> -----------------------------------------
>                 Key: SINGA-7
>                 URL:
>             Project: Singa
>          Issue Type: New Feature
>            Reporter: wangwei
>            Assignee: wangwei
>              Labels: features, hogwild
> The original Hogwild [1] algorithm works on a multi-core machine with shared memory.
> There are two ways to implement it in SINGA:
> 1. Follow the worker-server architecture: launch multiple worker groups and one server
> group, and share the memory space of parameter values among the worker groups and the
> server group. Worker groups compute gradients, and the server group updates parameter values.
> 2. Use a worker-only architecture, like Caffe: share the memory space of parameter values
> among worker groups. Workers compute gradients and update parameters locally.
> To simplify the implementation, we can first restrict the group size to 1.
> There are also two choices for the frequency of reporting the training/test performance.
> 1. Based on training iterations
> 2. Based on training time (e.g., seconds)
> Once the shared memory version is finished, we will extend it to a distributed environment.
> [1] B. Recht, C. Re, S. J. Wright, and F. Niu. Hogwild: A lock-free approach to
> parallelizing stochastic gradient descent. In NIPS, pages 693–701, 2011.
