horn-dev mailing list archives

From "Edward J. Yoon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HORN-7) Implementation of distributed model trainer
Date Thu, 19 Nov 2015 07:10:10 GMT

    [ https://issues.apache.org/jira/browse/HORN-7?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15013051#comment-15013051 ]

Edward J. Yoon commented on HORN-7:

The forward and backward passes are the essential computations of a neural net. Executed
naively on a vertex-centric BSP framework, only the vertices of a single layer would be
activated in each superstep, which is quite inefficient. So, instead, we feed a new training
instance continuously at every superstep, and each superstep handles the forward messages of
the current training instance and the backward (error) messages of the previous training
instance at once.

Then, at the end of each mini-batch, we push the accumulated updates to the parameter servers.
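The pipelined scheme above can be sketched roughly as follows. This is a minimal toy
illustration, not the actual Horn API: the class names, the one-weight "model", and the
ParameterServer stub are all hypothetical, and pulling refreshed parameters back from the
server is omitted. It only shows the superstep structure (forward the current instance,
back-propagate the previous one, flush accumulated updates once per mini-batch).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the pipelined superstep scheme described above
// (not the actual Horn implementation).
public class PipelinedTrainerSketch {

    // Stand-in for a parameter server: just records pushed updates.
    static class ParameterServer {
        final List<Double> pushes = new ArrayList<>();
        void push(double accumulatedUpdate) { pushes.add(accumulatedUpdate); }
    }

    final ParameterServer server = new ParameterServer();
    final int miniBatchSize;

    double weight = 0.5;             // toy one-weight "model"
    double accumulated = 0.0;        // updates accumulated within a mini-batch
    Double pendingActivation = null; // forward result awaiting its backward pass
    Double pendingTarget = null;
    int backwardCount = 0;

    PipelinedTrainerSketch(int miniBatchSize) {
        this.miniBatchSize = miniBatchSize;
    }

    // One superstep: backward pass for the previous instance (if any),
    // then forward pass for the current instance.
    void superstep(double x, double y) {
        if (pendingActivation != null) {
            double error = pendingActivation - pendingTarget;
            accumulated += error;              // accumulate rather than apply
            backwardCount++;
            if (backwardCount % miniBatchSize == 0) {
                server.push(accumulated);      // flush once per mini-batch
                accumulated = 0.0;
            }
        }
        pendingActivation = weight * x;        // forward the current instance
        pendingTarget = y;
    }
}
```

Note that every superstep does useful work on two instances at once, instead of idling
while a single instance traverses the layers.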

> Implementation of distributed model trainer
> -------------------------------------------
>                 Key: HORN-7
>                 URL: https://issues.apache.org/jira/browse/HORN-7
>             Project: Apache Horn
>          Issue Type: New Feature
>            Reporter: Edward J. Yoon
>            Assignee: Edward J. Yoon
> As we discussed in HORN-4, we'll have a neuron-centric message passing framework for training
large models.
> I'll add a distbelief package that includes the neuron-centric interfaces and an execution
runner for them.

This message was sent by Atlassian JIRA
