horn-dev mailing list archives

From "Edward J. Yoon" <edwardy...@apache.org>
Subject Re: Sharing Survey Results
Date Wed, 25 Nov 2015 08:14:03 GMT
P.S., http://www.cs.cornell.edu/~wenleix/paper/blockgrace_vldb2013.pdf

On Wed, Nov 25, 2015 at 4:52 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
> Recently I've discussed this a little with co-workers, Greg (Pregel) and
> Adam Coates, and here are a few survey results:
>
> In general, a large amount of memory is required by the fully-connected
> layers used in many different architectures. In a Convolutional Neural
> Network, the convolutional layers are the computational bottleneck and
> the fully-connected layers are the memory bottleneck (with the
> matrix-multiplication approach).
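>
> A rough back-of-the-envelope sketch of that point (hypothetical layer
> sizes in plain Java, not measured from any benchmark): a convolutional
> layer's small weight set is reused at every output position, while a
> fully-connected layer's large weight matrix is touched only once per
> instance.
>
>   // conv layer: 256 -> 384 channels, 3x3 kernel, 13x13 output map (assumed sizes)
>   long convParams = 384L * 256 * 3 * 3;           // ~0.9M weights
>   long convFlops  = 2L * convParams * 13 * 13;    // weights reused per output pixel -> ~300M FLOPs
>   // fully-connected layer: 9216 -> 4096 (assumed sizes)
>   long fcParams = 9216L * 4096;                   // ~38M weights
>   long fcFlops  = 2L * fcParams;                  // each weight used once -> ~75M FLOPs
>
> So the FC layer holds roughly 40x more parameters but needs about 4x
> fewer FLOPs, which is why it is memory-bound while the conv layer is
> compute-bound.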
>
> In a multi-GPU architecture, there are communication overheads such as
> the bottleneck of switching to the next mini-batch (copying the next
> set of images and parameters). So, I think a neuron-based iterative
> computing model on CPUs is a good fit for model parallelism. As you
> already know, the forward and backward passes are the essential
> computations of a Neural Net. If we follow the Pregel-style model,
> only a few vertices of a single layer of the Neural Net will be
> activated in a single superstep, which is quite inefficient. So,
> instead of doing that, we send a training instance continuously at
> every superstep, and then handle the information (forward messages of
> the current training instance) and the error (backward messages of the
> previous training instance) at once. This approach is similar to
> scheme (b) of "One weird trick for parallelizing convolutional neural
> networks", Google 2014.
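>
> A minimal sketch of that pipelined superstep (hypothetical neuron and
> message interfaces in plain Java, not the actual HORN API): in each
> superstep a neuron first consumes the error messages of instance t-1,
> then the forward messages of instance t, so no layer sits idle.
>
>   interface Msg { int sourceId(); double value(); }
>
>   abstract class PipelinedNeuron {
>     double[] weights;        // incoming synapse weights
>     double prevActivation;   // activation cached from the previous superstep
>
>     void compute(Iterable<Msg> forward, Iterable<Msg> error) {
>       // backward pass for instance t-1, using last superstep's activation
>       double delta = 0.0;
>       for (Msg m : error) delta += m.value();
>       delta *= prevActivation * (1.0 - prevActivation);   // sigmoid derivative
>       for (int i = 0; i < weights.length; i++) {
>         sendBackward(i, delta * weights[i]);              // error to lower layer
>         // weight update would use the cached input of t-1 (omitted for brevity)
>       }
>
>       // forward pass for instance t, arriving in the same superstep
>       double net = 0.0;
>       for (Msg m : forward) net += weights[m.sourceId()] * m.value();
>       prevActivation = 1.0 / (1.0 + Math.exp(-net));      // sigmoid
>       sendForward(prevActivation);
>     }
>
>     abstract void sendForward(double activation);
>     abstract void sendBackward(int neuronId, double err);
>   }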
>
> In conclusion, we're doing good.
>
> --
> Best Regards, Edward J. Yoon



-- 
Best Regards, Edward J. Yoon
