horn-dev mailing list archives

From "Edward J. Yoon" <edwardy...@apache.org>
Subject Re: Adding Introduction of Apache Horn project on commit101.org
Date Tue, 13 Oct 2015 04:09:46 GMT
It's now available at http://commit101.org/projects/apachehorn

On Mon, Oct 12, 2015 at 7:21 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
> Note: it would be nice if we could put more emphasis on the fact that
> we run natively on Hadoop.
> On Mon, Oct 12, 2015 at 7:13 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>> Hi folks,
>> At my company, we plan to introduce our project on
>> http://commit101.org (OSS at Samsung). The draft below was written by
>> me and Shubham Mehta. Please review it and feel free to give feedback.
>> I'm CC'ing ASF trademarks@; if there's any problem, please let us know.
>> Thanks!
>> --
>> == Apache Horn ==
>> Apache Horn is an Apache Incubator project that lets you do
>> Downpour-SGD-based large-scale deep learning using the heterogeneous
>> resources of existing Hadoop and Hama clusters. It was originally
>> inspired by Google's DistBelief (Jeff Dean et al., 2012). Its
>> architecture is designed around an intuitive programming model based
>> on the neuron-centric abstraction.
>> == Why Apache Horn? ==
>> Deep learning and unsupervised feature learning have shown great
>> promise in many practical applications. State-of-the-art performance
>> has been reported in several domains, ranging from speech recognition
>> and visual object recognition to text processing.
>> It has been observed that increasing the scale of a deep learning
>> model, in terms of both the number of model parameters and the number
>> of training examples, can drastically improve final classification
>> accuracy. There has been a lot of research on making architectures
>> and optimization algorithms feasible for large models with billions
>> of parameters. One approach is to increase parallelism by supporting
>> both model and data parallelism.
>> Horn’s architecture takes the requirement for model parallelism into
>> account and follows the recently introduced approach of a distributed
>> parameter server; a sketch of this pattern follows below. For data
>> parallelism, it makes use of Hama’s master-groom framework.
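>>
>> To make the parameter-server idea concrete, here is a minimal,
>> self-contained Java sketch of asynchronous Downpour SGD on a toy
>> linear model. The class names (ParameterServer, DownpourSketch) are
>> hypothetical and only illustrate the pull/compute/push loop; they are
>> not Horn's actual API.
>>
>> import java.util.Random;
>>
>> // Toy in-process "parameter server" holding the shared weights.
>> class ParameterServer {
>>   private final double[] w;
>>   private final double lr;
>>
>>   ParameterServer(int dim, double lr) {
>>     this.w = new double[dim];
>>     this.lr = lr;
>>   }
>>
>>   synchronized double[] pull() { return w.clone(); }
>>
>>   synchronized void push(double[] grad) {
>>     for (int i = 0; i < w.length; i++) w[i] -= lr * grad[i];
>>   }
>> }
>>
>> public class DownpourSketch {
>>   public static void main(String[] args) throws InterruptedException {
>>     ParameterServer ps = new ParameterServer(2, 0.05);
>>     Thread[] workers = new Thread[4];
>>     for (int t = 0; t < workers.length; t++) {
>>       workers[t] = new Thread(() -> {
>>         Random rnd = new Random();
>>         for (int step = 0; step < 2000; step++) {
>>           double[] w = ps.pull();             // fetch (possibly stale) weights
>>           double x = rnd.nextDouble();        // synthetic sample: y = 3x + 1
>>           double y = 3 * x + 1;
>>           double err = (w[0] * x + w[1]) - y; // prediction error
>>           ps.push(new double[] { err * x, err }); // asynchronous update
>>         }
>>       });
>>       workers[t].start();
>>     }
>>     for (Thread w : workers) w.join();
>>     double[] w = ps.pull();
>>     System.out.printf("learned w=%.2f, b=%.2f (target: 3, 1)%n", w[0], w[1]);
>>   }
>> }
>>
>> Each worker pulls a possibly stale copy of the weights, computes a
>> gradient on its own data, and pushes the update back without waiting
>> for the others; that tolerance for staleness is what makes Downpour
>> SGD scale.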
>> With more involved distributed optimization and inference, we plan to
>> achieve state-of-the-art performance in training large-scale deep
>> learning algorithms.
>> == Goals ==
>> As a distributed framework, we first plan to reduce the total training
>> time needed to reach a given accuracy by adding computing resources,
>> using the Downpour SGD training method for feed-forward models
>> including convolutional neural networks (CNNs) and autoencoders.
>> In Apache Horn, we are planning to support various distributed
>> training schemes such as Sandblaster, AllReduce, and distributed
>> Hogwild, as well as acceleration.
>> In the end, we would like to extend it to other deep learning models
>> such as restricted Boltzmann machines (RBMs) and recurrent neural
>> networks (RNNs).
>> == Principles ==
>> We want to provide a general architecture that exploits the
>> scalability of various training methods. Synchronous methods increase
>> efficiency and ensure consistency, while asynchronous methods
>> increase the convergence rate. We plan to introduce a hybrid approach
>> that offers the best of both worlds.
>> We plan to use the BSP (Bulk Synchronous Parallel) computing
>> framework as the base of our computation. BSP makes the logic and
>> implementation of parallel computation much cleaner, which is one of
>> the biggest challenges in implementing distributed algorithms; a
>> superstep sketch follows below.
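>>
>> As an illustration, here is a rough sketch of BSP supersteps using
>> Hama's BSP API, averaging per-task gradients at a master task. The
>> gradient computation is a placeholder; only the send/sync/read
>> pattern is the point, and this is not code from Horn itself.
>>
>> import java.io.IOException;
>> import org.apache.hadoop.io.DoubleWritable;
>> import org.apache.hadoop.io.NullWritable;
>> import org.apache.hama.bsp.BSP;
>> import org.apache.hama.bsp.BSPPeer;
>> import org.apache.hama.bsp.sync.SyncException;
>>
>> public class GradientAverageBSP extends
>>     BSP<NullWritable, NullWritable, NullWritable, NullWritable,
>>         DoubleWritable> {
>>
>>   @Override
>>   public void bsp(BSPPeer<NullWritable, NullWritable, NullWritable,
>>       NullWritable, DoubleWritable> peer)
>>       throws IOException, SyncException, InterruptedException {
>>     // Superstep 1: every task sends its local gradient to task 0.
>>     double localGradient = Math.random(); // placeholder computation
>>     String master = peer.getPeerName(0);
>>     peer.send(master, new DoubleWritable(localGradient));
>>     peer.sync(); // barrier: all messages delivered before reading
>>
>>     // Superstep 2: the master averages the received gradients.
>>     if (peer.getPeerName().equals(master)) {
>>       double sum = 0;
>>       int n = 0;
>>       DoubleWritable msg;
>>       while ((msg = peer.getCurrentMessage()) != null) {
>>         sum += msg.get();
>>         n++;
>>       }
>>       System.out.println("averaged gradient = " + (sum / n));
>>     }
>>   }
>> }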
>> Further, the programming model is based on the neuron-centric
>> abstraction, which is intuitive for deep learning models; a rough
>> illustration follows.
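>>
>> For a flavor of what "neuron-centric" means, here is a hypothetical,
>> self-contained Java sketch: the user writes only per-neuron forward
>> and backward logic, and the framework would route messages between
>> layers. The Synapse and Neuron names here are illustrative, not
>> Horn's actual interfaces.
>>
>> import java.util.Arrays;
>> import java.util.List;
>>
>> // One incoming edge: the activation arriving on it and its weight.
>> interface Synapse {
>>   double input();
>>   double weight();
>> }
>>
>> // A neuron defines only its local forward/backward behavior.
>> abstract class Neuron {
>>   abstract double forward(Iterable<Synapse> inputs);
>>   abstract double backward(double errorFromAbove);
>> }
>>
>> class SigmoidNeuron extends Neuron {
>>   private double output;
>>
>>   @Override
>>   double forward(Iterable<Synapse> inputs) {
>>     double sum = 0;
>>     for (Synapse s : inputs) sum += s.input() * s.weight();
>>     output = 1.0 / (1.0 + Math.exp(-sum));
>>     return output;
>>   }
>>
>>   @Override
>>   double backward(double errorFromAbove) {
>>     // local error = upstream error times the sigmoid derivative
>>     return errorFromAbove * output * (1 - output);
>>   }
>> }
>>
>> public class NeuronSketch {
>>   public static void main(String[] args) {
>>     SigmoidNeuron n = new SigmoidNeuron();
>>     List<Synapse> in = Arrays.asList(edge(1.0, 0.5), edge(-1.0, 0.25));
>>     System.out.println("activation = " + n.forward(in));
>>     System.out.println("local error = " + n.backward(0.1));
>>   }
>>
>>   static Synapse edge(double input, double weight) {
>>     return new Synapse() {
>>       public double input() { return input; }
>>       public double weight() { return weight; }
>>     };
>>   }
>> }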
>> == Getting Involved ==
>> Horn is an open source volunteer project under the Apache Software Foundation.
>> Currently, a number of researchers and developers from various
>> organizations, such as Microsoft, Samsung Electronics, Seoul National
>> University, Technical University of Munich, KAIST, LINE plus, and Cldi
>> Inc., are involved in the Horn project.
>> We encourage you to learn about the project and contribute your expertise.
>> --
>> Best Regards, Edward J. Yoon
> --
> Best Regards, Edward J. Yoon

Best Regards, Edward J. Yoon
