Subject: Re: Does Hama Graph provide any file reader interface at running time?
From: 顾荣 (Gu Rong) <gurongwalker@gmail.com>
To: dev@hama.apache.org
Date: Thu, 20 Sep 2012 22:32:28 +0800

Hi, Thomas.

I read your blog and GitHub posts about training NNs on Hama several days ago. I agree with
you on this topic, based on my experience implementing NNs in a distributed way.
That was before I knew about the Hama project, so I implemented a custom distributed system,
called cNeural, for training NNs on large-scale training data myself.
It basically follows a master/slave architecture. I adopted Hadoop RPC for communication and
HBase for storing the large-scale training dataset, and I used a batch-mode BP training algorithm.
BTW, HBase is very suitable for storing training data sets for machine learning. No matter
how large a training data set is, an HTable can easily store it across many region servers.
Each training sample can be stored as a record in the HTable, even if it is sparsely coded.
Furthermore, HBase provides random access to your training samples. In my experience,
it's much better to store structured data in HBase than directly in HDFS.
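
As a concrete illustration of that random-access pattern, here is a minimal sketch against the classic HBase client API, storing one training sample per row; the table name, the column family "d", and the long row key are made-up assumptions for illustration, not anything cNeural or Hama prescribes:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    // Stores and fetches one training sample per HTable row.
    public class SampleStore {

      private final HTable table;

      public SampleStore() throws IOException {
        Configuration conf = HBaseConfiguration.create();
        this.table = new HTable(conf, "training_samples"); // hypothetical table name
      }

      public void putSample(long sampleId, byte[] features, byte[] label)
          throws IOException {
        Put put = new Put(Bytes.toBytes(sampleId));
        put.add(Bytes.toBytes("d"), Bytes.toBytes("features"), features);
        put.add(Bytes.toBytes("d"), Bytes.toBytes("label"), label);
        table.put(put);
      }

      public byte[] getFeatures(long sampleId) throws IOException {
        Get get = new Get(Bytes.toBytes(sampleId));
        Result result = table.get(get);
        return result.getValue(Bytes.toBytes("d"), Bytes.toBytes("features"));
      }
    }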

Back to this topic: as you mentioned, I can read the training data directly from HDFS
through the HDFS API during the setup stage of the vertex. I had also considered this and
have known how to use the HDFS API for a long time, but thanks for the hint anyway :)
However, I am afraid it may cost quite a lot of time, because for a large-scale NN with
thousands of neurons, having each neuron vertex read the same training sample almost
simultaneously would cause a lot of network traffic and put too much stress on HDFS.
What's more, it seems unnecessary. I planned to select a master vertex responsible for
reading samples from HDFS and to initialize each input neuron by sending it the
corresponding feature value from that master vertex.
However, even though I can do this, there are a lot more tough problems to solve, such as
partitioning. As you said, controlling this training workflow in a distributed way is too
complex, and with so much network communication and distributed synchronization it will be
much slower than the sequential program executed on a single machine. In a word, this kind
of distribution will probably lead to no improvement, only slower speed and higher complexity.
As for the high dimensionalities you mention, I suggest using GPUs to handle this; distribution
may not be a good solution in that case. Of course, we can combine GPUs with Hama, and I
believe that will be necessary in the near future.

As I mentioned at the beginning of this mail, I implemented cNeural, and I also compared
cNeural with Hadoop for solving this problem. The experimental results can be found in the
attachment of this mail. In general, cNeural adopted a parallel strategy much like the BSP
model, so I am about to reimplement cNeural on Hama BSP. I learned Hama Graph this week,
came across the thought of implementing NNs on Hama Graph, thought about this case, and
asked this question. I agree with your analysis.

Regards,
Walker.


2012/9/20 Thomas Jungblut <thomas.jungblut@gmail.com>
Hi,

nice idea, but I'm certainly unsure if the graph module really fits your
needs.
In backprop you need to set the input to the different neurons in your input
layer and you have to forward-propagate these until you reach the output
layer. Calculating the error from this single step in your architecture
would consume many supersteps. This is totally inefficient in my
opinion, but let's set that thought aside.

Assuming you have an n-by-m matrix which contains your whole training set, where the m-th
column holds the outcome for the preceding features: an input vertex should have the ability
to read a row of the corresponding column vector from the training set, and the output
neurons need to do the same.
Good news: you can do this by reading a file within the setup function of a
vertex, or by reading it line by line when compute is called. You can access
filesystems with the Hadoop DFS API pretty easily. Just type it into your
favourite search engine; the class is simply called FileSystem and you can get it
by using FileSystem.get(Configuration conf).
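
For reference, a minimal sketch of that FileSystem pattern, reading a comma-separated training matrix from HDFS; the path argument and the "one sample per line, last value is the outcome" layout are illustrative assumptions:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TrainingSetReader {

      // Reads an n-by-m training matrix stored as comma-separated lines in HDFS.
      public static double[][] readMatrix(Configuration conf, String hdfsPath)
          throws IOException {
        FileSystem fs = FileSystem.get(conf);
        List<double[]> rows = new ArrayList<double[]>();
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(fs.open(new Path(hdfsPath))));
        try {
          String line;
          while ((line = reader.readLine()) != null) {
            String[] parts = line.split(",");
            double[] row = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
              row[i] = Double.parseDouble(parts[i]); // last entry = outcome/label
            }
            rows.add(row);
          }
        } finally {
          reader.close();
        }
        return rows.toArray(new double[rows.size()][]);
      }
    }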

Now here is my experience with a raw BSP and neural networks if you
consider this against the graph module:
- partition the neurons horizontally (through the layers), not by the layers
- weights must be averaged across multiple tasks (see the sketch below)
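
A rough sketch of that averaging step with Hama's raw BSP primitives (send/sync/getCurrentMessage), reduced to a single scalar weight per task for brevity; picking task 0 as the master and broadcasting the average back are assumptions of this sketch, not a fixed Hama convention:

    import java.io.IOException;

    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hama.bsp.BSP;
    import org.apache.hama.bsp.BSPPeer;
    import org.apache.hama.bsp.sync.SyncException;

    // Every task sends its locally updated weight to a master task, which
    // averages the values and broadcasts the result back to all tasks.
    public class WeightAveragingBSP extends
        BSP<NullWritable, NullWritable, NullWritable, NullWritable, DoubleWritable> {

      @Override
      public void bsp(
          BSPPeer<NullWritable, NullWritable, NullWritable, NullWritable, DoubleWritable> peer)
          throws IOException, SyncException, InterruptedException {
        double localWeight = trainLocally();   // placeholder for the local update
        String master = peer.getPeerName(0);   // task 0 acts as the master

        peer.send(master, new DoubleWritable(localWeight));
        peer.sync();

        if (peer.getPeerName().equals(master)) {
          double sum = 0;
          int count = 0;
          DoubleWritable msg;
          while ((msg = peer.getCurrentMessage()) != null) {
            sum += msg.get();
            count++;
          }
          DoubleWritable averaged = new DoubleWritable(sum / count);
          for (String name : peer.getAllPeerNames()) {
            peer.send(name, averaged);
          }
        }
        peer.sync();

        DoubleWritable averagedWeight = peer.getCurrentMessage();
        // continue the next training epoch with averagedWeight.get() ...
      }

      private double trainLocally() {
        return Math.random(); // stand-in for a real gradient update
      }
    }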

I came to the conclusion myself that it is far better to implement a
function optimizer with raw BSP to train the weights (a simple
StochasticGradientDescent totally works out for almost every normal use case
if your network has a convex cost function).
Of course this doesn't work out well for higher dimensionalities, but more
data usually wins, even with simpler models. In the end you can always
boost it anyway.
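
For concreteness, the kind of per-sample update such an optimizer performs; this is a plain stochastic gradient-descent step for a linear model with squared error, with the learning rate eta and the dense double[] layout as illustrative assumptions:

    // One SGD update on a single sample (x, y) for a linear model with
    // squared error: w <- w - eta * (w·x - y) * x
    public final class SgdStep {

      public static double[] update(double[] w, double[] x, double y, double eta) {
        double prediction = 0.0;
        for (int j = 0; j < w.length; j++) {
          prediction += w[j] * x[j];
        }
        double error = prediction - y; // derivative of 0.5 * (w·x - y)^2 w.r.t. prediction
        double[] updated = new double[w.length];
        for (int j = 0; j < w.length; j++) {
          updated[j] = w[j] - eta * error * x[j];
        }
        return updated;
      }
    }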

I will of course support you on this if you like; I'm fairly certain that
your way can work, but it will be slow as hell.
Just my usual two cents on various topics ;)

2012/9/20 顾荣 (Gu Rong) <gurongwalker@gmail.com>

> Hi, guys.
>
> As you are calling for some application programs on Hama in the *Future
> Plans* of the Hama programming wiki here (
>
> https://issues.apache.org/jira/secure/attachment/12528218/ApacheHamaBSPProgrammingmodel.pdf
> ),
> I am very interested in machine learning. I have a plan to implement neural
> networks (e.g. multilayer perceptrons with BP) on Hama. Hama seems to be a
> nice tool for training large-scale neural networks. Especially for those
> with a large-scale structure (many hidden layers and many neurons), I find
> that Hama Graph provides a good solution. We can regard each neuron in an
> NN (neural network) as a vertex in Hama Graph, and the links between neurons
> as edges in the graph. Then the training process can be regarded as updating
> the weights of the edges among vertices. However, I encountered a problem in
> the current Hama Graph implementation.
>
> Let me explain this to you. As you may know, during the training process
> of many machine learning algorithms, we need to feed many training samples
> into the model one by one. Usually, more training samples will lead to
> more precise models. However, as far as I know, the only input file interface
> provided by Hama Graph is the input for the graph structure. Sadly, it's
> hard to read and distribute the training samples at running time, as
> users can only implement their computing logic by overriding a few key
> functions such as compute() in the Vertex class. So, does Hama Graph
> provide any flexible file reading interface for users at running time?
>
> Thanks in advance.
>
> Walker.
>
