hadoop-user mailing list archives

From Tharindu Mathew <mcclou...@gmail.com>
Subject Re: Extension points available for data locality
Date Tue, 21 Aug 2012 12:44:14 GMT
Dino, Feng,

Thanks for the options, but I guess I need to do it myself.

Harsh,

What you said matches the initial impression I got, but I thought I needed to
do something more with the name node. Thanks for clearing that up.

My guess is that this works by the scheduler calling InputSplit#getLocations()
and matching the returned location's IP (or host) against the IP (or host) of
each task tracker? Is this correct?
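
To check my understanding, here's a rough sketch of the kind of InputFormat
I have in mind. The class names (DBClusterInputFormat, ShardSplit) and the
shard-to-host mapping are made up, and the RecordReader is omitted:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class DBClusterInputFormat extends InputFormat<LongWritable, Text> {

  // Placeholder: in reality the shard -> host mapping would come from
  // the job configuration or the MySQL cluster's metadata.
  private static final String[] SHARD_HOSTS = { "mysql-node-1", "mysql-node-2" };

  // One split per MySQL shard, located on the node that stores it.
  @Override
  public List<InputSplit> getSplits(JobContext context) throws IOException {
    List<InputSplit> splits = new ArrayList<InputSplit>();
    for (String host : SHARD_HOSTS) {
      splits.add(new ShardSplit(host));
    }
    return splits;
  }

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    // Would open a JDBC connection to the split's host and stream rows;
    // omitted here, sketch only.
    throw new UnsupportedOperationException("sketch only");
  }

  public static class ShardSplit extends InputSplit implements Writable {
    private String shardHost; // MySQL node that holds this shard's data

    public ShardSplit() {} // no-arg constructor needed for deserialization
    public ShardSplit(String shardHost) { this.shardHost = shardHost; }

    @Override
    public long getLength() { return 0; } // unknown; only affects split ordering

    // This is what the scheduler matches against task tracker hosts
    // when it tries to launch a data-local map task.
    @Override
    public String[] getLocations() throws IOException {
      return new String[] { shardHost };
    }

    @Override
    public void write(DataOutput out) throws IOException {
      Text.writeString(out, shardHost);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
      shardHost = Text.readString(in);
    }
  }
}

If getLocations() returns the MySQL node's hostname exactly as the task
trackers report theirs, the scheduler should prefer launching the mapper
there when a slot is free on that node.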


On Tue, Aug 21, 2012 at 3:14 PM, feng lu <amuseme.lu@gmail.com> wrote:

> Hi Tharindu
>
> Maybe you can try Gora. The Apache Gora open source framework provides an
> in-memory data model and persistence for big data. Gora supports persisting
> to column stores, key value stores, document stores and RDBMSs, and
> analyzing the data with extensive Apache Hadoop MapReduce support.
>
> It now supports MySQL via the gora-sql module.
>
>  http://gora.apache.org/
>
>
> On Tue, Aug 21, 2012 at 5:39 PM, Harsh J <harsh@cloudera.com> wrote:
>
>> Tharindu,
>>
>> (Am assuming you've done enough research to know that there's benefit
>> in what you're attempting to do.)
>>
>> Locality of tasks is determined by the job's InputFormat class.
>> Specifically, the locality information returned by the InputSplit
>> objects via the InputFormat#getSplits(…) API is what the MR scheduler
>> looks at when trying to launch data-local tasks.
>>
>> You can tweak your InputFormat (the one that uses this DB as input?)
>> to return relevant locations based on your "DB Cluster", in order to
>> achieve this.
>>
>> On Tue, Aug 21, 2012 at 2:36 PM, Tharindu Mathew <mccloud35@gmail.com>
>> wrote:
>> > Hi,
>> >
>> > I'm doing some research that involves pulling data stored in a MySQL
>> > cluster directly for a MapReduce job, without storing the data in HDFS.
>> >
>> > I'd like to run Hadoop task tracker nodes directly on the MySQL cluster
>> > nodes, the purpose being to start mappers on the node closest to the
>> > data where possible (data locality).
>> >
>> > I notice that with HDFS, since the name node knows exactly where each
>> > data block is, it uses this knowledge to achieve data locality.
>> >
>> > Is there a way to achieve this, possibly by extending the name node or
>> > in some other way?
>> >
>> > Thanks in advance.
>> >
>> > --
>> > Regards,
>> >
>> > Tharindu
>> >
>> > blog: http://mackiemathew.com/
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>
>
> --
> Don't Grow Old, Grow Up... :-)
>



-- 
Regards,

Tharindu

blog: http://mackiemathew.com/
