hadoop-common-user mailing list archives

From Amr Shahin <amrnablus@gmail.com>
Subject Re: number of map and reduce task does not change in M/R program
Date Sun, 20 Oct 2013 15:29:47 GMT
Try profiling the job
(http://hadoop.apache.org/docs/stable/mapred_tutorial.html#Profiling).
And yeah, the machine specs could be the reason; that's why Hadoop was
invented in the first place ;)
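
With the JobConf API that page describes, turning the profiler on looks
roughly like this (a minimal sketch; MyJob is a placeholder for your
driver class):

    import org.apache.hadoop.mapred.JobConf;

    JobConf conf = new JobConf(MyJob.class);  // MyJob: placeholder driver class
    conf.setProfileEnabled(true);             // sets mapred.task.profile=true
    conf.setProfileTaskRange(true, "0-2");    // profile the first three map tasks
    conf.setProfileTaskRange(false, "0-2");   // ...and the first three reduce tasks

By default this runs hprof on the selected tasks, and each profiled task's
output ends up in its userlog directory, so you can see where the time goes.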


On Sun, Oct 20, 2013 at 8:39 AM, Anseh Danesh <anseh.danesh@gmail.com> wrote:

> I tried it on a small set of data, about 600,000 records, and it did not
> take too long; the execution time was reasonable. But on the set of
> 100,000,000 records it really performs badly. One more thing: I have 2
> processors in my machine, and I think this amount of data is far too
> large for them, which is why processing takes so long. What do you think?
>
>
> On Sun, Oct 20, 2013 at 1:49 AM, Amr Shahin <amrnablus@gmail.com> wrote:
>
>> Try running the job locally on a small subset of the data and see whether
>> it still takes too long. If so, your map code might have a performance
>> problem.
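>>
>> Forcing local mode looks roughly like this (a sketch, assuming the old
>> mapred API; MyJob is a placeholder for your driver class):
>>
>>     import org.apache.hadoop.mapred.JobClient;
>>     import org.apache.hadoop.mapred.JobConf;
>>
>>     JobConf conf = new JobConf(MyJob.class);
>>     conf.set("mapred.job.tracker", "local");  // use the local job runner
>>     conf.set("fs.default.name", "file:///");  // read input from the local FS
>>     // point the job at a small sample, then submit as usual:
>>     JobClient.runJob(conf);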
>>
>>
>> On Sat, Oct 19, 2013 at 9:08 AM, Anseh Danesh <anseh.danesh@gmail.com> wrote:
>>
>>> Hi all. I have a question: I have a MapReduce program that gets its input
>>> from Cassandra. The input is fairly large, about 100,000,000 rows, and my
>>> problem is that the program takes too long to process it, even though
>>> MapReduce is supposed to be good and fast for large volumes of data. So I
>>> think maybe the problem is the number of map and reduce tasks. I set the
>>> number of map and reduce tasks with JobConf, with Job, and also in
>>> conf/mapred-site.xml, but I don't see any change. In my logs it starts at
>>> map 0% reduce 0%, and after about 2 hours of work it shows map 1% reduce
>>> 0%! What should I do? Please help me, I'm really confused...
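>>>
>>> For reference, this is roughly what I set (simplified; the counts are
>>> just examples):
>>>
>>>     import org.apache.hadoop.mapred.JobConf;
>>>
>>>     JobConf conf = new JobConf(MyJob.class);  // MyJob: my driver class
>>>     conf.setNumMapTasks(20);    // per the JobConf javadoc this is only a
>>>                                 // hint; the InputFormat's splits decide
>>>     conf.setNumReduceTasks(8);  // this one is actually honored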
>>>
>>
>>
>
