hadoop-hdfs-user mailing list archives

From Anit Alexander <anitama...@gmail.com>
Subject Re:
Date Fri, 19 Jul 2013 07:40:07 GMT
Hello Tariq,
I solved the problem. There must have been some issue in the custom input
format I created, so I took a sample custom input format that was working
in the CDH4 environment and applied the changes as per my requirements. It is
working now, but I haven't tested that code in an Apache Hadoop environment yet
:)
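
For reference, below is a minimal sketch of what a custom input format can
look like against the new org.apache.hadoop.mapreduce API (the API the CDH4
samples use). The class name and the delegation to LineRecordReader are
hypothetical placeholders for illustration, not the actual format used by this
job.

// Hypothetical sketch only, not the code discussed in this thread.
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class SampleCustomInputFormat extends FileInputFormat<LongWritable, Text> {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(
            InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        // Delegating to LineRecordReader keeps reading correct when a record
        // straddles an HDFS block/split boundary; a real custom format would
        // return its own RecordReader here.
        return new LineRecordReader();
    }

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // Return false instead if records must never be split across blocks.
        return true;
    }
}

In the driver, such a format would be wired in with
job.setInputFormatClass(SampleCustomInputFormat.class).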

Regards,
Anit


On Thu, Jul 18, 2013 at 1:22 AM, Mohammad Tariq <dontariq@gmail.com> wrote:

> Hello Anit,
>
> Could you show me the exact error log?
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Tue, Jul 16, 2013 at 8:45 AM, Anit Alexander <anitamalex@gmail.com> wrote:
>
>> Yes, I did recompile, but I seem to face the same problem. I am running
>> the MapReduce job with a custom input format. I am not sure if there is some
>> change in the API needed to get the splits correct.
>>
>> Regards
>>
>>
>> On Tue, Jul 16, 2013 at 6:24 AM, 闫昆 <yankunhadoop@gmail.com> wrote:
>>
>>> I think you should recompile the program and then run it again
>>>
>>>
>>> 2013/7/13 Anit Alexander <anitamalex@gmail.com>
>>>
>>>> Hello,
>>>>
>>>> I am encountering a problem in a CDH4 environment.
>>>> I can successfully run the MapReduce job on the Hadoop cluster, but
>>>> when I migrated the same MapReduce job to my CDH4 environment it throws an
>>>> error stating that it cannot read the next block (each block is 64 MB). Why
>>>> is that?
>>>>
>>>> Hadoop environment: Hadoop 1.0.3
>>>> Java version 1.6

>>>> CDH4 environment: CDH 4.2.0
>>>> Java version 1.6
>>>>
>>>> Regards,
>>>> Anit Alexander
>>>>
>>>
>>>
>>
>
