hadoop-common-user mailing list archives

From Ashish Dobhal <dobhalashish...@gmail.com>
Subject Re: MR JOB
Date Fri, 18 Jul 2014 18:03:17 GMT
Thanks.


On Fri, Jul 18, 2014 at 10:41 PM, Rich Haase <rdhaase@gmail.com> wrote:

> HDFS handles the splitting of files into multiple blocks.  It's a file
> system operation that is transparent to the user.
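To picture what that transparent split looks like, here is a minimal sketch of the block-carving arithmetic the HDFS client performs (assumptions: Python stand-in for client logic, and the default `dfs.blocksize` of 128 MB used by modern Hadoop releases; older releases defaulted to 64 MB):

```python
def split_into_blocks(file_size: int, block_size: int = 128 * 1024 * 1024):
    """Return (offset, length) pairs for each block of a file.

    Illustrative only: the NameNode tracks this metadata while DataNodes
    store the actual bytes; this function just shows the arithmetic."""
    blocks = []
    offset = 0
    while offset < file_size:
        # The last block may be shorter than block_size.
        length = min(block_size, file_size - offset)
        blocks.append((offset, length))
        offset += length
    return blocks

# A 300 MB file becomes two full 128 MB blocks plus one 44 MB tail block.
print(split_into_blocks(300 * 1024 * 1024))
```

Note that the tail block only occupies its actual length on disk; HDFS does not pad the final block out to the full block size.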
>
>
> On Fri, Jul 18, 2014 at 11:07 AM, Ashish Dobhal <dobhalashish772@gmail.com
> > wrote:
>
>> Rich Haase Thanks,
>> But if the copy ops do not occur as an MR job, then how does the splitting
>> of a file into several blocks take place?
>>
>>
>> On Fri, Jul 18, 2014 at 10:24 PM, Rich Haase <rdhaase@gmail.com> wrote:
>>
>>> File copy operations do not run as MapReduce jobs.  All hadoop fs
>>> commands run as operations against HDFS and do not use MapReduce.
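One way to see why no job ever appears: the upload path is just a client-side read/write loop. A hedged sketch follows (the names `upload` and `send_block` are illustrative stand-ins, not Hadoop API calls):

```python
import io

# Illustrative only: "hadoop fs -put" streams the local file to DataNodes
# block by block from the client. No map or reduce tasks are scheduled,
# which is why nothing shows up in the JobTracker/TaskTracker UIs.

def upload(local_stream, send_block, block_size=128 * 1024 * 1024):
    """Read the source in block-size chunks and hand each one to the
    transport; send_block stands in for the DataNode write pipeline."""
    blocks_sent = 0
    while True:
        chunk = local_stream.read(block_size)
        if not chunk:
            break
        send_block(chunk)
        blocks_sent += 1
    return blocks_sent

# A 5-byte "file" with a 2-byte block size needs three writes.
sent = upload(io.BytesIO(b"hello"), lambda b: None, block_size=2)
print(sent)  # → 3
```

The same shape applies to downloads: the client reads blocks back from DataNodes directly, so neither direction involves the MapReduce framework.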
>>>
>>>
>>> On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal <
>>> dobhalashish772@gmail.com> wrote:
>>>
>>>> Do the normal operations of Hadoop, such as uploading and downloading
>>>> a file into HDFS, run as MR jobs?
>>>> If so, why can't I see the job being run on my task tracker and job
>>>> tracker?
>>>> Thank you.
>>>>
>>>
>>>
>>>
>>> --
>>> *Kernighan's Law*
>>> "Debugging is twice as hard as writing the code in the first place.
>>> Therefore, if you write the code as cleverly as possible, you are, by
>>> definition, not smart enough to debug it."
>>>
>>
>>
>
>
>
