hadoop-mapreduce-user mailing list archives

From kun yan <yankunhad...@gmail.com>
Subject Re: why i can not track the job which i submitted in yarn?
Date Thu, 12 Sep 2013 01:17:53 GMT
When you write your own MapReduce job, do you also see no progress for the
tasks? Maybe I have the same problem.
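One quick check is whether the client-side mapred-site.xml actually carries mapreduce.framework.name=yarn; if the property is missing, the client falls back to the LocalJobRunner seen in the log below. A minimal sketch for reading the effective value (the `framework_name` helper is hypothetical, not part of Hadoop):

```python
import xml.etree.ElementTree as ET

def framework_name(mapred_site_xml: str):
    """Return the value of mapreduce.framework.name from mapred-site.xml text.

    Returns None when the property is absent; on these Hadoop versions the
    client then defaults to local mode, which matches the symptom in this
    thread (job_local* IDs, LocalJobRunner in the log).
    """
    root = ET.fromstring(mapred_site_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == "mapreduce.framework.name":
            return prop.findtext("value")
    return None

config = """<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>"""
print(framework_name(config))  # -> yarn
```

If the file already says yarn but jobs still run locally, the client may be reading a different configuration directory; jobs launched through GenericOptionsParser can also force it per run with `-D mapreduce.framework.name=yarn`.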


2013/9/12 ch huang <justlooks@gmail.com>

> I already set this option in my mapred-site.xml, and all my Hive jobs can
> be seen in the RM UI:
>
>
>
> <property>
>         <name>mapreduce.framework.name</name>
>         <value>yarn</value>
>         <description>The runtime framework for executing MapReduce jobs.
> Can be one of local, classic or yarn</description>
> </property>
>
>
> On Wed, Sep 11, 2013 at 5:51 PM, Devaraj k <devaraj.k@huawei.com> wrote:
>
>> Your job is running in local mode; that’s why you don’t see it in the RM
>> UI / Job History.
>>
>> Can you change the ‘mapreduce.framework.name’ configuration value to
>> ‘yarn’? Then it will show in the RM UI.
>>
>> Thanks
>> Devaraj k
>>
>> *From:* ch huang [mailto:justlooks@gmail.com]
>> *Sent:* 11 September 2013 15:08
>> *To:* user@hadoop.apache.org
>> *Subject:* why i can not track the job which i submitted in yarn?
>>
>> hi, all:
>>
>> I do not know why I cannot track my job, which I submitted to YARN:
>>
>> # hadoop jar
>> /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.0-cdh4.3.0.jar pi
>> 20 10
>>
>> Number of Maps  = 20
>> Samples per Map = 10
>> Wrote input for Map #0
>> Wrote input for Map #1
>> Wrote input for Map #2
>> Wrote input for Map #3
>> Wrote input for Map #4
>> Wrote input for Map #5
>> Wrote input for Map #6
>> Wrote input for Map #7
>> Wrote input for Map #8
>> Wrote input for Map #9
>> Wrote input for Map #10
>> Wrote input for Map #11
>> Wrote input for Map #12
>> Wrote input for Map #13
>> Wrote input for Map #14
>> Wrote input for Map #15
>> Wrote input for Map #16
>> Wrote input for Map #17
>> Wrote input for Map #18
>> Wrote input for Map #19
>> Starting Job
>> 13/09/11 17:32:02 WARN conf.Configuration: session.id is deprecated.
>> Instead, use dfs.metrics.session-id
>> 13/09/11 17:32:02 INFO jvm.JvmMetrics: Initializing JVM Metrics with
>> processName=JobTracker, sessionId=
>> 13/09/11 17:32:02 WARN conf.Configuration: slave.host.name is
>> deprecated. Instead, use dfs.datanode.hostname
>> 13/09/11 17:32:02 WARN mapred.JobClient: Use GenericOptionsParser for
>> parsing the arguments. Applications should implement Tool for the same.
>> 13/09/11 17:32:02 INFO mapred.FileInputFormat: Total input paths to
>> process : 20
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: OutputCommitter set in
>> config null
>> 13/09/11 17:32:03 INFO mapred.JobClient: Running job:
>> job_local854997782_0001
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: OutputCommitter is
>> org.apache.hadoop.mapred.FileOutputCommitter
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: Waiting for map tasks
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000000_0
>> 13/09/11 17:32:03 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:03 INFO util.ProcessTree: setsid exited with exit code 0
>> 13/09/11 17:32:03 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7f342545
>> 13/09/11 17:32:03 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part0:0+118
>> 13/09/11 17:32:03 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:03 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:03 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:03 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:03 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:03 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:03 INFO mapred.MapTask: Starting flush of map output
>> 13/09/11 17:32:03 INFO mapred.MapTask: Finished spill 0
>> 13/09/11 17:32:03 INFO mapred.Task:
>> Task:attempt_local854997782_0001_m_000000_0 is done. And is in the process
>> of commiting
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part0:0+118
>> 13/09/11 17:32:03 INFO mapred.Task: Task
>> 'attempt_local854997782_0001_m_000000_0' done.
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: Finishing task:
>> attempt_local854997782_0001_m_000000_0
>> 13/09/11 17:32:03 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000001_0
>> 13/09/11 17:32:03 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:03 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@18f63055
>> 13/09/11 17:32:03 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part1:0+118
>> 13/09/11 17:32:04 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:04 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:04 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:04 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:04 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:04 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:04 INFO mapred.MapTask: Starting flush of map output
>> 13/09/11 17:32:04 INFO mapred.MapTask: Finished spill 0
>> 13/09/11 17:32:04 INFO mapred.Task:
>> Task:attempt_local854997782_0001_m_000001_0 is done. And is in the process
>> of commiting
>> 13/09/11 17:32:04 INFO mapred.LocalJobRunner:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part1:0+118
>> 13/09/11 17:32:04 INFO mapred.Task: Task
>> 'attempt_local854997782_0001_m_000001_0' done.
>> 13/09/11 17:32:04 INFO mapred.LocalJobRunner: Finishing task:
>> attempt_local854997782_0001_m_000001_0
>> 13/09/11 17:32:04 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000002_0
>> 13/09/11 17:32:04 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:04 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@74bfd10a
>> 13/09/11 17:32:04 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part10:0+118
>> 13/09/11 17:32:04 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:04 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:04 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:04 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:04 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:04 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:04 INFO mapred.MapTask: Starting flush of map output
>> 13/09/11 17:32:04 INFO mapred.JobClient:  map 10% reduce 0%
>> 13/09/11 17:32:04 INFO mapred.MapTask: Finished spill 0
>> 13/09/11 17:32:04 INFO mapred.Task:
>> Task:attempt_local854997782_0001_m_000002_0 is done. And is in the process
>> of commiting
>> 13/09/11 17:32:04 INFO mapred.LocalJobRunner:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part10:0+118
>> 13/09/11 17:32:04 INFO mapred.Task: Task
>> 'attempt_local854997782_0001_m_000002_0' done.
>> 13/09/11 17:32:04 INFO mapred.LocalJobRunner: Finishing task:
>> attempt_local854997782_0001_m_000002_0
>> 13/09/11 17:32:04 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000003_0
>> 13/09/11 17:32:04 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:04 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3d80a8b7
>> 13/09/11 17:32:04 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part11:0+118
>> 13/09/11 17:32:04 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:04 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:04 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:04 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:05 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:05 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:05 INFO mapred.MapTask: Starting flush of map output
>> 13/09/11 17:32:05 INFO mapred.MapTask: Finished spill 0
>> 13/09/11 17:32:05 INFO mapred.Task:
>> Task:attempt_local854997782_0001_m_000003_0 is done. And is in the process
>> of commiting
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part11:0+118
>> 13/09/11 17:32:05 INFO mapred.Task: Task
>> 'attempt_local854997782_0001_m_000003_0' done.
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner: Finishing task:
>> attempt_local854997782_0001_m_000003_0
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000004_0
>> 13/09/11 17:32:05 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:05 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@620e06ce
>> 13/09/11 17:32:05 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part12:0+118
>> 13/09/11 17:32:05 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:05 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:05 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:05 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:05 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:05 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:05 INFO mapred.MapTask: Starting flush of map output
>> 13/09/11 17:32:05 INFO mapred.MapTask: Finished spill 0
>> 13/09/11 17:32:05 INFO mapred.Task:
>> Task:attempt_local854997782_0001_m_000004_0 is done. And is in the process
>> of commiting
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part12:0+118
>> 13/09/11 17:32:05 INFO mapred.Task: Task
>> 'attempt_local854997782_0001_m_000004_0' done.
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner: Finishing task:
>> attempt_local854997782_0001_m_000004_0
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000005_0
>> 13/09/11 17:32:05 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:05 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3db158db
>> 13/09/11 17:32:05 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part13:0+118
>> 13/09/11 17:32:05 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:05 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:05 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:05 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:05 INFO mapred.JobClient:  map 25% reduce 0%
>> 13/09/11 17:32:05 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:05 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:05 INFO mapred.MapTask: Starting flush of map output
>> 13/09/11 17:32:05 INFO mapred.MapTask: Finished spill 0
>> 13/09/11 17:32:05 INFO mapred.Task:
>> Task:attempt_local854997782_0001_m_000005_0 is done. And is in the process
>> of commiting
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part13:0+118
>> 13/09/11 17:32:05 INFO mapred.Task: Task
>> 'attempt_local854997782_0001_m_000005_0' done.
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner: Finishing task:
>> attempt_local854997782_0001_m_000005_0
>> 13/09/11 17:32:05 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000006_0
>> 13/09/11 17:32:05 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:05 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1819ccba
>> 13/09/11 17:32:05 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part14:0+118
>> 13/09/11 17:32:05 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:05 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:05 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:05 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:06 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:06 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:06 INFO mapred.MapTask: Starting flush of map output
>> 13/09/11 17:32:06 INFO mapred.MapTask: Finished spill 0
>> 13/09/11 17:32:06 INFO mapred.Task:
>> Task:attempt_local854997782_0001_m_000006_0 is done. And is in the process
>> of commiting
>> 13/09/11 17:32:06 INFO mapred.LocalJobRunner:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part14:0+118
>> 13/09/11 17:32:06 INFO mapred.Task: Task
>> 'attempt_local854997782_0001_m_000006_0' done.
>> 13/09/11 17:32:06 INFO mapred.LocalJobRunner: Finishing task:
>> attempt_local854997782_0001_m_000006_0
>> 13/09/11 17:32:06 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000007_0
>> 13/09/11 17:32:06 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:06 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@24a09e41
>> 13/09/11 17:32:06 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part15:0+118
>> 13/09/11 17:32:06 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES
>> is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as
>> counter name instead
>> 13/09/11 17:32:06 INFO mapred.MapTask: numReduceTasks: 1
>> 13/09/11 17:32:06 INFO mapred.MapTask: Map output collector class =
>> org.apache.hadoop.mapred.MapTask$MapOutputBuffer
>> 13/09/11 17:32:06 INFO mapred.MapTask: io.sort.mb = 100
>> 13/09/11 17:32:06 INFO mapred.MapTask: data buffer = 79691776/99614720
>> 13/09/11 17:32:06 INFO mapred.MapTask: record buffer = 262144/327680
>> 13/09/11 17:32:06 INFO mapred.MapTask: Starting flush of map output
>> 13/09/11 17:32:06 INFO mapred.MapTask: Finished spill 0
>> 13/09/11 17:32:06 INFO mapred.Task:
>> Task:attempt_local854997782_0001_m_000007_0 is done. And is in the process
>> of commiting
>> 13/09/11 17:32:06 INFO mapred.LocalJobRunner:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part15:0+118
>> 13/09/11 17:32:06 INFO mapred.Task: Task
>> 'attempt_local854997782_0001_m_000007_0' done.
>> 13/09/11 17:32:06 INFO mapred.LocalJobRunner: Finishing task:
>> attempt_local854997782_0001_m_000007_0
>> 13/09/11 17:32:06 INFO mapred.LocalJobRunner: Starting task:
>> attempt_local854997782_0001_m_000008_0
>> 13/09/11 17:32:06 WARN mapreduce.Counters: Group
>> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
>> org.apache.hadoop.mapreduce.TaskCounter instead
>> 13/09/11 17:32:06 INFO mapred.Task:  Using ResourceCalculatorPlugin :
>> org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5189f854
>> 13/09/11 17:32:06 INFO mapred.MapTask: Processing split:
>> hdfs://CH22:9000/user/root/PiEstimator_TMP_3_141592654/in/part16:0+118
>> 13/09/11 17:32:09 INFO mapred.JobClient:   File System Counters
>> 13/09/11 17:32:09 INFO mapred.JobClient:     FILE: Number of bytes
>> read=3425924
>> 13/09/11 17:32:09 INFO mapred.JobClient:     FILE: Number of bytes
>> written=5096181
>> 13/09/11 17:32:09 INFO mapred.JobClient:     FILE: Number of read
>> operations=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     FILE: Number of large read
>> operations=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     FILE: Number of write
>> operations=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     HDFS: Number of bytes
>> read=27140
>> 13/09/11 17:32:09 INFO mapred.JobClient:     HDFS: Number of bytes
>> written=49775
>> 13/09/11 17:32:09 INFO mapred.JobClient:     HDFS: Number of read
>> operations=1175
>> 13/09/11 17:32:09 INFO mapred.JobClient:     HDFS: Number of large read
>> operations=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     HDFS: Number of write
>> operations=465
>> 13/09/11 17:32:09 INFO mapred.JobClient:   Map-Reduce Framework
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Map input records=20
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Map output records=40
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Map output bytes=360
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Input split bytes=2330
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Combine input records=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Combine output records=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Reduce input groups=2
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Reduce shuffle bytes=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Reduce input records=40
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Reduce output records=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Spilled Records=104
>> 13/09/11 17:32:09 INFO mapred.JobClient:     CPU time spent (ms)=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Physical memory (bytes)
>> snapshot=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Virtual memory (bytes)
>> snapshot=0
>> 13/09/11 17:32:09 INFO mapred.JobClient:     Total committed heap usage
>> (bytes)=9912647680
>> 13/09/11 17:32:09 INFO mapred.JobClient:
>> org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
>> 13/09/11 17:32:09 INFO mapred.JobClient:     BYTES_READ=480
>> Job Finished in 7.269 seconds
>> Estimated value of Pi is 3.12000000000000000000
>>
>> # curl http://192.168.10.22:8088/cluster
>>
>>               var appsTableData=[
>> ["<a
>> href='/cluster/app/application_1376355583846_0142'>application_1376355583846_0142</a>","hive","select
>> page_url,concat_ws(\'...page_url,token(Stage-1)","default","1377143481037","1377143541840","FINISHED","SUCCEEDED","<br
>> title='100.0'> <div class='ui-progressbar ui-widget ui-widget-content
>> ui-corner-all' title='100.0%'> <div class='ui-progressbar-value
>> ui-widget-header ui-corner-left' style='width:100.0%'> </div> </div>","<a
>> href='
>> http://192.168.10.36/proxy/application_1376355583846_0142/jobhistory/job/job_1376355583846_0142
>> '>History</a>"],
>> ["<a
>> href='/cluster/app/application_1376355583846_0122'>application_1376355583846_0122</a>","hive","select
>> page_url,concat_ws(\'...page_url,token(Stage-1)","default","1377139980703","1377140027552","FINISHED","SUCCEEDED","<br
>> title='100.0'> <div class='ui-progressbar ui-widget ui-widget-content
>> ui-corner-all' title='100.0%'> <div class='ui-progressbar-value
>> ui-widget-header ui-corner-left' style='width:100.0%'> </div> </div>","<a
>> href='
>> http://192.168.10.36/proxy/application_1376355583846_0122/jobhistory/job/job_1376355583846_0122
>> '>History</a>"],
>> ["<a
>> href='/cluster/app/application_1376355583846_0186'>application_1376355583846_0186</a>","hive","select
>> page_url,concat_ws(\'...page_url,token(Stage-1)","default","1377159354342","1377160027061","KILLED","KILLED","<br
>> title='100.0'> <div class='ui-progressbar ui-widget ui-widget-content
>> ui-corner-all' title='100.0%'> <div class='ui-progressbar-value
>> ui-widget-header ui-corner-left' style='width:100.0%'> </div> </div>","<a
>> href='http://192.168.10.36/proxy/application_1376355583846_0186/
>> '>History</a>"],
>> ["<a
>> href='/cluster/app/application_1376355583846_0222'>application_1376355583846_0222</a>","root","insert
>> overwrite table
>> dump_temp2_page_...10(Stage-1)","default","1377679615558","1377679638915","FINISHED","SUCCEEDED","<br
>> title='100.0'> <div class='ui-progressbar ui-widget ui-widget-content
>> ui-corner-all' title='100.0%'> <div class='ui-progressbar-value
>> ui-widget-header ui-corner-left' style='width:100.0%'> </div> </div>","<a
>> href='
>> http://192.168.10.36/proxy/application_1376355583846_0222/jobhistory/job/job_1376355583846_0222
>> '>History</a>"],
>> ["<a
>> href='/cluster/app/application_1376355583846_0115'>application_1376355583846_0115</a>","root","select
>> page_url,concat_ws(\'...page_url,token(Stage-1)","default","1377135476144","1377136040644","KILLED","KILLED","<br
>> title='100.0'> <div class='ui-progressbar ui-widget ui-widget-content
>> ui-corner-all' title='100.0%'> <div class='ui-progressbar-value
>> ui-widget-header ui-corner-left' style='width:100.0%'> </div> </div>","<a
>> href='http://192.168.10.36/proxy/application_1376355583846_0115/
>> '>History</a>"],
>> ["<a
>> href='/cluster/app/application_1376355583846_0151'>application_1376355583846_0151</a>","hive","select
>> page_url,concat_ws(\'...page_url,token(Stage-1)","default","1377144292944","1377144370345","FINISHED","SUCCEEDED","<br
>> title='100.0'> <div class='ui-progressbar ui-widget ui-widget-content
>> ui-corner-all' title='100.0%'> <div class='ui-progressbar-value
>> ui-widget-header ui-corner-left' style='width:100.0%'> </div> </div>","<a
>> href='
>> http://192.168.10.36/proxy/application_1376355583846_0151/jobhistory/job/job_1376355583846_0151
>> '>History</a>"],
>> ["<a
>> href='/cluster/app/application_1376355583846_0033'>application_1376355583846_0033</a>","root","select
>> cookieid,first_category,count(id...10(Stage-1)","default","1376893476778","1376893512292","FINISHED","SUCCEEDED","<br
>> title='100.0'> <div class='ui-progressbar ui-widget ui-widget-content
>> ui-corner-all' title='100.0%'> <div class='ui-progressbar-value
>> ui-widget-header ui-corner-left' style='width:100.0%'> </div> </div>","<a
>> href='
>> http://192.168.10.36/proxy/application_1376355583846_0033/jobhistory/job/job_1376355583846_0033
>> '>History</a>"],
>>
>
>
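The curl output above is the RM's HTML page, which is awkward to inspect. On Hadoop 2.x the ResourceManager also exposes the same application list as JSON (typically GET http://<rm>:8088/ws/v1/cluster/apps). A small parsing sketch; `finished_apps` is a hypothetical helper, and the sample response is shaped like the RM's JSON but uses entries taken from the HTML dump above:

```python
import json

def finished_apps(body: str):
    """Return (id, user, finalStatus) for applications in FINISHED state."""
    data = json.loads(body)
    apps = (data.get("apps") or {}).get("app") or []
    return [(a["id"], a["user"], a["finalStatus"])
            for a in apps
            if a.get("state") == "FINISHED"]

# Sample shaped like the RM REST response; entries mirror the listing above.
sample = json.dumps({"apps": {"app": [
    {"id": "application_1376355583846_0142", "user": "hive",
     "state": "FINISHED", "finalStatus": "SUCCEEDED"},
    {"id": "application_1376355583846_0186", "user": "hive",
     "state": "KILLED", "finalStatus": "KILLED"},
]}})
print(finished_apps(sample))
```

Note that a job which ran under LocalJobRunner (a job_local* ID) never registers with the ResourceManager at all, so it will not appear in this listing however it is queried.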


-- 

In the Hadoop world I am just a novice exploring the entire Hadoop
ecosystem; I hope that one day I can contribute my own code.

YanBit
yankunhadoop@gmail.com
