hadoop-common-user mailing list archives

From Brock Noland <br...@cloudera.com>
Subject Re: Issue with running Impala
Date Tue, 30 Oct 2012 01:49:17 GMT
Hi,

This question should go to the impala-user group which you can subscribe to
here:

https://groups.google.com/a/cloudera.org/forum/?fromgroups#!forum/impala-user

Sorry for the confusion.

Brock

On Mon, Oct 29, 2012 at 8:17 PM, Subash D'Souza <sdsouza@truecar.com> wrote:

> I'm hoping this is the right place to post questions about Impala. I've been
> playing around with Impala and have it configured and running. When I try to
> run a query, though, it comes back with a very opaque error. Any help would
> be appreciated.
>
> Thanks
> Subash
>
> Here is the error, followed by the relevant log output:
> [hadoop4.rad.wc.truecarcorp.com:21000] > select * from clearbook2 limit 5;
> ERROR: Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
> ERROR: Invalid query handle
>
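Since "Unknown error 255" doesn't say why the open failed, one cheap thing to rule out first is a corrupt or truncated gzip file. Below is a minimal local sketch of that check; it assumes you have pulled copies of the `.gz` files down with `hadoop fs -get` first, and the sample bytes are illustrative, not taken from the thread:

```python
import gzip
import io

def is_intact_gzip(data: bytes) -> bool:
    """Return True if `data` is a complete, decompressible gzip stream."""
    # Every gzip file starts with the magic bytes 0x1f 0x8b.
    if data[:2] != b"\x1f\x8b":
        return False
    try:
        # Decompressing to the end also catches truncated files.
        gzip.GzipFile(fileobj=io.BytesIO(data)).read()
        return True
    except (OSError, EOFError):
        return False

good = gzip.compress(b"2012-01-01,sample row\n")
assert is_intact_gzip(good)
assert not is_intact_gzip(good[:-4])      # truncated trailer
assert not is_intact_gzip(b"plain text")  # wrong magic bytes
```

If the files pass this check locally, the problem is more likely on the access side (permissions, short-circuit read configuration) than in the data itself.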
> My log files don't seem to give much information.
>
> Impala State Server
>
>
>  I1029 18:01:01.906649 23286 impala-server.cc:1524] TClientRequest.queryOptions: TQueryOptions {
>   01: abort_on_error (bool) = false,
>   02: max_errors (i32) = 0,
>   03: disable_codegen (bool) = false,
>   04: batch_size (i32) = 0,
>   05: return_as_ascii (bool) = true,
>   06: num_nodes (i32) = 0,
>   07: max_scan_range_length (i64) = 0,
>   08: num_scanner_threads (i32) = 0,
>   09: max_io_buffers (i32) = 0,
>   10: allow_unsupported_formats (bool) = false,
>   11: partition_agg (bool) = false,
> }
> I1029 18:01:01.906776 23286 impala-server.cc:821] query(): query=select * from clearbook2 limit 5
> I1029 18:01:02.755319 23286 coordinator.cc:209] Exec() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.756422 23286 simple-scheduler.cc:159] SimpleScheduler assignment (data->backend): (10.5.22.22:50010 -> 10.5.22.22:22000), (10.5.22.24:50010 -> 10.5.22.24:22000), (10.5.22.23:50010 -> 10.5.22.23:22000)
> I1029 18:01:02.756430 23286 simple-scheduler.cc:162] SimpleScheduler locality percentage 100% (3 out of 3)
> I1029 18:01:02.759310 23286 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.763690 23286 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224 null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29 type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.809578 23286 coordinator.cc:298] starting 3 backends for query 5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.811485 23364 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 coord=10.5.22.24:22000 backend#=2
> I1029 18:01:02.811578 23364 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.815759 23364 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224 null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29 type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.957340 23537 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958176 23373 coordinator.cc:734] Cancel() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.958195 23373 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.958205 23373 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.958215 23373 data-stream-mgr.cc:97] cancelled stream: fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2 node_id=1
> I1029 18:01:02.958225 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a3 backend=10.5.22.22:22000
> I1029 18:01:02.958411 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5 backend=10.5.22.24:22000
> I1029 18:01:02.958510 23364 impala-server.cc:1618] CancelPlanFragment(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958528 23364 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958539 23364 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
> I1029 18:01:02.958606 23373 coordinator.cc:366] Query id=5fa79b17f9a8474f:97b7e6fea1b688a1 failed because fragment id=5fa79b17f9a8474f:97b7e6fea1b688a4 failed.
> I1029 18:01:02.959193 23541 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
> I1029 18:01:02.959215 23541 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.959609 23286 impala-server.cc:1406] ImpalaServer::get_state invalid handle
> I1029 18:01:02.960021 23286 impala-server.cc:1351] close(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.960031 23286 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.960042 23286 impala-server.cc:972] unknown query id: 5fa79b17f9a8474f:97b7e6fea1b688a1
> I1029 18:01:02.961830 23541 data-stream-mgr.cc:177] DeregisterRecvr(): fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2, node=1
> I1029 18:01:02.962131 23269
>
>  Impala DataNode
>
> I1029 18:01:02.813395  9958 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4 coord=10.5.22.24:22000 backend#=1
> I1029 18:01:02.813586  9958 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1 instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
> I1029 18:01:02.818050  9958 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a4
> tuples:
> Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)), Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2 offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)), Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5 offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)), Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=224 null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29 type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480 null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4 mask=4))])
> I1029 18:01:02.954059  9988 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
> I1029 18:01:02.958689  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.958782  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.962308  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (unknown)
>     @           0x731bdb  (unknown)
>     @           0x731e1a  (unknown)
>     @     0x7f2cb64bbd97  (unknown)
>     @     0x7f2cb4ab67f1  start_thread
>     @     0x7f2cb405370d  clone
> I1029 18:01:02.962537  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
> Error(255): Unknown error 255
>     @           0x75ea41  (un
>
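Since the same status.cc failure repeats for each retry, it helps to first reduce the log to the distinct paths that fail, then test each one directly (for example with `hadoop fs -cat <path> | zcat | head`). A small sketch of that triage step; the embedded log text is abbreviated from the impalad log above:

```python
import re

# Three of the status.cc lines from the impalad log above.
LOG = """\
I1029 18:01:02.958689  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
I1029 18:01:02.958782  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
I1029 18:01:02.962308  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
"""

# Each failure line names the file right after "Failed to open HDFS file ".
pattern = re.compile(r"Failed to open HDFS file (\S+)")
failing = sorted(set(pattern.findall(LOG)))

for path in failing:
    print(path)
# The three log lines collapse to two distinct paths (20120101 and 20120105).
```

If both files fail the direct read as the same user the impalad runs as, that points at an access problem rather than an Impala bug.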
>  And here is the configuration of my datanodes:
>
>
> Hadoop Configuration
>
> Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
> mapreduce.job.end-notification.retry.attempts
> Key / Value
> dfs.datanode.data.dir
> /home/data/1/dfs/dn,/home/data/2/dfs/dn,/home/data/3/dfs/dndfs.namenode.checkpoint.txns40000s3.replication3mapreduce.output.fileoutputformat.compress.typeRECORDmapreduce.jobtracker.jobhistory.lru.cache.size5dfs.datanode.failed.volumes.tolerated0hadoop.http.filter.initializersorg.apache.hadoop.http.lib.StaticUserWebFiltermapreduce.cluster.temp.dir${hadoop.tmp.dir}/mapred/tempmapreduce.reduce.shuffle.memory.limit.percent0.25yarn.nodemanager.keytab
> /etc/krb5.keytabmapreduce.reduce.skip.maxgroups0dfs.https.server.keystore.resource
> ssl-server.xmlhadoop.http.authenti
> cation.kerberos.keytab${user.home}/hadoop.keytabyarn.nodemanager.localizer.client.thread-count5mapreduce.framework.namelocalio.file.buffer.size4096mapreduce.task.tmp.dir./tmpdfs.namenode.checkpoint.period3600ipc.client.kill.max10mapreduce.jobtracker.taskcache.levels2s3.stream-buffer-size4096dfs.namenode.secondary.http-address0.0.0.0:50090dfs.namenode.decommission.interval30dfs.namenode.http-address0.0.0.0:50070mapreduce.task.files.preserve.failedtasksfalsedfs.encrypt.data.transferfalsedfs.datanode.address0.0.0.0:50010hadoop.http.authentication.token.validi
> ty36000hadoop.security.group.mapping.ldap.search.filter.group(objectClass=group)
> dfs.client.failover.max.attempts15kfs.client-write-packet-size65536
> yarn.admin.acl*yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs86400dfs.client.failover.connection.retries.on.timeouts0mapreduce.map.sort.spill.percent
> 0.80file.stream-buffer-size4096dfs.webhdfs.enabledtrueipc.client.connection.maxidletime10000mapreduce.jobtracker.persist.jobstatus.hours
> 1dfs.datanode.ipc.address0.0.0.0:50020yarn.nodemanager.address0.0.0.0:0yarn.app.mapreduce.am.job.task.listener.thread-count30dfs.client.read.shortcircuittruedfs.namenode.safemode.extension30000ha.zookeeper.parent-znode/hadoop-hayarn.nodemanager.container-executor.class
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutorio.skip.checksum.errorsfalseyarn.resourcemanager.scheduler.client.thread-count50hadoop.http.authentication.kerberos.principalHTTP/_HOST@LOCALHOST
> mapreduce.reduce.log.levelINFOfs.s3.maxRetries4hadoop.kerberos.kinit.commandkinityarn.nodemanager.process-kill-wait.ms
> 2000dfs.namenode.name.dir.restorefalsemapreduce.jobtracker.handler.count
> 10yarn.app.mapreduce.client-am.ipc.max-retries1dfs.client.use.datanode.hostnamefalsehadoop.util.hash.typemurmurio.seqfile.lazydecompresstruedfs.datanode.dns.interfacedefaultyarn.nodemanager.disk-health-checker.min-healthy-disks0.25
> mapreduce.job.maxtaskfailures.per.tracker4mapreduce.tasktracker.healthchecker.script.timeout600000hadoop.security.group.mapping.ldap.search.attr.group.name
> cnfs.df.interval60000dfs.namenode.kerberos.internal.spnego.principal
> ${dfs.web.authentication.kerberos.principal}mapreduce.jobtracker.addresslocalmapreduce.tasktracker.tasks.sleeptimebeforesigkill5000dfs.journalnode.rpc-address0.0.0.0:8485
> mapreduce.job.a
> cl-view-jobdfs.client.block.write.replace-datanode-on-failure.policyDEFAULT
> dfs.namenode.replication.interval3dfs.namenode.num.checkpoints.retained2
> mapreduce.tasktracker.http.address0.0.0.0:50060yarn.resourcemanager.scheduler.address0.0.0.0:8030dfs.datanode.directoryscan.threads1hadoop.security.group.mapping.ldap.sslfalsemapreduce.task.merge.progress.records
> 10000dfs.heartbeat.interval3net.topology.script.number.args
> 100mapreduce.local.clientfactory.class.nameorg.apache.hadoop.mapred.LocalClientFactorydfs.client-write-packet-size65536io.native.lib.availabletruedfs.client.failover.conne
> ction.retries0yarn.nodemanager.disk-health-checker.interval-ms120000
> dfs.blocksize67108864mapreduce.jobhistory.webapp.address0.0.0.0:19888yarn.resourcemanager.resource-tracker.client.thread-count50dfs.blockreport.initialDelay
> 0mapreduce.reduce.markreset.buffer.percent0.0dfs.ha.tail-edits.period
> 60mapreduce.admin.user.envLD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/nativeyarn.nodemanager.health-checker.script.timeout-ms1200000yarn.resourcemanager.client.thread-count50file.bytes-per-checksum512dfs.replication.max512io.map.index.skip
> 0mapreduce.task.timeout600000dfs.d
> atanode.du.reserved0dfs.support.appendtrueftp.blocksize67108864dfs.client.file-block-storage-locations.num-threads10yarn.nodemanager.container-manager.thread-count20ipc.server.listen.queue.size128yarn.resourcemanager.amliveliness-monitor.interval-ms1000hadoop.ssl.hostname.verifierDEFAULTmapreduce.tasktracker.dns.interfacedefaulthadoop.security.group.mapping.ldap.search.attr.membermember
> mapreduce.tasktracker.outofband.heartbeatfalsemapreduce.job.userlog.retain.hours24
> yarn.nodemanager.resource.memory-mb8192dfs.namenode.delegation.token.renew-interval86400000hadoop.ssl.keystores.factor
> y.classorg.apache.hadoop.security.ssl.FileBasedKeyStoresFactorydfs.datanode.sync.behind.writesfalsemapreduce.map.maxattempts4dfs.client.read.shortcircuit.skip.checksum
> falsedfs.datanode.handler.count10hadoop.ssl.require.client.cert
> falseftp.client-write-packet-size65536ipc.server.tcpnodelay
> falsemapreduce.task.profile.reduces0-2hadoop.fuse.connection.timeout
> 300dfs.permissions.superusergrouphadoopmapreduce.jobtracker.jobhistory.task.numberprogresssplits12mapreduce.map.speculativetruefs.ftp.host.port
> 21dfs.datanode.data.dir.perm700mapreduce.client.submit.file.re
> plication10s3native.blocksize67108864mapreduce.job.ubertask.maxmaps9dfs.namenode.replication.min1mapreduce.cluster.acls.enabledfalseyarn.nodemanager.localizer.fetch.thread-count4map.sort.classorg.apache.hadoop.util.QuickSortfs.trash.checkpoint.interval0dfs.namenode.name.dir/home/data/1/dfs/nnyarn.app.mapreduce.am.staging-dir/tmp/hadoop-yarn/staging
> fs.AbstractFileSystem.file.implorg.apache.hadoop.fs.local.LocalFsyarn.nodemanager.env-whitelistJAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOMEdfs.image.compression.codecorg.apache.hadoop.io.compress.DefaultCodecmapreduce.job.reduces
> 1mapreduce.job.complete.cancel.delegation.tokenstruehadoop.security.group.mapping.ldap.search.filter.user(&(objectClass=user)(sAMAccountName={0}))yarn.nodemanager.sleep-delay-before-sigkill.ms250mapreduce.tasktracker.healthchecker.interval60000mapreduce.jobtracker.heartbeats.in.second100kfs.bytes-per-checksum512mapreduce.jobtracker.persist.jobstatus.dir/jobtracker/jobsInfodfs.namenode.backup.http-address0.0.0.0:50105hadoop.rpc.protectionauthenticationdfs.namenode.https-address0.0.0.0:50470ftp.stream-buffer-size4096dfs.ha.log-roll.period120yarn.resourcemanager.admin.client.thread-count1yar
> n.resourcemanager.zookeeper-store.session.timeout-ms60000file.client-write-packet-size65536hadoop.http.authentication.simple.anonymous.allowedtrueyarn.nodemanager.log.retain-seconds
> 10800dfs.datanode.drop.cache.behind.readsfalsedfs.image.transfer.bandwidthPerSec
> 0mapreduce.tasktracker.instrumentationorg.apache.hadoop.mapred.TaskTrackerMetricsInstio.mapfile.bloom.size1048576dfs.ha.fencing.ssh.connect-timeout30000s3.bytes-per-checksum512fs.automatic.closetruefs.trash.interval
> 0hadoop.security.authenticationsimplefs.defaultFShdfs://hadoop1.rad.wc.truecarcorp.com:8020hadoop.ssl.server.confssl-server.xmlipc.client.connect.max.retries10yarn.resourcemanager.delayed.delegation-token.removal-interval-ms30000dfs.journalnode.http-address0.0.0.0:8480mapreduce.jobtracker.taskschedulerorg.apache.hadoop.mapred.JobQueueTaskSchedulermapreduce.job.speculative.speculativecap0.1yarn.am.liveness-monitor.expiry-interval-ms600000mapreduce.output.fileoutputformat.compressfalsenet.topology.node.switch.mapping.implorg.apache.hadoop.net.ScriptBasedMapping
> dfs.namenode.replication.considerLoadtruemapreduce.job.counters.max120
> yarn.resourcemanager.address0.0.0.0:8032dfs.client.block.write.retries
> 3yarn.resourcemanager.nm.liveness-monitor.interval-ms1000io.map.index.interval
> 128mapred.child.java.opts-Xmx200mmapreduce.tasktracker.local.dir.minspacestart
> 0dfs.client.https.keystore.resourcessl-client.xmlmapreduce.client.progressmonitor.pollinterval1000mapreduce.jobtracker.tasktracker.maxblacklists4mapreduce.job.queuenamedefaultyarn.nodemanager.localizer.address0.0.0.0:8040io.mapfile.bloom.error.rate0.005mapreduce.job.split.metainfo.maxsize10000000yarn.nodemanager.delete.thread-count4ipc.client.tcpnodelayfalseyarn.app.mapreduce.am.resource.mb1536dfs.datanode.dns.nameserver
> defaultmapreduce.map.output.compress.codecorg.apache.hadoop.io.compress.DefaultCodecdfs.namenode.accesstime.precision3600000mapreduce.map.log.levelINFOio.seqfile.compress.blocksize1000000mapreduce.tasktracker.taskcontrollerorg.apache.hadoop.mapred.DefaultTaskController
> hadoop.security.groups.cache.secs300mapreduce.job.end-notification.max.attempts5
> yarn.nodemanager.webapp.address0.0.0.0:8042mapreduce.jobtracker.expire.trackers.interval600000yarn.resourcemanager.webapp.address0.0.0.0:8088yarn.nodemanager.health-checker.interval-ms600000hadoop.security.authorization
> falsefs.ftp.host0.0.0.0yarn.app.mapreduce.am.scheduler
> .heartbeat.interval-ms1000mapreduce.ifile.readaheadtrueha.zookeeper.session-timeout.ms5000mapreduce.tasktracker.taskmemorymanager.monitoringinterval5000
> mapreduce.reduce.shuffle.parallelcopies5mapreduce.map.skip.maxrecords0
> dfs.https.enablefalsemapreduce.reduce.shuffle.read.timeout180000
> mapreduce.output.fileoutputformat.compress.codecorg.apache.hadoop.io.compress.DefaultCodecmapreduce.jobtracker.instrumentation
> org.apache.hadoop.mapred.JobTrackerMetricsInstyarn.nodemanager.remote-app-log-dir-suffixlogsdfs.blockreport.intervalMsec21600000mapreduce.reduce.speculativetruemapreduce.jobhistory.keytab/etc/sec
> urity/keytab/jhs.service.keytabdfs.datanode.balance.bandwidthPerSec1048576file.blocksize
> 67108864yarn.resourcemanager.admin.address0.0.0.0:8033
> yarn.resourcemanager.resource-tracker.address0.0.0.0:8031mapreduce.tasktracker.local.dir.minspacekill0mapreduce.jobtracker.staging.root.dir${hadoop.tmp.dir}/mapred/staging
> mapreduce.jobtracker.retiredjobs.cache.size1000ipc.client.connect.max.retries.on.timeouts45ha.zookeeper.aclworld:anyone:rwcdayarn.nodemanager.local-dirs/tmp/nm-local-dirmapreduce.reduce.shuffle.connect.timeout180000dfs.block.access.key.update.interval
> 600dfs.block.access.token.lifetime6005mapreduce.jobtracker.system.dir${hadoop.tmp.dir}/mapred/systemyarn.nodemanager.admin-envMALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
> mapreduce.jobtracker.jobhistory.block.size3145728mapreduce.tasktracker.indexcache.mb10
> dfs.namenode.checkpoint.check.period60dfs.client.block.write.replace-datanode-on-failure.enabletruedfs.datanode.directoryscan.interval21600yarn.nodemanager.container-monitor.interval-ms
> 3000dfs.default.chunk.view.size32768mapreduce.job.speculative.slownodethreshold
> 1.0mapreduce.job.reduce.slowstart.completedmaps0.05hadoop.security.instrumentation.requires.adminfalsedfs.namenode.safemode.min.datanodes0hadoop.http.authentication.signature.secret.file${user.home}/hadoop-http-auth-signature-secretmapreduce.reduce.maxattempts4
> yarn.nodemanager.localizer.cache.target-size-mb10240s3native.replication3
> dfs.datanode.https.address0.0.0.0:50475mapreduce.reduce.skip.proc.count.autoincr
> truefile.replication1hadoop.hdfs.configuration.version1ipc.client.idlethreshold4000hadoop.tmp.dir/tmp/hadoop-${user.name}mapreduce.jobhistory.address0.0.0.0:10020mapreduce.jobtracker.restart.recoverfalsemapreduce.cluster.local.dir${hadoop.tmp.dir}/mapred/localyarn.ipc.s
> erializer.typeprotocolbuffersdfs.namenode.decommission.nodes.per.interval5
> dfs.namenode.delegation.key.update-interval86400000fs.s3.buffer.dir${hadoop.tmp.dir}/s3
> dfs.namenode.support.allow.formattrueyarn.nodemanager.remote-app-log-dir/tmp/logs
> hadoop.work.around.non.threadsafe.getpwuidfalsedfs.ha.automatic-failover.enabledfalse
> mapreduce.jobtracker.persist.jobstatus.activetruedfs.namenode.logging.levelinfo
> yarn.nodemanager.log-dirs/tmp/logsdfs.namenode.checkpoint.edits.dir${dfs.namenode.checkpoint.dir}hadoop.rpc.socket.factory.class.defaultorg.apache.hadoop.net.StandardSocketFactoryyarn.resourcemanager.keytab/etc/krb5.keytabdfs.datanode.http.address0.0.0.0:50075mapreduce.task.profilefalsedfs.namenode.edits.dir${dfs.namenode.name.dir}hadoop.fuse.timer.period5mapreduce.map.skip.proc.count.autoincrtruefs.AbstractFileSystem.viewfs.implorg.apache.hadoop.fs.viewfs.ViewFsmapreduce.job.speculative.slowtaskthreshold1.0s3native.stream-buffer-size4096yarn.nodemanager.delete.debug-delay-sec0dfs.secondary.namenode.kerberos.internal.spnego.principal${dfs.web.authentication.kerberos.principal}dfs.namenode.safemode.threshold-pct0.999fmapreduce.ifile.readahead.bytes
> 4194304yarn.scheduler.maximum-allocation-mb10240s3native.bytes-per-checksum
> 512mapreduce.job.committer.setup.cleanup.neededtruekfs.replication
> 3yarn.nodemanager.log-aggregation.compression-typenonehadoop.http.authentication.type
> simpledfs.client.failover.sleep.base.millis500yarn.nodemanager.heartbeat.interval-ms
> 1000hadoop.jetty.logs.serve.aliasestruemapreduce.reduce.shuffle.input.buffer.percent
> 0.70dfs.datanode.max.transfer.threads4096mapreduce.task.io.sort.mb
> 100mapreduce.reduce.merge.inmem.threshold1000dfs.namenode.handler.count
> 10hadoop.ssl.client.confssl-client.xmlyarn.resourcemanager.container.liveness-monitor.interval-ms600000mapreduce.client.completion.pollinterval5000yarn.nodemanager.vmem-pmem-ratio2.1yarn.app.mapreduce.client.max-retries3hadoop.ssl.enabledfalsefs.AbstractFileSystem.hdfs.implorg.apache.hadoop.fs.Hdfsmapreduce.tasktracker.reduce.tasks.maximum2mapreduce.reduce.input.buffer.percent0.0kfs.stream-buffer-size4096dfs.namenode.invalidate.work.pct.per.iteration0.32fdfs.bytes-per-checksum512dfs.replication3mapreduce.shuffle.ssl.file.buffer.size
> 65536dfs.permissions.enabledtruemapreduce.jobtracker.maxtasks.perjob
> -1dfs.datanode.use.datanode.hostnamefalsemapreduce.task.userlog.limit.kb
> 0dfs.namenode.fs-limits.max-directory-items0s3.client-write-packet-size
> 65536dfs.client.failover.sleep.max.millis15000mapreduce.job.maps
> 2dfs.namenode.fs-limits.max-component-length0mapreduce.map.output.compress
> falses3.blocksize67108864kfs.blocksize67108864dfs.namenode.edits.journal-plugin.qjournalorg.apache.hadoop.hdfs.qjournal.client.QuorumJournalManagerdfs.client.https.need-authfalseyarn.scheduler.minimum-allocation-mb128ftp.replication3mapreduce.input.fileinputformat.split.minsize0fs.s3n.block.size67108864yarn.i
> pc.rpc.classorg.apache.hadoop.yarn.ipc.HadoopYarnProtoRPCdfs.namenode.num.extra.edits.retained1000000hadoop.http.staticuser.userdr.whoyarn.nodemanager.localizer.cache.cleanup.interval-ms
> 600000mapreduce.job.jvm.numtasks1mapreduce.task.profile.maps
> 0-2mapreduce.shuffle.port8080mapreduce.jobtracker.http.address0.0.0.0:50030mapreduce.reduce.shuffle.merge.percent0.66
> mapreduce.task.skip.start.attempts2mapreduce.task.io.sort.factor10
> dfs.namenode.checkpoint.dirfile://${hadoop.tmp.dir}/dfs/namesecondarytfile.fs.input.buffer.size262144fs.s3.block.size67108864tfile.io.chunk.size1048576io.serializationsorg.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerializationyarn.resourcemanager.max-completed-applications10000mapreduce.jobhistory.principal
> jhs/_HOST@REALM.TLDmapreduce.job.end-notification.retry.interval1dfs.namenode.backup.address0.0.0.0:50100dfs.block.access.token.enablefalseio.seqfile.sorter.recordlimit1000000s3native.client-write-packet-size65536ftp.bytes-per-checksum512hadoop.security.group.mappingorg.apache.hadoop.security.ShellBasedUnixGroupsMappingdfs.client.file-block-storage-locations.timeout60mapre
> duce.job.end-notification.max.retry.interval5yarn.acl.enabletrue
> yarn.nm.liveness-monitor.expiry-interval-ms600000mapreduce.tasktracker.map.tasks.maximum2
> dfs.namenode.max.objects0dfs.namenode.delegation.token.max-lifetime604800000
> mapreduce.job.hdfs-servers${fs.defaultFS}yarn.application.classpath$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*dfs.datanode.hdfs-blocks-metadata.enabledtrueyarn.nodemanager.aux-services.mapreduce.shuffle.classorg.apache.hadoop.mapred.ShuffleHandlermapreduce.tasktracker.dns.nameserverdefault
> dfs.datanode.readahead.bytes4193404mapreduce.job.ubertask.maxreduces1
> dfs.image.compressfalsemapreduce.shuffle.ssl.enabledfalseyarn.log-aggregation-enablefalsemapreduce.tasktracker.report.address127.0.0.1:0mapreduce.tasktracker.http.threads40dfs.stream-buffer-size4096tfile.fs.output.buffer.size262144yarn.resourcemanager.am.max-retries1dfs.datanode.drop.cache.behind.writesfalsemapreduce.job.ubertask.enable
> falsehadoop.common.configuration.version0.23.0dfs.namenode.replication.work.m
> ultiplier.per.iteration2mapreduce.job.acl-modify-jobio.seqfile.local.dir${hadoop.tmp.dir}/io/localfs.s3.sleepTimeSeconds10mapreduce.client.output.filterFAILED
>
>
>


-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/
