hadoop-mapreduce-user mailing list archives

From Subash D'Souza <sdso...@truecar.com>
Subject Issue with running Impala
Date Tue, 30 Oct 2012 01:17:45 GMT
I'm hoping this is the right place to post questions about Impala. I'm playing around with
Impala and have it configured and running. When I try to run a query, though, it comes back
with a very opaque error. Any help would be appreciated.

Thanks
Subash

Here are the error and the relevant log excerpts:
[hadoop4.rad.wc.truecarcorp.com:21000] > select * from clearbook2 limit 5;
ERROR: Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
Error(255): Unknown error 255
ERROR: Invalid query handle
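One sanity check I can run myself is to confirm the gzip archives are intact, since every failure is on a .gz file. A minimal sketch, assuming a copy has been pulled down locally first with `hadoop fs -get` (the filename below is just illustrative):

```python
import gzip

def is_valid_gzip(path):
    """Stream-decompress the whole file; False if it's corrupt or truncated."""
    try:
        with gzip.open(path, "rb") as f:
            while f.read(1024 * 1024):  # read in 1 MiB chunks to the end
                pass
        return True
    except (OSError, EOFError):  # bad magic/header, corrupt deflate, truncation
        return False

# Illustrative usage -- fetch the file first with:
#   hadoop fs -get /tc/clearbook_data_20120101.txt.gz .
# print(is_valid_gzip("clearbook_data_20120101.txt.gz"))
```

If the file decompresses cleanly locally, the corruption theory is out and the problem is on the read path between Impala and HDFS.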

My log files don't seem to give much information

Impala State Server

 I1029 18:01:01.906649 23286 impala-server.cc:1524] TClientRequest.queryOptions: TQueryOptions
{
  01: abort_on_error (bool) = false,
  02: max_errors (i32) = 0,
  03: disable_codegen (bool) = false,
  04: batch_size (i32) = 0,
  05: return_as_ascii (bool) = true,
  06: num_nodes (i32) = 0,
  07: max_scan_range_length (i64) = 0,
  08: num_scanner_threads (i32) = 0,
  09: max_io_buffers (i32) = 0,
  10: allow_unsupported_formats (bool) = false,
  11: partition_agg (bool) = false,
}
I1029 18:01:01.906776 23286 impala-server.cc:821] query(): query=select * from clearbook2
limit 5
I1029 18:01:02.755319 23286 coordinator.cc:209] Exec() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.756422 23286 simple-scheduler.cc:159] SimpleScheduler assignment (data->backend):
 (10.5.22.22:50010 -> 10.5.22.22:22000), (10.5.22.24:50010 -> 10.5.22.24:22000), (10.5.22.23:50010
-> 10.5.22.23:22000)
I1029 18:01:02.756430 23286 simple-scheduler.cc:162] SimpleScheduler locality percentage 100%
(3 out of 3)
I1029 18:01:02.759310 23286 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
I1029 18:01:02.763690 23286 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
tuples:
Tuple(id=0 size=560 slots=[Slot(id=0 type=STRING col=0 offset=16 null=(offset=0 mask=2)),
Slot(id=1 type=STRING col=1 offset=32 null=(offset=0 mask=4)), Slot(id=2 type=STRING col=2
offset=48 null=(offset=0 mask=8)), Slot(id=3 type=STRING col=3 offset=64 null=(offset=0 mask=10)),
Slot(id=4 type=STRING col=4 offset=80 null=(offset=0 mask=20)), Slot(id=5 type=STRING col=5
offset=96 null=(offset=0 mask=40)), Slot(id=6 type=FLOAT col=6 offset=8 null=(offset=0 mask=1)),
Slot(id=7 type=STRING col=7 offset=112 null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8
offset=128 null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=144 null=(offset=1
mask=2)), Slot(id=10 type=STRING col=10 offset=160 null=(offset=1 mask=4)), Slot(id=11 type=STRING
col=11 offset=176 null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=192 null=(offset=1
mask=10)), Slot(id=13 type=STRING col=13 offset=208 null=(offset=1 mask=20)), Slot(id=14 type=STRING
col=14 offset=224
 null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=240 null=(offset=1 mask=80)),
Slot(id=16 type=STRING col=16 offset=256 null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17
offset=272 null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=288 null=(offset=2
mask=4)), Slot(id=19 type=STRING col=19 offset=304 null=(offset=2 mask=8)), Slot(id=20 type=STRING
col=20 offset=320 null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=336 null=(offset=2
mask=20)), Slot(id=22 type=STRING col=22 offset=352 null=(offset=2 mask=40)), Slot(id=23 type=STRING
col=23 offset=368 null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=384 null=(offset=3
mask=1)), Slot(id=25 type=STRING col=25 offset=400 null=(offset=3 mask=2)), Slot(id=26 type=STRING
col=26 offset=416 null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=432 null=(offset=3
mask=8)), Slot(id=28 type=STRING col=28 offset=448 null=(offset=3 mask=10)), Slot(id=29
 type=STRING col=29 offset=464 null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=480
null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=496 null=(offset=3 mask=80)),
Slot(id=32 type=STRING col=32 offset=512 null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33
offset=528 null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=544 null=(offset=4
mask=4))])
I1029 18:01:02.809578 23286 coordinator.cc:298] starting 3 backends for query 5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.811485 23364 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
coord=10.5.22.24:22000 backend#=2
I1029 18:01:02.811578 23364 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.815759 23364 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
tuples:
[descriptor table identical to the one logged above: Tuple(id=0 size=560) with slots 0-34]
I1029 18:01:02.957340 23537 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.958176 23373 coordinator.cc:734] Cancel() query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.958195 23373 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
I1029 18:01:02.958205 23373 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a2
I1029 18:01:02.958215 23373 data-stream-mgr.cc:97] cancelled stream: fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2
node_id=1
I1029 18:01:02.958225 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a3
backend=10.5.22.22:22000
I1029 18:01:02.958411 23373 coordinator.cc:777] sending CancelPlanFragment rpc for instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
backend=10.5.22.24:22000
I1029 18:01:02.958510 23364 impala-server.cc:1618] CancelPlanFragment(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.958528 23364 plan-fragment-executor.cc:363] Cancel(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.958539 23364 data-stream-mgr.cc:194] cancelling all streams for fragment=5fa79b17f9a8474f:97b7e6fea1b688a5
I1029 18:01:02.958606 23373 coordinator.cc:366] Query id=5fa79b17f9a8474f:97b7e6fea1b688a1
failed because fragment id=5fa79b17f9a8474f:97b7e6fea1b688a4 failed.
I1029 18:01:02.959193 23541 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a2
I1029 18:01:02.959215 23541 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.959609 23286 impala-server.cc:1406] ImpalaServer::get_state invalid handle
I1029 18:01:02.960021 23286 impala-server.cc:1351] close(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.960031 23286 impala-server.cc:966] UnregisterQuery(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.960042 23286 impala-server.cc:972] unknown query id: 5fa79b17f9a8474f:97b7e6fea1b688a1
I1029 18:01:02.961830 23541 data-stream-mgr.cc:177] DeregisterRecvr(): fragment_id=5fa79b17f9a8474f:97b7e6fea1b688a2,
node=1
I1029 18:01:02.962131 23269

Impala DataNode

I1029 18:01:02.813395  9958 impala-server.cc:1588] ExecPlanFragment() instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
coord=10.5.22.24:22000 backend#=1
I1029 18:01:02.813586  9958 plan-fragment-executor.cc:70] Prepare(): query_id=5fa79b17f9a8474f:97b7e6fea1b688a1
instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
I1029 18:01:02.818050  9958 plan-fragment-executor.cc:83] descriptor table for fragment=5fa79b17f9a8474f:97b7e6fea1b688a4
tuples:
[descriptor table identical to the one logged above: Tuple(id=0 size=560) with slots 0-34]
I1029 18:01:02.954059  9988 plan-fragment-executor.cc:182] Open(): instance_id=5fa79b17f9a8474f:97b7e6fea1b688a4
I1029 18:01:02.958689  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
Error(255): Unknown error 255
    @           0x75ea41  (unknown)
    @           0x731bdb  (unknown)
    @           0x731e1a  (unknown)
    @     0x7f2cb64bbd97  (unknown)
    @     0x7f2cb4ab67f1  start_thread
    @     0x7f2cb405370d  clone
I1029 18:01:02.958782  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
Error(255): Unknown error 255
    @           0x75ea41  (unknown)
    @           0x731bdb  (unknown)
    @           0x731e1a  (unknown)
    @     0x7f2cb64bbd97  (unknown)
    @     0x7f2cb4ab67f1  start_thread
    @     0x7f2cb405370d  clone
I1029 18:01:02.962308  9875 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120105.txt.gz
Error(255): Unknown error 255
    @           0x75ea41  (unknown)
    @           0x731bdb  (unknown)
    @           0x731e1a  (unknown)
    @     0x7f2cb64bbd97  (unknown)
    @     0x7f2cb4ab67f1  start_thread
    @     0x7f2cb405370d  clone
I1029 18:01:02.962537  9876 status.cc:24] Failed to open HDFS file hdfs://hadoop1.rad.wc.truecarcorp.com:8020/tc/clearbook_data_20120101.txt.gz
Error(255): Unknown error 255
    @           0x75ea41  (unknown)

And here is the configuration of my datanodes:

Hadoop Configuration

Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml,
yarn-site.xml, hdfs-default.xml, hdfs-site.xml
Key     Value
mapreduce.job.end-notification.retry.attempts
dfs.datanode.data.dir   /home/data/1/dfs/dn,/home/data/2/dfs/dn,/home/data/3/dfs/dn
dfs.namenode.checkpoint.txns    40000
s3.replication  3
mapreduce.output.fileoutputformat.compress.type RECORD
mapreduce.jobtracker.jobhistory.lru.cache.size  5
dfs.datanode.failed.volumes.tolerated   0
hadoop.http.filter.initializers org.apache.hadoop.http.lib.StaticUserWebFilter
mapreduce.cluster.temp.dir      ${hadoop.tmp.dir}/mapred/temp
mapreduce.reduce.shuffle.memory.limit.percent   0.25
yarn.nodemanager.keytab /etc/krb5.keytab
mapreduce.reduce.skip.maxgroups 0
dfs.https.server.keystore.resource      ssl-server.xml
hadoop.http.authentication.kerberos.keytab      ${user.home}/hadoop.keytab
yarn.nodemanager.localizer.client.thread-count  5
mapreduce.framework.name        local
io.file.buffer.size     4096
mapreduce.task.tmp.dir  ./tmp
dfs.namenode.checkpoint.period  3600
ipc.client.kill.max     10
mapreduce.jobtracker.taskcache.levels   2
s3.stream-buffer-size   4096
dfs.namenode.secondary.http-address     0.0.0.0:50090
dfs.namenode.decommission.interval      30
dfs.namenode.http-address       0.0.0.0:50070
mapreduce.task.files.preserve.failedtasks       false
dfs.encrypt.data.transfer       false
dfs.datanode.address    0.0.0.0:50010
hadoop.http.authentication.token.validity       36000
hadoop.security.group.mapping.ldap.search.filter.group  (objectClass=group)
dfs.client.failover.max.attempts        15
kfs.client-write-packet-size    65536
yarn.admin.acl  *
yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs        86400
dfs.client.failover.connection.retries.on.timeouts      0
mapreduce.map.sort.spill.percent        0.80
file.stream-buffer-size 4096
dfs.webhdfs.enabled     true
ipc.client.connection.maxidletime       10000
mapreduce.jobtracker.persist.jobstatus.hours    1
dfs.datanode.ipc.address        0.0.0.0:50020
yarn.nodemanager.address        0.0.0.0:0
yarn.app.mapreduce.am.job.task.listener.thread-count    30
dfs.client.read.shortcircuit    true
dfs.namenode.safemode.extension 30000
ha.zookeeper.parent-znode       /hadoop-ha
yarn.nodemanager.container-executor.class       org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
io.skip.checksum.errors false
yarn.resourcemanager.scheduler.client.thread-count      50
hadoop.http.authentication.kerberos.principal   HTTP/_HOST@LOCALHOST
mapreduce.reduce.log.level      INFO
fs.s3.maxRetries        4
hadoop.kerberos.kinit.command   kinit
yarn.nodemanager.process-kill-wait.ms   2000
dfs.namenode.name.dir.restore   false
mapreduce.jobtracker.handler.count      10
yarn.app.mapreduce.client-am.ipc.max-retries    1
dfs.client.use.datanode.hostname        false
hadoop.util.hash.type   murmur
io.seqfile.lazydecompress       true
dfs.datanode.dns.interface      default
yarn.nodemanager.disk-health-checker.min-healthy-disks  0.25
mapreduce.job.maxtaskfailures.per.tracker       4
mapreduce.tasktracker.healthchecker.script.timeout      600000
hadoop.security.group.mapping.ldap.search.attr.group.name       cn
fs.df.interval  60000
dfs.namenode.kerberos.internal.spnego.principal ${dfs.web.authentication.kerberos.principal}
mapreduce.jobtracker.address    local
mapreduce.tasktracker.tasks.sleeptimebeforesigkill      5000
dfs.journalnode.rpc-address     0.0.0.0:8485
mapreduce.job.acl-view-job
dfs.client.block.write.replace-datanode-on-failure.policy       DEFAULT
dfs.namenode.replication.interval       3
dfs.namenode.num.checkpoints.retained   2
mapreduce.tasktracker.http.address      0.0.0.0:50060
yarn.resourcemanager.scheduler.address  0.0.0.0:8030
dfs.datanode.directoryscan.threads      1
hadoop.security.group.mapping.ldap.ssl  false
mapreduce.task.merge.progress.records   10000
dfs.heartbeat.interval  3
net.topology.script.number.args 100
mapreduce.local.clientfactory.class.name        org.apache.hadoop.mapred.LocalClientFactory
dfs.client-write-packet-size    65536
io.native.lib.available true
dfs.client.failover.connection.retries  0
yarn.nodemanager.disk-health-checker.interval-ms        120000
dfs.blocksize   67108864
mapreduce.jobhistory.webapp.address     0.0.0.0:19888
yarn.resourcemanager.resource-tracker.client.thread-count       50
dfs.blockreport.initialDelay    0
mapreduce.reduce.markreset.buffer.percent       0.0
dfs.ha.tail-edits.period        60
mapreduce.admin.user.env        LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native
yarn.nodemanager.health-checker.script.timeout-ms       1200000
yarn.resourcemanager.client.thread-count        50
file.bytes-per-checksum 512
dfs.replication.max     512
io.map.index.skip       0
mapreduce.task.timeout  600000
dfs.datanode.du.reserved        0
dfs.support.append      true
ftp.blocksize   67108864
dfs.client.file-block-storage-locations.num-threads     10
yarn.nodemanager.container-manager.thread-count 20
ipc.server.listen.queue.size    128
yarn.resourcemanager.amliveliness-monitor.interval-ms   1000
hadoop.ssl.hostname.verifier    DEFAULT
mapreduce.tasktracker.dns.interface     default
hadoop.security.group.mapping.ldap.search.attr.member   member
mapreduce.tasktracker.outofband.heartbeat       false
mapreduce.job.userlog.retain.hours      24
yarn.nodemanager.resource.memory-mb     8192
dfs.namenode.delegation.token.renew-interval    86400000
hadoop.ssl.keystores.factory.class      org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
dfs.datanode.sync.behind.writes false
mapreduce.map.maxattempts       4
dfs.client.read.shortcircuit.skip.checksum      false
dfs.datanode.handler.count      10
hadoop.ssl.require.client.cert  false
ftp.client-write-packet-size    65536
ipc.server.tcpnodelay   false
mapreduce.task.profile.reduces  0-2
hadoop.fuse.connection.timeout  300
dfs.permissions.superusergroup  hadoop
mapreduce.jobtracker.jobhistory.task.numberprogresssplits       12
mapreduce.map.speculative       true
fs.ftp.host.port        21
dfs.datanode.data.dir.perm      700
mapreduce.client.submit.file.replication        10
s3native.blocksize      67108864
mapreduce.job.ubertask.maxmaps  9
dfs.namenode.replication.min    1
mapreduce.cluster.acls.enabled  false
yarn.nodemanager.localizer.fetch.thread-count   4
map.sort.class  org.apache.hadoop.util.QuickSort
fs.trash.checkpoint.interval    0
dfs.namenode.name.dir   /home/data/1/dfs/nn
yarn.app.mapreduce.am.staging-dir       /tmp/hadoop-yarn/staging
fs.AbstractFileSystem.file.impl org.apache.hadoop.fs.local.LocalFs
yarn.nodemanager.env-whitelist  JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME
dfs.image.compression.codec     org.apache.hadoop.io.compress.DefaultCodec
mapreduce.job.reduces   1

mapreduce.job.complete.cancel.delegation.tokens true
hadoop.security.group.mapping.ldap.search.filter.user   (&(objectClass=user)(sAMAccountName={0}))
yarn.nodemanager.sleep-delay-before-sigkill.ms  250
mapreduce.tasktracker.healthchecker.interval    60000
mapreduce.jobtracker.heartbeats.in.second       100
kfs.bytes-per-checksum  512
mapreduce.jobtracker.persist.jobstatus.dir      /jobtracker/jobsInfo
dfs.namenode.backup.http-address        0.0.0.0:50105
hadoop.rpc.protection   authentication
dfs.namenode.https-address      0.0.0.0:50470
ftp.stream-buffer-size  4096
dfs.ha.log-roll.period  120
yarn.resourcemanager.admin.client.thread-count  1
yarn.resourcemanager.zookeeper-store.session.timeout-ms 60000
file.client-write-packet-size   65536
hadoop.http.authentication.simple.anonymous.allowed     true
yarn.nodemanager.log.retain-seconds     10800
dfs.datanode.drop.cache.behind.reads    false
dfs.image.transfer.bandwidthPerSec      0
mapreduce.tasktracker.instrumentation   org.apache.hadoop.mapred.TaskTrackerMetricsInst
io.mapfile.bloom.size   1048576
dfs.ha.fencing.ssh.connect-timeout      30000
s3.bytes-per-checksum   512
fs.automatic.close      true
fs.trash.interval       0
hadoop.security.authentication  simple
fs.defaultFS    hdfs://hadoop1.rad.wc.truecarcorp.com:8020
hadoop.ssl.server.conf  ssl-server.xml
ipc.client.connect.max.retries  10
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms       30000
dfs.journalnode.http-address    0.0.0.0:8480
mapreduce.jobtracker.taskscheduler      org.apache.hadoop.mapred.JobQueueTaskScheduler
mapreduce.job.speculative.speculativecap        0.1
yarn.am.liveness-monitor.expiry-interval-ms     600000
mapreduce.output.fileoutputformat.compress      false
net.topology.node.switch.mapping.impl   org.apache.hadoop.net.ScriptBasedMapping
dfs.namenode.replication.considerLoad   true
mapreduce.job.counters.max      120
yarn.resourcemanager.address    0.0.0.0:8032
dfs.client.block.write.retries  3
yarn.resourcemanager.nm.liveness-monitor.interval-ms    1000
io.map.index.interval   128
mapred.child.java.opts  -Xmx200m
mapreduce.tasktracker.local.dir.minspacestart   0
dfs.client.https.keystore.resource      ssl-client.xml
mapreduce.client.progressmonitor.pollinterval   1000
mapreduce.jobtracker.tasktracker.maxblacklists  4
mapreduce.job.queuename default
yarn.nodemanager.localizer.address      0.0.0.0:8040
io.mapfile.bloom.error.rate     0.005
mapreduce.job.split.metainfo.maxsize    10000000
yarn.nodemanager.delete.thread-count    4
ipc.client.tcpnodelay   false
yarn.app.mapreduce.am.resource.mb       1536
dfs.datanode.dns.nameserver     default
mapreduce.map.output.compress.codec     org.apache.hadoop.io.compress.DefaultCodec
dfs.namenode.accesstime.precision       3600000
mapreduce.map.log.level INFO
io.seqfile.compress.blocksize   1000000
mapreduce.tasktracker.taskcontroller    org.apache.hadoop.mapred.DefaultTaskController
hadoop.security.groups.cache.secs       300
mapreduce.job.end-notification.max.attempts     5
yarn.nodemanager.webapp.address 0.0.0.0:8042
mapreduce.jobtracker.expire.trackers.interval   600000
yarn.resourcemanager.webapp.address     0.0.0.0:8088
yarn.nodemanager.health-checker.interval-ms     600000
hadoop.security.authorization   false
fs.ftp.host     0.0.0.0
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms   1000
mapreduce.ifile.readahead       true
ha.zookeeper.session-timeout.ms 5000
mapreduce.tasktracker.taskmemorymanager.monitoringinterval      5000
mapreduce.reduce.shuffle.parallelcopies 5
mapreduce.map.skip.maxrecords   0
dfs.https.enable        false
mapreduce.reduce.shuffle.read.timeout   180000
mapreduce.output.fileoutputformat.compress.codec        org.apache.hadoop.io.compress.DefaultCodec
mapreduce.jobtracker.instrumentation    org.apache.hadoop.mapred.JobTrackerMetricsInst
yarn.nodemanager.remote-app-log-dir-suffix      logs
dfs.blockreport.intervalMsec    21600000
mapreduce.reduce.speculative    true
mapreduce.jobhistory.keytab     /etc/security/keytab/jhs.service.keytab
dfs.datanode.balance.bandwidthPerSec    1048576
file.blocksize  67108864
yarn.resourcemanager.admin.address      0.0.0.0:8033
yarn.resourcemanager.resource-tracker.address   0.0.0.0:8031
mapreduce.tasktracker.local.dir.minspacekill    0
mapreduce.jobtracker.staging.root.dir   ${hadoop.tmp.dir}/mapred/staging
mapreduce.jobtracker.retiredjobs.cache.size     1000
ipc.client.connect.max.retries.on.timeouts      45
ha.zookeeper.acl        world:anyone:rwcda
yarn.nodemanager.local-dirs     /tmp/nm-local-dir
mapreduce.reduce.shuffle.connect.timeout        180000
dfs.block.access.key.update.interval    600
dfs.block.access.token.lifetime 600
mapreduce.jobtracker.system.dir ${hadoop.tmp.dir}/mapred/system
yarn.nodemanager.admin-env      MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
mapreduce.jobtracker.jobhistory.block.size      3145728
mapreduce.tasktracker.indexcache.mb     10
dfs.namenode.checkpoint.check.period    60
dfs.client.block.write.replace-datanode-on-failure.enable       true
dfs.datanode.directoryscan.interval     21600
yarn.nodemanager.container-monitor.interval-ms  3000
dfs.default.chunk.view.size     32768
mapreduce.job.speculative.slownodethreshold     1.0
mapreduce.job.reduce.slowstart.completedmaps    0.05
hadoop.security.instrumentation.requires.admin  false
dfs.namenode.safemode.min.datanodes     0
hadoop.http.authentication.signature.secret.file        ${user.home}/hadoop-http-auth-signature-secret
mapreduce.reduce.maxattempts    4
yarn.nodemanager.localizer.cache.target-size-mb 10240
s3native.replication    3
dfs.datanode.https.address      0.0.0.0:50475
mapreduce.reduce.skip.proc.count.autoincr       true
file.replication        1
hadoop.hdfs.configuration.version       1
ipc.client.idlethreshold        4000
hadoop.tmp.dir  /tmp/hadoop-${user.name}
mapreduce.jobhistory.address    0.0.0.0:10020
mapreduce.jobtracker.restart.recover    false
mapreduce.cluster.local.dir     ${hadoop.tmp.dir}/mapred/local
yarn.ipc.serializer.type        protocolbuffers
dfs.namenode.decommission.nodes.per.interval    5
dfs.namenode.delegation.key.update-interval     86400000
fs.s3.buffer.dir        ${hadoop.tmp.dir}/s3
dfs.namenode.support.allow.format       true
yarn.nodemanager.remote-app-log-dir     /tmp/logs
hadoop.work.around.non.threadsafe.getpwuid      false
dfs.ha.automatic-failover.enabled       false
mapreduce.jobtracker.persist.jobstatus.active   true
dfs.namenode.logging.level      info
yarn.nodemanager.log-dirs       /tmp/logs
dfs.namenode.checkpoint.edits.dir       ${dfs.namenode.checkpoint.dir}
hadoop.rpc.socket.factory.class.default org.apache.hadoop.net.StandardSocketFactory
yarn.resourcemanager.keytab     /etc/krb5.keytab
dfs.datanode.http.address       0.0.0.0:50075
mapreduce.task.profile  false
dfs.namenode.edits.dir  ${dfs.namenode.name.dir}
hadoop.fuse.timer.period        5
mapreduce.map.skip.proc.count.autoincr  true
fs.AbstractFileSystem.viewfs.impl       org.apache.hadoop.fs.viewfs.ViewFs
mapreduce.job.speculative.slowtaskthreshold     1.0
s3native.stream-buffer-size     4096
yarn.nodemanager.delete.debug-delay-sec 0
dfs.secondary.namenode.kerberos.internal.spnego.principal       ${dfs.web.authentication.kerberos.principal}
dfs.namenode.safemode.threshold-pct     0.999f
mapreduce.ifile.readahead.bytes 4194304
yarn.scheduler.maximum-allocation-mb    10240
s3native.bytes-per-checksum     512
mapreduce.job.committer.setup.cleanup.needed    true
kfs.replication 3
yarn.nodemanager.log-aggregation.compression-type       none
hadoop.http.authentication.type simple
dfs.client.failover.sleep.base.millis   500
yarn.nodemanager.heartbeat.interval-ms  1000
hadoop.jetty.logs.serve.aliases true
mapreduce.reduce.shuffle.input.buffer.percent   0.70
dfs.datanode.max.transfer.threads       4096
mapreduce.task.io.sort.mb       100
mapreduce.reduce.merge.inmem.threshold  1000
dfs.namenode.handler.count      10
hadoop.ssl.client.conf  ssl-client.xml
yarn.resourcemanager.container.liveness-monitor.interval-ms     600000
mapreduce.client.completion.pollinterval        5000
yarn.nodemanager.vmem-pmem-ratio        2.1
yarn.app.mapreduce.client.max-retries   3
hadoop.ssl.enabled      false
fs.AbstractFileSystem.hdfs.impl org.apache.hadoop.fs.Hdfs
mapreduce.tasktracker.reduce.tasks.maximum      2
mapreduce.reduce.input.buffer.percent   0.0
kfs.stream-buffer-size  4096
dfs.namenode.invalidate.work.pct.per.iteration  0.32f
dfs.bytes-per-checksum  512
dfs.replication 3
mapreduce.shuffle.ssl.file.buffer.size  65536
dfs.permissions.enabled true
mapreduce.jobtracker.maxtasks.perjob    -1
dfs.datanode.use.datanode.hostname      false
mapreduce.task.userlog.limit.kb 0
dfs.namenode.fs-limits.max-directory-items      0
s3.client-write-packet-size     65536
dfs.client.failover.sleep.max.millis    15000
mapreduce.job.maps      2
dfs.namenode.fs-limits.max-component-length     0
mapreduce.map.output.compress   false
s3.blocksize    67108864
kfs.blocksize   67108864
dfs.namenode.edits.journal-plugin.qjournal      org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager
dfs.client.https.need-auth      false
yarn.scheduler.minimum-allocation-mb    128
ftp.replication 3
mapreduce.input.fileinputformat.split.minsize   0
fs.s3n.block.size       67108864
yarn.ipc.rpc.class      org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
dfs.namenode.num.extra.edits.retained   1000000
hadoop.http.staticuser.user     dr.who
yarn.nodemanager.localizer.cache.cleanup.interval-ms    600000
mapreduce.job.jvm.numtasks      1
mapreduce.task.profile.maps     0-2
mapreduce.shuffle.port  8080
mapreduce.jobtracker.http.address       0.0.0.0:50030
mapreduce.reduce.shuffle.merge.percent  0.66
mapreduce.task.skip.start.attempts      2
mapreduce.task.io.sort.factor   10
dfs.namenode.checkpoint.dir     file://${hadoop.tmp.dir}/dfs/namesecondary
tfile.fs.input.buffer.size      262144
fs.s3.block.size        67108864
tfile.io.chunk.size     1048576

io.serializations       org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
yarn.resourcemanager.max-completed-applications 10000
mapreduce.jobhistory.principal  jhs/_HOST@REALM.TLD
mapreduce.job.end-notification.retry.interval   1
dfs.namenode.backup.address     0.0.0.0:50100
dfs.block.access.token.enable   false
io.seqfile.sorter.recordlimit   1000000
s3native.client-write-packet-size       65536
ftp.bytes-per-checksum  512
hadoop.security.group.mapping   org.apache.hadoop.security.ShellBasedUnixGroupsMapping
dfs.client.file-block-storage-locations.timeout 60
mapreduce.job.end-notification.max.retry.interval       5
yarn.acl.enable true
yarn.nm.liveness-monitor.expiry-interval-ms     600000
mapreduce.tasktracker.map.tasks.maximum 2
dfs.namenode.max.objects        0
dfs.namenode.delegation.token.max-lifetime      604800000
mapreduce.job.hdfs-servers      ${fs.defaultFS}
yarn.application.classpath      $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*
dfs.datanode.hdfs-blocks-metadata.enabled       true
yarn.nodemanager.aux-services.mapreduce.shuffle.class   org.apache.hadoop.mapred.ShuffleHandler
mapreduce.tasktracker.dns.nameserver    default
dfs.datanode.readahead.bytes    4193404
mapreduce.job.ubertask.maxreduces       1
dfs.image.compress      false
mapreduce.shuffle.ssl.enabled   false
yarn.log-aggregation-enable     false
mapreduce.tasktracker.report.address    127.0.0.1:0
mapreduce.tasktracker.http.threads      40
dfs.stream-buffer-size  4096
tfile.fs.output.buffer.size     262144
yarn.resourcemanager.am.max-retries     1
dfs.datanode.drop.cache.behind.writes   false
mapreduce.job.ubertask.enable   false
hadoop.common.configuration.version     0.23.0
dfs.namenode.replication.work.multiplier.per.iteration  2
mapreduce.job.acl-modify-job
io.seqfile.local.dir    ${hadoop.tmp.dir}/io/local
fs.s3.sleepTimeSeconds  10
mapreduce.client.output.filter  FAILED
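One thing I notice in the dump above: dfs.client.read.shortcircuit is true while dfs.datanode.data.dir.perm is 700, so the impalad process user may not be able to open block files on the datanodes' local disks directly, which could surface as an opaque open failure like the Error 255 above. As a diagnostic (a sketch to rule short-circuit reads in or out, not a confirmed fix), I may try disabling them in hdfs-site.xml and restarting:

```xml
<!-- hdfs-site.xml: temporarily disable short-circuit reads to see
     whether the "Unknown error 255" opens go away -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>false</value>
</property>
```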




