hadoop-mapreduce-issues mailing list archives

From "Ha, Hun Cheol (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MAPREDUCE-6736) import hive table from parquet files, there is no 'job.splitmetainfo' file message
Date Wed, 20 Jul 2016 09:53:20 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ha, Hun Cheol updated MAPREDUCE-6736:
-------------------------------------
    Environment: 
Ubuntu 14.04.4 LTS (GNU/Linux 4.2.0-27-generic x86_64)
Hadoop 2.6.0-cdh5.7.0

  was:
Ubuntu 14.04.4 LTS (GNU/Linux 4.2.0-27-generic x86_64)

Hadoop 2.6.0-cdh5.7.0
Subversion http://github.com/cloudera/hadoop -r c00978c67b0d3fe9f3b896b5030741bd40bf541a
Compiled by jenkins on 2016-03-23T18:36Z
Compiled with protoc 2.5.0
From source with checksum b2eabfa328e763c88cb14168f9b372
This command was run using /usr/lib/hadoop/hadoop-common-2.6.0-cdh5.7.0.jar


> import hive table from parquet files, there is no 'job.splitmetainfo' file message
> ----------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6736
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6736
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: applicationmaster, mrv2
>    Affects Versions: 2.6.0
>         Environment: Ubuntu 14.04.4 LTS (GNU/Linux 4.2.0-27-generic x86_64)
> Hadoop 2.6.0-cdh5.7.0
>            Reporter: Ha, Hun Cheol
>            Priority: Blocker
>
> Same issue as MAPREDUCE-3056 (https://issues.apache.org/jira/browse/MAPREDUCE-3056),
> which was created on 2011-09-21 and fixed on 2011-10-04.
> A user (Sergey) reported the same issue again on 2015-05-13 (see the last comment of the
> linked issue).
> At the beeline prompt, importing a Hive table from Parquet files that were exported from
> another Hive table fails: there is no 'job.splitmetainfo' file in the staging directory,
> and a FileNotFoundException occurs.
> Full log messages below:
> ==================
> Log Type: syslog
> Log Upload Time: Wed Jul 20 17:57:36 +0900 2016
> Log Length: 21439
> 2016-07-20 17:57:26,139 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created
MRAppMaster for application appattempt_1468834620182_0036_000001
> 2016-07-20 17:57:26,417 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to
load native-hadoop library for your platform... using builtin-java classes where applicable
> 2016-07-20 17:57:26,463 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing
with tokens:
> 2016-07-20 17:57:26,463 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind:
YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@52af26ee)
> 2016-07-20 17:57:26,510 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using
mapred newApiCommitter.
> 2016-07-20 17:57:26,991 WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory:
The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 2016-07-20 17:57:27,091 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter
set in config null
> 2016-07-20 17:57:27,154 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter:
File Output Committer Algorithm version is 1
> 2016-07-20 17:57:27,159 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter
is org.apache.hadoop.tools.mapred.CopyCommitter
> 2016-07-20 17:57:27,231 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2016-07-20 17:57:27,232 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2016-07-20 17:57:27,233 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2016-07-20 17:57:27,234 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2016-07-20 17:57:27,235 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2016-07-20 17:57:27,241 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2016-07-20 17:57:27,242 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
> 2016-07-20 17:57:27,243 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
> 2016-07-20 17:57:27,292 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils:
Default file system [hdfs://da74:8020]
> 2016-07-20 17:57:27,322 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils:
Default file system [hdfs://da74:8020]
> 2016-07-20 17:57:27,352 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils:
Default file system [hdfs://da74:8020]
> 2016-07-20 17:57:27,382 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Perms after creating 448, Expected: 448
> 2016-07-20 17:57:27,387 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Emitting job history data to the timeline server is not enabled
> 2016-07-20 17:57:27,428 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering
class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
> 2016-07-20 17:57:27,645 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded
properties from hadoop-metrics2.properties
> 2016-07-20 17:57:27,703 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
Scheduled snapshot period at 10 second(s).
> 2016-07-20 17:57:27,704 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
MRAppMaster metrics system started
> 2016-07-20 17:57:27,712 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
Adding job token for job_1468834620182_0036 to jobTokenSecretManager
> 2016-07-20 17:57:27,723 WARN [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
Job init failed
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.FileNotFoundException:
File does not exist: hdfs://da74:8020/user/yarn/staging/hdfs/.staging/job_1468834620182_0036/job.splitmetainfo
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1579)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1443)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1401)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:997)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:139)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1333)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1101)
> 	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1544)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1540)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1473)
> Caused by: java.io.FileNotFoundException: File does not exist: hdfs://da74:8020/user/yarn/staging/hdfs/.staging/job_1468834620182_0036/job.splitmetainfo
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
> 	at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:51)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1574)
> 	... 17 more
> 2016-07-20 17:57:27,726 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster
launching normal, non-uberized, multi-container job job_1468834620182_0036.
> 2016-07-20 17:57:27,753 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue
class java.util.concurrent.LinkedBlockingQueue
> 2016-07-20 17:57:27,761 INFO [Socket Reader #1 for port 35855] org.apache.hadoop.ipc.Server:
Starting Socket Reader #1 for port 35855
> 2016-07-20 17:57:27,778 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl:
Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
> 2016-07-20 17:57:27,779 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC
Server Responder: starting
> 2016-07-20 17:57:27,779 INFO [IPC Server listener on 35855] org.apache.hadoop.ipc.Server:
IPC Server listener on 35855: starting
> 2016-07-20 17:57:27,780 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService:
Instantiated MRClientService at da74/115.68.67.98:35855
> 2016-07-20 17:57:27,837 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log)
via org.mortbay.log.Slf4jLog
> 2016-07-20 17:57:27,842 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter:
Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
> 2016-07-20 17:57:27,846 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request
log for http.requests.mapreduce is not defined
> 2016-07-20 17:57:27,855 INFO [main] org.apache.hadoop.http.HttpServer2: Added global
filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2016-07-20 17:57:27,860 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter
AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context
mapreduce
> 2016-07-20 17:57:27,860 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter
AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context
static
> 2016-07-20 17:57:27,862 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec:
/mapreduce/*
> 2016-07-20 17:57:27,862 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec:
/ws/*
> 2016-07-20 17:57:27,870 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to
port 42206
> 2016-07-20 17:57:27,870 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
> 2016-07-20 17:57:27,899 INFO [main] org.mortbay.log: Extract jar:file:/usr/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.7.0.jar!/webapps/mapreduce
to /tmp/Jetty_0_0_0_0_42206_mapreduce____p5x30f/webapp
> 2016-07-20 17:57:28,159 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:42206
> 2016-07-20 17:57:28,160 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce
started at 42206
> 2016-07-20 17:57:28,493 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered
webapp guice modules
> 2016-07-20 17:57:28,497 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator:
JOB_CREATE job_1468834620182_0036
> 2016-07-20 17:57:28,499 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue
class java.util.concurrent.LinkedBlockingQueue
> 2016-07-20 17:57:28,499 INFO [Socket Reader #1 for port 41561] org.apache.hadoop.ipc.Server:
Starting Socket Reader #1 for port 41561
> 2016-07-20 17:57:28,506 INFO [IPC Server listener on 41561] org.apache.hadoop.ipc.Server:
IPC Server listener on 41561: starting
> 2016-07-20 17:57:28,502 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC
Server Responder: starting
> 2016-07-20 17:57:28,540 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
nodeBlacklistingEnabled:true
> 2016-07-20 17:57:28,540 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
maxTaskFailuresPerNode is 3
> 2016-07-20 17:57:28,540 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor:
blacklistDisablePercent is 33
> 2016-07-20 17:57:28,605 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting
to ResourceManager at da74/115.68.67.98:8030
> 2016-07-20 17:57:28,687 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
maxContainerCapability: <memory:8192, vCores:4>
> 2016-07-20 17:57:28,687 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
queue: root.hdfs
> 2016-07-20 17:57:28,692 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
Upper limit on the thread pool size is 500
> 2016-07-20 17:57:28,692 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl:
The thread pool initial size is 10
> 2016-07-20 17:57:28,694 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy:
yarn.client.max-cached-nodemanagers-proxies : 0
> 2016-07-20 17:57:28,699 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1468834620182_0036Job Transitioned from NEW to FAIL_ABORT
> 2016-07-20 17:57:28,700 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler:
Processing the event EventType: JOB_ABORT
> 2016-07-20 17:57:28,727 INFO [CommitterEvent Processor #0] org.apache.hadoop.tools.mapred.CopyCommitter:
Cleaning up temporary work folder: /user/hive/staging/hdfs/.staging/_distcp70301392
> 2016-07-20 17:57:28,758 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl:
job_1468834620182_0036Job Transitioned from FAIL_ABORT to FAILED
> 2016-07-20 17:57:28,759 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
We are finishing cleanly so this is the last retry
> 2016-07-20 17:57:28,759 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
Notify RMCommunicator isAMLastRetry: true
> 2016-07-20 17:57:28,759 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
RMCommunicator notified that shouldUnregistered is: true
> 2016-07-20 17:57:28,760 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
Notify JHEH isAMLastRetry: true
> 2016-07-20 17:57:28,760 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
JobHistoryEventHandler notified that forceJobCompletion is true
> 2016-07-20 17:57:28,760 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
Calling stop for all the services
> 2016-07-20 17:57:28,760 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Stopping JobHistoryEventHandler. Size of the outstanding queue size is 3
> 2016-07-20 17:57:28,850 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Event Writer setup for JobId: job_1468834620182_0036, File: hdfs://da74:8020/user/yarn/staging/hdfs/.staging/job_1468834620182_0036/job_1468834620182_0036_1.jhist
> 2016-07-20 17:57:29,188 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
In stop, writing event JOB_SUBMITTED
> 2016-07-20 17:57:29,223 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils:
Default file system [hdfs://da74:8020]
> 2016-07-20 17:57:29,253 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
In stop, writing event JOB_QUEUE_CHANGED
> 2016-07-20 17:57:29,253 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
In stop, writing event JOB_FAILED
> 2016-07-20 17:57:29,376 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Copying hdfs://da74:8020/user/yarn/staging/hdfs/.staging/job_1468834620182_0036/job_1468834620182_0036_1.jhist
to hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036-1469005044142-hdfs-distcp-1469005048754-0-0-FAILED-root.hdfs-1469005048754.jhist_tmp
> 2016-07-20 17:57:29,513 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Copied to done location: hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036-1469005044142-hdfs-distcp-1469005048754-0-0-FAILED-root.hdfs-1469005048754.jhist_tmp
> 2016-07-20 17:57:29,520 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Copying hdfs://da74:8020/user/yarn/staging/hdfs/.staging/job_1468834620182_0036/job_1468834620182_0036_1_conf.xml
to hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036_conf.xml_tmp
> 2016-07-20 17:57:29,561 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Copied to done location: hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036_conf.xml_tmp
> 2016-07-20 17:57:29,580 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Moved tmp to done: hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036.summary_tmp
to hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036.summary
> 2016-07-20 17:57:29,586 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Moved tmp to done: hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036_conf.xml_tmp
to hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036_conf.xml
> 2016-07-20 17:57:29,594 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Moved tmp to done: hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036-1469005044142-hdfs-distcp-1469005048754-0-0-FAILED-root.hdfs-1469005048754.jhist_tmp
to hdfs://da74:8020/user/yarn/staging/history/done_intermediate/hdfs/job_1468834620182_0036-1469005044142-hdfs-distcp-1469005048754-0-0-FAILED-root.hdfs-1469005048754.jhist
> 2016-07-20 17:57:29,594 INFO [Thread-55] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler:
Stopped JobHistoryEventHandler. super.stop()
> 2016-07-20 17:57:29,596 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
Setting job diagnostics to Job init failed : org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
java.io.FileNotFoundException: File does not exist: hdfs://da74:8020/user/yarn/staging/hdfs/.staging/job_1468834620182_0036/job.splitmetainfo
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1579)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1443)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1401)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> 	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:997)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:139)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1333)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1101)
> 	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1544)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1540)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1473)
> Caused by: java.io.FileNotFoundException: File does not exist: hdfs://da74:8020/user/yarn/staging/hdfs/.staging/job_1468834620182_0036/job.splitmetainfo
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
> 	at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:51)
> 	at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1574)
> 	... 17 more
> 2016-07-20 17:57:29,597 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
History url is http://da74:19888/jobhistory/job/job_1468834620182_0036
> 2016-07-20 17:57:29,604 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
Waiting for application to be successfully unregistered.
> 2016-07-20 17:57:30,605 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0
CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
> 2016-07-20 17:57:30,607 INFO [Thread-55] org.apache.hadoop.mapreduce.v2.app.MRAppMaster:
Deleting staging directory hdfs://da74:8020/ /user/hive/staging/hdfs/.staging/job_1468834620182_0036
> 2016-07-20 17:57:30,628 INFO [Thread-55] org.apache.hadoop.ipc.Server: Stopping server
on 41561
> 2016-07-20 17:57:30,629 INFO [IPC Server listener on 41561] org.apache.hadoop.ipc.Server:
Stopping IPC Server listener on 41561
> 2016-07-20 17:57:30,629 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping
IPC Server Responder
> 2016-07-20 17:57:30,629 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler:
TaskHeartbeatHandler thread interrupted
> 2016-07-20 17:57:30,629 INFO [Ping Checker] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor:
TaskAttemptFinishingMonitor thread interrupted
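
The FileNotFoundException above points at a fixed location derived from the job ID. As a minimal diagnostic sketch, the expected path can be reconstructed from the application attempt ID in the log and then checked on the cluster; the staging root and user below are taken from the log output, not from configuration, so treat them as assumptions:

```shell
# Application attempt ID as it appears in the AM log above.
app_attempt="appattempt_1468834620182_0036_000001"

# appattempt_<cluster-ts>_<seq>_<attempt>  ->  job_<cluster-ts>_<seq>
job_id="job_$(echo "$app_attempt" | cut -d_ -f2-3)"

# Staging root as reported in the log (assumption: matches
# yarn.app.mapreduce.am.staging-dir on this cluster).
staging_root="hdfs://da74:8020/user/yarn/staging/hdfs/.staging"
meta_path="$staging_root/$job_id/job.splitmetainfo"
echo "$meta_path"

# On a live cluster, verify the file exists before the AM initializes:
# hdfs dfs -test -e "$meta_path" && echo present || echo missing
```

If the file is genuinely absent at submit time (rather than deleted early, as discussed in MAPREDUCE-3056), the `hdfs dfs -test` check above should report "missing" for the failing job.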



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

