hadoop-mapreduce-user mailing list archives

From AJAY GUPTA <ajaygit...@gmail.com>
Subject Re: yarnClient.getContainerReport exception for KILLED containers
Date Wed, 30 Nov 2016 18:23:11 GMT
The following property in my cluster was set to false. It needs to be set to
true in order to read data from the timeline server.
<property>
  <description>The setting that controls whether yarn system metrics is
  published on the timeline server or not by RM.</description>
  <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
  <value>true</value>
</property>

In addition, the configuration below also needs to be set to true to retrieve
data using the YARN client.
<property>
  <description>Indicate to clients whether to query generic application
  data from timeline history-service or not. If not enabled then application
  data is queried only from Resource Manager.</description>
  <name>yarn.timeline-service.generic-application-history.enabled</name>
  <value>true</value>
</property>
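
For reference, a minimal Java sketch of the lookup using YarnClient (the class
name and container id string are placeholders, and a yarn-site.xml with the two
properties above is assumed to be on the classpath):

import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.ConverterUtils;

public class ContainerReportLookup {
  public static void main(String[] args) throws Exception {
    // Picks up yarn-site.xml from the classpath
    YarnConfiguration conf = new YarnConfiguration();
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();
    try {
      // e.g. "container_1480000000000_0001_01_000002" (placeholder id)
      ContainerId containerId = ConverterUtils.toContainerId(args[0]);
      // With the timeline server enabled and the two properties above set,
      // this call should also succeed for finished/KILLED containers.
      ContainerReport report = yarnClient.getContainerReport(containerId);
      System.out.println("State: " + report.getContainerState()
          + ", exit status: " + report.getContainerExitStatus()
          + ", diagnostics: " + report.getDiagnosticsInfo());
    } finally {
      yarnClient.stop();
    }
  }
}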

Thanks,
Ajay

On Wed, Nov 30, 2016 at 8:22 PM, AJAY GUPTA <ajaygit158@gmail.com> wrote:

> Hi
>
> I am running Hadoop 2.6.0 on a cluster. I have set up the timeline
> server on this cluster.
> My code wants to fetch details of KILLED containers for an application. I
> have used the yarnClient.getContainerReport(<container id>) call to get
> details of containers. This call throws an exception for KILLED containers,
> despite the timeline server being up.
> Is there some configuration I am missing here that is causing this
> issue, or is this expected in Hadoop 2.6.0?
>
> I have a local setup with Hadoop 2.7.2. It returns details of
> KILLED containers without any exception.
>
> Thanks,
> Ajay
>
