hadoop-common-user mailing list archives

From Ion Badita <ion.bad...@searchcapital.net>
Subject Re: About Metrics update
Date Fri, 30 May 2008 15:10:52 GMT
Hi,

I found (because of the Metrics behavior reported in the previous 
e-mail) some errors in the metrics reported by NameNodeMetrics: 
safeModeTime and fsImageLoadTime keep growing (they should stay constant 
over time). These metrics use MetricsIntValue for their values; in 
MetricsIntValue.pushMetric(), if the "changed" field is true the value 
is "published" to the MetricsRecord, otherwise the method does nothing.

public synchronized void pushMetric(final MetricsRecord mr) {
  if (changed)
    mr.incrMetric(name, value);
  changed = false;
}

The problem is in the AbstractMetricsContext.update() method: the 
metricUpdates map is not cleared after being merged into the record's 
internal data, so the stale increments are applied again on every call.

Ion



Ion Badita wrote:
> Hi,
>
> I looked over the class 
> org.apache.hadoop.metrics.spi.AbstractMetricsContext and I have a 
> question:
> why, in update(MetricsRecordImpl record), is the metricUpdates map not 
> cleared after the updates are merged into metricMap? Because of this, 
> on every update() "old" increments are merged into metricMap again. Is 
> this the right behavior?
> With the current implementation it is not possible to increment only 
> one metric in the record without also modifying other metrics that are 
> incremented rarely.
>
>
> Thanks
> Ion

