hadoop-common-user mailing list archives

From Edward Capriolo <edlinuxg...@gmail.com>
Subject Re: Blocks amount is "stuck" in statistics
Date Mon, 25 May 2009 15:02:30 GMT
On Mon, May 25, 2009 at 6:34 AM, Stas Oskin <stas.oskin@gmail.com> wrote:
> Hi.
> Ok, was too eager to report :).
> It got sorted out after some time.
> Regards.
> 2009/5/25 Stas Oskin <stas.oskin@gmail.com>
>> Hi.
>> I just did an erase of a large test folder with about 20,000 blocks, and
>> created a new one. I copied about 128 blocks, and fsck reflects it
>> correctly, but the NN statistics still show the old number. It does show the
>> currently used space correctly.
>> Any idea if this is a known issue and was fixed? Also, does it have any
>> influence over the operations?
>> I'm using Hadoop 0.18.3.
>> Thanks.

This is something I am dealing with in my cacti graphs. Shameless
plug: http://www.jointhegrid.com/hadoop

Some Hadoop JMX attributes are like traditional SNMP gauges: a JMX
request directly reads the current value of a variable and returns it.
FSDatasetStatus Remaining is implemented like that.

Some variables are implemented like SNMP counters. DataNodeStatistics
BytesRead is like that: the number keeps increasing, so a grapher
plots the delta between successive samples.
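A counter like that is graphed by keeping the previous sample and plotting the difference. A minimal sketch (my own illustration, not actual Hadoop or cacti code):

```java
// Sketch: turning an SNMP-style, ever-increasing counter (like
// DataNodeStatistics BytesRead) into a per-interval rate.
public class CounterRate {
    private long lastValue = -1;  // -1 means no sample seen yet

    // Returns the increase since the previous sample, or 0 on the
    // first call (nothing to diff against yet).
    public long delta(long currentValue) {
        long d = (lastValue < 0) ? 0 : currentValue - lastValue;
        lastValue = currentValue;
        return d;
    }

    public static void main(String[] args) {
        CounterRate bytesRead = new CounterRate();
        // Counter samples taken every poll interval, always increasing:
        System.out.println(bytesRead.delta(1000)); // first sample: 0
        System.out.println(bytesRead.delta(4000)); // 3000 in this interval
        System.out.println(bytesRead.delta(4500)); // 500 in this interval
    }
}
```

A gauge like FSDatasetStatus Remaining needs none of this; its current value is plotted directly.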

Other values are sampled. Since I monitor at 5-minute intervals, my
setup behaves like this:


DataNodeStatistics BlocksWritten is a gauge that is sampled. So if
your sample period is 5 minutes, it will be 5 minutes before the stats
show up, and after another 5 minutes they are replaced.
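In other words, activity accumulates during a period, is published at the end of it, and is overwritten when the next period closes. A toy simulation of that behavior (my own sketch, not the Hadoop metrics implementation):

```java
// Sketch of how a sampled gauge like DataNodeStatistics BlocksWritten
// appears to a JMX reader: writes accumulate during the sample period,
// the total is published when the period ends, and the accumulator
// resets for the next period.
public class SampledGauge {
    private long accumulator = 0;  // activity within the current period
    private long published = 0;    // value visible to readers

    public void record(long blocks) { accumulator += blocks; }

    // Called once per sample period (e.g. every 5 minutes).
    public void endOfPeriod() {
        published = accumulator;  // replaces the previous period's value
        accumulator = 0;
    }

    public long read() { return published; }

    public static void main(String[] args) {
        SampledGauge blocksWritten = new SampledGauge();
        blocksWritten.record(128);                 // blocks written now
        System.out.println(blocksWritten.read());  // 0: period not over yet
        blocksWritten.endOfPeriod();
        System.out.println(blocksWritten.read());  // 128: stats show up
        blocksWritten.endOfPeriod();
        System.out.println(blocksWritten.read());  // 0: replaced by idle period
    }
}
```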

Using NullContextWithUpdateThread with a combination of sampled and
non-sampled variables is not exactly what I want, since some data lags
the real-time data by 5 minutes, but NullContextWithUpdateThread is
working well for me.
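For reference, a configuration along these lines in conf/hadoop-metrics.properties enables it for the dfs metrics context (a sketch; check the sample file shipped with your distribution for the exact context names):

```properties
# NullContextWithUpdateThread keeps the in-memory metrics refreshed so
# a JMX poller sees current values without an external metrics sink.
dfs.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
# Update period in seconds; with a 5-minute cacti poll, sampled gauges
# will lag by up to one full period.
dfs.period=300
```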
