hadoop-common-issues mailing list archives

From "Alex Kozlov (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-6755) Have a configurable metric reporting CPU/disk usage per user
Date Fri, 14 May 2010 17:14:44 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Kozlov updated HADOOP-6755:

    Status: Patch Available  (was: Open)
      Tags: monitoring

I looked at MAPREDUCE-220.  I think the idea here is a bit different: to be able to monitor
usage per user in Ganglia or some other monitoring tool.  I am attaching a simple patch,
mostly for demo purposes.

In general, I think there should be two systems: one for monitoring, focused on a few important
metrics (CPU time, memory, and disk usage per user), and another that is more detailed, operating
per task with more metrics, which can later be picked up by a more detailed reporting/analysis system.
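For illustration, the aggregation on the monitoring side could be sketched roughly as below. This is a minimal, self-contained sketch of the idea only; the class and method names are hypothetical and are not taken from the attached patch or from the Hadoop metrics API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Hypothetical sketch: accumulate CPU time and disk I/O per user so a
 * monitoring sink (e.g. Ganglia) can periodically poll the totals.
 */
public class PerUserUsage {
    private static final class Usage {
        final LongAdder cpuMillis = new LongAdder();  // total CPU time, ms
        final LongAdder diskBytes = new LongAdder();  // total disk I/O, bytes
    }

    private final Map<String, Usage> byUser = new ConcurrentHashMap<>();

    /** Record one sample of a user's resource consumption (thread-safe). */
    public void record(String user, long cpuMillis, long diskBytes) {
        Usage u = byUser.computeIfAbsent(user, k -> new Usage());
        u.cpuMillis.add(cpuMillis);
        u.diskBytes.add(diskBytes);
    }

    /** Total CPU time (ms) charged to a user so far; 0 if unseen. */
    public long cpuMillis(String user) {
        Usage u = byUser.get(user);
        return u == null ? 0L : u.cpuMillis.sum();
    }

    /** Total disk I/O (bytes) charged to a user so far; 0 if unseen. */
    public long diskBytes(String user) {
        Usage u = byUser.get(user);
        return u == null ? 0L : u.diskBytes.sum();
    }

    public static void main(String[] args) {
        PerUserUsage usage = new PerUserUsage();
        usage.record("alice", 1500, 4096);
        usage.record("alice", 500, 1024);
        usage.record("bob", 200, 512);
        System.out.println(usage.cpuMillis("alice")); // 2000
        System.out.println(usage.diskBytes("bob"));   // 512
    }
}
```

A real implementation would register these counters with the metrics framework rather than expose getters, but the per-user map plus monotonically increasing counters is the core of the monitoring side.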

Alex K

> Have a configurable metric reporting CPU/disk usage per user
> ------------------------------------------------------------
>                 Key: HADOOP-6755
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6755
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: metrics
>            Reporter: Alex Kozlov
>   Original Estimate: 4h
>  Remaining Estimate: 4h
> Many organizations are looking at resource usage per department/group/user for diagnostic
> and resource allocation purposes.  It should be straightforward to implement a metric reporting
> simple resource usage, such as CPU time and disk I/O per user, and to aggregate it using Ganglia.
> Eventually, we can create an API for pluggable metrics (there is one for Jobtracker and
> Let me know your thoughts.
> Alex K

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
