hadoop-common-dev mailing list archives

From "Nick Rettinghouse (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-5377) Inefficient jobtracker history file layout
Date Mon, 02 Mar 2009 17:06:56 GMT
Inefficient jobtracker history file layout
------------------------------------------

                 Key: HADOOP-5377
                 URL: https://issues.apache.org/jira/browse/HADOOP-5377
             Project: Hadoop Core
          Issue Type: Bug
          Components: mapred
         Environment: This is at least a problem on 0.15.
            Reporter: Nick Rettinghouse


Storing too many files in a single directory slows things down tremendously and, in this case,
makes the grid more difficult to manage.  On our jobtrackers, even with a 45-day purge
cycle, we see hundreds of thousands of files in logs/hadoop/history.  The following
is an example:

pchdm01.ypost.re1: logs/hadoop/history - 1,176,927 files!

This is the time(1) output for `ls | wc -l`:

real    0m56.042s
user    0m28.702s
sys     0m1.794s

Note that this was the second time I ran this file count; the first run took more than 4 minutes
of real time.

===========================================

My recommended solution is that the Hadoop team store these files in the following structure:
    history/2008/08/19
    history/2008/08/20
    history/2008/08/21

Using this structure gives us two important things: consistently good performance and the
ability to easily delete or archive old files.
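
To make the idea concrete, here is a minimal, hypothetical Java sketch of deriving the
per-day directory from a job's submission time.  The class and method names are
illustrative only, not actual Hadoop APIs:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Hypothetical sketch of the proposed layout; not actual Hadoop code.
    public class HistoryPathSketch {

        // Formats a date as "yyyy/MM/dd", e.g. "2008/08/19".
        // (SimpleDateFormat is not thread-safe; a real implementation would
        // need per-thread instances or external synchronization.)
        private static final SimpleDateFormat DAY_FORMAT =
            new SimpleDateFormat("yyyy/MM/dd");

        // historyDir("logs/hadoop/history", submitTime)
        //   -> e.g. "logs/hadoop/history/2008/08/19"
        static String historyDir(String historyRoot, Date submitTime) {
            return historyRoot + "/" + DAY_FORMAT.format(submitTime);
        }

        public static void main(String[] args) {
            System.out.println(historyDir("logs/hadoop/history", new Date()));
        }
    }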

If we expect a Hadoop cluster to process hundreds of thousands of jobs per day, then we may
want to break it down by hour, like this:
    history/2008/08/19/00
    history/2008/08/19/01
     ...
    history/2008/08/19/22
    history/2008/08/19/23
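
With the hourly variant, the date pattern in the sketch above would simply become
"yyyy/MM/dd/HH".  Either way, purging history older than the 45-day cycle reduces to
removing a handful of small directories instead of scanning a listing with over a
million entries.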


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

