hadoop-common-dev mailing list archives

From "Andrew Gudkov (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-5175) Option to prohibit jars unpacking
Date Thu, 05 Feb 2009 15:57:59 GMT
Option to prohibit jars unpacking
---------------------------------

                 Key: HADOOP-5175
                 URL: https://issues.apache.org/jira/browse/HADOOP-5175
             Project: Hadoop Core
          Issue Type: New Feature
          Components: mapred
    Affects Versions: 0.19.0
         Environment: Hadoop cluster of 5 servers, each with:
HDD: two disks WDC WD1000FYPS-01ZKB0
OS: Linux 2.6.26-1-686 #1 SMP
FS: XFS

            Reporter: Andrew Gudkov


I've noticed that the task tracker unpacks all job jars under
${hadoop.tmp.dir}/mapred/local/taskTracker.

We use a lot of external libraries that are deployed via the "-libjars"
option. The total number of files after unpacking is about 20 thousand.
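
For context, "-libjars" is consumed by GenericOptionsParser when the driver
is launched through ToolRunner; a minimal driver sketch (class and jar names
here are hypothetical):

{quote}
// Hypothetical driver: "-libjars" is handled by GenericOptionsParser
// when the job is launched through ToolRunner, e.g.:
//   hadoop jar myjob.jar MyDriver -libjars deps/a.jar,deps/b.jar in out
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {
  public int run(String[] args) throws Exception {
    JobConf job = new JobConf(getConf(), MyDriver.class);
    // ... mapper/reducer and input/output setup elided ...
    JobClient.runJob(job);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new MyDriver(), args));
  }
}
{quote}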

After running a number of jobs, tasks start getting killed on timeout
("Task attempt_200901281518_0011_m_000173_2 failed to report status for 601
seconds. Killing!"). All of the killed tasks are in the "initializing"
state. I've examined the tasktracker logs and found messages such as:

{quote}
Thread 20926 (Thread-10368):
  State: BLOCKED
  Blocked count: 3611
  Waited count: 24
  Blocked on java.lang.ref.Reference$Lock@e48ed6
  Blocked by 20882 (Thread-10341)
  Stack:
    java.lang.StringCoding$StringEncoder.encode(StringCoding.java:232)
    java.lang.StringCoding.encode(StringCoding.java:272)
    java.lang.String.getBytes(String.java:947)
    java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
    java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:228)
    java.io.File.isDirectory(File.java:754)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:427)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
    org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
{quote}
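
The dump shows FileUtil.getDU recursing over the unpacked tree; every entry
costs at least one native stat via File.isDirectory(), so each scan is
proportional to the total number of files. A simplified sketch of that walk
(not the exact 0.19 source):

{quote}
// Simplified sketch of the recursive disk-usage walk seen in the
// stack trace above (modeled on FileUtil.getDU; not the exact 0.19
// source). With ~20,000 unpacked files, every scan issues ~20,000
// native stat calls.
class DUSketch {
  static long getDU(java.io.File dir) {
    if (!dir.exists()) {
      return 0;
    }
    if (!dir.isDirectory()) {
      return dir.length();
    }
    long size = dir.length();
    java.io.File[] children = dir.listFiles();
    if (children != null) {  // listFiles() returns null on I/O error
      for (java.io.File child : children) {
        size += getDU(child);  // the recursion visible in the trace
      }
    }
    return size;
  }
}
{quote}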

The HADOOP-4780 patch introduces code that caches a map of directories
along with their disk usage (DU), reducing the number of getDU calls.
However, deletion still takes too long: after 10 jobs had run, manually
deleting the unpacked archives took over 30 minutes on XFS.
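
A minimal sketch of that caching idea (hypothetical code, not the actual
HADOOP-4780 patch): recompute a directory's DU only when the cached value
is older than a refresh interval. Note that caching helps the periodic
monitoring, but deletion still has to visit every file.

{quote}
// Hypothetical sketch in the spirit of HADOOP-4780 (not the actual
// patch): recompute a directory's disk usage only after a refresh
// interval has elapsed.
import java.io.File;
import java.util.concurrent.ConcurrentHashMap;

class CachedDU {
  private static class Entry { long du; long computedAt; }

  private final ConcurrentHashMap<String, Entry> cache =
      new ConcurrentHashMap<String, Entry>();
  private final long refreshIntervalMs;

  CachedDU(long refreshIntervalMs) {
    this.refreshIntervalMs = refreshIntervalMs;
  }

  long getDU(File dir) {
    long now = System.currentTimeMillis();
    Entry e = cache.get(dir.getPath());
    if (e == null || now - e.computedAt > refreshIntervalMs) {
      e = new Entry();
      // Still a full recursive walk whenever the entry is stale.
      e.du = org.apache.hadoop.fs.FileUtil.getDU(dir);
      e.computedAt = now;
      cache.put(dir.getPath(), e);
    }
    return e.du;
  }
}
{quote}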

I suppose that an option to prohibit jar unpacking would be helpful in my situation.
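
To illustrate how such an option might be used (the property name below is
purely hypothetical, not an existing Hadoop setting):

{quote}
// Hypothetical usage sketch: "mapred.jar.unpack" is NOT an existing
// Hadoop property; it only illustrates the option proposed above.
import org.apache.hadoop.mapred.JobConf;

public class NoUnpackExample {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    conf.setBoolean("mapred.jar.unpack", false);  // keep -libjars jars packed
    // ... submit the job as usual ...
  }
}
{quote}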

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

