hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5175) Option to prohibit jars unpacking
Date Tue, 19 May 2009 01:33:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12710580#action_12710580 ]

Hadoop QA commented on HADOOP-5175:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12408202/hadoop-5175.txt
  against trunk revision 776032.

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    -1 contrib tests.  The patch failed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/349/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/349/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/349/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/349/console

This message is automatically generated.

> Option to prohibit jars unpacking
> ---------------------------------
>
>                 Key: HADOOP-5175
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5175
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>    Affects Versions: 0.19.0
>         Environment: Hadoop cluster of 5 servers, each with:
> HDD: two disks WDC WD1000FYPS-01ZKB0
> OS: Linux 2.6.26-1-686 #1 SMP
> FS: XFS
>            Reporter: Andrew Gudkov
>         Attachments: hadoop-5175.txt
>
>
> I've noticed that the task tracker unpacks all jars into
> ${hadoop.tmp.dir}/mapred/local/taskTracker. We use a lot of external
> libraries, deployed via the "-libjars" option (roughly as in the sketch
> below); the total number of files after unpacking is about 20 thousand.
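> For reference, a minimal sketch of such a job submission, assuming the
> usual ToolRunner entry point (class and jar names are made up):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.conf.Configured;
> import org.apache.hadoop.util.Tool;
> import org.apache.hadoop.util.ToolRunner;
>
> public class MyJob extends Configured implements Tool {
>   public int run(String[] args) throws Exception {
>     // ... configure and submit the actual job using getConf() ...
>     return 0;
>   }
>   public static void main(String[] args) throws Exception {
>     // ToolRunner's GenericOptionsParser consumes -libjars and ships the
>     // listed jars with the job, e.g.:
>     //   hadoop jar myjob.jar MyJob -libjars lib/a.jar,lib/b.jar in out
>     System.exit(ToolRunner.run(new Configuration(), new MyJob(), args));
>   }
> }
> {code}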
> After running a number of jobs, tasks start being killed with a timeout
> ("Task attempt_200901281518_0011_m_000173_2 failed to report status for 601
> seconds. Killing!"). All killed tasks are in the "initializing" state. I've
> looked through the tasktracker logs and found messages like this:
> {quote}
> Thread 20926 (Thread-10368):
>   State: BLOCKED
>   Blocked count: 3611
>   Waited count: 24
>   Blocked on java.lang.ref.Reference$Lock@e48ed6
>   Blocked by 20882 (Thread-10341)
>   Stack:
>     java.lang.StringCoding$StringEncoder.encode(StringCoding.java:232)
>     java.lang.StringCoding.encode(StringCoding.java:272)
>     java.lang.String.getBytes(String.java:947)
>     java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
>     java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:228)
>     java.io.File.isDirectory(File.java:754)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:427)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
>     org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:433)
> {quote}
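> The stack above is a recursive disk-usage walk: FileUtil.getDU stats every
> file under a directory, one recursive call per subdirectory. A minimal
> sketch of that pattern (illustrative, not the exact Hadoop source) shows
> why it gets expensive over ~20 thousand unpacked files:
> {code}
> import java.io.File;
>
> // Recursive du: a plain file contributes its length; a directory recurses
> // into every child, so each call touches the entire tree.
> public static long getDU(File dir) {
>   if (!dir.isDirectory()) {
>     return dir.length();
>   }
>   long size = 0;
>   File[] children = dir.listFiles(); // may be null on I/O error
>   if (children != null) {
>     for (File child : children) {
>       size += getDU(child);
>     }
>   }
>   return size;
> }
> {code}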
> The HADOOP-4780 patch adds code that keeps a map of directories to their
> disk usage, reducing the number of DU calls. However, the delete operation
> still takes too long: I manually deleted the unpacked archive after 10 jobs
> had run, and it took over 30 minutes on XFS.
> I suppose that an option to prohibit jar unpacking would be helpful in my
> situation, along the lines of the sketch below.
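> A hypothetical shape for such a switch (the property name and surrounding
> variables are assumptions, not taken from the attached patch):
> {code}
> // Hypothetical flag: when false, keep the job jar packed instead of
> // exploding it under mapred/local/taskTracker.
> boolean unpackJars = conf.getBoolean("mapred.jobjar.unpack", true); // name illustrative
> if (unpackJars) {
>   RunJar.unJar(jarFile, workDir);     // current behavior: tens of thousands of files
> } else {
>   classPaths.add(jarFile.toString()); // serve classes straight from the jar
> }
> {code}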

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

