hadoop-common-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13169) Randomize file list in SimpleCopyListing
Date Wed, 14 Sep 2016 14:56:20 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15490631#comment-15490631 ]

Steve Loughran commented on HADOOP-13169:
-----------------------------------------

I'll let Chris do the final review.

Now, some bad news about logging @ debug. For commons-logging APIs, debug statements need
to be wrapped in {{if (LOG.isDebugEnabled())}} clauses; this skips the expense of building
strings which are then never used.
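To illustrate, here's a self-contained sketch; the {{Log}} class is a hypothetical stand-in for commons-logging's {{org.apache.commons.logging.Log}}, and the "expensive" message build is simulated with a counter:
{code}
// Demonstrates the guarded-debug pattern: when debug is disabled,
// the expensive message string is never built.
public class GuardDemo {
    // Hypothetical stand-in for org.apache.commons.logging.Log
    static class Log {
        final boolean debugEnabled;
        Log(boolean enabled) { this.debugEnabled = enabled; }
        boolean isDebugEnabled() { return debugEnabled; }
        void debug(Object msg) { /* would write to the log */ }
    }

    static int expensiveCalls = 0;

    // Simulates an expensive string concatenation
    static String expensiveMessage() {
        expensiveCalls++;
        return "Adding " + "some-file-status";
    }

    public static void main(String[] args) {
        Log log = new Log(false);           // debug disabled, as in production

        // Unguarded: the message is built even though it is thrown away.
        log.debug(expensiveMessage());
        System.out.println(expensiveCalls); // 1

        // Guarded: the message is only built if debug is enabled.
        if (log.isDebugEnabled()) {
            log.debug(expensiveMessage());
        }
        System.out.println(expensiveCalls); // still 1: the guard skipped the work
    }
}
{code}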

For classes which use the SLF4J logging APIs, you can get away with using {{LOG.debug()}}
unguarded, provided the style is
{code}
LOG.debug("Adding {}", fileStatusInfo.fileStatus);
{code}
Here the string formatting only happens if the log is @ debug level, so it is less expensive
and no longer needs to be wrapped. That's why we get away with this in the s3a classes, which
have all been upgraded.
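A sketch of why the parameterized form is cheap; the {{Logger}} class below is a hypothetical stand-in for SLF4J's, just to show that the argument's {{toString()}} is never invoked when debug is off:
{code}
// Demonstrates SLF4J-style parameterized logging: formatting (and the
// argument's toString()) is deferred until the level check passes.
public class Slf4jStyleDemo {
    // Hypothetical stand-in for org.slf4j.Logger's parameterized debug()
    static class Logger {
        final boolean debugEnabled;
        Logger(boolean enabled) { debugEnabled = enabled; }
        void debug(String format, Object arg) {
            // Only format (and call arg.toString()) if debug is enabled
            if (debugEnabled) {
                String msg = format.replace("{}", arg.toString());
                // would write msg to the log
            }
        }
    }

    static int toStringCalls = 0;

    // Stand-in for an object with a costly toString(), e.g. a FileStatus
    static class FileStatus {
        @Override public String toString() {
            toStringCalls++;
            return "file-status";
        }
    }

    public static void main(String[] args) {
        Logger log = new Logger(false);     // debug disabled
        FileStatus fs = new FileStatus();

        // No guard needed: toString() is never invoked at this level.
        log.debug("Adding {}", fs);
        System.out.println(toStringCalls);  // 0
    }
}
{code}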

This leaves you with a choice: wrap the debug statements or move the LOG up to SLF4J, the
latter simply by changing the class of the log and its factory, adding the new imports and
deleting the old ones
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

Logger LOG = LoggerFactory.getLogger(...);
{code}

> Randomize file list in SimpleCopyListing
> ----------------------------------------
>
>                 Key: HADOOP-13169
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13169
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>            Reporter: Rajesh Balamohan
>            Assignee: Rajesh Balamohan
>            Priority: Minor
>         Attachments: HADOOP-13169-branch-2-001.patch, HADOOP-13169-branch-2-002.patch,
HADOOP-13169-branch-2-003.patch, HADOOP-13169-branch-2-004.patch, HADOOP-13169-branch-2-005.patch,
HADOOP-13169-branch-2-006.patch, HADOOP-13169-branch-2-007.patch
>
>
> When copying files to S3, the file listing can drive some mappers into S3 partition
hotspots. This is more visible when data is copied from a Hive warehouse with lots of
partitions (e.g. date partitions). In such cases, some of the tasks tend to be a lot
slower than others. It would be good to randomize the file paths which are written out
in SimpleCopyListing to avoid this issue.
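As an illustration only of the proposed randomization (hypothetical date-partitioned paths on a plain list; the real SimpleCopyListing writes its entries out differently, so this just sketches the shuffle idea):
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffleListingDemo {
    public static void main(String[] args) {
        // Hypothetical date-partitioned paths, as in a Hive warehouse copy
        List<String> paths = new ArrayList<>();
        for (int d = 1; d <= 5; d++) {
            paths.add("/warehouse/table/date=2016-09-0" + d + "/part-00000");
        }

        // Randomizing the order spreads consecutive entries across
        // S3 key-name ranges instead of hammering one hot partition.
        Collections.shuffle(paths, new Random(42)); // seeded for repeatability

        paths.forEach(System.out::println);
    }
}
{code}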



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

