crunch-dev mailing list archives

From "Gabriel Reid (JIRA)" <>
Subject [jira] [Commented] (CRUNCH-580) FileTargetImpl#handleOutputs Inefficiency on S3NativeFileSystem
Date Mon, 07 Dec 2015 22:06:10 GMT


Gabriel Reid commented on CRUNCH-580:

This looks like a very valid use of Guava, and I don't think it makes much sense to block
something like this because of our kill-Guava project.

I'm still pretty worried about the whole Guava situation (particularly the headaches I'm going
to go through at work if we upgrade to v18 in Crunch), but as I said, I don't think that
should block a useful fix like this for S3 users.

> FileTargetImpl#handleOutputs Inefficiency on S3NativeFileSystem
> ---------------------------------------------------------------
>                 Key: CRUNCH-580
>                 URL:
>             Project: Crunch
>          Issue Type: Bug
>          Components: Core, IO
>    Affects Versions: 0.13.0
>         Environment: Amazon Elastic Map Reduce
>            Reporter: Jeffrey Quinn
>            Assignee: Josh Wills
>         Attachments: CRUNCH-580.patch
> We have run into a pretty frustrating inefficiency inside of FileTargetImpl#handleOutputs.
> This method loops over all of the partial output files and moves them to their ultimate
destination directories, calling org.apache.hadoop.fs.FileSystem#rename(org.apache.hadoop.fs.Path,
org.apache.hadoop.fs.Path) on each partial output in a loop.
> This is no problem when the org.apache.hadoop.fs.FileSystem in question is HDFS, where
#rename is a cheap operation, but when an implementation such as S3NativeFileSystem is used
it is extremely inefficient: each iteration through the loop makes a single blocking S3
API call, and this loop can be extremely long when there are many thousands of partial output
files.

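The shape of the problem can be sketched as follows. This is an illustrative standalone example, not the actual Crunch code or the attached patch: `Fs` is a hypothetical stand-in for org.apache.hadoop.fs.FileSystem, `renameSequential` mirrors the one-blocking-call-per-file loop described above, and `renameParallel` shows one plausible way to overlap the per-call S3 round trips with a thread pool.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical stand-in for org.apache.hadoop.fs.FileSystem#rename.
interface Fs {
    boolean rename(String src, String dst);
}

public class RenameSketch {

    // Sequential form, mirroring the loop in FileTargetImpl#handleOutputs:
    // one blocking rename per partial output. Cheap on HDFS (a metadata
    // operation), but on S3NativeFileSystem each call is a remote copy+delete,
    // so the latencies add up linearly with the number of files.
    static void renameSequential(Fs fs, Map<String, String> moves) {
        for (Map.Entry<String, String> e : moves.entrySet()) {
            fs.rename(e.getKey(), e.getValue());
        }
    }

    // Parallel form: submit the renames to a fixed thread pool so the
    // per-call round-trip latency overlaps instead of accumulating.
    static void renameParallel(Fs fs, Map<String, String> moves, int threads)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (Map.Entry<String, String> e : moves.entrySet()) {
            pool.submit(() -> fs.rename(e.getKey(), e.getValue()));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    public static void main(String[] args) throws InterruptedException {
        // Fake filesystem that just records the moves, to exercise both paths.
        Map<String, String> renamed = new ConcurrentHashMap<>();
        Fs fakeFs = (src, dst) -> { renamed.put(src, dst); return true; };

        Map<String, String> moves = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            moves.put("part-r-" + i, "out/part-r-" + i);
        }
        renameParallel(fakeFs, moves, 8);
        System.out.println(renamed.size()); // prints 100
    }
}
```

With thousands of partial outputs and S3 round trips on the order of tens to hundreds of milliseconds each, overlapping the calls this way is the kind of change that turns the commit step from minutes into seconds; whether the attached patch takes exactly this approach is not shown here.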
This message was sent by Atlassian JIRA
