hadoop-common-dev mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-5795) Add a bulk FileSystem.getFileBlockLocations
Date Fri, 08 May 2009 17:22:45 GMT
Add a bulk FileSystem.getFileBlockLocations
-------------------------------------------

                 Key: HADOOP-5795
                 URL: https://issues.apache.org/jira/browse/HADOOP-5795
             Project: Hadoop Core
          Issue Type: New Feature
          Components: dfs
    Affects Versions: 0.20.0
            Reporter: Arun C Murthy
             Fix For: 0.21.0


Currently map-reduce applications (specifically file-based input-formats) use FileSystem.getFileBlockLocations to compute splits. However, they are forced to call it once per file (see the sketch after the list below). The downsides are multiple:
   # Even with only a few thousand files to process, the number of RPCs quickly becomes noticeable.
   # The current implementation of getFileBlockLocations is slow, since each call results in a 'search' in the namesystem. With a few thousand input files, that means as many RPCs and as many 'searches'.
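
For reference, this is roughly what the per-file pattern looks like today. A minimal sketch: the input directory and the surrounding driver are assumptions for illustration, while the FileSystem calls themselves are the existing public API:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerFileLocations {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path inputDir = new Path(args[0]);
    // One listStatus call, then one getFileBlockLocations RPC per file.
    for (FileStatus file : fs.listStatus(inputDir)) {
      if (!file.isDir()) {
        BlockLocation[] locations =
            fs.getFileBlockLocations(file, 0, file.getLen());
        // ... feed 'locations' into split computation for this file ...
      }
    }
  }
}
{code}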

It would be nice to have a FileSystem.getFileBlockLocations which can take a directory and return the block-locations for all files in that directory. That would eliminate both the per-file RPC and the per-file 'search', replacing the latter with a single 'scan'.
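
One possible shape for the bulk call, sketched as a standalone helper rather than the actual FileSystem method; the name, return type, and semantics here are assumptions, not a settled design. A DFS-specific implementation could answer it with a single RPC and one namesystem scan; the fallback below just wraps the existing per-file calls:

{code}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BulkBlockLocations {
  /**
   * Hypothetical bulk variant: returns the block locations for every file
   * directly under 'dir'. This generic fallback still loops per file; the
   * point of the feature is that DFS could serve it in one RPC and one scan.
   */
  public static Map<Path, BlockLocation[]> getFileBlockLocations(
      FileSystem fs, Path dir) throws IOException {
    Map<Path, BlockLocation[]> locations = new HashMap<Path, BlockLocation[]>();
    for (FileStatus file : fs.listStatus(dir)) {
      if (!file.isDir()) {
        locations.put(file.getPath(),
            fs.getFileBlockLocations(file, 0, file.getLen()));
      }
    }
    return locations;
  }
}
{code}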

When I tested this for terasort, a moderate job with 8,000 input files, the runtime halved from the current 8s to 4s. Clearly this matters even more for latency-sensitive applications...

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

