hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-894) dfs client protocol should allow asking for parts of the block map
Date Wed, 02 May 2007 08:40:15 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-894:

    Attachment: partialBlockList.patch

In this patch:
- I included the list of LocatedBlock directly into DFSFileInfo, rather than overloading the
- Removed redundant members of DFSFileInfo.
- ClientProtocol.open(src, length) now takes 2 parameters: the file name and the length of the starting segment of the file for which block locations must be returned.
- The old open(src) is deprecated. I've seen many servlets use it directly. I replaced those calls with getBlockLocations() in the hadoop servlets, but there might be others.
- A new ClientProtocol.getBlockLocations() method is introduced.
- DFSInputStream fetches only 10 blocks during initialization; subsequent blocks are requested and cached during the regular read().
- pread first tries to use already cached blocks, then requests block locations from the name-node.
- DFSClient.getHints() now calls getBlockLocations(); I removed the redundant getHints() from ClientProtocol and NameNode.
- Many existing tests verify the new functionality; I added one more case to TestPread, which ensures pread correctly reads both cached and uncached blocks.
- Checked code style and JavaDoc.
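The fetch-and-cache behavior described above can be sketched roughly as follows. This is a hedged toy model, not the actual DFSInputStream code: the class and field names (LazyBlockMap, PREFETCH, namenodeCalls) are hypothetical, and a long index stands in for a LocatedBlock.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy sketch of lazy block-location caching (hypothetical names, not the
 * real Hadoop classes): fetch the first few block locations at open time,
 * and ask the name-node for more only when a read passes the cached range.
 */
class LazyBlockMap {
    static final int PREFETCH = 10;           // blocks fetched at open()
    private final List<Long> cached = new ArrayList<>();
    private final long totalBlocks;
    int namenodeCalls = 0;                    // for illustration only

    LazyBlockMap(long totalBlocks) {
        this.totalBlocks = totalBlocks;
        fetchFrom(0);                         // initial segment, like open(src, length)
    }

    /** Stands in for a ClientProtocol.getBlockLocations(src, offset, length) call. */
    private void fetchFrom(long start) {
        namenodeCalls++;
        for (long b = start; b < Math.min(start + PREFETCH, totalBlocks); b++) {
            cached.add(b);                    // stand-in for a LocatedBlock
        }
    }

    /** Returns the block at an index, fetching from the name-node on a cache miss. */
    long blockAt(int index) {
        while (index >= cached.size()) {
            fetchFrom(cached.size());         // extend the cache, as read() would
        }
        return cached.get(index);
    }
}
```

The point of the design is that a sequential reader of a short file never pays for the full block map: only reads past the prefetched segment trigger further name-node calls.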

> dfs client protocol should allow asking for parts of the block map
> ------------------------------------------------------------------
>                 Key: HADOOP-894
>                 URL: https://issues.apache.org/jira/browse/HADOOP-894
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Owen O'Malley
>         Assigned To: Konstantin Shvachko
>         Attachments: partialBlockList.patch
> I think that the HDFS client protocol should change like:
> /** The meta-data about a file that was opened. */
> class OpenFileInfo {
>   /** the info for the first block */
>   public LocatedBlockInfo getBlockInfo();
>   public long getBlockSize();
>   public long getLength();
> }
> interface ClientProtocol extends VersionedProtocol {
>   public OpenFileInfo open(String name) throws IOException;
>   /** get block info for any range of blocks */
>   public LocatedBlockInfo[] getBlockInfo(String name, int blockOffset, int blockLength) throws IOException;
> }
> so that the client can decide how much block info to request and when. Currently, when the file is opened or an error occurs, the entire block list is requested and sent.
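The proposed interface lets the client page through the block map in chunks of its choosing. A minimal sketch of that paging loop, under assumed simplifications (hypothetical ClientProtocolSketch and BlockPager names, a long per block instead of LocatedBlockInfo, checked exceptions omitted):

```java
import java.util.ArrayList;
import java.util.List;

/** Toy stand-in for the proposed protocol method (not the real Hadoop API). */
interface ClientProtocolSketch {
    /** Returns block info for a range of blocks; empty array past end of file. */
    long[] getBlockInfo(String name, int blockOffset, int blockLength);
}

class BlockPager {
    /**
     * Pages through a file's block map in fixed-size chunks, so the client
     * decides how much block info to request and when.
     */
    static List<Long> allBlocks(ClientProtocolSketch proto, String name, int chunk) {
        List<Long> out = new ArrayList<>();
        int offset = 0;
        while (true) {
            long[] page = proto.getBlockInfo(name, offset, chunk);
            if (page.length == 0) break;      // past the last block
            for (long b : page) out.add(b);
            offset += page.length;
        }
        return out;
    }
}
```

A client that only needs the first chunk simply stops after one call, which is exactly the saving over shipping the entire block list at open time.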

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
