hadoop-hdfs-dev mailing list archives

From "Gopal V (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-9146) HDFS forward seek() within a block shouldn't spawn new TCP Peer/RemoteBlockReader
Date Fri, 25 Sep 2015 19:42:04 GMT
Gopal V created HDFS-9146:

             Summary: HDFS forward seek() within a block shouldn't spawn new TCP Peer/RemoteBlockReader
                 Key: HDFS-9146
                 URL: https://issues.apache.org/jira/browse/HDFS-9146
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: HDFS
    Affects Versions: 2.7.1, 2.6.0, 2.8.0
            Reporter: Gopal V

When a seek() followed by a forward readFully() is issued from a remote DFSClient, HDFS opens a new
remote block reader even though the seek lands within the same HDFS block.
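The access pattern can be sketched with a small, self-contained simulation (illustrative only; the names and counters below are not the real DFSClient internals -- connectionsOpened stands in for new TCP Peer/RemoteBlockReader instances):

```java
// Illustrative simulation: each positional read that does not continue from
// where the previous reader stopped discards that reader, so a forward seek
// within the same block opens a new connection.
public class SeekReadSketch {
    static final long BLOCK_SIZE = 128L * 1024 * 1024; // typical HDFS block size
    int connectionsOpened = 0; // stands in for new TCP Peer / RemoteBlockReader
    long readerPos = -1;       // next offset the current reader would serve

    // Simulates seek(offset) + readFully(len) within one block.
    void readFully(long offset, long len) {
        if (readerPos != offset) {
            // The current reader (if any) did not reach this offset:
            // its peer is thrown away and a new reader is opened for
            // the remainder of the block.
            connectionsOpened++;
        }
        readerPos = offset + len;
    }

    public static void main(String[] args) {
        SeekReadSketch in = new SeekReadSketch();
        in.readFully(0, 1024);        // first read: one connection
        in.readFully(4096, 1024);     // forward seek, same block: NEW connection
        in.readFully(1 << 20, 1024);  // and again
        System.out.println("connections=" + in.connectionsOpened); // prints connections=3
    }
}
```

A strictly sequential reader (each readFully starting where the last ended) would keep connectionsOpened at 1, which is the behaviour one would expect for seeks inside a single block as well.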

(analysis from [~rajesh.balamohan])

This happens because a plain read operation assumes the user is going to read
to the end of the block:

      try {
        blockReader = getBlockReader(targetBlock, offsetIntoBlock,
            targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
            storageType, chosenNode);


Since the user has not read to the end of the block when the next seek happens, the BlockReader
treats the read as aborted and throws away the TCP peer it holds.


    // If we've now satisfied the whole client read, read one last packet
    // header, which should be empty
    if (bytesNeededToFinish <= 0) {

Since that condition is never satisfied, sentStatusCode stays false and the peer is not returned to the peer cache:

    if (peerCache != null && sentStatusCode) {
      peerCache.put(datanodeID, peer);
    } else {
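Putting the two snippets together, the peer-return decision can be modelled in a few lines (a simplified sketch, not the actual RemoteBlockReader fields; peerReturnedToCache is an illustrative name):

```java
// Simplified model: sentStatusCode is only set once the client has consumed
// the reader's entire requested range (bytesNeededToFinish <= 0). Because
// getBlockReader() always requests the rest of the block, a reader abandoned
// by an early forward seek never sends the status code, so its peer is
// closed instead of being returned to the peer cache.
public class PeerReturnSketch {
    static boolean peerReturnedToCache(long requestedLen, long bytesActuallyRead) {
        long bytesNeededToFinish = requestedLen - bytesActuallyRead;
        boolean sentStatusCode = false;
        if (bytesNeededToFinish <= 0) {
            // whole requested range satisfied: read the final empty packet
            // header and send the status code to the datanode
            sentStatusCode = true;
        }
        // mirrors: if (peerCache != null && sentStatusCode) peerCache.put(...)
        return sentStatusCode;
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;
        long offsetIntoBlock = 0;
        // getBlockReader() asks for everything up to the end of the block
        long requested = blockSize - offsetIntoBlock;
        System.out.println(peerReturnedToCache(requested, 1024));      // short read: peer discarded
        System.out.println(peerReturnedToCache(requested, requested)); // full read: peer cached
    }
}
```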

This message was sent by Atlassian JIRA
