Delivered-To: mailing list hdfs-commits@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Subject: svn commit: r1575110 - in /hadoop/common/branches/branch-2/hadoop-hdfs-project: ./ hadoop-hdfs/ hadoop-hdfs/src/main/java/ hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/ hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/ hadoop-hdfs/src/main/n...
Date: Fri, 07 Mar 2014 01:19:06 -0000
To: hdfs-commits@hadoop.apache.org
From: cmccabe@apache.org
Message-Id: <20140307011907.2F62823889F1@eris.apache.org>

Author: cmccabe
Date: Fri Mar 7 01:19:05 2014
New Revision: 1575110

URL: http://svn.apache.org/r1575110
Log:
HDFS-6065. HDFS zero-copy reads should return null on EOF when doing ZCR (cmccabe)

Modified:
    hadoop/common/branches/branch-2/hadoop-hdfs-project/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/ShortCircuitReplica.java
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/   (props changed)
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
    hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project:r1575109

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs:r1575109

Modified:
hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt?rev=1575110&r1=1575109&r2=1575110&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt Fri Mar 7 01:19:05 2014
@@ -323,6 +323,9 @@ Release 2.4.0 - UNRELEASED
 
     HDFS-6067. TestPread.testMaxOutHedgedReadPool is flaky (cmccabe)
 
+    HDFS-6065. HDFS zero-copy reads should return null on EOF when doing ZCR
+    (cmccabe)
+
   BREAKDOWN OF HDFS-5698 SUBTASKS AND RELATED JIRAS
 
     HDFS-5717. Save FSImage header in protobuf. (Haohui Mai via jing9)

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java:r1575109

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java?rev=1575110&r1=1575109&r2=1575110&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java Fri Mar 7 01:19:05 2014
@@ -1556,13 +1556,27 @@ implements ByteBufferReadable, CanSetDro
     closeCurrentBlockReader();
   }
 
+  /**
+   * The immutable empty buffer we return when we reach EOF when doing a
+   * zero-copy read.
+   */
+  private static final ByteBuffer EMPTY_BUFFER =
+      ByteBuffer.allocateDirect(0).asReadOnlyBuffer();
+
   @Override
   public synchronized ByteBuffer read(ByteBufferPool bufferPool,
       int maxLength, EnumSet<ReadOption> opts)
           throws IOException, UnsupportedOperationException {
-    assert(maxLength > 0);
-    if (((blockReader == null) || (blockEnd == -1)) &&
-        (pos < getFileLength())) {
+    if (maxLength == 0) {
+      return EMPTY_BUFFER;
+    } else if (maxLength < 0) {
+      throw new IllegalArgumentException("can't read a negative " +
+          "number of bytes.");
+    }
+    if ((blockReader == null) || (blockEnd == -1)) {
+      if (pos >= getFileLength()) {
+        return null;
+      }
       /*
        * If we don't have a blockReader, or the one we have has no more bytes
        * left to read, we call seekToBlockSource to get a new blockReader and
@@ -1645,6 +1659,7 @@ implements ByteBufferReadable, CanSetDro
 
   @Override
   public synchronized void releaseBuffer(ByteBuffer buffer) {
+    if (buffer == EMPTY_BUFFER) return;
     Object val = extendedReadBuffers.remove(buffer);
     if (val == null) {
       throw new IllegalArgumentException("tried to release a buffer " +

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/ShortCircuitReplica.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/ShortCircuitReplica.java?rev=1575110&r1=1575109&r2=1575110&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/ShortCircuitReplica.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/ShortCircuitReplica.java Fri Mar 7 01:19:05 2014
@@ -277,7 +277,11 @@ public class ShortCircuitReplica {
   MappedByteBuffer loadMmapInternal() {
     try {
       FileChannel channel = dataStream.getChannel();
-      return channel.map(MapMode.READ_ONLY, 0, channel.size());
+      MappedByteBuffer mmap = channel.map(MapMode.READ_ONLY, 0, channel.size());
+      if (LOG.isTraceEnabled()) {
+        LOG.trace(this + ": created mmap of size " + channel.size());
+      }
+      return mmap;
     } catch (IOException e) {
       LOG.warn(this + ": mmap error", e);
       return null;

Propchange: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native:r1575109

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h?rev=1575110&r1=1575109&r2=1575110&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/expect.h Fri Mar 7 01:19:05 2014
@@ -38,7 +38,7 @@ struct hdfsFile_internal;
 
 #define EXPECT_NULL(x) \
     do { \
-        void* __my_ret__ = x; \
+        const void* __my_ret__ = x; \
         int __my_errno__ = errno; \
         if (__my_ret__ != NULL) { \
             fprintf(stderr, "TEST_ERROR: failed on %s:%d (errno: %d): " \
@@ -50,7 +50,7 @@ struct hdfsFile_internal;
 
 #define EXPECT_NONNULL(x) \
     do { \
-        void* __my_ret__ = x; \
+        const void* __my_ret__ = x; \
         int __my_errno__ = errno; \
         if (__my_ret__ == NULL) { \
             fprintf(stderr, "TEST_ERROR: failed on %s:%d (errno: %d): " \

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h?rev=1575110&r1=1575109&r2=1575110&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h Fri Mar 7 01:19:05 2014
@@ -746,12 +746,16 @@ extern "C" {
  * @param maxLength      The maximum length to read.  We may read fewer bytes
  *                       than this length.
  *
- * @return               On success, returns a new hadoopRzBuffer.
+ * @return               On success, we will return a new hadoopRzBuffer.
  *                       This buffer will continue to be valid and readable
  *                       until it is released by readZeroBufferFree.  Failure to
  *                       release a buffer will lead to a memory leak.
+ *                       You can access the data within the hadoopRzBuffer with
+ *                       hadoopRzBufferGet.  If you have reached EOF, the data
+ *                       within the hadoopRzBuffer will be NULL.  You must still
+ *                       free hadoopRzBuffer instances containing NULL.
  *
- *                       NULL plus an errno code on an error.
+ *                       On failure, we will return NULL plus an errno code.
  *                       errno = EOPNOTSUPP indicates that we could not do a
  *                       zero-copy read, and there was no ByteBufferPool
  *                       supplied.
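The Javadoc contract above has a Java-side counterpart in the DFSInputStream change: zero-length reads hand out a single shared, read-only, zero-capacity direct buffer, and releaseBuffer() recognizes that sentinel by identity so it is never returned to a ByteBufferPool. A minimal standalone sketch of the idiom using only java.nio (class and method names here are ours, not Hadoop's):

```java
import java.nio.ByteBuffer;

public class EmptyBufferSketch {
    // A single immutable zero-capacity buffer, analogous to
    // DFSInputStream.EMPTY_BUFFER in the diff above.
    static final ByteBuffer EMPTY_BUFFER =
        ByteBuffer.allocateDirect(0).asReadOnlyBuffer();

    // Mirrors the releaseBuffer() early return: the shared sentinel is
    // compared by identity (==), not equals(), so only real pool-allocated
    // buffers are ever released.
    static boolean needsRelease(ByteBuffer buffer) {
        return buffer != EMPTY_BUFFER;
    }

    public static void main(String[] args) {
        ByteBuffer fromZeroLengthRead = EMPTY_BUFFER;
        System.out.println(fromZeroLengthRead.remaining());   // 0
        System.out.println(fromZeroLengthRead.isReadOnly());  // true
        System.out.println(needsRelease(fromZeroLengthRead)); // false
    }
}
```

Because the sentinel is read-only and zero-capacity, callers cannot mutate it, and identity comparison is safe since exactly one instance ever exists.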
Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c?rev=1575110&r1=1575109&r2=1575110&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c Fri Mar 7 01:19:05 2014
@@ -38,6 +38,9 @@
 #define TEST_ZEROCOPY_LAST_BLOCK_SIZE 3215
 #define TEST_ZEROCOPY_NUM_BLOCKS 6
 #define SMALL_READ_LEN 16
+#define TEST_ZEROCOPY_FILE_LEN \
+  (((TEST_ZEROCOPY_NUM_BLOCKS - 1) * TEST_ZEROCOPY_FULL_BLOCK_SIZE) + \
+    TEST_ZEROCOPY_LAST_BLOCK_SIZE)
 
 #define ZC_BUF_LEN 32768
 
@@ -165,6 +168,22 @@ static int doTestZeroCopyReads(hdfsFS fs
     EXPECT_ZERO(memcmp(block, hadoopRzBufferGet(buffer) +
         (TEST_ZEROCOPY_FULL_BLOCK_SIZE - SMALL_READ_LEN), SMALL_READ_LEN));
     hadoopRzBufferFree(file, buffer);
+
+    /* Check the result of a zero-length read. */
+    buffer = hadoopReadZero(file, opts, 0);
+    EXPECT_NONNULL(buffer);
+    EXPECT_NONNULL(hadoopRzBufferGet(buffer));
+    EXPECT_INT_EQ(0, hadoopRzBufferLength(buffer));
+    hadoopRzBufferFree(file, buffer);
+
+    /* Check the result of reading past EOF */
+    EXPECT_INT_EQ(0, hdfsSeek(fs, file, TEST_ZEROCOPY_FILE_LEN));
+    buffer = hadoopReadZero(file, opts, 1);
+    EXPECT_NONNULL(buffer);
+    EXPECT_NULL(hadoopRzBufferGet(buffer));
+    hadoopRzBufferFree(file, buffer);
+
+    /* Cleanup */
     free(block);
     hadoopRzOptionsFree(opts);
     EXPECT_ZERO(hdfsCloseFile(fs, file));

Modified: hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java?rev=1575110&r1=1575109&r2=1575110&view=diff
==============================================================================
--- hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java (original)
+++ hadoop/common/branches/branch-2/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java Fri Mar 7 01:19:05 2014
@@ -753,6 +753,10 @@ public class TestEnhancedByteBufferAcces
       fsIn = fs.open(TEST_PATH);
       ByteBuffer buf = fsIn.read(null, 1, EnumSet.of(ReadOption.SKIP_CHECKSUMS));
       fsIn.releaseBuffer(buf);
+      // Test EOF behavior
+      IOUtils.skipFully(fsIn, TEST_FILE_LENGTH - 1);
+      buf = fsIn.read(null, 1, EnumSet.of(ReadOption.SKIP_CHECKSUMS));
+      Assert.assertEquals(null, buf);
     } finally {
      if (fsIn != null) fsIn.close();
      if (fs != null) fs.close();
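Taken together, the changes above define a three-way contract for zero-copy read(): a zero-length request returns a shared empty buffer, a negative length throws IllegalArgumentException, and EOF returns null instead of tripping an assert. A hypothetical in-memory model of that contract (no Hadoop dependency; the class name and backing array are ours, purely for illustration):

```java
import java.nio.ByteBuffer;

// Hypothetical stand-in for DFSInputStream's new zero-copy read contract,
// backed by a plain byte array rather than HDFS blocks.
public class ZeroCopyContract {
    private static final ByteBuffer EMPTY_BUFFER =
        ByteBuffer.allocateDirect(0).asReadOnlyBuffer();

    private final byte[] data;
    private int pos = 0;

    ZeroCopyContract(byte[] data) { this.data = data; }

    // Zero-length request: shared empty buffer.
    // Negative request: IllegalArgumentException.
    // At EOF: null (not an exception, not an empty buffer).
    // Otherwise: a buffer over up to maxLength bytes (a short read is legal).
    ByteBuffer read(int maxLength) {
        if (maxLength == 0) {
            return EMPTY_BUFFER;
        } else if (maxLength < 0) {
            throw new IllegalArgumentException(
                "can't read a negative number of bytes.");
        }
        if (pos >= data.length) {
            return null;
        }
        int n = Math.min(maxLength, data.length - pos);
        ByteBuffer buf = ByteBuffer.wrap(data, pos, n).slice();
        pos += n;
        return buf;
    }

    public static void main(String[] args) {
        ZeroCopyContract in = new ZeroCopyContract(new byte[] {1, 2, 3});
        System.out.println(in.read(0).remaining()); // 0: empty-buffer case
        System.out.println(in.read(8).remaining()); // 3: short read up to EOF
        System.out.println(in.read(1));             // null: EOF case
    }
}
```

This mirrors why the TestEnhancedByteBufferAccess addition asserts `Assert.assertEquals(null, buf)` after skipping to the last byte: once position reaches file length, the caller sees null and needs no releaseBuffer() call.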