hadoop-hdfs-issues mailing list archives

From "Patrick Kling (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-1527) SocketOutputStream.transferToFully fails for blocks >= 2GB on 32 bit JVM
Date Sat, 04 Dec 2010 01:32:11 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Kling updated HDFS-1527:
--------------------------------

    Attachment: HDFS-1527.patch

This patch falls back to a regular transfer instead of transferTo if we are running on a 32-bit JVM and the block size is >= Integer.MAX_VALUE. It also re-enables TestLargeBlock on 32-bit JVMs.
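The fallback described above can be sketched roughly as follows. This is an illustrative sketch, not the attached patch: the class and method names (`TransferFallback`, `transferFully`, `is32BitJvm`) are hypothetical, and the 32-bit detection via the `sun.arch.data.model` system property is one common heuristic, not necessarily what the patch uses.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class TransferFallback {

    // Hypothetical check: HotSpot reports "32" on 32-bit JVMs.
    static boolean is32BitJvm() {
        return "32".equals(System.getProperty("sun.arch.data.model"));
    }

    static void transferFully(FileChannel fileCh, WritableByteChannel sockCh,
                              long position, long count) throws IOException {
        if (is32BitJvm() && count >= Integer.MAX_VALUE) {
            // Plain buffered copy: transferTo can fail with
            // "Value too large for defined data type" for >= 2GB on 32-bit.
            ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
            long remaining = count;
            while (remaining > 0) {
                buf.clear();
                buf.limit((int) Math.min(buf.capacity(), remaining));
                int read = fileCh.read(buf, position);
                if (read < 0) {
                    break; // unexpected EOF
                }
                buf.flip();
                while (buf.hasRemaining()) {
                    sockCh.write(buf);
                }
                position += read;
                remaining -= read;
            }
        } else {
            // Zero-copy path: loop because transferTo may transfer
            // fewer bytes than requested.
            long transferred = 0;
            while (transferred < count) {
                transferred += fileCh.transferTo(position + transferred,
                                                 count - transferred, sockCh);
            }
        }
    }
}
{code}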

ant test-patch output:

{code}
     [exec] +1 overall.  
     [exec] 
     [exec]     +1 @author.  The patch does not contain any @author tags.
     [exec] 
     [exec]     +1 tests included.  The patch appears to include 3 new or modified tests.
     [exec] 
     [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
     [exec] 
     [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
     [exec] 
     [exec]     +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.
     [exec] 
     [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
     [exec] 
     [exec]     +1 system test framework.  The patch passed system test framework compile.
{code}

> SocketOutputStream.transferToFully fails for blocks >= 2GB on 32 bit JVM
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1527
>                 URL: https://issues.apache.org/jira/browse/HDFS-1527
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.23.0
>         Environment: 32 bit JVM
>            Reporter: Patrick Kling
>             Fix For: 0.23.0
>
>         Attachments: HDFS-1527.patch
>
>
> On a 32-bit JVM, SocketOutputStream.transferToFully() fails if the block size is >= 2GB. We should fall back to a normal transfer in this case.
> {code}
> 2010-12-02 19:04:23,490 ERROR datanode.DataNode (BlockSender.java:sendChunks(399)) - BlockSender.sendChunks() exception: java.io.IOException: Value too large for defined data type
>         at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>         at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:418)
>         at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:519)
>         at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:204)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:386)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:475)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opReadBlock(DataXceiver.java:196)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opReadBlock(DataTransferProtocol.java:356)
>         at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:328)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:130)
>         at java.lang.Thread.run(Thread.java:619)
> {code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

