hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3164) Use FileChannel.transferTo() when data is read from DataNode.
Date Sat, 19 Apr 2008 02:04:22 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12590663#action_12590663
] 

Konstantin Shvachko commented on HADOOP-3164:
---------------------------------------------

# DataNode.useChannelForTransferTo
I am not in favor of a lot of very OS-dependent, and even OS-version-dependent, code. Rather
than enumerating all known OSs that we have observed to be free of the problem, we should assume
that all OSs behave well and take action on those that don't when a problem is reported.
In practice this means we should eliminate the boolean useChannelForTransferTo and retain
the part of the code that corresponds to the true value.
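For context, keeping only the channel-based path boils down to a bounded transferTo() loop. A minimal standalone sketch of that path (the helper name {{transferRegion}} and the in-memory sink are illustrative only, not Hadoop code):

```java
import java.io.*;
import java.nio.channels.*;

public class TransferToSketch {
    // Illustrative helper: copy a region of a file to a writable channel
    // using FileChannel.transferTo(), avoiding user-space buffer copies.
    static long transferRegion(FileChannel fileCh, long pos, long count,
                               WritableByteChannel out) throws IOException {
        long remaining = count;
        while (remaining > 0) {
            // transferTo() may transfer fewer bytes than requested,
            // so loop until the region is fully sent.
            long n = fileCh.transferTo(pos, remaining, out);
            if (n <= 0) {
                break;  // 0 can mean the target cannot accept more data now
            }
            pos += n;
            remaining -= n;
        }
        return count - remaining;
    }

    public static void main(String[] args) throws IOException {
        // Build a small temp file standing in for a block file.
        File src = File.createTempFile("block", ".dat");
        try (FileOutputStream fos = new FileOutputStream(src)) {
            fos.write("0123456789".getBytes());
        }
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (FileChannel ch = new FileInputStream(src).getChannel()) {
            long sent = transferRegion(ch, 2, 5, Channels.newChannel(sink));
            System.out.println(sent + " " + sink.toString());  // 5 23456
        }
        src.delete();
    }
}
```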
# DataNode.transferToFully()
#- Analyzing IOException message text is *BAD*. Instead, let's try to call waitForWritable()
before transferTo(). The expectation is that if the socket buffer is full, waitForWritable()
will wait until there is space to write, and this will work around the Linux EAGAIN
bug Raghu mentioned.
#- I'd make transferToFully() a member of SocketOutputStream rather than a DataNode static
method.
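A sketch of the two suggestions combined, assuming a waitForWritable() with the semantics described above (class and method names here are illustrative, not Hadoop's actual API):

```java
import java.io.*;
import java.nio.channels.*;

// Sketch: transferToFully() as a member of a SocketOutputStream-like class,
// rather than a DataNode static method.
abstract class AbstractSocketOutputStream {
    // Assumed to block until the socket send buffer has space (illustrative).
    abstract void waitForWritable() throws IOException;

    // The underlying writable channel (a socket channel in the real case).
    abstract WritableByteChannel getChannel();

    /** Transfer exactly 'count' bytes from fileCh starting at 'position'. */
    public void transferToFully(FileChannel fileCh, long position, long count)
            throws IOException {
        while (count > 0) {
            // Wait for writability *before* calling transferTo(), instead of
            // parsing an EAGAIN-related message out of an IOException after
            // the fact.
            waitForWritable();
            long n = fileCh.transferTo(position, count, getChannel());
            position += n;
            count -= n;
        }
    }
}

public class TransferToFullySketch {
    public static void main(String[] args) throws IOException {
        // Temp file standing in for a block file.
        File src = File.createTempFile("block", ".dat");
        try (FileOutputStream fos = new FileOutputStream(src)) {
            fos.write("hello-datanode".getBytes());
        }
        final ByteArrayOutputStream sink = new ByteArrayOutputStream();
        AbstractSocketOutputStream out = new AbstractSocketOutputStream() {
            void waitForWritable() { /* in-memory sink is always writable */ }
            WritableByteChannel getChannel() { return Channels.newChannel(sink); }
        };
        try (FileChannel ch = new FileInputStream(src).getChannel()) {
            out.transferToFully(ch, 0, ch.size());
        }
        System.out.println(sink.toString());  // hello-datanode
        src.delete();
    }
}
```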
# BlockSender.sendBlock()
#- I am not sure I understand why the new argument is needed. What is wrong with declaring it as
{code}
long sendBlock(OutputStream out, Throttler throttler) throws IOException {
  if( out == null ) {
    throw new IOException( "out stream is null" );
  }
  this.out = out;
  ......................................
}
{code}
and then calling it as
{code}
long read = blockSender.sendBlock(out, throttler);
   or
long read = blockSender.sendBlock(baseStream, throttler);
{code}
#- Also, the use of this.out and the parameter out is very ambiguous here.

> Use FileChannel.transferTo() when data is read from DataNode.
> -------------------------------------------------------------
>
>                 Key: HADOOP-3164
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3164
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3164.patch, HADOOP-3164.patch, HADOOP-3164.patch, HADOOP-3164.patch
>
>
> HADOOP-2312 talks about using FileChannel's [{{transferTo()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferTo(long,%20long,%20java.nio.channels.WritableByteChannel)]
> and [{{transferFrom()}}|http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html#transferFrom(java.nio.channels.ReadableByteChannel,%20long,%20long)]
> in DataNode. 
> At the time, DataNode neither used NIO sockets nor wrote large chunks of contiguous block
> data to the socket. Hadoop 0.17 does both when data is served to clients (and other datanodes).
> I am planning to try using transferTo() in the trunk. This might reduce DataNode's CPU usage by
> another 50% or more.
> Once HADOOP-1702 is committed, we can look into using transferFrom().

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

