hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4386) RPC support for large data transfers.
Date Fri, 10 Oct 2008 05:57:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12638490#action_12638490 ]

Raghu Angadi commented on HADOOP-4386:
--------------------------------------

Ideally, no RPC should be blocked because of another "zero copy RPC" (where the RPC layer has no control over how fast the data is written or read).
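
To make the concern concrete, here is a minimal, hypothetical Java sketch (not Hadoop code; every class and method name is invented) of one way to keep bulk transfers off the ordinary RPC handler threads, so a slow reader or writer cannot stall unrelated small calls:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Toy illustration only: if large transfers share the same fixed handler
 * pool as ordinary RPCs, a slow client can pin a handler and delay small
 * calls. Handing the bulk transfer to a separate executor keeps the RPC
 * handlers free.
 */
public class HandlerIsolationSketch {
  // Small pool that answers ordinary RPCs; mirrors a fixed handler count.
  private final ExecutorService rpcHandlers = Executors.newFixedThreadPool(4);
  // Dedicated pool for bulk data, so slow clients cannot starve rpcHandlers.
  private final ExecutorService bulkTransfers = Executors.newFixedThreadPool(4);

  public void handleSmallRpc(Runnable call) {
    rpcHandlers.submit(call);           // short work, handler returns quickly
  }

  public void handleLargeTransfer(Runnable streamingWork) {
    // The handler only enqueues the work; the transfer then proceeds at the
    // client's pace on its own thread and never blocks another RPC.
    bulkTransfers.submit(streamingWork);
  }
}

This is just one possible arrangement; the point is that the pacing of a large transfer should be decoupled from the threads that serve everything else.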

> RPC support for large data transfers.
> -------------------------------------
>
>                 Key: HADOOP-4386
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4386
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs, ipc
>            Reporter: Raghu Angadi
>
> Currently HDFS has a socket-level protocol for serving HDFS data to clients. Clients do not use RPCs to read or write data. Fundamentally, there is no reason why this data transfer cannot use RPCs.
> This jira is a placeholder for porting Datanode transfers to RPC. This topic has been discussed in varying detail many times, most recently in the context of HADOOP-3856. There are quite a few issues to be resolved, both at the API level and at the implementation level.
> We should probably copy some of the comments from HADOOP-3856 to here.
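
As an illustration of what the quoted description is asking for (nothing here comes from the issue or from the actual DataNode code; the interface and its method names are made up), a block transfer exposed over RPC might look roughly like:

/** Hypothetical RPC interface for block data; names are invented for illustration. */
public interface BulkDataProtocol {
  /** Read up to 'length' bytes of block 'blockId' starting at 'offset'. */
  byte[] readBlock(long blockId, long offset, int length);

  /** Append a chunk to block 'blockId' at 'offset'; returns bytes accepted. */
  int writeBlock(long blockId, long offset, byte[] data);
}

Whether such calls chunk the data per request or hand off a stream is exactly the kind of API-level question the issue leaves open.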

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

