hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3996) DFS client should throw version mismatch errors in case of a changed functionality
Date Fri, 22 Aug 2008 06:50:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12624596#action_12624596
] 

dhruba borthakur commented on HADOOP-3996:
------------------------------------------

If there is a protocol version mismatch between client and server, then DFS will bail out. In your
case, it is possible that the protocol version number was not bumped up when the new feature
was introduced.
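The fail-fast behavior described above can be sketched as follows. This is a minimal, hypothetical illustration of a client-side version handshake check, not actual Hadoop code: the names `ProtocolVersionMismatchException`, `checkVersion`, and the version constant are all made up for this example.

```java
import java.io.IOException;

public class VersionCheck {
    // Protocol version this client was built against (illustrative value,
    // not the real DFS protocol version).
    static final long CLIENT_PROTOCOL_VERSION = 41L;

    // Hypothetical exception type: reports both versions so the user sees
    // a clear mismatch message instead of an opaque stream error.
    static class ProtocolVersionMismatchException extends IOException {
        ProtocolVersionMismatchException(long client, long server) {
            super("Protocol version mismatch: client=" + client
                  + " server=" + server + "; refusing to proceed");
        }
    }

    // Would run right after connecting, before any block transfer is
    // attempted, so the client bails out cleanly on a stale build.
    static void checkVersion(long serverVersion) throws IOException {
        if (serverVersion != CLIENT_PROTOCOL_VERSION) {
            throw new ProtocolVersionMismatchException(
                CLIENT_PROTOCOL_VERSION, serverVersion);
        }
    }

    public static void main(String[] args) {
        try {
            checkVersion(42L);  // server reports a newer protocol version
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With a check like this, a stale client fails once with an explicit mismatch message rather than repeatedly abandoning blocks, as seen in the log below.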

> DFS client should throw version mismatch errors in case of a changed functionality
> ----------------------------------------------------------------------------------
>
>                 Key: HADOOP-3996
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3996
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Amar Kamat
>
> I started a hadoop cluster built with the (latest) trunk and tried doing _dfs -put_ with the (dfs) clients from the (older/stale) trunk. The client went ahead and tried to upload the data onto the cluster, and failed with the following error:
> {noformat}
> -bash-3.00$ ./bin/hadoop dfs -put file file
> 08/08/22 05:11:06 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Could not read from stream
> 08/08/22 05:11:06 INFO hdfs.DFSClient: Abandoning block blk_5748330682182803489_1002
> 08/08/22 05:11:12 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Could not read from stream
> 08/08/22 05:11:12 INFO hdfs.DFSClient: Abandoning block blk_7482082538144151768_1002
> 08/08/22 05:11:18 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Could not read from stream
> 08/08/22 05:11:18 INFO hdfs.DFSClient: Abandoning block blk_-3132217232090937466_1002
> 08/08/22 05:11:24 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Could not read from stream
> 08/08/22 05:11:24 INFO hdfs.DFSClient: Abandoning block blk_-6473055472384366978_1002
> 08/08/22 05:11:30 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable
to create new block.
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2504)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1810)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1990)
> put: Filesystem closed
> {noformat}
> It would be better if somehow the client detected that it is not *made* for this master and simply bailed out.
> ----
> Note that I *did not* do this on purpose; I forgot to replace the older installation with the newer one.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

