hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (HADOOP-4533) HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible
Date Wed, 29 Oct 2008 21:54:44 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang resolved HADOOP-4533.
-----------------------------------

      Resolution: Fixed
    Hadoop Flags: [Reviewed]

I've committed this.

> HDFS client of hadoop 0.18.1 and HDFS server 0.18.2 (0.18 branch) not compatible
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-4533
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4533
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Runping Qi
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.18.2
>
>         Attachments: balancerRM_br18.patch
>
>
> Not sure whether this is considered a bug or expected behavior, but here are the details.
> I have a cluster using a build from hadoop 0.18 branch.
> When I tried to use the hadoop 0.18.1 dfs client to load files to it, I got the following exceptions:
> hadoop --config ~/test dfs -copyFromLocal gridmix-env /tmp/.
> 08/10/28 16:23:00 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 08/10/28 16:23:00 INFO dfs.DFSClient: Abandoning block blk_-439926292663595928_1002
> 08/10/28 16:23:06 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 08/10/28 16:23:06 INFO dfs.DFSClient: Abandoning block blk_5160335053668168134_1002
> 08/10/28 16:23:12 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 08/10/28 16:23:12 INFO dfs.DFSClient: Abandoning block blk_4168253465442802441_1002
> 08/10/28 16:23:18 INFO dfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Could not read from stream
> 08/10/28 16:23:18 INFO dfs.DFSClient: Abandoning block blk_-2631672044886706846_1002
> 08/10/28 16:23:24 WARN dfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2349)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1735)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1912)
> 08/10/28 16:23:24 WARN dfs.DFSClient: Error Recovery for block blk_-2631672044886706846_1002 bad datanode[0]
> copyFromLocal: Could not get block locations. Aborting...
> Exception closing file /tmp/gridmix-env
> java.io.IOException: Could not get block locations. Aborting...
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
> This problem has a severe impact on Pig 2.0, since it is pre-packaged with hadoop 0.18.1 and will use the hadoop 0.18.1 dfs client in its interactions with the hadoop cluster.
> That means that Pig 2.0 will not work with the to-be-released hadoop 0.18.2.
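The failure above comes from a client and server running different builds of the 0.18 line. As a rough illustration (not taken from the issue itself), a pre-flight shell check could compare client and server version strings before attempting a copy. The version values below are hardcoded placeholders standing in for what `hadoop version` would report on the client and what the NameNode reports on the server side:

```shell
#!/bin/sh
# Hypothetical pre-flight check: warn before running dfs commands if the
# client and server Hadoop versions differ. The two values are hardcoded
# for illustration only; in practice they would be obtained from
# `hadoop version` on the client and from the NameNode on the server.
client_version="0.18.1"
server_version="0.18.2"

if [ "$client_version" != "$server_version" ]; then
  echo "WARNING: client ($client_version) and server ($server_version) versions differ"
else
  echo "versions match"
fi
```

A check like this would not have fixed the incompatibility, but it would have turned the opaque `Could not read from stream` errors into an immediate, readable warning.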

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

