hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1242) dfs upgrade/downgrade problems
Date Wed, 16 May 2007 18:51:16 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12496371 ]

Konstantin Shvachko commented on HADOOP-1242:
---------------------------------------------

I think we should target a more general task here (if any), which could be called "backward
incompatibility," I guess.
Namely, conversion from a pre-upgrade layout version to the current one should be performed
in such a way that any attempt to run old-version HDFS software on the converted repository will fail.
- For data-nodes this can be achieved by retaining the storage file in its original location
but updating its version to a newer one. The old data-node code will then complain that it
cannot read a future-version storage.
- For the name-node we can likewise write the new version into the old image file, or corrupt
the image in some other way, say by placing a message inside: "This image is corrupted intentionally. Please do not remove."

The only drawback of this approach that I can see is that the old storage and image files
will have to stay in the repositories forever; even if you create an HDFS from scratch, the
old files will still need to be created if we want backward incompatibility to be enforced.
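The data-node idea above could be sketched roughly as follows. This is an illustrative sketch only, not Hadoop's actual code: the class and method names (`PoisonOldStorage`, `poison`, `oldSoftwareCheck`), the `Properties`-based file format, and the version constants are all assumptions; the convention that newer layout versions are more negative follows Hadoop's, but the check itself is simplified.

```java
import java.io.*;
import java.util.Properties;

// Sketch of the proposed conversion step: after upgrading to the new
// layout, rewrite the *old* storage file in place with a newer layout
// version, so that pre-upgrade software refuses to start against a
// converted repository instead of misreading it.
// All names here are hypothetical, not Hadoop's real implementation.
public class PoisonOldStorage {
    // Assumed convention: newer layout versions are more negative.
    static final int NEW_LAYOUT_VERSION = -6;

    // Rewrite the retained old storage file with the new version.
    static void poison(File oldStorageFile) throws IOException {
        Properties props = new Properties();
        props.setProperty("layoutVersion", Integer.toString(NEW_LAYOUT_VERSION));
        props.setProperty("note", "This file is intentionally kept; do not remove.");
        try (FileOutputStream out = new FileOutputStream(oldStorageFile)) {
            props.store(out, "rewritten during upgrade");
        }
    }

    // What the *old* software's startup check would do: refuse to read
    // any storage whose layout version is newer than it supports.
    static void oldSoftwareCheck(File oldStorageFile, int supportedVersion)
            throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(oldStorageFile)) {
            props.load(in);
        }
        int found = Integer.parseInt(props.getProperty("layoutVersion"));
        if (found < supportedVersion) { // more negative == newer
            throw new IOException("Incompatible future layout version " + found);
        }
    }
}
```

With this in place, old software (supporting, say, version -4) fails fast on a poisoned file rather than attempting a conversion it cannot perform.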

> dfs upgrade/downgrade problems
> ------------------------------
>
>                 Key: HADOOP-1242
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1242
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.13.0
>            Reporter: Owen O'Malley
>         Assigned To: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.13.0
>
>         Attachments: clean-upgrade.patch
>
>
> I ran my test cluster on 0.13 and then tried to run it under 0.12. When I downgraded, the namenode would not come up and the message said I needed to format the filesystem. I ignored that and tried to restart on 0.13; now the datanode will not come up with:
> 2007-04-10 11:25:37,448 ERROR org.apache.hadoop.dfs.DataNode: org.apache.hadoop.dfs.InconsistentFSStateException: Directory /local/owen/hadoop/dfs/data is in an inconsistent state: Old layout block directory /local/owen/hadoop/dfs/data/data is missing
>         at org.apache.hadoop.dfs.DataStorage.isConversionNeeded(DataStorage.java:170)
>         at org.apache.hadoop.dfs.Storage$StorageDirectory.analyzeStorage(Storage.java:264)
>         at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:83)
>         at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:230)
>         at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:199)
>         at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:1175)
>         at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1119)
>         at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:1140)
>         at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1299)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

