From "Brian Bockelman (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4298) File corruption when reading with fuse-dfs
Date Mon, 29 Sep 2008 22:40:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12635566#action_12635566 ]

Brian Bockelman commented on HADOOP-4298:
-----------------------------------------

More debugging info.  This is definitely bad logic in the optimization block of fuse-dfs, in the
dfs_read function.  It would not work correctly on any architecture.

I turned on some of the debugging statements, and got the following:

Sep 29 17:27:03 node182 fuse_dfs: Cache bounds for /user/brian/test_helloworld4: 0 -> 10485760
(10485760 bytes). Check for offset 10199040 
Sep 29 17:27:03 node182 fuse_dfs: FUSE requested 131072 bytes of /user/brian/test_helloworld4
for offset 10199040 in file 
Sep 29 17:27:03 node182 fuse_dfs: Cache bounds for /user/brian/test_helloworld4: 0 -> 10485760
(10485760 bytes). Check for offset 10330112 
Sep 29 17:27:03 node182 fuse_dfs: FUSE requested 131072 bytes of /user/brian/test_helloworld4
for offset 10330112 in file 
Sep 29 17:27:03 node182 fuse_dfs: Cache bounds for /user/brian/test_helloworld4: 0 -> 10485760
(10485760 bytes). Check for offset 10461184 
Sep 29 17:27:03 node182 fuse_dfs: FUSE requested 131072 bytes of /user/brian/test_helloworld4
for offset 10461184 in file 

Pause here.  The contents of the cache are [0, 10485760]; however, we are trying to read from 10461184
(in the cache) to 10461184 + 131072 = 10592256 (not in the cache!!!).

Note that fuse-dfs serves this out of cache, and *does not* trigger a read!
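
To make the failure mode concrete, here is a minimal sketch of what the broken bounds check
amounts to (the identifiers are my paraphrase, not the actual fuse-dfs source):

    /* Inside dfs_read: the cache holds bytes [cache_start, cache_start + cache_len). */
    if (offset >= cache_start && offset < cache_start + cache_len) {
        /* BUG: only the start of the request is checked.  A request for
         * `size` bytes can run past the end of the cached region, so the
         * tail of this copy comes from uninitialized buffer memory. */
        memcpy(buf, cache_buf + (offset - cache_start), size);
        return size;
    }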

Sep 29 17:27:03 node182 fuse_dfs: Cache bounds for /user/brian/test_helloworld4: 0 -> 10485760
(10485760 bytes). Check for offset 10592256 
Sep 29 17:27:03 node182 fuse_dfs: Reading /user/brian/test_helloworld4 from HDFS, offset 10592256,
amount 10485760 
Sep 29 17:27:03 node182 fuse_dfs: FUSE requested 131072 bytes of /user/brian/test_helloworld4
for offset 10592256 in file 

Here, the cache reload starts at 10592256, so the bytes in the range (10485760, 10592256) are
never fetched from HDFS at all; the previous read returned them as random junk from memory.
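
The fix for that hole is to require the entire request, not just its first byte, to fall
inside the cache before serving from it.  A sketch, again with illustrative names rather
than the real ones:

    if (offset >= cache_start &&
        offset + size <= cache_start + cache_len) {
        /* every requested byte is cached, so the copy is safe */
        memcpy(buf, cache_buf + (offset - cache_start), size);
        return size;
    }
    /* otherwise fall through and refill the cache from HDFS */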

Sep 29 17:27:03 node182 fuse_dfs: Cache bounds for /user/brian/test_helloworld4: 10592256
-> 15600000 (5007744 bytes). Check for offset 10723328 
Sep 29 17:27:03 node182 fuse_dfs: FUSE requested 131072 bytes of /user/brian/test_helloworld4
for offset 10723328 in file 
Sep 29 17:27:03 node182 fuse_dfs: Cache bounds for /user/brian/test_helloworld4: 10592256
-> 15600000 (5007744 bytes). Check for offset 10854400 
Sep 29 17:27:03 node182 fuse_dfs: FUSE requested 131072 bytes of /user/brian/test_helloworld4
for offset 10854400 in file 
Sep 29 17:27:03 node182 fuse_dfs: Cache bounds for /user/brian/test_helloworld4: 10592256
-> 15600000 (5007744 bytes). Check for offset 10985472 
Sep 29 17:27:03 node182 fuse_dfs: FUSE requested 131072 bytes of /user/brian/test_helloworld4
for offset 10985472 in file 

I'm also pretty sure that reads larger than 10MB (the size of the cache) are going to
bust through the cache and likewise come back as random junk.
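
Such a request can never be satisfied from the cache, so one defensive option is to bypass
the cache entirely for it.  A sketch, assuming the libhdfs hdfsPread() call; this is not a
tested patch:

    if (size > CACHE_SIZE) {
        /* bigger than the whole cache: read straight from HDFS */
        tSize n = hdfsPread(fs, file, offset, (void*)buf, size);
        return n < 0 ? -EIO : n;
    }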

Someone seriously needs to review the logic and corner cases in this code.  I'll try
to fix it myself and submit a patch later, but no promises.


> File corruption when reading with fuse-dfs
> ------------------------------------------
>
>                 Key: HADOOP-4298
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4298
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/fuse-dfs
>    Affects Versions: 0.18.1
>         Environment: CentOS 4.6 final; kernel 2.6.9-67.ELsmp; FUSE 2.7.4; hadoop 0.18.1; 64-bit
> I hand-altered the fuse-dfs makefile to use 64-bit instead of the hardcoded -m32.
>            Reporter: Brian Bockelman
>            Priority: Critical
>             Fix For: 0.18.1
>
>
> I pulled a 5GB data file into Hadoop using the following command:
> hadoop fs -put /scratch/886B9B3D-6A85-DD11-A9AB-000423D6CA6E.root /user/brian/testfile
> I have HDFS mounted in /mnt/hadoop using fuse-dfs.
> However, when I try to md5sum the file in place (md5sum /mnt/hadoop) or copy the file back to local disk using "cp" then md5sum it, the checksum is incorrect.
> When I pull the file using normal hadoop means (hadoop fs -get /user/brian/testfile /scratch), the md5sum is correct.
> When I repeat the test with a smaller file (512MB, on the theory that there is a problem with some 2GB limit somewhere), the problem remains.
> When I repeat the test, the md5sum is consistently wrong - i.e., some part of the corruption is deterministic, and not the apparent fault of a bad disk.
> CentOS 4.6 is, unfortunately, not the apparent culprit.  When checking on CentOS 5.x, I could recreate the corruption issue.  The second node was also a 64-bit compile, running CentOS 5.2 (`uname -r` returns 2.6.18-92.1.10.el5).
> Thanks for looking into this,
> Brian

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

