hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-2955) ant test fail for TestCrcCorruption with OutofMemory.
Date Mon, 10 Mar 2008 21:12:46 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-2955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-2955:
---------------------------------

    Attachment: HADOOP-2955.java

OK, I see why this did not cause the test to fail before HADOOP-2758. Fixing it in sendBlock()
won't fix the test; the client will still fail with an OutOfMemory error. Attaching _a_ fix:
it truncates bytesPerChecksum to the length of the block in the BlockSender constructor.
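A minimal sketch of the clamping described above, assuming the corrupted value comes from the block's .meta header; the class and field names here mirror BlockSender but are illustrative, not the actual Hadoop code:

```java
// Hypothetical sketch: clamp bytesPerChecksum to the block length before
// it is used to size packet buffers. Not the actual BlockSender source.
public class BlockSenderSketch {
    final int bytesPerChecksum;

    BlockSenderSketch(int bytesPerChecksumFromMeta, long blockLength) {
        // If the .meta header was corrupted, bytesPerChecksum can be a
        // huge bogus value; truncating it to the block length keeps the
        // later ByteBuffer.allocate() call bounded.
        if (blockLength > 0 && bytesPerChecksumFromMeta > blockLength) {
            this.bytesPerChecksum = (int) blockLength;
        } else {
            this.bytesPerChecksum = bytesPerChecksumFromMeta;
        }
    }

    public static void main(String[] args) {
        // corrupted header claims ~1.2 GB per checksum chunk; block is 64 KB
        BlockSenderSketch s = new BlockSenderSketch(1_232_596_786, 65536);
        System.out.println(s.bytesPerChecksum); // prints 65536
    }
}
```

With the clamp in place, a corrupted header can no longer drive the packet-buffer size past the block length.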

> ant test fail for TestCrcCorruption with OutofMemory.
> -----------------------------------------------------
>
>                 Key: HADOOP-2955
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2955
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Mahadev konar
>            Assignee: Raghu Angadi
>            Priority: Blocker
>         Attachments: HADOOP-2955.java
>
>
> TestCrcCorruption sometimes corrupts the crc metadata, which corrupts the length in
bytes of the checksum (the second field in the metadata). This does not happen every run,
since the corruption in the test is random.
> I put in a debug statement at the allocation to see how many bytes were being allocated
and ran it a few times. This is one of the allocations, in
> BlockSender.sendBlock():
>     int maxChunksPerPacket = Math.max(1,
>         (BUFFER_SIZE + bytesPerChecksum - 1)/bytesPerChecksum);
>     int sizeofPacket = PKT_HEADER_LEN +
>         (bytesPerChecksum + checksumSize) * maxChunksPerPacket;
>     LOG.info("Comment: bytes to allocate " + sizeofPacket);
>     ByteBuffer pktBuf = ByteBuffer.allocate(sizeofPacket);
> The output in one of the allocations is
>  dfs.DataNode (DataNode.java:sendBlock(1766)) - Comment: bytes to allocate 1232596786
> So we should check the number of bytes being allocated in sendBlock() (less than
the block size? -- seems like a good default).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

