hadoop-hdfs-issues mailing list archives

From "Igor Rudenko (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10427) Write and Read SequenceFile Parallelly - java.io.IOException: Cannot obtain block length for LocatedBlock
Date Thu, 11 Apr 2019 11:42:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815346#comment-16815346 ]

Igor Rudenko commented on HDFS-10427:
-------------------------------------

Could you share a code example showing how to reproduce this issue, if it is still valid?
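
For reference, a minimal sketch of the kind of reproduction being asked for, assuming Text keys/values, a hypothetical /tmp/repro.seq path, and SequenceFile.Writer.appendIfExists(true) as the "appendOption true" mentioned in the report below; this is an illustration under those assumptions, not code from the reporter:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class ParallelSeqFileRepro {
    public static void main(String[] args) throws InterruptedException {
        Configuration conf = new Configuration();
        // Hypothetical path; substitute a file on the cluster under test.
        Path path = new Path("/tmp/repro.seq");

        // Writer thread: open with appendIfExists(true) and keep appending,
        // flushing so the last block stays under construction while readers
        // can see the file.
        Thread writer = new Thread(() -> {
            try (SequenceFile.Writer w = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(path),
                    SequenceFile.Writer.keyClass(Text.class),
                    SequenceFile.Writer.valueClass(Text.class),
                    SequenceFile.Writer.appendIfExists(true))) {
                for (int i = 0; i < 10000; i++) {
                    w.append(new Text("key-" + i), new Text("value-" + i));
                    w.hflush();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });

        // Reader thread: concurrently open a reader on the same file. Per the
        // report, opening a reader while the appending writer holds the file
        // is where "Cannot obtain block length for LocatedBlock" surfaces.
        Thread reader = new Thread(() -> {
            SequenceFile.Reader r = null;
            try {
                r = new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
                Text key = new Text();
                Text value = new Text();
                while (r.next(key, value)) {
                    // Consume already-written records.
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                IOUtils.closeStream(r);
            }
        });

        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}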

> Write and Read SequenceFile Parallelly - java.io.IOException: Cannot obtain block length for LocatedBlock
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10427
>                 URL: https://issues.apache.org/jira/browse/HDFS-10427
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, namenode
>    Affects Versions: 2.7.2
>            Reporter: Syed Akram
>            Priority: Critical
>
> Trying to write a key/value pair and read an already-written key/value pair in a SequenceFile in parallel. While doing that:
> Writer - appendOption true
> java.io.IOException: Cannot obtain block length for LocatedBlock{BP-1019538077-localhost-1459944245378:blk_1075356142_3219260; getBlockSize()=2409; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[dn1:50010,DS-21698924-4178-4c08-ba41-aa86770ef0d0,DISK], DatanodeInfoWithStorage[dn3:50010,DS-8e3dc8c0-4e34-4d12-86a3-48b189b78f5d,DISK], DatanodeInfoWithStorage[dn2:50010,DS-fb22c1c2-e059-4e0e-91e0-df838beb86f9,DISK]]}
> 	at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:428)
> 	at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:336)
> 	at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:272)
> 	at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:264)
> 	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1526)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:303)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:299)
> 	at org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1902)
> 	at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1822)
> 	
> But when I'm trying to read while the write (SequenceFile.Writer) is open, it works fine.
> But when we do both in parallel (start the write with append=true and then read the already-existing key/value pairs), we face this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

