hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1035) StackOverflowError in FSDataSet
Date Fri, 02 Mar 2007 23:40:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12477552 ]

dhruba borthakur commented on HADOOP-1035:
------------------------------------------

+1 Code reviewed.

I like the optimization in clearPath. Also, since we are using the ext3 filesystem, which
supports large directories, we could consider increasing the number of files
per subdirectory (currently 64).
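For illustration, the directory fan-out being discussed can be sketched in memory. This is a hypothetical, simplified model of the bounded-fan-out block tree that FSDataset$FSDir manages on disk, not the actual Hadoop code; only the limit of 64 files per subdirectory comes from the comment above. The point of the sketch is that an iterative descent (a loop) cannot overflow the call stack, whereas the self-recursive addBlock in the trace below can.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory sketch of a bounded-fan-out block tree.
// Class and method names are illustrative, not Hadoop's.
public class BlockTreeSketch {
    // Cap cited in the comment: 64 files per subdirectory.
    static final int MAX_BLOCKS_PER_DIR = 64;

    final List<String> blocks = new ArrayList<>();
    final List<BlockTreeSketch> children = new ArrayList<>();

    // Iterative placement: walk down until a node with spare capacity
    // is found, creating a child when the current node is full. Unlike
    // the recursive addBlock in the stack trace, a loop uses constant
    // stack space regardless of tree depth.
    void addBlock(String blockName) {
        BlockTreeSketch node = this;
        while (node.blocks.size() >= MAX_BLOCKS_PER_DIR) {
            if (node.children.isEmpty()) {
                node.children.add(new BlockTreeSketch());
            }
            node = node.children.get(node.children.size() - 1);
        }
        node.blocks.add(blockName);
    }

    // Count all blocks in this subtree (recursion is fine here; depth
    // is bounded by the number of full directories, not by block count
    // within one directory).
    int totalBlocks() {
        int n = blocks.size();
        for (BlockTreeSketch c : children) {
            n += c.totalBlocks();
        }
        return n;
    }
}
```

Raising MAX_BLOCKS_PER_DIR would flatten this tree: with ext3's large-directory support, each node could hold more entries before spilling into a child, reducing both depth and the number of renames needed to place a block.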

> StackOverflowError in FSDataSet
> -------------------------------
>
>                 Key: HADOOP-1035
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1035
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.10.1, 0.11.0, 0.11.1, 0.11.2
>            Reporter: Philippe Gassmann
>         Assigned To: Raghu Angadi
>            Priority: Blocker
>         Attachments: HADOOP-1035-1.patch, HADOOP-1035-2.patch, patch-StackOverflowError-HADOOP-1035
>
>
> [hadoop.org.apache.hadoop.dfs.DataNode] DataXCeiver
> java.lang.StackOverflowError
> at java.nio.ByteBuffer.wrap([BII)Ljava.nio.ByteBuffer;(Unknown Source)
> at java.nio.ByteBuffer.wrap([B)Ljava.nio.ByteBuffer;(Unknown Source)
> at java.lang.StringCoding$CharsetSE.encode([CII)[B(Unknown Source)
> at java.lang.StringCoding.encode(Ljava.lang.String;[CII)[B(Unknown Source)
> at java.lang.String.getBytes(Ljava.lang.String;)[B(Unknown Source)
> at java.io.UnixFileSystem.rename0(Ljava.io.File;Ljava.io.File;)Z(Native Method)
> at java.io.UnixFileSystem.rename(UnixFileSystem.java:265)
> at java.io.File.renameTo(File.java:1192)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:89)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:105)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> I do not have the end of the stacktrace, but it is sure it happens in DataNode.DataXceiver.writeBlock().
> This error occurs after applying the patch provided in HADOOP-1034, which makes
such exceptions visible in the log files.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

