From: "Raghu Angadi (JIRA)"
To: hadoop-dev@lucene.apache.org
Reply-To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-1035) StackOverflowError in FSDataSet
Date: Tue, 27 Feb 2007 13:42:05 -0800 (PST)
Message-ID: <5024500.1172612525737.JavaMail.jira@brutus>
In-Reply-To: <23827556.1172242325502.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-1035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12476406 ]

Raghu Angadi commented on HADOOP-1035:
--------------------------------------

This is probably a good time to get rid of the siblings[] array in FSDataset. I will first try to remove the siblings variable. I think it will also simplify this patch.

> StackOverflowError in FSDataSet
> -------------------------------
>
>                 Key: HADOOP-1035
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1035
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.10.1, 0.11.0, 0.11.1, 0.11.2
>            Reporter: Philippe Gassmann
>         Assigned To: Raghu Angadi
>            Priority: Blocker
>         Attachments: patch-StackOverflowError-HADOOP-1035
>
>
> [hadoop.org.apache.hadoop.dfs.DataNode] DataXCeiver
> java.lang.StackOverflowError
>         at java.nio.ByteBuffer.wrap([BII)Ljava.nio.ByteBuffer;(Unknown Source)
>         at java.nio.ByteBuffer.wrap([B)Ljava.nio.ByteBuffer;(Unknown Source)
>         at java.lang.StringCoding$CharsetSE.encode([CII)[B(Unknown Source)
>         at java.lang.StringCoding.encode(Ljava.lang.String;[CII)[B(Unknown Source)
>         at java.lang.String.getBytes(Ljava.lang.String;)[B(Unknown Source)
>         at java.io.UnixFileSystem.rename0(Ljava.io.File;Ljava.io.File;)Z(Native Method)
>         at java.io.UnixFileSystem.rename(UnixFileSystem.java:265)
>         at java.io.File.renameTo(File.java:1192)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:89)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:105)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
>         at org.apache.hadoop.dfs.FSDataset$FSDir.addBlock(FSDataset.java:95)
> I do not have the end of the stack trace, but it certainly happens in DataNode.DataXceiver.writeBlock().
> This error occurs after applying the patch provided in HADOOP-1034, which makes such exceptions visible in the log files.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
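The long run of repeated addBlock frames at FSDataset.java:95 indicates recursion through the FSDir tree: each directory level adds one addBlock frame to the call stack. The sketch below is a hypothetical, heavily simplified illustration of that failure mode, not the actual Hadoop 0.x FSDataset code; the class and method names mirror the trace, but the structure and chain() helper are invented for the example.

```java
// Hypothetical simplified sketch of recursive block placement.
// Each FSDir delegates addBlock to a child directory, so one stack
// frame accumulates per level of nesting; with enough levels the
// JVM throws StackOverflowError, as in the trace above.
class FSDir {
    final FSDir[] children;   // null or empty => leaf directory
    int numBlocks;

    FSDir(FSDir[] children) {
        this.children = children;
    }

    // Recursive placement: descend until a leaf directory accepts the block.
    void addBlock(String blockName) {
        if (children == null || children.length == 0) {
            numBlocks++;              // leaf: "store" the block here
            return;
        }
        children[0].addBlock(blockName);  // one more frame per level
    }

    // Build a chain of `depth` nested directories above a single leaf
    // (iteratively, so only addBlock itself recurses).
    static FSDir chain(int depth) {
        FSDir dir = new FSDir(null);
        for (int i = 0; i < depth; i++) {
            dir = new FSDir(new FSDir[] { dir });
        }
        return dir;
    }
}
```

At ordinary depths the recursion is harmless, but a pathologically deep chain overflows the default thread stack, which is why an iterative rewrite (or removing the sibling/child chaining, as the comment above proposes) avoids the error regardless of tree shape.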