Message-ID: <7346726.1193806310627.JavaMail.jira@brutus>
Date: Tue, 30 Oct 2007 21:51:50 -0700 (PDT)
From: "stack (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956
In-Reply-To: <23558812.1192840130815.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12538988 ]

stack commented on HADOOP-2083:
-------------------------------

The console is here, Ning: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/970/console. Search for where TestTableIndex runs; the pertinent extract is below (I believe). Test #956 looks to have had a different failure cause, one that may since have been fixed. FYI, this failure seems to be rare: I've been keeping an eye out, and 70-odd builds have run since without a recurrence.
{code}
[junit] java.lang.IllegalStateException: Variable substitution depth too large: 20
[junit] dfs.namenode.logging.level=info
[junit] tasktracker.http.port=50060
[junit] dfs.name.dir=${hadoop.tmp.dir}/dfs/name
[junit] mapred.job.tracker.handler.count=10
[junit] mapred.output.compression.type=RECORD
[junit] dfs.datanode.dns.interface=default
[junit] mapred.submit.replication=10
[junit] fs.file.impl=org.apache.hadoop.fs.LocalFileSystem
[junit] fs.ramfs.impl=org.apache.hadoop.fs.InMemoryFileSystem
[junit] fs.hftp.impl=org.apache.hadoop.dfs.HftpFileSystem
[junit] mapred.child.java.opts=-Xmx200m
[junit] dfs.datanode.du.pct=0.98f
[junit] mapred.max.tracker.failures=4
[junit] map.sort.class=org.apache.hadoop.mapred.MergeSorter
[junit] ipc.client.timeout=60000
[junit] dfs.datanode.du.reserved=0
[junit] mapred.tasktracker.tasks.maximum=2
[junit] hbase.index.merge.factor=10
[junit] fs.inmemory.size.mb=75
[junit] mapred.compress.map.output=false
[junit] tasktracker.http.bindAddress=0.0.0.0
[junit] hadoop.rpc.socket.factory.class.default=org.apache.hadoop.net.StandardSocketFactory
[junit] keep.failed.task.files=false
[junit] mapred.map.output.compression.type=RECORD
[junit] io.seqfile.lazydecompress=true
[junit] io.skip.checksum.errors=false
[junit] mapred.job.tracker.info.port=50030
[junit] fs.s3.block.size=67108864
[junit] dfs.client.block.write.retries=3
[junit] dfs.replication.min=1
[junit] mapred.userlog.limit.kb=0
[junit] io.bytes.per.checksum=512
[junit] fs.s3.maxRetries=4
[junit] io.map.index.skip=0
[junit] dfs.safemode.extension=30000
[junit] hbase.index.optimize=true
[junit] mapred.jobtracker.completeuserjobs.maximum=100
[junit] mapred.system.dir=build/contrib/${contrib.name}/test/system
[junit] mapred.userlog.retain.hours=24
[junit] mapred.tasktracker.expiry.interval=600000
[junit] mapred.log.dir=${hadoop.tmp.dir}/mapred/logs
[junit] job.end.retry.interval=30000
[junit] mapred.task.tracker.report.bindAddress=127.0.0.1
[junit] local.cache.size=10737418240
[junit] io.compression.codecs=org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec
[junit] dfs.df.interval=60000
[junit] dfs.replication.considerLoad=true
[junit] fs.checkpoint.period=3600
[junit] dfs.info.bindAddress=0.0.0.0
[junit] jobclient.output.filter=FAILED
[junit] mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec
[junit] ipc.client.connect.max.retries=10
[junit] tasktracker.http.threads=40
[junit] io.file.buffer.size=4096
[junit] ipc.client.kill.max=10
[junit] io.sort.mb=100
[junit] mapred.tasktracker.dns.interface=default
[junit] fs.s3.buffer.dir=${hadoop.tmp.dir}/s3
[junit] mapred.min.split.size=0
[junit] mapred.map.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec
[junit] fs.checkpoint.dir=${hadoop.tmp.dir}/dfs/namesecondary
[junit] io.seqfile.sorter.recordlimit=1000000
[junit] fs.default.name=file:///
[junit] ipc.client.maxidletime=120000
[junit] dfs.secondary.info.bindAddress=0.0.0.0
[junit] hbase.index.use.compound.file=true
[junit] io.seqfile.compression.type=RECORD
[junit] hadoop.native.lib=true
[junit] mapred.local.dir.minspacestart=0
[junit] hadoop.tmp.dir=${build.test}
[junit] dfs.datanode.bindAddress=0.0.0.0
[junit] mapred.map.tasks=2
[junit] dfs.heartbeat.interval=3
[junit] webinterface.private.actions=false
[junit] mapred.reduce.parallel.copies=5
[junit] mapred.local.dir=${hadoop.tmp.dir}/mapred/local
[junit] hbase.index.max.field.length=10000
[junit] dfs.datanode.dns.nameserver=default
[junit] mapred.inmem.merge.threshold=1000
[junit] mapred.speculative.execution=true
[junit] mapred.tasktracker.dns.nameserver=default
[junit] dfs.datanode.port=50010
[junit] fs.trash.interval=0
[junit] hbase.index.max.buffered.docs=500
[junit] dfs.replication.max=512
[junit] dfs.blockreport.intervalMsec=3600000
[junit] dfs.block.size=67108864
[junit] mapred.task.timeout=600000
[junit] ipc.client.connection.maxidletime=1000
[junit] fs.s3.sleepTimeSeconds=10
[junit] dfs.client.buffer.dir=${hadoop.tmp.dir}/dfs/tmp
[junit] mapred.output.compress=false
[junit] mapred.local.dir.minspacekill=0
[junit] dfs.replication=3
[junit] mapred.reduce.max.attempts=4
[junit] dfs.default.chunk.view.size=32768
[junit] dfs.secondary.info.port=50090
[junit] hadoop.logfile.count=10
[junit] ipc.client.idlethreshold=4000
[junit] mapred.job.tracker=local
[junit] hadoop.logfile.size=10000000
[junit] fs.checkpoint.size=67108864
[junit] io.sort.factor=10
[junit] dfs.info.port=50070
[junit] mapred.temp.dir=${hadoop.tmp.dir}/mapred/temp
[junit] job.end.retry.attempts=0
[junit] dfs.data.dir=${hadoop.tmp.dir}/dfs/data
[junit] mapred.reduce.tasks=1
[junit] fs.s3.impl=org.apache.hadoop.fs.s3.S3FileSystem
[junit] fs.trash.root=${hadoop.tmp.dir}/Trash
[junit] dfs.namenode.handler.count=10
[junit] io.seqfile.compress.blocksize=1000000
[junit] fs.kfs.impl=org.apache.hadoop.fs.kfs.KosmosFileSystem
[junit] ipc.server.listen.queue.size=128
[junit] fs.hdfs.impl=org.apache.hadoop.dfs.DistributedFileSystem
[junit] mapred.job.tracker.info.bindAddress=0.0.0.0
[junit] hbase.index.rowkey.name=key
[junit] dfs.safemode.threshold.pct=0.999f
[junit] mapred.map.max.attempts=4
[junit]
[junit] hbase.column.boost=3
[junit] hbase.column.tokenize=false
[junit] hbase.column.name=contents:
[junit] hbase.column.store=true
[junit] hbase.column.omit.norms=false
[junit] hbase.column.index=true
[junit]
[junit] 	at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:293)
[junit] 	at org.apache.hadoop.conf.Configuration.get(Configuration.java:300)
[junit] 	at org.apache.hadoop.hbase.mapred.IndexTableReduce.configure(IndexTableReduce.java:53)
[junit] 	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
[junit] 	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:82)
[junit] 	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:243)
[junit] 	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:164)
[junit] 2007-10-19 08:29:23,346 ERROR [expireTrackers] org.apache.hadoop.mapred.JobTracker$ExpireTrackers.run(JobTracker.java:308): Tracker Expiry Thread got exception: java.lang.InterruptedException: sleep interrupted
[junit] 	at java.lang.Thread.sleep(Native Method)
[junit] 	at org.apache.hadoop.mapred.JobTracker$ExpireTrackers.run(JobTracker.java:263)
{code}

> [hbase] TestTableIndex failed in patch build #970 and #956
> ----------------------------------------------------------
>
>                 Key: HADOOP-2083
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2083
>             Project: Hadoop
>          Issue Type: Bug
>          Components: contrib/hbase
>            Reporter: stack
>
> TestTableIndex failed in two nightly builds.
> The fancy trick of passing around a complete configuration, with the per-column indexing specification extensions inside it, as a single Configuration value is biting us: the interpolation code has an upper bound of 20 interpolations.
> Looking at whether I can run the interpolations before inserting the config; otherwise we need to make Configuration.substituteVars protected so it can be fixed ... or do the config for this job in another way.
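
To make the failure mode concrete, below is a minimal sketch of the mechanism described above, not the actual IndexTableReduce code. As I read Configuration.substituteVars in this era of Hadoop, get() expands one ${...} reference per pass with a hard cap of 20 passes, so any single value carrying more than 20 references (as a whole serialized config with its many ${hadoop.tmp.dir} entries does) trips the IllegalStateException. The property names hbase.index.conf.raw and hbase.index.conf.resolved are invented for illustration; only the public Configuration set()/get() API is assumed.

{code}
// Rough sketch only -- not the actual IndexTableReduce code.  The property
// names "hbase.index.conf.raw" and "hbase.index.conf.resolved" are invented
// for illustration; only Configuration.set()/get() are assumed.
import org.apache.hadoop.conf.Configuration;

public class SubstitutionDepthSketch {
  public static void main(String[] args) {
    Configuration job = new Configuration();

    // One value that still carries many unresolved ${...} references, the way
    // a whole serialized config does (dfs.name.dir, mapred.local.dir,
    // fs.s3.buffer.dir, ... all point at ${hadoop.tmp.dir}).
    StringBuilder raw = new StringBuilder();
    for (int i = 0; i < 25; i++) {                    // more than the 20-pass cap
      raw.append("key").append(i)
         .append("=${hadoop.tmp.dir}/dir").append(i).append('\n');
    }
    job.set("hbase.index.conf.raw", raw.toString());

    try {
      // get() interpolates; 25 references in one value exhausts the cap.
      job.get("hbase.index.conf.raw");
    } catch (IllegalStateException expected) {
      System.out.println(expected.getMessage());
      // -> Variable substitution depth too large: 20 ...
    }

    // The workaround floated in the description: expand the variables
    // *before* embedding, so the stored value needs no interpolation at all.
    Configuration src = new Configuration();
    StringBuilder resolved = new StringBuilder();
    for (int i = 0; i < 25; i++) {
      resolved.append("key").append(i).append('=')
              .append(src.get("hadoop.tmp.dir"))      // returned already expanded
              .append("/dir").append(i).append('\n');
    }
    job.set("hbase.index.conf.resolved", resolved.toString());
    job.get("hbase.index.conf.resolved");             // nothing left to substitute
  }
}
{code}

Resolving the variables up front, as suggested in the description, sidesteps the cap because the embedded value no longer contains anything for substituteVars to expand.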