Date: Fri, 1 Sep 2017 03:14:01 +0000 (UTC)
From: "Weiwei Yang (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-12367) Ozone: Too many open files error while running corona

    [ https://issues.apache.org/jira/browse/HDFS-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16149974#comment-16149974 ]

Weiwei Yang commented on HDFS-12367:
------------------------------------

Thanks [~nandakumar131], I will run some more tests with HDFS-12382 to verify whether this issue is completely resolved. Thanks for the hint.
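For reference, the trace below suggests that {{XceiverClient.connect()}} builds a fresh Netty {{NioEventLoopGroup}} per client, and each {{NioEventLoop}} opens an epoll selector, which costs one file descriptor per event loop thread. If the groups are never shut down, descriptors accumulate until {{epollCreate}} fails. Here is a minimal standalone sketch of that leak pattern and the release that avoids it (plain Netty only, not the actual Ozone code; the class name is made up):

{code:java}
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class SelectorLeakSketch {
  public static void main(String[] args) throws Exception {
    // A NioEventLoopGroup eagerly constructs its NioEventLoop children,
    // and each child opens an epoll selector: one file descriptor per
    // event loop thread (2 x CPU cores by default).
    for (int i = 0; i < 10_000; i++) {
      EventLoopGroup group = new NioEventLoopGroup();
      // If the group is never shut down (comment out the next line), the
      // selectors accumulate until the process hits its descriptor limit
      // and fails with "java.io.IOException: Too many open files".
      group.shutdownGracefully().sync();
    }
    System.out.println("released every group; no descriptor exhaustion");
  }
}
{code}

With the shutdown commented out and a typical ulimit of 1024, the loop should die within the first few hundred iterations, depending on core count; with the shutdown in place it runs to completion.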
> Ozone: Too many open files error while running corona
> -----------------------------------------------------
>
>                 Key: HDFS-12367
>                 URL: https://issues.apache.org/jira/browse/HDFS-12367
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone, tools
>            Reporter: Weiwei Yang
>            Assignee: Mukul Kumar Singh
>
> A "Too many open files" error keeps happening to me while using corona. I simply set up a single-node cluster and ran corona to generate 1000 keys, but I keep getting the following error:
> {noformat}
> ./bin/hdfs corona -numOfThreads 1 -numOfVolumes 1 -numOfBuckets 1 -numOfKeys 1000
> 17/08/28 00:47:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 17/08/28 00:47:42 INFO tools.Corona: Number of Threads: 1
> 17/08/28 00:47:42 INFO tools.Corona: Mode: offline
> 17/08/28 00:47:42 INFO tools.Corona: Number of Volumes: 1.
> 17/08/28 00:47:42 INFO tools.Corona: Number of Buckets per Volume: 1.
> 17/08/28 00:47:42 INFO tools.Corona: Number of Keys per Bucket: 1000.
> 17/08/28 00:47:42 INFO rpc.OzoneRpcClient: Creating Volume: vol-0-05000, with wwei as owner and quota set to 1152921504606846976 bytes.
> 17/08/28 00:47:42 INFO tools.Corona: Starting progress bar Thread.
> ...
> ERROR tools.Corona: Exception while adding key: key-251-19293 in bucket: bucket-0-34960 of volume: vol-0-05000.
> java.io.IOException: Exception getting XceiverClient.
>         at org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:156)
>         at org.apache.hadoop.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:122)
>         at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.getFromKsmKeyInfo(ChunkGroupOutputStream.java:289)
>         at org.apache.hadoop.ozone.client.rpc.OzoneRpcClient.createKey(OzoneRpcClient.java:487)
>         at org.apache.hadoop.ozone.tools.Corona$OfflineProcessor.run(Corona.java:352)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: failed to create a child event loop
>         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2234)
>         at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>         at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>         at org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:144)
>         ... 9 more
> Caused by: java.lang.IllegalStateException: failed to create a child event loop
>         at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:68)
>         at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:49)
>         at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:61)
>         at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52)
>         at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44)
>         at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:36)
>         at org.apache.hadoop.scm.XceiverClient.connect(XceiverClient.java:76)
>         at org.apache.hadoop.scm.XceiverClientManager$2.call(XceiverClientManager.java:151)
>         at org.apache.hadoop.scm.XceiverClientManager$2.call(XceiverClientManager.java:145)
>         at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
>         at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>         at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>         at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>         ... 12 more
> Caused by: io.netty.channel.ChannelException: failed to open a new selector
>         at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:128)
>         at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
>         at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
>         at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)
>         ... 25 more
> Caused by: java.io.IOException: Too many open files
>         at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
>         at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:130)
>         at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:69)
>         at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
>         at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
>         ... 28 more
> {noformat}
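To confirm whether descriptors are actually leaking while corona writes keys, one can watch the count with {{ls /proc/<pid>/fd | wc -l}}, or sample it from inside the client JVM (e.g. a background thread in a test) with something like the sketch below. The {{FdWatch}} class name is made up, and {{com.sun.management.UnixOperatingSystemMXBean}} is HotSpot/Unix-specific; it only reports the descriptors of the process it runs in:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdWatch {
  public static void main(String[] args) throws InterruptedException {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    if (!(os instanceof UnixOperatingSystemMXBean)) {
      System.err.println("File descriptor counters are not exposed on this JVM/OS.");
      return;
    }
    UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
    // Sample once per second; a count that climbs steadily while keys are
    // written points at leaked sockets/selectors rather than normal churn.
    while (true) {
      System.out.printf("open fds: %d / max: %d%n",
          unix.getOpenFileDescriptorCount(),
          unix.getMaxFileDescriptorCount());
      Thread.sleep(1000);
    }
  }
}
{code}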