hadoop-hdfs-dev mailing list archives

From "Weiwei Yang (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-12367) Ozone: Too many open files error while running corona
Date Mon, 28 Aug 2017 08:34:00 GMT
Weiwei Yang created HDFS-12367:
----------------------------------

             Summary: Ozone: Too many open files error while running corona
                 Key: HDFS-12367
                 URL: https://issues.apache.org/jira/browse/HDFS-12367
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone, tools
            Reporter: Weiwei Yang


A "Too many open files" error keeps happening while I am using corona. I set up a simple
single-node cluster and ran corona to generate 1000 keys, but I keep getting the following error:

{noformat}
./bin/hdfs corona -numOfThreads 1 -numOfVolumes 1 -numOfBuckets 1 -numOfKeys 1000
17/08/28 00:47:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
17/08/28 00:47:42 INFO tools.Corona: Number of Threads: 1
17/08/28 00:47:42 INFO tools.Corona: Mode: offline
17/08/28 00:47:42 INFO tools.Corona: Number of Volumes: 1.
17/08/28 00:47:42 INFO tools.Corona: Number of Buckets per Volume: 1.
17/08/28 00:47:42 INFO tools.Corona: Number of Keys per Bucket: 1000.
17/08/28 00:47:42 INFO rpc.OzoneRpcClient: Creating Volume: vol-0-05000, with wwei as owner
and quota set to 1152921504606846976 bytes.
17/08/28 00:47:42 INFO tools.Corona: Starting progress bar Thread.
...
ERROR tools.Corona: Exception while adding key: key-251-19293 in bucket: bucket-0-34960 of
volume: vol-0-05000.
java.io.IOException: Exception getting XceiverClient.
	at org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:156)
	at org.apache.hadoop.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:122)
	at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.getFromKsmKeyInfo(ChunkGroupOutputStream.java:289)
	at org.apache.hadoop.ozone.client.rpc.OzoneRpcClient.createKey(OzoneRpcClient.java:487)
	at org.apache.hadoop.ozone.tools.Corona$OfflineProcessor.run(Corona.java:352)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException:
failed to create a child event loop
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2234)
	at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
	at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
	at org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:144)
	... 9 more
Caused by: java.lang.IllegalStateException: failed to create a child event loop
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:68)
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:49)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:61)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44)
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:36)
	at org.apache.hadoop.scm.XceiverClient.connect(XceiverClient.java:76)
	at org.apache.hadoop.scm.XceiverClientManager$2.call(XceiverClientManager.java:151)
	at org.apache.hadoop.scm.XceiverClientManager$2.call(XceiverClientManager.java:145)
	at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
	... 12 more
Caused by: io.netty.channel.ChannelException: failed to open a new selector
	at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:128)
	at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:120)
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:64)
	... 25 more
Caused by: java.io.IOException: Too many open files
	at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
	at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:130)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:69)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
	at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
	... 28 more
{noformat}
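The root cause in the trace is that each XceiverClient.connect creates a fresh netty NioEventLoopGroup, and every NioEventLoop opens its own epoll selector, which consumes file descriptors that are never released if the group is not shut down. The stdlib sketch below (not the actual Ozone code; class name and loop count are made up for illustration) shows how repeatedly opening java.nio Selectors without closing them eats descriptors the same way, and how closing them releases the descriptors again:

```java
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

public class SelectorLeakDemo {
    public static void main(String[] args) throws IOException {
        // Each Selector.open() consumes file descriptors on Linux (an epoll
        // instance plus a wakeup pipe), just like each netty NioEventLoop.
        // Creating one per connection without closing it leaks descriptors
        // until the process hits the ulimit and epollCreate fails with
        // "Too many open files".
        List<Selector> leaked = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            leaked.add(Selector.open()); // never closed until the loop below
        }
        System.out.println("open selectors: " + leaked.size());

        // Closing releases the descriptors; the analogous fix for the report
        // above would be to share or properly shut down the event loop group
        // instead of creating a new one per XceiverClient.
        for (Selector s : leaked) {
            s.close();
        }
        System.out.println("closed all selectors");
    }
}
```

As a temporary workaround, raising the per-process descriptor limit (`ulimit -n`) before running corona may delay the failure, but the leak itself needs fixing in the client manager.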



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
