hbase-user mailing list archives

From 陈加俊 <cjjvict...@gmail.com>
Subject java.net.SocketException: Too many open files
Date Tue, 11 Jan 2011 08:59:14 GMT
I set the environment as follows:

$ ulimit -n
65535

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63943
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63943
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
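One way to double-check that this limit actually applies to the running RegionServer process (the interactive shell's `ulimit -n` can differ from the daemon's, e.g. if it was started by init or by a different user before the limits were raised) is to read /proc directly. A minimal sketch, assuming Linux; the current shell's PID is used as a runnable stand-in for the RegionServer's:

```shell
# PID of the process to inspect. For the RegionServer, find it with
# `jps | grep HRegionServer` (jps ships with the JDK); the current
# shell's PID is used here only as a runnable stand-in.
PID=$$

# Limits that actually apply to the running process -- these may differ
# from what `ulimit -n` reports in an interactive shell.
grep 'Max open files' /proc/"$PID"/limits

# Number of file descriptors the process currently holds open.
ls /proc/"$PID"/fd | wc -l
```

If the reported limit for the real RegionServer PID is lower than 65535, or the descriptor count is climbing toward the limit, that would point at where the "Too many open files" is coming from.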

The RegionServer (RS) logs show the following. Why?

2010-12-29 06:09:10,738 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Attempt=3118
java.net.SocketException: Too many open files
        at sun.nio.ch.Net.socket0(Native Method)
        at sun.nio.ch.Net.socket(Net.java:97)
        at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
        at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
        at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
        at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:304)
        at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:844)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:716)
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:333)
        at $Proxy0.regionServerReport(Unknown Source)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:481)
        at java.lang.Thread.run(Thread.java:619)
2010-12-29 06:09:10,765 WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Attempt=3119
java.net.SocketException: Too many open files
        at sun.nio.ch.Net.socket0(Native Method)
        at sun.nio.ch.Net.socket(Net.java:97)
        at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
        at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
        at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
        at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:304)
        at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:844)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:716)
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:333)
        at $Proxy0.regionServerReport(Unknown Source)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:481)
        at java.lang.Thread.run(Thread.java:619)
2010-12-29 06:09:10,793 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: Unhandled exception. Aborting...
java.lang.NullPointerException
        at org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure(Client.java:351)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:313)
        at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
        at org.apache.hadoop.ipc.Client.call(Client.java:720)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy1.getFileInfo(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy1.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:619)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:453)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
        at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:115)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:902)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:554)
        at java.lang.Thread.run(Thread.java:619)

My cluster is Hadoop 0.20.2 + HBase 0.20.6, with 24 nodes each running a RegionServer and a DataNode (RS+DN).
