hadoop-user mailing list archives

From sam liu <samliuhad...@gmail.com>
Subject Re: hdfs unable to create new block with 'Too many open files' exception
Date Sat, 21 Dec 2013 17:25:04 GMT
In this cluster, the data nodes run as user 'mapred'. Actually, all Hadoop
daemons run as user 'mapred'.
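
Since the limit has to apply to the account that actually owns the DataNode
process, it may help to confirm what the running daemon really received. A
rough sketch, assuming a Linux node and that the DataNode java process can be
found by its main class (the pgrep pattern and variable name are assumptions):

  # Effective open-file limit of the running DataNode process
  DN_PID=$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode)
  grep 'Max open files' /proc/$DN_PID/limits

  # Limit a fresh 'mapred' login shell would get after the limits.conf change
  su - mapred -c 'ulimit -Sn; ulimit -Hn'

If the first value is still the old default while the second shows the raised
limit, the daemon was simply never restarted under the new setting.
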


2013/12/22 Ted Yu <yuzhihong@gmail.com>

> Are your data nodes running as user 'hdfs', or 'mapred' ?
>
> If the former, you need to increase file limit for 'hdfs' user.
>
> Cheers
>
>
> On Sat, Dec 21, 2013 at 8:30 AM, sam liu <samliuhadoop@gmail.com> wrote:
>
>> Hi Experts,
>>
>> We failed to run an MR job that accesses Hive, because HDFS was unable to
>> create a new block during the reduce phase. The exceptions:
>>   1) In tasklog:
>> hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to
>> create new block
>>   2) In HDFS data node log:
>> DataXceiveServer: IOException due to:java.io.IOException: Too many open
>> files
>>   ... ...
>>   at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
>>   at
>> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
>>
>> In hdfs-site.xml, we set 'dfs.datanode.max.xcievers' to 8196. At the same
>> time, we modified /etc/security/limits.conf to increase the nofile limit of
>> the mapred user to 1048576. But this issue still happens.
>>
>> Any suggestions?
>>
>> Thanks a lot!
>>
>>
>
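
For reference, a rough sketch of the two settings described in the quoted
message; the values 8196 and 1048576 come from the message itself, while the
exact file layout shown here is only illustrative:

  <!-- hdfs-site.xml (sketch) -->
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>8196</value>
  </property>

  # /etc/security/limits.conf (sketch; entries must target the user that
  # actually runs the DataNode, here 'mapred')
  mapred  soft  nofile  1048576
  mapred  hard  nofile  1048576

One common reason such a change appears to have no effect is that
limits.conf is applied through PAM at login time, so a daemon started before
the change (or by an init script that bypasses PAM) keeps its old limit; the
DataNode has to be restarted under the new limit, and /proc/<pid>/limits shows
what the running process actually got.
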
