hadoop-user mailing list archives

From Amit Kabra <amitkabrai...@gmail.com>
Subject Re: All datanodes are bad. Aborting ...
Date Sun, 20 Apr 2014 17:14:17 GMT
1) ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 513921
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 32000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

2) dfs.datanode.max.xcievers = 4096

3) dfs.datanode.max.transfer.threads = 4096
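For reference, `dfs.datanode.max.xcievers` is the older, deprecated spelling of `dfs.datanode.max.transfer.threads`; setting the newer name is enough on recent Hadoop versions. A sketch of how the value reported above would appear in hdfs-site.xml on each datanode:

```xml
<!-- hdfs-site.xml fragment (sketch); the value matches the 4096
     reported above. Only the new property name is needed on
     Hadoop 2.x and later. -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>
```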



On Sun, Apr 20, 2014 at 10:36 PM, sudhakara st <sudhakara.st@gmail.com> wrote:
> Check the open file descriptor limit on the datanodes and namenode:
>
> $ ulimit -a
>
> and
> check the 'dfs.datanode.max.xcievers' or 'dfs.datanode.max.transfer.threads'
> property in hdfs-site.xml
>
>
>
>
> On Sun, Apr 20, 2014 at 9:40 PM, Amit Kabra <amitkabraiiit@gmail.com> wrote:
>>
>> Yes, error logs here : http://pastebin.com/RBdN5Euf
>>
>> On Sun, Apr 20, 2014 at 8:14 PM, Serge Blazhievsky <hadoop.ca@gmail.com>
>> wrote:
>> > Do you see any errors in datanodes logs?
>> >
>> > Sent from my iPhone
>> >
>> >> On Apr 20, 2014, at 2:57, Amit Kabra <amitkabraiiit@gmail.com> wrote:
>> >>
>> >> number
>
>
>
>
> --
>
> Regards,
> ...sudhakara
>
