hive-user mailing list archives

From Mahender Sarangam <Mahender.Bigd...@outlook.com>
Subject How to mount a node which is in an unhealthy state
Date Mon, 19 Dec 2016 20:13:58 GMT
Hi,

Currently one of the nodes has gone into an unhealthy state and another node is LOST. When we try to restart
all services on the lost/unhealthy node in Ambari, we get the error below. Is there any reason for
a cluster node to go into an unhealthy state? Please help me bring the cluster back to a normal state.


***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
Directory /mnt/resource/hadoop/hdfs/data became unmounted from /mnt . Current mount point:
/ . Please ensure that mounts are healthy. If the mount change was intentional, you can update
the contents of /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist.
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
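
This warning means the DataNode data directory /mnt/resource/hadoop/hdfs/data now resolves to the root filesystem (/) instead of the filesystem that used to be mounted at /mnt, so the first thing to check on the affected node is whether that disk is simply no longer mounted. A minimal check along these lines (the device name /dev/sdb1 is only a placeholder, not taken from this log):

    # What, if anything, is mounted at /mnt, and where does the data dir live now?
    findmnt /mnt
    df -h /mnt/resource/hadoop/hdfs/data

    # Is the device still visible, and does fstab know how to mount it?
    lsblk
    grep -w /mnt /etc/fstab

    # If the device is present but unmounted, remount it before restarting HDFS
    sudo mount /dev/sdb1 /mnt    # placeholder device; or just: sudo mount /mnt (if listed in fstab)

Only after /mnt is mounted again (or you have decided the data really should live under /) does it make sense to restart the DataNode from Ambari.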

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 174, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 61, in start
    datanode(action="start")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 68, in datanode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 269, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 73, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 103, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 151, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 304, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-wn47-lxcluster.out
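
The Fail above only says that hadoop-daemon.sh returned 1; the actual cause is normally in the DataNode .out file it points to and the matching .log alongside it. A quick way to look directly on the node (the .out path is taken from the message above; the .log name is assumed to follow the same host-name pattern):

    # Last lines of the logs the failed start wrote to
    tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-wn47-lxcluster.out
    tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-wn47-lxcluster.log

    # Optionally reproduce the start outside Ambari (a simplified form of the command Ambari runs)
    sudo su - hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh \
        --config /usr/hdp/current/hadoop-client/conf start datanode'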

Here is the stdout file:


The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it
for version 2.5.1.0-56
2016-12-19 20:02:59,311 - Checking if need to create versioned conf dir /etc/hadoop/2.5.1.0-56/0
2016-12-19 20:02:59,312 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir',
'--package', 'hadoop', '--stack-version', '2.5.1.0-56', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 20:02:59,332 - call returned (1, '/etc/hadoop/2.5.1.0-56/0 exist already', '')
2016-12-19 20:02:59,332 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir',
'--package', 'hadoop', '--stack-version', '2.5.1.0-56', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False}
2016-12-19 20:02:59,349 - checked_call returned (0, '')
2016-12-19 20:02:59,350 - Ensuring that hadoop has the correct symlink structure
2016-12-19 20:02:59,350 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 20:02:59,508 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists,
will call conf-select on it for version 2.5.1.0-56
2016-12-19 20:02:59,510 - Checking if need to create versioned conf dir /etc/hadoop/2.5.1.0-56/0
2016-12-19 20:02:59,512 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir',
'--package', 'hadoop', '--stack-version', '2.5.1.0-56', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 20:02:59,532 - call returned (1, '/etc/hadoop/2.5.1.0-56/0 exist already', '')
2016-12-19 20:02:59,532 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir',
'--package', 'hadoop', '--stack-version', '2.5.1.0-56', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False}
2016-12-19 20:02:59,551 - checked_call returned (0, '')
2016-12-19 20:02:59,552 - Ensuring that hadoop has the correct symlink structure
2016-12-19 20:02:59,552 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 20:02:59,553 - Skipping creation of User and Group as host is sys prepped or ignore_groupsusers_create
flag is on
2016-12-19 20:02:59,553 - Skipping setting dfs cluster admin and tez view acls as host is
sys prepped
2016-12-19 20:02:59,553 - FS Type:
2016-12-19 20:02:59,553 - Directory['/etc/hadoop'] {'mode': 0755}
2016-12-19 20:02:59,563 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content':
InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-12-19 20:02:59,565 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner':
'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-12-19 20:02:59,581 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) ||
(which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test
-f /selinux/enforce'}
2016-12-19 20:02:59,585 - Skipping Execute[('setenforce', '0')] due to not_if
2016-12-19 20:02:59,585 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents':
True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-12-19 20:02:59,586 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents':
True, 'group': 'root', 'cd_access': 'a'}
2016-12-19 20:02:59,588 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents':
True, 'cd_access': 'a'}
Skipping copying of fast-hdfs-resource.jar as host is sys prepped
2016-12-19 20:02:59,591 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties']
{'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-12-19 20:02:59,593 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content':
Template('health_check.j2'), 'owner': 'hdfs'}
2016-12-19 20:02:59,593 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content':
..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-12-19 20:02:59,605 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties']
{'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-12-19 20:02:59,605 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties']
{'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-12-19 20:02:59,606 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner':
'hdfs', 'group': 'hadoop'}
2016-12-19 20:02:59,610 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs',
'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group':
'hadoop'}
2016-12-19 20:02:59,613 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'),
'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-12-19 20:02:59,821 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists,
will call conf-select on it for version 2.5.1.0-56
2016-12-19 20:02:59,823 - Checking if need to create versioned conf dir /etc/hadoop/2.5.1.0-56/0
2016-12-19 20:02:59,825 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir',
'--package', 'hadoop', '--stack-version', '2.5.1.0-56', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 20:02:59,844 - call returned (1, '/etc/hadoop/2.5.1.0-56/0 exist already', '')
2016-12-19 20:02:59,845 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir',
'--package', 'hadoop', '--stack-version', '2.5.1.0-56', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False}
2016-12-19 20:02:59,866 - checked_call returned (0, '')
2016-12-19 20:02:59,867 - Ensuring that hadoop has the correct symlink structure
2016-12-19 20:02:59,867 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 20:02:59,869 - Stack Feature Version Info: stack_version=2.5, version=2.5.1.0-56,
current_cluster_version=2.5.1.0-56 -> 2.5.1.0-56
2016-12-19 20:02:59,882 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists,
will call conf-select on it for version 2.5.1.0-56
2016-12-19 20:02:59,884 - Checking if need to create versioned conf dir /etc/hadoop/2.5.1.0-56/0
2016-12-19 20:02:59,886 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir',
'--package', 'hadoop', '--stack-version', '2.5.1.0-56', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-12-19 20:02:59,905 - call returned (1, '/etc/hadoop/2.5.1.0-56/0 exist already', '')
2016-12-19 20:02:59,905 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir',
'--package', 'hadoop', '--stack-version', '2.5.1.0-56', '--conf-version', '0')] {'logoutput':
False, 'sudo': True, 'quiet': False}
2016-12-19 20:02:59,926 - checked_call returned (0, '')
2016-12-19 20:02:59,926 - Ensuring that hadoop has the correct symlink structure
2016-12-19 20:02:59,926 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-19 20:02:59,935 - checked_call['dpkg -s hdp-select | grep Version | awk '{print $2}'']
{'stderr': -1}
2016-12-19 20:02:59,953 - checked_call returned (0, '2.5.1.0-56', '')
2016-12-19 20:02:59,954 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*.out'
-exec echo '==> {} <==' \; -exec tail -n 100 {} \;'] {'logoutput': True, 'ignore_failures':
True, 'user': 'hdfs'}


2016-12-19 20:03:00,017 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents':
True, 'group': 'root'}
2016-12-19 20:03:00,025 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'),
'owner': 'root', 'group': 'root', 'mode': 0644}
2016-12-19 20:03:00,025 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations':
...}
2016-12-19 20:03:00,036 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2016-12-19 20:03:00,037 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner':
'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-19 20:03:00,045 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations':
...}
2016-12-19 20:03:00,052 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2016-12-19 20:03:00,052 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner':
'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-19 20:03:00,057 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner':
'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2016-12-19 20:03:00,058 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {},
'configurations': ...}
2016-12-19 20:03:00,064 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2016-12-19 20:03:00,064 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml']
{'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding':
'UTF-8'}
2016-12-19 20:03:00,069 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations':
...}
2016-12-19 20:03:00,076 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2016-12-19 20:03:00,076 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner':
'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-19 20:03:00,082 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop',
'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final':
{u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address':
u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated':
u'true'}}, 'configurations': ...}
2016-12-19 20:03:00,089 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-12-19 20:03:00,089 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner':
'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-12-19 20:03:00,135 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf',
'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner':
'hdfs', 'configurations': ...}
2016-12-19 20:03:00,142 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2016-12-19 20:03:00,142 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner':
'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-12-19 20:03:00,166 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'),
'owner': 'hdfs'}
2016-12-19 20:03:00,167 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents':
True, 'group': 'hadoop', 'mode': 0751}
2016-12-19 20:03:00,167 - Directory['/var/lib/ambari-agent/data/datanode'] {'create_parents':
True, 'mode': 0755}
2016-12-19 20:03:00,171 - Host contains mounts: ['/sys', '/proc', '/dev', '/dev/pts', '/run',
'/', '/sys/kernel/security', '/dev/shm', '/run/lock', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd',
'/sys/fs/pstore', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/hugetlb',
'/sys/fs/cgroup/freezer', '/sys/fs/cgroup/perf_event', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/blkio',
'/sys/fs/cgroup/cpuset', '/sys/fs/cgroup/memory', '/sys/fs/cgroup/devices', '/proc/sys/fs/binfmt_misc',
'/dev/mqueue', '/sys/kernel/debug', '/dev/hugepages', '/sys/fs/fuse/connections', '/etc/network/interfaces.dynamic.d',
'/proc/sys/fs/binfmt_misc', '/var/lib/lxcfs', '/run/user/2008', '/sys/kernel/debug/tracing',
'/run/user/2019'].
2016-12-19 20:03:00,171 - Mount point for directory /mnt/resource/hadoop/hdfs/data is /
2016-12-19 20:03:00,171 - Directory /mnt/resource/hadoop/hdfs/data became unmounted from /mnt
. Current mount point: / .
2016-12-19 20:03:00,174 - Host contains mounts: ['/sys', '/proc', '/dev', '/dev/pts', '/run',
'/', '/sys/kernel/security', '/dev/shm', '/run/lock', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd',
'/sys/fs/pstore', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/hugetlb',
'/sys/fs/cgroup/freezer', '/sys/fs/cgroup/perf_event', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/blkio',
'/sys/fs/cgroup/cpuset', '/sys/fs/cgroup/memory', '/sys/fs/cgroup/devices', '/proc/sys/fs/binfmt_misc',
'/dev/mqueue', '/sys/kernel/debug', '/dev/hugepages', '/sys/fs/fuse/connections', '/etc/network/interfaces.dynamic.d',
'/proc/sys/fs/binfmt_misc', '/var/lib/lxcfs', '/run/user/2008', '/sys/kernel/debug/tracing',
'/run/user/2019'].
2016-12-19 20:03:00,174 -
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
Directory /mnt/resource/hadoop/hdfs/data became unmounted from /mnt . Current mount point:
/ . Please ensure that mounts are healthy. If the mount change was intentional, you can update
the contents of /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist.
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****
***** WARNING ***** WARNING ***** WARNING ***** WARNING ***** WARNING *****

2016-12-19 20:03:00,174 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist']
{'content': '\n# This file keeps track of the last known mount-point for each dir.\n# It is
safe to delete, since it will get regenerated the next time that the component of the service
starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a
dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with
a hash (#) symbol\n# dir,mount_point\n/mnt/resource/hadoop/hdfs/data,/mnt\n', 'owner': 'hdfs',
'group': 'hadoop', 'mode': 0644}
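
This is the file the earlier warning refers to: one dir,mount_point pair per line, and the mismatch between the recorded mount point (/mnt) and the current one (/) is exactly what triggers the warning. If the mount change really was intentional, the warning and the file's own comments say it can be updated or even deleted (it is regenerated the next time the component starts). A sketch of the two options, assuming you have already decided the data should stay where it is:

    # Option 1: keep the history but record the new mount point for the data dir
    sudo cp /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist{,.bak}
    sudo sed -i 's|^/mnt/resource/hadoop/hdfs/data,/mnt$|/mnt/resource/hadoop/hdfs/data,/|' \
        /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist

    # Option 2: delete it and let Ambari regenerate it on the next DataNode start
    sudo rm /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist

If the disk just dropped off unexpectedly, remount it first instead; otherwise the DataNode will start writing blocks onto the root filesystem.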
2016-12-19 20:03:00,175 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop',
'mode': 0755}
2016-12-19 20:03:00,175 - Changing owner for /var/run/hadoop from 0 to hdfs
2016-12-19 20:03:00,175 - Changing group for /var/run/hadoop from 0 to hadoop
2016-12-19 20:03:00,176 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop',
'create_parents': True}
2016-12-19 20:03:00,176 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop',
'create_parents': True}
2016-12-19 20:03:00,177 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action':
['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid
&& ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2016-12-19 20:03:00,186 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2016-12-19 20:03:00,187 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited
;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf
start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'},
'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid &&
ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2016-12-19 20:03:04,287 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*'
-exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures':
True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-wn10-onetax.out.1 <==
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 55938
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-wn10-onetax.out.5 <==
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 55938
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-wn10-onetax.out <==
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 55938
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-wn10-onetax.out.3 <==
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 55938
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-wn10-lNXcLUSTER.log <==




