hadoop-hdfs-dev mailing list archives

From "Jeff Hubbs (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-13397) start-dfs.sh and hdfs --daemon start datanode say "ERROR: Cannot set priority of datanode process XXXX"
Date Wed, 04 Apr 2018 19:02:00 GMT
Jeff Hubbs created HDFS-13397:
---------------------------------

             Summary: start-dfs.sh and hdfs --daemon start datanode say "ERROR: Cannot set priority of datanode process XXXX"
                 Key: HDFS-13397
                 URL: https://issues.apache.org/jira/browse/HDFS-13397
             Project: Hadoop HDFS
          Issue Type: New Feature
          Components: hdfs
    Affects Versions: 3.0.1
            Reporter: Jeff Hubbs


When executing
{code:java}
$HADOOP_HOME/bin/hdfs --daemon start datanode
{code}
as a regular user (e.g. "hdfs"), it fails with
{code:java}
ERROR: Cannot set priority of datanode process XXXX
{code}
where XXXX is some PID.
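For context, the message appears to come from the daemon launch wrapper in the Hadoop 3 shell scripts, which renices the newly forked process and reports the quoted error when the kernel refuses the renice. The sketch below is a paraphrase of that logic, not the literal hadoop-functions.sh source; the variable names are illustrative:
{code:java}
# Paraphrased sketch (illustrative names) of the Hadoop 3 daemon wrapper:
# after forking the datanode it renices the child to ${HADOOP_NICENESS},
# and a refused renice produces the error seen above.
renice "${HADOOP_NICENESS}" "${daemon_pid}" > /dev/null 2>&1
if [[ $? -gt 0 ]]; then
  echo "ERROR: Cannot set priority of datanode process ${daemon_pid}"
fi
{code}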

It turned out that this is because, at least on Gentoo Linux (and I believe this is nearly
universal), a regular user's processes cannot by default raise their own priority or that of
any of the user's other processes. To fix this, I added these lines to /etc/security/limits.conf
[NOTE: the users hdfs, yarn, and mapred are in the group called hadoop on this system]:
{code:java}
@hadoop        hard    nice            -15
@hadoop        hard    priority        -15
{code}
This change will need to be made on all datanodes.
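For what it's worth, here is a rough way to confirm the new limit has taken effect after logging back in as a user in the hadoop group. The 35 figure assumes the usual RLIMIT_NICE encoding of 20 minus the nice value, and the renice test just mimics what the daemon wrapper does:
{code:java}
# Log back in as e.g. the hdfs user so pam_limits re-reads limits.conf,
# then check the scheduling-priority ceiling; with a hard nice limit of -15,
# bash's ulimit should report 20 - (-15) = 35.
$ ulimit -e
35

# Functional check: renicing one of your own processes to a negative value
# should now succeed instead of failing with "Permission denied".
$ sleep 60 &
$ renice -n -10 -p $!
{code}
Note that an already running shell keeps its old limits; the user has to start a fresh login session for pam_limits to apply the new values.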

The documentation should note that [at minimum] the hdfs user must be allowed to raise the
priority of its processes. I did not observe this problem under 3.0.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org

