hadoop-hdfs-issues mailing list archives

From "Elek, Marton (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-1609) Remove hard coded uid from Ozone docker image
Date Tue, 18 Jun 2019 13:05:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16866582#comment-16866582
] 

Elek, Marton commented on HDDS-1609:
------------------------------------

I tried it (on Arch Linux instead of CentOS 7; let me know if it's a CentOS-specific problem):

 
{code:bash}
[testuser@sc hadoop]$ mvn clean install -f pom.ozone.xml -DskipTests
...
[testuser@sc hadoop]$ cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
[testuser@sc ozone]$ ./test.sh
Removing network ozone_default
WARNING: Network ozone_default not found.
Creating network "ozone_default" with the default driver
Creating ozone_datanode_1 ... done
Creating ozone_datanode_2 ... done
Creating ozone_datanode_3 ... done
Creating ozone_scm_1      ... done
Creating ozone_om_1       ... done
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
3 datanodes are up and registered to the scm
==============================================================================
auditparser
==============================================================================
auditparser.Auditparser :: Smoketest ozone cluster startup
==============================================================================
Initiating freon to generate data                                     | PASS |
------------------------------------------------------------------------------
Testing audit parser                                                  | PASS |
------------------------------------------------------------------------------
auditparser.Auditparser :: Smoketest ozone cluster startup            | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
auditparser                                                           | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
Output:  /opt/hadoop/compose/ozone/result/robot-ozone-auditparser-om.xml
==============================================================================
basic :: Smoketest ozone cluster startup
==============================================================================
Check webui static resources                                          | PASS |
------------------------------------------------------------------------------
Start freon testing                                                   | PASS |
------------------------------------------------------------------------------
basic :: Smoketest ozone cluster startup                              | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
Output:  /opt/hadoop/compose/ozone/result/robot-ozone-basic-scm.xml
Stopping ozone_om_1       ... done
Stopping ozone_datanode_2 ... done
Stopping ozone_scm_1      ... done
Stopping ozone_datanode_1 ... done
Stopping ozone_datanode_3 ... done
Removing ozone_om_1       ... done
Removing ozone_datanode_2 ... done
Removing ozone_scm_1      ... done
Removing ozone_datanode_1 ... done
Removing ozone_datanode_3 ... done
Removing network ozone_default
Log:     /opt/hadoop/compose/ozone/result/log.html
Report:  /opt/hadoop/compose/ozone/result/report.html
[testuser@sc ozone]$ id
uid=501(testuser) gid=501(testuser) groups=501(testuser),993(docker)
{code}
 

Please let me know where the privilege escalation is.

> Remove hard coded uid from Ozone docker image
> ---------------------------------------------
>
>                 Key: HDDS-1609
>                 URL: https://issues.apache.org/jira/browse/HDDS-1609
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Eric Yang
>            Priority: Major
>             Fix For: 0.5.0
>
>         Attachments: linux.txt, log.html, osx.txt, report.html
>
>
> The hadoop-runner image is hard coded to [USER hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45],
> and the hadoop user is hard coded to uid 1000.  This arrangement complicates development
> environments where the host user's uid differs from 1000.  Data is written to external bind
> mount locations as uid 1000, which can prevent the development environment from cleaning up
> its test data.
> The Docker documentation states that "The best way to prevent privilege-escalation attacks
> from within a container is to configure your container’s applications to run as unprivileged
> users."  From an Ozone architecture point of view, there is no reason for the Ozone daemons
> to require a privileged or hard-coded user.
> h3. Solution 1
> It would be best to support running the docker container as the host user to reduce friction.
> The user should be able to run:
> {code}
> docker run -u $(id -u):$(id -g) ...
> {code}
> or in docker-compose file:
> {code}
> user: "${UID}:${GID}"
> {code}
> By doing this, the user will be nameless in the docker container.  Some commands may warn
> that the user does not have a name.  This can be resolved by mounting /etc/passwd, or a file
> that looks like /etc/passwd, that contains the host user's entry.
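> A minimal compose sketch of this idea (the service name, image tag, and mount are illustrative placeholders, not the actual Ozone compose files):
> {code}
> services:
>   datanode:
>     image: apache/hadoop-runner
>     user: "${UID}:${GID}"
>     volumes:
>       # expose the host user's passwd entry so the uid resolves to a name
>       - /etc/passwd:/etc/passwd:ro
> {code}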
> h3. Solution 2
> Move the hard coded user to a uid in the range below 200.  The default Linux profile reserves
> uids below 200 for service users and gives them a umask that keeps data private to the service
> user, or group writable if the service shares a group with other service users.  Register the
> service user with the Linux vendors to ensure that there is a reserved uid for the Hadoop user,
> or pick one that works for Hadoop.  This is a longer route to pursue, and may not be fruitful.
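> As an illustration only (the concrete uid would come from the vendor registration; 199 is a made-up placeholder here):
> {code}
> # create a system account with a fixed uid below 200
> sudo useradd --system --uid 199 --no-create-home hadoop
> {code}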
> h3. Solution 3
> Default the docker image to have the sssd client installed.  This allows the docker image
> to see host-level user names by bind mounting the sssd socket.  The instructions for doing
> this are on the [Hadoop website|https://hadoop.apache.org/docs/r3.1.2/hadoop-yarn/hadoop-yarn-site/DockerContainers.html#User_Management_in_Docker_Container].
> The prerequisite for this approach is that the host system has sssd installed.  For
> production systems, there is a 99% chance that sssd is installed.
> We may want to support a combination of solutions 1 and 3.
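> A rough sketch of how the combined approach could look from the command line (the sssd socket path follows the linked Hadoop documentation; the remaining flags are placeholders):
> {code}
> docker run \
>   -u $(id -u):$(id -g) \
>   -v /var/lib/sss/pipes:/var/lib/sss/pipes:ro \
>   ...
> {code}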



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
