hadoop-yarn-issues mailing list archives

From "Wangda Tan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-7224) Support GPU isolation for docker container
Date Tue, 24 Oct 2017 19:24:00 GMT

    [ https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16217506#comment-16217506 ]

Wangda Tan commented on YARN-7224:
----------------------------------

Thanks [~sunilg] for reviewing the patch.

For major comments:

#1/#3, I think it's better to make the GPU-Docker implementation configurable; at a minimum we can then easily add new implementations in the future without worrying about breaking backward compatibility, etc. I added a new config for this setting (see yarn-default.xml) and renamed the existing implementation to NvidiaDockerV1... (Nvidia docker v2 is currently in alpha phase and includes many usability improvements; we can support it once it becomes GA).
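
For illustration, a minimal sketch of how the implementation could be selected from configuration (the config key, value, and class names below are assumptions, not necessarily what the patch uses):
{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch: pick the GPU docker plugin implementation from
// configuration so new implementations (e.g. nvidia-docker v2 once GA)
// can be added later without breaking backward compatibility.
public final class GpuDockerPluginSelector {
  private static final String DOCKER_PLUGIN_KEY =
      "yarn.nodemanager.resource-plugins.gpu.docker-plugin"; // assumed key
  private static final String NVIDIA_DOCKER_V1 = "nvidia-docker-v1";

  public static String selectImpl(Configuration conf) {
    String impl = conf.get(DOCKER_PLUGIN_KEY, NVIDIA_DOCKER_V1);
    if (!NVIDIA_DOCKER_V1.equals(impl)) {
      throw new IllegalArgumentException(
          "Unsupported GPU docker plugin: " + impl);
    }
    return impl; // caller would instantiate the matching plugin class here
  }
}
{code}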

#5/#6/#8/#9, currently it returns only the minor number, which is not sufficient. At a minimum we should include the GPU index for better handling of the device -> GPU mapping. I added {{GpuDevice}} (which implements Serializable) to encapsulate allocated GPU devices. And I changed YarnConfiguration to require admins to specify the GPU index along with the minor number when the GPU devices usable by YARN are manually configured. See {{YarnConfiguration#NM_GPU_ALLOWED_DEVICES}}.
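
A minimal sketch of what such a device holder could look like (field and method names are assumptions for illustration):
{code:java}
import java.io.Serializable;

// Hypothetical sketch: a GPU device identified by both its index and its
// minor number; Serializable so allocations can be persisted and recovered.
public class GpuDevice implements Serializable {
  private static final long serialVersionUID = 1L;
  private final int index;        // GPU index as reported by the driver
  private final int minorNumber;  // device minor number on the host

  public GpuDevice(int index, int minorNumber) {
    this.index = index;
    this.minorNumber = minorNumber;
  }

  public int getIndex() { return index; }
  public int getMinorNumber() { return minorNumber; }
}
{code}
With this shape, a manually configured allowed-devices list could be written as comma-separated index:minor pairs (e.g. 0:0,1:1); the exact format in the patch may differ.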

For minor comments:

#2, Done.
#4, The docker volume contains the host OS's drivers, libraries, etc., so it must be mounted read-only (RO).
#7, We map the GPU device inside the docker container to the same device path as on the host OS, which I think is appropriate.
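
To make #4 and #7 concrete, a tiny sketch of the resulting specs (volume name and device path are illustrative, not from the patch):
{code:java}
// Hypothetical sketch: the driver volume is mounted read-only, and a GPU
// device keeps the same path inside the container as on the host.
String volumeMount = "nvidia_driver_375.66:/usr/local/nvidia:ro"; // #4: RO mount
String deviceMap   = "/dev/nvidia0:/dev/nvidia0";                 // #7: host path == container path
{code}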

> Support GPU isolation for docker container
> ------------------------------------------
>
>                 Key: YARN-7224
>                 URL: https://issues.apache.org/jira/browse/YARN-7224
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>         Attachments: YARN-7224.001.patch, YARN-7224.002-wip.patch, YARN-7224.003.patch, YARN-7224.004.patch, YARN-7224.005.patch, YARN-7224.006.patch
>
>
> This patch addresses the following issues when docker containers are used:
> 1. GPU driver and NVIDIA libraries: if GPU drivers and NVIDIA libraries are pre-packaged inside the docker image, they could conflict with the drivers and NVIDIA libraries installed on the host OS. An alternative solution is to detect the host OS's installed drivers and devices and mount them when launching the docker container. Please refer to \[1\] for more details.
> 2. Image detection:
> From \[2\], the challenge is:
> bq. Mounting user-level driver libraries and device files clobbers the environment of the container, it should be done only when the container is running a GPU application. The challenge here is to determine if a given image will be using the GPU or not. We should also prevent launching containers based on a Docker image that is incompatible with the host NVIDIA driver version, you can find more details on this wiki page.
> 3. GPU isolation.
> *Proposed solution*:
> a. Use nvidia-docker-plugin \[3\] to address issue #1; this is the same solution used by K8S \[4\]. Issue #2 could be addressed in a separate JIRA.
> We won't ship nvidia-docker-plugin with our releases, and we require cluster admins to preinstall nvidia-docker-plugin to use GPU+docker support on YARN. "nvidia-docker" is a wrapper around the docker binary which can address #3 as well; however, "nvidia-docker" doesn't provide the same semantics as docker, and it needs additional environment setup such as PATH/LD_LIBRARY_PATH to use it. To avoid introducing such issues, we plan to use the nvidia-docker-plugin + docker binary approach.
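> For illustration, a minimal sketch of how the NM could query nvidia-docker-plugin for the docker CLI arguments it recommends (the REST endpoint and default port 3476 come from the plugin's documentation \[3\]; class and method names are assumptions, and error handling is omitted):
> {code:java}
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
> import java.net.URL;
> import java.nio.charset.StandardCharsets;
>
> // Hypothetical sketch: ask nvidia-docker-plugin (v1) for the volume and
> // device flags needed to launch a GPU container with plain docker.
> public class NvidiaDockerPluginClient {
>   public static String fetchDockerCliArgs() throws Exception {
>     URL url = new URL("http://localhost:3476/docker/cli");
>     try (BufferedReader in = new BufferedReader(
>         new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
>       return in.readLine(); // e.g. "--volume-driver=... --volume=... --device=..."
>     }
>   }
> }
> {code}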
> b. To address GPU drivers and NVIDIA libraries, we use nvidia-docker-plugin \[3\] to create a volume which includes the GPU-related libraries and mount it when the docker container is launched. Changes include (see the sketch after this list):
> - Instead of using {{volume-driver}}, this patch adds a {{docker volume create}} command to c-e and the NM Java side. The reason is that {{volume-driver}} can only use a single volume driver for each launched docker container.
> - Updated {{c-e}} and the Java side so that file-existence checking is skipped when a mounted volume is a named volume in docker. (Named volumes still need to be added to the permitted list in container-executor.cfg.)
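> As a sketch (driver and volume names assumed, not from the patch), the volume could be created ahead of container launch roughly like this:
> {code:java}
> import java.util.Arrays;
> import java.util.List;
>
> // Hypothetical sketch: issue "docker volume create" ahead of container
> // launch instead of relying on --volume-driver at run time.
> public class DockerVolumeCreate {
>   public static void main(String[] args) throws Exception {
>     List<String> cmd = Arrays.asList(
>         "docker", "volume", "create",
>         "--driver", "nvidia-docker",  // assumed driver name
>         "nvidia_driver_375.66");      // illustrative volume name
>     int rc = new ProcessBuilder(cmd).inheritIO().start().waitFor();
>     if (rc != 0) {
>       throw new RuntimeException("docker volume create failed, rc=" + rc);
>     }
>   }
> }
> {code}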
> c. To address the isolation issue:
> We found that cgroups + docker don't work under newer docker versions, which use {{runc}} as the default runtime. Setting {{--cgroup-parent}} to a cgroup which includes any {{devices.deny}} entry causes the docker container to fail to launch.
> Instead, this patch passes the allowed GPU devices to the docker launch command via {{--device}}.
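> A minimal sketch of how the {{--device}} arguments could be assembled from the allocated GPUs ({{/dev/nvidiactl}} and {{/dev/nvidia-uvm}} are the standard NVIDIA control devices; the minor numbers and helper names here are illustrative, not from the patch):
> {code:java}
> import java.util.Arrays;
> import java.util.List;
>
> // Hypothetical sketch: expose only the allocated GPUs via --device flags
> // (plus the NVIDIA control devices) rather than relying on a cgroup
> // devices.deny hierarchy.
> public class GpuDeviceArgs {
>   public static void main(String[] args) {
>     List<Integer> allocatedMinors = Arrays.asList(0, 1); // illustrative
>     StringBuilder dockerArgs = new StringBuilder();
>     for (int minor : allocatedMinors) {
>       String dev = "/dev/nvidia" + minor;
>       dockerArgs.append(" --device=").append(dev).append(':').append(dev);
>     }
>     dockerArgs.append(" --device=/dev/nvidiactl:/dev/nvidiactl");
>     dockerArgs.append(" --device=/dev/nvidia-uvm:/dev/nvidia-uvm");
>     System.out.println(dockerArgs.toString().trim());
>   }
> }
> {code}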
> References:
> \[1\] https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-driver
> \[2\] https://github.com/NVIDIA/nvidia-docker/wiki/Image-inspection
> \[3\] https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker-plugin
> \[4\] https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/





