hadoop-hdfs-issues mailing list archives

From "Arpit Agarwal (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-839) Wait for other services in the started script of hadoop-runner base docker image
Date Tue, 20 Nov 2018 16:52:00 GMT

    https://issues.apache.org/jira/browse/HDDS-839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16693477#comment-16693477

Arpit Agarwal commented on HDDS-839:

Thanks for this improvement [~elek]. It looks like we are removing the 15-second sleep for OM, but WAITFOR is not being set. Will that be in a separate patch?
-      # To make sure SCM is running in dockerized environment we will sleep
-      # Could be removed after HDFS-13203
-      echo "Waiting 15 seconds for SCM startup"
-      sleep 15

> Wait for other services in the started script of hadoop-runner base docker image
> --------------------------------------------------------------------------------
>                 Key: HDDS-839
>                 URL: https://issues.apache.org/jira/browse/HDDS-839
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>         Attachments: HDDS-839-docker-hadoop-runner.001.patch, HDDS-839-docker-hadoop-runner.002.patch
> As described in the parent issue, we need a simple method to handle service dependencies
> in kubernetes clusters (usually as a workaround when some clients can't re-try with renewed
> dns information).
> But it also could be useful to minimize the wait time in the docker-compose clusters.
> The easiest implementation is modifying the started script of the apache/hadoop-runner
> base image and adding a bash loop which checks the availability of the TCP port (with netcat).
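The netcat-based wait loop described above could look roughly like the following. This is a hedged sketch, not the actual patch: the variable names (WAITFOR as a "host:port" pair, WAITFOR_TIMEOUT) and the default timeout are assumptions for illustration.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a service-dependency wait loop for a container
# entrypoint. WAITFOR and WAITFOR_TIMEOUT are assumed variable names.

# WAITFOR holds a "host:port" pair, e.g. WAITFOR=scm:9876
if [ -n "$WAITFOR" ]; then
  HOST="${WAITFOR%%:*}"
  PORT="${WAITFOR##*:}"
  TIMEOUT="${WAITFOR_TIMEOUT:-90}"   # seconds; assumed default
  echo "Waiting for $HOST:$PORT (up to ${TIMEOUT}s)"
  i=0
  # Poll the TCP port with netcat until it opens or the timeout expires,
  # replacing the fixed "sleep 15" with an actual readiness check.
  while ! nc -z "$HOST" "$PORT" 2>/dev/null; do
    i=$((i + 1))
    if [ "$i" -ge "$TIMEOUT" ]; then
      echo "Timed out waiting for $HOST:$PORT" >&2
      exit 1
    fi
    sleep 1
  done
  echo "$HOST:$PORT is available"
fi
```

With nothing set, the block is a no-op, so services without dependencies start immediately; a kubernetes pod spec or docker-compose file would set something like `WAITFOR=scm:9876` on the dependent container.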

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
