spark-issues mailing list archives

From "Vida Ha (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-3213) spark_ec2.py cannot find slave instances
Date Mon, 25 Aug 2014 21:58:58 GMT

    [ https://issues.apache.org/jira/browse/SPARK-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14109818#comment-14109818 ]

Vida Ha edited comment on SPARK-3213 at 8/25/14 9:57 PM:
---------------------------------------------------------

Joseph, Josh, & I discussed in person. 

There are two quick workarounds:

1) If using "Launch more like this", use an old version of the spark_ec2 script, which
uses security groups to identify the slaves

2) Avoid using "Launch more like this"
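
The security-group lookup from workaround 1 can be sketched as a plain filter over instance metadata (a simplified stand-in for what the old boto-based script did; the dict fields and the "<cluster>-slaves" group naming are illustrative assumptions, not the script's exact code):

```python
# Sketch of the old lookup: identify slaves by membership in the
# cluster's slave security group (assumed to be named "<cluster>-slaves").
# Instance records are plain dicts standing in for boto instance objects.

def find_slaves_by_group(instances, cluster_name):
    """Return instances that belong to the <cluster_name>-slaves group."""
    group = cluster_name + "-slaves"
    return [i for i in instances if group in i.get("security_groups", [])]

instances = [
    {"id": "i-1", "security_groups": ["my-cluster-master"]},
    {"id": "i-2", "security_groups": ["my-cluster-slaves"]},
    # Launched via "Launch more like this": the security group copies over,
    # so a group-based lookup still finds this instance.
    {"id": "i-3", "security_groups": ["my-cluster-slaves"]},
]

print([i["id"] for i in find_slaves_by_group(instances, "my-cluster")])
# ['i-2', 'i-3']
```

This is why the old scripts were immune to the bug: "Launch more like this" copies security groups, so every clone stays visible to the lookup.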

But now I need to investigate:

When using "Launch more like this", Amazon does seem to try to reuse the tags, but I'm
wondering if it dislikes having multiple machines with the same "Name" tag.  I will try
using a different tag, such as "spark-ec2-cluster-id", to identify the machines.  If that
tag does copy over, then we can properly support "Launch more like this".
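
The tag-based lookup under investigation can be probed with a small filter sketch (hypothetical: the real script queries EC2 via boto, and "spark-ec2-cluster-id" is only the proposed tag name from the comment above):

```python
# Sketch of a tag-based slave lookup. Instance records are plain dicts
# standing in for boto instance objects; tags are a key->value mapping.

def find_slaves_by_tag(instances, tag_key, tag_value):
    """Return instances whose tags map tag_key to tag_value."""
    return [i for i in instances
            if i.get("tags", {}).get(tag_key) == tag_value]

instances = [
    {"id": "i-1", "tags": {"Name": "my-cluster-slave",
                           "spark-ec2-cluster-id": "my-cluster"}},
    # Clone made with "Launch more like this": if EC2 does not copy the
    # tags to the new instance, it is invisible to any tag-based lookup.
    {"id": "i-2", "tags": {}},
]

# Lookup by the "Name" tag finds only the original slave...
print([i["id"] for i in find_slaves_by_tag(instances, "Name", "my-cluster-slave")])
# ...and a dedicated cluster-id tag behaves the same way: it only helps
# if "Launch more like this" actually copies it to the clone.
print([i["id"] for i in
       find_slaves_by_tag(instances, "spark-ec2-cluster-id", "my-cluster")])
```

Either way, the fix hinges on whether EC2 propagates the chosen tag to cloned instances, which is exactly what the comment proposes to test.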


was (Author: vidaha):
Joseph, Josh, & I discussed in person. 

There is a quick workaround:

1) Use an old version of the spark_ec2 scripts that uses security groups to identify the slaves,
if using "Launch more like this"

But now I need to investigate:

When using "Launch more like this", Amazon does seem to try to reuse the tags, but I'm
wondering if it dislikes having multiple machines with the same "Name" tag.  I will try
using a different tag, such as "spark-ec2-cluster-id", to identify the machines.  If that
tag does copy over, then we can properly support "Launch more like this".

> spark_ec2.py cannot find slave instances
> ----------------------------------------
>
>                 Key: SPARK-3213
>                 URL: https://issues.apache.org/jira/browse/SPARK-3213
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>    Affects Versions: 1.1.0
>            Reporter: Joseph K. Bradley
>            Priority: Blocker
>
> spark_ec2.py cannot find all slave instances.  In particular:
> * I created a master & slave and configured them.
> * I created new slave instances from the original slave ("Launch More Like This").
> * I tried to relaunch the cluster, and it could only find the original slave.
> Old versions of the script worked.  The latest working commit which edited that .py script is: a0bcbc159e89be868ccc96175dbf1439461557e1
> There may be a problem with this PR: [https://github.com/apache/spark/pull/1899].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

