spark-issues mailing list archives

From "Apache Spark (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (SPARK-13776) Web UI is not available after ./sbin/start-master.sh
Date Wed, 09 Mar 2016 22:27:41 GMT

     [ https://issues.apache.org/jira/browse/SPARK-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-13776:
------------------------------------

    Assignee: Apache Spark

> Web UI is not available after ./sbin/start-master.sh
> ----------------------------------------------------
>
>                 Key: SPARK-13776
>                 URL: https://issues.apache.org/jira/browse/SPARK-13776
>             Project: Spark
>          Issue Type: Bug
>          Components: Web UI
>    Affects Versions: 1.6.0
>         Environment: Solaris 11.3, Oracle SPARC T-5 8 with 1024 hardware threads
>            Reporter: Erik O'Shaughnessy
>            Assignee: Apache Spark
>            Priority: Minor
>
> The Apache Spark Web UI fails to become available after starting a Spark master in standalone mode:
> $ ./sbin/start-master.sh
> The log file contains the following:
> {quote}
> cat spark-hadoop-org.apache.spark.deploy.master.Master-1-t5-8-002.out
> Spark Command: /usr/java/bin/java -cp /usr/local/spark-1.6.0_nohadoop/conf/:/usr/local/spark-1.6.0_nohadoop/assembly/target/scala-2.10/spark-assembly-1.6.0-hadoop2.2.0.jar:/usr/local/spark-1.6.0_nohadoop/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar:/usr/local/spark-1.6.0_nohadoop/lib_managed/jars/datanucleus-rdbms-3.2.9.jar:/usr/local/spark-1.6.0_nohadoop/lib_managed/jars/datanucleus-core-3.2.10.jar -Xms1g -Xmx1g org.apache.spark.deploy.master.Master --ip t5-8-002 --port 7077 --webui-port 8080
> ========================================
> 16/01/27 12:00:42 WARN AbstractConnector: insufficient threads configured for SelectChannelConnector@0.0.0.0:8080
> 16/01/27 12:00:42 WARN AbstractConnector: insufficient threads configured for SelectChannelConnector@t5-8-002:6066
> {quote}
> I did some poking around and the message appears to come from Jetty: it indicates a mismatch between Jetty's default maxThreads configuration and the actual number of CPUs available on the hardware (1024). I was not able to find a way to successfully change Jetty's configuration at runtime (an illustrative sketch of pinning these values explicitly follows the quoted report below).
> Our workaround was to disable CPUs until the WARN messages no longer appeared in the log file, which was when NCPUs = 504.
> From looking at their bug reports, I can't say for certain whether this is a known problem in Jetty, but I wasn't able to locate a Jetty issue describing it.
> While this is not specifically an Apache Spark problem, I thought documenting it would at least be helpful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

