falcon-dev mailing list archives

From Ed Kohlwey <ekohl...@gmail.com>
Subject Re: Issue adding a cluster to falcon
Date Fri, 08 Aug 2014 00:30:32 GMT
I don't have a copy of the sandbox handy. If you look at the top of the logs, Falcon
prints a commit id in a banner of stars when it starts. Can you check the version in
the sandbox? It may provide a hint.
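
For example, something like this should pull the banner out of the sandbox's startup
log (the log path assumes the stock HDP layout, and the leading stars are just my
guess at the banner format, so adjust both to your install):

# show the banner block (rows of '*') printed at server start
grep -B2 -A8 '^\*\*\*' /var/log/falcon/falcon.application.log | head -40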

Sent from my mobile device. Please excuse any typos or shorthand.
On Aug 7, 2014 6:16 PM, "Tyler D" <tdowg1@gmail.com> wrote:

> On Thu, Aug 7, 2014 at 12:00 PM, Ed Kohlwey <ekohlwey@gmail.com> wrote:
>
> > I'm attempting to deploy a cluster entity through an Ambari-managed Falcon.
> > This is a clean install from Ambari 1.6.1.
> >
> > When submitting the cluster definition, I get the following message:
> >
> > $ sudo -u admin falcon entity -submit -file cluster.xml -type cluster
> >
> > Error: Invalid Execute server or port: <hostname redacted>:8050
> >
> > Cannot initialize Cluster. Please check your configuration for
> > mapreduce.framework.name and the correspond server addresses.
> >
> >
> > mapreduce.framework.name is set to yarn (per Ambari deployment defaults). My
> > configuration line looks like this:
> >
> > <interface type="execute" endpoint="<hostname redacted>:8050" version="2.4.0" />
> >
> > Enabling debug logging for the Hadoop packages in log4j.xml shows that the
> > provider is being tried but the connection is never established.
> >
> > 2014-08-07 15:23:03,513 DEBUG -
> > [1141105573@qtp-216944274-0:admin:POST//entities/submit/cluster
> > a1b37e55-34fb-48f3-830d-e8e736f79c75] ~ Trying ClientProtocolProvider :
> > org.apache.hadoop.mapred.YarnClientProtocolProvider (Cluster:90)
> >         at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
> >         at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
> >         at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
> >
> > 2014-08-07 15:23:03,971 INFO  -
> > [1141105573@qtp-216944274-0:admin:POST//entities/submit/cluster
> > a1b37e55-34fb-48f3-830d-e8e736f79c75] ~ Failed to use
> > org.apache.hadoop.mapred.YarnClientProtocolProvider due to error: null
> > (Cluster:113)
> >
> >
> > Any ideas about what could be wrong are appreciated. I am able to launch other
> > YARN jobs on this cluster successfully via Hive.
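> >
> > (In case it helps anyone reproduce: the client config on this host and the RM
> > port can be checked with something like the following; the /etc/hadoop/conf
> > path and port 8050 are assumptions based on the stock HDP layout.)
> >
> > # value of mapreduce.framework.name as seen by clients on this host
> > grep -A1 'mapreduce.framework.name' /etc/hadoop/conf/mapred-site.xml
> >
> > # confirm something is listening on the RM port referenced in the cluster entity
> > sudo netstat -tlnp | grep 8050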
> >
>
>
>
> I am having this exact same issue.  In my case, Falcon is installed through the
> HDP 2.1 distribution, which uses Ambari 1.6.1.
>
> At first I thought the problem was a version or port misconfiguration in the
> cluster entity spec I was trying to use.
>
> I tried giving Falcon several different cluster XML configurations; all of them
> failed.
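>
> (One quick sanity check for anyone reproducing this: confirm that an ordinary
> YARN client on the Falcon host can reach the ResourceManager at all, e.g. with
> the command below.)
>
> # if this lists the cluster's NodeManagers, the RM itself is reachable from this box
> yarn node -list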
>
> I included the XML I tried in this mail; it is also posted at
> http://pastebin.com/Ce07vyGv
>
> Hopefully this helps people isolate the problem / rule out non-problems.
>
> All of these type="execute" interface lines caused the cluster submission to fail
> (which makes me think this XML probably isn't the problem):
>
> <?xml version="1.0"?>
> <cluster colo="USWestOregon" description="oregonHadoopCluster" name="primaryCluster" xmlns="uri:falcon:cluster:0.1">
>     <interfaces>
>         <interface type="readonly" endpoint="hftp://vm-centos6-hdp21-a1.hdp.hadoop:50070" version="2.4.0" />
>         <interface type="write" endpoint="hdfs://vm-centos6-hdp21-a1.hdp.hadoop:8020" version="2.4.0" />
>         <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8050" version="2.2.0" />-->
>         <!-- ^^FAILS: gives error like...
>              $ falcon entity -type cluster -submit -file /home/ambari-qa/falconChurnDemo/oregonCluster.xml
>              Error: Invalid Execute server or port: vm-centos6-hdp21-a1.hdp.hadoop:8050
>              Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses. (FalconWebException:66)
>         -->
>         <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8088" version="2.2.0" />--><!-- ALSO FAILS -->
>         <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8050" version="2.4.0" />--><!-- ALSO FAILS -->
>         <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8050" version="0.20.2" />--><!-- ALSO FAILS -->
>         <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8030" version="2.4.0" />--><!-- ALSO FAILS -->
>         <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8025" version="2.4.0" />--><!-- ALSO FAILS -->
>         <!--<interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8141" version="2.4.0" />--><!-- ALSO FAILS -->
>         <!--<interface type="execute" endpoint="rm:8050" version="0.20.2" />--><!-- ALSO FAILS -->
>         <!--<interface type="execute" endpoint="localhost:8050" version="0.20.2" />--><!-- ALSO FAILS -->
>         <interface type="execute" endpoint="vm-centos6-hdp21-a1.hdp.hadoop:8021" version="2.4.0" /><!-- ALSO FAILS -->
>         <interface type="workflow" endpoint="http://vm-centos6-hdp21-a1.hdp.hadoop:11000/oozie/" version="4.0.0" />
>         <interface type="messaging" endpoint="tcp://vm-centos6-hdp21-a1.hdp.hadoop:61616?daemon=true" version="5.1.6" />
>     </interfaces>
>     <locations>
>         <location name="staging" path="/apps/falcon/primaryCluster/staging" />
>         <location name="temp" path="/tmp" />
>         <location name="working" path="/apps/falcon/primaryCluster/working" />
>     </locations>
> </cluster>
>
> And of course here's the error:
> Error:Invalid Execute server or port: vm-centos6-hdp21-a1.hdp.hadoop:8050
> Cannot initialize Cluster. Please check your configuration for
> mapreduce.framework.name and the correspond server addresses.
> (FalconWebException:66)
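>
> (If the full server-side stack trace is useful to anyone, it can be pulled out of
> the Falcon log with something like this; the log path assumes the HDP default
> location:)
>
> # grab the stack trace that accompanies the "Invalid Execute server" error
> grep -A40 'Invalid Execute server or port' /var/log/falcon/falcon.application.log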
>
>
>
> Something else I thought was weird: when I run the bin/falcon-status.sh and/or
> bin/service-status.sh shell scripts, they always report that the Falcon server is
> not running. That conflicts with what I can see, though: the Falcon process is
> running, the web interface at http://vm-centos6-hdp21-a1.hdp.hadoop:15000/ is up,
> and Ambari reports the service as being up. Here is the output of those commands:
>
> [ambari-qa@vm-centos6-hdp21-a1 falcon]$ bin/falcon-status
> Hadoop is installed, adding hadoop classpath to falcon classpath
> falcon is not running.
> [ambari-qa@vm-centos6-hdp21-a1 falcon]$ echo $?
> 255
> [ambari-qa@vm-centos6-hdp21-a1 falcon]$ bin/service-status.sh
> Invalid option for app: . Valid choices are falcon and prism
> [ambari-qa@vm-centos6-hdp21-a1 falcon]$ bin/service-status.sh falcon
> Hadoop is installed, adding hadoop classpath to falcon classpath
> falcon is not running.
> [ambari-qa@vm-centos6-hdp21-a1 falcon]$ bin/service-status.sh prism
> Hadoop is installed, adding hadoop classpath to falcon classpath
> mkdir: cannot create directory `/usr/lib/falcon/server/webapp/prism':
> Permission denied
> /usr/lib/falcon/bin/falcon-config.sh: line 99: cd:
> /usr/lib/falcon/server/webapp/prism: No such file or directory
> java.io.FileNotFoundException: /usr/lib/falcon/server/webapp/prism.war (No
> such file or directory)
>         at java.io.FileInputStream.open(Native Method)
>         at java.io.FileInputStream.<init>(FileInputStream.java:146)
>         at java.io.FileInputStream.<init>(FileInputStream.java:101)
>         at sun.tools.jar.Main.run(Main.java:259)
>         at sun.tools.jar.Main.main(Main.java:1177)
> /usr/lib/falcon/bin/falcon-config.sh: line 101: cd: OLDPWD not set
> prism is not running.
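>
> (A couple of other ways to check whether the server is really up, independent of
> those scripts; the admin endpoint path is my reading of the Falcon REST API, so
> treat it as an assumption:)
>
> # is a Falcon JVM actually running?
> ps -ef | grep -i [f]alcon
> # does the server answer on its admin API?
> curl -s http://vm-centos6-hdp21-a1.hdp.hadoop:15000/api/admin/version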
>
>
>
> I also want to mention that I tried Hortonworks' latest Sandbox VM (
> http://hortonassets.s3.amazonaws.com/2.1/vmware/Hortonworks_Sandbox_2.1.ova
> ), and there the bin/falcon-status.sh and bin/service-status.sh scripts come back
> as expected, saying that the Falcon server is up and running, along with its URL.
>
>
> Thanks, Ed, for sending out this mail. I guess my next step is to compare the
> Falcon versions between the HDP 2.1 cluster I installed (which has the problems)
> and the Sandbox VM, since the latter appears to behave better.
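>
> (Something like this should show the version on each box; the rpm package name is
> a guess from the HDP packaging, and the admin call assumes the Falcon CLI is on
> the PATH:)
>
> # packaged Falcon version on this host
> rpm -qa | grep -i falcon
> # version the running Falcon server reports about itself
> falcon admin -version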
>
