amaterasu-dev mailing list archives

From Arun Manivannan <a...@arunma.com>
Subject Re: Initial setup of Amaterasu - Unable to run
Date Sun, 01 Oct 2017 01:41:11 GMT
Thanks a lot, Yaniv. That clarifies a lot. I am going to try to create
more jobs (with deps) and run Amaterasu on them.

Cheers
Arun

On Sun, Oct 1, 2017, 07:22 Yaniv Rodenski <yaniv@shinto.io> wrote:

> Hi Arun,
>
> On Mesos, the easiest way to debug your job is to go to the Mesos UI,
> which usually runs on port 5050. If you are using the amaterasu-vagrant
> machine, the address would be 192.168.33.11:5050.
>
> As for the rest of your questions:
>
> 1. Currently, using a Mesos cluster is the only way to run Amaterasu jobs.
> The next version will also support YARN. We are thinking about creating a
> standalone deployment for Amaterasu, but currently it's not a high
> priority. As for buildHomeDir, that is part of our dev workflow and is very
> handy. We have another Gradle task called buildDistribution which packages
> Amaterasu, and this is how we plan to release versions going forward.
>
> 2. a. You are correct: in the current version, pipelines are monoliths. We
> do plan on making pipelines composable in version 0.3, which is when we
> also plan on supporting binaries for actions.
> b. To add dependencies, you can use the deps folder; you can see an example
> in the error-handling branch of
> https://github.com/shintoio/amaterasu-job-sample. For Scala (jar)
> dependencies you use the jars.yml file, and for Python we support Anaconda
> dependencies using a file called python.yml.
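>
> For illustration, a minimal deps/jars.yml could look something like the
> sketch below (the exact schema may vary between versions, and the artifact
> listed is just a placeholder example):

```yaml
# deps/jars.yml -- illustrative sketch of a Scala (jar) dependencies file.
# Repositories to resolve from, followed by the artifacts the job needs.
repositories:
  - id: maven-central
    type: maven
    url: https://repo.maven.apache.org/maven2/
artifacts:
  - groupId: com.github.nscala-time
    artifactId: nscala-time_2.11
    version: 2.16.0
```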
>
> 3. That should be the SDK :)
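>
> To make the handshake concrete, here is a sketch of what an action's code
> can do with AmaContext (the package path and method name here are from
> memory and may differ in your version - treat this as an assumption, not a
> reference):

```scala
// Illustrative sketch: a downstream action reading a DataFrame that an
// earlier action (here named "start") exported under the name "odd".
// Assumes the Amaterasu executor/SDK jar is on the Spark job's classpath;
// the exact package of AmaContext may differ between versions.
import org.apache.amaterasu.executor.runtime.AmaContext

val odd = AmaContext.getDataFrame("start", "odd")
odd.show()
```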
>
> Cheers,
> Yaniv
>
>
> On Sun, Oct 1, 2017 at 12:05 AM, Arun Manivannan <arun@arunma.com> wrote:
>
> > Hi Yaniv,
> >
> > That was spot on. Yes, that was the issue, and I am able to complete the
> > demo job successfully! I wonder where I should have looked to have
> > figured out this issue myself. :-(
> >
> > Probably I should have listened more carefully to your presentation, but
> > I have a few really basic questions.
> >
> > 1. Is the general way to test a deployment to run buildHomeDir (for dev)
> > and run it on Vagrant with Mesos? Is there a way to bypass Mesos and run
> > my Spark jobs in local mode and my Python jobs on my local machine? We
> > discussed this earlier - I would like to use Amaterasu for running some
> > quick integration tests, and it would be easier to test on my local
> > machine than to have a VM running.
> >
> > 2. Probably I am not seeing this right. In amaterasu-jobs, I notice that
> > all the jobs are in the same repo. As you know, most often the jobs in
> > our pipeline aren't in a single repo. I also notice that the
> > SparkRunnerHelper interprets the file.scala/other file arguments that are
> > passed into it and binds the context variables. However, once we are out
> > of the repository, all we would have is binaries.
> >
> >          a. Is it a requirement at the moment to have all the driver
> > source files in a single repo, i.e. the job repo?
> >          b. If that's the case, then how do I add external dependencies
> > to the component Spark jobs?
> >
> > 3. I see that using AmaContext would enable a handshake between jobs in
> > the pipeline. I realise, then, that we must have the SDK/Amaterasu
> > library on the classpath of the Spark job. Which one would that be?
> >
> > I am terribly sorry if these questions don't make much sense in the
> > context of the project. I would just like to know whether I have
> > misunderstood the purpose of the project. I absolutely realise that the
> > project is just incubating and it's too much to ask for all the bells
> > and whistles on day 1.
> >
> > Best Regards,
> > Arun
> >
> >
> >
> > On Sat, Sep 30, 2017 at 8:00 PM Yaniv Rodenski <yaniv@shinto.io> wrote:
> >
> > > Hi Arun,
> > >
> > > I think you are hitting a bug that we've fixed but that was in a
> > > pending PR; I've just merged the PR. Try doing a git pull and running
> > > again, and let us know if it solves the problem.
> > >
> > > Cheers,
> > > Yaniv
> > >
> > > On Sat, 30 Sep 2017 at 7:36 pm, Arun Manivannan <arun@arunma.com>
> wrote:
> > >
> > > > Hi,
> > > >
> > > > I am trying to make an initial run of Amaterasu with
> > > > https://github.com/arunma/amaterasu-v2-demo (just an unmodified fork
> > > > of https://github.com/shintoio/amaterasu-v2-demo). It seems the
> > > > Spark job fails with an error (as I see from the logs). Not
> > > > surprisingly, I am unable to see a JSON file in /tmp/test1.
> > > >
> > > > I am not familiar with Mesos. I tried to check for clues in
> > > > /var/log/mesos on the vagrant box, with no luck.
> > > >
> > > > I am just running a single-node Mesos on Vagrant
> > > > (https://github.com/shintoio/amaterasu-vagrant). I would greatly
> > > > appreciate it if you could help me with some hints.
> > > >
> > > > Earlier, I ran `./gradlew buildHomeDir` and modified the Vagrantfile
> > > > to point to my local build directory of Amaterasu.
> > > >
> > > > Cheers,
> > > > Arun
> > > >
> > > >
> > > >
> > > > [vagrant@node1 ama]$ ./ama-start.sh --repo="
> > > > https://github.com/arunma/amaterasu-v2-demo.git" --branch="master"
> > > > --env="test" --report="code"
> > > > serving amaterasu from /ama/lib on user supplied port
> > > > ./ama-start.sh: line 29: popd: directory stack empty
> > > >
> > > >
> > > >                                              /\
> > > >              /  \ /\
> > > >             / /\ /  \
> > > >       _                 _                 / /  / /\ \
> > > >      /_\   _ __   __ _ | |_  ___  _ _  __(_( _(_(_ )_)
> > > >     / _ \ | '  \ / _` ||  _|/ -_)| '_|/ _` |(_-<| || |
> > > >    /_/ \_\|_|_|_|\__,_| \__|\___||_|  \__,_|/__/ \_,_|
> > > >
> > > >     Continuously deployed data pipelines
> > > >     Version 0.2.0-incubating
> > > >
> > > >
> > > > repo: https://github.com/arunma/amaterasu-v2-demo.git
> > > > java -cp ./bin/leader-0.2.0-incubating-all.jar
> > > > -Djava.library.path=/usr/lib
> > > > org.apache.amaterasu.leader.mesos.JobLauncher --home . --repo
> > > > https://github.com/arunma/amaterasu-v2-demo.git --branch master
> --env
> > > test
> > > > --report code
> > > > 2017-09-30 09:22:46.184:INFO::main: Logging initialized @688ms
> > > > 2017-09-30 09:22:46.262:INFO:oejs.Server:main: jetty-9.2.z-SNAPSHOT
> > > > 2017-09-30 09:22:46.300:INFO:oejsh.ContextHandler:main: Started
> > > > o.e.j.s.ServletContextHandler@385e9564{/,file:/ama/dist/,AVAILABLE}
> > > > 2017-09-30 09:22:46.317:INFO:oejs.ServerConnector:main: Started
> > > > ServerConnector@1dac5ef{HTTP/1.1}{0.0.0.0:8000}
> > > > 2017-09-30 09:22:46.317:INFO:oejs.Server:main: Started @822ms
> > > > SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> > > > SLF4J: Defaulting to no-operation (NOP) logger implementation
> > > > SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for
> > > further
> > > > details.
> > > > I0930 09:22:46.591053  5481 sched.cpp:232] Version: 1.4.0
> > > > I0930 09:22:46.592723  5514 sched.cpp:336] New master detected at
> > > > master@192.168.33.11:5050
> > > > I0930 09:22:46.593138  5514 sched.cpp:352] No credentials provided.
> > > > Attempting to register without authentication
> > > > I0930 09:22:46.596042  5511 sched.cpp:759] Framework registered with
> > > > a82be91b-d3fb-4ccd-9dc3-145f60ac8316-0000
> > > > ===> Executor 0000000002-6bfdbb03-88b7-44af-9cd0-2a12ddda7b62
> > registered
> > > > ===> a provider for group spark was created
> > > > ===> launching task: 0000000002
> > > > ===> ================= started action start =================
> > > > ===> val data = 1 to 1000
> > > > ===> val rdd = sc.parallelize(data)
> > > > ===> val odd = rdd.filter(n => n%2 != 0).toDF("number")
> > > > ===> ================= finished action start =================
> > > > ===> complete task: 0000000002
> > > > ===> launching task: 0000000003
> > > > ===> Executor 0000000003-6f7eb9e9-8977-4d47-906e-e0802bb7ffe8
> > registered
> > > > ===> a provider for group spark was created
> > > > ===> launching task: 0000000003
> > > > ===> Executor 0000000003-605f3b31-e15e-482d-8c93-737f1605e9f6
> > registered
> > > > ===> a provider for group spark was created
> > > > ===> launching task: 0000000003
> > > > ===> moving to err action null
> > > > 2017-09-30 09:24:07.355:INFO:oejs.ServerConnector:Thread-59: Stopped
> > > > ServerConnector@1dac5ef{HTTP/1.1}{0.0.0.0:8000}
> > > > 2017-09-30 09:24:07.359:INFO:oejsh.ContextHandler:Thread-59: Stopped
> > > > o.e.j.s.ServletContextHandler@385e9564
> {/,file:/ama/dist/,UNAVAILABLE}
> > > > I0930 09:24:07.361192  5512 sched.cpp:2021] Asked to stop the driver
> > > > I0930 09:24:07.361409  5512 sched.cpp:1203] Stopping framework
> > > > a82be91b-d3fb-4ccd-9dc3-145f60ac8316-0000
> > > > kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec
> ...
> > > or
> > > > kill -l [sigspec]
> > > >
> > > >
> > > > W00t amaterasu job is finished!!!
> > > >
> > >
> >
>
>
>
> --
> Yaniv Rodenski
>
