accumulo-dev mailing list archives

From Steve Loughran <ste...@hortonworks.com>
Subject Re: Current Work on Accumulo in Hoya
Date Wed, 04 Dec 2013 09:13:22 GMT
The forked code goes into the AM logs, as it's just a forked run of
{{accumulo init}} to set up the file structure.
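If log aggregation is enabled, those AM logs can also be pulled back from the
command line; the application id here is illustrative -- use the one the RM
assigned to your Hoya instance:

    # fetches all container logs for the app, the AM container included
    yarn logs -applicationId application_1381800165150_0014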

Error code 1 implies accumulo didn't want to start, which could be from
some environment problem -it needs to know where ZK home and hadoop home
are. We set those up before running accumulo, but they do need to be
passed down to the cluster config (which is then validated to check that
they are defined and point to a local directory -but we don't look inside
the directories to see that they have all the JARs the accumulo launcher
expects).
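As a minimal sketch of that environment (the variable names below are the
conventional ones and are assumptions here, not checked against the Hoya
source):

    # paths are illustrative; point these at real local installs
    export HADOOP_HOME=/usr/lib/hadoop        # must exist as a local directory
    export ZOOKEEPER_HOME=/usr/lib/zookeeper  # likewise
    # per the above, only the directories' existence is validated; missing
    # JARs inside them won't surface until accumulo itself fails to launch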

If you can, try to do this with kerberos off first. Kerberos complicates
things.




On 3 December 2013 23:57, Roshan Punnoose <roshanp@gmail.com> wrote:

> I am now getting an exception when Hoya tries to initialize the accumulo
> cluster:
>
> Service accumulo failed in state STARTED; cause:
> org.apache.hadoop.yarn.service.launcher.ServiceLaunchException: accumulo
> failed with code 1
> org.apache.hadoop.yarn.service.launcher.ServiceLaunchException: accumulo
> failed with code 1
> at org.apache.hadoop.hoya.yarn.service.ForkedProcessService.reportFailure(ForkedProcessService.java:162)
>
> Any ideas as to where logs of a Forked process may go in Yarn?
>
>
> On Tue, Dec 3, 2013 at 4:24 PM, Roshan Punnoose <roshanp@gmail.com> wrote:
>
> > Ah never mind. Got further. Basically, I had specified the
> > yarn.resourcemanager.address to use the resourcemanager scheduler port
> > by mistake. Using the proper port got me further. Thanks!
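For anyone hitting the same mix-up: the RM's client RPC endpoint and its
scheduler endpoint are separate addresses with different default ports, 8032
and 8030 respectively. A quick sanity check (the config path is an
assumption; adjust for your layout):

    # hoya should be pointed at yarn.resourcemanager.address (default 8032),
    # not yarn.resourcemanager.scheduler.address (default 8030)
    grep -A1 'yarn.resourcemanager' /etc/hadoop/conf/yarn-site.xml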
> >
> >
> > On Tue, Dec 3, 2013 at 4:17 PM, Roshan Punnoose <roshanp@gmail.com> wrote:
> >
> >> Yeah, it seems to be honoring the kinit cache properly and retrieving
> >> the correct kerberos ticket for validation.
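For completeness, the ticket cache can be inspected directly with standard
MIT kerberos tooling, nothing Hoya-specific:

    # should show a valid, unexpired krbtgt entry for the user running hoya
    klist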
> >>
> >>
> >> On Tue, Dec 3, 2013 at 4:02 PM, Billie Rinaldi <billie.rinaldi@gmail.com> wrote:
> >>
> >>> I haven't tried that out yet.  Were you following the instructions at
> >>> https://github.com/hortonworks/hoya/blob/master/src/site/markdown/security.md ?
> >>>
> >>>
> >>> On Tue, Dec 3, 2013 at 12:46 PM, Roshan Punnoose <roshanp@gmail.com>
> >>> wrote:
> >>>
> >>> > I am trying to run Hoya on a Kerberos Secure cluster. I believe I have
> >>> > all the keytabs in place, and have been able to run mapreduce jobs with
> >>> > my user, etc. However, when I run the "hoya create" command I get this
> >>> > exception:
> >>> >
> >>> > org.apache.hadoop.security.AccessControlException: Client cannot
> >>> > authenticate via:[TOKEN]
> >>> > at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:170)
> >>> >
> >>> > I thought that Hoya should be using Kerberos instead of the TOKEN.
> >>> >
> >>> > Also noticed that the SASL NEGOTIATE is responding with "TOKEN" as
> >>> > well:
> >>> >
> >>> > 2013-12-03 20:45:04,530 [main] DEBUG security.SaslRpcClient - Received
> >>> > SASL message state: NEGOTIATE
> >>> > auths {
> >>> >   method: "TOKEN"
> >>> >   mechanism: "DIGEST-MD5"
> >>> >   protocol: ""
> >>> >   serverId: "default"
> >>> > }
> >>> >
> >>> > That doesn't seem right either. Is there something I might be missing?
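One hedged thing to check when the server offers only TOKEN/DIGEST-MD5 is
that the client side really is configured for kerberos. The first command
reads whatever core-site.xml the client loads; the config path in the second
is an assumption:

    # should print "kerberos" on a secured client
    hdfs getconf -confKey hadoop.security.authentication
    # the RM principal should also be defined for kerberized YARN RPC
    grep -A1 'yarn.resourcemanager.principal' /etc/hadoop/conf/yarn-site.xml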
> >>> >
> >>> >
> >>> > On Fri, Oct 18, 2013 at 12:28 PM, Roshan Punnoose <roshanp@gmail.com> wrote:
> >>> >
> >>> > > Yeah I noticed the git-flow style branching. Pretty cool.
> >>> > >
> >>> > >
> >>> > > On Fri, Oct 18, 2013 at 12:22 PM, Ted Yu <yuzhihong@gmail.com> wrote:
> >>> > >
> >>> > >> Roshan:
> >>> > >> FYI
> >>> > >> The develop branch of Hoya repo should be more up-to-date.
> >>> > >>
> >>> > >> Cheers
> >>> > >>
> >>> > >>
> >>> > >> > On Fri, Oct 18, 2013 at 8:33 AM, Billie Rinaldi <billie.rinaldi@gmail.com> wrote:
> >>> > >>
> >>> > >> > Adding --debug to the command may print out more things as well.
> >>> > >> > Also, the start-up is not instantaneous.  In the Yarn logs, you
> >>> > >> > should see at first one container under the application (e.g.
> >>> > >> > logs/userlogs/application_1381800165150_0014/container_1381800165150_0014_01_000001)
> >>> > >> > and its out.txt will contain information about the initialization
> >>> > >> > process.  If that goes well, it will start up containers for the
> >>> > >> > other processes.
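Concretely, with the ids from the example above standing in for whatever your
application is assigned:

    # watch the AM container's output while the cluster initializes
    tail -f logs/userlogs/application_1381800165150_0014/container_1381800165150_0014_01_000001/out.txt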
> >>> > >> >
> >>> > >> >
> >>> > >> > On Fri, Oct 18, 2013 at 8:20 AM, Roshan Punnoose <roshanp@gmail.com> wrote:
> >>> > >> >
> >>> > >> > > Ah ok, will check the logs. When the create command did not
> >>> > >> > > seem to do anything, I assumed it was just initializing the
> >>> > >> > > cluster.json descriptor in hdfs.
> >>> > >> > >
> >>> > >> > >
> >>> > >> > > On Fri, Oct 18, 2013 at 11:15 AM, Billie Rinaldi <billie.rinaldi@gmail.com> wrote:
> >>> > >> > >
> >>> > >> > > > Sounds like we should plan a meetup.  The examples page [1]
> >>> > >> > > > has an example create command to use for Accumulo (it
> >>> > >> > > > requires a few more options than the HBase create command).
> >>> > >> > > > After that your instance should be up and running.  If not,
> >>> > >> > > > look in the Yarn application logs to see what's going wrong.
> >>> > >> > > > I haven't tried freezing and thawing an instance yet, just
> >>> > >> > > > freezing and destroying to clean up.  I've noticed freezing
> >>> > >> > > > leaves some of the processes running, but this is probably
> >>> > >> > > > because I'm supposed to be testing on Linux instead of OS X.
> >>> > >> > > >
> >>> > >> > > > [1]:
> >>> > >> > > > https://github.com/hortonworks/hoya/blob/develop/src/site/markdown/examples.md
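A hedged sketch of that lifecycle, with a made-up cluster name and the create
options elided -- the examples page above has the real accumulo invocation:

    hoya create accluster ...   # accumulo needs a few more options than hbase
    hoya freeze accluster       # stop it (may leave stray processes, per above)
    hoya thaw accluster         # bring the frozen instance back up
    hoya destroy accluster      # after a freeze, clean everything up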
> >>> > >> > > >
> >>> > >> > > >
> >>> > >> > > > On Fri, Oct 18, 2013 at 7:58 AM, Roshan Punnoose <roshanp@gmail.com> wrote:
> >>> > >> > > >
> >>> > >> > > > > I would be very interested in looking into Hoya as well. I
> >>> > >> > > > > pulled down the code and got as far as being able to create
> >>> > >> > > > > the accumulo cluster descriptor through the "hoya create"
> >>> > >> > > > > command. When I tried the "hoya thaw" nothing seemed to
> >>> > >> > > > > happen. Still debugging, but it would be very useful to see
> >>> > >> > > > > a quick tutorial on the usage over google+ if possible.
> >>> > >> > > > > Thanks!
> >>> > >> > > > >
> >>> > >> > > > >
> >>> > >> > > > > On Fri, Oct 18, 2013 at 10:35 AM, Steve Loughran <stevel@hortonworks.com> wrote:
> >>> > >> > > > >
> >>> > >> > > > > > Hi, I'm working on it, with Billie helping on accumulo
> >>> > >> > > > > > specifics & testing.
> >>> > >> > > > > >
> >>> > >> > > > > >    1. The code is up on github:
> >>> > >> > > > > >    https://github.com/hortonworks/hoya. What we don't
> >>> > >> > > > > >    have is any good issue tracking -I'm using our internal
> >>> > >> > > > > >    JIRA server for that, which is bad as it keeps the
> >>> > >> > > > > >    project less open -and loses decision history.
> >>> > >> > > > > >    2. We're on a two-week sprint cycle; the next one ends
> >>> > >> > > > > >    on Monday with another release coming out -focus on
> >>> > >> > > > > >    secure cluster support and better liveness monitoring.
> >>> > >> > > > > >    3. Billie has been deploying accumulo with it.
> >>> > >> > > > > >
> >>> > >> > > > > > We're doing development with the focus on HBase, though
> >>> > >> > > > > > it is designed to have different back-end "providers"
> >>> > >> > > > > > -the accumulo one is the alternative, and most of what we
> >>> > >> > > > > > do benefits both of them; it's just that we are testing
> >>> > >> > > > > > more with HBase and adding some features (liveness
> >>> > >> > > > > > probes) there first.
> >>> > >> > > > > >
> >>> > >> > > > > > If we could get broader participation, that would help
> >>> > >> > > > > > with the accumulo testing and mean that we could put it
> >>> > >> > > > > > into the Apache Incubation process -they insist on that
> >>> > >> > > > > > broadness by the time you get out of incubation.
> >>> > >> > > > > >
> >>> > >> > > > > > -contact me if you want to know more
> >>> > >> > > > > >
> >>> > >> > > > > > -I'd be happy to do a remote presentation of Hoya over
> >>> > >> > > > > > google+ or webex.
> >>> > >> > > > > >
> >>> > >> > > > > > I'm actually trying to set up a remote-only YARN HUG
> >>> > >> > > > > > group between the US, EU and Asia, where we'd have
> >>> > >> > > > > > remote-only sessions in different timezones; Hoya would
> >>> > >> > > > > > be one of the topics.
> >>> > >> > > > > >
> >>> > >> > > > > >
> >>> > >> > > > > >
> >>> > >> > > > > > On 17 October 2013 20:07, Ed Kohlwey <ekohlwey@gmail.com> wrote:
> >>> > >> > > > > >
> >>> > >> > > > > > > I was wondering if anyone knows what the current status
> >>> > >> > > > > > > of Accumulo in Hoya is. We're really interested in
> >>> > >> > > > > > > running Accumulo in Yarn in a production environment
> >>> > >> > > > > > > and helping to mature the project to the point that we
> >>> > >> > > > > > > could do so.
> >>> > >> > > > > > >
> >>> > >> > > > > > > Are the current issues mostly around testing or are
> >>> > >> > > > > > > there some known issues already? Would it help to be
> >>> > >> > > > > > > able to run over a large cluster or is there some
> >>> > >> > > > > > > additional development that needs to be done?
> >>> > >> > > > > > >
> >>> > >> > > > > >

