accumulo-dev mailing list archives

From Billie Rinaldi <billie.rina...@gmail.com>
Subject Re: Current Work on Accumulo in Hoya
Date Wed, 04 Dec 2013 21:07:41 GMT
The accumulo script requires the conf files to be present.  But if you have
some conf files, you can then connect the shell to any instance with the -z
flag.  We could consider having a client script with fewer requirements.

I tried it with just the accumulo-env.sh file, and it worked but ate all
the log messages (so you couldn't see what was going on when there were
errors).  I'd recommend dropping in log4j.properties too.
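A sketch of that minimal client conf (the `$HOME/accumulo` default, the instance name, and the zookeeper hosts below are placeholders; the log4j snippet is a generic console config, not the one shipped with Accumulo):

```shell
# Minimal client conf: accumulo-env.sh so the script passes its config
# check, plus a console log4j.properties so errors stay visible.
ACCUMULO_HOME=${ACCUMULO_HOME:-"$HOME/accumulo"}
mkdir -p "$ACCUMULO_HOME/conf"
cat > "$ACCUMULO_HOME/conf/log4j.properties" <<'EOF'
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} [%c] %-5p: %m%n
EOF
# (copy accumulo-env.sh from conf/examples, or run bootstrap_config.sh)
# Then point the shell at any instance with -z:
if [ -x "$ACCUMULO_HOME/bin/accumulo" ]; then
  "$ACCUMULO_HOME/bin/accumulo" shell -u root -z instance zoo1,zoo2,zoo3
fi
```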


On Wed, Dec 4, 2013 at 1:00 PM, Roshan Punnoose <roshanp@gmail.com> wrote:

> I get:
>
> Accumulo is not properly configured.
>
> Try running $ACCUMULO_HOME/bin/bootstrap_config.sh and then editing
> $ACCUMULO_HOME/conf/accumulo-env.sh
>
> My guess is that the conf directory needs to be semi populated with at
> least the accumulo-env.sh?
>
>
> On Wed, Dec 4, 2013 at 3:40 PM, Eric Newton <eric.newton@gmail.com> wrote:
>
> > use the "-z" option:
> >
> > > $ ./bin/accumulo shell -u root -z instance zoo1,zoo2,zoo3
> >
> > -Eric
> >
> >
> > On Wed, Dec 4, 2013 at 3:13 PM, Roshan Punnoose <roshanp@gmail.com>
> wrote:
> >
> > > This is cool. I couldn't get it working with 1.5.0, but 1.7.0-SNAPSHOT
> > > worked perfectly. (I'll probably just downgrade sometime soon, or
> > > wait for a release)
> > >
> > > I had to add this property to the hoya-client.xml to get it to look
> > > for the hadoop/zookeeper jars in the right places. (Though that
> > > property seems to already be set in the yarn-site.xml):
> > > <property>
> > >   <name>yarn.application.classpath</name>
> > >   <value>/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/hadoop/lib/*,/usr/lib/hadoop-hdfs/*,/usr/lib/hadoop-hdfs/lib/*,/usr/lib/hadoop-yarn/*,/usr/lib/hadoop-yarn/lib/*,/usr/lib/hadoop-mapreduce/*,/usr/lib/hadoop-mapreduce/lib/*,/usr/lib/zookeeper/*</value>
> > > </property>
> > >
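As an aside for anyone hitting similar classpath problems: every directory named in that value has to exist on each node. A quick sanity check (hypothetical function name; the list mirrors the property above with the glob suffixes dropped):

```shell
# Check that each directory named in yarn.application.classpath exists
# on this node, printing "ok" or "MISSING" per entry.
check_classpath_dirs() {
  for d in /etc/hadoop/conf /usr/lib/hadoop /usr/lib/hadoop/lib \
           /usr/lib/hadoop-hdfs /usr/lib/hadoop-yarn \
           /usr/lib/hadoop-mapreduce /usr/lib/zookeeper; do
    if [ -d "$d" ]; then echo "ok      $d"; else echo "MISSING $d"; fi
  done
}
check_classpath_dirs
```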
> > > Also, any ideas on how to get the shell connected to it without a conf
> > > directory? I can just use the generated conf with the shell for now.
> > >
> > > Roshan
> > >
> > >
> > > On Wed, Dec 4, 2013 at 11:25 AM, Billie Rinaldi <billie.rinaldi@gmail.com> wrote:
> > >
> > > > Interesting, let us know if having the conf populated in the
> > > > tarball makes a difference.  I'd recommend using 1.5.1-SNAPSHOT, by
> > > > the way.  1.5.0 processes don't return proper exit codes when there
> > > > are errors.
> > > >
> > > >
> > > > On Wed, Dec 4, 2013 at 8:19 AM, Roshan Punnoose <roshanp@gmail.com> wrote:
> > > >
> > > > > I was able to get most of the way there. Turning off the log
> > > > > aggregation helped a lot; the forked exceptions were not getting
> > > > > to the aggregated TFile in HDFS.
> > > > >
> > > > > I am trying to run accumulo 1.5.0 and for some reason, the
> > > > > propagatedConf that Hoya generates is not getting loaded during
> > > > > the accumulo initialize phase. I think it has to do with the fact
> > > > > that I already have a populated conf directory (with a sample
> > > > > accumulo-site.xml) in the accumulo image I am sending. I'm going
> > > > > to try to build a new accumulo image from source and try again
> > > > > with Hoya 0.7.0. The error I am seeing makes it seem like the
> > > > > Accumulo Initialize is not looking at the propagatedConf
> > > > > "instance.dfs.dir" property but using the default to put the data
> > > > > in "/accumulo" in HDFS.
> > > > >
> > > > > Will keep trying. Thanks for the help!
> > > > >
> > > > >
> > > > > On Wed, Dec 4, 2013 at 4:13 AM, Steve Loughran <stevel@hortonworks.com> wrote:
> > > > >
> > > > > > The forked code goes into the AM logs, as it's just a forked
> > > > > > run of {{accumulo init}} to set up the file structure.
> > > > > >
> > > > > > Error code 1 implies accumulo didn't want to start, which could
> > > > > > be from some environment problem -it needs to know where ZK home
> > > > > > as well as hadoop home are. We set those up before running
> > > > > > accumulo, but they do need to be passed down to the cluster
> > > > > > config (which is then validated to see that they are defined and
> > > > > > point to a local directory -but we don't look in the directory
> > > > > > to see if they have all the JARs the accumulo launcher expects).
> > > > > >
> > > > > > If you can, try to do this with kerberos off first. Kerberos
> > > > > > complicates things.
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > On 3 December 2013 23:57, Roshan Punnoose <roshanp@gmail.com> wrote:
> > > > > >
> > > > > > > I am now getting an exception when Hoya tries to initialize
> > > > > > > the accumulo cluster:
> > > > > > >
> > > > > > > Service accumulo failed in state STARTED; cause:
> > > > > > > org.apache.hadoop.yarn.service.launcher.ServiceLaunchException:
> > > > > > > accumulo failed with code 1
> > > > > > > org.apache.hadoop.yarn.service.launcher.ServiceLaunchException:
> > > > > > > accumulo failed with code 1
> > > > > > > at org.apache.hadoop.hoya.yarn.service.ForkedProcessService.reportFailure(ForkedProcessService.java:162)
> > > > > > >
> > > > > > > Any ideas as to where logs of a Forked process may go in Yarn?
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Dec 3, 2013 at 4:24 PM, Roshan Punnoose <roshanp@gmail.com> wrote:
> > > > > > >
> > > > > > > > Ah never mind. Got further. Basically, I had specified
> > > > > > > > the yarn.resourcemanager.address to use the resourcemanager
> > > > > > > > scheduler port by mistake. Using the proper port got me
> > > > > > > > further. Thanks!
> > > > > > > >
> > > > > > > >
> > > > > > > > On Tue, Dec 3, 2013 at 4:17 PM, Roshan Punnoose <roshanp@gmail.com> wrote:
> > > > > > > >
> > > > > > > >> Yeah, it seems to be honoring the kinit cache properly and
> > > > > > > >> retrieving the correct kerberos ticket for validation.
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> On Tue, Dec 3, 2013 at 4:02 PM, Billie Rinaldi <billie.rinaldi@gmail.com> wrote:
> > > > > > > >>
> > > > > > > >>> I haven't tried that out yet.  Were you following the
> > > > > > > >>> instructions at
> > > > > > > >>> https://github.com/hortonworks/hoya/blob/master/src/site/markdown/security.md
> > > > > > > >>> ?
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>> On Tue, Dec 3, 2013 at 12:46 PM, Roshan Punnoose <roshanp@gmail.com> wrote:
> > > > > > > >>>
> > > > > > > >>> > I am trying to run Hoya on a Kerberos Secure cluster. I
> > > > > > > >>> > believe I have all the keytabs in place, and have been
> > > > > > > >>> > able to run mapreduce jobs with my user, etc. However,
> > > > > > > >>> > when I run the "hoya create" command I get this
> > > > > > > >>> > exception:
> > > > > > > >>> >
> > > > > > > >>> > org.apache.hadoop.security.AccessControlException:
> > > > > > > >>> > Client cannot authenticate via:[TOKEN]
> > > > > > > >>> > at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:170)
> > > > > > > >>> >
> > > > > > > >>> > I thought that Hoya should be using Kerberos instead of
> > > > > > > >>> > the TOKEN.
> > > > > > > >>> >
> > > > > > > >>> > Also noticed that the SASL NEGOTIATE is responding with
> > > > > > > >>> > "TOKEN" as well:
> > > > > > > >>> >
> > > > > > > >>> > 2013-12-03 20:45:04,530 [main] DEBUG security.SaslRpcClient - Received SASL
> > > > > > > >>> > message state: NEGOTIATE
> > > > > > > >>> > auths {
> > > > > > > >>> >   method: "TOKEN"
> > > > > > > >>> >   mechanism: "DIGEST-MD5"
> > > > > > > >>> >   protocol: ""
> > > > > > > >>> >   serverId: "default"
> > > > > > > >>> > }
> > > > > > > >>> >
> > > > > > > >>> > That doesn't seem right either. Is there something I
> > > > > > > >>> > might be missing?
> > > > > > > >>> >
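A TOKEN-only SASL NEGOTIATE usually means one side is not configured for kerberos. One thing worth checking, sketched with a stand-in file since the real one lives at /etc/hadoop/conf/core-site.xml on the client:

```shell
# The property to look for on a secure cluster; a value of "simple"
# would explain a TOKEN-only NEGOTIATE. The temp file here just stands
# in for the client's core-site.xml.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
EOF
grep -A1 'hadoop.security.authentication' "$conf"
# also confirm a live TGT on the client with:  klist
rm -f "$conf"
```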
> > > > > > > >>> >
> > > > > > > >>> > On Fri, Oct 18, 2013 at 12:28 PM, Roshan Punnoose <roshanp@gmail.com> wrote:
> > > > > > > >>> >
> > > > > > > >>> > > Yeah I noticed the git-flow style branching. Pretty
> > > > > > > >>> > > cool.
> > > > > > > >>> > >
> > > > > > > >>> > >
> > > > > > > >>> > > On Fri, Oct 18, 2013 at 12:22 PM, Ted Yu <yuzhihong@gmail.com> wrote:
> > > > > > > >>> > >
> > > > > > > >>> > >> Roshan:
> > > > > > > >>> > >> FYI
> > > > > > > >>> > >> The develop branch of Hoya repo should be more
> > > > > > > >>> > >> up-to-date.
> > > > > > > >>> > >>
> > > > > > > >>> > >> Cheers
> > > > > > > >>> > >>
> > > > > > > >>> > >>
> > > > > > > >>> > >> On Fri, Oct 18, 2013 at 8:33 AM, Billie Rinaldi <billie.rinaldi@gmail.com> wrote:
> > > > > > > >>> > >>
> > > > > > > >>> > >> > Adding --debug to the command may print out more
> > > > > > > >>> > >> > things as well.  Also, the start-up is not
> > > > > > > >>> > >> > instantaneous.  In the Yarn logs, you should see at
> > > > > > > >>> > >> > first one container under the application (e.g.
> > > > > > > >>> > >> > logs/userlogs/application_1381800165150_0014/container_1381800165150_0014_01_000001)
> > > > > > > >>> > >> > and its out.txt will contain information about the
> > > > > > > >>> > >> > initialization process.  If that goes well, it will
> > > > > > > >>> > >> > start up containers for the other processes.
> > > > > > > >>> > >> >
> > > > > > > >>> > >> >
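For anyone else hunting for these files, a sketch (the application id is copied from the example above; the log directory is a common default and depends on yarn.nodemanager.log-dirs):

```shell
# With log aggregation off, container logs live on the NodeManager's
# local disk; out.txt in the first container records the forked
# "accumulo init" run.
APP=application_1381800165150_0014
LOGDIR=/var/log/hadoop-yarn/userlogs/$APP
if [ -d "$LOGDIR" ]; then
  ls "$LOGDIR"/container_*_000001/
else
  echo "no local logs for $APP on this node"
fi
# With aggregation on, fetch them from HDFS via the yarn CLI instead:
#   yarn logs -applicationId $APP
```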
> > > > > > > >>> > >> > On Fri, Oct 18, 2013 at 8:20 AM, Roshan Punnoose <roshanp@gmail.com> wrote:
> > > > > > > >>> > >> >
> > > > > > > >>> > >> > > Ah ok, will check the logs. When the create
> > > > > > > >>> > >> > > command did not seem to do anything, I assumed it
> > > > > > > >>> > >> > > was just initializing the cluster.json descriptor
> > > > > > > >>> > >> > > in hdfs.
> > > > > > > >>> > >> > >
> > > > > > > >>> > >> > >
> > > > > > > >>> > >> > > On Fri, Oct 18, 2013 at 11:15 AM, Billie Rinaldi <billie.rinaldi@gmail.com> wrote:
> > > > > > > >>> > >> > >
> > > > > > > >>> > >> > > > Sounds like we should plan a meetup.  The
> > > > > > > >>> > >> > > > examples page [1] has an example create command
> > > > > > > >>> > >> > > > to use for Accumulo (it requires a few more
> > > > > > > >>> > >> > > > options than the HBase create command).  After
> > > > > > > >>> > >> > > > that your instance should be up and running.
> > > > > > > >>> > >> > > > If not, look in the Yarn application logs to
> > > > > > > >>> > >> > > > see what's going wrong.  I haven't tried
> > > > > > > >>> > >> > > > freezing and thawing an instance yet, just
> > > > > > > >>> > >> > > > freezing and destroying to clean up.  I've
> > > > > > > >>> > >> > > > noticed freezing leaves some of the processes
> > > > > > > >>> > >> > > > running, but this is probably because I'm
> > > > > > > >>> > >> > > > supposed to be testing on Linux instead of OS X.
> > > > > > > >>> > >> > > >
> > > > > > > >>> > >> > > > [1]:
> > > > > > > >>> > >> > > > https://github.com/hortonworks/hoya/blob/develop/src/site/markdown/examples.md
> > > > > > > >>> > >> > > >
> > > > > > > >>> > >> > > >
> > > > > > > >>> > >> > > > On Fri, Oct 18, 2013 at 7:58 AM, Roshan Punnoose <roshanp@gmail.com> wrote:
> > > > > > > >>> > >> > > >
> > > > > > > >>> > >> > > > > I would be very interested in looking into
> > > > > > > >>> > >> > > > > Hoya as well. I pulled down the code and got
> > > > > > > >>> > >> > > > > as far as being able to create the accumulo
> > > > > > > >>> > >> > > > > cluster descriptor through the "hoya create"
> > > > > > > >>> > >> > > > > command. When I tried the "hoya thaw" nothing
> > > > > > > >>> > >> > > > > seemed to happen. Still debugging, but it
> > > > > > > >>> > >> > > > > would be very useful to see a quick tutorial
> > > > > > > >>> > >> > > > > on the usage over google+ if possible. Thanks!
> > > > > > > >>> > >> > > > >
> > > > > > > >>> > >> > > > >
> > > > > > > >>> > >> > > > > On Fri, Oct 18, 2013 at 10:35 AM, Steve Loughran <stevel@hortonworks.com> wrote:
> > > > > > > >>> > >> > > > >
> > > > > > > >>> > >> > > > > > Hi, I'm working on it, with Billie helping
> > > > > > > >>> > >> > > > > > on accumulo specifics & testing
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > >    1. The code is up on github:
> > > > > > > >>> > >> > > > > >    https://github.com/hortonworks/hoya.
> > > > > > > >>> > >> > > > > >    What we don't have is any good issue
> > > > > > > >>> > >> > > > > >    tracking -I'm using our internal JIRA
> > > > > > > >>> > >> > > > > >    server for that, which is bad as it
> > > > > > > >>> > >> > > > > >    keeps the project less open -and loses
> > > > > > > >>> > >> > > > > >    decision history
> > > > > > > >>> > >> > > > > >    2. We're on a two week sprint cycle; the
> > > > > > > >>> > >> > > > > >    next one ends on monday with another
> > > > > > > >>> > >> > > > > >    release coming out -focus on secure
> > > > > > > >>> > >> > > > > >    cluster support and better liveness
> > > > > > > >>> > >> > > > > >    monitoring
> > > > > > > >>> > >> > > > > >    3. Billie has been deploying accumulo
> > > > > > > >>> > >> > > > > >    with it.
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > > We're doing development with the focus on
> > > > > > > >>> > >> > > > > > hbase, though it is designed to have
> > > > > > > >>> > >> > > > > > different back ends ("providers") -the
> > > > > > > >>> > >> > > > > > accumulo one is the alternative, and most
> > > > > > > >>> > >> > > > > > of what we do benefits both of them; it's
> > > > > > > >>> > >> > > > > > just that we are testing more with HBase
> > > > > > > >>> > >> > > > > > and adding some features (liveness probes)
> > > > > > > >>> > >> > > > > > there first.
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > > If we could get broader participation,
> > > > > > > >>> > >> > > > > > that would help with the accumulo testing
> > > > > > > >>> > >> > > > > > and mean that we could put it into the
> > > > > > > >>> > >> > > > > > Apache Incubation process -they insist on
> > > > > > > >>> > >> > > > > > that broadness by the time you get out of
> > > > > > > >>> > >> > > > > > incubation.
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > > -contact me if you want to know more
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > > -I'd be happy to do a remote presentation
> > > > > > > >>> > >> > > > > > of hoya over google+ or webex.
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > > I'm actually trying to set up a
> > > > > > > >>> > >> > > > > > remote-only-YARN-HUG group between the US,
> > > > > > > >>> > >> > > > > > EU and Asia, where we'd have remote-only
> > > > > > > >>> > >> > > > > > sessions in different timezones; Hoya would
> > > > > > > >>> > >> > > > > > be one of the topics.
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > > On 17 October 2013 20:07, Ed Kohlwey <ekohlwey@gmail.com> wrote:
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > > >
I was wondering if anyone knows what the
> > > current
> > > > > > > status
> > > > > > > >>> of
> > > > > > > >>> > >> > Accumulo
> > > > > > > >>> > >> > > > in
> > > > > > > >>> > >> > > > > > Hoya
> > > > > > > >>> > >> > > > > > >
is. We're really interested in running
> > > Accumulo
> > > > in
> > > > > > > Yarn
> > > > > > > >>> in a
> > > > > > > >>> > >> > > > production
> > > > > > > >>> > >> > > > > > >
environment and helping to mature the
> > project
> > > to
> > > > > the
> > > > > > > >>> point
> > > > > > > >>> > >> that
> > > > > > > >>> > >> > we
> > > > > > > >>> > >> > > > > could
> > > > > > > >>> > >> > > > > > do
> > > > > > > >>> > >> > > > > > >
so.
> > > > > > > >>> > >> > > > > > >
> > > > > > > >>> > >> > > > > > >
Are the current issues mostly around
> testing
> > > or
> > > > > are
> > > > > > > >>> there
> > > > > > > >>> > some
> > > > > > > >>> > >> > > known
> > > > > > > >>> > >> > > > > > issues
> > > > > > > >>> > >> > > > > > >
already? Would it help to be able to run
> > over
> > > a
> > > > > > large
> > > > > > > >>> > cluster
> > > > > > > >>> > >> or
> > > > > > > >>> > >> > is
> > > > > > > >>> > >> > > > > there
> > > > > > > >>> > >> > > > > > >
some additional development that needs to
> be
> > > > done?
> > > > > > > >>> > >> > > > > > >
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > > > --
> > > > > > > >>> > >> > > > > > CONFIDENTIALITY NOTICE
> > > > > > > >>> > >> > > > > > NOTICE: This message is intended for the
> > > > > > > >>> > >> > > > > > use of the individual or entity to which it
> > > > > > > >>> > >> > > > > > is addressed and may contain information
> > > > > > > >>> > >> > > > > > that is confidential, privileged and exempt
> > > > > > > >>> > >> > > > > > from disclosure under applicable law. If
> > > > > > > >>> > >> > > > > > the reader of this message is not the
> > > > > > > >>> > >> > > > > > intended recipient, you are hereby notified
> > > > > > > >>> > >> > > > > > that any printing, copying, dissemination,
> > > > > > > >>> > >> > > > > > distribution, disclosure or forwarding of
> > > > > > > >>> > >> > > > > > this communication is strictly prohibited.
> > > > > > > >>> > >> > > > > > If you have received this communication in
> > > > > > > >>> > >> > > > > > error, please contact the sender
> > > > > > > >>> > >> > > > > > immediately and delete it from your system.
> > > > > > > >>> > >> > > > > > Thank You.
> > > > > > > >>> > >> > > > > >
> > > > > > > >>> > >> > > > >
> > > > > > > >>> > >> > > >
> > > > > > > >>> > >> > >
> > > > > > > >>> > >> >
> > > > > > > >>> > >>
> > > > > > > >>> > >
> > > > > > > >>> > >
> > > > > > > >>> >
> > > > > > > >>>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
