mesos-user mailing list archives

From Tim St Clair <tstcl...@redhat.com>
Subject Tachyon on Mesos
Date Mon, 25 Aug 2014 15:17:46 GMT
Connor - 

Awesome! 

+1 re: java + PR to tachyon repo 

I should have some time later this week to eval; I'm curious how you handled the mounting details. 

Cheers, 
Tim 

----- Original Message -----

> From: "Connor Doyle" <connor.p.d@gmail.com>
> To: user@mesos.apache.org
> Cc: "Haoyuan Li" <haoyuan.li@gmail.com>, "Huamin Chen" <hchen@redhat.com>,
> "Brad Childs" <bchilds@redhat.com>, "Adam Bordelon" <adam@mesosphere.io>
> Sent: Sunday, August 24, 2014 6:46:46 PM
> Subject: Re: Alternate HDFS Filesystems + Hadoop on Mesos

> > Also, fwiw I'm interested in rallying folks on a Tachyon Framework in the
> > not-too-distant future, for anyone who is interested. Probably follow the
> > spark model and try to push upstream.

> Hi Tim, late follow-up:

> The not-too-distant future is here! Adam and I took a stab at a Tachyon
> framework during the MesosCon hackathon
> ( http://github.com/mesosphere/tachyon-mesos ).
> We started writing in Scala, but we're not at all opposed to switching to
> Java, especially if the work can be upstreamed.
> --
> Connor

> > > On Fri, Aug 15, 2014 at 5:16 PM, John Omernik < john@omernik.com > wrote:

> > > > I tried hdfs:/// and hdfs://cldbnode:7222/. Neither worked (examples
> > > > below). I really think the hdfs vs. other prefixes should be looked at.
> > > > Like I said above, the tachyon project just added an env variable to
> > > > address this.

> > > > hdfs://cldbnode:7222/

> > > > WARNING: Logging before InitGoogleLogging() is written to STDERR
> > > > I0815 19:14:17.101666 22022 fetcher.cpp:76] Fetching URI
> > > > 'hdfs://hadoopmapr1:7222/mesos/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > I0815 19:14:17.101780 22022 fetcher.cpp:105] Downloading resource from
> > > > 'hdfs://hadoopmapr1:7222/mesos/hadoop-0.20.2-mapr-4.0.0.tgz' to
> > > > '/tmp/mesos/slaves/20140815-103603-1677764800-5050-24315-2/frameworks/20140815-154511-1677764800-5050-7162-0003/executors/executor_Task_Tracker_5/runs/b3174e72-75ea-48be-bbb8-a9a6cc605018/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > E0815 19:14:17.778833 22022 fetcher.cpp:109] HDFS copyToLocal failed:
> > > > hadoop fs -copyToLocal 'hdfs://hadoopmapr1:7222/mesos/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > '/tmp/mesos/slaves/20140815-103603-1677764800-5050-24315-2/frameworks/20140815-154511-1677764800-5050-7162-0003/executors/executor_Task_Tracker_5/runs/b3174e72-75ea-48be-bbb8-a9a6cc605018/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use
> > > > org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> > > > -copyToLocal: Wrong FS: maprfs://hadoopmapr1:7222/mesos/hadoop-0.20.2-mapr-4.0.0.tgz,
> > > > expected: hdfs://hadoopmapr1:7222/mesos/hadoop-0.20.2-mapr-4.0.0.tgz
> > > > Usage: hadoop fs [generic options] -copyToLocal [-p] [-ignoreCrc] [-crc]
> > > > <src> ... <localdst>
> > > > Failed to fetch: hdfs://hadoopmapr1:7222/mesos/hadoop-0.20.2-mapr-4.0.0.tgz
> > > > Failed to synchronize with slave (it's probably exited)

> > > > hdfs:///

> > > > I0815 19:10:45.006803 21508 fetcher.cpp:76] Fetching URI
> > > > 'hdfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > I0815 19:10:45.007099 21508 fetcher.cpp:105] Downloading resource from
> > > > 'hdfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz' to
> > > > '/tmp/mesos/slaves/20140815-103603-1677764800-5050-24315-2/frameworks/20140815-154511-1677764800-5050-7162-0002/executors/executor_Task_Tracker_2/runs/22689054-aff6-4f7c-9746-a068a11ff000/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > E0815 19:10:45.681922 21508 fetcher.cpp:109] HDFS copyToLocal failed:
> > > > hadoop fs -copyToLocal 'hdfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > '/tmp/mesos/slaves/20140815-103603-1677764800-5050-24315-2/frameworks/20140815-154511-1677764800-5050-7162-0002/executors/executor_Task_Tracker_2/runs/22689054-aff6-4f7c-9746-a068a11ff000/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use
> > > > org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
> > > > -copyToLocal: Wrong FS: maprfs:/mesos/hadoop-0.20.2-mapr-4.0.0.tgz,
> > > > expected: hdfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz
> > > > Usage: hadoop fs [generic options] -copyToLocal [-p] [-ignoreCrc] [-crc]
> > > > <src> ... <localdst>
> > > > Failed to fetch: hdfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz
> > > > Failed to synchronize with slave (it's probably exited)
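
The "Wrong FS" lines above come from Hadoop's FileSystem path check: a FileSystem instance only accepts paths whose scheme and authority match its own URI, and here the MapR client ends up with mismatched maprfs/hdfs schemes and rejects the copy. Below is a minimal, generic Java illustration of that check; it is not the MapR-specific case from the log, and the namenode host and local paths are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WrongFsDemo {
        public static void main(String[] args) throws Exception {
            // Stock configuration: the default filesystem is the local file:/// fs.
            Configuration conf = new Configuration();
            FileSystem defaultFs = FileSystem.get(conf);

            // A path whose scheme does not match the filesystem asked to handle it
            // (placeholder host).
            Path src = new Path("hdfs://namenode:8020/mesos/hadoop.tgz");

            try {
                defaultFs.copyToLocalFile(src, new Path("/tmp/hadoop.tgz"));
            } catch (IllegalArgumentException e) {
                // Prints something like:
                // "Wrong FS: hdfs://namenode:8020/mesos/hadoop.tgz, expected: file:///"
                System.out.println(e.getMessage());
            }

            // Resolving the filesystem from the path itself picks the implementation
            // that matches the URI's scheme (needs the hdfs client on the classpath).
            FileSystem srcFs = src.getFileSystem(conf);
            System.out.println(srcFs.getUri());
        }
    }

Resolving the filesystem from the path itself, as in the last step, is the usual client-side way to handle mixed schemes.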

> > > > On Fri, Aug 15, 2014 at 5:38 PM, John Omernik < john@omernik.com > wrote:

> > > > > I am away from my cluster right now. I tried doing a hadoop fs -ls
> > > > > maprfs:// and that worked. When I tried hadoop fs -ls hdfs:/// it
> > > > > failed with wrong fs type. With that error I didn't try it in the
> > > > > mapred-site. I will try it. Still... why hard code the file prefixes?
> > > > > I guess I am curious how glusterfs would work, or others as they pop up.
> > > > > On Aug 15, 2014 5:04 PM, "Adam Bordelon" < adam@mesosphere.io > wrote:

> > > > > > Can't you just use the hdfs:// protocol for maprfs? That should
> > > > > > work just fine.

> > > > > > On Fri, Aug 15, 2014 at 2:50 PM, John Omernik < john@omernik.com > wrote:

> > > > > > > Thanks all.

> > > > > > > I realized MapR has a workaround for me that I will try soon, in
> > > > > > > that I have the MapR FS NFS-mounted on each node, i.e. I should
> > > > > > > be able to get the tar from there.

> > > > > > > That said, perhaps someone with better coding skills than me
> > > > > > > could provide an env variable where a user could provide the
> > > > > > > HDFS prefixes to try. I know we did that with the tachyon
> > > > > > > project and it works well for other HDFS-compatible fs
> > > > > > > implementations; perhaps that would work here? Hard coding
> > > > > > > instead of a pluggable system seems like a long-term issue that
> > > > > > > will keep coming up.
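
A minimal sketch of how such an env-variable hook could look, in Java purely for illustration. The variable name MESOS_FETCHER_URI_SCHEMES, the helper, and the built-in scheme list are assumptions made for this example, not an existing Mesos feature.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class FetcherSchemes {
        // Schemes handed to the hadoop client today (illustrative list only).
        private static final List<String> BUILT_IN = Arrays.asList("hdfs", "hftp", "s3", "s3n");

        /** True if 'uri' should be fetched with 'hadoop fs -copyToLocal'. */
        static boolean isHadoopFetchable(String uri) {
            List<String> schemes = new ArrayList<>(BUILT_IN);
            // Hypothetical knob: extra schemes supplied by the operator,
            // e.g. MESOS_FETCHER_URI_SCHEMES="maprfs,glusterfs".
            String extra = System.getenv("MESOS_FETCHER_URI_SCHEMES");
            if (extra != null && !extra.isEmpty()) {
                for (String s : extra.split(",")) {
                    if (!s.trim().isEmpty()) {
                        schemes.add(s.trim());
                    }
                }
            }
            for (String scheme : schemes) {
                if (uri.startsWith(scheme + "://")) {
                    return true;
                }
            }
            return false;
        }

        public static void main(String[] args) {
            // With MESOS_FETCHER_URI_SCHEMES=maprfs set, this prints true.
            System.out.println(isHadoopFetchable("maprfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz"));
        }
    }

With the extra scheme supplied, a maprfs:/// URI would be routed to the hadoop client instead of falling through to the local-path handling.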
> > > > > > > On Aug 15, 2014 4:02 PM, "Tim St Clair" < tstclair@redhat.com > wrote:

> > > > > > > > The uri doesn't currently start with any of the known types
> > > > > > > > (at least on 1st grok).
> > > > > > > > You could redirect via a proxy that does the job for you.

> > > > > > > > If you had some fuse mount that would work too.

> > > > > > > > Cheers,
> > > > > > > > Tim
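
For reference, a rough Java rendering of the behaviour Tim describes above; the scheme lists are illustrative guesses, not a copy of fetcher.cpp. A URI whose scheme is not recognized falls through to the local-path branch, and a non-absolute path then requires MESOS_FRAMEWORKS_HOME, which matches the error in the original message below.

    public class FetchDecision {
        static String classify(String uri, String frameworksHome) {
            // Illustrative scheme lists; the real fetcher's lists may differ.
            String[] hadoopSchemes = {"hdfs://", "hftp://", "s3://", "s3n://"};
            String[] netSchemes = {"http://", "https://", "ftp://", "ftps://"};

            for (String s : hadoopSchemes) {
                if (uri.startsWith(s)) return "fetch via hadoop fs -copyToLocal";
            }
            for (String s : netSchemes) {
                if (uri.startsWith(s)) return "download over HTTP/FTP";
            }
            if (uri.startsWith("/")) {
                return "copy local absolute path";
            }
            // Everything else -- including maprfs:///... -- is treated as a
            // path relative to MESOS_FRAMEWORKS_HOME.
            if (frameworksHome == null || frameworksHome.isEmpty()) {
                return "error: relative path but MESOS_FRAMEWORKS_HOME is not set";
            }
            return "copy " + frameworksHome + "/" + uri;
        }

        public static void main(String[] args) {
            System.out.println(classify("maprfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz",
                                        System.getenv("MESOS_FRAMEWORKS_HOME")));
        }
    }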

> > > > > > > > > From: "John Omernik" < john@omernik.com >
> > > > > > > > > To: user@mesos.apache.org
> > > > > > > > > Sent: Friday, August 15, 2014 3:55:02 PM
> > > > > > > > > Subject: Alternate HDFS Filesystems + Hadoop on Mesos

> > > > > > > > > I am on a wonderful journey trying to get hadoop on Mesos
> > > > > > > > > working with MapR. I feel like I am close, but when the
> > > > > > > > > slaves try to run the packaged Hadoop, I get the error
> > > > > > > > > below. The odd thing is, I KNOW I got Spark running on
> > > > > > > > > Mesos pulling both data and the packages from MapRFS. So I
> > > > > > > > > am confused why there is an issue with the fetcher.cpp
> > > > > > > > > here. Granted, when I got Spark working, it was on 0.19.0,
> > > > > > > > > and I am trying a "fresh" version from git (0.20.0?) that I
> > > > > > > > > just pulled today. I am not sure if that matters, but when
> > > > > > > > > I have more time I will try Spark again.

> > > > > > > > > Any thoughts on this error? Thanks.

> > > > > > > > > Error:

> > > > > > > > > WARNING: Logging before InitGoogleLogging() is written to STDERR
> > > > > > > > > I0815 15:48:35.446071 20636 fetcher.cpp:76] Fetching URI
> > > > > > > > > 'maprfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz'
> > > > > > > > > E0815 15:48:35.446184 20636 fetcher.cpp:161] A relative path
> > > > > > > > > was passed for the resource but the environment variable
> > > > > > > > > MESOS_FRAMEWORKS_HOME is not set. Please either specify this
> > > > > > > > > config option or avoid using a relative path
> > > > > > > > > Failed to fetch: maprfs:///mesos/hadoop-0.20.2-mapr-4.0.0.tgz
> > > > > > > > > Failed to synchronize with slave (it's probably exited)

> > > > > > > > --
> > > > > > > > Cheers,
> > > > > > > > Timothy St. Clair
> > > > > > > > Red Hat Inc.

> > --
> > Cheers,
> > Timothy St. Clair
> > Red Hat Inc.

-- 
Cheers, 
Timothy St. Clair 
Red Hat Inc. 
