Subject: Re: Flink on EC2
From: KOSTIANTYN Kudriavtsev
To: user@flink.apache.org
Date: Thu, 29 Oct 2015 09:47:54 -0400

Hi Thomas,

Try switching to EMR AMI 3.5 and registering Hadoop's S3 FileSystem instead of the one packaged with Flink.

*Sent from my ZenFone

On Oct 29, 2015 4:36 AM, "Thomas Götzinger" <mail@simplydevelop.de> wrote:

> Hello Flink Team,
>
> We at Fraunhofer IESE are evaluating Flink for a project, and I'm a bit
> frustrated at the moment.
>
> I wrote a few test cases with the Flink API and want to deploy them to a
> Flink EC2 cluster. I set up the cluster using the Karamel recipe that was
> covered in the following video:
>
> https://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=video&cd=1&cad=rja&uact=8&ved=0CDIQtwIwAGoVChMIy86Tq6rQyAIVR70UCh0IRwuJ&url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dm_SkhyMV0to&usg=AFQjCNGKUzFv521yg-OTy-1XqS2-rbZKug&bvm=bv.105454873,d.bGg
>
> The setup works fine and the hello-flink app runs. But afterwards I
> wanted to copy some data from an S3 bucket to the local EC2 HDFS cluster.
>
> hadoop fs -ls s3n://... works, as do cat and the other commands. But if I
> try to copy the data with distcp, the command freezes and does not respond
> until it times out.
>
> After trying a few things I gave up and started on another solution: access
> the S3 bucket directly from Flink and import it using a small Flink program
> which just reads from S3 and writes to local Hadoop. This works fine
> locally, but on the cluster the S3NFileSystem class is missing
> (ClassNotFoundException), although it is included in the jar file of the
> installation.
>
> I forked the Chef recipe and updated it to Flink 0.9.1, but I hit the same
> issue.
>
> Is there another simple script to install Flink with Hadoop on an EC2
> cluster and a working s3n filesystem?
>
> Freelancer
>
> on behalf of Fraunhofer IESE Kaiserslautern
>
> --
>
> Best regards
>
> Thomas Götzinger
>
> Freelance software engineer (Freiberuflicher Informatiker)
>
> Glockenstraße 2a
> D-66882 Hütschenhausen OT Spesbach
>
> Mobile: +49 (0)176 82180714
> Homezone: +49 (0) 6371 735083
> Private: +49 (0) 6371 954050
>
> mailto:mail@simplydevelop.de
> epost: thomas.goetzinger@epost.de
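
For anyone hitting the same ClassNotFoundException: the registration Kostiantyn suggests is usually done through Hadoop's core-site.xml, which Flink reads via the fs.hdfs.hadoopconf entry in flink-conf.yaml. The snippet below is only a sketch, assuming a Flink 0.9.x setup against Hadoop 2.x; the property names are Hadoop's, while the configuration directory and the credential values are placeholders:

```xml
<!-- core-site.xml (in the directory that flink-conf.yaml's
     fs.hdfs.hadoopconf entry points to, e.g. /etc/hadoop/conf).
     Registers Hadoop's native S3 implementation for the s3n:// scheme;
     the credential values below are placeholders. -->
<configuration>
  <property>
    <name>fs.s3n.impl</name>
    <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
  </property>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>YOUR_SECRET_ACCESS_KEY</value>
  </property>
</configuration>
```

With that in place, add a line like `fs.hdfs.hadoopconf: /etc/hadoop/conf` to flink-conf.yaml and restart the cluster so Flink delegates s3n:// paths to Hadoop's implementation rather than the S3 filesystem bundled with Flink.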
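
The small S3-to-HDFS import job Thomas describes would, against the Flink 0.9 Java batch API, look roughly like the sketch below. The bucket name and target path are hypothetical, and the job needs a running Flink/Hadoop setup with an S3 filesystem registered, so this is an illustration rather than a tested program:

```java
// Sketch of a minimal S3 -> HDFS copy job of the kind described in the
// thread (Flink 0.9 Java batch API). Bucket and paths are hypothetical.
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class S3ToHdfsCopy {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Read text lines from the S3 bucket (s3n scheme, as in the thread)...
        DataSet<String> lines = env.readTextFile("s3n://my-bucket/input/");

        // ...and write them unchanged into the local HDFS cluster.
        lines.writeAsText("hdfs:///data/input/");

        env.execute("S3 to HDFS copy");
    }
}
```

If this fails on the cluster with ClassNotFoundException for S3NFileSystem, the filesystem implementation is not on the TaskManager classpath, which matches the symptom reported in the thread.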