Date: Fri, 20 Nov 2015 15:05:30 +0530
Subject: Hive on Spark - Hadoop 2 - Installation - Ubuntu
From: Dasun Hegoda
To: user@hive.apache.org

Hi,

What I'm planning to do is develop a reporting platform using existing data. I have an existing RDBMS with a large number of records, so I'm using the following stack (http://stackoverflow.com/questions/33635234/hadoop-2-7-spark-hive-jasperreports-scoop-architecuture):

 - Sqoop - Extract data from RDBMS to Hadoop
 - Hadoop - Storage platform -> *Deployment Completed*
 - Hive - Data warehouse
 - Spark - Real-time processing -> *Deployment Completed*

I'm planning to deploy Hive on Spark, but I can't find the installation steps. I tried to read the official '[Hive on Spark][1]' guide, but it has problems. For example, under 'Configuring YARN' it says `yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler`, but it does not say where that should be set.
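From what I can tell (and please correct me if I'm wrong), a ResourceManager property like that would normally go into yarn-site.xml rather than anywhere Hive-specific. My guess at what the guide intends, as a property block (the file location assumes a standard Hadoop 2 layout under $HADOOP_HOME/etc/hadoop/):

```xml
<!-- yarn-site.xml (typically under $HADOOP_HOME/etc/hadoop/) -->
<!-- My assumption of where the guide's "Configuring Yarn" line belongs. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```

But the guide never states this explicitly, so I'm guessing here.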
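Similarly, the guide has you run `set` commands inside the Hive shell. If I understand correctly, those same properties could instead be persisted in hive-site.xml, something like the block below (the property names are from the guide; the values are my own guesses for a YARN deployment):

```xml
<!-- hive-site.xml: my attempt at persisting what the guide sets per-session -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <name>spark.master</name>
  <value>yarn-cluster</value>
</property>
```

Is that the right approach for a permanent setup?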
Also, as per the guide, configurations are set in the Hive runtime shell, which, as far as I know, is not permanent.

Given that, I read [this][2], but it does not have any steps either.

Could you please give me the steps to run Hive on Spark on Ubuntu as a production system?

  [1]: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
  [2]: http://stackoverflow.com/questions/26018306/how-to-configure-hive-to-use-spark

--
Regards,
Dasun Hegoda, Software Engineer
www.dasunhegoda.com | dasunhegoda@gmail.com