From: Hemanth Yamijala <yhemanth@gmail.com>
To: common-user@hadoop.apache.org
Cc: Chen He
Date: Wed, 3 Oct 2012 19:14:21 +0530
Subject: Re: Hadoop and Cuda , JCuda (CPU+GPU architecture)

You could also try creating a lib directory with the dependent jar and
packaging that along with the job's jar file. Please refer to this blog post
for details:
http://www.cloudera.com/blog/2011/01/how-to-include-third-party-libraries-in-your-map-reduce-job/

On Wed, Sep 26, 2012 at 4:57 PM, sudha sadhasivam wrote:

> Sir,
> We have also tried the option of putting JCUBLAS in the Hadoop jar.
> It is still not recognised.
> We would be thankful if you could provide us with a sample exercise on the
> same, with steps for execution.
> I am herewith attaching the error file.
> Thanking you,
> with warm regards,
> Dr G Sudha Sadasivam
>
> --- On Tue, 9/25/12, Chen He wrote:
>
> From: Chen He
> Subject: Re: Hadoop and Cuda , JCuda (CPU+GPU architecture)
> To: "sudha sadhasivam"
> Cc: common-user@hadoop.apache.org
> Date: Tuesday, September 25, 2012, 9:01 PM
>
> Hi Sudha,
>
> Good question.
>
> First of all, you need to be specific about your Hadoop environment
> (pseudo-distributed or a real cluster).
>
> Secondly, you need to understand how Hadoop distributes a job's jar file
> to the worker nodes: it only copies the job's jar file itself.
> It does not contain the jcuda.jar file. The MapReduce program may not
> know where jcuda.jar is, even if you specify it in your worker nodes'
> classpath.
>
> I suggest you include jcuda.jar in your wordcount.jar. Then, when Hadoop
> copies the wordcount.jar file to each worker node's temporary working
> directory, you do not need to worry about this issue.
>
> Let me know if you have further questions.
>
> Chen
>
> On Tue, Sep 25, 2012 at 12:38 AM, sudha sadhasivam <
> sudhasadhasivam@yahoo.com> wrote:
>
> > Sir,
> > We tried to integrate Hadoop and JCUDA, using code from:
> > http://code.google.com/p/mrcl/source/browse/trunk/hama-mrcl/src/mrcl/mrcl/?r=76
> >
> > We were able to compile, but we are not able to execute: it does not
> > recognise JCUBLAS.jar, even though we tried setting the classpath.
> > We are herewith attaching the procedure for the same along with the
> > errors. Kindly inform us how to proceed. It is our UG project.
> > Thanking you,
> > Dr G Sudha Sadasivam
> >
> > --- On Mon, 9/24/12, Chen He wrote:
> >
> > From: Chen He
> > Subject: Re: Hadoop and Cuda , JCuda (CPU+GPU architecture)
> > To: common-user@hadoop.apache.org
> > Date: Monday, September 24, 2012, 9:03 PM
> >
> > http://wiki.apache.org/hadoop/CUDA%20On%20Hadoop
> >
> > On Mon, Sep 24, 2012 at 10:30 AM, Oleg Ruchovets wrote:
> >
> > > Hi,
> > >
> > > I am going to process video analytics using Hadoop. I am very
> > > interested in the CPU+GPU architecture, especially using CUDA
> > > (http://www.nvidia.com/object/cuda_home_new.html) and JCuda
> > > (http://jcuda.org/).
> > > Does using Hadoop with a CPU+GPU architecture bring a significant
> > > performance improvement, and has someone succeeded in implementing
> > > it in production quality?
> > >
> > > I didn't find any projects / examples using such technology.
> > > If someone could give me a link to best practices and examples using
> > > CUDA/JCUDA + Hadoop, that would be great.
> > > Thanks in advance,
> > > Oleg.
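
[Editor's note: the packaging approach the thread recommends — a lib/
directory with the dependent jar inside the job jar — can be sketched as
shell commands. This is only a sketch; the file names (WordCount.java,
jcuda.jar, wordcount.jar) are illustrative assumptions, not taken from the
posters' actual project.]

```shell
# Sketch of the "lib directory inside the job jar" approach discussed above.
# Assumes a JDK and a Hadoop installation are on the PATH.

mkdir -p build/lib

# Compile the job classes against Hadoop and JCuda.
# `hadoop classpath` prints the Hadoop jars installed on this machine.
javac -classpath "$(hadoop classpath):jcuda.jar" -d build WordCount.java

# Put the dependent jar under lib/ inside the job jar. Hadoop unpacks the
# job jar on each worker node and adds lib/*.jar to the task classpath,
# so no per-node classpath configuration is needed.
cp jcuda.jar build/lib/
jar cvf wordcount.jar -C build .

# Submit as usual; the dependency travels inside the job jar.
hadoop jar wordcount.jar WordCount input output
```

An alternative, also mentioned in the thread, is to unpack jcuda.jar and
merge its classes directly into wordcount.jar; the lib/ layout keeps the
dependency as a separate, versioned jar instead.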