Subject: Re: Hadoop problem
From: Laurent Hatier <laurent.hatier@gmail.com>
To: mapreduce-user@hadoop.apache.org
Date: Mon, 30 May 2011 10:17:46 +0200

Hi everybody,

I have a little problem with the cassandra-all jar file: when I want to write the result of the MapReduce job back to the database, it tells me that the SpecificRecord class (Hector API) cannot be found... I have already checked this dependency and it's fine. Do I have to use the Cassandra API, or is it a technical problem?

Thanks
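
A hedged sketch, assuming the class is only reported missing once the tasks actually run (i.e. cassandra-all/Hector is on the client classpath but never shipped to the task JVMs): with the Hadoop 0.20-era API, one common way to ship extra jars is the -libjars option, which only takes effect if the driver goes through ToolRunner/GenericOptionsParser. The class name, job name and paths below are invented for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // getConf() already reflects whatever -libjars / -D options ToolRunner parsed.
    Job job = new Job(getConf(), "cassandra-write-back");
    job.setJarByClass(MyDriver.class);
    // ... mapper/reducer/input/output setup elided ...
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    // Hypothetical invocation (paths are illustrative):
    //   hadoop jar myjob.jar MyDriver \
    //     -libjars /path/to/cassandra-all.jar,/path/to/hector-core.jar in out
    System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
  }
}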

2011/5/27 Laurent Hatier <laurent.hatier@gmail.com>
Of course! It's logical.
Thank you, John.

2011/5/27 John Armstrong <john.armstrong@ccri.com>
On Fri, 27 May 2011 13:52:04 +0200, Laurent Hatier
<laurent.hatier@gmail.com> wrote:
> I'm a newbie with Hadoop/MapReduce. I have a problem with Hadoop: I set some
> variables in the run function, but when the map task runs it can't get the
> values of these variables...
> If anyone knows the solution :)

By the "run function" do you mean the main method that launches the
map/reduce job? It's no surprise that the mappers (and reducers) won't
know those variables, because they run as completely separate tasks.

If you're computing something in the setup method for use in the mappers
or reducers you'll have to pass that information along somehow. If it's a
String (or something that can easily be made into a String, like an int)
you can set it as a property in the job's Configuration. For more
complicated data you'll have to serialize it to a file, place the file into
the distributed cache, and then deserialize the data within the mapper or
reducer's setup method.
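
A minimal sketch of the Configuration-property route described above; the property name "myapp.threshold", the mapper types and the filter logic are invented for illustration, and the property is set before the Job is created because the Job takes its own copy of the Configuration.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class ConfigPassingExample {

  // Driver side (the "run function"): stash the value in the job's Configuration.
  public static Job buildJob(Configuration conf, int threshold) throws IOException {
    conf.setInt("myapp.threshold", threshold);   // shipped to every task with the job
    Job job = new Job(conf, "config-passing-example");
    job.setJarByClass(ConfigPassingExample.class);
    // ... mapper/reducer/input/output setup elided ...
    return job;
  }

  public static class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private int threshold;

    @Override
    protected void setup(Context context) {
      // Read the value back inside the task JVM; a plain Java field set in the
      // driver would not be visible here, since the mapper is a separate process.
      threshold = context.getConfiguration().getInt("myapp.threshold", 0);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      if (value.getLength() > threshold) {
        context.write(new Text(value), new LongWritable(1));
      }
    }
  }
}

For anything larger than a string or a number, the same setup() hook is where the file pulled from the distributed cache would be deserialized.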

Of course, if the computation is less complicated/time-consuming than the deserialization process, you may as well just recompute the data in each mapper or reducer.



--
Laurent HATIER
2nd-year student in the Engineering Cycle at EISTI



--
Laurent HATIER
2nd-year student in the Engineering Cycle at EISTI