From: aaron morton
Subject: Re: error Cassandra ring and Hadoop connection ¿?
Date: Wed, 1 May 2013 06:00:36 +1200
To: user@cassandra.apache.org

> java.lang.RuntimeException: UnavailableException()
Looks like the Pig script could talk to one node, but the coordinator could not process the request at the consistency level requested. Check that all the nodes are up, that the RF is set to the correct value, and check the CL you are using.

Cheers

-----------------
Aaron Morton
Freelance Cassandra Consultant
New Zealand

@aaronmorton
http://www.thelastpickle.com

On 30/04/2013, at 4:55 AM, Miguel Angel Martin junquera wrote:

> Hi all:
>
> I can run Pig with Cassandra and Hadoop in EC2.
>
> I am trying to run Pig against the Cassandra ring and Hadoop.
> The Cassandra ring also hosts the TaskTrackers and DataNodes.
>
> I am running Pig from another machine, where I have installed the NameNode and JobTracker.
> I have a simple script that loads data from the pygmalion keyspace and the account column family and dumps the result to test it.
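For reference, a minimal sketch of what such a Pig script could look like. The keyspace (pygmalion) and column family (account) come from the description above; the actual script was not posted, and the relation name is only illustrative:

    -- read all rows of the account column family in the pygmalion keyspace
    rows = LOAD 'cassandra://pygmalion/account'
           USING org.apache.cassandra.hadoop.pig.CassandraStorage();
    -- dump the rows so the result can be checked by eye
    DUMP rows;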
> I installed another simple local Cassandra on the NameNode/JobTracker machine and I can run Pig jobs there OK, but when I try to run the script against the Cassandra ring configuration, changing the environment variable PIG_INITIAL_ADDRESS to the IP of one of the nodes of the Cassandra ring, I get this error:
>
> ---
>
> java.lang.RuntimeException: UnavailableException()
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:384)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:390)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:313)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:184)
>         at org.apache.cassandra.hadoop.pig.CassandraStorage.getNext(CassandraStorage.java:226)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
>         at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
>         at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>         at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: UnavailableException()
>         at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12924)
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>         at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
>         at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:346)
>         ... 17 more
>
> Can anybody help me, or does anyone have an idea?
> Thanks in advance.
>
> P.S.:
> 1.- The ports are open in EC2.
> 2.- The keyspace and CF are created in the EC2 Cassandra cluster too, and likewise in the NameNode's Cassandra installation.
> 3.- I have this .bash_profile configuration:
>
> # .bash_profile
>
> # Get the aliases and functions
> if [ -f ~/.bashrc ]; then
>         . ~/.bashrc
> fi
>
> # User specific environment and startup programs
>
> PATH=$PATH:$HOME/.local/bin:$HOME/bin
> export PATH=$PATH:/usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin
> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
> export CASSANDRA_HOME=/home/ec2-user/apache-cassandra-1.2.4
> export PIG_HOME=/home/ec2-user/pig-0.11.1-src
> export PIG_INITIAL_ADDRESS=10.210.164.233
> #export PIG_INITIAL_ADDRESS=127.0.0.1
> export PIG_RPC_PORT=9160
> export PIG_CONF_DIR=/home/ec2-user/hadoop-1.1.1/conf
> export PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner
> #export PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner
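One thing worth double-checking in this configuration: PIG_PARTITIONER must name exactly the partitioner the Cassandra ring itself is running, otherwise the Hadoop splits are built with the wrong token ordering. A quick way to verify it on one of the ring nodes; the cassandra.yaml path is only an example based on the CASSANDRA_HOME above:

    # which partitioner is the ring actually using?
    grep '^partitioner:' /home/ec2-user/apache-cassandra-1.2.4/conf/cassandra.yaml
    # (newer nodetool versions also report it via `nodetool describecluster`)

    # PIG_PARTITIONER should be set to that exact class name
    echo $PIG_PARTITIONER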
> 4.- I export all the Cassandra jars in hadoop-env.sh on all the Hadoop nodes.
> 5.- I get the same error running Pig in local mode.
>
> 6.- If I change to the RandomPartitioner and reload the changes, I get this error:
>
> java.lang.RuntimeException: InvalidRequestException(why:Start token sorts after end token)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:384)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:390)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:313)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:184)
>         at org.apache.cassandra.hadoop.pig.CassandraStorage.getNext(CassandraStorage.java:228)
>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
>         at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
>         at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
>         at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: InvalidRequestException(why:Start token sorts after end token)
>         at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12916)
>         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>         at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:734)
>         at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:718)
>         at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:346)
>         ... 17 more
>
> Thanks in advance.
>
> Note: I am running the script with pig_cassandra and Cassandra 1.2.0.
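Following up on the advice at the top of the thread: UnavailableException means the coordinator could not find enough live replicas for the requested consistency level (usually ONE for the Hadoop input unless it has been changed in the job configuration). A minimal way to check that every node is up and that the keyspace replication factor is what you expect, using the tools that ship with Cassandra 1.2; the node address is the PIG_INITIAL_ADDRESS from the .bash_profile above:

    # is every node in the ring reported as Up / Normal?
    nodetool -h 10.210.164.233 ring

    # what replication settings does the keyspace use?
    cqlsh 10.210.164.233
    cqlsh> DESCRIBE KEYSPACE pygmalion;

If fewer replicas are alive than the consistency level requires, reads will keep failing with UnavailableException even though the client can connect to a node.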