From: "@Sanjiv Singh" <sanjiv.is.on@gmail.com>
Reply-To: sanjiv.is.on@gmail.com
Date: Tue, 23 Feb 2016 12:35:19 +0530
Subject: Re: Spark SQL is not returning records for hive bucketed tables on HDP
To: Varadharajan Mukundan
Cc: user, "user@hive.apache.org"

Hi Varadharajan,

That is the point: Spark SQL *is* able to recognize delta files. See the directory structure below, with ONE BASE (43 records) and one DELTA (created after the last insert). And I am able to see the last insert through Spark SQL.

*See the complete scenario below:*

*Steps:*
- Inserted 43 records in the table.
- Ran major compaction on the table:
  *alter table mytable COMPACT 'major';*
- Disabled auto compaction on the table:
  *alter table mytable set TBLPROPERTIES("NO_AUTO_COMPACTION"="true");*
- Inserted 1 record in the table.

> *hadoop fs -ls /apps/hive/warehouse/mydb.db/mytable*

drwxrwxrwx   - root hdfs          0 2016-02-23 11:43 /apps/hive/warehouse/mydb.db/mytable/base_0000087
drwxr-xr-x   - root hdfs          0 2016-02-23 12:02 /apps/hive/warehouse/mydb.db/mytable/delta_0000088_0000088

*SPARK JDBC:*

0: jdbc:hive2://myhost:9999> select count(*) from mytable ;
+------+
| _c0  |
+------+
| 44   |
+------+
1 row selected (1.196 seconds)

*HIVE JDBC:*

1: jdbc:hive2://myhost:10000> select count(*) from mytable ;
+------+--+
| _c0  |
+------+--+
| 44   |
+------+--+
1 row selected (0.121 seconds)

Regards
Sanjiv Singh
Mob : +091 9990-447-339

On Tue, Feb 23, 2016 at 12:04 PM, Varadharajan Mukundan <srinathsmn@gmail.com> wrote:

> Hi Sanjiv,
>
> Yes. If we use Hive JDBC, we should be able to retrieve all the rows, since it is Hive that processes the query. But I think the problem with Hive JDBC is that there are two layers of processing: Hive first, and then Spark over the result set. The other problem is that performance is limited to that single HiveServer2 node and its network.
>
> But if we use the sqlContext.table function in Spark to access Hive tables, it is supposed to read the files directly from HDFS, skipping the Hive layer. It doesn't read delta files, though, and only reads the contents of the base folder. Only after a major compaction are the delta files merged into the base folder and become visible to Spark SQL.
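
A minimal sketch of the direct-read path described in the reply above, assuming Spark 1.x with a HiveContext (the application name is illustrative; the table name follows the thread's mydb.db/mytable example). On an uncompacted transactional table, this count can come up short of what HiveServer2 reports:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object DeltaVisibilityCheck {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("delta-visibility-check"))
        val sqlContext = new HiveContext(sc)

        // sqlContext.table reads the table's ORC files straight from HDFS,
        // bypassing HiveServer2, so rows that still live only in delta_*
        // directories are not merged in at read time.
        val directCount = sqlContext.table("mydb.mytable").count()
        println(s"rows visible to Spark's direct read: $directCount")

        sc.stop()
      }
    }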

> On Tue, Feb 23, 2016 at 11:57 AM, @Sanjiv Singh <sanjiv.is.on@gmail.com> wrote:
>
>> Hi Varadharajan,
>>
>> Can you elaborate on this (quoted from your previous mail):
>> "I observed that the hive transactional storage structure does not work with Spark yet"
>>
>> If it is about the delta files created after each transaction, which Spark would not be able to recognize: I have a table *mytable* (ORC, BUCKETED, NON-SORTED) that has already seen lots of inserts, updates, and deletes. I can see the delta files created in HDFS (see below), yet I am still able to fetch consistent records through both Spark JDBC and Hive JDBC.
>>
>> No compaction has been triggered for that table.
>>
>> > *hadoop fs -ls /apps/hive/warehouse/mydb.db/mytable*
>>
>> drwxrwxrwx   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/base_0000060
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000061_0000061
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000062_0000062
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000063_0000063
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000064_0000064
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000065_0000065
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000066_0000066
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000067_0000067
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000068_0000068
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000069_0000069
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000070_0000070
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000071_0000071
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:38 /apps/hive/warehouse/mydb.db/mytable/delta_0000072_0000072
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000073_0000073
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000074_0000074
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000075_0000075
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000076_0000076
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000077_0000077
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000078_0000078
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000079_0000079
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000080_0000080
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000081_0000081
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000082_0000082
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000083_0000083
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000084_0000084
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:39 /apps/hive/warehouse/mydb.db/mytable/delta_0000085_0000085
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:40 /apps/hive/warehouse/mydb.db/mytable/delta_0000086_0000086
>> drwxr-xr-x   - root hdfs          0 2016-02-23 11:41 /apps/hive/warehouse/mydb.db/mytable/delta_0000087_0000087
>>
>> Regards
>> Sanjiv Singh
>> Mob : +091 9990-447-339
>>
>> On Mon, Feb 22, 2016 at 1:38 PM, Varadharajan Mukundan <srinathsmn@gmail.com> wrote:
>>
>>> Auto compaction, if enabled, is actually triggered based on the volume of changes; it doesn't automatically run after every insert. I think it is possible to lower the thresholds, but that might hurt performance by a big margin. As of now, we run compaction after the batch insert completes.
>>>
>>> The only other way to solve this problem as of now is to use the Hive JDBC API, as in the sketch below.
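
A minimal sketch of that Hive JDBC path in Scala, assuming the standard HiveServer2 driver class and the host and port shown earlier in this thread (credentials are illustrative). Because HiveServer2 executes the query itself, base and delta files are merged at read time:

    import java.sql.DriverManager

    object HiveJdbcCount {
      def main(args: Array[String]): Unit = {
        // Register the HiveServer2 JDBC driver.
        Class.forName("org.apache.hive.jdbc.HiveDriver")
        val conn = DriverManager.getConnection(
          "jdbc:hive2://myhost:10000/mydb", "root", "")
        try {
          // Hive processes the query, so ACID delta files are honored.
          val rs = conn.createStatement().executeQuery("select count(*) from mytable")
          while (rs.next()) println(s"count via HiveServer2: ${rs.getLong(1)}")
        } finally conn.close()
      }
    }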

>>> On Mon, Feb 22, 2016 at 11:39 AM, @Sanjiv Singh <sanjiv.is.on@gmail.com> wrote:
>>>
>>>> Compaction should have been triggered automatically, as the following properties are already set in *hive-site.xml*, and the *NO_AUTO_COMPACTION* property has not been set on these tables:
>>>>
>>>>   <property>
>>>>     <name>hive.compactor.initiator.on</name>
>>>>     <value>true</value>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>hive.compactor.worker.threads</name>
>>>>     <value>1</value>
>>>>   </property>
>>>>
>>>> Documentation is upsetting sometimes.
>>>>
>>>> Regards
>>>> Sanjiv Singh
>>>> Mob : +091 9990-447-339
>>>>
>>>> On Mon, Feb 22, 2016 at 9:49 AM, Varadharajan Mukundan <srinathsmn@gmail.com> wrote:
>>>>
>>>>> Yes, I was burned by this issue a couple of weeks back. It also means that after every insert job, a compaction has to be run before the new rows are accessible from Spark. Sad that this issue is not documented / mentioned anywhere.
>>>>>
>>>>> On Mon, Feb 22, 2016 at 9:27 AM, @Sanjiv Singh <sanjiv.is.on@gmail.com> wrote:
>>>>>
>>>>>> Hi Varadharajan,
>>>>>>
>>>>>> Thanks for your response.
>>>>>>
>>>>>> Yes, it is a transactional table; see *show create table* below.
>>>>>>
>>>>>> The table has hardly 3 records, and after triggering a major compaction on it, it started showing results in Spark SQL.
>>>>>>
>>>>>> > *ALTER TABLE hivespark COMPACT 'major';*
>>>>>>
>>>>>> > *show create table hivespark;*
>>>>>>
>>>>>>   CREATE TABLE `hivespark`(
>>>>>>     `id` int,
>>>>>>     `name` string)
>>>>>>   CLUSTERED BY (
>>>>>>     id)
>>>>>>   INTO 32 BUCKETS
>>>>>>   ROW FORMAT SERDE
>>>>>>     'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
>>>>>>   STORED AS INPUTFORMAT
>>>>>>     'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
>>>>>>   OUTPUTFORMAT
>>>>>>     'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
>>>>>>   LOCATION
>>>>>>     'hdfs://myhost:8020/apps/hive/warehouse/mydb.db/hivespark'
>>>>>>   TBLPROPERTIES (
>>>>>>     'COLUMN_STATS_ACCURATE'='true',
>>>>>>     'last_modified_by'='root',
>>>>>>     'last_modified_time'='1455859079',
>>>>>>     'numFiles'='37',
>>>>>>     'numRows'='3',
>>>>>>     'rawDataSize'='0',
>>>>>>     'totalSize'='11383',
>>>>>>     'transactional'='true',
>>>>>>     'transient_lastDdlTime'='1455864121') ;
>>>>>>
>>>>>> Regards
>>>>>> Sanjiv Singh
>>>>>> Mob : +091 9990-447-339
>>>>>>
>>>>>> On Mon, Feb 22, 2016 at 9:01 AM, Varadharajan Mukundan <srinathsmn@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Is the transactional attribute set on your table? I observed that the hive transactional storage structure does not work with Spark yet. You can confirm this by looking for the transactional attribute in the output of "desc extended <tablename>" in the hive console.
>>>>>>>
>>>>>>> If you need to access a transactional table, consider doing a major compaction first and then try accessing the table.
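
A minimal sketch of that check-then-compact flow, run over the same kind of Hive JDBC connection as in the earlier sketch (table name and credentials are illustrative). SHOW TBLPROPERTIES is used here instead of "desc extended" because its two-column output is easier to read programmatically:

    import java.sql.DriverManager

    object CompactThenRead {
      def main(args: Array[String]): Unit = {
        Class.forName("org.apache.hive.jdbc.HiveDriver")
        val conn = DriverManager.getConnection(
          "jdbc:hive2://myhost:10000/mydb", "root", "")
        val stmt = conn.createStatement()

        // 'transactional'='true' should show up among the table properties.
        val props = stmt.executeQuery("show tblproperties mytable")
        while (props.next()) println(s"${props.getString(1)} = ${props.getString(2)}")

        // Queue a major compaction so the delta_* directories are merged
        // into a fresh base_* directory that Spark's direct reader can see.
        stmt.execute("alter table mytable compact 'major'")

        // Inspect the compactor queue; query from Spark again once this
        // request is no longer listed as 'initiated' or 'working'.
        val q = stmt.executeQuery("show compactions")
        while (q.next())
          println((1 to q.getMetaData.getColumnCount).map(i => q.getString(i)).mkString("  "))

        conn.close()
      }
    }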

>>>>>>> On Mon, Feb 22, 2016 at 8:57 AM, @Sanjiv Singh <sanjiv.is.on@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I have observed that Spark SQL is not returning records for hive bucketed ORC tables on HDP.
>>>>>>>>
>>>>>>>> In Spark SQL, I am able to list all tables, but queries on hive bucketed tables are not returning records.
>>>>>>>>
>>>>>>>> I have also tried the same with non-bucketed hive tables; those work fine.
>>>>>>>>
>>>>>>>> The same works on a plain Apache setup.
>>>>>>>>
>>>>>>>> Let me know if you need any other details.
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Sanjiv Singh
>>>>>>>> Mob : +091 9990-447-339
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> M. Varadharajan
>>>>>>>
>>>>>>> ------------------------------------------------
>>>>>>>
>>>>>>> "Experience is what you get when you didn't get what you wanted"
>>>>>>>         -By Prof. Randy Pausch in "The Last Lecture"
>>>>>>>
>>>>>>> My Journal :- http://varadharajan.in
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> M. Varadharajan
>>>>>
>>>>> ------------------------------------------------
>>>>>
>>>>> "Experience is what you get when you didn't get what you wanted"
>>>>>         -By Prof. Randy Pausch in "The Last Lecture"
>>>>>
>>>>> My Journal :- http://varadharajan.in
>>>
>>> --
>>> Thanks,
>>> M. Varadharajan
>>>
>>> ------------------------------------------------
>>>
>>> "Experience is what you get when you didn't get what you wanted"
>>>         -By Prof. Randy Pausch in "The Last Lecture"
>>>
>>> My Journal :- http://varadharajan.in
>
> --
> Thanks,
> M. Varadharajan
>
> ------------------------------------------------
>
> "Experience is what you get when you didn't get what you wanted"
>         -By Prof. Randy Pausch in "The Last Lecture"
>
> My Journal :- http://varadharajan.in