hive-user mailing list archives

From "Mich Talebzadeh" <m...@peridale.co.uk>
Subject RE: ORC format
Date Mon, 01 Feb 2016 17:06:56 GMT
Please also bear in mind that data in Hive is supposed to be immutable (although later versions
allow one to update data in Hive).
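For completeness, updates in later Hive releases (0.14 and above) require the table to be declared transactional; a minimal sketch, with illustrative table and column names (it also needs hive.support.concurrency and the DbTxnManager configured):

```sql
-- ACID tables must be stored as ORC, bucketed, and flagged transactional (Hive 0.14+)
CREATE TABLE invoices_txn (
  id     INT
, amount DECIMAL(20,2)
)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- Only on such a table are UPDATE/DELETE accepted
UPDATE invoices_txn SET amount = 0 WHERE id = 1;
```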

 

Data coming into Hive is supposed to be cleansed already, sourced from other transactional databases
etc. So the concept of a primary key (meaning enforcing uniqueness of a row) does not make
much sense in Hive. Indeed one often denormalises data to take advantage of the massively parallel
processing in Hive.

 

External indexes can be created in Hive but are not useful as yet. In other words, the Hive optimizer
will not use them. For example, the bitmap indexes on the fact table below are not used
at all:

 

0: jdbc:hive2://rhes564:10010/default> show index on sales;
+--------------------+----------+------------+-----------------------------------------+----------+----------+
|      idx_name      | tab_name | col_names  |              idx_tab_name               | idx_type | comment  |
+--------------------+----------+------------+-----------------------------------------+----------+----------+
| sales_cust_bix     | sales    | cust_id    | oraclehadoop__sales_sales_cust_bix__    | bitmap   |          |
| sales_channel_bix  | sales    | channel_id | oraclehadoop__sales_sales_channel_bix__ | bitmap   |          |
| sales_prod_bix     | sales    | prod_id    | oraclehadoop__sales_sales_prod_bix__    | bitmap   |          |
| sales_promo_bix    | sales    | promo_id   | oraclehadoop__sales_sales_promo_bix__   | bitmap   |          |
| sales_time_bix     | sales    | time_id    | oraclehadoop__sales_sales_time_bix__    | bitmap   |          |
+--------------------+----------+------------+-----------------------------------------+----------+----------+
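For reference, an index like those above would have been created with DDL along the following lines (a sketch; WITH DEFERRED REBUILD plus an explicit REBUILD is how Hive populates the index table):

```sql
CREATE INDEX sales_cust_bix
ON TABLE sales (cust_id)
AS 'BITMAP'
WITH DEFERRED REBUILD;

ALTER INDEX sales_cust_bix ON sales REBUILD;
```

Even after the rebuild, the optimizer does not consult the resulting index table automatically, which is why these indexes are of little practical use.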

 

 

So this leaves us with storage indexes in Hive on ORC tables. If you are importing CSV files,
they can be imported as compressed (bzip2) files into external staging tables and then inserted
into ORC tables. That is the way I would do it.

 

--1 Move the .CSV data into HDFS.
--2 Create an external staging table.
--3 Create the ORC table.
--4 Insert the data from the external table into the Hive ORC table.
--5 Delete the staging files.

 

--1) hdfs dfs -copyFromLocal /var/tmp/t.bcp hdfs://rhes564:9000/data/stg/table
--1) hdfs dfs -copyFromLocal /var/tmp/t2.bcp hdfs://rhes564:9000/data/stg/table
--1) hdfs dfs -ls hdfs://rhes564:9000/data/stg/table   # verify the copy

 

set hive.exec.reducers.max=256;

use test;

 

--2)

 

DROP TABLE IF EXISTS stg_t2;
CREATE EXTERNAL TABLE stg_t2 (
  INVOICENUMBER string
, PAYMENTDATE string
, NET string
, VAT string
, TOTAL string
)
COMMENT 'from csv file from excel sheet PayInsPeridaleTechnology'
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
STORED AS TEXTFILE
LOCATION '/data/stg/table2'
TBLPROPERTIES ("skip.header.line.count"="1")
;

 

--3)

DROP TABLE IF EXISTS t2;
CREATE TABLE t2 (
  INVOICENUMBER          INT
, PAYMENTDATE            timestamp
, NET                    DECIMAL(20,2)
, VAT                    DECIMAL(20,2)
, TOTAL                  DECIMAL(20,2)
)
COMMENT 'from csv file from excel sheet PayInsPeridaleTechnology'
STORED AS ORC
TBLPROPERTIES ("orc.compress"="ZLIB")
;

--4) Put data into the target table, doing the conversions and ignoring empty rows
INSERT INTO TABLE t2
SELECT
          INVOICENUMBER
        , CAST(UNIX_TIMESTAMP(paymentdate,'dd/MM/yyyy')*1000 AS timestamp) -- lower-case dd/yyyy; 'DD' and 'YYYY' mean day-of-year and week-year in SimpleDateFormat
        --, CAST(REGEXP_REPLACE(SUBSTR(net,2,20),",","") AS DECIMAL(20,2))
        , CAST(REGEXP_REPLACE(net,'[^\\d\\.]','') AS DECIMAL(20,2))
        , CAST(REGEXP_REPLACE(vat,'[^\\d\\.]','') AS DECIMAL(20,2))
        , CAST(REGEXP_REPLACE(total,'[^\\d\\.]','') AS DECIMAL(20,2))
FROM stg_t2
WHERE CAST(REGEXP_REPLACE(net,'[^\\d\\.]','') AS DECIMAL(20,2)) > 0.0 -- exclude empty rows
;
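A quick sanity check after the load is worthwhile, since a bad date pattern or regex silently produces NULLs on CAST; a suggested step, not part of the original workflow:

```sql
-- Rows whose conversion failed show up as NULLs in the converted columns
SELECT COUNT(*)           AS total_rows
     , COUNT(paymentdate) AS valid_dates
     , COUNT(net)         AS valid_net
FROM t2;

SELECT * FROM t2 LIMIT 5;
```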

 

 

HTH,

 

Dr Mich Talebzadeh

 

LinkedIn  https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

 

Sybase ASE 15 Gold Medal Award 2008

A Winning Strategy: Running the most Critical Financial Data on ASE 15

http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf

Author of the books "A Practitioner’s Guide to Upgrading to Sybase ASE 15", ISBN 978-0-9563693-0-7.


co-author "Sybase Transact SQL Guidelines Best Practices", ISBN 978-0-9759693-0-4

Publications due shortly:

Complex Event Processing in Heterogeneous Environments, ISBN: 978-0-9563693-3-8

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume one out shortly

 

http://talebzadehmich.wordpress.com

 

NOTE: The information in this email is proprietary and confidential. This message is for the
designated recipient only, if you are not the intended recipient, you should destroy it immediately.
Any information in this message shall not be understood as given or endorsed by Peridale Technology
Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility
of the recipient to ensure that this email is virus free, therefore neither Peridale Technology
Ltd, its subsidiaries nor their employees accept any responsibility.

 

From: Mich Talebzadeh [mailto:mich@peridale.co.uk] 
Sent: 01 February 2016 15:58
To: user@hive.apache.org
Subject: RE: ORC format

 

In relational databases, say Oracle or Sybase, there is only one primary key for a given table.
So by definition you can have one primary key on any table, consisting of one column or a composite
primary key (multiple columns).

 

Please check the threads on “ORC files and statistics” in this forum for a detailed explanation.

 

HTH

 

 

Dr Mich Talebzadeh

 


 

From: Philip Lee [mailto:philjjoon@gmail.com] 
Sent: 01 February 2016 15:49
To: user@hive.apache.org
Subject: Re: ORC format

 

What do you mean by a silver bullet? So you mean it is not stored as a primary key on
each column; it is just stored as storage indexing, right?

 

"The statistics helps the optimiser. So whether one table or many, the optimiser will take
advantage of stats to push down the predicate for faster decision."  I understand this sentence
when the operator tree contains a filter function, because when facing a filter function
it can skip the unrelated rows. But do you think this aspect can be beneficial when joining
two tables as well?

 

On Mon, Feb 1, 2016 at 4:43 PM, Mich Talebzadeh <mich@peridale.co.uk> wrote:

Hi,

 

ORC tables use what is known as a storage index, with stats (min, max, sum etc.) stored at the
table, stripe and row-index (batches of 10K rows) level. The statistics help the optimiser.
So whether one table or many, the optimiser will take advantage of the stats to push down the
predicate for faster decisions. Again, ORC is not a silver bullet. It can be valuable
when conditions are met.
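To let the reader actually skip stripes and row groups using those min/max statistics, predicate pushdown must be enabled; a sketch as of Hive 1.x, with illustrative column names:

```sql
SET hive.optimize.index.filter=true;  -- let ORC use min/max stats to skip stripes/row groups

-- With the predicate pushed down, stripes whose cust_id range cannot
-- contain 12345 are never read from disk
SELECT COUNT(*)
FROM   sales
WHERE  cust_id = 12345;
```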

 

HTH. 

 

Dr Mich Talebzadeh

 


 

From: Philip Lee [mailto:philjjoon@gmail.com] 
Sent: 01 February 2016 15:21
To: user@hive.apache.org
Subject: ORC format

 

Hello,

 

I am experimenting with the performance of some systems on ORC versus CSV files.
I have read the ORC documentation on the Hive website, but am still curious about some things.

 

I know the ORC format is faster for filtering or reading because it has indexing.
Does it also have an advantage when joining two ORC tables?

 

Could you explain it in detail?
When experimenting, it seems to have some advantage for joins in some respects, but I am not
quite sure what characteristic of ORC makes this happen compared with CSV.

 

Best,

Phil

 





 

-- 

==========================================================

Hae Joon Lee

 

Now, in Germany,

M.S. Candidate, Interested in Distributed System, Iterative Processing

Dept. of Computer Science, Informatik in German, TUB

Technical University of Berlin

 

In Korea,

M.S. Candidate, Computer Architecture Laboratory

Dept. of Computer Science, KAIST 

 

Rm# 4414 CS Dept. KAIST

373-1 Guseong-dong, Yuseong-gu, Daejon, South Korea (305-701) 

 

Mobile) 49) 015-251-448-278 in Germany, no cellular in Korea

==========================================================

