hive-user mailing list archives

From "Mich Talebzadeh" <m...@peridale.co.uk>
Subject RE: Loading data containing newlines
Date Fri, 15 Jan 2016 23:33:43 GMT
Thanks Ryan, very useful to know indeed.

 

Dr Mich Talebzadeh

 

LinkedIn  https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

 

Sybase ASE 15 Gold Medal Award 2008

A Winning Strategy: Running the most Critical Financial Data on ASE 15

http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf

Author of the books "A Practitioner’s Guide to Upgrading to Sybase ASE 15", ISBN 978-0-9563693-0-7.


co-author "Sybase Transact SQL Guidelines Best Practices", ISBN 978-0-9759693-0-4

Publications due shortly:

Complex Event Processing in Heterogeneous Environments, ISBN: 978-0-9563693-3-8

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume one out shortly

 

http://talebzadehmich.wordpress.com

 

NOTE: The information in this email is proprietary and confidential. This message is for the
designated recipient only, if you are not the intended recipient, you should destroy it immediately.
Any information in this message shall not be understood as given or endorsed by Peridale Technology
Ltd, its subsidiaries or their employees, unless expressly so stated. It is the responsibility
of the recipient to ensure that this email is virus free, therefore neither Peridale Technology
Ltd, its subsidiaries nor their employees accept any responsibility.

 

From: Ryan Harris [mailto:Ryan.Harris@zionsbancorp.com] 
Sent: 15 January 2016 23:31
To: user@hive.apache.org
Subject: RE: Loading data containing newlines

 

Mich, if you have a toolpath that you can use to pipeline the required edits to the source
file, you can use a chain similar to this:

 

hadoop fs -text ${hdfs_path}/${orig_filename} \
  | iconv -f EBCDIC-US -t ASCII \
  | sed 's/\(.\{133\}\)/\1\n/g' \
  | gzip -c \
  | /usr/bin/hadoop fs -put - /etl/${table_name}/load/${orig_filename}.gz

 

to clean up your source input data as you drop it into the initial external table location that Hive will use in a Hive-based ELT chain.

 

It really depends on your upstream data path. If data were being collected by Flume, you might be able to clean it up there. It is also possible to handle this with custom Hive SerDes, but it depends on where you want to write the code and how much existing data you already have to deal with.

 

Spark is also a very flexible and useful tool for this sort of problem, and it has numerous advantages when used as an execution engine, but setting up Spark strictly to resolve this issue seems like overkill to me.

 

 

From: Mich Talebzadeh [mailto:mich@peridale.co.uk] 
Sent: Friday, January 15, 2016 4:04 PM
To: user@hive.apache.org
Subject: RE: Loading data containing newlines

 

OK, but I believe there are other similar approaches.

 

I can take a raw CSV file and customize it using existing shell commands like sed, awk, cut, grep etc., among other things getting rid of blank lines or replacing silly characters.

 

Bottom line, I want to “eventually” store that CSV file in a Hive table in a format that I can run SQL queries on.

 

Is that a viable alternative?

 

Thanks

 

 

 


 

From: Marcin Tustin [mailto:mtustin@handybook.com] 
Sent: 15 January 2016 21:51
To: user@hive.apache.org
Subject: Re: Loading data containing newlines

 

You can open a file as an RDD of lines, and map whatever custom tokenisation function you want over it; alternatively you can partition down to a reasonable size and use mapPartitions to map the standard Python csv parser over the partitions.
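
As a rough illustration of the first suggestion, a minimal spark-shell sketch in Scala (the HDFS path and the tokenise() helper are hypothetical placeholders; note that textFile still splits records on newline characters, so quoted fields containing embedded newlines would still need the partition-level parsing mentioned above):

// Read the file as an RDD of lines and map a custom tokeniser over it.
// sc is the SparkContext provided by spark-shell.
val lines = sc.textFile("hdfs:///data/stg/table/input.csv")

def tokenise(line: String): Array[String] =
  line.split(",", -1)               // substitute whatever custom parsing the data needs

val rows = lines.map(tokenise)
rows.take(5).foreach(r => println(r.mkString("|")))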

 

In general, the advantage of Spark is that you can do anything you like rather than being limited to a specific set of primitives.

 

On Fri, Jan 15, 2016 at 4:42 PM, Mich Talebzadeh <mich@peridale.co.uk> wrote:

Hi Marcin,

 

Can you be specific about the way in which Spark is better suited for this operation compared to Hive?

 


 

From: Marcin Tustin [mailto:mtustin@handybook.com] 
Sent: 15 January 2016 21:39
To: user@hive.apache.org
Subject: Re: Loading data containing newlines

 

I second this. I've generally found anything else to be disappointing when working with data
which is at all funky. 

 

On Wed, Jan 13, 2016 at 8:13 PM, Alexander Pivovarov <apivovarov@gmail.com> wrote:

Time to use Spark and Spark SQL in addition to Hive?

It's probably going to happen sooner or later anyway.

 

I sent you a Spark solution yesterday (you just need to write an unbzip2AndCsvToListOfArrays(file: String): List[Array[String]] function using BZip2CompressorInputStream and the Super CSV API).
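
For reference, a minimal sketch of what such a function could look like (an illustration only, not the code that was sent; it assumes the commons-compress and super-csv jars are on the classpath):

import java.io.{BufferedReader, FileInputStream, InputStreamReader}
import org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream
import org.supercsv.io.CsvListReader
import org.supercsv.prefs.CsvPreference
import scala.collection.JavaConverters._
import scala.collection.mutable.ListBuffer

// Decompress a local .bz2 file and parse it with Super CSV, which keeps
// quoted fields containing embedded newlines inside a single record.
def unbzip2AndCsvToListOfArrays(file: String): List[Array[String]] = {
  val reader = new CsvListReader(
    new BufferedReader(new InputStreamReader(
      new BZip2CompressorInputStream(new FileInputStream(file)), "UTF-8")),
    CsvPreference.STANDARD_PREFERENCE)
  try {
    val rows = ListBuffer[Array[String]]()
    var row = reader.read()                      // returns null at end of input
    while (row != null) {
      rows += row.asScala.map(Option(_).getOrElse("")).toArray   // empty cells may come back as null
      row = reader.read()
    }
    rows.toList
  } finally reader.close()
}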

You can download Spark, open spark-shell and run/debug the program on a single computer,

 

and then run it on a cluster if needed (e.g. Amazon EMR can spin up a Spark cluster in 7 min).

 

On Wed, Jan 13, 2016 at 4:13 PM, Gerber, Bryan W <Bryan.Gerber@pnnl.gov> wrote:

1. hdfs dfs -copyFromLocal /incoming/files/*.bz2 hdfs://host.name/data/stg/table/

2. CREATE EXTERNAL TABLE stg_<table> (cols…) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde' STORED AS TEXTFILE LOCATION '/data/stg/table/';

3. CREATE TABLE <table> (cols…) STORED AS ORC TBLPROPERTIES ("orc.compress"="ZLIB");

4. INSERT INTO TABLE <table> SELECT cols, udf1(cola), udf2(colb), functions(), etc. FROM stg_<table>;

5. Delete files from hdfs://host.name/data/stg/table/


 

This has been working quite well, until our newest data started containing fields with embedded newlines.

 

We are now looking into options further up the pipeline to see if we can condition the data
earlier in the process.

 

From: Mich Talebzadeh [mailto:mich@peridale.co.uk] 
Sent: Wednesday, January 13, 2016 10:34 AM


To: user@hive.apache.org
Subject: RE: Loading data containing newlines

 

Thanks Bryan.

 

Just to clarify, do you use something like the below?

 

1. hdfs dfs -copyFromLocal /var/tmp/t.bcp hdfs://rhes564.hedat.net:9000/misc/t.bcp

2. CREATE EXTERNAL TABLE <table_name> (col1 INT, col2 STRING, …) COMMENT 'load from bcp file' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS ORC

 

Cheers,

 

 


 

From: Gerber, Bryan W [mailto:Bryan.Gerber@pnnl.gov] 
Sent: 13 January 2016 18:12
To: user@hive.apache.org
Subject: RE: Loading data containing newlines

 

We are pushing the compressed text files into an HDFS directory for a Hive EXTERNAL table, then using an INSERT from that table into a table using ORC storage. We are letting Hive handle the ORC file creation process.

 

From: Mich Talebzadeh [mailto:mich@peridale.co.uk] 
Sent: Tuesday, January 12, 2016 4:41 PM
To: user@hive.apache.org
Subject: RE: Loading data containing newlines

 

Hi Bryan,

 

As a matter of interest, are you loading text files into local directories in encrypted format at all, and then pushing them into HDFS/Hive as ORC?

 

Thanks

 

 


 

From: Gerber, Bryan W [mailto:Bryan.Gerber@pnnl.gov] 
Sent: 12 January 2016 17:41
To: user@hive.apache.org
Subject: Loading data containing newlines

 

We are attempting to load CSV text files (compressed to bz2) containing newlines in fields, using EXTERNAL tables and INSERT/SELECT into ORC format tables. Data volume is ~1 TB/day, so we are really trying to avoid unpacking the files to condition the data.

 

A few days of research has us ready to implement custom input/output formats to handle the ingest. Any other suggestions that might be less effort, with low impact on load times?

 

Thanks,

Bryan G.

 

 

 


 



 

 


 




