hawq-user mailing list archives

From Will Wagner <wowag...@gmail.com>
Subject Re: what is Hawq?
Date Thu, 12 Nov 2015 13:42:25 GMT
Hi Lei,

Great answer.

I have a follow up question.
Everything HAWQ is capable of doing is already covered by Apache Drill.
Why do we need another tool?

Thank you,
Will W
On Nov 12, 2015 12:25 AM, "Lei Chang" <chang.lei.cn@gmail.com> wrote:

>
> Hi Bob,
>
> Apache HAWQ is a Hadoop-native SQL query engine that combines the key
> technological advantages of an MPP database with the scalability and
> convenience of Hadoop. HAWQ reads data from and writes data to HDFS
> natively. HAWQ delivers industry-leading performance and linear
> scalability. It provides users the tools to confidently and successfully
> interact with petabyte range data sets. HAWQ provides users with a
> complete, standards compliant SQL interface. More specifically, HAWQ has
> the following features:
>
>    - On-premise or cloud deployment
>    - Robust ANSI SQL compliance: SQL-92, SQL-99, SQL-2003, OLAP extension
>    - Extremely high performance: many times faster than other Hadoop SQL
>    engines
>    - World-class parallel optimizer
>    - Full transaction capability and consistency guarantee: ACID
>    - Dynamic data-flow engine over a high-speed UDP-based interconnect
>    - Elastic execution engine based on virtual segment & data locality
>    - Support for multi-level partitioning and List/Range partitioned
>    tables
>    - Multiple compression methods supported: snappy, gzip, quicklz, RLE
>    - Multi-language user-defined function (UDF) support: Python, Perl,
>    Java, C/C++, R
>    - Advanced machine learning and data mining functionality through
>    MADlib
>    - Dynamic node expansion: in seconds
>    - Advanced three-level resource management: integrates with YARN and
>    supports hierarchical resource queues
>    - Easy access to all HDFS data and external system data (for example,
>    HBase)
>    - Hadoop Native: from storage (HDFS), resource management (YARN) to
>    deployment (Ambari).
>    - Authentication & granular authorization: Kerberos, SSL, and
>    role-based access
>    - Advanced C/C++ access library to HDFS and YARN: libhdfs3 & libYARN
>    - Support for most third-party tools: Tableau, SAS, et al.
>    - Standard connectivity: JDBC/ODBC
>
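> To make the partitioning bullet above concrete, here is a minimal sketch
> (not from the original email): a Python snippet that assembles a
> Greenplum-style CREATE TABLE with monthly range partitions and list
> subpartitions, the DDL dialect HAWQ inherits. The table and column names
> are made up for illustration.

```python
# Sketch: build a Greenplum-style multi-level partitioned-table DDL,
# the syntax HAWQ inherits. Table/column names are illustrative only.
def partitioned_sales_ddl(start="2015-01-01", end="2016-01-01"):
    """Range partitions by month, list subpartitions by region."""
    return f"""
CREATE TABLE sales (trans_id int, trans_date date, region text, amount decimal(9,2))
DISTRIBUTED BY (trans_id)
PARTITION BY RANGE (trans_date)
  SUBPARTITION BY LIST (region)
  SUBPARTITION TEMPLATE (
    SUBPARTITION usa VALUES ('usa'),
    SUBPARTITION europe VALUES ('europe'),
    DEFAULT SUBPARTITION other_regions)
(START (date '{start}') INCLUSIVE
 END (date '{end}') EXCLUSIVE
 EVERY (INTERVAL '1 month'),
 DEFAULT PARTITION outlying_dates);
""".strip()

print(partitioned_sales_ddl())
```

> The DEFAULT SUBPARTITION and DEFAULT PARTITION clauses catch rows that
> fall outside the declared list values or date range.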
>
> This link has more information about HAWQ:
> https://cwiki.apache.org/confluence/display/HAWQ/About+HAWQ
>
>
> Please also see the inline answers to your specific questions:
>
> On Thu, Nov 12, 2015 at 4:09 PM, Adaryl "Bob" Wakefield, MBA <
> adaryl.wakefield@hotmail.com> wrote:
>
>> Silly question right? Thing is I’ve read a bit and watched some YouTube
>> videos and I’m still not quite sure what I can and can’t do with Hawq. Is
>> it a true database or is it like Hive where I need to use HCatalog?
>>
>
> It is a true database: you can think of it as a parallel PostgreSQL, but
> with much more functionality, and it works natively in the Hadoop world.
> HCatalog is not necessary, but you can read data registered in HCatalog
> via the new "HCatalog integration" feature.
>
>
>> Can I write data intensive applications against it using ODBC? Does it
>> enforce referential integrity? Does it have stored procedures?
>>
>
> ODBC: yes, both JDBC and ODBC are supported.
> Referential integrity: currently not supported.
> Stored procedures: yes.
>
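> Since HAWQ is wire-compatible with PostgreSQL, an application can also use
> an ordinary PostgreSQL driver instead of ODBC. Below is a minimal sketch
> (not from the original email) assuming the third-party psycopg2 driver;
> the host and database names are placeholders, and 5432 is assumed as the
> master port.

```python
# Sketch: connect to HAWQ through a standard PostgreSQL driver.
# Host and database below are placeholders, not real endpoints.
def hawq_dsn(host="hawq-master.example.com", port=5432,
             dbname="gpadmin", user="gpadmin"):
    """Build a libpq-style DSN string for the HAWQ master."""
    return f"host={host} port={port} dbname={dbname} user={user}"

def fetch_version(dsn):
    # Requires a running HAWQ master; psycopg2 works because HAWQ
    # speaks the PostgreSQL wire protocol.
    import psycopg2  # third-party PostgreSQL driver
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT version();")
            return cur.fetchone()[0]

print(hawq_dsn())
```

> The same DSN shape works with any libpq-based client, which is also why
> tools like Tableau can connect through the standard connectors.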
>
>> B.
>>
>
>
> Please let us know if you have any other questions.
>
> Cheers
> Lei
>
>
>
