hawq-user mailing list archives

From Hong <xunzh...@apache.org>
Subject Re: [INTRODUCTIONS] Hi I'm Greg and I'm part of the Apache HAWQ community
Date Tue, 20 Dec 2016 15:06:02 GMT
Dear all,

My name is HongWu (xunzhang is my ID on the Internet). I began contributing
to HAWQ this year and have now become a HAWQ committer. I have written nearly
100 commits <https://github.com/apache/incubator-hawq/commits?author=xunzhang>
to make HAWQ better! I am a believer in open source, since I have benefited a
lot from open source projects and understand its true meaning. I think open
source is more than just publishing code: it is delivering comprehensive
documentation and tutorials, gathering feedback from users and developers,
fostering discussion and comparison, resolving confusion and issues,
reproducing good and bad results, maintaining a clear roadmap, keeping up
long-term development, and treating the project as our own child. Moreover,
the Apache Software Foundation is an elite community in the open source
world, with lots of high-quality projects, developers, and users.

I started my career in 2012 at Douban, a social networking website that
connects people through shared interests such as books, movies, music,
photos, and blogs. I worked there as an algorithm engineer, focused on
developing large-scale machine learning platforms and optimizing the
recommendation engine. I am the author of the Paracel <http://paracel.io/>
open source project, a distributed computational framework designed for
machine learning, graph algorithms, and scientific computing. Paracel aims to
hide the distribution and communication details from model developers and
data scientists, letting them concentrate on model development. During this
period I also created Plato, a real-time recommendation system. It achieved
outstanding results on the douban.fm product and changed Douban's
machine-learning architecture from a purely offline processing approach.

It has been a very exciting journey with HAWQ! From my point of view, a SQL
engine is a super critical piece of infrastructure in the big data ecosystem.
I have tried to design a parallel programming language named Girl (for some
reason it is still at a very early stage), but no matter how simple it
becomes, it requires developers to follow certain paradigms or programming
idioms and patterns. SQL, on the other hand, is the only existing language
that erases the distributed coding logic: we just write SQL.

I definitely believe HAWQ can become one of the best SQL engines in the
Hadoop ecosystem with all of our collaborative effort, including HAWQ users,
HAWQ developers, HDB customers, Pivotal FTEs, HAWQ competitors, Apache
mentors, and so on. I look forward to seeing more people with diverse
backgrounds joining the HAWQ family.

Beers

2016-12-20 18:03 GMT+08:00 Wen Lin <wlin@pivotal.io>:

> Hi, All,
>
> My name is LinWen. I joined the Pivotal HAWQ Beijing team in November
> 2014. Before that, I was an engineer at VMware.
> I have implemented some features for HAWQ, such as the new fault
> tolerance service, libyarn, etc.
> I believe HAWQ can be the best SQL-on-Hadoop engine with our joint
> effort.
>
> On Thu, Dec 15, 2016 at 11:03 PM, Lili Ma <lilima@apache.org> wrote:
>
>> Hello everyone,
>>
>> Glad to know everybody here:)
>>
>> I'm Lili Ma, from the Pivotal HAWQ R&D team in Beijing. I have been
>> focusing on HAWQ development and product management since 2012, when I
>> joined Pivotal. I have experienced and contributed to HAWQ's entire growth
>> path, from birth through Alpha, 1.X, 2.0...
>>
>> My main areas of focus in HAWQ fall into three parts: 1) storage, such as
>> internal table storage, HAWQ Input/OutputFormat, hawq extract/register,
>> etc.; 2) dispatcher and interconnect; 3) security, including Ranger
>> integration, Kerberos, and LDAP.
>>
>> Before Pivotal, I worked at IBM for more than two years, focused on
>> providing data services inside our public cloud offering. The data
>> services included RDS (relational data service), which could provision a
>> distributed relational database based on DB2 Federation, and a NoSQL
>> service based on HBase.
>>
>> I believe HAWQ can become even more successful with our joint effort!
>> Feel free to reach me or this mailing list for any HAWQ or other kinds of
>> issues :)
>>
>> Thanks
>> Lili
>>
>>
>> 2016-12-15 4:45 GMT+08:00 Dan Baskette <dbbaskette@gmail.com>:
>>
>>> I will add to the email flow…
>>>
>>> I am Dan Baskette, I am the Director of Tech Marketing for Pivotal and
>>> cover Pivotal HDB/Apache HAWQ, Pivotal Greenplum Database, and Apache
>>> MADlib.   I started my career at Sun Microsystems, and have been working
>>> for EMC/Greenplum and now Pivotal since 2000….a LONG time in quite a number
>>> of roles.   I was part of the team that launched Greenplum’s first Hadoop
>>> distribution and was around for the birth of HAWQ or as we called it when
>>> it was in its infancy…. GOH, or Greenplum on Hadoop.   I have been actively
>>> running some webcasts on various HAWQ how-to topics for Hortonworks, so you
>>> can check those out on their site.
>>>
>>> Hoping this community really takes off in a big way!
>>>
>>> Dan
>>>
>>>
>>> On December 14, 2016 at 10:09:34 AM, Ruilong Huo (rhuo@pivotal.io)
>>> wrote:
>>>
>>> Hi All,
>>>
>>> It's great that Gregory started this thread so people can get to know
>>> each other much better, at least in the Apache HAWQ community!
>>>
>>> I am Ruilong Huo and I am from the HDB/HAWQ engineering team at Pivotal.
>>> I came from Teradata and joined Pivotal after that. It's my honor to have
>>> been part of the HAWQ project at its early stage. I am a fan of RDBMSs
>>> (especially MPP databases), big data, and cloud technology that is
>>> changing enterprise IT infrastructure and helping drive information
>>> transformation to a very large extent.
>>>
>>> I hope that with joint effort from the HAWQ community, it will become an
>>> even greater product in the big data area, especially in the
>>> SQL-on-Hadoop category.
>>>
>>> Best regards,
>>> Ruilong Huo
>>>
>>> On Wed, Dec 14, 2016 at 2:27 PM, Bob Glithero <rglithero@pivotal.io>
>>> wrote:
>>>
>>>> Hello all,
>>>>
>>>> I'm Bob, and I'm doing product marketing for HDB/HAWQ at Pivotal.  I'm
>>>> new-ish here, coming not so much from a coding background as from
>>>> networking. I'm from Cisco Systems, where I focused on analytics use
>>>> cases in telecommunications, particularly for mobile network operators,
>>>> covering service assurance, customer care, and customer profiling.
>>>> (Also, as you're introducing yourselves, we'd love to hear what use
>>>> cases you're involved with, too.)
>>>>
>>>> About a year before I left, my group at Cisco acquired an MPP database
>>>> of its own -- ParStream -- for its IoT and fog computing use cases, so it's
>>>> interesting to come here and learn about the architecture and applications
>>>> of HAWQ.
>>>>
>>>> I hope to help make your experience with HAWQ a good one.  If I can
>>>> help in any way, please reach out to me directly or on the list.
>>>>
>>>> Cheers,
>>>> Bob
>>>>
>>>>
>>>>
>>>> Bob Glithero | Product Marketing
>>>> Pivotal, Inc.
>>>> rglithero@pivotal.io | m: 415.341.5592
>>>>
>>>>
>>>> On Sun, Dec 11, 2016 at 6:56 PM, Roman Shaposhnik <roman@shaposhnik.org
>>>> > wrote:
>>>>
>>>>> Greg, thanks for kicking off the roll call. Getting to know each other
>>>>> is super
>>>>> useful (and can be fun! ;-)). I'll go next:
>>>>>
>>>>> I am Roman (your friendly neighborhood mentor). I hang around a lot of
>>>>> ASF
>>>>> big data projects (as a committer and a PMC member), but lately I've
>>>>> been
>>>>> gravitating towards IoT as well (Apache Mynewt). I started my career
>>>>> at Sun
>>>>> Microsystems back at a time when Linux wasn't even 1.x and I've been
>>>>> doing
>>>>> enterprise software ever since. I was lucky enough to get to work on
>>>>> the original
>>>>> Hadoop team at Yahoo! and fall in love with not one but two elephants
>>>>> (Hadoop
>>>>> and Postgres). Recently I've assumed the position of VP of Technology
>>>>> at ODPi
>>>>> and I'm still hoping to MHGA! My secret weapon is Apache Bigtop (which
>>>>> I co-founded)
>>>>> and I'm not afraid to use it!
>>>>>
>>>>> I'm here to help as much as I can to make sure that this community
>>>>> evolves into
>>>>> a vibrant, self-governed, exciting place worthy of being a top level
>>>>> project (TLP)
>>>>> at ASF. If you have any questions or ideas that you may want to bounce
>>>>> off of
>>>>> me -- please don't hesitate to reach out directly or on the mailing
>>>>> list.
>>>>>
>>>>> Thanks,
>>>>> Roman.
>>>>>
>>>>> On Fri, Dec 9, 2016 at 11:53 AM, Gregory Chase <gchase@pivotal.io>
>>>>> wrote:
>>>>> >
>>>>> > Dear HAWQs,
>>>>> >
>>>>> > I thought it would be fun to get to know some of the other people
>>>>> > in the community.
>>>>> >
>>>>> > My name is Greg Chase and I run community development at Pivotal
>>>>> > for the big data open source communities that Pivotal contributes to.
>>>>> >
>>>>> > Some of you may have seen my frequent emails about virtual events
>>>>> > I help organize for user and contributor education.
>>>>> >
>>>>> > Not so long ago, I was in charge of product marketing for an
>>>>> > in-memory data warehouse named after a Hawaiian town, from a
>>>>> > three-letter-acronymed German company. We treated Hadoop as an
>>>>> > external table, and returning results from those queries was both
>>>>> > slow and brittle due to the network transfer rates.
>>>>> >
>>>>> > So I have a special appreciation of the innovation that has gone
>>>>> > into creating Hadoop-native HAWQ out of PostgreSQL and Greenplum.
>>>>> >
>>>>> > These days I'm much more of a marketer than a coder, but I still
>>>>> > love hearing about the kinds of projects that HAWQ users are
>>>>> > involved in.
>>>>> >
>>>>> > I know we'd all love to hear more about everyone else's projects,
>>>>> > and how you became a HAWQ user.  So please introduce yourselves!
>>>>> >
>>>>> > --
>>>>> > Greg Chase
>>>>> >
>>>>> > Global Head, Big Data Communities
>>>>> > http://www.pivotal.io/big-data
>>>>> >
>>>>> > Pivotal Software
>>>>> > http://www.pivotal.io/
>>>>> >
>>>>> > 650-215-0477
>>>>> > @GregChase
>>>>> > Blog: http://geekmarketing.biz/
>>>>> >
>>>>>
>>>>
>>>>
>>>
>>
>
