hbase-user mailing list archives

From Esteban Gutierrez <este...@cloudera.com>
Subject Re: Metadata conventions/tools
Date Tue, 08 Jul 2014 22:04:00 GMT
Hello John,

I don't think there is a one-size-fits-all answer in HBase for data
serialization. In my experience, users choose a serializer that fits their
needs (Avro, Kiji, Gora, etc.) or they simply use Bytes.toBytes. So what you
need to do is make sure that your application lets the user specify an
arbitrary serializer/deserializer.
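
For example, here is a quick sketch of the kind of indirection I mean. The
ValueSerde interface and StringSerde class below are made up just for
illustration; only Bytes comes from the HBase API
(org.apache.hadoop.hbase.util.Bytes):

    import org.apache.hadoop.hbase.util.Bytes;

    // Pluggable value serializer: HBase itself only ever sees byte[].
    interface ValueSerde<T> {
        byte[] serialize(T value);
        T deserialize(byte[] bytes);
    }

    // A trivial default backed by Bytes.toBytes/Bytes.toString. An Avro- or
    // Kiji-backed implementation would plug into the application the same way.
    class StringSerde implements ValueSerde<String> {
        public byte[] serialize(String value) {
            return Bytes.toBytes(value);
        }
        public String deserialize(byte[] bytes) {
            return Bytes.toString(bytes);
        }
    }

Your read/write path then depends only on the interface, so the user can swap
in whatever serde matches their schema.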

cheers,
esteban.

--
Cloudera, Inc.



On Tue, Jul 8, 2014 at 11:16 AM, John Lilley <john.lilley@redpoint.net>
wrote:

> Greetings!  We are an ISV of ETL/DI/DQ software and desire to support
> connections to HBase for "classic tabular" data like one would store in an
> RDBMS.  To that end, I am trying to better understand how people typically
> use HBase to store this type of data.  Hive appears to wrap HBase and
> provide a meta-data layer, and users on @hadoop have commented that Phoenix
> and Lingual are also used, as well as various home-grown solutions.  But
> we're really interested in the most common uses and conventions.  Can you
> comment on what "most people" actually use in production? We are not
> really bleeding-edge (other than running in Hadoop, which I suppose makes
> us bleeding edge), in the sense that our customers tend to come from a
> server/RDBMS world and are very comfortable with that paradigm.
> Thanks,
> John
>
>
