From: "Tom White" <tom.e.white@gmail.com>
To: core-dev@hadoop.apache.org
Date: Wed, 3 Sep 2008 14:15:28 +0100
Subject: Re: Serialization with additional schema info

Jay,

The Serialization and MapReduce APIs are very class-based, so fixed types
with dynamic serialization capabilities don't fit the current design well.

I like 2 better than 1, but both make the Serialization API dependent on
MapReduce, which it currently isn't. And arguably it shouldn't be, since
you could use it simply to serialize data outside a MapReduce context.
Perhaps SerializerType is just a String, which also makes things more
flexible (at the expense of type safety)? (A sketch of this follows below.)

What would the API changes look like for 3?
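(As a minimal sketch of the String-keyed idea; the interface and parameter
names here are hypothetical, not the current
org.apache.hadoop.io.serializer API:)

    import org.apache.hadoop.io.serializer.Deserializer;
    import org.apache.hadoop.io.serializer.Serializer;

    // Hypothetical sketch only, not the existing API. A String context key
    // keeps Serialization free of MapReduce-specific types: callers would
    // pass "map.key", "reduce.value", etc., and an implementation could use
    // the key to look up extra info (such as a schema) in the configuration.
    public interface ContextualSerialization<T> {
      boolean accept(Class<?> c);

      Serializer<T> getSerializer(Class<T> c, String context);

      Deserializer<T> getDeserializer(Class<T> c, String context);
    }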
Also, I believe the Jaql team has been looking at how to write JSON
serializers, so perhaps there is an opportunity for collaboration here?

Tom

On Mon, Sep 1, 2008 at 9:52 PM, Jay Kreps wrote:
> Hi All,
>
> I am interested in hooking up a custom serialization layer I use to the
> new pluggable Hadoop serialization framework. It appears that the
> framework assumes there is a one-to-one mapping between Java classes and
> serializations. This is exactly what we want to get away from: having a
> common data format allows us to easily write generic data aggregation
> jobs that work with any type. This is how a database supports many
> generic operations such as joins, group-bys, etc. Because the data
> format is always a set of tuples, it can be manipulated generically,
> without understanding any of the details of interpretation, rather than
> being user-defined complex types the database can't operate on. To do
> this I need to store data in a standard way with supported types, keep
> a short string schema description along with each file, and pass that
> description to a generic serializer/deserializer to tell it how to read
> the bytes in the file. The problem I have is that there is no way to
> get the additional schema information into the serializer to tell it
> how to serialize and deserialize.
>
> Some details in case the general problem is too vague:
>
> A very nice generic data format that maps well to programming languages
> is JSON. For example, a user could be stored like this: {"name":"Jay",
> "date-o-birth":"05-25-1980", "age":28, "is_active": true, etc.}. But
> since we store the same fields with each "row", this is highly
> inefficient. It makes more sense to store just the necessary bytes for
> the values, and to store the expected fields and types separately. This
> lets us store numbers compactly as well.
>
> JSON supports numbers, strings, lists, and maps, which all have natural
> mappings in Java. The above user example would translate to a Java Map
> containing the given keys and values.
>
> Here is where the trouble starts. I can't do this in the existing
> SerializationFactory because the type for the object is just Map.class,
> but that doesn't contain enough info to properly deserialize the class.
> In reality I need a string describing the type, such as
>   {"name":"string", "date-o-birth":"date", "age":"int32",
>    "is_active":"boolean", ...}
> Note that this string contains all the information needed to add in the
> property names and to correctly interpret the bytes as Integer or
> Boolean, or whatever.
>
> The obvious solution is to just add this schema into the JobConf as a
> property such as "map.key.schema.info", and use it to construct the
> right serializer in the Serialization implementation. The problem with
> this is that there is no way for the Serialization implementation to
> know whether it is constructing the map key, map value, reduce key, or
> reduce value.
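(To make that ambiguity concrete, a hypothetical sketch; the per-position
schema properties below follow Jay's proposed "map.key.schema.info" naming
and are not existing Hadoop configuration keys:)

    import org.apache.hadoop.mapred.JobConf;

    // Hypothetical illustration of the ambiguity described above.
    public class SchemaConfSketch {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Each serializer position could carry its own schema, but if map
        // output keys and values are both Maps, an implementation of
        // Serialization.getSerializer(Map.class) has no way to tell which
        // property applies to the serializer it is being asked to build.
        conf.set("map.key.schema.info", "{\"name\":\"string\"}");
        conf.set("map.value.schema.info",
                 "{\"age\":\"int32\", \"is_active\":\"boolean\"}");
      }
    }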
> Some possible solutions:
>
> For now I am just sticking with wrapping up map and reduce to do the
> serialization/deserialization to solve my problem. However, this seems
> like a common case where the serialization needs information not
> present in the class itself, and I would like to add support to do it
> right. Would you guys accept a patch that did one of the following?
>
> 1. Make SerializationFactory have a getMapKeySerializer,
> getMapValueSerializer, etc. method, and allow the user to specify their
> own SerializationFactory by setting a property with the appropriate
> class name. This is probably the most flexible option and doesn't break
> any user serialization implementations. The getMapKeySerializer method
> can then check map.key.schema.info in addition to
> mapred.mapinput.key.class.
> 2. Change Serialization.getSerializer(Class c) to
> Serialization.getSerializer(Class c, SerializerType k), where
> SerializerType = enum {MapKey, MapValue, ReduceKey, ReduceValue}. This
> allows the serialization implementer to invent their own properties
> (map.key.schema or whatever) and fetch the appropriate thing.
> 3. Add mapred.mapinput.serializer.info,
> mapred.reduceinput.serializer.info, etc., and pass the value into the
> constructor of the serializer if it has a constructor with a single
> String argument. (A sketch of this appears below.)
>
> Or maybe there is a better way to accomplish this?
>
> Thanks!
>
> -Jay
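(A minimal sketch of the single-String-constructor check that option 3
describes; the class and method names here are hypothetical:)

    import java.lang.reflect.Constructor;

    // Hypothetical sketch of option 3: if the serializer class declares a
    // constructor taking a single String, pass it the configured info
    // string (for example the value of "mapred.mapinput.serializer.info");
    // otherwise fall back to the no-arg constructor.
    public class SerializerConstructionSketch {
      public static Object newSerializer(Class<?> serializerClass, String info)
          throws Exception {
        try {
          Constructor<?> withInfo = serializerClass.getConstructor(String.class);
          return withInfo.newInstance(info); // schema-aware construction
        } catch (NoSuchMethodException e) {
          return serializerClass.getDeclaredConstructor().newInstance();
        }
      }
    }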