avro-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AVRO-1335) C++ should support field default values
Date Thu, 24 Aug 2017 20:39:00 GMT

    [ https://issues.apache.org/jira/browse/AVRO-1335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16140653#comment-16140653 ]

ASF GitHub Bot commented on AVRO-1335:
--------------------------------------

GitHub user vimota opened a pull request:

    https://github.com/apache/avro/pull/241

    AVRO-1335: Adds C++ support for default values in schema serialization to json.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/vimota/avro changes

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/avro/pull/241.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #241
    
----
commit 6c2539769e004c01a0dca5978288ea3072f63e8f
Author: Victor Mota <vimota@gmail.com>
Date:   2017-08-20T01:56:03Z

    AVRO-1335: Adds C++ support for default values in schema serialization to json.

----
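For context: the Avro specification allows a record field to declare a "default" value in the schema JSON, and this change concerns the C++ library emitting that attribute when a schema is serialized back to JSON. A minimal illustration of such a schema (the record and field names here are illustrative, not taken from the pull request):

{code:title=Record field carrying a default value}
{
    "type": "record",
    "name": "Example",
    "fields": [
        { "name": "count", "type": "int", "default": 0 }
    ]
}
{code}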


> C++ should support field default values
> ---------------------------------------
>
>                 Key: AVRO-1335
>                 URL: https://issues.apache.org/jira/browse/AVRO-1335
>             Project: Avro
>          Issue Type: Improvement
>          Components: c++
>    Affects Versions: 1.7.4
>            Reporter: Bin Guo
>         Attachments: AVRO-1335.patch
>
>
> We found that resolvingDecoder could not provide bidirectional compatibility between different versions of a schema.
> This is especially a problem for records. For example:
> {code:title=First schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> {code:title=Second schema}
> {
>     "type": "record",
>     "name": "TestRecord",
>     "fields": [
>         {
>             "name": "MyData",
>             "type": {
>                 "type": "record",
>                 "name": "SubData",
>                 "fields": [
>                     {
>                         "name": "Version1",
>                         "type": "string"
>                     },
>                     {
>                         "name": "Version2",
>                         "type": "string"
>                     }
>                 ]
>             }
>         },
>         {
>             "name": "OtherData",
>             "type": "string"
>         }
>     ]
> }
> {code}
> Say node A knows only the first schema and node B knows the second schema, which has more fields.
> Any data generated by node B can be resolved by the first schema, because the additional field is marked as skipped.
> But data generated by node A cannot be resolved by the second schema; it throws the exception *"Don't know how to handle excess fields for reader."*
> This happens because the data is resolved exactly according to the auto-generated codec_traits, which try to read the excess field.
> The problem is that we cannot simply ignore the excess field in the record, since the data after the troublesome record also needs to be resolved.
> This problem has blocked us for a very long time.
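Per the Avro specification, the failure described above is exactly what field defaults are meant to solve: when the reader's schema has a field the writer's schema lacks, and that field declares a default, the resolver fills in the default instead of failing. A sketch of the second schema's SubData record with a default added to the new field (the empty-string default is an assumption for illustration):

{code:title=SubData with a default for Version2 (sketch)}
{
    "type": "record",
    "name": "SubData",
    "fields": [
        { "name": "Version1", "type": "string" },
        { "name": "Version2", "type": "string", "default": "" }
    ]
}
{code}

With that default in place, data written by node A (first schema) can be resolved against the second schema: the missing Version2 is populated from the default rather than raising "Don't know how to handle excess fields for reader."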



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
