hudi-commits mailing list archives

From "Alexander Filipchik (Jira)" <j...@apache.org>
Subject [jira] [Updated] (HUDI-722) IndexOutOfBoundsException in MessageColumnIORecordConsumer.addBinary when writing parquet
Date Thu, 19 Mar 2020 02:29:00 GMT

     [ https://issues.apache.org/jira/browse/HUDI-722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alexander Filipchik updated HUDI-722:
-------------------------------------
    Description: 
Some writes fail with java.lang.IndexOutOfBoundsException: Invalid array range: X to X inside the MessageColumnIORecordConsumer.addBinary call.

Specifically: getColumnWriter().write(value, r[currentLevel], currentColumnIO.getDefinitionLevel());

fails because the size of r is the same as currentLevel. What could be causing it?

 

It gets executed via ParquetWriter.write(IndexedRecord). Library version: 1.10.1. The Avro record is a very complex object (~2.5k columns, highly nested, with arrays of unions present).

What is surprising is that it fails to write a top-level field: PrimitiveColumnIO _hoodie_commit_time r:0 d:1 [_hoodie_commit_time], which is the first top-level field in the Avro record: {"_hoodie_commit_time": "20200317215711", "_hoodie_commit_seqno": "20200317215711_0_650",

  was:
Some writes fail with java.lang.IndexOutOfBoundsException: Invalid array range: X to X inside the MessageColumnIORecordConsumer.addBinary call.

Specifically: getColumnWriter().write(value, r[currentLevel], currentColumnIO.getDefinitionLevel());

fails because the size of r is the same as currentLevel. What could be causing it?

 

It gets executed via ParquetWriter.write(IndexedRecord). Library version: 1.10.1. The Avro record is a very complex object (~2.5k columns, highly nested).

What is surprising is that it fails to write a top-level field: PrimitiveColumnIO _hoodie_commit_time r:0 d:1 [_hoodie_commit_time], which is the first top-level field in the Avro record: {"_hoodie_commit_time": "20200317215711", "_hoodie_commit_seqno": "20200317215711_0_650",


> IndexOutOfBoundsException in MessageColumnIORecordConsumer.addBinary when writing parquet
> -----------------------------------------------------------------------------------------
>
>                 Key: HUDI-722
>                 URL: https://issues.apache.org/jira/browse/HUDI-722
>             Project: Apache Hudi (incubating)
>          Issue Type: Bug
>          Components: Writer Core
>            Reporter: Alexander Filipchik
>            Priority: Major
>             Fix For: 0.6.0
>
>
> Some writes fail with java.lang.IndexOutOfBoundsException: Invalid array range: X to X inside the MessageColumnIORecordConsumer.addBinary call.
> Specifically: getColumnWriter().write(value, r[currentLevel], currentColumnIO.getDefinitionLevel());
> fails because the size of r is the same as currentLevel. What could be causing it?
>  
> It gets executed via ParquetWriter.write(IndexedRecord). Library version: 1.10.1. The Avro record is a very complex object (~2.5k columns, highly nested, arrays of unions present).
> What is surprising is that it fails to write a top-level field: PrimitiveColumnIO _hoodie_commit_time r:0 d:1 [_hoodie_commit_time], which is the first top-level field in the Avro record: {"_hoodie_commit_time": "20200317215711", "_hoodie_commit_seqno": "20200317215711_0_650",



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
