drill-issues mailing list archives

From "subbu srinivasan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (DRILL-4352) Query fails on single corrupted parquet column
Date Wed, 04 May 2016 00:28:12 GMT

    [ https://issues.apache.org/jira/browse/DRILL-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15269916#comment-15269916 ]

subbu srinivasan commented on DRILL-4352:
-----------------------------------------

Folks,
I went through the code for JSON parsing. The main call for JSON deserialization happens in JSONReader, which is called from JSONRecordParser. The issue is that handleAndRaise is called for every caught exception, so a single bad record aborts the whole read.

Would the proposal below be acceptable to the community?

The proposal is to catch the IOException per record and not bail out:

 try {
   outside:
   while (recordCount < BaseValueVector.INITIAL_VALUE_ALLOCATION) {
     try {
       writer.setPosition(recordCount);
       write = jsonReader.write(writer);

       if (write == ReadState.WRITE_SUCCEED) {
         // logger.debug("Wrote record.");
         recordCount++;
       } else {
         // logger.debug("Exiting.");
         break outside;
       }
     } catch (IOException ex) {
       // Log and skip the bad record instead of calling handleAndRaise().
       logger.error("Ignoring record. Error parsing JSON: ", ex);
       ++parseErrorCount;
     }
   }
   // ... remainder of the existing method unchanged ...
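
For illustration only, here is a small self-contained sketch of the same skip-and-count pattern outside the Drill code base (the RecordParser interface, Record class, and MAX_PARSE_ERRORS threshold are made up for this example): parse record by record, log and skip failures, and only give up once too many records have been rejected.

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;

  public class SkipBadRecordsExample {

    // Hypothetical cap on how many bad records to tolerate before failing.
    static final int MAX_PARSE_ERRORS = 10;

    interface RecordParser {
      Record parse(String line) throws IOException;
    }

    static class Record {
      final String value;
      Record(String value) { this.value = value; }
    }

    static List<Record> parseAll(List<String> lines, RecordParser parser) throws IOException {
      List<Record> records = new ArrayList<>();
      int parseErrorCount = 0;
      for (String line : lines) {
        try {
          records.add(parser.parse(line));
        } catch (IOException ex) {
          // Count and skip the bad record rather than failing the whole scan.
          System.err.println("Ignoring malformed record: " + ex.getMessage());
          parseErrorCount++;
          if (parseErrorCount > MAX_PARSE_ERRORS) {
            throw new IOException("Too many malformed records (" + parseErrorCount + ")", ex);
          }
        }
      }
      return records;
    }

    public static void main(String[] args) throws IOException {
      RecordParser parser = line -> {
        if (!line.startsWith("{")) {
          throw new IOException("Malformed record: " + line);
        }
        return new Record(line);
      };
      List<Record> ok = parseAll(List.of("{\"a\":1}", "garbage", "{\"b\":2}"), parser);
      System.out.println("Parsed " + ok.size() + " records");  // prints: Parsed 2 records
    }
  }

In the Drill case, parseErrorCount could be reported in a similar way, e.g. logged once per batch, though where exactly to surface it is open for discussion.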



> Query fails on single corrupted parquet column
> ----------------------------------------------
>
>                 Key: DRILL-4352
>                 URL: https://issues.apache.org/jira/browse/DRILL-4352
>             Project: Apache Drill
>          Issue Type: Improvement
>          Components: Execution - Monitoring, Storage - Parquet
>    Affects Versions: 1.4.0
>            Reporter: F Méthot
>
> Getting this error when querying a corrupted Parquet file.
> Error: SYSTEM ERROR: IOException: FAILED_TO_UNCOMPRESSED(5)
> Fragment 1:9
> A single corrupt file among 1000s will cause a query to break.
> Encountering a corrupt file should be logged and not spoil a query.
> It would have been useful if the log had clearly specified which parquet file was causing the issue.
> Response from Ted Dunning:
> This is a lot like the problem of encountering bad lines in a line-oriented file such as CSV or JSON.
> Drill doesn't currently have a good mechanism for skipping bad input. Or rather, it has reasonably good mechanisms, but it doesn't use them well.
> I think that this is a very reasonable extension of the problem of dealing with individual bad records and should be handled somehow by the parquet scanner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
