hadoop-mapreduce-issues mailing list archives

From "Tom White (JIRA)" <j...@apache.org>
Subject [jira] Updated: (MAPREDUCE-685) Sqoop will fail with OutOfMemory on large tables using mysql
Date Thu, 09 Jul 2009 12:19:14 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tom White updated MAPREDUCE-685:
--------------------------------

       Resolution: Fixed
    Fix Version/s: 0.21.0
     Hadoop Flags: [Reviewed]
           Status: Resolved  (was: Patch Available)

I've just committed this. Thanks Aaron!

> Sqoop will fail with OutOfMemory on large tables using mysql
> ------------------------------------------------------------
>
>                 Key: MAPREDUCE-685
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-685
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/sqoop
>            Reporter: Aaron Kimball
>            Assignee: Aaron Kimball
>             Fix For: 0.21.0
>
>         Attachments: MAPREDUCE-685.3.patch, MAPREDUCE-685.patch, MAPREDUCE-685.patch.2
>
>
> The default MySQL JDBC client behavior is to buffer the entire ResultSet in the client
> before allowing the user to use the ResultSet object. On large SELECTs, this can cause
> OutOfMemory exceptions, even when the client intends to close the ResultSet after reading
> only a few rows. The MySQL ConnManager should configure its connection to use
> row-at-a-time delivery of results to the client.
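For reference, below is a minimal sketch of the row-at-a-time configuration that
MySQL Connector/J supports: creating a forward-only, read-only Statement and setting
the fetch size to Integer.MIN_VALUE tells the driver to stream rows one at a time
instead of buffering the entire ResultSet. The connection URL, credentials, class
name, and table name here are illustrative placeholders, not Sqoop's actual code.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class StreamingResultsSketch {
      public static void main(String[] args) throws SQLException {
        // Hypothetical connection parameters, for illustration only.
        Connection conn = DriverManager.getConnection(
            "jdbc:mysql://localhost/mydb", "user", "password");
        try {
          // Connector/J streams results row-by-row only when the statement
          // is TYPE_FORWARD_ONLY / CONCUR_READ_ONLY and the fetch size is
          // exactly Integer.MIN_VALUE; any other combination buffers the
          // whole ResultSet in client memory.
          Statement stmt = conn.createStatement(
              ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
          stmt.setFetchSize(Integer.MIN_VALUE);

          ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
          try {
            while (rs.next()) {
              // Process one row at a time; the full table is never
              // held in client memory.
            }
          } finally {
            rs.close();
            stmt.close();
          }
        } finally {
          conn.close();
        }
      }
    }

One caveat of streaming mode: Connector/J requires the streaming ResultSet to be
fully read or closed before the same connection can execute another query.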

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

