hadoop-mapreduce-issues mailing list archives

From "Tsuyoshi OZAWA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-6237) DBRecordReader is not thread safe
Date Wed, 04 Feb 2015 10:36:35 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14304911#comment-14304911 ]

Tsuyoshi OZAWA commented on MAPREDUCE-6237:
-------------------------------------------

[~rkannan82] the latest change looks like an incompatible change - the new getConnection() method can
return null.

The current semantics of getConnection() are to create a new connection to the DB, so it cannot return
null. I think we should preserve not only the method name "getConnection()" but also its semantics
for backward compatibility, though I know the name getConnection() is confusing. Would you
update the patch?
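
The backward-compatibility point above can be sketched as follows. This is an illustrative example only, not the actual patch: GetConnectionSemantics and FakeConnection are hypothetical stand-ins (FakeConnection replaces java.sql.Connection so the sketch is self-contained). The idea is that getConnection() keeps its original contract - it creates a connection when needed and never returns null:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class GetConnectionSemantics {
    // Stand-in for java.sql.Connection, so the sketch runs without a database.
    static class FakeConnection {
        private boolean closed = false;
        void close() { closed = true; }
        boolean isClosed() { return closed; }
    }

    static final AtomicInteger created = new AtomicInteger();

    private FakeConnection connection;

    // Backward-compatible semantics: lazily creates a connection and never
    // returns null, even after the previous connection was closed.
    synchronized FakeConnection getConnection() {
        if (connection == null || connection.isClosed()) {
            connection = new FakeConnection();
            created.incrementAndGet();
        }
        return connection;
    }

    public static void main(String[] args) {
        GetConnectionSemantics reader = new GetConnectionSemantics();
        FakeConnection c = reader.getConnection();
        if (c == null) throw new AssertionError("getConnection() must not return null");
        c.close();
        // After a close, the next call hands back a fresh open connection
        // rather than a closed one or null.
        if (reader.getConnection().isClosed()) throw new AssertionError("expected an open connection");
        System.out.println("created=" + created.get()); // two connections were created
    }
}
```

A caller written against the old contract can keep dereferencing the result without a null check, which is exactly what backward compatibility requires here.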

> DBRecordReader is not thread safe
> ---------------------------------
>
>                 Key: MAPREDUCE-6237
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6237
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>    Affects Versions: 2.5.0
>            Reporter: Kannan Rajah
>            Assignee: Kannan Rajah
>         Attachments: mapreduce-6237.patch, mapreduce-6237.patch
>
>
> DBInputFormat.createDBRecordReader is reusing JDBC connections across instances of DBRecordReader.
> This is not a good idea; we should be creating separate connections. If performance is a concern,
> we should use connection pooling instead.
> I looked at DBOutputFormat.getRecordReader. It actually creates a new Connection object
> for each DBRecordReader. So can we just change DBInputFormat to create a new Connection every
> time? The connection-reuse code was added as part of the connection-leak fix in MAPREDUCE-1443.
> Is there any reason for caching the connection?
> We observed this issue in a customer setup where they were reading data from MySQL using
> Pig. According to the customer, the query returns two records, which causes Pig to create two
> instances of DBRecordReader. These two instances share the database connection instance. The first
> DBRecordReader runs and extracts the first record from MySQL just fine, but then closes the
> shared connection instance. When the second DBRecordReader runs, it tries to execute a query
> on the closed shared connection to retrieve the second record, which fails. If we set
> mapred.map.tasks to 1, the query succeeds.
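
The failure mode described in the report can be reproduced in miniature. This is a self-contained sketch with hypothetical classes (SharedConnectionBug, FakeConnection, Reader stand in for the real JDBC Connection and DBRecordReader), not the Hadoop code: the first reader closes the shared connection, so the second reader's query fails, while per-reader connections both succeed:

```java
public class SharedConnectionBug {
    // Minimal stand-in for a JDBC connection: querying after close() fails.
    static class FakeConnection {
        private boolean closed = false;
        void close() { closed = true; }
        String executeQuery(String sql) {
            if (closed) throw new IllegalStateException("connection is closed");
            return "row";
        }
    }

    // Like DBRecordReader, this reader closes its connection when it is done.
    static class Reader {
        private final FakeConnection conn;
        Reader(FakeConnection conn) { this.conn = conn; }
        String readAndClose() {
            String row = conn.executeQuery("SELECT 1");
            conn.close();
            return row;
        }
    }

    public static void main(String[] args) {
        // Shared connection: the second reader fails, as in the MySQL/Pig report.
        FakeConnection shared = new FakeConnection();
        new Reader(shared).readAndClose();
        boolean secondFailed = false;
        try {
            new Reader(shared).readAndClose();
        } catch (IllegalStateException e) {
            secondFailed = true;
        }

        // Separate connection per reader: both reads succeed.
        String r1 = new Reader(new FakeConnection()).readAndClose();
        String r2 = new Reader(new FakeConnection()).readAndClose();

        System.out.println("sharedSecondFailed=" + secondFailed
            + " separateOk=" + (r1.equals("row") && r2.equals("row")));
    }
}
```

With one map task (mapred.map.tasks=1) only a single reader touches the connection, which is why the reported workaround succeeds even with sharing in place.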



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
