camel-issues mailing list archives

From "Andrea Cosentino (JIRA)" <>
Subject [jira] [Commented] (CAMEL-11697) S3 Consumer: If maxMessagesPerPoll is greater than 50 consumer fails to poll objects from bucket
Date Wed, 23 Aug 2017 12:54:00 GMT


Andrea Cosentino commented on CAMEL-11697:

We'll need to add a new option to the S3 endpoint.
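
For illustration, the new option might be exposed on the endpoint URI roughly like this (the option name maxConnections is only an assumption at this point, not a final API):

{code}
// Hypothetical new option (name assumed): size the underlying HTTP pool
// to cover maxMessagesPerPoll plus some headroom for extra API calls.
public class S3PoolRoute extends org.apache.camel.builder.RouteBuilder {
    @Override
    public void configure() {
        from("aws-s3://myBucket?maxMessagesPerPoll=100&maxConnections=120")
            .to("file://target/s3out");
    }
}
{code}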

> S3 Consumer: If maxMessagesPerPoll is greater than 50 consumer fails to poll objects from bucket
> ------------------------------------------------------------------------------------------------
>                 Key: CAMEL-11697
>                 URL:
>             Project: Camel
>          Issue Type: Bug
>          Components: camel-aws
>    Affects Versions: 2.14.3, 2.19.2
>            Reporter: MykhailoVlakh
>            Assignee: Andrea Cosentino
> It is possible to configure the S3 consumer to process several S3 objects in a single poll using the maxMessagesPerPoll property.
> If this property is set to a small number, less than 50, everything works fine, but if a user tries to consume more files the S3 consumer simply fails every time. It cannot poll the files because there are not enough HTTP connections to open streams for all the requested files at once. The exception looks like this:
> {code}
> com.amazonaws.AmazonClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
> 	at com.amazonaws.http.AmazonHttpClient.executeHelper(...)
> 	at com.amazonaws.http.AmazonHttpClient.execute(...)
> 	...
> 	at ...$S3DirectImpl.getObject(...)
> 	...
> 	at org.apache.camel.impl.ScheduledPollConsumer.doRun(...)
> 	...
> 	at java.util.concurrent.Executors$RunnableAdapter.call(...)
> 	at java.util.concurrent.FutureTask.runAndReset(...)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(...)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(...)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(...)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(...)
> 	at java.lang.Thread.run(...)
> {code}
> The issue happens because by default AmazonS3Client uses an HTTP client with a limited connection pool of 50 connections.
> Since the S3 consumer makes it possible to consume any number of S3 objects in a single poll, and since it is quite common that someone needs to process 50 or more files in a single poll, I think the S3 consumer should handle this case properly. It should automatically grow the HTTP connection pool so that it can handle the requested number of objects. This can be done like this:
> {code}
> ClientConfiguration s3Config = new ClientConfiguration();
> // +20: allocate a bit extra so that additional API calls are always possible
> // while maxMessagesPerPoll S3 object streams are already held open
> s3Config.setMaxConnections(maxMessagesPerPoll + 20);
> AmazonS3Client client = new AmazonS3Client(awsCreds, s3Config);
> {code}
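> For completeness, the tuned client could then be wired into a route through the registry; the amazonS3Client option takes a bean reference (the bean name "s3Client" below is just illustrative):
> {code}
> import org.apache.camel.CamelContext;
> import org.apache.camel.builder.RouteBuilder;
> import org.apache.camel.impl.DefaultCamelContext;
> import org.apache.camel.impl.SimpleRegistry;
> 
> // Register the tuned client and reference it from the aws-s3 endpoint.
> SimpleRegistry registry = new SimpleRegistry();
> registry.put("s3Client", client);
> CamelContext context = new DefaultCamelContext(registry);
> context.addRoutes(new RouteBuilder() {
>     @Override
>     public void configure() {
>         from("aws-s3://myBucket?amazonS3Client=#s3Client&maxMessagesPerPoll=100")
>             .to("file://target/s3out");
>     }
> });
> context.start();
> {code}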

This message was sent by Atlassian JIRA
