camel-commits mailing list archives

Subject [1/2] camel git commit: Fix docs for maxPollRecords
Date Mon, 21 Nov 2016 08:47:49 GMT
Repository: camel
Updated Branches:
  refs/heads/master 11954f277 -> aaf0e47a5

Fix docs for maxPollRecords

This syncs up the default and description with max.poll.records; see comment in

Branch: refs/heads/master
Commit: 115c93a2ac2c1b8672c428a9f3091c8b7777bae8
Parents: 11954f2
Author: Von Landon <>
Authored: Fri Nov 18 10:41:03 2016 -0700
Committer: Andrea Cosentino <>
Committed: Mon Nov 21 09:38:22 2016 +0100

 components/camel-kafka/src/main/docs/kafka-component.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/components/camel-kafka/src/main/docs/kafka-component.adoc b/components/camel-kafka/src/main/docs/kafka-component.adoc
index 17229f2..3f48ca4 100644
--- a/components/camel-kafka/src/main/docs/kafka-component.adoc
+++ b/components/camel-kafka/src/main/docs/kafka-component.adoc
@@ -122,7 +122,7 @@ The Kafka component supports 77 endpoint options which are listed below:
 | heartbeatIntervalMs | consumer | 3000 | Integer | The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
 | keyDeserializer | consumer | org.apache.kafka.common.serialization.StringDeserializer |
String | Deserializer class for key that implements the Deserializer interface.
 | maxPartitionFetchBytes | consumer | 1048576 | Integer | The maximum amount of data per-partition the server will return. The maximum total memory used for a request will be #partitions * max.partition.fetch.bytes. This size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens the consumer can get stuck trying to fetch a large message on a certain partition.
-| maxPollRecords | consumer | 2147483647 | Integer | A unique string that identifies the
consumer group this consumer belongs to. This property is required if the consumer uses either
the group management functionality by using subscribe(topic) or the Kafka-based offset management
+| maxPollRecords | consumer | 500 | Integer | The maximum number of records returned in a single call to poll().
 | partitionAssignor | consumer | org.apache.kafka.clients.consumer.RangeAssignor | String | The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used
 | pollTimeoutMs | consumer | 5000 | Long | The timeout used when polling the KafkaConsumer.
 | seekToBeginning | consumer | false | boolean | If the option is true then KafkaConsumer will read from beginning on startup.
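
For context, the corrected `maxPollRecords` option maps to Kafka's `max.poll.records` consumer property, so with the fixed default a Camel Kafka consumer caps each poll() call at 500 records unless overridden. A minimal sketch of an endpoint URI overriding it (topic, broker, and group names here are hypothetical, not from the commit):

```
kafka:myTopic?brokers=localhost:9092&groupId=myGroup&maxPollRecords=100
```

Lowering the value trades per-poll throughput for tighter control over how much work the consumer takes on between polls.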
