kafka-commits mailing list archives

From jkr...@apache.org
Subject svn commit: r1502335 - in /kafka/site: 08/ops.html includes/header.html styles.css
Date Thu, 11 Jul 2013 19:32:17 GMT
Author: jkreps
Date: Thu Jul 11 19:32:16 2013
New Revision: 1502335

URL: http://svn.apache.org/r1502335
Log:
Add an operations page for 0.8.


Added:
    kafka/site/08/ops.html
Modified:
    kafka/site/includes/header.html
    kafka/site/styles.css

Added: kafka/site/08/ops.html
URL: http://svn.apache.org/viewvc/kafka/site/08/ops.html?rev=1502335&view=auto
==============================================================================
--- kafka/site/08/ops.html (added)
+++ kafka/site/08/ops.html Thu Jul 11 19:32:16 2013
@@ -0,0 +1,246 @@
+<!--#include virtual="../includes/header.html" -->
+
+<ul class="toc">
+	<li><a href="#operations">Operations</a>
+	<li><a href="#datacenters">Datacenters</a>
+	<li><a href="#config">Config</a>
+		<ul>
+			<li><a href="#serverconfig">Important Server Configs</a>
+			<li><a href="#clientconfig">Important Client Configs</a>
+			<li><a href="#prodconfig">A Production Server Config</a>
+        </ul>
+     <li><a href="#java">Java Version</a>
+	 <li><a href="#hwandos">Hardware and OS</a>
+		<ul>
+			<li><a href="#os">OS</a>
+			<li><a href="#diskandfs">Disks and Filesystems</a>
+			<li><a href="#appvsosflush">Application vs OS Flush Management</a>
+			<li><a href="#linuxflush">Linux Flush Behavior</a>
+			<li><a href="#ext4">Ext4 Notes</a>
+		</ul>
+	<li><a href="#monitoring">Monitoring</a>
+	<li><a href="#zookeeper">Zookeeper</a>
+		<ul>
+			<li><a href="#zkversion">Stable Version</a>
+			<li><a href="#zkops">Operationalization</a>
+		</ul>
+</ul>
+
+<h1><a id="operations">Operations</a></h1>
+Here is some information on actually running Kafka as a production system based on usage
and experience at LinkedIn. Please send us any additional tips you know of.
+
+<h1><a id="datacenters">Datacenters</a></h1>
+Some deployments will need to manage a data pipeline that spans multiple datacenters. Our
approach to this is to deploy a local Kafka cluster in each datacenter and machines in each
location interact only with their local cluster.
+<p>
+For applications that need a global view of all data we use the <a href="/08/tools.html">mirror
maker tool</a> to provide clusters which have aggregate data mirrored from all datacenters.
These aggregator clusters are used for reads by applications that require this.
+<p>
+Likewise, in order to support data load into Hadoop, which resides in separate facilities, we provide local read-only clusters that mirror the production data centers in the facilities where this data load occurs.
+<p>
+This allows each facility to stand alone and operate even if the inter-datacenter links are
unavailable: when this occurs the mirroring falls behind until the link is restored at which
time it catches up.
+<p>
+This deployment pattern allows datacenters to act as independent entities and allows us to
manage and tune inter-datacenter replication centrally.
+<p>
+This is not the only possible deployment pattern. It is possible to read from or write to
a remote Kafka cluster over the WAN, though TCP tuning will be necessary for high-latency links.
+<p>
+It is generally not advisable to run a single Kafka cluster that spans multiple datacenters, as this will incur very high replication latency both for Kafka writes and Zookeeper writes, and neither Kafka nor Zookeeper will remain available if the network partitions.
+
+
+<h1><a id="config">Kafka Configuration</a></h1>
+Kafka 0.8 is the version we currently run. We run with replication enabled but with producer acks = 1. 
+<P>
+<h3><a id="serverconfig">Important Server Configurations</a></h3>
+
+The most important server configurations for performance are those that control the disk flush rate. The more often data is flushed to disk, the more "seek-bound" Kafka will be and the lower the throughput. However, flushing very infrequently can lead to high latency spikes when the flush does occur. See the section below on application versus OS flush.
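+<p>
+For reference, the flush-related properties, shown here with the values from the production config later on this page (the comments are our gloss of their meaning):
+<pre>
+# Force a flush after this many messages have accumulated on a partition...
+log.flush.interval.messages=20000
+# ...or after this much time has passed, whichever comes first
+log.flush.interval.ms=10000
+# How often the background flusher checks whether any log needs flushing
+log.flush.scheduler.interval.ms=2000
+</pre>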
+
+<h3><a id="clientconfig">Important Client Configurations</a></h3>
+The most important producer configurations control
+<ul>
+	<li>compression</li>
+	<li>sync vs async production</li>
+	<li>batch size (for async producers)</li>
+</ul>
+The most important consumer configuration is the fetch size.
+<p>
+All configurations are documented in the <a href="configuration.html">configuration</a>
page.
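+<p>
+As a rough illustration only (0.8 property names as listed on the configuration page; the values are arbitrary examples, not recommendations):
+<pre>
+# Producer
+producer.type=async
+compression.codec=snappy
+batch.num.messages=200
+
+# Consumer
+fetch.message.max.bytes=1048576
+</pre>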
+<p>
+<h3><a id="prodconfig">A Production Server Config</a></h3>
+Here is our production server configuration:
+<pre>
+# Replication configurations
+num.replica.fetchers=4
+replica.fetch.max.bytes=1048576
+replica.fetch.wait.max.ms=500
+replica.high.watermark.checkpoint.interval.ms=5000
+replica.socket.timeout.ms=30000
+replica.socket.receive.buffer.bytes=65536
+replica.lag.time.max.ms=10000
+replica.lag.max.messages=4000
+
+controller.socket.timeout.ms=30000
+controller.message.queue.size=10
+
+# Log configuration
+num.partitions=8
+message.max.bytes=1000000
+auto.create.topics.enable=true
+log.index.interval.bytes=4096
+log.index.size.max.bytes=10485760
+log.retention.hours=168
+log.flush.interval.ms=10000
+log.flush.interval.messages=20000
+log.flush.scheduler.interval.ms=2000
+log.roll.hours=168
+log.cleanup.interval.mins=30
+log.segment.bytes=1073741824
+
+# ZK configuration
+zk.connection.timeout.ms=6000
+zk.sync.time.ms=2000
+
+# Socket server configuration
+num.io.threads=8
+num.network.threads=8
+socket.request.max.bytes=104857600
+socket.receive.buffer.bytes=1048576
+socket.send.buffer.bytes=1048576
+queued.max.requests=16
+fetch.purgatory.purge.interval.requests=100
+producer.purgatory.purge.interval.requests=100
+</pre>
+
+Our client configuration varies a fair amount between different use cases.
+
+<h1><a id="java">Java</a></h1>
+Any version of Java 1.6 or later should work fine; we are using 1.6.0_21.
+
+Here are our command line options:
+<pre>
+java -server -Xms3072m -Xmx3072m -XX:NewSize=256m -XX:MaxNewSize=256m 
+     -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70
+     -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution 
+     -Xloggc:logs/gc.log -Djava.awt.headless=true
+     -Dcom.sun.management.jmxremote -classpath &lt;long list of jars&gt; the.actual.Class
+	</pre>
+	
+<h1><a id="hwandos">Hardware and OS</a></h1>
+We are using dual quad-core Intel Xeon machines with 24GB of memory.
+<p>
+You need sufficient memory to buffer active readers and writers. You can do a back-of-the-envelope
estimate of memory needs by assuming you want to be able to buffer for 30 seconds and compute
your memory need as write_throughput*30.
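+<p>
+For example, with a hypothetical sustained write throughput of 50 MB/sec:
+<pre>
+memory_needed ~= write_throughput * 30 sec = 50 MB/sec * 30 sec = 1500 MB (about 1.5 GB)
+</pre>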
+<p>
+The disk throughput is important. We have 8x7200 rpm SATA drives. In general disk throughput
is the performance bottleneck, and more disks are better. Depending on how you configure
flush behavior you may or may not benefit from more expensive disks (if you force flush often
then higher RPM SAS drives may be better).
+
+<h2><a id="os">OS</a></h2>
+Kafka should run well on any unix system and has been tested on Linux and Solaris.
+<p>
+We have seen a few issues running on Windows; it is not currently a well supported platform, though we would be happy to change that.
+<p>
+You likely don't need to do much OS-level tuning, though there are a few things that will
help performance. 
+<p>
+Two configurations that may be important:
+<ul>
+	<li>We upped the number of file descriptors since we have lots of topics and lots
of connections.
+	<li>We upped the max socket buffer size to enable high-performance data transfer between data centers, as <a href="http://www.psc.edu/index.php/networking/641-tcp-tune">described here</a>. Example settings are shown below.
+</ul>
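+<p>
+As a rough illustration only (the specific limits and buffer sizes are example values, not recommendations from this page), on Linux these might be set as:
+<pre>
+# Raise the open file limit for the user running the broker (example value)
+ulimit -n 100000
+# Raise the maximum socket buffer sizes for high-latency inter-datacenter links (example values)
+sysctl -w net.core.rmem_max=2097152
+sysctl -w net.core.wmem_max=2097152
+</pre>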
+
+<h2><a id="diskandfs">Disks and Filesystem</a></h2>
+We recommend using multiple drives to get good throughput and, to ensure good latency, not sharing the drives used for Kafka data with application logs or other OS filesystem activity. As of 0.8 you can either RAID these drives together into a single volume or format and mount each drive as its own directory. Since Kafka has replication, the redundancy provided by RAID can also be provided at the application level. This choice has several tradeoffs.
+<p>
+If you configure multiple data directories partitions will be assigned round-robin to data
directories. Each partition will be entirely in one of the data directories. If data is not
well balanced among partitions this can lead to load imbalance between disks.
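+<p>
+For example, a sketch of configuring multiple data directories in server.properties (the paths are placeholders):
+<pre>
+# One directory per drive; partitions are assigned round-robin across these directories
+log.dirs=/mnt/disk1/kafka-logs,/mnt/disk2/kafka-logs,/mnt/disk3/kafka-logs
+</pre>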
+<p>
+RAID can potentially do better at balancing load between disks (although it doesn't always
seem to) because it balances load at a lower level. The primary downside of RAID is that it
is usually a big performance hit for write throughput and reduces the available disk space.
+<p>
+Another potential benefit of RAID is the ability to tolerate disk failures. However our experience
has been that rebuilding the RAID array is so I/O intensive that it effectively disables the
server, so this does not provide much real availability improvement.
+
+<h2><a id="appvsosflush">Application vs. OS Flush Management</a></h2>
+Kafka always immediately writes all data to the filesystem and supports the ability to configure
the flush policy that controls when data is forced out of the OS cache and onto disk using
fsync. This flush policy can be configured to force data to disk after a period of
time or after a certain number of messages has been written. There are several choices in
this configuration.
+<p>
+Kafka must eventually call fsync to know that data was flushed. When recovering from a crash, for any log segment not known to be fsync'd, Kafka will check the integrity of each message
by checking its CRC and also rebuild the accompanying offset index file as part of the recovery
process executed on startup.
+<p>
+The frequency of application-level fsyncs has a large impact on both latency and throughput.
Setting a large flush interval will improve throughput as the operating system can buffer
the many small writes into a single large write. This works effectively even across many partitions
all taking simultaneous writes provided enough memory is available for buffering. However
doing this may have a significant impact on latency as in many filesystems (including ext2,
ext3, and ext4) fsync is an operation which blocks all writes to the file. Because of this, allowing lots of data to accumulate and then calling flush can lead to large write latencies, as new writes on that partition will be blocked while the accumulated data is flushed to disk.
+<p>
+In 0.8 we support replication as a way to ensure that data that is written is durable in
the face of server crashes. As a result we allow giving out data to consumers immediately
and the flush interval does not impact consumer latency. However we still MUST flush each
log segment when the log rolls over to a new segment. So although you can set a relatively lenient flush interval, setting no flush interval at all will lead to a full segment's worth of data being flushed all at once, which can be quite slow.
+<p>
+After 0.8 we improved our recovery procedure, which allows us to avoid the blocking fsync
when the log rolls. As a result in all releases after 0.8 we recommend using replication and
not setting any application level flush settings---relying only on the OS and Kafka's own
background flush. This provides the best of all worlds for most uses: no knobs to tune, great
throughput and latency, and full recovery guarantees. We generally feel that the guarantees
provided by replication are stronger than sync to local disk; however, the paranoid may still prefer having both, and application-level fsync policies are still supported.
+<p>
+In general you don't need to do any low-level tuning of the filesystem, but in the next few
sections we will go over some of this in case it is useful.
+
+<h3><a id="linuxflush">Understanding Linux OS Flush Behavior</a></h3>
+
+In Linux, data written to the filesystem is maintained in <a href="http://en.wikipedia.org/wiki/Page_cache">pagecache</a>
until it must be written out to disk (due to an application-level fsync or the OS's own flush
policy). The flushing of data is done by a set of background threads called pdflush (or in
post 2.6.32 kernels "flusher threads").
+<p>
+Pdflush has a configurable policy that controls how much dirty data can be maintained in
cache and for how long before it must be written back to disk. This policy is described <a
href="http://www.westnet.com/~gsmith/content/linux-pdflush.htm">here</a>. When Pdflush
cannot keep up with the rate of data being written it will eventually cause the writing process
to block incurring latency in the writes to slow down the accumulation of data.
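+<p>
+As an illustration (the knob names are standard Linux sysctls; the value shown is an example, not a recommendation from this page):
+<pre>
+# Inspect the current writeback policy
+sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_expire_centisecs
+# Example: start background writeback earlier so less dirty data accumulates
+sysctl -w vm.dirty_background_ratio=5
+</pre>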
+<p>
+You can see the current state of OS memory usage by doing
+<pre>
+  cat /proc/meminfo
+</pre>
+The meaning of these values is described in the link above.
+<p>
+Using pagecache has several advantages over an in-process cache for storing data that will
be written out to disk:
+<ul>
+  <li>The I/O scheduler will batch together consecutive small writes into bigger physical
writes which improves throughput.
+  <li>The I/O scheduler will attempt to re-sequence writes to minimize movement of
the disk head which improves throughput.
+  <li>It automatically uses all the free memory on the machine
+</ul>
+
+<h3><a id="ext4">Ext4 Notes</a></h3>
+Ext4 may or may not be the best filesystem for Kafka. Filesystems like XFS supposedly handle
locking during fsync better. We have only tried Ext4, though.
+<p>
+It is not necessary to tune these settings; however, those wanting to optimize performance have a few knobs that will help (an example mount configuration follows the list):
+<ul>
+  <li>data=writeback: Ext4 defaults to data=ordered which puts a strong order on some
writes. Kafka does not require this ordering as it does very paranoid data recovery on all
unflushed log segments. This setting removes the ordering constraint and seems to significantly reduce
latency.
+  <li>Disabling journaling: Journaling is a tradeoff: it makes reboots faster after
server crashes but it introduces a great deal of additional locking which adds variance to
write performance. Those who don't care about reboot time and want to reduce a major source
of write latency spikes can turn off journaling entirely.
+  <li>commit=num_secs: This tunes the frequency with which ext4 commits to its metadata
journal. Setting this to a lower value reduces the loss of unflushed data during a crash.
Setting this to a higher value will improve throughput.
+  <li>nobh: This setting controls additional ordering guarantees when using data=writeback
mode. This should be safe with Kafka, as we do not depend on write ordering, and it improves throughput
and latency.
+  <li>delalloc: Delayed allocation means that the filesystem avoids allocating any blocks
until the physical write occurs. This allows ext4 to allocate a large extent instead of smaller
pages and helps ensure the data is written sequentially. This feature is great for throughput.
It does seem to involve some locking in the filesystem which adds a bit of latency variance.
+</ul>
+	
+<h1><a id="monitoring">Monitoring</a></h1>
+
+Kafka uses Yammer Metrics for metrics reporting in both the server and the client. This can
be configured to report stats using pluggable stats reporters to hook up to your monitoring
system.
+<p>
+The easiest way to see the available metrics is to fire up jconsole and point it at a running Kafka client or server; this will allow browsing all metrics with JMX.
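+<p>
+For example (assuming the broker was started with remote JMX exposed on a port of your choosing, e.g. via the JMX_PORT environment variable used by the launch scripts):
+<pre>
+jconsole broker-host.example.com:9999
+</pre>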
+<p>
+We do graphing and alerting on the following metrics:
+<ul>
+	<li>The rate of data in and out of the cluster and the number of messages written
+	<li>The log flush rate and the time taken to flush the log
+	<li>The number of partitions that have replicas that are down or have fallen behind
and are underreplicated.
+	<li>Is the controller active? Answer had better be yes.
+	<li>Unclean leader elections. This shouldn't happen.
+	<li>Number of partitions each node is the leader for.
+	<li>Leader elections: we track each time this happens and how long it took
+	<li>Any changes to the ISR
+	<li>The number of produce requests waiting on replication to report back
+	<li>The number of fetch requests waiting on data to arrive
+	<li>Avg and 99th percentile time for each request for waiting in queue, local processing,
and waiting on other servers
+	<li>The raw rate of incoming fetch and produce requests
+	<li>GC time and other stats
+	<li>Various server stats such as CPU utilization, I/O service time, etc.
+</ul>
+
+<h3>Audit</h3>
+The final alerting we do is on the correctness of the data delivery. We audit that every
message that is sent is consumed by all consumers and measure the lag for this to occur. For
important topics we alert if a certain completeness is not achieved in a certain time period.
The details of this are discussed in KAFKA-260.
+
+<h1><a id="zk">Zookeeper</a></h1>
+
+<h3><a id="zkversion">Stable version</a></h3>
+At LinkedIn, we are running Zookeeper 3.3.*. Version 3.3.3 has known serious issues regarding
ephemeral node deletion and session expirations. After running into those issues in production,
we upgraded to 3.3.4 and have been running that smoothly for about half a year now.
+
+<h3><a id="zkops">Operationalizing Zookeeper</a></h3>
+Operationally, we do the following for a healthy Zookeeper installation:
+<p>
+Redundancy in the physical/hardware/network layout: try not to put them all in the same rack, use decent (but don't go nuts) hardware, and try to keep redundant power and network paths, etc.
+<p>
+I/O segregation: if you do a lot of write-type traffic you'll almost definitely want the transaction logs on a different disk group than application logs and snapshots (a write to the Zookeeper service involves a synchronous write to disk, which can be slow).
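+<p>
+For example, a minimal zoo.cfg sketch that puts the transaction log on its own drive (the paths are placeholders):
+<pre>
+# Snapshots go here
+dataDir=/mnt/disk1/zookeeper
+# Transaction log on its own drive
+dataLogDir=/mnt/disk2/zookeeper-txlog
+</pre>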
+<p>
+Application segregation: Unless you really understand the application patterns of other apps
that you want to install on the same box, it can be a good idea to run Zookeeper in isolation
(though this can be a balancing act with the capabilities of the hardware).
+<p>
+Use care with virtualization: It can work, depending on your cluster layout, read/write patterns, and SLAs, but the tiny overheads introduced by the virtualization layer can add up and throw off Zookeeper, as it can be very time sensitive.
+<p>
+Zookeeper configuration and monitoring: It's Java, so make sure you give it 'enough' heap space (we usually run them with 3-5G, but that's mostly due to the data set size we have here); unfortunately we don't have a good formula for it. As far as monitoring, both JMX and the four-letter commands are very useful; they do overlap in some cases (and in those cases we prefer the four-letter commands, as they seem more predictable, or at the very least, work better with the LI monitoring infrastructure).
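+<p>
+For example, the four-letter commands can be issued against the client port with nc (host and port shown are the common defaults; adjust for your deployment):
+<pre>
+# Is this server running in a non-error state?
+echo ruok | nc localhost 2181
+# Dump server statistics and connected clients
+echo stat | nc localhost 2181
+</pre>
+<p>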
+Don't overbuild the cluster: large clusters, especially with a write-heavy usage pattern, mean a lot of intra-cluster communication (quorums on the writes and subsequent cluster member updates), but don't underbuild it either (and risk swamping the cluster).
+<p>
+Try to run on a 3-5 node cluster: Zookeeper writes use quorums and inherently that means
having an odd number of machines in a cluster. Remember that a 5 node cluster will cause writes
to slow down compared to a 3 node cluster, but will allow more fault tolerance.
+<p>
+Overall, we try to keep the Zookeeper system as small as will handle the load (plus standard
growth capacity planning) and as simple as possible. We try not to do anything fancy with
the configuration or application layout as compared to the official release as well as keep
it as self contained as possible. For these reasons, we tend to skip the OS packaged versions,
since they have a tendency to put things in the OS standard hierarchy, which can be 'messy',
for want of a better way to word it.
+
+<!--#include virtual="../includes/footer.html" -->
\ No newline at end of file

Modified: kafka/site/includes/header.html
URL: http://svn.apache.org/viewvc/kafka/site/includes/header.html?rev=1502335&r1=1502334&r2=1502335&view=diff
==============================================================================
--- kafka/site/includes/header.html (original)
+++ kafka/site/includes/header.html Thu Jul 11 19:32:16 2013
@@ -41,13 +41,13 @@
 				<li><a href="/design.html">design</a></li>
 				<li><a href="/implementation.html">implementation</a></li>
 				<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">clients</a></li>
-				<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations">operation</a></li>
 				<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/FAQ">faq</a></li>
 				<li>0.8&nbsp;beta
 					<ul>
 					    <li><a href="/08/quickstart.html">quickstart</a></li>
 		                    <li><a href="/08/api.html">api&nbsp;docs</a></li>
 		                    <li><a href="/08/configuration.html">configuration</a></li>
+		                    <li><a href="/08/ops.html">operation</a></li>
 							<li><a href="/08/tools.html">tools</a></li>
 							<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Migrating+from+0.7+to+0.8">migration</a></li>
 					</ul>
@@ -57,6 +57,7 @@
 						<li><a href="/07/quickstart.html">quickstart</a></li>
 						<li><a href="http://people.apache.org/~joestein/kafka-0.7.1-incubating-docs">api&nbsp;docs</a></li>
 						<li><a href="/07/configuration.html">configuration</a></li>
+						<li><a href="https://cwiki.apache.org/confluence/display/KAFKA/Operations">operation</a></li>
 						<li><a href="/07/performance.html">performance</a></li>
 					</ul>
 				</li>

Modified: kafka/site/styles.css
URL: http://svn.apache.org/viewvc/kafka/site/styles.css?rev=1502335&r1=1502334&r2=1502335&view=diff
==============================================================================
--- kafka/site/styles.css (original)
+++ kafka/site/styles.css Thu Jul 11 19:32:16 2013
@@ -109,4 +109,10 @@ a {
 .caption {
 	font-size: 11pt; 
 	font-weight: bold
+}
+.toc {
+	font-size: 16pt;
+}
+.toc ul {
+	font-size: 14pt;
 }
\ No newline at end of file


