lucene-solr-dev mailing list archives

From "Shalin Shekhar Mangar (JIRA)" <>
Subject [jira] Commented: (SOLR-829) replication Compression
Date Tue, 18 Nov 2008 07:13:44 GMT


Shalin Shekhar Mangar commented on SOLR-829:

# Yes, the constructor call is moved to a different line, that's all.
# We disable the checksum because GZIP does checksumming internally, so we do not need to do it
again. However, deflate does not use checksums, so when we use the InflaterInputStream we
should do the checksumming ourselves. This is not in the patch right now.
# That code is copied verbatim from CommonsHttpSolrServer. In this case, if we receive
a compressed stream from the master, it should be decompressed and written to the filesystem
as-is. We do not need to worry about the type of the response; this patch covers only this
particular use-case.
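The checksum point in (2) can be sketched with plain java.util.zip. The class and method names below are illustrative only, not from the patch: GZIPInputStream verifies the gzip CRC32 trailer on its own, while raw deflate data has no checksum, so the sketch wraps the InflaterInputStream in a CheckedInputStream and compares CRC32 values itself.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

public class CompressionChecksums {

    // GZIP: the format carries a CRC32 trailer which GZIPInputStream
    // verifies automatically while reading, so no extra checksum is needed.
    public static byte[] gzipRoundTrip(byte[] data) throws IOException {
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(compressed)) {
            out.write(data);
        }
        return readAll(new GZIPInputStream(
                new ByteArrayInputStream(compressed.toByteArray())));
    }

    // Raw deflate (nowrap=true) carries no checksum at all, so we wrap
    // the InflaterInputStream in a CheckedInputStream and verify a CRC32
    // over the decompressed bytes ourselves.
    public static byte[] deflateRoundTripChecked(byte[] data) throws IOException {
        Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (DeflaterOutputStream out =
                     new DeflaterOutputStream(compressed, deflater)) {
            out.write(data);
        }
        CheckedInputStream in = new CheckedInputStream(
                new InflaterInputStream(
                        new ByteArrayInputStream(compressed.toByteArray()),
                        new Inflater(true)),
                new CRC32());
        byte[] decompressed = readAll(in);

        CRC32 expected = new CRC32();
        expected.update(data, 0, data.length);
        if (in.getChecksum().getValue() != expected.getValue()) {
            throw new IOException("checksum mismatch after inflate");
        }
        return decompressed;
    }

    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        in.close();
        return buf.toByteArray();
    }
}
```

Note that the default (zlib-wrapped) Deflater would add an Adler-32 checksum of its own; the sketch uses nowrap=true to model the checksum-free raw deflate case the comment describes.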

I don't think this patch is in sync with Noble's latest proposal. A new one will be needed.

> replication Compression
> -----------------------
>                 Key: SOLR-829
>                 URL:
>             Project: Solr
>          Issue Type: Improvement
>          Components: replication (java)
>            Reporter: Simon Collins
>            Assignee: Shalin Shekhar Mangar
>         Attachments: email discussion.txt, solr-829.patch, solr-829.patch
> From a discussion on the mailing list solr-user, it would be useful to have an option
to compress the files sent between servers for replication purposes.
> Files sent between indexes can be compressed by a large margin, allowing for easier
replication between sites.
> ...Noted by Noble Paul 
> we will use gzip on both ends of the pipe. On the slave side you can specify <str name="zip">true</str>
as an extra option to compress and send data from the server
> Other thoughts on issue: 
> Do keep in mind that compression is a CPU-intensive process, so it is a trade-off between
CPU utilization and network bandwidth.  I have seen cases where compressing the data before
a network transfer ended up being slower than without compression, because the cost of compression
and decompression was more than the gain in network transfer.
> Why invent something when compression is standard in HTTP? --wunder
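The standard-HTTP approach raised above amounts to negotiating via the Accept-Encoding request header and decoding by the Content-Encoding response header. A minimal client-side sketch, with hypothetical class and method names rather than Solr API:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.InflaterInputStream;

public class HttpBodyDecoder {

    // Chooses a decompressing wrapper from the Content-Encoding response
    // header, as in standard HTTP content negotiation: the client sends
    // "Accept-Encoding: gzip, deflate" and the server labels the body.
    public static InputStream decode(InputStream body, String contentEncoding)
            throws IOException {
        if (contentEncoding == null) {
            return body; // no header: body is not encoded
        }
        String enc = contentEncoding.trim().toLowerCase();
        if (enc.equals("gzip")) {
            return new GZIPInputStream(body);
        }
        if (enc.equals("deflate")) {
            // Most servers send zlib-wrapped data for "deflate"; the
            // default Inflater used here expects exactly that format.
            return new InflaterInputStream(body);
        }
        return body; // identity or unknown encoding: pass through
    }
}
```

With this shape the replication handler would not need its own compression flag on the wire; the transfer stays an ordinary HTTP response that any intermediary can understand.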

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
