Here's how we reduced @apachekafka disk and network usage 4.5x on a cluster with 100Gbps of just ingress bandwidth: https://blog.cloudflare.com/squeezing-the-firehose/
Replying to @ibobrik @apachekafka
mind elaborating on the incompatibility that prevented you from using LZ4? This was fixed with KIP-57 as far as I know.
Replying to @xvrl @apachekafka
This is for downconversion on older clients. In testing, the new broker produced responses we could not read there.
Replying to @ibobrik @apachekafka
why would you worry about older clients if you are introducing a new compression format they wouldn’t support anyway?
Replying to @xvrl @apachekafka
We worried during initial evaluation, before we introduced the new compression format or even touched any production data.
Replying to @ibobrik
ok, that makes more sense, thank you for clarifying. Is there a jira already for the downconversion issue?
Replying to @xvrl
This is not a problem with the Java client. It's an issue with sarama and lz4 in Go: https://github.com/Shopify/sarama/blob/44e7121d3b5189096ae4ef90c442f5f806c10fc9/config.go#L416-L418 If you want jiras, I have some I'm interested in: https://issues.apache.org/jira/browse/KAFKA-6465?jql=project%20%3D%20KAFKA%20AND%20status%20!%3D%20Resolved%20AND%20watcher%20%3D%20bobrik or https://issues.apache.org/jira/browse/KAFKA-6465?jql=project%20%3D%20KAFKA%20AND%20status%20!%3D%20Resolved%20AND%20reporter%20%3D%20bobrik :)
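For context, the linked sarama lines are a config validation that rejects the lz4 codec unless the client is told the brokers speak at least Kafka 0.10. The gist of that check can be sketched as below; the types and names here are illustrative stand-ins, not sarama's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// KafkaVersion is a simplified stand-in for sarama's broker-version type.
type KafkaVersion struct{ major, minor int }

// V0_10_0_0 marks the release that introduced the message format
// able to carry lz4-compressed batches correctly on the wire.
var V0_10_0_0 = KafkaVersion{0, 10}

// IsAtLeast reports whether v is the same as or newer than other.
func (v KafkaVersion) IsAtLeast(other KafkaVersion) bool {
	if v.major != other.major {
		return v.major > other.major
	}
	return v.minor >= other.minor
}

// validateCompression mirrors the spirit of the linked sarama check:
// lz4 is only accepted when the configured broker version is >= 0.10.0.0.
func validateCompression(codec string, version KafkaVersion) error {
	if codec == "lz4" && !version.IsAtLeast(V0_10_0_0) {
		return errors.New("lz4 compression requires Version >= V0_10_0_0")
	}
	return nil
}

func main() {
	// An 0.9 cluster with lz4 is rejected; 0.10 is accepted.
	fmt.Println(validateCompression("lz4", KafkaVersion{0, 9}))
	fmt.Println(validateCompression("lz4", KafkaVersion{0, 10}))
}
```

This is why the evaluation above hit trouble: a broker newer than the client's configured version can hand back messages in a format the client refuses to, or cannot, decode.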
Replying to @ibobrik
I see, the uninitiated reader might be misled into thinking there are still LZ4 issues in Kafka.
Fair enough, I changed it to "LZ4 had incompatibility issues between Kafka versions and our Go client" to remove ambiguity.