CHANGELOG.adoc
Antony Stubbs committed on 2021-05-11 18:59 +08:00 . minor: Changelog

Change Log

v0.3.0.2

Fixes and Improvements

  • ci: Add CODEOWNER

  • fix: #101 Validate GroupId is configured on managed consumer

  • Use 8B1DA6120C2BF624 GPG Key For Signing

  • ci: Bump jdk8 version path

  • fix: #97 Vert.x thread and connection pools setup incorrect

  • Disable Travis and Codecov

  • ci: Apache Kafka and JDK build matrix

  • fix: Set Serdes for MockProducer for AK 2.7 partition fix KAFKA-10503 to fix new NPE

  • Only log slow message warnings periodically, once per sweep

  • Upgrade Kafka container version to 6.0.2

  • Clean up stalled message warning logs

  • Reduce log-level if no results are returned from user-function (warn → debug)

  • Enable java 8 Github

  • Fixes #87 - Upgrade UniJ version for UnsupportedClassVersion error

  • Bump TestContainers to stable release to specifically fix #3574

  • Clarify offset management capabilities

v0.3.0.1

  • fixes #62: Off by one error when restoring offsets when no offsets are encoded in metadata

  • fix: Actually skip work that is found as stale

v0.3.0.0

Features

  • Queueing and pressure system is now self-tuning; performance over the old default tuning values (softMaxNumberMessagesBeyondBaseCommitOffset and maxMessagesToQueue) has doubled.

    • These options have been removed from the system.

  • Offset payload encoding back pressure system

    • If the payload begins to take more than a certain threshold amount of the maximum available, no more messages will be brought in for processing until the space needed drops back below the threshold. This aims to prevent the situation where the payload is too large to fit at all and must be dropped entirely.

    • See Proper offset encoding back pressure system so that offset payloads can’t ever be too large #47

    • Messages that have failed to process will always be allowed to retry, in order to reduce this pressure.
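The admission rule described above can be sketched as follows. This is a hypothetical illustration of the threshold logic only; the class and method names are not parallel-consumer's actual API.

```java
// Sketch of the payload back-pressure rule: stop admitting new records once the
// encoded offset payload nears the broker's metadata size limit, but always
// admit retries so the payload can shrink. All names are hypothetical.
public class PayloadBackPressure {
    private final int maxPayloadBytes; // broker-imposed offset-metadata limit
    private final double threshold;    // fraction of the maximum, e.g. 0.75

    public PayloadBackPressure(int maxPayloadBytes, double threshold) {
        this.maxPayloadBytes = maxPayloadBytes;
        this.threshold = threshold;
    }

    /** New records are admitted only while the payload stays under the threshold. */
    public boolean admitNewWork(int currentEncodedPayloadBytes) {
        return currentEncodedPayloadBytes < maxPayloadBytes * threshold;
    }

    /** Retries of failed records are always admitted, to help reduce the pressure. */
    public boolean admitRetry() {
        return true;
    }

    public static void main(String[] args) {
        PayloadBackPressure bp = new PayloadBackPressure(4096, 0.75);
        System.out.println(bp.admitNewWork(1000)); // well under the 3072-byte threshold
        System.out.println(bp.admitNewWork(3500)); // over the threshold: hold new work
    }
}
```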

Improvements

  • Default ordering mode is now KEY ordering (was UNORDERED).

    • This is a better default, as it is the safest yet still high-performing mode. It maintains the partition ordering characteristic that all keys are processed in log order, yet for most use cases it will be close to as fast as UNORDERED when the key space is large enough.
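The KEY ordering idea above can be sketched as grouping records into one ordered queue ("shard") per key: distinct keys can be processed concurrently, while records sharing a key stay in log order. A minimal illustration, not the library's internal data structure:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch of KEY ordering: records sharing a key form one ordered shard,
// and shards for distinct keys may be processed in parallel.
public class KeyOrderingSketch {
    public static <K, V> Map<K, List<V>> shardByKey(List<V> records, Function<V, K> keyOf) {
        Map<K, List<V>> shards = new LinkedHashMap<>();
        for (V record : records) {
            shards.computeIfAbsent(keyOf.apply(record), k -> new ArrayList<>()).add(record);
        }
        return shards; // each shard preserves log order for its key
    }

    public static void main(String[] args) {
        // "key:sequence" records in log order
        List<String> log = List.of("a:1", "b:1", "a:2", "c:1", "b:2");
        Map<String, List<String>> shards = shardByKey(log, (String r) -> r.split(":")[0]);
        System.out.println(shards); // {a=[a:1, a:2], b=[b:1, b:2], c=[c:1]}
    }
}
```

With a large key space, most shards hold a single in-flight record at a time, which is why throughput approaches UNORDERED.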

  • Support BitSet encoding lengths longer than Short.MAX_VALUE #37 - adds new serialisation formats that support a wider range of offsets (32,767 vs 2,147,483,647) for both BitSet and run-length encoding.
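The widened BitSet format can be illustrated as below: completed offsets beyond a base offset become set bits, and the bitmap length header is an int (max 2,147,483,647) rather than a short (max 32,767). This is a sketch of the idea only, not the library's actual wire format.

```java
import java.nio.ByteBuffer;
import java.util.BitSet;
import java.util.Set;

// Sketch: encode per-offset completion as a bitmap with an int length header,
// allowing ranges longer than Short.MAX_VALUE. Illustration only.
public class BitSetOffsetCodec {
    /** Encode which offsets in [baseOffset, highestOffset] are complete. */
    public static byte[] encode(long baseOffset, long highestOffset, Set<Long> completed) {
        int length = (int) (highestOffset - baseOffset + 1); // int header, not short
        BitSet bits = new BitSet(length);
        for (long offset : completed) {
            bits.set((int) (offset - baseOffset));
        }
        byte[] bitmap = bits.toByteArray();
        ByteBuffer buf = ByteBuffer.allocate(Integer.BYTES + bitmap.length);
        buf.putInt(length).put(bitmap);
        return buf.array();
    }

    /** Decode and query a single offset's completion flag. */
    public static boolean isComplete(byte[] encoded, long baseOffset, long offset) {
        ByteBuffer buf = ByteBuffer.wrap(encoded);
        buf.getInt(); // skip the length header
        BitSet bits = BitSet.valueOf(buf);
        return bits.get((int) (offset - baseOffset));
    }
}
```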

  • Commit modes have been renamed to make it clearer that they are periodic, not per message.

  • Minor performance improvement, switching away from concurrent collections.

Fixes

  • Maximum offset payload space is no longer incorrectly inversely proportional to the number of assigned partitions.

  • Run-length encoding now supports compacted topics, plus other bug fixes as well as fixes to Bitset encoding.
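Run-length encoding as mentioned above can be sketched as follows: the per-offset complete/incomplete flags collapse into a list of run lengths, which is compact when completions are clustered. A hypothetical illustration, not the actual serialisation format:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of run-length encoding a sequence of per-offset "complete" flags.
// Runs of identical flags are stored as their lengths; by convention the
// first run counts incomplete offsets and may be zero. Illustration only.
public class RunLengthSketch {
    public static List<Integer> encode(boolean[] completeFlags) {
        List<Integer> runs = new ArrayList<>();
        boolean current = false; // first run is the leading incomplete run
        int length = 0;
        for (boolean flag : completeFlags) {
            if (flag == current) {
                length++;
            } else {
                runs.add(length);
                current = flag;
                length = 1;
            }
        }
        runs.add(length);
        return runs;
    }

    public static void main(String[] args) {
        // completion flags for offsets base .. base+7
        boolean[] flags = {false, false, true, true, true, false, true, true};
        System.out.println(encode(flags)); // [2, 3, 1, 2]
    }
}
```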

v0.2.0.3

Fixes

v0.2.0.2

Fixes

v0.2.0.1 DO NOT USE - has critical bug

Fixes

v0.2.0.0

Features

  • Choice of commit modes: Consumer Asynchronous, Synchronous and Producer Transactions

  • Producer instance is now optional

  • Using a transactional Producer is now optional

  • Use the Kafka Consumer to commit offsets Synchronously or Asynchronously

Improvements

  • Memory performance - garbage collect empty shards when in KEY ordering mode

  • Select tests adapted to non transactional (multiple commit modes) as well

  • Adds supervision to broker poller

  • Fixes a performance issue with the async committer not being woken up

  • Make committer thread revoke partitions and commit

  • Have onPartitionsRevoked be responsible for committing on close, instead of an explicit call to commit by controller

  • Make sure Broker Poller now drains properly, committing any waiting work

Fixes

  • Fixes bug in commit linger, remove genesis offset (0) from testing (avoid races), add ability to request commit

  • Fixes #25 https://github.com/confluentinc/parallel-consumer/issues/25:

    • Sometimes a transaction error occurs - Cannot call send in state COMMITTING_TRANSACTION #25

  • ReentrantReadWrite lock protects non-thread safe transactional producer from incorrect multithreaded use

  • Wider lock to prevent transaction’s containing produced messages that they shouldn’t

  • Must start tx in MockProducer as well

  • Fixes example app tests - incorrectly testing wrong thing and MockProducer not configured to auto complete

  • Add missing revoke flow to MockConsumer wrapper

  • Add missing latch timeout check

v0.1

Features

  • Have massively parallel consumption processing without running hundreds or thousands of

    • Kafka consumer clients

    • topic partitions

      without operational burden or harming the cluster's performance

  • Efficient individual message acknowledgement system (without local or third system state) to massively reduce message replay upon failure

  • Per key concurrent processing, per partition and unordered message processing

  • Offsets committed correctly, in order, of only processed messages, regardless of concurrency level or retries

  • Vert.x non-blocking library integration (HTTP currently)

  • Fair partition traversal

  • Zero~ dependencies (Slf4j and Lombok) for the core module

  • Java 8 compatibility

  • Throttle control and broker liveliness management

  • Clean draining shutdown cycle
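The "offsets committed correctly, in order, of only processed messages" guarantee above can be illustrated: the offset safe to commit is one past the highest contiguous run of completed offsets, no matter how far ahead out-of-order completions have raced. A minimal sketch with hypothetical names:

```java
import java.util.Set;

// Sketch of the in-order commit rule: commit the first offset that is not yet
// complete, so records completed out of order are never committed past a gap.
public class CommitOffsetSketch {
    public static long nextOffsetToCommit(long baseOffset, Set<Long> completed) {
        long offset = baseOffset;
        while (completed.contains(offset)) {
            offset++;
        }
        return offset; // first not-yet-completed offset
    }

    public static void main(String[] args) {
        // offsets 10 and 11 done, 12 still in flight, 13 done out of order
        System.out.println(nextOffsetToCommit(10, Set.of(10L, 11L, 13L))); // 12
    }
}
```

Completions beyond the gap (like 13 above) are what the offset payload encodings record, so they are not replayed after a restart.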
