ci: Add CODEOWNER
fix: #101 Validate GroupId is configured on managed consumer
Use 8B1DA6120C2BF624 GPG Key For Signing
ci: Bump jdk8 version path
fix: #97 Vert.x thread and connection pools setup incorrect
Disable Travis and Codecov
ci: Apache Kafka and JDK build matrix
fix: Set Serdes for MockProducer for AK 2.7 partition fix KAFKA-10503 to fix new NPE
Only log slow message warnings periodically, once per sweep
Upgrade Kafka container version to 6.0.2
Clean up stalled message warning logs
Reduce log-level if no results are returned from user-function (warn → debug)
Enable Java 8 GitHub build
Fixes #87 - Upgrade UniJ version for UnsupportedClassVersion error
Bump TestContainers to stable release to specifically fix #3574
Clarify offset management capabilities
fixes #62: Off by one error when restoring offsets when no offsets are encoded in metadata
fix: Actually skip work that is found as stale
Queueing and pressure system is now self-tuning; performance over the old default tuning values (softMaxNumberMessagesBeyondBaseCommitOffset and maxMessagesToQueue) has doubled. These options have been removed from the system.
Offset payload encoding back pressure system
If the payload begins to take more than a certain threshold amount of the maximum available, no more messages will be brought in for processing until the space needed reduces back below the threshold. This tries to prevent the situation where the payload is too large to fit at all and must be dropped entirely.
See Proper offset encoding back pressure system so that offset payloads can’t ever be too large #47
Messages that have failed to process, will always be allowed to retry, in order to reduce this pressure.
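The threshold check described above can be sketched as follows. This is an illustrative assumption, not the library's actual API: the class and method names (`OffsetPayloadBackPressure`, `shouldAdmitMoreWork`) and the threshold fraction are hypothetical; the 4096-byte figure is the broker's default `offset.metadata.max.bytes` limit.

```java
// Hypothetical sketch of the offset-payload back-pressure check.
// Names and the exact threshold fraction are illustrative, not the library's API.
public class OffsetPayloadBackPressure {
    // Stop admitting new records once the encoded payload uses more than
    // this fraction of the maximum available offset-metadata space.
    static final double THRESHOLD_FRACTION = 0.75;

    /**
     * @param encodedPayloadBytes current size of the encoded offset map
     * @param maxMetadataBytes    broker limit (offset.metadata.max.bytes, default 4096)
     * @return true if more messages may be brought in for processing
     */
    static boolean shouldAdmitMoreWork(int encodedPayloadBytes, int maxMetadataBytes) {
        return encodedPayloadBytes <= maxMetadataBytes * THRESHOLD_FRACTION;
    }
}
```

Once the payload shrinks back below the threshold (for example, after failed messages retry and complete), admission resumes.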
Default ordering mode is now KEY ordering (was UNORDERED). This is a better default as it is the safest yet still high-performing mode. It maintains the partition ordering characteristic that all records of a given key are processed in log order, yet for most use cases will be close to as fast as UNORDERED when the key space is large enough.
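A minimal sketch of the guarantee KEY ordering provides (illustrative only, not the library's implementation): records are sharded by key, shards may then be drained concurrently, but within a shard the log (offset) order is preserved.

```java
import java.util.*;

// Illustrative sketch of KEY ordering: records are partitioned into
// per-key shards; different shards can be processed in parallel, but
// appending preserves each key's log order within its shard.
public class KeyOrdering {
    static Map<String, List<Long>> shardByKey(List<Map.Entry<String, Long>> records) {
        Map<String, List<Long>> shards = new LinkedHashMap<>();
        for (Map.Entry<String, Long> r : records) {
            // key -> offsets in the order they appeared in the log
            shards.computeIfAbsent(r.getKey(), k -> new ArrayList<>()).add(r.getValue());
        }
        return shards;
    }
}
```

With many distinct keys, most shards hold a single record at a time, which is why throughput approaches UNORDERED.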
Support BitSet encoding lengths longer than Short.MAX_VALUE #37 - adds new serialisation formats that support a wider range of offsets (32,767 vs 2,147,483,647) for both BitSet and run-length encoding.
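The widened range can be illustrated with a self-contained run-length encoder whose run lengths are int (max 2,147,483,647) rather than short (max 32,767). The layout below is a simplified assumption for illustration, not the library's actual wire format.

```java
import java.util.*;

// Simplified sketch of run-length encoding a completion bitmap with
// int run lengths instead of short, avoiding the short overflow.
// A real format would also record the flag of the first run.
public class RunLengthEncoding {
    static List<Integer> encode(boolean[] completed) {
        List<Integer> runs = new ArrayList<>();
        if (completed.length == 0) return runs;
        boolean current = completed[0];
        int length = 0; // int: runs longer than 32,767 no longer overflow
        for (boolean b : completed) {
            if (b == current) {
                length++;
            } else {
                runs.add(length); // close the finished run
                current = b;
                length = 1;
            }
        }
        runs.add(length); // close the final run
        return runs;
    }
}
```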
Commit modes have been renamed to make it clearer that they are periodic, not per message.
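The "periodic, not per message" distinction above can be sketched with a hypothetical helper (names are illustrative, not the library's API): a commit becomes due only when the commit interval has elapsed, regardless of how many records completed since the last commit.

```java
// Illustrative sketch of periodic committing: time-driven, not record-driven.
// Class and method names are hypothetical.
public class PeriodicCommitter {
    /**
     * @param nowMs        current wall-clock time in milliseconds
     * @param lastCommitMs time of the previous commit
     * @param intervalMs   configured commit interval
     * @return true if a commit should be performed on this sweep
     */
    static boolean commitDue(long nowMs, long lastCommitMs, long intervalMs) {
        return nowMs - lastCommitMs >= intervalMs;
    }
}
```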
Minor performance improvement, switching away from concurrent collections.
Bitset overflow check (#35) - gracefully drop BitSet or Runlength encoding as an option if offset difference too large (short overflow)
A new serialisation format will be added in the next version - see Support BitSet encoding lengths longer than Short.MAX_VALUE #37
Gracefully drops encoding attempts if they can’t be run
Fixes a bug in the offset drop if it can’t fit in the offset metadata payload
Turns back on the Bitset overflow check (#35)
Incorrectly turns off an overflow check in the offset serialisation system (#35)
Choice of commit modes: Consumer Asynchronous, Synchronous and Producer Transactions
Producer instance is now optional
Using a transactional Producer is now optional
Use the Kafka Consumer to commit offsets
Synchronously or Asynchronously
Memory performance - garbage collect empty shards when in KEY ordering mode
Select tests adapted to non transactional (multiple commit modes) as well
Adds supervision to broker poller
Fixes a performance issue with the async committer not being woken up
Make committer thread revoke partitions and commit
Have onPartitionsRevoked be responsible for committing on close, instead of an explicit call to commit by controller
Make sure Broker Poller now drains properly, committing any waiting work
Fixes bug in commit linger, remove genesis offset (0) from testing (avoid races), add ability to request commit
Fixes #25 (https://github.com/confluentinc/parallel-consumer/issues/25): Sometimes a transaction error occurs - Cannot call send in state COMMITTING_TRANSACTION
ReentrantReadWrite lock protects non-thread safe transactional producer from incorrect multithreaded use
Wider lock to prevent transactions containing produced messages that they shouldn't
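The locking scheme in the two entries above can be sketched as follows; `ProducerGuard` and its methods are hypothetical stand-ins, not the library's classes. Concurrent senders share the read lock, while committing takes the write lock, so no send can interleave with a commit in progress.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of guarding a non-thread-safe transactional producer with a
// ReentrantReadWriteLock: many sends share the read lock; commit takes
// the write lock, excluding all senders for the duration of the commit.
public class ProducerGuard {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<String> inFlight = new ArrayList<>();

    public void send(String record) {
        lock.readLock().lock(); // many senders may hold this concurrently
        try {
            synchronized (inFlight) { // list itself still needs its own sync
                inFlight.add(record);
            }
        } finally {
            lock.readLock().unlock();
        }
    }

    public List<String> commitTransaction() {
        lock.writeLock().lock(); // excludes all senders while committing
        try {
            List<String> committed = new ArrayList<>(inFlight);
            inFlight.clear();
            return committed;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The write lock's exclusivity is what keeps a transaction from containing records it shouldn't: a send either completes before the commit begins or waits until it finishes.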
Must start tx in MockProducer as well
Fixes example app tests - incorrectly testing wrong thing and MockProducer not configured to auto complete
Add missing revoke flow to MockConsumer wrapper
Add missing latch timeout check
Have massively parallel consumption processing without running hundreds or thousands of Kafka consumer clients or topic partitions, without operational burden or harming the cluster's performance
Efficient individual message acknowledgement system (without local or third system state) to massively reduce message replay upon failure
Per-key concurrent processing, per-partition and unordered message processing
Offsets committed correctly, in order, of only processed messages, regardless of concurrency level or retries
Vert.x non-blocking library integration (HTTP currently)
Fair partition traversal
Zero~ dependencies (Slf4j and Lombok) for the core module
Java 8 compatibility
Throttle control and broker liveliness management
Clean draining shutdown cycle