forked from apache/flink-connector-kafka
Update fork main #1
Open
guruguha wants to merge 31 commits into main from v3.2.0-riv-1-SNAPSHOT
Conversation
No description provided.
… number of parameters (apache#30)
…onnector repo. The formats will remain in the core apache/flink repo for now, as they can be commonly used by various connectors, not just the Kafka connector. This closes apache#31.
… Kafka connector. This closes apache#36.
…s discovered later based on FLIP-288
…izer and TimestampOffsetsInitializer. This closes apache#29.
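For context, OffsetsInitializer is the public hook on KafkaSource for choosing starting offsets, and a timestamp-based start is one of its factory methods. A minimal sketch of configuring it, assuming placeholder broker, topic, and timestamp values:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class TimestampOffsetsExample {
    public static void main(String[] args) {
        // Start reading from the first record whose timestamp is at or after
        // the given epoch-millisecond value (placeholder timestamp below).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")   // placeholder broker
                .setTopics("example-topic")              // placeholder topic
                .setStartingOffsets(OffsetsInitializer.timestamp(1_700_000_000_000L))
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}
```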
… change in Flink 1.18. This is a temporary workaround for a breaking change that occurred on the TypeSerializerUpgradeTestBase due to FLINK-27518. A proper fix would be to introduce a public-facing test utility to replace TypeSerializerUpgradeTestBase, and move the Kafka connector test code to use that instead.
* add sink.delivery-guarantee and sink.transactional-id-prefix options to upsert-kafka (see the sketch below)
* fix the default isolation.level in the Kafka connector documentation
* let ReducingUpsertSink implement TwoPhaseCommittingSink
* update the upsert-kafka connector documentation
This closes apache#7.
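A minimal sketch of the two new sink options on an upsert-kafka table, using a placeholder schema and topic; the option values shown are assumptions about typical usage, not taken from the patch:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertKafkaSinkExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Placeholder DDL: the two options added by this change are
        // 'sink.delivery-guarantee' and 'sink.transactional-id-prefix'.
        tEnv.executeSql(
                "CREATE TABLE example_sink (\n"
                        + "  user_id STRING,\n"
                        + "  cnt BIGINT,\n"
                        + "  PRIMARY KEY (user_id) NOT ENFORCED\n"
                        + ") WITH (\n"
                        + "  'connector' = 'upsert-kafka',\n"
                        + "  'topic' = 'example-topic',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'key.format' = 'json',\n"
                        + "  'value.format' = 'json',\n"
                        + "  'sink.delivery-guarantee' = 'exactly-once',\n"
                        + "  'sink.transactional-id-prefix' = 'example-prefix'\n"
                        + ")");
    }
}
```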
…Source based on FLIP-288. This closes apache#40.
…le Dynamic Partition Discovery by Default in Kafka Source. This closes apache#41.
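Discovery is driven by the partition.discovery.interval.ms property. A minimal sketch of setting it explicitly on a KafkaSource (broker, topic, and interval are placeholders); with this change the interval defaults to a positive value, and a non-positive value disables discovery:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class PartitionDiscoveryExample {
    public static void main(String[] args) {
        // Probe for newly created partitions every 60 seconds; after this change
        // discovery is enabled by default rather than opt-in.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")    // placeholder broker
                .setTopics("example-topic")               // placeholder topic
                .setProperty("partition.discovery.interval.ms", "60000")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}
```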
…ssly a method (pauseOrResumeSplits)
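For context, pauseOrResumeSplits is the SplitReader hook that watermark alignment uses to throttle individual splits. A hypothetical sketch of how a Kafka split reader can map it onto the consumer's pause/resume calls; the class and its wiring are illustrative, not the connector's actual code:

```java
import java.util.Collection;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

// Illustrative sketch only: the real implementation lives in the connector's
// Kafka split reader.
class PausableKafkaReader {
    private final KafkaConsumer<byte[], byte[]> consumer;

    PausableKafkaReader(KafkaConsumer<byte[], byte[]> consumer) {
        this.consumer = consumer;
    }

    // Pause/resume fetching for the given partitions; invoked by the source
    // framework when a split's watermark drifts too far ahead of the others.
    void pauseOrResumeSplits(
            Collection<TopicPartition> splitsToPause,
            Collection<TopicPartition> splitsToResume) {
        consumer.pause(splitsToPause);
        consumer.resume(splitsToResume);
    }
}
```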
The bump in shaded guava in Flink 1.18 changed import paths and caused the class loader to fail when loading ManagedMemoryUtils. Looking at the root cause of the issue, shading was used as a technique to avoid dependency hell. As flink-connector-kafka should work with both Flink 1.17 and 1.18, which use different guava versions (and hence different library import paths), shading did not really solve the problem it was introduced for in the first place. There are several options to work around the problem. First, we could introduce our own shading for guava. Second, we could check whether the dependency on guava is necessary at all and perhaps remove it completely. This patch takes the latter route and removes the dependency on guava from this connector.
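In practice, dropping guava mostly means replacing its utilities with JDK or flink-core equivalents. A hypothetical before/after illustrating the kind of substitution involved, not taken from the actual patch:

```java
// Hypothetical before/after showing the kind of change involved in dropping
// guava; not taken from the actual patch.
//
// Before (guava):
//   import com.google.common.collect.ImmutableList;
//   List<String> topics = ImmutableList.of("a", "b");
//
// After (JDK / flink-core):
import java.util.List;

import org.apache.flink.util.Preconditions;

public class NoGuavaExample {
    public static void main(String[] args) {
        List<String> topics = List.of("a", "b"); // JDK 9+ immutable list replaces ImmutableList.of
        Preconditions.checkNotNull(topics);      // flink-core Preconditions replaces guava's
    }
}
```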
…k Awareness. This closes apache#53. This closes apache#20.
Co-authored-by: Jeremy DeGroot <jeremy.degroot@gmail.com>
Co-authored-by: jcmejias1 <jcmejias1@gmail.com>
Co-authored-by: Mason Chen <mas.chen@berkeley.edu>
Co-authored-by: Ethan Gouty <ethan.gouty@imperva.com>
Co-authored-by: Siva Venkat Gogineni <gogineni.sivavenkat@gmail.com>
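Kafka rack awareness ultimately surfaces as the consumer's client.rack property, which lets brokers serve reads from a nearby replica. A minimal sketch of wiring a static rack id into the source via a raw property; the feature in this commit may expose a different, dynamic API for this, and the rack value is a placeholder:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class RackAwareSourceExample {
    public static void main(String[] args) {
        // 'client.rack' tells brokers which rack/AZ this consumer runs in so
        // fetches can be served from a nearby replica.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")    // placeholder broker
                .setTopics("example-topic")               // placeholder topic
                .setProperty("client.rack", "us-east-1a") // placeholder rack id
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}
```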
…troduced by FLINK-31804. This closes apache#56.
* [FLINK-33219][connector/kafka] Add new archunit violation messages introduced by FLINK-31804
The reason we add new violation messages instead of updating existing ones is that the patch for FLINK-31804 is only applied after Flink 1.18. We need to make sure that CI can run successfully for Flink versions both before and after that. If the Kafka connector decides to drop support for versions before 1.18 in the future, please re-freeze the violations then.
Co-authored-by: Martijn Visser <martijnvisser@apache.org>
Bumps [snappy-java](https://github.com/xerial/snappy-java) from 1.1.8.3 to 1.1.10.5.
- [Release notes](https://github.com/xerial/snappy-java/releases)
- [Commits](xerial/snappy-java@1.1.8.3...v1.1.10.5)
---
updated-dependencies:
- dependency-name: org.xerial.snappy:snappy-java
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
…oint complete if no offsets exist. Prior to this fix, if the offsets to commit for a given checkpoint are empty, which can be the case if no starting offsets were retrieved from Kafka yet, then on checkpoint completion the cache is not properly evicted up to the given checkpoint. This change fixes that: in notifyOnCheckpointComplete, we shortcut the method execution so that we do not try to commit the offsets (since they are empty anyway), and always remember to evict the cache up to the completed checkpoint.
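A hypothetical sketch of the described shortcut; the class and method names are illustrative, and the real logic lives in the connector's checkpoint-completion path:

```java
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Illustrative sketch only: mirrors the described fix, not the actual connector code.
class OffsetCommitCache {
    // Offsets queued for commit, keyed by checkpoint id.
    private final SortedMap<Long, Map<TopicPartition, OffsetAndMetadata>> offsetsByCheckpoint =
            new TreeMap<>();

    void notifyCheckpointComplete(long checkpointId) {
        Map<TopicPartition, OffsetAndMetadata> offsets = offsetsByCheckpoint.get(checkpointId);
        if (offsets == null || offsets.isEmpty()) {
            // Shortcut: nothing to commit, but still evict entries up to and
            // including this checkpoint so the cache cannot grow unboundedly.
            offsetsByCheckpoint.headMap(checkpointId + 1).clear();
            return;
        }
        commitToKafka(offsets);
        offsetsByCheckpoint.headMap(checkpointId + 1).clear();
    }

    private void commitToKafka(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // Placeholder for the asynchronous consumer commit call.
    }
}
```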
Bumps [guava](https://github.com/google/guava) from 30.1.1-jre to 32.1.2-jre.
- [Release notes](https://github.com/google/guava/releases)
- [Commits](https://github.com/google/guava/commits)
---
updated-dependencies:
- dependency-name: com.google.guava:guava
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
…fka AVRO serializer and Kafka Schema Registry Client
…nSchemaWrapper` test use `flink-shaded-jackson`, since it tests `flink-shaded-jackson` ObjectNodes.
Co-authored-by: zentol <chesnay@apache.org>
This closes apache#57.
…loses apache#59.
* [FLINK-33238][Formats/Avro] Upgrade the AVRO version in use to 1.11.3 to mitigate scanners flagging Flink or the Flink Kafka connector as vulnerable to CVE-2023-39410
* [FLINK-33238][Formats/Avro] Pin the transitive dependency org.apache.commons:commons-compress to 1.22 to address dependency convergence
1. Test all PRs for `main` against all supported versions, meaning 1.17.x and 1.18.x. That's because only PRs run the dependency convergence check, not the nightly builds.
2. Make sure that we test nightlies against all supported versions (currently 1.17.x for the `v3.0` branch, plus 1.17.x and 1.18.x against `main`).
…pache#50.
* [FLINK-30400][build] Stop bundling flink-connector-base
---------
Co-authored-by: Martijn Visser <martijnvisser@apache.org>
… strategy lose data. This closes apache#52.