Update fork main #1

Open

wants to merge 31 commits into base: main

Conversation

guruguha
Collaborator

No description provided.

reswqa and others added 30 commits May 25, 2023 16:36
…onnector repo

The formats will remain in the core apache/flink repo for now, as they can be used by various connectors and not just the Kafka connector.

This closes apache#31.
…izer and TimestampOffsetsInitializer

This closes apache#29.
… change in Flink 1.18

This is a temporary workaround for a breaking change that occurred in TypeSerializerUpgradeTestBase due to FLINK-27518.

A proper fix would be to introduce a public-facing test utility that replaces TypeSerializerUpgradeTestBase, and migrate the Kafka connector test code to use it instead.
* add sink.delivery-guarantee and sink.transactional-id-prefix options
to upsert-kafka (see the sketch after this commit message)
* fix the default isolation.level in the Kafka connector documentation
* let ReducingUpsertSink implement TwoPhaseCommittingSink
* update the upsert-kafka connector documentation

This closes apache#7.
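
A minimal sketch of how the two new sink options might be set through the Table API; the table name, schema, topic, broker address, and option values below are illustrative assumptions, not taken from the commit:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.TableEnvironment;

public class UpsertKafkaSinkOptionsExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical upsert-kafka sink using the options mentioned in the commit:
        // sink.delivery-guarantee and sink.transactional-id-prefix.
        tEnv.createTemporaryTable(
                "orders_sink",
                TableDescriptor.forConnector("upsert-kafka")
                        .schema(
                                Schema.newBuilder()
                                        .column("order_id", DataTypes.STRING().notNull())
                                        .column("amount", DataTypes.DOUBLE())
                                        .primaryKey("order_id")
                                        .build())
                        .option("topic", "orders")                        // assumed topic name
                        .option("properties.bootstrap.servers", "localhost:9092")
                        .option("key.format", "json")
                        .option("value.format", "json")
                        .option("sink.delivery-guarantee", "exactly-once")
                        .option("sink.transactional-id-prefix", "orders-upsert-") // assumed prefix
                        .build());
    }
}
```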
…le Dynamic Partition Discovery by Default in Kafka Source

This closes apache#41.
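
Regardless of what the default ends up being, a minimal sketch of setting the partition discovery interval explicitly on the source builder; the topic, broker address, and interval value are illustrative assumptions:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class PartitionDiscoveryExample {
    public static void main(String[] args) {
        KafkaSource<String> source =
                KafkaSource.<String>builder()
                        .setBootstrapServers("localhost:9092")   // assumed broker address
                        .setTopics("input-topic")                // assumed topic
                        .setStartingOffsets(OffsetsInitializer.earliest())
                        .setValueOnlyDeserializer(new SimpleStringSchema())
                        // Explicitly set the partition discovery interval in ms;
                        // a non-positive value disables dynamic discovery.
                        .setProperty("partition.discovery.interval.ms", "30000")
                        .build();
        // The source would then be attached to a StreamExecutionEnvironment via fromSource(...).
    }
}
```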
The bump in shaded guava in Flink 1.18 changed import paths and caused
the class loader to fail when loading ManagedMemoryUtils.

Looking at the root cause of the issue, shading was used as a technique
to avoid dependency hell. Since flink-connector-kafka should work with both
Flink 1.17 and 1.18, which use different guava versions (and hence different
library import paths), shading did not really solve the problem it was
introduced for in the first place.

There are several options to work around the problem. First,
we could introduce our own shading for guava. Second, we could check whether
the dependency on guava is necessary at all and maybe remove it
completely.

This patch takes the latter route and removes dependency on guava from
this connector.
…k Awareness

This closes apache#53.
This closes apache#20.

Co-authored-by: Jeremy DeGroot <jeremy.degroot@gmail.com>
Co-authored-by: jcmejias1 <jcmejias1@gmail.com>
Co-authored-by: Mason Chen <mas.chen@berkeley.edu>
Co-authored-by: Ethan Gouty <ethan.gouty@imperva.com>
Co-authored-by: Siva Venkat Gogineni <gogineni.sivavenkat@gmail.com>
…troduced by FLINK-31804. This closes apache#56

* [FLINK-33219][connector/kafka] Add new archunit violation messages introduced by FLINK-31804

The reason we add new violation messages instead of updating existing ones is that
the patch of FLINK-31804 is only applied after Flink 1.18. We need to make sure
that CI can run successfully for Flink versions both before and after that.

If the Kafka connector decides to drop support for Flink versions before 1.18 in the future,
please re-freeze the violations then.

Co-authored-by: Martijn Visser <martijnvisser@apache.org>
Bumps [snappy-java](https://github.com/xerial/snappy-java) from 1.1.8.3 to 1.1.10.5.
- [Release notes](https://github.com/xerial/snappy-java/releases)
- [Commits](xerial/snappy-java@1.1.8.3...v1.1.10.5)

---
updated-dependencies:
- dependency-name: org.xerial.snappy:snappy-java
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
…oint complete if no offsets exist

Prior to this fix, if the offsets to commit for a given checkpoint are empty,
which can be the case if no starting offsets have been retrieved from Kafka yet,
then on checkpoint completion the cache is not properly evicted up to the
given checkpoint.

This change fixes this such that in notifyOnCheckpointComplete, we shortcut
the method execution to skip committing the offsets, since they are empty
anyway, and always remember to evict the cache up to the completed
checkpoint.
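
A minimal sketch of the described shortcut logic; the class, field, and method names below are illustrative assumptions rather than the connector's actual code:

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical reader snippet illustrating the fix described above.
class OffsetCommitExample {
    // Offsets to commit, keyed by checkpoint id (hypothetical field layout).
    private final TreeMap<Long, Map<Integer, Long>> offsetsToCommit = new TreeMap<>();

    void notifyCheckpointComplete(long checkpointId) {
        Map<Integer, Long> offsets = offsetsToCommit.get(checkpointId);
        if (offsets == null || offsets.isEmpty()) {
            // Nothing to commit, but still evict the cache up to this checkpoint
            // so stale entries do not accumulate.
            offsetsToCommit.headMap(checkpointId, true).clear();
            return;
        }
        commitOffsets(offsets);
        offsetsToCommit.headMap(checkpointId, true).clear();
    }

    private void commitOffsets(Map<Integer, Long> offsets) {
        // Placeholder for the asynchronous offset commit to Kafka.
    }
}
```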
Bumps [guava](https://github.com/google/guava) from 30.1.1-jre to 32.1.2-jre.
- [Release notes](https://github.com/google/guava/releases)
- [Commits](https://github.com/google/guava/commits)

---
updated-dependencies:
- dependency-name: com.google.guava:guava
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
…fka AVRO serializer and Kafka Schema Registry Client
…nSchemaWrapper` test use `flink-shaded-jackson` since it tests `flink-shaded-jackson` ObjectNodes.

Co-authored-by: zentol <chesnay@apache.org>

This closes apache#57.
…loses apache#59

* [FLINK-33238][Formats/Avro] Upgrade used AVRO version to 1.11.3 to mitigate scanners flagging Flink or the Flink Kafka connector as vulnerable for CVE-2023-39410

* [FLINK-33238][Formats/Avro] Pin transitive dependency org.apache.commons:commons-compress to 1.22 to address dependency convergence
1. Test all PRs for `main` against all supported versions, meaning 1.17.x and 1.18.x. That's because only PRs run the dependency convergence check, and not nightly builds.
2. Make sure that we test nightlies against all supported versions (currently 1.17.x for the `v3.0` branch plus 1.17.x and 1.18.x against `main`)
…pache#50

* [FLINK-30400][build] Stop bundling flink-connector-base

---------

Co-authored-by: Martijn Visser <martijnvisser@apache.org>
guruguha changed the title from "Update main from fork" to "Update fork main" on Oct 30, 2023