
Do we have plan to upgrade to Rust 2021 edition? #1177


Closed
mingmwang opened this issue Oct 26, 2021 · 2 comments · Fixed by #1084
Labels
enhancement New feature or request

Comments

@mingmwang (Contributor)

Is your feature request related to a problem or challenge? Please describe what you are trying to do.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
(This section helps Arrow developers understand the context and why for this feature, in addition to the what)

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

@mingmwang mingmwang added the enhancement New feature or request label Oct 26, 2021
@alamb (Contributor) commented Oct 26, 2021

We do have a plan to upgrade to Rust 2021 edition -- @jimexist has been working on that in #1084
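For readers unfamiliar with edition upgrades: the edition is a per-crate setting in `Cargo.toml`, and most of the work in a PR like #1084 is fixing the warnings the new edition surfaces rather than the manifest change itself. A minimal sketch (general Rust/Cargo knowledge, not taken from #1084; crate name and versions here are illustrative):

```toml
# Cargo.toml -- opt the crate into the 2021 edition
[package]
name = "example-crate"   # illustrative name
version = "0.1.0"
edition = "2021"         # previously "2018"
rust-version = "1.56"    # Rust 2021 requires rustc 1.56 or newer
```

Running `cargo fix --edition` before bumping the field migrates most edition-incompatible code automatically, after which `cargo build` and `cargo test` confirm the crate still compiles cleanly.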

@mingmwang (Contributor, Author)

👍

unkloud pushed a commit to unkloud/datafusion that referenced this issue Mar 23, 2025
## Which issue does this PR close?

## Rationale for this change

After apache/datafusion-comet#1062, we have not been running Spark tests for native execution.

## What changes are included in this PR?

Removed the off-heap requirement for testing.

## How are these changes tested?

Brings back Spark tests for native execution.
unkloud pushed a commit to unkloud/datafusion that referenced this issue Mar 23, 2025
* feat: add support for array_contains expression

* test: add unit test for array_contains function

* Removes unnecessary case expression for handling null values

* chore: Move more expressions from core crate to spark-expr crate (apache#1152)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* remove dead code (apache#1155)

* fix: Spark 4.0-preview1 SPARK-47120 (apache#1156)

## Which issue does this PR close?

Part of apache/datafusion-comet#372 and apache/datafusion-comet#551

## Rationale for this change

To be ready for Spark 4.0

## What changes are included in this PR?

This PR fixes the new test SPARK-47120 added in Spark 4.0

## How are these changes tested?

tests enabled

* chore: Move string kernels and expressions to spark-expr crate (apache#1164)

* Move string kernels and expressions to spark-expr crate

* remove unused hash kernel

* remove unused dependencies

* chore: Move remaining expressions to spark-expr crate + some minor refactoring (apache#1165)

* move CheckOverflow to spark-expr crate

* move NegativeExpr to spark-expr crate

* move UnboundColumn to spark-expr crate

* move ExpandExec from execution::datafusion::operators to execution::operators

* refactoring to remove datafusion subpackage

* update imports in benches

* fix

* fix

* chore: Add ignored tests for reading complex types from Parquet (apache#1167)

* Add ignored tests for reading structs from Parquet

* add basic map test

* add tests for Map and Array

* feat: Add Spark-compatible implementation of SchemaAdapterFactory (apache#1169)

* Add Spark-compatible SchemaAdapterFactory implementation

* remove prototype code

* fix

* refactor

* implement more cast logic

* implement more cast logic

* add basic test

* improve test

* cleanup

* fmt

* add support for casting unsigned int to signed int

* clippy

* address feedback

* fix test

* fix: Document enabling comet explain plan usage in Spark (4.0) (apache#1176)

* test: enabling Spark tests with offHeap requirement (apache#1177)

## Which issue does this PR close?

## Rationale for this change

After apache/datafusion-comet#1062, we have not been running Spark tests for native execution.

## What changes are included in this PR?

Removed the off-heap requirement for testing.

## How are these changes tested?

Brings back Spark tests for native execution.

* feat: Improve shuffle metrics (second attempt) (apache#1175)

* improve shuffle metrics

* docs

* more metrics

* refactor

* address feedback

* fix: stddev_pop should not directly return 0.0 when count is 1.0 (apache#1184)

* add test

* fix

* fix

* fix

* feat: Make native shuffle compression configurable and respect `spark.shuffle.compress` (apache#1185)

* Make shuffle compression codec and level configurable

* remove lz4 references

* docs

* update comment

* clippy

* fix benches

* clippy

* clippy

* disable test for miri

* remove lz4 reference from proto

* minor: move shuffle classes from common to spark (apache#1193)

* minor: refactor decodeBatches to make private in broadcast exchange (apache#1195)

* minor: refactor prepare_output so that it does not require an ExecutionContext (apache#1194)

* fix: fix missing explanation for then branch in case when (apache#1200)

* minor: remove unused source files (apache#1202)

* chore: Upgrade to DataFusion 44.0.0-rc2 (apache#1154)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* save

* save

* save

* remove unused imports

* clippy

* implement more hashers

* implement Hash and PartialEq

* implement Hash and PartialEq

* implement Hash and PartialEq

* benches

* fix ScalarUDFImpl.return_type failure

* exclude test from miri

* ignore correct test

* ignore another test

* remove miri checks

* use return_type_from_exprs

* Revert "use return_type_from_exprs"

This reverts commit febc1f1ec1301f9b359fc23ad6a117224fce35b7.

* use DF main branch

* hacky workaround for regression in ScalarUDFImpl.return_type

* fix repo url

* pin to revision

* bump to latest rev

* bump to latest DF rev

* bump DF to rev 9f530dd

* add Cargo.lock

* bump DF version

* no default features

* Revert "remove miri checks"

This reverts commit 4638fe3aa5501966cd5d8b53acf26c698b10b3c9.

* Update pin to DataFusion e99e02b

* update pin

* Update Cargo.toml

Bump to 44.0.0-rc2

* update cargo lock

* revert miri change

---------

Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>

* update UT

Signed-off-by: Dharan Aditya <dharan.aditya@gmail.com>

* fix typo in UT

Signed-off-by: Dharan Aditya <dharan.aditya@gmail.com>

---------

Signed-off-by: Dharan Aditya <dharan.aditya@gmail.com>
Co-authored-by: Andy Grove <agrove@apache.org>
Co-authored-by: KAZUYUKI TANIMURA <ktanimura@apple.com>
Co-authored-by: Parth Chandra <parthc@apache.org>
Co-authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Co-authored-by: Raz Luvaton <16746759+rluvaton@users.noreply.github.com>
Co-authored-by: Andrew Lamb <andrew@nerdnetworks.org>