Spring Data 2023.1 (Vaughan) Release Notes
- Virtual Thread usage through `Executor` configuration
- Java 21 Compatibility
- Support for Kotlin Value Classes
- Explore optimizations for Checkpoint/Restore
- Single Query Loading for Spring Data JDBC
- Migrate documentation to Antora
Details
- Spring Data Build - 3.2
Kotlin Value Classes are a language feature that allows creating a wrapper class around a single value with reduced heap allocations. Value classes are inlined on the JVM; inlining flattens the value type into the property declaration site. Using value classes imposes name mangling and introduces specific behavior to how `copy` methods and constructors are generated by Kotlin.
Spring Data now can:
- Instantiate classes that define properties using Kotlin Value Classes
- Retrieve and set properties using getters/setters and the `copy` method (for Data classes)
Value classes such as the one in the following example can now be used for persistence operations:
```kotlin
@JvmInline
value class Email(val email: String)

data class Person(@Id val id: String, val email: Email)
```
Note that Kotlin's inlining rules can require boxing if the wrapped component is a primitive type or uses inner nullability. In that case the compiled class uses the value class itself as the property type instead of the inlined value type. Stores such as MongoDB represent such models using subdocuments, so watch out for type nullability to avoid type changes, especially for existing data.
Limiting result sizes worked in the past by either using the `Top…`/`First…` keywords (as in `findTop10By`) or `PageRequest` when using pagination. With the recent introduction of `ScrollPosition`, `PageRequest` isn't applicable, and a static limit is sometimes not what your use case requires. Spring Data 3.2 ships a `Limit` type to specify the number of results to be returned dynamically:
```java
interface UserRepository {
  List<User> findByLastname(String lastname, Limit limit);
}

repository.findByLastname("White", Limit.of(10));
repository.findByLastname("White", Limit.unlimited());
```
When working with SQL databases, the schema is an essential part. Spring Data JDBC supports a wide range of schema options, yet when starting with a domain model it can take time to come up with an initial schema. To help you with a code-first approach, Spring Data JDBC ships with an integration to create database changesets using Liquibase.
`LiquibaseChangeSetWriter` is the core class to create change sets. The writer can operate in two modes:
- Initial Schema Creation (without an existing database)
- Differential Schema Migration (against a database connected via JDBC)
Consider the following example:
```java
// Target database connection (H2 over JDBC), used for differential migrations.
H2Database h2Database = new H2Database();
h2Database.setConnection(new JdbcConnection(c));

File changelogYml = new File(new File("my/directory"), "changelog.yml");

// Derive a changeset from the entities known to the mapping context.
LiquibaseChangeSetWriter writer = new LiquibaseChangeSetWriter(relationalMappingContext);
writer.writeChangeSet(new FileSystemResource(changelogYml));
```
`LiquibaseChangeSetWriter` inspects all known entities in the `RelationalMappingContext` and writes a changeset to an existing (or new) changelog file.
Mapping annotations for table and column names, as well as for mapped collections, now accept SpEL expressions to determine table and column names at runtime.
@Table("#{myTenantController.getPersonTableName()}")
class Person {
@Id
@Column("#{myTenantController.getIdentifierColumnName()}") Long id;
}
Expression evaluation leverages Spring Data's `EvaluationContextExtension` mechanism, in which extension beans can contribute SpEL functionality. Note that expression results are used as table/column names. They are sanitized through a default `SqlIdentifierSanitizer.words()`, which allows word characters and underscores to limit the impact of unwanted SQL characters. A different sanitizer can be configured through `RelationalMappingContext`.
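To illustrate the extension mechanism, here is a minimal sketch of an `EvaluationContextExtension` that exposes the `myTenantController` object referenced by the expressions above. `TenantNamingExtension` and `TenantController` are hypothetical names invented for this example:

```java
import java.util.Map;

import org.springframework.data.spel.spi.EvaluationContextExtension;
import org.springframework.stereotype.Component;

// Makes "myTenantController" resolvable in SpEL expressions such as
// #{myTenantController.getPersonTableName()} in @Table/@Column annotations.
@Component
class TenantNamingExtension implements EvaluationContextExtension {

  @Override
  public String getExtensionId() {
    return "tenantNaming";
  }

  @Override
  public Map<String, Object> getProperties() {
    return Map.of("myTenantController", new TenantController());
  }
}

// Hypothetical helper deriving names for the current tenant; plug in your own
// tenant resolution logic here.
class TenantController {

  public String getPersonTableName() {
    return "person_" + currentTenant();
  }

  public String getIdentifierColumnName() {
    return "id";
  }

  private String currentTenant() {
    return "acme"; // stand-in for real tenant lookup
  }
}
```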
The converters of the JDBC and R2DBC modules have evolved in parallel for quite some time without honoring each other. We revised our converters to first map a JDBC `ResultSet` or an R2DBC `Row` to a `RowDocument` and then apply conversion without relying on specifics of the underlying data access API.
- `BasicJdbcConverter` has evolved into `MappingJdbcConverter`, and `BasicJdbcConverter` is now deprecated.
- `BasicRelationalConverter` has evolved into `MappingRelationalConverter`, and `BasicRelationalConverter` is now deprecated.
The new converter infrastructure is capable of running projections within the converter itself. As one of the future enhancements, JDBC repositories can leverage projections without the need to instantiate the underlying entity first.
You can now use Single Query Loading to fetch entire entity graphs with a single query, avoiding the N+1 loading problem. Single Query Loading is significantly more efficient, especially for complex aggregates consisting of many entities, as it uses a single query to materialize results.
Currently, this feature is restricted according to the following rules:
- The aggregate must not have nested collections; this includes `Map`. The plan is to remove this constraint in the future.
- The aggregate must not use `AggregateReference` or embedded entities. The plan is to remove this constraint in the future.
- The database dialect must support it. Of the dialects provided by Spring Data JDBC, all but H2 and HSQL support this. H2 and HSQL don't support analytic functions (also known as window functions).
- It only works for the find methods in `CrudRepository`, not for derived queries and not for annotated queries. The plan is to remove this constraint in the future.
- Single Query Loading needs to be enabled in the `JdbcMappingContext` by calling `setSingleQueryLoadingEnabled(true)`.
If any condition is not fulfilled, Spring Data JDBC falls back to the default approach of loading aggregates.
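As a minimal sketch of the opt-in, assuming you customize the `JdbcMappingContext` instance used by your configuration (the setter is the one named above; everything else stays at its defaults):

```java
import org.springframework.data.jdbc.core.mapping.JdbcMappingContext;

// Opt into Single Query Loading; all the conditions listed above still apply,
// otherwise Spring Data JDBC falls back to the default loading strategy.
JdbcMappingContext mappingContext = new JdbcMappingContext();
mappingContext.setSingleQueryLoadingEnabled(true);
```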
If you are interested in further progress and plans for this feature, please follow https://github.com/spring-projects/spring-data-relational/issues/1445
While the `save` method on `MongoOperations` already allowed replacing a single document based on its `id`, the newly introduced replace operations accept a `Query` parameter. This opens up the possibility of using different criteria to identify the object to replace.
```java
template.update(Jedi.class)
    .matching(where("firstname").is("luke"))
    .replaceWith(...)
    .replaceFirst();
```
The Aggregation Framework is catching up with recent enhancements on the server side (like the `$percentile` and `$median` expressions) but also ships with some internal improvements like an `AggregationVariable` type.
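As a rough illustration of the new server-side expressions, the following sketch issues a `$group` stage built from a raw stage `Document`, so it only assumes MongoDB's documented `$percentile` operator syntax rather than a specific typed builder API; the `score` field is made up for this example:

```java
import java.util.List;

import org.bson.Document;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationOperation;

// $group stage computing the 90th percentile of the (hypothetical) "score"
// field using MongoDB's $percentile expression.
AggregationOperation percentileStage = context -> new Document("$group",
    new Document("_id", null).append("p90",
        new Document("$percentile", new Document("input", "$score")
            .append("p", List.of(0.9))
            .append("method", "approximate"))));

Aggregation aggregation = Aggregation.newAggregation(percentileStage);
```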
Both `ReactiveGridFsTemplate` and `GridFsTemplate` no longer create `GridFSBucket` instances but, if needed, can cache them for reuse.
Declarative `ReadPreference` selection builds upon API introduced in 4.1 that allows defining the desired behaviour on the `Query` level. Now, the `@ReadPreference` annotation lifts this option to the upper layer, enabling read preference selection for an entire repository or selected finder/aggregation methods.
@ReadPreference("primaryPreferred")
public interface PersonRepository extends Repository<Person, String> {
@Query(readPreference = "nearest")
List<Person> findByFirstname(String firstname);
}
Spring Data Cassandra now provides a `CassandraScrollPosition` to leverage scrolling queries returning `Window<T>`. Scrolling is a much more natural fit than pagination through `Slice`. It also allows you to use `WindowIterator`, which uses Cassandra's fetch size (pagination) to scroll across large results:
```java
WindowIterator<Person> iterator = WindowIterator
    .of(scrollPosition -> personRepository.findAllWindowByLastname("White", scrollPosition, Limit.of(2)))
    .startingAt(CassandraScrollPosition.initial());
```
The Cassandra Data module now uses (like MongoDB) the common infrastructure to provide a store-specific value conversion implementation through `CassandraValueConverter` and `CassandraConversionContext`. Next to the declarative approach using `@ValueConverter` on property declarations, `CassandraCustomConversions` allows programmatic registration of converter implementations that apply only to defined properties.
```java
CassandraCustomConversions.create(it -> {
  it.configurePropertyConversions(registrar -> {
    registrar.registerConverter(Person.class, "ssn", new CassandraValueConverter<>() { ... });
  });
});

class Person {

  @ValueConverter(EncryptingConverter.class)
  String socialSecurityNumber;
}

class EncryptingConverter implements CassandraValueConverter<String, String> {
  …
}
```
With Java 21 introducing `SequencedCollection`, we had to revisit our `RedisList`, as it implements both `List` and `Deque`. Both types return a reversed view of their underlying collection; however, since Java doesn't provide a union type combining `List` and `Deque`, we moved ahead and made our `RedisList` compatible with the newly introduced library functionality. You can use the reversed view of `RedisList` with Java 17 already. On Java 21, the `RedisList.reversed()` method matches the `SequencedCollection.reversed()` signature.
```java
RedisList<Person> list = …;

list.reversed().stream(); // consume the underlying list with reversed semantics
```
For our efforts on Checkpoint/Restore, we refactored the Jedis and Lettuce `RedisConnectionFactory` implementations into `Lifecycle` beans. Connection factories can be started, stopped, and restarted. By default, connection factories are started upon initialization (i.e. `afterPropertiesSet`) to retain existing functionality. Connection factories can be stopped to shut down connections and connection pools before taking a snapshot, and restarted when starting from a checkpoint snapshot.
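A minimal sketch of that lifecycle, assuming a manually managed `LettuceConnectionFactory` (in a Spring context, the container drives these callbacks for you):

```java
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

LettuceConnectionFactory factory = new LettuceConnectionFactory();
factory.afterPropertiesSet(); // started by default upon initialization

factory.stop();  // shut down connections and pools before the snapshot is taken
// … checkpoint is taken here; the process resumes from the snapshot later …
factory.start(); // re-establish connectivity after restore
```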
- M1 - Jul 14, 2023
- M2 - Aug 18, 2023
- M3 - Sept 15, 2023
- RC1 - Oct 13, 2023
- GA - Nov 17, 2023
- OSS Support until: Nov 17, 2024
- End of Life: March 17, 2026