Release Train 2021.1 (Q) Release Notes
- Upgrade to Querydsl 5.0
- MongoDB `@DocumentReference`, schema derivation for encrypted fields, MongoDB 5.0 Time Series support
- Support for streaming large result sets in Spring Data JDBC, projections, and SQL Builder refinements around conditions, `JOIN`s, and `SELECT` projections
- Support for impersonation and support for Querydsl in Neo4j
Details
-
Spring Data Build - 2.6
Domain models can now use the `@Identity` annotation of jMolecules to denote the identifier property of an aggregate root, improving the developer experience when using jMolecules.
`QuerydslPredicateExecutor`, `QueryByExampleExecutor`, and their reactive variants now define a query method (`findBy(…)`) that allows fluent definition of queries. The fluent API allows customization of projections, sort properties, and various terminal methods, including consumption of results as a `Stream` and other return types. Support for the fluent API is available in all store modules that already support Querydsl or Query by Example.
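As a sketch of the fluent API (assuming a hypothetical `Person` entity with a Querydsl-generated `QPerson` class), a query might look like this:

```java
// Hypothetical repository combining CRUD and Querydsl support.
interface PersonRepository extends CrudRepository<Person, Long>, QuerydslPredicateExecutor<Person> {}

// Fluent definition: filter by predicate, apply a sort, consume as a Stream.
try (Stream<Person> people = repository.findBy(
    QPerson.person.lastname.eq("Matthews"),
    query -> query.sortBy(Sort.by("firstname")).stream())) {
  people.forEach(System.out::println);
}
```

Other terminal methods such as `first()`, `one()`, `page(…)`, `exists()`, and `count()` are available on the same fluent query.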
Repositories that support the `deleteInBatch` and `deleteAllInBatch` methods now publish `@DomainEvents` when deleting multiple aggregates through the batch functionality. Among Spring Data’s core modules, JPA now supports this functionality.
Repositories can now make use of SmallRye Mutiny types, such as `Uni` and `Multi`, in repository query methods. These types also serve as markers to detect whether a repository is a reactive one.
RxJava 2 support is now deprecated for removal with Spring Data 3.0. RxJava 2 is end-of-life as of February 28, 2021, and we recommend using RxJava 3 instead.
`SimpleTypeInformationMapper` now accepts a `ClassLoader` to ensure class visibility from the type mapper when resolving a class name into a class. Arrangements without a configured class loader fall back to the context or system class loader, which might not have access to a custom class loader (such as Spring Boot’s `AppClassLoader`); that can lead to non-resolvable type hints when reading an entity on, for example, the `ForkJoinPool`.
Tickets
M3
RC1
Introduction of `JpaRepositoryFactory.getRepositoryFragments(…)` for Easier Customization of Fragments
`JpaRepositoryFactory.getRepositoryFragments(RepositoryMetadata, EntityManager, EntityPathResolver, CrudMethodMetadata)` allows customization of fragments, providing more contextual information without requiring reflective access to fields. The related ticket contains additional information.
Using `@DocumentReference` offers a flexible way to reference entities in MongoDB. Document references do not follow a specific format. They can be literally anything: a single value, an entire document, or basically anything else that can be stored in MongoDB. By default, the mapping layer uses the referenced entity’s id value for storage and retrieval, as in the following sample:
```java
class Account {
  @Id String id;
}

class Person {
  @Id String id;
  @DocumentReference List<Account> accounts;
}
```
Document references allow customizing lookup queries, the target database and collection name, and much more. The reference documentation on document references explains how to use references in greater detail.
MongoDB 5.0 introduced Time Series collections, which are optimized to efficiently store documents (such as measurements or events) over time. Those collections need to be created before any data can be inserted. Collections can be created by either running the `createCollection` command, defining time series collection options, or extracting options from a `@TimeSeries` annotation (which is used on domain classes).
```java
@TimeSeries(collection = "weather", timeField = "timestamp")
public class Measurement {

  String id;
  Instant timestamp;
  // ...
}
```
See the Spring Data MongoDB documentation for further reference.
Wildcard indexes can be created programmatically or declaratively. The annotation-driven declaration style covers various use cases, such as full-document indexes or indexes for maps. See the documentation for wildcard indexes to learn about the details.
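As a hedged sketch of the declarative style, a full-document wildcard index can be set up with the `@WildcardIndexed` annotation (the `Product` type here is hypothetical):

```java
// A wildcard index over all fields of the document; MongoDB indexes
// whatever fields are present, which suits unstructured payloads.
@Document
@WildcardIndexed
class Product {

  @Id String id;
  Map<String, Object> attributes; // arbitrary, schema-less attributes
}
```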
Properties whose values are `null` were previously skipped when writing a `Document` from an entity. `@Field(write=…)` can now be used to control whether to skip such properties (the default) or to force a `null` property to be written to the `Document`.
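A minimal sketch of the new write control (the property names are illustrative):

```java
class Person {

  @Id String id;

  // Default behavior: the property is skipped entirely when its value is null.
  @Field(write = Field.Write.NON_NULL)
  String nickname;

  // Forces a null value to be written to the Document explicitly.
  @Field(write = Field.Write.ALWAYS)
  String middleName;
}
```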
MongoDB’s client-side field-level encryption requires a schema map to let the driver transparently encrypt and decrypt fields of a document. To simplify the configuration, properties in the domain model can be annotated with `@Encrypted`. `MongoJsonSchemaCreator` can create the schema map for Mongo’s `AutoEncryptionSettings`, based on the domain model. Schema generation considers the algorithm and key identifiers.
The documentation on Encrypted Fields explains the configuration in detail.
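As a hedged sketch of the annotation-driven schema derivation (the key id and domain type are placeholders, not working configuration):

```java
// @Encrypted carries algorithm and key information that
// MongoJsonSchemaCreator turns into a client-side encryption schema.
@Document
@Encrypted(keyId = "...") // placeholder: reference to a data encryption key
class Patient {

  @Id String id;

  @Encrypted(algorithm = "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic")
  String ssn; // encrypted deterministically so it remains queryable
}
```

The derived schema, obtained via something like `MongoJsonSchemaCreator.create(mappingContext).createSchemaFor(Patient.class)`, can then feed the driver's `AutoEncryptionSettings`.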
Tickets
M1
M2
M3
When using the new Neo4j 4.4 driver together with the Neo4j 4.4 database, the same mechanism that allows dynamic database selection supports impersonation. The feature is available in both imperative and reactive variants. That feature allows having only one driver instance for a technical user and elevating its permissions to a tenant that may be derived from (for example) a Spring Security context, eliminating the need for several connections.
It is now possible to use Querydsl `Predicate` instances to run Cypher queries through `QuerydslPredicateExecutor` and its reactive variant, `ReactiveQuerydslPredicateExecutor`. Querydsl predicates are translated to Cypher by using the Cypher DSL API.
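A minimal sketch (the `Person` entity and its Querydsl-generated `QPerson` class are hypothetical):

```java
// Adding QuerydslPredicateExecutor to a Neo4j repository enables
// predicate-based queries that are translated to Cypher via the Cypher DSL.
interface PersonRepository extends Neo4jRepository<Person, Long>, QuerydslPredicateExecutor<Person> {}

// Usage: the predicate below ends up as a Cypher WHERE clause.
Iterable<Person> smiths = repository.findAll(QPerson.person.lastName.eq("Smith"));
```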
Apart from the changes listed above, there have also been lots of minor tweaks and improvements, including:

- Better projections
- Possibility to mark single properties as read-only
- Using Spring beans as converters for attributes. Also, converters can now be applied to whole collection attributes, not only to each member of a collection attribute individually.
When Spring Data Elasticsearch stores an entity in Elasticsearch, it automatically adds a field named `_class` to the document (see Type Hints in the documentation).
This can lead to problems when Spring Data Elasticsearch writes data to an index that was not created by Spring Data Elasticsearch and therefore does not have a mapping defined for this field, and when the user is not allowed to add new fields to the index mapping.
In these cases it is necessary to disable the type hints. This can be done for the whole application or for single indices; for details, see the section "Disabling Type Hints" in the documentation linked above.
Since version 7.12, Elasticsearch has runtime fields: fields that are evaluated at query time and can be used in the query itself (in contrast to scripted fields, which are only created on the search result).
Spring Data Elasticsearch supports runtime fields from version 4.3.0 on; please refer to the Runtime Fields section in the reference documentation.
Spring Data Elasticsearch already allowed the user to register custom converters that convert a property of a given type to a value that Elasticsearch understands (like a String or a Map) and back. Such a converter is then used for every property of this type, regardless of the entity in which it appears.
In cases where a converter is needed for a dedicated property of just one entity, this can now be done with a custom property value converter.
The following code shows a (pretty simple) example. Let’s assume that there is an entity where one String property should be stored in reverse in Elasticsearch. All other Strings should be unmodified. First the user needs to define the converter:
```java
private static class ReverseStringValueConverter implements PropertyValueConverter {

  @Override
  public Object write(Object value) {
    return reverse(value);
  }

  @Override
  public Object read(Object value) {
    return reverse(value);
  }

  private static String reverse(Object o) {
    Assert.notNull(o, "o must not be null");
    return new StringBuilder().append(o.toString()).reverse().toString();
  }
}
```
In the next step, this converter will be defined on a property:
```java
@Document(...)
class EntityWithCustomValueConverters {

  @Id
  private String id;

  @ValueConverter(ReverseStringValueConverter.class)
  private String convert;

  private String dontConvert;

  // getter, setter ...
}
```
The `convert` property will be reversed when stored in and retrieved from Elasticsearch; the other one (`dontConvert`) will not.
Besides the new features, a couple of bugs were fixed and minor additions were made, check the full list of resolved tickets:
Tickets
M1
- #1767 - DynamicMapping annotation should be applicable to any object field.
- #1454 - Allow disabling TypeHints.
- #1787 - Search with MoreLikeThisQuery should use Pageable.
- #1792 - Upgrade to Elasticsearch 7.12.1.
- #1800 - Improve handling of immutable classes.
- #1255 - Add pipeline aggregations to NativeSearchQuery.
- #1816 - Allow runtime_fields to be defined in the index mapping.
- #1831 - Upgrade to Elasticsearch 7.13.0.
- #1839 - Upgrade to Elasticsearch 7.13.1.
- #1862 - Add native support for range field types by using a range object.
- #1864 - Upgrade to Elasticsearch 7.13.3.
M3
RC1
- #1938 - Add @QueryAnnotation meta annotation to @Query.
- #1941 - Upgrade to Elasticsearch 7.15.0.
- #1909 - Add repository search for nullable or empty properties.
- #1950 - AbstractElasticsearchTemplate.searchForStream use Query scrolltime.
- #1945 - Enable custom converters for single fields.
- #1911 - Supply a custom Sort.Order providing Elasticsearch specific parameters.
- #769 - Support for field exclusion from source.
GA
Queries using the `IN` relation in combination with bind markers now use a single parameter bind marker for efficient statement reuse when using prepared statements. Using a single bind marker avoids unrolling bound collections into multiple bind markers, which made prepared statement caching dependent on the actual parameters. Previously, this led to increased memory usage.
`PrimaryKeyClassEntityMetadataVerifier`, which verifies mapping metadata for primary key types, no longer requires that primary key types subclass only `java.lang.Object`. Records use `java.lang.Record` as their superclass, so the subclass check is no longer applied. We encourage using records for composite and partitioning primary keys, as those are not updatable in Cassandra itself.
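As a hedged sketch of a record used as a composite primary key (the type and column names are illustrative, not from the release notes):

```java
// A Java record as a @PrimaryKeyClass: immutable by design, which fits
// Cassandra's non-updatable primary key columns.
@PrimaryKeyClass
record MeasurementKey(
    @PrimaryKeyColumn(name = "sensor_id", type = PrimaryKeyType.PARTITIONED) String sensorId,
    @PrimaryKeyColumn(name = "taken_at", type = PrimaryKeyType.CLUSTERED) Instant takenAt) {}

@Table
class Measurement {

  @PrimaryKey MeasurementKey key;
  double value;
}
```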
It is now possible to specify write options when using batch operations so that you can customize TTL, timestamp, and other options during batch writes.
With the release of Java 17, Spring Data for Apache Geode (SDG) 2.6/Q now builds on OpenJDK 17. Additionally, SDG was rebased on Apache Geode `1.14.0`.
Support for eviction and expiration configuration via SDG’s `@EnableEviction` and `@EnableExpiration` annotations was added to Regions configured via `@EnableCachingDefinedRegions` when using Spring Data for Apache Geode as a caching provider in Spring’s Cache Abstraction. More information can be found in Issues #518 and #519, respectively.
As of this version, you can use a wide range of Redis 6.2 commands, such as `LMOVE`/`BLMOVE`, `ZMSCORE`, `ZRANDMEMBER`, `HRANDFIELD`, and many more. See the 2.6.0-M1 Release Notes for a full list of introduced commands.
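As a small, hedged sketch of one of the new commands through the high-level template API (the key `"leaderboard"` is a placeholder):

```java
// ZRANDMEMBER via ZSetOperations: picks a random member of a sorted set.
StringRedisTemplate template = new StringRedisTemplate(connectionFactory);
String randomPlayer = template.opsForZSet().randomMember("leaderboard");
```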
`LettuceConnectionFactory` can now be configured by using a Lettuce `RedisURI`. This method creates a `RedisConfiguration` that can then be used to create `LettuceConnectionFactory`.
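A minimal sketch of the `RedisURI`-based configuration (the URI value is a placeholder):

```java
// Translate a Lettuce RedisURI into a Spring Data RedisConfiguration,
// then use it to create the connection factory.
RedisURI uri = RedisURI.create("redis://localhost:6379/0");
RedisConfiguration configuration = LettuceConnectionFactory.createRedisConfiguration(uri);
LettuceConnectionFactory factory = new LettuceConnectionFactory(configuration);
factory.afterPropertiesSet();
```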
It is now possible to configure a `BatchStrategy` for `RedisCache`. For now, the batch strategy supports cache clearing by using either `KEYS` or `SCAN` with a configurable batch size. For example, the following configures a non-locking cache writer with a `SCAN` batching strategy:

```java
RedisCacheManagerBuilder.fromCacheWriter(
    RedisCacheWriter.nonLockingRedisCacheWriter(connectionFactory, BatchStrategies.scan(42)));
```
This release includes support for `SubscriptionListener` when using `MessageListener` for subscription confirmation callbacks. `ReactiveRedisMessageListenerContainer` and `ReactiveRedisOperations` provide `receiveLater(…)` and `listenToLater(…)` methods to await until Redis acknowledges the subscription.
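A hedged sketch of awaiting the subscription confirmation (the channel name `"orders"` is a placeholder):

```java
// receiveLater(…) emits the message Flux only once Redis has
// acknowledged the subscription, avoiding missed messages on startup.
container.receiveLater(ChannelTopic.of("orders"))
    .doOnNext(it -> System.out.println("Subscription confirmed"))
    .flatMapMany(Function.identity())
    .subscribe(message -> System.out.println(message.getMessage()));
```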
As a housekeeping task, we extracted `QuerydslKeyValuePredicateExecutor` into its own fragment. `QuerydslKeyValueRepository`, which subclasses `SimpleKeyValueRepository` and therefore limits composition flexibility, is now deprecated. You should not be affected by this change unless you apply further customizations on Key-Value Querydsl support or you use `QuerydslKeyValueRepository` directly.
LDAP repository query methods can now return `Stream` as the return type and use interface and DTO projections, to be consistent with the rest of the Spring Data portfolio.
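A minimal sketch of the new return types (the `Person` entity and the projection interface are hypothetical):

```java
interface PersonRepository extends LdapRepository<Person> {

  // Stream consumption of a potentially large result.
  Stream<Person> findByLastName(String lastName);

  // Interface projection restricting the attributes that are returned.
  Collection<PersonSummary> findByDepartment(String department);
}

interface PersonSummary {
  String getFullName();
}
```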
As a housekeeping task, we extracted `QuerydslPredicateExecutor` into its own fragment. `QuerydslLdapRepository`, which subclasses `SimpleLdapRepository` and therefore limits composition flexibility, is now deprecated. You should not be affected by this change unless you apply further customizations on LDAP Querydsl support or you use `QuerydslLdapRepository` directly.
JDBC repository query methods can now return `Stream` to stream large result sets directly from a `ResultSet`, instead of collecting results into a `List`. This change reduces memory pressure and latency until the first result.
```java
interface PersonRepository extends CrudRepository<Person, Long> {

  @Query("SELECT * FROM person WHERE name < :upper and name > :lower")
  Stream<Person> findAsStream(String lower, String upper);
}
```
Repository query methods can now return projections by using either interface or DTO projections, including dynamic projections. Note that projections cannot be used when specifying a custom `RowMapper`.
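A minimal sketch of an interface projection on a JDBC repository (the entity and property names are illustrative):

```java
// Closed interface projection: only the declared getters are populated.
interface PersonSummary {
  String getFirstName();
  String getLastName();
}

interface PersonRepository extends CrudRepository<Person, Long> {

  // Returns projections instead of full aggregates.
  List<PersonSummary> findByLastName(String lastName);

  // Dynamic projection: the caller chooses the result type.
  <T> List<T> findByFirstName(String firstName, Class<T> type);
}
```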
It is now possible to use driver-specific types in domain objects that are translated to SQL values. Simple types are not translated to entities. Instead, their value is passed on as-is for both reading and writing. As of this release, you can use `PGobject` with the Postgres driver and register your own types through dialects if you wish to do so.
The SQL builder usage is growing beyond its intended goal of serving as an internal SQL abstraction for Spring Data JDBC and R2DBC. Based on that demand, `Conditions` can now be used in `JOIN` expressions and `SELECT` projections. The `JOIN` builder also now accepts subselects for select-based joining of rows.
In the Spring Data portfolio, "after load" describes the moment when data has been loaded from the database but has not yet been converted into an entity. Spring Data JDBC, however, used `AfterLoadEvent` as a signal after materializing the entity. To address this deviation, we deprecated `AfterLoadEvent` in favor of `AfterConvertEvent`. Please switch to `AfterConvertEvent` if you use `AfterLoadEvent` in your application.