
Suspect deadlock in InternalLoggerRegistry (intermittent) #3399

Closed
nilsson-petter opened this issue Jan 15, 2025 · 49 comments
Labels: bug (Incorrect, unexpected, or unintended behavior of existing code), waiting-for-maintainer

@nilsson-petter

Description

Intermittent deadlock/thread starvation on startup in InternalLoggerRegistry#getLoggers on Spring Boot 3.3.5 application.

Configuration

Version:
2.24.3

Operating system:
Red Hat Enterprise Linux 8.10 (Ootpa)
Kernel: Linux 4.18.0-553.27.1.el8_10.x86_64

JDK:
openjdk version "21.0.5" 2024-10-15 LTS
OpenJDK Runtime Environment (Red_Hat-21.0.5.0.10-1) (build 21.0.5+10-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-21.0.5.0.10-1) (build 21.0.5+10-LTS, mixed mode, sharing)

Logs

I can't find the thread that has locked <0x00000000e01f0be0>.

"Log4j2-TF-3-ConfigurationFileWatcher-3" #275 [1024631] daemon prio=5 os_prio=0 cpu=9,61ms elapsed=1901,81s tid=0x00007f4164037f80 nid=1024631 waiting on condition  [0x00007f414c0bf000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.5/Native Method)
	- parking to wait for  <0x00000000e01f0be0> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.5/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.5/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.5/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.5/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLoggers(InternalLoggerRegistry.java:84)
	at org.apache.logging.log4j.core.LoggerContext.updateLoggers(LoggerContext.java:812)
	at org.apache.logging.log4j.core.LoggerContext.updateLoggers(LoggerContext.java:802)
	at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:675)
	at org.apache.logging.log4j.core.LoggerContext.onChange(LoggerContext.java:828)
	- locked <0x00000000e01f0a20> (a org.apache.logging.log4j.core.LoggerContext)
	at org.apache.logging.log4j.core.util.AbstractWatcher$ReconfigurationRunnable.run(AbstractWatcher.java:97)
	at java.lang.Thread.runWith(java.base@21.0.5/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.5/Thread.java:1583)

Reproduction

Happens intermittently on deploy. Can't reproduce it at will.

@vy
Member

vy commented Jan 16, 2025

@nilsson-petter, thanks so much for the report. It indeed appears like a bug.

  1. <0x00000000e01f0be0> points to the read-lock of a read-write-lock. The associated write-lock is acquired only at one place: InternalLoggerRegistry#computeIfAbsent(). Do you happen to see any references to that in your thread dump?
  2. Would you mind providing the complete thread dump, please?
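For reference, the locking pattern in question can be sketched roughly like this (a simplified illustration, not the actual Log4j source): readers such as `getLogger()`/`getLoggers()` take the shared read lock, while `computeIfAbsent()` first checks under the read lock and then takes the exclusive write lock to populate the map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified sketch of the registry's locking pattern (not the real code):
// lookups use the shared read lock; creation uses the exclusive write lock.
final class RegistrySketch {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, Object> loggers = new HashMap<>();

    Object getLogger(String name) {
        lock.readLock().lock();
        try {
            return loggers.get(name);
        } finally {
            lock.readLock().unlock();
        }
    }

    Object computeIfAbsent(String name) {
        // Fast path: check under the read lock first.
        Object existing = getLogger(name);
        if (existing != null) {
            return existing;
        }
        // Slow path: take the write lock and create the logger if still absent.
        lock.writeLock().lock();
        try {
            return loggers.computeIfAbsent(name, n -> new Object());
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

With this pattern, a thread parked on the write lock blocks all subsequent read-lock acquisitions on a `NonfairSync`, which matches the pile-up of readers seen in the dump.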

@nilsson-petter
Author

Thank you for the quick response! I can find six references to that method in the thread dump. However, they are all stuck in InternalLoggerRegistry.getLogger, like this:

"qtp1164772301-181" #181 [1024431] prio=5 os_prio=0 cpu=3,07ms elapsed=1950,75s tid=0x00007f415059a910 nid=1024431 waiting on condition  [0x00007f414fdfb000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.5/Native Method)
	- parking to wait for  <0x00000000e01f0be0> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.5/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.5/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.5/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.5/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:539)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.slf4j.Log4jLoggerFactory.newLogger(Log4jLoggerFactory.java:49)
	at org.apache.logging.slf4j.Log4jLoggerFactory.newLogger(Log4jLoggerFactory.java:32)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:52)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:32)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:432)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:457)
	at org.eclipse.jetty.io.IdleTimeout.<clinit>(IdleTimeout.java:35)
	at org.eclipse.jetty.server.ServerConnector.newEndPoint(ServerConnector.java:439)
	at org.eclipse.jetty.server.ServerConnector$ServerConnectorManager.newEndPoint(ServerConnector.java:607)
	at org.eclipse.jetty.server.ServerConnector$ServerConnectorManager.newEndPoint(ServerConnector.java:591)
	at org.eclipse.jetty.io.ManagedSelector.createEndPoint(ManagedSelector.java:384)
	at org.eclipse.jetty.io.ManagedSelector$Accept.run(ManagedSelector.java:890)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:979)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.doRunJob(QueuedThreadPool.java:1209)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1164)
	at java.lang.Thread.runWith(java.base@21.0.5/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.5/Thread.java:1583)

I have attached a redacted thread dump.

log4j_deadlock.txt

@nilsson-petter
Author

nilsson-petter commented Jan 16, 2025

I will provide a bit more context for this issue. We use Spring Kafka in this application: we consume from one topic using ordinary platform threads, and we produce to another topic using a pool of 32 Java 21 virtual threads.

Perhaps these two are not playing along nicely at times.

We copied InternalLoggerRegistry to the classpath and added logging statements for when a thread is

  1. Waiting to obtain a lock (WAITING)
  2. Has acquired a lock (LOCKED)
  3. Has released a lock (UNLOCKED)

The logging statements record which thread is acquiring the lock and the name of the logger involved.
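Roughly, the instrumentation works like this (a hypothetical reconstruction, not the actual patch; here events are collected in a list instead of being logged, to avoid recursing into the registry under test):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;

// Hypothetical sketch of the tracing wrapper: records WAITING before
// acquiring, LOCKED once acquired, and UNLOCKED after releasing.
final class LockTracer {
    final List<String> events = new ArrayList<>();

    void runLocked(Lock lock, String method, String loggerName, Runnable body) {
        String thread = Thread.currentThread().getName();
        events.add("WAITING " + method + " name=" + loggerName + " thread=" + thread);
        lock.lock();
        try {
            events.add("LOCKED " + method + " name=" + loggerName + " thread=" + thread);
            body.run(); // original critical section runs here
        } finally {
            lock.unlock();
            events.add("UNLOCKED " + method + " name=" + loggerName + " thread=" + thread);
        }
    }
}
```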

It starts to look a bit funky closer to the end, with virtual-thread-spring-kafka-producer-30 and virtual-thread-spring-kafka-producer-0 competing.

org.apache.kafka.clients.producer.internals.BuiltInPartitioner is the only logger I can find in my logs that is being initialized via computeIfAbsent() by two threads at the same time.

At the end, there are three WAITING statements for the read lock. The application did not recover from this and went OutOfMemory shortly after.

WAITING getLogger() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-3
LOCKED getLogger() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-3
UNLOCKED getLogger() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-3

WAITING computeIfAbsent() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-3
LOCKED computeIfAbsent() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-3
UNLOCKED computeIfAbsent() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-3

WAITING getLogger() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-3
LOCKED getLogger() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-3
UNLOCKED getLogger() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-3

WAITING computeIfAbsent() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-3
LOCKED computeIfAbsent() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-3
UNLOCKED computeIfAbsent() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-3

WAITING getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-30

WAITING getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-0
LOCKED getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-0
UNLOCKED getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-0

WAITING computeIfAbsent() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-0

LOCKED getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-30
UNLOCKED getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-30

WAITING computeIfAbsent() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-30
LOCKED computeIfAbsent() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-30
UNLOCKED computeIfAbsent() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-30

WAITING getLogger() name=org.apache.kafka.common.record.MemoryRecords thread=virtual-thread-spring-kafka-producer-30
WAITING getLogger() name=org.apache.kafka.common.requests.OffsetFetchRequest thread=kafka-consumer-0-C-1
WAITING getLogger() name=org.eclipse.jetty.io.IdleTimeout thread=qtp1380654192-171

@nilsson-petter
Author

Happened again in the same way, with BuiltInPartitioner leaving the state somewhat unclear.

WAITING readLock getLogger() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-28
LOCKED readLock getLogger() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-28
UNLOCKED readLock getLogger() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-28
WAITING writeLock computeIfAbsent() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-28
LOCKED writeLock computeIfAbsent() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-28
UNLOCKED writeLock computeIfAbsent() name=org.apache.kafka.common.utils.KafkaThread thread=virtual-thread-spring-kafka-producer-28

WAITING readLock getLogger() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-28
LOCKED readLock getLogger() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-28
UNLOCKED readLock getLogger() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-28
WAITING writeLock computeIfAbsent() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-28
LOCKED writeLock computeIfAbsent() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-28
UNLOCKED writeLock computeIfAbsent() name=io.confluent.kafka.schemaregistry.avro.AvroSchema thread=virtual-thread-spring-kafka-producer-28

WAITING readLock getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-29
LOCKED readLock getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-29
UNLOCKED readLock getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-29
WAITING readLock getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-4
LOCKED readLock getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-4
UNLOCKED readLock getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-4
WAITING writeLock computeIfAbsent() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-4
LOCKED writeLock computeIfAbsent() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-4
WAITING writeLock computeIfAbsent() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-29
WAITING readLock getLogger() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-21
UNLOCKED writeLock computeIfAbsent() name=org.apache.kafka.clients.producer.internals.BuiltInPartitioner thread=virtual-thread-spring-kafka-producer-4

WAITING readLock getLogger() name=org.apache.kafka.common.record.MemoryRecords thread=virtual-thread-spring-kafka-producer-4
WAITING readLock getLogger() name=org.apache.kafka.common.requests.OffsetFetchRequest thread=kafka-consumer-2-C-1
WAITING readLock getLogger() name=com.j_spaces.core.cache.EntriesIter thread=GS-LRMI-Connection-pool-1-thread-1
WAITING readLock getLogger() name=org.eclipse.jetty.io.IdleTimeout thread=qtp728003396-183
WAITING readLock getLogger() name=io.grpc.netty.shaded.io.netty.util.concurrent.PromiseNotifier thread=grpc-default-worker-ELG-1-2

@nilsson-petter
Author

I have managed to reproduce the deadlock in our codebase. It occurs when we produce a large number of messages in a short time span to a KafkaTemplate from several virtual threads. The test uses 32 threads.

When switching to non-virtual (platform) threads, no deadlock can be observed.
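The stress pattern can be sketched in a self-contained way like this (hypothetical; it replaces KafkaTemplate and the real registry with a minimal map guarded by a ReentrantReadWriteLock, the same read-then-write access pattern InternalLoggerRegistry uses; requires JDK 21+ for virtual threads):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical, self-contained sketch of the reproduction scenario:
// 32 virtual threads hammering a read-write-lock-guarded registry.
public class VirtualThreadLockStress {
    private static final ReadWriteLock LOCK = new ReentrantReadWriteLock();
    private static final Map<String, Object> REGISTRY = new HashMap<>();

    static Object computeIfAbsent(String name) {
        LOCK.readLock().lock();
        try {
            Object value = REGISTRY.get(name);
            if (value != null) {
                return value;
            }
        } finally {
            LOCK.readLock().unlock();
        }
        LOCK.writeLock().lock();
        try {
            return REGISTRY.computeIfAbsent(name, n -> new Object());
        } finally {
            LOCK.writeLock().unlock();
        }
    }

    static int run() {
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 32; i++) {
                pool.submit(() -> {
                    for (int j = 0; j < 10_000; j++) {
                        computeIfAbsent("logger-" + (j % 8)); // 8 contended names
                    }
                });
            }
        } // close() waits for all submitted tasks to finish
        return REGISTRY.size();
    }

    public static void main(String[] args) {
        System.out.println("done, registry size=" + run());
    }
}
```

On a healthy JDK this completes quickly; on the affected setup, a pattern like this is where the pile-up on the read lock was observed.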

@vy
Member

vy commented Jan 17, 2025

@nilsson-petter, would you mind testing whether you can still reproduce the problem using virtual threads and JDK 24¹ EA, please?

¹ JDK 24 will ship JEP 491, delivering several improvements for virtual-thread pinning issues.
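For context, the kind of pinning JEP 491 addresses can be sketched like this (illustrative only, assuming JDK 21+):

```java
// Illustrative sketch: before JEP 491 (e.g. on JDK 21), a virtual thread
// that blocks while holding a monitor pins its carrier thread; with only a
// few carriers, enough pinned virtual threads can starve the scheduler.
// Run with -Djdk.tracePinnedThreads=full on JDK 21 to see pinning reports.
public class PinningSketch {
    private static final Object MONITOR = new Object();

    static Thread.State runAndJoin() throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (MONITOR) { // monitor held...
                try {
                    Thread.sleep(100); // ...while blocking: pins the carrier on JDK 21
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
        return vt.getState(); // TERMINATED once joined
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("virtual thread state after join: " + runAndJoin());
    }
}
```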

@nilsson-petter
Author

I cannot reproduce it with virtual threads when using JDK 24 instead of 21.

Local tests run on

macOS 15.2

openjdk version "24-ea" 2025-03-18
OpenJDK Runtime Environment (build 24-ea+31-3600)
OpenJDK 64-Bit Server VM (build 24-ea+31-3600, mixed mode, sharing)

@tristantarrant

We've been seeing these in the Infinispan test suite, and we don't enable virtual threads by default.

2025-01-17 05:28:09
Full thread dump OpenJDK 64-Bit Server VM (21.0.4+7-LTS mixed mode, sharing):

"testng-ConflictManagerTest" #47 [950191] prio=5 os_prio=0 cpu=14892.05ms elapsed=436.52s tid=0x00007f02818f49d0 nid=950191 waiting on condition  [0x00007f02125f1000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.spi.LoggerContext.getLogger(LoggerContext.java:59)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:571)
	at org.apache.logging.log4j.LogManager.getFormatterLogger(LogManager.java:454)
	at org.jgroups.logging.Log4J2LogImpl.<init>(Log4J2LogImpl.java:26)
	at org.jgroups.logging.LogFactory.getLog(LogFactory.java:69)
	at org.jgroups.stack.Protocol.<init>(Protocol.java:68)
	at org.jgroups.protocols.pbcast.NAKACK2.<init>(NAKACK2.java:42)
	at java.lang.invoke.LambdaForm$DMH/0x00007f0218ce3400.newInvokeSpecial(java.base@21.0.4/LambdaForm$DMH)
	at java.lang.invoke.Invokers$Holder.invokeExact_MT(java.base@21.0.4/Invokers$Holder)
	at jdk.internal.reflect.DirectConstructorHandleAccessor.invokeImpl(java.base@21.0.4/DirectConstructorHandleAccessor.java:86)
	at jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(java.base@21.0.4/DirectConstructorHandleAccessor.java:62)
	at java.lang.reflect.Constructor.newInstanceWithCaller(java.base@21.0.4/Constructor.java:502)
	at java.lang.reflect.Constructor.newInstance(java.base@21.0.4/Constructor.java:486)
	at org.jgroups.stack.Configurator.createLayer(Configurator.java:190)
	at org.jgroups.stack.Configurator.createProtocols(Configurator.java:170)
	at org.jgroups.stack.Configurator.createProtocolsAndInitializeAttrs(Configurator.java:104)
	at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:65)
	at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:55)
	at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:439)
	at org.jgroups.JChannel.init(JChannel.java:916)
	at org.jgroups.JChannel.<init>(JChannel.java:128)
	at org.jgroups.JChannel.<init>(JChannel.java:118)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.buildChannel(JGroupsTransport.java:708)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.initChannel(JGroupsTransport.java:467)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:451)
	at org.infinispan.remoting.transport.jgroups.CorePackageImpl$2.start(CorePackageImpl.java:64)
	at org.infinispan.remoting.transport.jgroups.CorePackageImpl$2.start(CorePackageImpl.java:49)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.invokeStart(BasicComponentRegistryImpl.java:616)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:607)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:576)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:807)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.startDependencies(BasicComponentRegistryImpl.java:634)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.doStartWrapper(BasicComponentRegistryImpl.java:598)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl.startWrapper(BasicComponentRegistryImpl.java:576)
	at org.infinispan.factories.impl.BasicComponentRegistryImpl$ComponentWrapper.running(BasicComponentRegistryImpl.java:807)
	at org.infinispan.factories.GlobalComponentRegistry.preStart(GlobalComponentRegistry.java:307)
	at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:241)
	at org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:783)
	at org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:751)
	at org.infinispan.test.MultipleCacheManagersTest.addClusterEnabledCacheManager(MultipleCacheManagersTest.java:278)
	at org.infinispan.test.MultipleCacheManagersTest.createClusteredCaches(MultipleCacheManagersTest.java:444)
	at org.infinispan.test.MultipleCacheManagersTest.createClusteredCaches(MultipleCacheManagersTest.java:415)
	at org.infinispan.partitionhandling.BasePartitionHandlingTest.createCacheManagers(BasePartitionHandlingTest.java:79)
	at org.infinispan.conflict.impl.ConflictManagerTest.createCacheManagers(ConflictManagerTest.java:71)
	at org.infinispan.test.MultipleCacheManagersTest.callCreateCacheManagers(MultipleCacheManagersTest.java:124)
	at org.infinispan.test.MultipleCacheManagersTest.createBeforeMethod(MultipleCacheManagersTest.java:134)
	at java.lang.invoke.LambdaForm$DMH/0x00007f0218464c00.invokeVirtual(java.base@21.0.4/LambdaForm$DMH)
	at java.lang.invoke.LambdaForm$MH/0x00007f0218c78400.invoke(java.base@21.0.4/LambdaForm$MH)
	at java.lang.invoke.Invokers$Holder.invokeExact_MT(java.base@21.0.4/Invokers$Holder)
	at jdk.internal.reflect.DirectMethodHandleAccessor.invokeImpl(java.base@21.0.4/DirectMethodHandleAccessor.java:153)
	at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(java.base@21.0.4/DirectMethodHandleAccessor.java:103)
	at java.lang.reflect.Method.invoke(java.base@21.0.4/Method.java:580)
	at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:124)
	at org.testng.internal.MethodInvocationHelper.invokeMethodConsideringTimeout(MethodInvocationHelper.java:59)
	at org.testng.internal.Invoker.invokeConfigurationMethod(Invoker.java:458)
	at org.testng.internal.Invoker.invokeConfigurations(Invoker.java:222)
	at org.testng.internal.Invoker.invokeMethod(Invoker.java:523)
	at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:719)
	at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:989)
	at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:125)
	at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:109)
	at org.testng.TestRunner.privateRun(TestRunner.java:648)
	at org.testng.TestRunner.run(TestRunner.java:505)
	at org.testng.SuiteRunner.runTest(SuiteRunner.java:455)
	at org.testng.SuiteRunner.access$000(SuiteRunner.java:40)
	at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:489)
	at org.testng.internal.thread.ThreadUtil$1.call(ThreadUtil.java:52)
	at java.util.concurrent.FutureTask.run(java.base@21.0.4/FutureTask.java:317)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000c1822d90> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-7,IracCustomConflictTest-NodeA" #23652 [973529] prio=5 os_prio=0 cpu=7.10ms elapsed=353.61s tid=0x00007f0281676820 nid=973529 waiting on condition  [0x00007f018f9e7000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:539)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:622)
	at org.jboss.logging.Log4j2Logger.<init>(Log4j2Logger.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:32)
	at org.jboss.logging.Logger.getLogger(Logger.java:2467)
	at org.infinispan.util.logging.LogFactory.getLogger(LogFactory.java:28)
	at org.infinispan.util.logging.events.impl.BasicEventLogger.log(BasicEventLogger.java:57)
	at org.infinispan.util.logging.events.impl.DecoratedEventLogger.log(DecoratedEventLogger.java:41)
	at org.infinispan.util.logging.events.EventLogger.info(EventLogger.java:41)
	at org.infinispan.topology.ClusterCacheStatus.startQueuedRebalance(ClusterCacheStatus.java:1007)
	- locked <0x00000000dca10cc0> (a org.infinispan.topology.ClusterCacheStatus)
	at org.infinispan.topology.ClusterCacheStatus.queueRebalance(ClusterCacheStatus.java:152)
	- locked <0x00000000dca10cc0> (a org.infinispan.topology.ClusterCacheStatus)
	at org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy.onJoin(PreferAvailabilityStrategy.java:40)
	at org.infinispan.topology.ClusterCacheStatus.doJoin(ClusterCacheStatus.java:747)
	- locked <0x00000000dca10cc0> (a org.infinispan.topology.ClusterCacheStatus)
	at org.infinispan.topology.ClusterTopologyManagerImpl.lambda$handleJoin$4(ClusterTopologyManagerImpl.java:265)
	at org.infinispan.topology.ClusterTopologyManagerImpl$$Lambda/0x00007f02189fe3e8.apply(Unknown Source)
	at java.util.concurrent.CompletableFuture.uniApplyNow(java.base@21.0.4/CompletableFuture.java:684)
	at java.util.concurrent.CompletableFuture.uniApplyStage(java.base@21.0.4/CompletableFuture.java:662)
	at java.util.concurrent.CompletableFuture.thenApply(java.base@21.0.4/CompletableFuture.java:2200)
	at java.util.concurrent.CompletableFuture.thenApply(java.base@21.0.4/CompletableFuture.java:144)
	at org.infinispan.topology.ClusterTopologyManagerImpl.lambda$handleJoin$5(ClusterTopologyManagerImpl.java:265)
	at org.infinispan.topology.ClusterTopologyManagerImpl$$Lambda/0x00007f02189f5930.apply(Unknown Source)
	at java.util.concurrent.CompletableFuture.uniComposeStage(java.base@21.0.4/CompletableFuture.java:1187)
	at java.util.concurrent.CompletableFuture.thenCompose(java.base@21.0.4/CompletableFuture.java:2341)
	at java.util.concurrent.CompletableFuture.thenCompose(java.base@21.0.4/CompletableFuture.java:144)
	at org.infinispan.topology.ClusterTopologyManagerImpl.handleJoin(ClusterTopologyManagerImpl.java:257)
	at org.infinispan.commands.topology.CacheJoinCommand.invokeAsync(CacheJoinCommand.java:46)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:156)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:133)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1527)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1454)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1649)
	at org.jgroups.JChannel.up(JChannel.java:748)
	at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:941)
	at org.jgroups.protocols.relay.RELAY2.up(RELAY2.java:153)
	at org.jgroups.protocols.FRAG2.up(FRAG2.java:139)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:253)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:261)
	at org.jgroups.protocols.pbcast.GMS.up(GMS.java:853)
	at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:235)
	at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1144)
	at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:880)
	at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:862)
	at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:474)
	at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:669)
	at org.jgroups.protocols.VERIFY_SUSPECT2.up(VERIFY_SUSPECT2.java:105)
	at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:180)
	at org.jgroups.protocols.Discovery.up(Discovery.java:296)
	at org.jgroups.stack.Protocol.up(Protocol.java:360)
	at org.jgroups.protocols.TP.passMessageUp(Unknown Source)
	at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:95)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000dca10fe0> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-7,RetryMechanismTest-NodeK" #36714 [986397] prio=5 os_prio=0 cpu=1.80ms elapsed=311.55s tid=0x00007f01f405b510 nid=986397 waiting on condition  [0x00007f018ab99000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.spi.LoggerContext.getLogger(LoggerContext.java:59)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:571)
	at org.apache.logging.log4j.LogManager.getFormatterLogger(LogManager.java:454)
	at org.jgroups.logging.Log4J2LogImpl.<init>(Log4J2LogImpl.java:26)
	at org.jgroups.logging.LogFactory.getLog(LogFactory.java:69)
	at org.jgroups.stack.Protocol.<init>(Protocol.java:68)
	at org.jgroups.protocols.RED.<init>(RED.java:26)
	at java.lang.invoke.LambdaForm$DMH/0x00007f0218ce2c00.newInvokeSpecial(java.base@21.0.4/LambdaForm$DMH)
	at java.lang.invoke.Invokers$Holder.invokeExact_MT(java.base@21.0.4/Invokers$Holder)
	at jdk.internal.reflect.DirectConstructorHandleAccessor.invokeImpl(java.base@21.0.4/DirectConstructorHandleAccessor.java:86)
	at jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(java.base@21.0.4/DirectConstructorHandleAccessor.java:62)
	at java.lang.reflect.Constructor.newInstanceWithCaller(java.base@21.0.4/Constructor.java:502)
	at java.lang.reflect.Constructor.newInstance(java.base@21.0.4/Constructor.java:486)
	at org.jgroups.stack.Configurator.createLayer(Configurator.java:190)
	at org.jgroups.stack.Configurator.createProtocols(Configurator.java:170)
	at org.jgroups.stack.Configurator.createProtocolsAndInitializeAttrs(Configurator.java:104)
	at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:65)
	at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:55)
	at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:439)
	at org.jgroups.JChannel.init(JChannel.java:916)
	at org.jgroups.JChannel.<init>(JChannel.java:128)
	at org.jgroups.JChannel.<init>(JChannel.java:109)
	at org.jgroups.protocols.relay.config.RelayConfig$PropertiesBridgeConfig.createChannel(RelayConfig.java:178)
	at org.jgroups.protocols.relay.Relayer2.start(Relayer2.java:45)
	at org.jgroups.protocols.relay.RELAY2.startRelayer(RELAY2.java:476)
	at org.jgroups.protocols.relay.RELAY2.lambda$handleView$2(RELAY2.java:239)
	at org.jgroups.protocols.relay.RELAY2$$Lambda/0x00007f0218d2b0b8.run(Unknown Source)
	at org.jgroups.util.TimeScheduler3$Task.run(TimeScheduler3.java:332)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000c3d088b0> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-8,FunctionalTxInMemoryTest-NodeA" #48780 [998446] prio=5 os_prio=0 cpu=2.07ms elapsed=277.36s tid=0x00007f01b0405020 nid=998446 waiting on condition  [0x00007f0183826000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:539)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:622)
	at org.jboss.logging.Log4j2Logger.<init>(Log4j2Logger.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:32)
	at org.jboss.logging.Logger.getLogger(Logger.java:2467)
	at org.infinispan.util.logging.LogFactory.getLogger(LogFactory.java:28)
	at org.infinispan.util.logging.events.impl.BasicEventLogger.log(BasicEventLogger.java:57)
	at org.infinispan.util.logging.events.impl.DecoratedEventLogger.log(DecoratedEventLogger.java:41)
	at org.infinispan.util.logging.events.EventLogger.info(EventLogger.java:41)
	at org.infinispan.topology.ClusterCacheStatus.endReadNewPhase(ClusterCacheStatus.java:483)
	at org.infinispan.topology.ClusterCacheStatus$$Lambda/0x00007f0218a7f898.run(Unknown Source)
	at org.infinispan.topology.RebalanceConfirmationCollector.confirmPhase(RebalanceConfirmationCollector.java:54)
	- locked <0x00000000c2646e58> (a org.infinispan.topology.RebalanceConfirmationCollector)
	at org.infinispan.topology.ClusterCacheStatus.confirmRebalancePhase(ClusterCacheStatus.java:362)
	- locked <0x00000000c2646d20> (a org.infinispan.topology.ClusterCacheStatus)
	at org.infinispan.topology.ClusterTopologyManagerImpl.handleRebalancePhaseConfirm(ClusterTopologyManagerImpl.java:340)
	at org.infinispan.commands.topology.RebalancePhaseConfirmCommand.invokeAsync(RebalancePhaseConfirmCommand.java:43)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:156)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:133)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1527)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1454)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1649)
	at org.jgroups.JChannel.up(JChannel.java:748)
	at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:941)
	at org.jgroups.protocols.FRAG2.up(FRAG2.java:139)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:253)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:261)
	at org.jgroups.protocols.pbcast.GMS.up(GMS.java:853)
	at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:235)
	at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1144)
	at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:880)
	at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:862)
	at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:474)
	at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:669)
	at org.jgroups.protocols.Discovery.up(Discovery.java:296)
	at org.jgroups.stack.Protocol.up(Protocol.java:360)
	at org.jgroups.protocols.TP.passMessageUp(Unknown Source)
	at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:95)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000c3d08cf0> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-7,StateTransferLinkFailuresTest-NodeK" #49037 [998702] prio=5 os_prio=0 cpu=2.61ms elapsed=276.80s tid=0x00007f01c4098380 nid=998702 waiting on condition  [0x00007f0186452000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:539)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:622)
	at org.jboss.logging.Log4j2Logger.<init>(Log4j2Logger.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:32)
	at org.jboss.logging.Logger.getLogger(Logger.java:2467)
	at org.infinispan.util.logging.LogFactory.getLogger(LogFactory.java:28)
	at org.infinispan.util.logging.events.impl.BasicEventLogger.log(BasicEventLogger.java:57)
	at org.infinispan.util.logging.events.impl.DecoratedEventLogger.log(DecoratedEventLogger.java:41)
	at org.infinispan.util.logging.events.EventLogger.info(EventLogger.java:41)
	at org.infinispan.topology.LocalTopologyManagerImpl.handleRebalance(LocalTopologyManagerImpl.java:663)
	at org.infinispan.commands.topology.RebalanceStartCommand.invokeAsync(RebalanceStartCommand.java:59)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:156)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:133)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1527)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1454)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.lambda$up$0(JGroupsTransport.java:1663)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks$$Lambda/0x00007f0218a78688.accept(Unknown Source)
	at java.lang.Iterable.forEach(java.base@21.0.4/Iterable.java:75)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1655)
	at org.jgroups.JChannel.up(JChannel.java:764)
	at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:944)
	at org.jgroups.protocols.relay.RELAY2.up(RELAY2.java:209)
	at org.jgroups.protocols.FRAG2.up(FRAG2.java:161)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:319)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:319)
	at org.jgroups.protocols.pbcast.GMS.up(GMS.java:867)
	at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:255)
	at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:519)
	at org.jgroups.protocols.pbcast.NAKACK2.deliverBatch(NAKACK2.java:1028)
	at org.jgroups.protocols.pbcast.NAKACK2.handleMessageBatch(NAKACK2.java:925)
	at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:758)
	at org.jgroups.protocols.VERIFY_SUSPECT2.up(VERIFY_SUSPECT2.java:119)
	at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:193)
	at org.jgroups.protocols.Discovery.up(Discovery.java:316)
	at org.jgroups.protocols.RED.up(RED.java:123)
	at org.jgroups.protocols.TP.passBatchUp(Unknown Source)
	at org.jgroups.util.SubmitToThreadPool$BatchHandler.passBatchUp(SubmitToThreadPool.java:137)
	at org.jgroups.util.SubmitToThreadPool$BatchHandler.run(SubmitToThreadPool.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000db8d7c28> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-8,GroupsChFunctionalTest-NodeC" #68221 [1017748] prio=5 os_prio=0 cpu=3.46ms elapsed=162.26s tid=0x00007f01f806de40 nid=1017748 waiting on condition  [0x00007f019d4d2000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:539)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:622)
	at org.jboss.logging.Log4j2Logger.<init>(Log4j2Logger.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:32)
	at org.jboss.logging.Logger.getLogger(Logger.java:2467)
	at org.infinispan.util.logging.LogFactory.getLogger(LogFactory.java:28)
	at org.infinispan.util.logging.events.impl.BasicEventLogger.log(BasicEventLogger.java:57)
	at org.infinispan.util.logging.events.impl.DecoratedEventLogger.log(DecoratedEventLogger.java:41)
	at org.infinispan.util.logging.events.EventLogger.info(EventLogger.java:41)
	at org.infinispan.topology.LocalTopologyManagerImpl.handleRebalance(LocalTopologyManagerImpl.java:663)
	at org.infinispan.commands.topology.RebalanceStartCommand.invokeAsync(RebalanceStartCommand.java:59)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:156)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:133)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1527)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1454)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.lambda$up$0(JGroupsTransport.java:1663)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks$$Lambda/0x00007f0218a78688.accept(Unknown Source)
	at java.lang.Iterable.forEach(java.base@21.0.4/Iterable.java:75)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1655)
	at org.jgroups.JChannel.up(JChannel.java:764)
	at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:944)
	at org.jgroups.protocols.FRAG2.up(FRAG2.java:161)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:319)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:319)
	at org.jgroups.protocols.pbcast.GMS.up(GMS.java:867)
	at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:255)
	at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:519)
	at org.jgroups.protocols.pbcast.NAKACK2.deliverBatch(NAKACK2.java:1028)
	at org.jgroups.protocols.pbcast.NAKACK2.handleMessageBatch(NAKACK2.java:925)
	at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:758)
	at org.jgroups.protocols.Discovery.up(Discovery.java:316)
	at org.jgroups.protocols.RED.up(RED.java:123)
	at org.jgroups.protocols.TP.passBatchUp(Unknown Source)
	at org.jgroups.util.SubmitToThreadPool$BatchHandler.passBatchUp(SubmitToThreadPool.java:137)
	at org.jgroups.util.SubmitToThreadPool$BatchHandler.run(SubmitToThreadPool.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000d3000648> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-6,DistSyncOnePhaseWriteSkewTxStateTransferTest-NodeBS" #78708 [1028101] prio=5 os_prio=0 cpu=0.50ms elapsed=109.00s tid=0x00007f01ac40f850 nid=1028101 waiting on condition  [0x00007f02117e4000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.spi.LoggerContext.getLogger(LoggerContext.java:59)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:571)
	at org.apache.logging.log4j.LogManager.getFormatterLogger(LogManager.java:454)
	at org.jgroups.logging.Log4J2LogImpl.<init>(Log4J2LogImpl.java:26)
	at org.jgroups.logging.LogFactory.getLog(LogFactory.java:69)
	at org.jgroups.stack.Protocol.<init>(Protocol.java:68)
	at org.jgroups.protocols.UNICAST3.<init>(UNICAST3.java:39)
	at java.lang.invoke.LambdaForm$DMH/0x00007f0218cf2000.newInvokeSpecial(java.base@21.0.4/LambdaForm$DMH)
	at java.lang.invoke.Invokers$Holder.invokeExact_MT(java.base@21.0.4/Invokers$Holder)
	at jdk.internal.reflect.DirectConstructorHandleAccessor.invokeImpl(java.base@21.0.4/DirectConstructorHandleAccessor.java:86)
	at jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(java.base@21.0.4/DirectConstructorHandleAccessor.java:62)
	at java.lang.reflect.Constructor.newInstanceWithCaller(java.base@21.0.4/Constructor.java:502)
	at java.lang.reflect.Constructor.newInstance(java.base@21.0.4/Constructor.java:486)
	at org.jgroups.stack.Configurator.createLayer(Configurator.java:190)
	at org.jgroups.stack.Configurator.createProtocols(Configurator.java:170)
	at org.jgroups.stack.Configurator.createProtocolsAndInitializeAttrs(Configurator.java:104)
	at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:65)
	at org.jgroups.stack.Configurator.setupProtocolStack(Configurator.java:55)
	at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:439)
	at org.jgroups.JChannel.init(JChannel.java:916)
	at org.jgroups.JChannel.<init>(JChannel.java:128)
	at org.jgroups.JChannel.<init>(JChannel.java:109)
	at org.jgroups.protocols.relay.config.RelayConfig$PropertiesBridgeConfig.createChannel(RelayConfig.java:178)
	at org.jgroups.protocols.relay.Relayer2.start(Relayer2.java:45)
	at org.jgroups.protocols.relay.RELAY2.startRelayer(RELAY2.java:476)
	at org.jgroups.protocols.relay.RELAY2.lambda$handleView$2(RELAY2.java:239)
	at org.jgroups.protocols.relay.RELAY2$$Lambda/0x00007f0218d2b0b8.run(Unknown Source)
	at org.jgroups.util.TimeScheduler3$Task.run(TimeScheduler3.java:332)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000c6224bf0> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-6,IracLocalStateTransferTest-NodeB" #93184 [1042334] prio=5 os_prio=0 cpu=2.77ms elapsed=44.39s tid=0x00007f01cc2e6ef0 nid=1042334 waiting on condition  [0x00007f0195478000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:539)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:622)
	at org.jboss.logging.Log4j2Logger.<init>(Log4j2Logger.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:32)
	at org.jboss.logging.Logger.getLogger(Logger.java:2467)
	at org.infinispan.util.logging.LogFactory.getLogger(LogFactory.java:28)
	at org.infinispan.util.logging.events.impl.BasicEventLogger.log(BasicEventLogger.java:57)
	at org.infinispan.util.logging.events.impl.DecoratedEventLogger.log(DecoratedEventLogger.java:41)
	at org.infinispan.util.logging.events.EventLogger.info(EventLogger.java:41)
	at org.infinispan.topology.LocalTopologyManagerImpl.handleRebalance(LocalTopologyManagerImpl.java:663)
	at org.infinispan.commands.topology.RebalanceStartCommand.invokeAsync(RebalanceStartCommand.java:59)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:156)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:133)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1527)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1454)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.lambda$up$0(JGroupsTransport.java:1663)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks$$Lambda/0x00007f0218a78688.accept(Unknown Source)
	at java.lang.Iterable.forEach(java.base@21.0.4/Iterable.java:75)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1655)
	at org.jgroups.JChannel.up(JChannel.java:764)
	at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:944)
	at org.jgroups.protocols.relay.RELAY2.up(RELAY2.java:209)
	at org.jgroups.protocols.FRAG2.up(FRAG2.java:161)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:319)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:319)
	at org.jgroups.protocols.pbcast.GMS.up(GMS.java:867)
	at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:255)
	at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:519)
	at org.jgroups.protocols.pbcast.NAKACK2.deliverBatch(NAKACK2.java:1028)
	at org.jgroups.protocols.pbcast.NAKACK2.handleMessageBatch(NAKACK2.java:925)
	at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:758)
	at org.jgroups.protocols.VERIFY_SUSPECT2.up(VERIFY_SUSPECT2.java:119)
	at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:193)
	at org.jgroups.protocols.Discovery.up(Discovery.java:316)
	at org.jgroups.protocols.RED.up(RED.java:123)
	at org.jgroups.protocols.TP.passBatchUp(TP.java:1269)
	at org.jgroups.util.SubmitToThreadPool$BatchHandler.passBatchUp(SubmitToThreadPool.java:137)
	at org.jgroups.util.SubmitToThreadPool$BatchHandler.run(SubmitToThreadPool.java:133)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000c1316b68> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-6,IracLocalStateTransferTest-NodeC" #94029 [1043151] prio=5 os_prio=0 cpu=1.56ms elapsed=39.41s tid=0x00007f020c7c85d0 nid=1043151 waiting on condition  [0x00007f0211be9000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:990)
	at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:959)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:151)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:539)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:622)
	at org.jboss.logging.Log4j2Logger.<init>(Log4j2Logger.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:32)
	at org.jboss.logging.Logger.getLogger(Logger.java:2467)
	at org.infinispan.util.logging.LogFactory.getLogger(LogFactory.java:28)
	at org.infinispan.util.logging.events.impl.BasicEventLogger.log(BasicEventLogger.java:57)
	at org.infinispan.util.logging.events.impl.DecoratedEventLogger.log(DecoratedEventLogger.java:41)
	at org.infinispan.util.logging.events.EventLogger.info(EventLogger.java:41)
	at org.infinispan.topology.LocalTopologyManagerImpl.handleRebalance(LocalTopologyManagerImpl.java:663)
	at org.infinispan.commands.topology.RebalanceStartCommand.invokeAsync(RebalanceStartCommand.java:59)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:156)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:133)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1527)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1454)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1649)
	at org.jgroups.JChannel.up(JChannel.java:748)
	at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:941)
	at org.jgroups.protocols.relay.RELAY2.up(RELAY2.java:153)
	at org.jgroups.protocols.FRAG2.up(FRAG2.java:139)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:261)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:253)
	at org.jgroups.protocols.pbcast.GMS.up(GMS.java:853)
	at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:235)
	at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:462)
	at org.jgroups.protocols.pbcast.NAKACK2.deliver(NAKACK2.java:1006)
	at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:894)
	at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:679)
	at org.jgroups.protocols.VERIFY_SUSPECT2.up(VERIFY_SUSPECT2.java:105)
	at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:180)
	at org.jgroups.protocols.Discovery.up(Discovery.java:296)
	at org.jgroups.stack.Protocol.up(Protocol.java:360)
	at org.jgroups.protocols.TP.passMessageUp(TP.java:1232)
	at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:95)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000c62003f0> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"jgroups-9,IracLocalStateTransferTest-NodeH" #95762 [1044837] prio=5 os_prio=0 cpu=0.38ms elapsed=31.12s tid=0x00007f026007c560 nid=1044837 waiting on condition  [0x00007f019cac9000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.$$BlockHound$$_park(java.base@21.0.4/Native Method)
	- parking to wait for  <0x00000000c06328c8> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
	at jdk.internal.misc.Unsafe.park(java.base@21.0.4/Unsafe.java)
	at java.util.concurrent.locks.LockSupport.park(java.base@21.0.4/LockSupport.java:221)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@21.0.4/AbstractQueuedSynchronizer.java:754)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(java.base@21.0.4/AbstractQueuedSynchronizer.java:1079)
	at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(java.base@21.0.4/ReentrantReadWriteLock.java:738)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.getLogger(InternalLoggerRegistry.java:71)
	at org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:566)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:539)
	at org.apache.logging.log4j.core.LoggerContext.getLogger(LoggerContext.java:70)
	at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:622)
	at org.jboss.logging.Log4j2Logger.<init>(Log4j2Logger.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:36)
	at org.jboss.logging.Log4j2LoggerProvider.getLogger(Log4j2LoggerProvider.java:32)
	at org.jboss.logging.Logger.getLogger(Logger.java:2467)
	at org.infinispan.util.logging.LogFactory.getLogger(LogFactory.java:28)
	at org.infinispan.util.logging.events.impl.BasicEventLogger.log(BasicEventLogger.java:57)
	at org.infinispan.util.logging.events.impl.DecoratedEventLogger.log(DecoratedEventLogger.java:41)
	at org.infinispan.util.logging.events.EventLogger.info(EventLogger.java:41)
	at org.infinispan.topology.LocalTopologyManagerImpl.handleRebalance(LocalTopologyManagerImpl.java:663)
	at org.infinispan.commands.topology.RebalanceStartCommand.invokeAsync(RebalanceStartCommand.java:59)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$ReplicableCommandRunner.run(GlobalInboundInvocationHandler.java:156)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleReplicableCommand(GlobalInboundInvocationHandler.java:133)
	at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.handleFromCluster(GlobalInboundInvocationHandler.java:79)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1527)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1454)
	at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1649)
	at org.jgroups.JChannel.up(JChannel.java:748)
	at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:941)
	at org.jgroups.protocols.relay.RELAY2.up(RELAY2.java:153)
	at org.jgroups.protocols.FRAG2.up(FRAG2.java:139)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:261)
	at org.jgroups.protocols.FlowControl.up(FlowControl.java:253)
	at org.jgroups.protocols.pbcast.GMS.up(GMS.java:853)
	at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:235)
	at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:462)
	at org.jgroups.protocols.pbcast.NAKACK2.deliver(NAKACK2.java:1006)
	at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:894)
	at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:679)
	at org.jgroups.protocols.VERIFY_SUSPECT2.up(VERIFY_SUSPECT2.java:105)
	at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:180)
	at org.jgroups.protocols.Discovery.up(Discovery.java:296)
	at org.jgroups.stack.Protocol.up(Protocol.java:360)
	at org.jgroups.protocols.TP.passMessageUp(TP.java:1232)
	at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:95)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@21.0.4/ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@21.0.4/ThreadPoolExecutor.java:642)
	at java.lang.Thread.runWith(java.base@21.0.4/Thread.java:1596)
	at java.lang.Thread.run(java.base@21.0.4/Thread.java:1583)

   Locked ownable synchronizers:
	- <0x00000000dccb3a50> (a java.util.concurrent.ThreadPoolExecutor$Worker)

@vy
Copy link
Member

vy commented Jan 17, 2025

@tristantarrant, which Log4j version? Which JDK distribution and version?

@tristantarrant
Copy link

tristantarrant commented Jan 17, 2025

openjdk version "21.0.4" 2024-07-16 LTS
OpenJDK Runtime Environment (Red_Hat-21.0.4.0.7-1) (build 21.0.4+7-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-21.0.4.0.7-1) (build 21.0.4+7-LTS, mixed mode, sharing)

log4j 2.24.3

It was not happening with 2.23.1

@vy
Copy link
Member

vy commented Jan 20, 2025

@tristantarrant, could you provide a minimal reproducer, please?

@vy
Copy link
Member

vy commented Jan 20, 2025

@tristantarrant, is the stack trace you shared complete/exhaustive? If not, could you provide a complete one, please?

@tristantarrant
Copy link

No, I trimmed it; I shared only the stack traces of the threads that were blocked on that lock. I'm working on the simplest possible reproducer.

@freelon
Copy link

freelon commented Jan 24, 2025

I am stumbling upon (probably) the same problem and can reliably reproduce it. I could share stack traces if that helps, but sadly I cannot hand out a minimal example because we are using an internal library in our setup.

Our setup reads from a ZooKeeper instance to determine which log levels are configured, but the classes that do that contain loggers themselves. Until now this wasn't a problem. I suspect that due to the changes in locking between LoggerRegistry and InternalLoggerRegistry, the thread that receives the answer from ZooKeeper is blocked trying to obtain its logger.

@vy
Copy link
Member

vy commented Jan 24, 2025

I am stumbling upon (probably) the same problem and can reliably reproduce it. I could share stack traces if that helps, but sadly I cannot hand out a minimal example because we are using an internal library in our setup.

Our setup reads from a ZooKeeper instance to determine which log levels are configured, but the classes that do that contain loggers themselves. Until now this wasn't a problem. I suspect that due to the changes in locking between LoggerRegistry and InternalLoggerRegistry, the thread that receives the answer from ZooKeeper is blocked trying to obtain its logger.

@freelon, would you mind helping with the following questions, please?

  1. What is the latest Log4j version that you do not observe this problem?
  2. What is the latest Log4j version that you observe this problem?
  3. Which JDK distribution and version do you use?
  4. Does your application use virtual threads? If so, does upgrading to JDK 24-EA help with mitigating the problems you are having?
  5. Would you mind sharing the stack trace, please? (If it contains information you don't want to disclose, you can email it to the private@logging.apache.org mailing list – see the Logging Services support page for details. This way, only Log4j maintainers will see it.)

Our setup reads from a zookeeper instance which log levels are configured, but the classes that do that contain loggers themselves. Until now this wasn't a problem.

Would you mind elaborating on this, please? Maybe with a pseudo code example demonstrating the case?

@vy vy added waiting-for-user More information is needed from the user and removed waiting-for-maintainer labels Jan 27, 2025
@freelon
Copy link

freelon commented Jan 27, 2025

@vy So for the easy part:

  1. The latest working version is 2.24.1
  2. The latest not working version is 2.24.3 (the problem appears in 2.24.2 as well)
  3. OpenJDK Runtime Environment Temurin-21.0.2+13
  4. We do not use virtual threads. While the app is running on Java 21, our internal libs are still compiled for Java 8, so that's not possible.
  5. The full thread-dump is in the file, I just removed our internal names. You will see some packages beginning with shaded.by.configuration - those are all the dependencies of our internal zookeeper-configuration library that we deliver in a fixed variant as shaded dependencies. Threaddump.txt

For the pseudo-code part I'll try my best explaining it here 🙈 :

We have a custom Log4jConfiguration extends AbstractConfiguration.

It overrides getLoggerConfig(String loggerName):

if (!(loggerName.startsWith("shaded.by.configuration")
        || loggerName.startsWith("com.example.configuration")
        || loggerName.startsWith("org.apache.zookeeper/curation"))) {
    reconfigure();
}
return super.getLoggerConfig(loggerName);

reconfigure obtains all logger settings from our ZooKeeper wrapper, where we can change the log levels during runtime (like setting logger.com.example.important=DEBUG). This is a blocking call on Apache Curator (a library on top of ZooKeeper). We don't call reconfigure for some of our libraries and ZooKeeper itself, so that the system can start up (which it doesn't at this point, but...).

ZooKeeper/Curator uses multiple threads (as you can see in the stack traces) to coordinate itself. While the application main thread waits for ZooKeeper to reach state CONNECTED (while holding the Log4j InternalLoggerRegistry lock), another ZooKeeper thread, which manages the TCP connection to the ZooKeeper server and also initializes a logger, runs into the lock held by the first thread.

Since the TCP connection thread is now blocked, the connection to the ZooKeeper server cannot succeed, and therefore the main thread, waiting for the connection watcher to reach status CONNECTED, never unblocks and never releases the InternalLoggerRegistry lock.

Is this helpful and understandable? If this does not get you any further, I will invest some time to try and build a reproducible example that I can share.
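The cycle described above can be sketched without any Log4j or ZooKeeper code. The following is a minimal, self-terminating model (all names are illustrative, not the actual Log4j API): one thread holds the registry's write lock during "logger creation" while a helper thread, which first needs to obtain its own logger, cannot even acquire the read lock.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the reported hang (hypothetical names, not Log4j code):
// the "main" thread takes the registry write lock before creating a logger, and
// a helper thread cannot take the read lock while creation is in progress.
// tryLock with a timeout is used so that this demo terminates instead of deadlocking.
public class RegistryDeadlockSketch {

    static final ReentrantReadWriteLock registryLock = new ReentrantReadWriteLock();

    /** Returns true if a second thread managed to take the read lock while we hold the write lock. */
    static boolean helperCanGetLogger(long timeoutMillis) {
        registryLock.writeLock().lock(); // computeIfAbsent() enters, then calls the logger factory...
        try {
            final boolean[] acquired = {false};
            Thread helper = new Thread(() -> {
                // ...the factory (transitively) waits on this helper thread, but the
                // helper first needs its own logger, i.e. the read lock.
                try {
                    acquired[0] = registryLock.readLock().tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
                    if (acquired[0]) {
                        registryLock.readLock().unlock();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            helper.start();
            try {
                helper.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return acquired[0];
        } finally {
            registryLock.writeLock().unlock();
        }
    }
}
```

In the real scenario there is no timeout: the helper parks forever on the read lock, and the creator waits forever for the helper, which is exactly the `WAITING (parking)` state visible in the thread dumps above.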

@github-actions github-actions bot added waiting-for-maintainer and removed waiting-for-user More information is needed from the user labels Jan 27, 2025
@freelon
Copy link

freelon commented Jan 27, 2025

Looking into the stack traces further, I compared what happens in LoggerContext between the old and the new implementation when .getLogger() is called and a new logger needs to be created:

Previously:

  • receive null (the current value for the logger)
  • call newInstance (does all the ZooKeeper stuff in our case)
  • call putIfAbsent, which:
    • takes the write lock
    • stores the logger
    • releases the write lock
  • return the initialized logger

With the new InternalLoggerRegistry we have:

  • call internalLoggerRegistry.computeIfAbsent()
    • take write lock
    • create new instance (all the zookeeper stuff again)
    • release write lock
    • return logger
  • return logger

So the difference seems to be that in the old implementation the actual initialization of a logger was done outside the lock; now it is done while holding the write lock, causing the deadlock in our case.

@ppkarwasz
Copy link
Contributor

Is this helpful and understandable? If this does not get you any further, I will invest some time to try and build a reproducible example that I can share.

It is very clear thanks.

We have already seen some unwanted effects if LoggerContext.newInstance has a complex logic (e.g. #3252). Probably we should rewrite computeIfAbsent, so that logger creation occurs without any locks:

  1. (under read lock) We check if the logger already exists.
  2. We create the logger.
  3. (under write lock) We add the logger to the repository.

Since all loggers with the same name are equivalent, we don't need to ensure that the logger is created only once. @vy, what do you think?
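The three steps can be sketched as follows. This is a simplified model, not the actual Log4j patch; `RegistrySketch`, its fields, and its method names are all hypothetical. The key property is that the factory runs with no lock held, and a racing duplicate is discarded in favor of whichever instance was published first.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Hypothetical sketch of the proposed computeIfAbsent rewrite (not Log4j code):
// 1. check under the read lock, 2. create with no lock held, 3. publish under
// the write lock, keeping whichever instance won the race.
public class RegistrySketch<V> {

    private final Map<String, V> byName = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public V computeIfAbsent(String name, Supplier<V> factory) {
        // 1. Fast path: look up under the read lock only.
        lock.readLock().lock();
        try {
            V existing = byName.get(name);
            if (existing != null) {
                return existing;
            }
        } finally {
            lock.readLock().unlock();
        }
        // 2. Create with no lock held — this may block or recurse into getLogger()
        //    without stalling other threads.
        V created = factory.get();
        // 3. Publish under the write lock; a concurrent caller may have won the race,
        //    in which case we discard our duplicate and return theirs.
        lock.writeLock().lock();
        try {
            V existing = byName.putIfAbsent(name, created);
            return existing != null ? existing : created;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

This is safe precisely because of the equivalence argument above: two loggers created with the same name and message factory are interchangeable, so losing the race only wastes one allocation.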

ppkarwasz added a commit that referenced this issue Jan 28, 2025
It has been reported that holding a lock on `InternalLoggerRegistry` during the creation of a logger can cause performance loss and deadlocks. The logger constructor can trigger property lookups and other pluggable operations that we don't entirely control. The fix to #3252 only covered one of these cases.

This change moves the instantiation of new `Logger`s outside the write lock. While in some cases, this will cause multiple instantiations of loggers with the same parameters, all such loggers are functionally equivalent. On the other hand, the change allows the creation of different loggers in parallel.

Closes #3399
ppkarwasz added a commit that referenced this issue Jan 28, 2025
It has been reported that holding a lock on `InternalLoggerRegistry` during the creation of a logger can cause performance loss and deadlocks. The logger constructor can trigger property lookups and other pluggable operations that we don't entirely control. The fix to #3252 only covered one of these cases.

This change moves the instantiation of new `Logger`s outside the write lock. While in some cases, this will cause multiple instantiations of loggers with the same parameters, all such loggers are functionally equivalent. On the other hand, the change allows the creation of different loggers in parallel.

Closes #3399
@tristantarrant
Copy link

@tristantarrant, @nilsson-petter, are you sure you're using a local build compiled from #3418? That is a PR and is not included in any 2.x-SNAPSHOT published by Log4j yet. The steps to build it are:

Yes, I'm sure:


I modified the Infinispan pom.xml to use that SNAPSHOT version and ran its core testsuite using Maven in offline mode and with debug options to ensure that the Surefire classpath pointed to the correct jar.

I also used OpenJDK 17 to avoid any potential vthread interference.
A thread dump from a fresh run this morning

https://gist.github.com/tristantarrant/93b0a80974a14cbc01d6d3ed71f540ba

The above includes read locks:
org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:145)
and write locks:
org.apache.logging.log4j.core.util.internal.InternalLoggerRegistry.computeIfAbsent(InternalLoggerRegistry.java:177)
which are obviously lines from the patched code.

There are 84 threads waiting for the same lock (0x00000000c0334a58), 16 of which on the read lock and 68 on the write lock.

@vy
Copy link
Member

vy commented Jan 29, 2025

@tristantarrant, thanks so much for the detailed analysis, really appreciate it. 🙇 I examined the stack trace, but I still cannot see a smoking gun. Would you mind trying whether one of the following helps, please? Edit InternalLoggerRegistry in the fix/3399_create_logger_no_lock branch such that you:

  1. Pass true to the ReentrantReadWriteLock ctor
  2. Replace read-write locks with a single ReentrantLock

After changes, you can perform a quick local install of log4j-core as follows:

./mvnw install -pl :log4j-core -DskipTests -Dspotbugs.skip -Dspotless.skip -Drat.skip -Dxml.skip

@tristantarrant
Copy link

tristantarrant commented Jan 29, 2025

Neither of those suggestions improved things. However I have some additional info.
I've put some good old System.out.printfs around the write lock code. Here is an example of things going well:

writeLock - ask - org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl, org.apache.logging.log4j.message.ReusableMessageFactory@6dab9b6d
writeLock - acquire - org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl, org.apache.logging.log4j.message.ReusableMessageFactory@6dab9b6d
writeLock - ret - org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl, org.apache.logging.log4j.message.ReusableMessageFactory@6dab9b6d / c=org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl:DEBUG in Default n=org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl:DEBUG in Default
writeLock - release - org.infinispan.notifications.cachemanagerlistener.CacheManagerNotifierImpl, org.apache.logging.log4j.message.ReusableMessageFactory@6dab9b6d

the ret part is what is being returned: c is the currentLogger and n is the newLogger.

Here is an example when bad things start happening. I get 100s of the following:

writeLock - ask - org.infinispan.LIFECYCLE, org.apache.logging.log4j.message.ReusableMessageFactory@6dab9b6d
writeLock - acquire - org.infinispan.LIFECYCLE, org.apache.logging.log4j.message.ReusableMessageFactory@6dab9b6d
writeLock - ret - org.infinispan.LIFECYCLE, org.apache.logging.log4j.message.ReusableMessageFactory@6dab9b6d / c=null n=org.infinispan.LIFECYCLE:DEBUG in Default
writeLock - release - org.infinispan.LIFECYCLE, org.apache.logging.log4j.message.ReusableMessageFactory@6dab9b6d

The WeakReference semantics are causing it not to find the entry in the map, so every single request for the org.infinispan.LIFECYCLE logger acquires the write lock, skipping the "fast path".
Changing the loggerRefByNameByMessageFactory to use a normal HashMap and removing the WeakReferences fixes the issue. Obviously, I know that's not a proper fix.

@freelon
Copy link

freelon commented Jan 29, 2025

@tristantarrant do I understand correctly that your test is not going into a deadlock but instead blocks many threads because the write lock is "just" really busy?

@tristantarrant
Copy link

From the way I described it, it does come across like that, but observing the log, at some point it just hangs.

@tristantarrant
Copy link

OK, I've researched it further: the problem is the WeakReference. At some point it starts returning null, and it is not replaced by a new WeakReference wrapping the newLogger.

@tristantarrant
Copy link

I tried to replace the

.computeIfAbsent(name, ignored -> new WeakReference<>(newLogger))

with

.compute(name, (ignored, ref) -> (ref == null || ref.get() == null) ? new WeakReference<>(newLogger) : ref)

but it doesn't solve the hanging threads.
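The difference between the two lambdas can be shown deterministically by clearing the stored reference by hand instead of waiting for GC. All names below are hypothetical illustrations, not Log4j code:

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the stale-entry pitfall (hypothetical names, not Log4j
// code): once the referent of a stored WeakReference is reclaimed, computeIfAbsent
// keeps returning the dead entry because the key is still present, while compute()
// can swap in a fresh reference. GC is simulated deterministically via ref.clear().
public class StaleRefDemo {

    static final Map<String, WeakReference<Object>> map = new ConcurrentHashMap<>();

    static Object viaComputeIfAbsent(String name, Object fresh) {
        // The key is present, so the mapping function never runs and the
        // cleared (stale) reference is returned.
        return map.computeIfAbsent(name, ignored -> new WeakReference<>(fresh)).get();
    }

    static Object viaCompute(String name, Object fresh) {
        // Replaces the mapping when the old referent has been reclaimed.
        return map.compute(name, (ignored, ref) ->
                (ref == null || ref.get() == null) ? new WeakReference<>(fresh) : ref).get();
    }
}
```

As the follow-up comment notes, fixing the stale-entry replacement alone does not resolve the hang; it removes the permanent write-lock "slow path" for collected loggers, but the contention and starvation issue remains.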

@vy
Copy link
Member

vy commented Jan 29, 2025

@tristantarrant, would you mind providing us with the minimal set of steps to reproduce the issue on our side? Clone this Infinispan repo, compile like this, run the test like this, etc.

@vy
Copy link
Member

vy commented Jan 30, 2025

@tristantarrant, I presume the stack trace you shared last is during the deadlock, not while getLogger() calls are busy with the excessive write-lock acquisition, right?

@tristantarrant
Copy link

Yes: our testsuite has a test-killer which detects if it's been unresponsive after a timeout. It will take a thread dump of the JVM and then kill itself.

I believe I've figured out why it's happening a lot to our testsuite:

  • Some of our code gets a logger on class instantiation instead of in a static initializer. The assumption is that the log factory will give us a cached instance of a previously used logger with the same name.
  • Our testsuite is quite heavy, with many threads and triggers GC quite frequently
  • GC will collect those per-instance loggers, causing the WeakReferences in InternalLoggerRegistry to become null. Static loggers will never be GCed unless the class that owns them is unloaded.
  • The current logic in InternalLoggerRegistry.computeIfAbsent will always acquire a write lock without actually replacing the null WeakReferences.

I've identified one place in our code which was unnecessarily obtaining a logger in a method and replaced it with a static instance. This has reduced the lock contention considerably, but doesn't solve the issue.
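The effect of moving a logger from an instance field to a static one can be sketched with a counting stand-in for the registry (all class and method names below are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Counting stand-in for the logger registry (hypothetical names): each call to
// getLogger() represents one registry lookup, i.e. one potential lock
// acquisition in InternalLoggerRegistry.
public class LoggerFieldStyles {

    static final AtomicInteger lookups = new AtomicInteger();

    static Object getLogger() {
        lookups.incrementAndGet();
        return new Object(); // stands in for a Logger
    }

    // Preferred: the registry is consulted once per class, and the logger stays
    // strongly reachable, so its WeakReference entry is never cleared.
    static class StaticHolder {
        static final Object LOG = getLogger();
    }

    // Problematic: every `new InstanceHolder()` hits the registry, and the
    // logger becomes reclaimable together with the instance, leaving a stale
    // WeakReference behind after GC.
    static class InstanceHolder {
        final Object log = getLogger();
    }
}
```

Three `InstanceHolder` instances cost three lookups (and three potentially collectable loggers), while `StaticHolder.LOG` costs exactly one lookup for the lifetime of the class.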

I've created an Infinispan branch with my small fix which you can use for your investigations.
Here are the instructions:

git clone -b log4j-lock-investigation --single-branch git@github.com:tristantarrant/infinispan.git
cd infinispan
./mvnw install -DskipTests -am -pl core
./mvnw verify -pl core

Just let it run. After a while it will hang and start creating threaddump-XXX.log files in the core directory.

@tristantarrant
Copy link

I have some good news: combining the fix in #3399 (comment) with my fix in https://github.com/tristantarrant/infinispan/tree/log4j-lock-investigation seems to allow our core testsuite to complete.
I think the reason the fix in Log4j alone wasn't enough is resource starvation: our testsuite has lots of parallelism with tons of lock contention, many threads, and frequent GCs. Still, I want to investigate a bit further.
JGroups (one of the libraries we use) still creates per-instance loggers: I'll create a PR to fix that today.

@vy
Copy link
Member

vy commented Jan 30, 2025

@tristantarrant, great news! I am looking forward to your PR. 🤩

I suggest taking #3418 as the base, yet I think there is a (performance) bug that needs to be addressed.

@tristantarrant
Copy link

Yes, fixing our "abuse" of per-instance, or worse, local, loggers doesn't solve the problem that with high contention this will deadlock.

@tristantarrant
Copy link

Here's the Infinispan issue to remove local loggers: infinispan/infinispan#13927

@tristantarrant
Copy link

Maybe we should look for a lock-less solution. A ConcurrentHashMap with support for weak keys?

@vy
Copy link
Member

vy commented Jan 30, 2025 via email

@tristantarrant
Copy link

tristantarrant commented Jan 30, 2025 via email

tristantarrant pushed a commit to tristantarrant/logging-log4j2 that referenced this issue Jan 31, 2025
It has been reported that holding a lock on `InternalLoggerRegistry` during the creation of a logger can cause performance loss and deadlocks. The logger constructor can trigger property lookups and other pluggable operations that we don't entirely control. The fix to apache#3252 only covered one of these cases.

This change moves the instantiation of new `Logger`s outside the write lock. While in some cases, this will cause multiple instantiations of loggers with the same parameters, all such loggers are functionally equivalent. On the other hand, the change allows the creation of different loggers in parallel.

Closes apache#3399
@vy
Copy link
Member

vy commented Jan 31, 2025

@tristantarrant, checking #3427 you recently submitted... Implementing our own Map<K, WeakReference<V>> is a big undertaking. It needs to be tested comprehensively. Historically, this approach has been a source of trouble – see #2942 and #2946, caused by a new Thread Context map implementation that wasn't properly tested. I've updated #3418 to address the performance regression I shared earlier. I think this should help with your resource starvation issue.

You have been very kind and of great help. I would appreciate if you can give the most recent #3418 one more spin in your test suite.

@tristantarrant
Copy link

Not implementing reference reaping means that the inner map will contain useless entries. Also, if you notice, I only implemented the most basic parts of the map, so it's not really a general-purpose class.
As I can see you've sensibly removed some lambda invocations, this can be simplified even further.
But, ultimately, your confidence in the solution is what matters the most, since you're the one having to maintain it :)
At any rate, I've tested your latest changes with our testsuite and it works fine. Thank you

@vy
Copy link
Member

vy commented Jan 31, 2025

@tristantarrant, thanks so much for your understanding. I will merge #3418 (waiting for @ppkarwasz, since he is first original author) and mark this issue as closed. I created #3430 to see what we can do for entries referring to reclaimed loggers.

@ppkarwasz
Copy link
Contributor

@vy,

Feel free to merge #3418, since I will be back only next Tuesday.

@vy
Copy link
Member

vy commented Feb 3, 2025

Feel free to merge #3418, since I will be back only next Tuesday.

@ppkarwasz, thanks so much. 😍 Though we're not in a big hurry. I prefer to wait for your review.

@vy vy added this to the 2.25.0 milestone Feb 3, 2025
@vy vy self-assigned this Feb 3, 2025
@vy
Copy link
Member

vy commented Feb 6, 2025

Fixed by #3418.

@vy vy closed this as completed Feb 6, 2025