
Poor performance of a multithreaded tasklet using chunks due to the throttling algorithm [BATCH-2081] #1516

Open
Description

Philippe Mouawad opened BATCH-2081 and commented

I have a batch job with the following configuration (a configuration sketch follows the list):

  • A tasklet using a multi-threaded task-executor
  • The task-executor has 15 threads
  • throttle-limit equal to the number of threads
  • A thread-safe, JDBC-based ItemReader
  • The writer uses Hibernate
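
For reference, here is a minimal sketch of that kind of step definition, using Spring Batch Java configuration (bean names, the MyItem type and the chunk size are placeholders; in XML this corresponds to the task-executor and throttle-limit attributes on <tasklet>):

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableBatchProcessing
public class MultiThreadedStepConfig {

    @Bean
    public Step multiThreadedStep(StepBuilderFactory steps,
            ItemReader<MyItem> jdbcReader,          // thread-safe JDBC-based reader
            ItemWriter<MyItem> hibernateWriter) {   // Hibernate-based writer

        // 15 worker threads, throttle-limit equal to the thread count
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(15);
        executor.setMaxPoolSize(15);
        executor.initialize();

        return steps.get("multiThreadedStep")
                .<MyItem, MyItem>chunk(100)         // chunk size is a placeholder
                .reader(jdbcReader)
                .writer(hibernateWriter)
                .taskExecutor(executor)
                .throttleLimit(15)
                .build();
    }
}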

I noticed that the batch was not performing well.
Analysing what happens, I frequently observed the following:

  • 1 thread is writing
  • All the other worker threads are doing nothing:
    Name: threadPoolTaskExecutorForBatch-X (DOING NOTHING)
    State: WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@5c45bb65
    Total blocked: 15 703 Total waited: 15 347

Stack trace:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
java.lang.Thread.run(Thread.java:662)

  • The Tasklet thread is blocked on this:
    Name: SimpleAsyncTaskExecutor-1
    State: WAITING on java.lang.Object@9e58027
    Total blocked: 14 536 Total waited: 23 229

Stack trace:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:485)
org.springframework.batch.repeat.support.ResultHolderResultQueue.take(ResultHolderResultQueue.java:139)
org.springframework.batch.repeat.support.ResultHolderResultQueue.take(ResultHolderResultQueue.java:33)
org.springframework.batch.repeat.support.TaskExecutorRepeatTemplate.getNextResult(TaskExecutorRepeatTemplate.java:144)
org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:215)
org.springframework.batch.repeat.support.RepeatTemplate.iterate(RepeatTemplate.java:144)
org.springframework.batch.core.step.tasklet.TaskletStep.doExecute(TaskletStep.java:253)
org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:195)
org.springframework.batch.core.job.SimpleStepHandler.handleStep(SimpleStepHandler.java:137)
org.springframework.batch.core.job.flow.JobFlowExecutor.executeStep(JobFlowExecutor.java:64)
org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:60)
org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:152)
org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:131)
org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:135)
org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:301)
org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:134)
java.lang.Thread.run(Thread.java:662)

This happens when the processing/writing of one item takes a long time and all the other threads have already finished the items that were queued. Instead of reading new items and letting the idle threads work, the result queue waits for the last, long-running piece of work to finish (until the count > results.size() condition changes) before handing results back and letting new work be dispatched. A sketch of this behaviour follows.
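
The stall can be pictured with a small, self-contained sketch (hypothetical class and variable names, not the Spring Batch source): a dispatcher hands chunks to a 15-thread pool but, before dispatching more work, waits until every outstanding result has come back, so a single slow Hibernate write leaves the other 14 threads idle.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: mimics the reported behaviour, it is not the library code.
public class ThrottleStallSketch {

    public static void main(String[] args) throws Exception {
        final int threads = 15;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        final BlockingQueue<Integer> results = new LinkedBlockingQueue<Integer>();
        final AtomicInteger outstanding = new AtomicInteger();

        for (int chunk = 0; chunk < 45; chunk++) {
            final int id = chunk;
            outstanding.incrementAndGet();
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        // Every 15th chunk has a slow write (e.g. an expensive Hibernate flush).
                        Thread.sleep(id % threads == 0 ? 2000 : 50);
                    } catch (InterruptedException ignored) {
                    }
                    results.add(id);
                }
            });

            // Dispatcher side: once 'threads' chunks are in flight, wait until
            // ALL of them have reported back before handing out more work.
            // This barrier-like wait is what leaves 14 threads idle while the
            // single slow chunk finishes.
            if (outstanding.get() >= threads) {
                while (results.size() < outstanding.get()) {
                    Thread.sleep(10);
                }
                results.clear();
                outstanding.set(0);
            }
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}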


Affects: 2.1.8, 2.1.9, 2.2.0.RC1, 2.2.0.RC2, 2.2.1

1 vote, 2 watchers
