* ① ClickHouse needs to read 3,609 granules (indicated as marks in the trace logs) across 3 data ranges.
* ② With 59 CPU cores, it distributes this work across 59 parallel processing streams—one per lane.
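The fan-out described in these two points can be checked with quick back-of-the-envelope arithmetic, assuming the default granule size of 8,192 rows (stated later in this article):

```python
# Rough per-lane arithmetic for the scan described above.
# Assumes the default granule size of 8,192 rows per granule.
granules = 3_609
rows_per_granule = 8_192
lanes = 59  # one lane per CPU core

total_rows = granules * rows_per_granule
print(f"total rows:        {total_rows:,}")             # 29,564,928 (~29.6 million)
print(f"granules per lane: {granules / lanes:.1f}")     # ~61.2
print(f"rows per lane:     {total_rows / lanes:,.0f}")  # ~501,100
```

Each of the 59 lanes therefore processes roughly 61 granules, or about half a million rows.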
Alternatively, we can use the [EXPLAIN](/sql-reference/statements/explain#explain-pipeline) clause to inspect the [physical operator plan](/academic_overview#4-2-multi-core-parallelization)—also known as the "query pipeline"—for the aggregation query:
As mentioned above, the number `n` of parallel processing lanes is controlled by the `max_threads` setting, which by default matches the number of CPU cores available to ClickHouse on the server:
```sql runnable=false
SELECT getSetting('max_threads');
```
```txt
Static result for the query above from April 2025
┌─getSetting('max_threads')─┐
1. │ 59 │
└───────────────────────────┘
```
However, the `max_threads` value may be ignored depending on the amount of data selected for processing.
As shown in the operator plan extract above, even though `max_threads` is set to `59`, ClickHouse uses only **30** concurrent streams to scan the data.
Now let’s run the query:
```sql runnable=false
SELECT
    max(price)
FROM
    uk.uk_price_paid
WHERE town = 'LONDON';
```
```txt
Static result for the query above from April 2025
┌─max(price)─┐
1. │ 594300000 │ -- 594.30 million
└────────────┘
Peak memory usage: 27.24 MiB.
```
As shown in the output above, the query processed 2.31 million rows and read 13.66 MB of data. This is because, during the index analysis phase, ClickHouse selected **282 granules** for processing, each containing 8,192 rows, totaling approximately 2.31 million rows.
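The granule arithmetic is easy to verify:

```python
# 282 selected granules at the default 8,192 rows per granule.
granules = 282
rows_per_granule = 8_192
print(f"{granules * rows_per_granule:,}")  # 2,310,144 (~2.31 million rows)
```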
Regardless of the configured `max_threads` value, ClickHouse only allocates additional parallel processing lanes when there’s enough data to justify them. The "max" in `max_threads` refers to an upper limit, not a guaranteed number of threads used.
What "enough data" means is primarily determined by two settings, which define the minimum number of rows (163,840 by default) and the minimum number of bytes (2,097,152 by default) that each processing lane should handle:
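The effect of these two thresholds can be illustrated with a simplified sketch. This is not ClickHouse's actual scheduling logic, and the example row and byte counts below are hypothetical; it only shows how per-lane minimums turn `max_threads` into an upper bound rather than a guarantee:

```python
def effective_lanes(rows: int, bytes_: int, max_threads: int,
                    min_rows_per_lane: int = 163_840,
                    min_bytes_per_lane: int = 2_097_152) -> int:
    """Simplified illustration, not ClickHouse's exact algorithm:
    each lane should handle at least the configured minimum number
    of rows and bytes, so small scans get fewer lanes."""
    by_rows = max(1, rows // min_rows_per_lane)
    by_bytes = max(1, bytes_ // min_bytes_per_lane)
    return min(max_threads, by_rows, by_bytes)

# A small scan does not justify 59 lanes (hypothetical numbers):
print(effective_lanes(rows=500_000, bytes_=50_000_000, max_threads=59))        # 3
# A large scan has enough data for every configured lane:
print(effective_lanes(rows=30_000_000, bytes_=1_000_000_000, max_threads=59))  # 59
```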