Looks like bulk inserts are executed 1 by 1 and not in batches.
I have created the following table:
CREATE TABLE IF NOT EXISTS test
(
    `a` Int16,
    `b` Int16,
    `created` DateTime
)
ENGINE = MergeTree()
PARTITION BY (toYYYYMM(created))
ORDER BY (created)
TTL created + INTERVAL 13 MONTH;
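I then run a multi-row insert through the driver, along the lines of the sketch below. This is only a minimal illustration using the standard R2DBC Statement API: the connection URL, the :a/:b/:created named placeholders, and the Reactor wiring are assumptions and may need adjusting for clickhouse-r2dbc. The point is that both bound rows are expected to go to the server as a single batched INSERT.

import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import io.r2dbc.spi.Result;
import io.r2dbc.spi.Statement;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class BatchInsertSketch {
    public static void main(String[] args) {
        // Illustrative connection URL; host, port, credentials and database are placeholders.
        ConnectionFactory factory =
                ConnectionFactories.get("r2dbc:clickhouse://localhost:8123/default");

        Mono.from(factory.create())
                .flatMapMany(conn -> {
                    // Two rows bound on one Statement; add() saves the first row's
                    // bindings and starts the next. The :a/:b/:created placeholder
                    // syntax is an assumption and may differ for clickhouse-r2dbc.
                    Statement stmt = conn
                            .createStatement("INSERT INTO test (a, b, created) VALUES (:a, :b, :created)")
                            .bind("a", 1).bind("b", 2).bind("created", "2024-01-17 00:00:00")
                            .add()
                            .bind("a", 5).bind("b", 6).bind("created", "2024-01-17 00:00:00");
                    return Flux.from(stmt.execute())
                            .flatMap(Result::getRowsUpdated)
                            // close the connection once the insert publisher completes
                            .doFinally(signal -> Mono.from(conn.close()).subscribe());
                })
                .blockLast();
    }
}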
When I then check the logs, I can see that 2 connections are established and the inserts run independently, so it is not acting as a batch insert.
Am I doing something wrong, or is this the intended behaviour?
I have also tried the following approach, but I see the same behaviour:
return conn.createBatch()
        .add("insert into test values (1, 2, '2024-01-17 00:00:00')")
        .add("insert into test values (5, 6, '2024-01-17 00:00:00')")
        .execute();
I am using clickhouse-r2dbc version 0.6.4.
I am using clickhouse-r2dbc version 0.7.0 and see the issue too.
This looks like a critical issue to me, since you can't really use r2dbc for batch inserts (the rows are inserted into different DB parts instead of a single part).
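One quick way to confirm this (assuming the test table from above) is to look at the active parts the inserts created: a genuine batch insert of the two rows should show up as a single part, while independent inserts produce one part each.

SELECT partition, name, rows
FROM system.parts
WHERE database = currentDatabase() AND table = 'test' AND active;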