Tracking: Load very large dataset into databend #7444
We can consider making the cron an external tool that focuses on file discovery in the S3 directory. There are different ways to approach the file discovery task:
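One option, sketched below, is to let the external tool query the stage directly. This is a hedged sketch, assuming Databend's `LIST @<stage>` command and a stage named `@tpch_data` (the stage used elsewhere in this thread); the `PATTERN` clause follows Databend's documented stage syntax but should be treated as an assumption here.

```sql
-- Hedged sketch: an external cron job could run this periodically to
-- discover newly landed files in the S3-backed stage, then diff the
-- result against its own manifest of already-copied files.
-- Assumes the stage @tpch_data exists; the PATTERN regex filter is an
-- assumption based on Databend's stage documentation.
LIST @tpch_data PATTERN = '.*[.]tbl';
```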
FYI, I'm testing TPC-H performance on our cloud but am blocked at `copy into lineitem from @tpch_data files=('tpch100_single/lineitem.tbl') file_format=(type='CSV' field_delimiter='|' record_delimiter='\n');`. The specific situation is that the above SQL returns success, but no data is inserted. (Wouldn't it be better to return an error?) cc @BohuTANG. I don't know why not even one row of data gets inserted. I think we should make sure that large data can be copied in first, even if it's slow.
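For what it's worth, a quick way to confirm nothing landed, plus one hedged guess at a cause: COPY implementations typically skip files already recorded as loaded, and a `FORCE = TRUE` option re-copies them. Both the count check and the `FORCE` option are my own sketch, reusing the `lineitem` table and `@tpch_data` stage from the statement above, not something confirmed in this thread.

```sql
-- Sanity check: did any rows actually land?
SELECT COUNT(*) FROM lineitem;

-- Hedged guess: if COPY skipped the file because it was already
-- recorded as loaded, FORCE = TRUE re-copies it (option name assumed
-- from Databend's COPY documentation).
COPY INTO lineitem FROM @tpch_data
  FILES = ('tpch100_single/lineitem.tbl')
  FILE_FORMAT = (type = 'CSV' field_delimiter = '|' record_delimiter = '\n')
  FORCE = TRUE;
```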
I think there are some errors, but they are not returned to the client. You can try:
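If the errors are swallowed server-side, one way to dig them out is the query log. A minimal sketch, assuming Databend exposes a `system.query_log` table with exception columns (the column names are assumptions about the schema, and this is not necessarily the fix suggested in this thread):

```sql
-- Hedged sketch: look for the failed COPY in the server-side log.
-- Assumes system.query_log exists with these columns; names may
-- differ by Databend version.
SELECT query_text, exception_code, exception_text
FROM system.query_log
WHERE query_text LIKE 'COPY INTO lineitem%'
ORDER BY event_time DESC
LIMIT 10;
```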
All works great now. |
Summary
Tasks:
Flow: