
Copy function causes the program to crash #8023

Open
4 tasks done
relleo opened this issue Feb 19, 2025 · 5 comments
Labels
bug Something isn't working

Comments


relleo commented Feb 19, 2025

Please make sure of the following things

  • I have read the documentation.

  • I'm sure there are no duplicate issues or discussions.

  • I'm sure it's due to AList and not something else (such as network, dependencies, or operation).

  • I'm sure this issue is not fixed in the latest version.

AList Version / AList 版本

v3.42.0

Driver used / 使用的存储驱动

Local storage, Seafile

Describe the bug / 问题描述

Copying a large file (the test file was 9.7 GB) from local storage to Seafile storage crashes AList and restarts the container. Copying small files works fine. The log shows no error output.

Copying a large file from Seafile storage to local storage works fine.

Reproduction / 复现链接

Config / 配置

{
  "force": false,
  "site_url": "",
  "cdn": "",
  "jwt_secret": "xxxxxxxx",
  "token_expires_in": 48,
  "database": {
    "type": "sqlite3",
    "host": "",
    "port": 0,
    "user": "",
    "password": "",
    "name": "",
    "db_file": "data/data.db",
    "table_prefix": "x_",
    "ssl_mode": "",
    "dsn": ""
  },
  "meilisearch": {
    "host": "http://localhost:7700",
    "api_key": "",
    "index_prefix": ""
  },
  "scheme": {
    "address": "0.0.0.0",
    "http_port": 5244,
    "https_port": -1,
    "force_https": false,
    "cert_file": "",
    "key_file": "",
    "unix_file": "",
    "unix_file_perm": ""
  },
  "temp_dir": "data/temp",
  "bleve_dir": "data/bleve",
  "dist_dir": "",
  "log": {
    "enable": true,
    "name": "data/log/log.log",
    "max_size": 50,
    "max_backups": 30,
    "max_age": 28,
    "compress": false
  },
  "delayed_start": 0,
  "max_connections": 0,
  "max_concurrency": 64,
  "tls_insecure_skip_verify": true,
  "tasks": {
    "download": {
      "workers": 5,
      "max_retry": 1,
      "task_persistant": false
    },
    "transfer": {
      "workers": 5,
      "max_retry": 2,
      "task_persistant": false
    },
    "upload": {
      "workers": 5,
      "max_retry": 0,
      "task_persistant": false
    },
    "copy": {
      "workers": 5,
      "max_retry": 2,
      "task_persistant": false
    },
    "decompress": {
      "workers": 5,
      "max_retry": 2,
      "task_persistant": false
    },
    "decompress_upload": {
      "workers": 5,
      "max_retry": 2,
      "task_persistant": false
    },
    "allow_retry_canceled": false
  },
  "cors": {
    "allow_origins": [
      ""
    ],
    "allow_methods": [
      "*"
    ],
    "allow_headers": [
      "*"
    ]
  },
  "s3": {
    "enable": false,
    "port": 5246,
    "ssl": false
  },
  "ftp": {
    "enable": false,
    "listen": ":5221",
    "find_pasv_port_attempts": 50,
    "active_transfer_port_non_20": false,
    "idle_timeout": 900,
    "connection_timeout": 30,
    "disable_active_mode": false,
    "default_transfer_binary": false,
    "enable_active_conn_ip_check": true,
    "enable_pasv_conn_ip_check": true
  },
  "sftp": {
    "enable": false,
    "listen": ":5222"
  },
  "last_launched_version": "v3.42.0"
}

Logs / 日志

No response

@relleo relleo added the bug Something isn't working label Feb 19, 2025
@wiseman815

I hit this too. I was copying multiple files; the Synology's CPU usage climbed past 90%, and it crashed after a while.

@wuai1024

> I hit this too. I was copying multiple files; the Synology's CPU usage climbed past 90%, and it crashed after a while.

Same here. At that point both CPU and memory were maxed out.

@aiaiwuai

Copying from Quark Drive (夸克网盘) to a local path: CPU spikes and it crashes.

@wiseman815

log (1).log

[screenshot]

Adding a log and a screenshot.

@relleo
Author

relleo commented Feb 20, 2025

I tested this a few more times. It looks like when AList copies a large file to Seafile storage, the Seafile backend's performance makes it take too long to handle the large POST request, so the client times out; could that timeout then be what sends the main AList process's CPU soaring until it crashes??
