Please make sure of the following things
- I have read the documentation.
- I'm sure there are no duplicate issues or discussions.
- I'm sure it's due to AList and not something else (such as network, dependencies, or operational error).
- I'm sure this issue is not fixed in the latest version.
AList Version
v3.42.0

Driver used
Local storage, Seafile
Describe the bug
Copying a large file (9.7 GB in my test) from local storage to Seafile storage crashes AList and the container restarts. Copying small files works fine, and the log shows no error output.
Copying large files from Seafile storage to local storage works normally.

Reproduction
None
Config
{
  "force": false,
  "site_url": "",
  "cdn": "",
  "jwt_secret": "xxxxxxxx",
  "token_expires_in": 48,
  "database": {
    "type": "sqlite3",
    "host": "",
    "port": 0,
    "user": "",
    "password": "",
    "name": "",
    "db_file": "data/data.db",
    "table_prefix": "x_",
    "ssl_mode": "",
    "dsn": ""
  },
  "meilisearch": {
    "host": "http://localhost:7700",
    "api_key": "",
    "index_prefix": ""
  },
  "scheme": {
    "address": "0.0.0.0",
    "http_port": 5244,
    "https_port": -1,
    "force_https": false,
    "cert_file": "",
    "key_file": "",
    "unix_file": "",
    "unix_file_perm": ""
  },
  "temp_dir": "data/temp",
  "bleve_dir": "data/bleve",
  "dist_dir": "",
  "log": {
    "enable": true,
    "name": "data/log/log.log",
    "max_size": 50,
    "max_backups": 30,
    "max_age": 28,
    "compress": false
  },
  "delayed_start": 0,
  "max_connections": 0,
  "max_concurrency": 64,
  "tls_insecure_skip_verify": true,
  "tasks": {
    "download": { "workers": 5, "max_retry": 1, "task_persistant": false },
    "transfer": { "workers": 5, "max_retry": 2, "task_persistant": false },
    "upload": { "workers": 5, "max_retry": 0, "task_persistant": false },
    "copy": { "workers": 5, "max_retry": 2, "task_persistant": false },
    "decompress": { "workers": 5, "max_retry": 2, "task_persistant": false },
    "decompress_upload": { "workers": 5, "max_retry": 2, "task_persistant": false },
    "allow_retry_canceled": false
  },
  "cors": {
    "allow_origins": [""],
    "allow_methods": [""],
    "allow_headers": ["*"]
  },
  "s3": { "enable": false, "port": 5246, "ssl": false },
  "ftp": {
    "enable": false,
    "listen": ":5221",
    "find_pasv_port_attempts": 50,
    "active_transfer_port_non_20": false,
    "idle_timeout": 900,
    "connection_timeout": 30,
    "disable_active_mode": false,
    "default_transfer_binary": false,
    "enable_active_conn_ip_check": true,
    "enable_pasv_conn_ip_check": true
  },
  "sftp": { "enable": false, "listen": ":5222" },
  "last_launched_version": "v3.42.0"
}
Logs
No response
I ran into this too. I was copying multiple files; CPU usage on my Synology hit 90%+, and it crashed after a while.
Same here; at that point both CPU and memory were maxed out.
Copying from Quark Drive to a local path also spikes the CPU and crashes.
log (1).log
Adding a log and screenshots.
I tested a few more times. It looks like when AList copies a large file to Seafile storage, Seafile's limited performance makes it take too long to handle the large POST request, causing a client timeout, and that timeout in turn drives the main AList process's CPU usage up until it crashes?
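To illustrate the timeout theory above (not AList's actual code; the server, timeout value, and names below are all made up for the demo): a client with a short timeout POSTs a body to a server that, like a slow Seafile instance, takes too long to respond, so the upload is aborted on the client side. If such a failure is then retried, each attempt re-streams the whole file, which could explain sustained CPU load.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

func main() {
	// Simulate a Seafile server that is slow to process a large POST.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second) // longer than the client is willing to wait
	}))
	defer slow.Close()

	// Client gives up after 500 ms, well before the server replies.
	client := &http.Client{Timeout: 500 * time.Millisecond}
	_, err := client.Post(slow.URL, "application/octet-stream",
		strings.NewReader("pretend this is a 9.7 GB body"))

	// The POST fails with a client-side timeout, not a server error.
	fmt.Println("upload failed:", err != nil)
}
```

Whether AList actually retries in a tight loop here would need profiling (e.g. pprof) to confirm.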