Looks like `devspacehelper sync` uses quite a lot of memory during initialize #1757
@hvt thanks for creating this issue! Hmm, that's odd. Do you have any large directories that might be excluded only on the client side? Usually this kind of high memory consumption only shows up in pods that have a lot of files or directories that need to be scanned and compared initially, but even then, over 700MB seems really extreme.
Hey @FabianKramm, besides …, like I said, in this particular project there are no large directories that I know of or can spot quickly (running …).
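As a quick way to sanity-check that hypothesis, counting the entries on both sides of the sync pair shows how much the initial scan actually has to walk. This is only a sketch: the paths, namespace, pod, and container names are the ones that appear in the logs further down this thread, so treat them as placeholders, and it assumes `find` and `sh` are available in the image.

```sh
# Local side of the sync pair (path taken from the sync log below).
find /home/hvt/dev/crm-data-main-v1/container/server | wc -l

# Remote side: count entries under the container sync path.
# Namespace/pod/container names are placeholders; adjust to your setup.
kubectl exec -n hvt crm-data-main-v1-7ff88b5c75-g98g9 -c crm-data-main-v1-server \
  -- sh -c 'find /server | wc -l'
```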
@hvt you can also use …
@FabianKramm here we go:

```
$ devspace dev --debug
...
19:26:07 [0:ports] Port-Forwarding: Waiting for containers to start...
19:26:07 [0:sync] Waiting for containers to start...
19:26:20 [0:sync] Starting sync...
19:26:20 [0:sync] Trying to download devspacehelper into pod hvt/crm-data-main-v1-7ff88b5c75-g98g9
19:26:20 [0:sync] Warning: Couldn't download devspacehelper in container, error: stdout, stderr:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to github.com port 443 after 2 ms: Connection refused
: command terminated with exit code 7
19:26:20 [0:sync] Trying to inject devspacehelper from local machine
19:26:21 [0:ports] Port forwarding started on 60635:8080 (hvt/crm-data-main-v1-7ff88b5c75-g98g9)
19:26:23 [0:sync] Start syncing
19:26:23 [0:sync] Sync started on /home/hvt/dev/crm-data-main-v1/container/server <-> /server (Pod: hvt/crm-data-main-v1-7ff88b5c75-g98g9)
19:26:23 [0:sync] Waiting for initial sync to complete
19:26:23 [0:sync] Initial Sync - Retrieve Initial State
19:26:23 [0:sync] Downstream - Start collecting changes
19:26:24 [0:sync] Helper - Use inotify as watching method in container
19:26:24 [0:sync] Downstream - Done collecting changes
19:26:24 [0:sync] Initial Sync - Done Retrieving Initial State
19:26:24 [0:sync] Initial Sync - Calculate Delta from Remote State
19:26:24 [0:sync] Initial Sync - Done Calculating Delta (Download: 0, Upload: 0)
19:26:24 [0:sync] Downstream - Initial sync completed
19:26:24 [0:sync] Upstream - Initial sync completed
19:26:24 [0:sync] Initial sync took: 702.3774ms
19:26:24 [info] Starting log streaming
...
```
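Side note on the warning in that log: the failed download just means the pod has no outbound route to github.com, so devspace falls back to injecting the helper from the local machine. A rough way to confirm that, reusing the namespace/pod/container names from the log above as placeholders and assuming curl is present in the image (the error output suggests it is):

```sh
# A connection-refused error or timeout here confirms the pod cannot reach
# github.com, which is why the helper gets injected from the local machine.
kubectl exec -n hvt crm-data-main-v1-7ff88b5c75-g98g9 -c crm-data-main-v1-server \
  -- curl -sS -o /dev/null -w '%{http_code}\n' https://github.com
```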
Looking at memory usage again:

```
$ devspace enter
# selecting the correct container
[info] Opening shell to pod:container crm-data-main-v1-7ff88b5c75-g98g9:crm-data-main-v1-server
/server $ ps axf | grep devspace
PID   USER     TIME  COMMAND
   79 www-data  0:00 /tmp/devspacehelper sync upstream --override-permissions --exclude .gitignore --exclude .php-cs-fixer.cache --ex
   80 www-data  0:03 /tmp/devspacehelper sync downstream --exclude .gitignore --exclude .php-cs-fixer.cache --exclude .phpunit.result
/server $ cat /proc/79/status
Name:   devspacehelper
Umask:  0022
...
VmPeak:   720440 kB
VmSize:   720440 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:     18444 kB
VmRSS:     18444 kB
...
```

So the initial sync is basically nothing, yet the maximum amount of memory used by the devspacehelper process peaked above 700MB (the VmPeak figure above). Currently using: …
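For anyone wanting to reproduce this measurement, a small loop over procfs does it. This is a sketch that assumes `pgrep` and `grep` exist in the image, which may not hold for very minimal containers; VmPeak is the peak virtual size, VmHWM the peak resident set.

```sh
# Print peak virtual size (VmPeak) and peak resident memory (VmHWM)
# for every devspacehelper process running in the container.
for pid in $(pgrep -f devspacehelper); do
  echo "--- pid $pid ---"
  grep -E 'VmPeak|VmHWM' "/proc/$pid/status"
done
```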
Any updates on this issue? I have been facing the same. Even after the pod is initialised and I have access to the shell, /tmp/devspacehelper is still consuming memory.
Same here.
+1. This happens very frequently. I'm not even sure which container it is; some of my pods have multiple. I have lots of RAM on my dev machine and don't mind increasing the limits, so even a better error message would help! All I see is …

It took me quite a while to figure out what … Also, I'm not sure why it's killing devspace. When a pod OOMs, shouldn't it just be restarted by k8s? Getting this in …
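One way to find out which container actually got OOM-killed is to look at its last terminated state; the pod and namespace names below are placeholders for your own:

```sh
# "Last State: Terminated, Reason: OOMKilled, Exit Code: 137" identifies
# the container that ran into its memory limit.
kubectl describe pod <pod-name> -n <namespace> | grep -A 5 'Last State'
```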
What happened?
This is not exactly a bug, since devspace handles it well; I was just amazed by the memory used at some point by the `devspacehelper sync` commands and was wondering whether this could be lowered somehow.

When running a project using `devspace dev`, the devspace sync processes are killed due to a cgroup memory violation. That looks like this:

…

As you can see, devspace handles this well and retries injecting the `devspacehelper sync` processes after the OOM exit code, and after a few tries, sync works. However, I cannot put my finger on why it works after a few tries; perhaps memory usage is lower then, due to fewer things left to sync? The sync configuration of this particular project is low volume, around 10 files of only a few KBs each, with nothing to sync initially.

When the sync is running smoothly after a while, the size of the `devspacehelper sync` processes looks normal, at around 15MB per process. However, looking at peak memory usage, I was amazed to see:

…

So both processes used around ~715MB at some point in time, which seemed a little excessive to me. It might be that I'm looking at it wrong, so I'm just checking what you think :].
What did you expect to happen instead?
The `devspacehelper sync` processes would not use that much peak memory.

How can we reproduce the bug? (as minimally and precisely as possible)

Limit the container you're developing in k8s to, say, 50Mi, and configure a sync too.
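For illustration, something along these lines should trigger it; the deployment name is a placeholder, and any devspace.yaml with a sync path configured will do:

```sh
# Cap the dev container's memory so the injected devspacehelper hits the
# cgroup limit during its initial scan.
kubectl set resources deployment/<your-deployment> \
  --limits=memory=50Mi --requests=memory=50Mi

# Then start dev mode; the sync processes in the pod should get OOM-killed.
devspace dev --debug
```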
Local Environment:
devspace version 5.16.2
Kubernetes Cluster:
/kind bug