
Looks like devspacehelper sync uses quite a lot of memory during initialize #1757

Open · hvt opened this issue Nov 10, 2021 · 8 comments
Labels: area/sync (Issues related to the real-time code synchronization)


hvt commented Nov 10, 2021

What happened?

This is not exactly a bug, since devspace handles it well. I was just amazed by the amount of memory the devspacehelper sync commands used at some point, and was wondering whether it could be lowered somehow.

When running a project using devspace dev, the devspace sync processes are killed due to a cgroup memory violation. That looks like this:

[0:sync] Error: Sync Error on /home/hvt/dev/on2it/orchestrator-v1/container/job: Sync - connection lost to pod hvt/orchestrator-v1-68cd7d65c-f595r:  command terminated with exit code 137
[0:sync] Sync stopped
[0:sync] Restarting sync...
[0:sync] Waiting for pods...
...some seconds later...
[0:sync] Starting sync...
[0:sync] Inject devspacehelper into pod hvt/orchestrator-v1-5fb4b559d8-zdtch
[0:sync] Start syncing
[0:sync] Sync started on /home/hvt/dev/on2it/orchestrator-v1/container/job <-> /usr/local/on2it/job (Pod: hvt/orchestrator-v1-5fb4b559d8-zdtch)
[0:sync] Waiting for initial sync to complete
[0:sync] Helper - Use inotify as watching method in container
[0:sync] Downstream - Initial sync completed
[0:sync] Upstream - Initial sync completed
...

As you can see, devspace handles this well and retries injecting the devspacehelper sync processes after the OOM exit code, and after a few tries the sync works. However, I cannot put my finger on why it works after a few tries; perhaps memory usage is lower by then, because there is less left to sync?

The sync configuration of this particular project is low volume: around 10 files of only a few KB each, with nothing to sync initially.

When the sync had been running smoothly for a while, the memory usage of the devspacehelper sync processes seemed normal, at around 15 MB per process. However, looking at peak memory usage, I was amazed to see:

$ ps
PID   USER     TIME  COMMAND
    1 www-data  0:00 {docker-cmd.sh} /bin/sh bin/docker-cmd.sh
   23 www-data  0:02 /tmp/devspacehelper sync downstream --exclude .gitignore --exclude /doc/ --exclud...
   24 www-data  0:00 /tmp/devspacehelper sync upstream --override-permissions --exclude .gitignore --e...
....
$ cat /proc/23/status
Name:	devspacehelper
...
VmPeak:	  716160 kB
VmRSS:	   15100 kB
...
$ cat /proc/24/status
Name:	devspacehelper
Umask:	0022
...
VmPeak:	  713216 kB
VmRSS:	   15120 kB
...

So both processes used around ~715 MB at some point in time, which seemed a little excessive to me. It might be that I'm looking at it wrong, so I'm just checking what you think :].
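
(In case it helps anyone reproduce this: a crude way to watch these peaks over time from inside the container, assuming the same BusyBox-style shell and PIDs as above, is something like:)

$ while true; do grep -E 'VmPeak|VmHWM' /proc/23/status /proc/24/status; sleep 5; done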

What did you expect to happen instead?

I expected the devspacehelper sync processes to use far less peak memory.

How can we reproduce the bug? (as minimally and precisely as possible)

Limit the memory of the container you're developing in Kubernetes to, say, 50Mi, and configure a sync as well.
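
For example, something like this fragment on the dev container in the Deployment manifest (illustrative only; container name and image are placeholders):

    containers:
      - name: job              # placeholder
        image: my-app:dev      # placeholder
        resources:
          limits:
            memory: 50Mi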

Local Environment:

  • DevSpace Version: devspace version 5.16.2
  • Operating System: linux
  • Deployment method: kubectl apply

Kubernetes Cluster:

  • Cloud Provider: other
  • Kubernetes Version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.15", GitCommit:"58178e7f7aab455bc8de88d3bdd314b64141e7ee", GitTreeState:"clean", BuildDate:"2021-09-15T19:18:00Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

/kind bug

FabianKramm (Collaborator) commented:

@hvt thanks for creating this issue! Hmm, that's odd. Do you have any large directories that might be excluded only on the client side? Usually this kind of high memory consumption only shows up in pods that have a lot of files or directories that need to be scanned and compared initially, but even then over 700 MB seems really extreme.
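
If that turns out to be the case, excluding those directories in the sync config itself should keep them out of the initial comparison; a minimal sketch in v5 syntax (paths are placeholders here, the excludes end up as the --exclude flags visible in your ps output):

dev:
  sync:
    - localSubPath: ./container/job
      containerPath: /usr/local/on2it/job
      excludePaths:
        - .gitignore
        - /doc/
        - some-large-directory/   # placeholder for whatever is big in the pod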

@FabianKramm FabianKramm added the area/sync Issues related to the real-time code synchronization label Nov 12, 2021

hvt commented Nov 12, 2021

Hey @FabianKramm, besides --verbose-sync, is there any way to show all local (and perhaps also remote) files that devspacehelper sync considers for syncing or not?

Like I said, in this particular project there are no large directories that I know of or can spot quickly (running find . locally)...

FabianKramm (Collaborator) commented:

@hvt you can also use --debug, which will give you more information about what the sync is currently doing and might help you find out what causes this issue.
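
For example, combining it with the flag you mentioned should work:

$ devspace dev --debug --verbose-sync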


hvt commented Feb 10, 2022

@FabianKramm here we go:

$ devspace dev --debug
...
19:26:07 [0:ports] Port-Forwarding: Waiting for containers to start...
19:26:07 [0:sync] Waiting for containers to start...
19:26:20 [0:sync] Starting sync...
19:26:20 [0:sync] Trying to download devspacehelper into pod hvt/crm-data-main-v1-7ff88b5c75-g98g9
19:26:20 [0:sync] Warning: Couldn't download devspacehelper in container, error: stdout, stderr: 
   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) Failed to connect to github.com port 443 after 2 ms: Connection refused
: command terminated with exit code 7
19:26:20 [0:sync] Trying to inject devspacehelper from local machine
19:26:21 [0:ports] Port forwarding started on 60635:8080 (hvt/crm-data-main-v1-7ff88b5c75-g98g9)
19:26:23 [0:sync] Start syncing
19:26:23 [0:sync] Sync started on /home/hvt/dev/crm-data-main-v1/container/server <-> /server (Pod: hvt/crm-data-main-v1-7ff88b5c75-g98g9)
19:26:23 [0:sync] Waiting for initial sync to complete
19:26:23 [0:sync] Initial Sync - Retrieve Initial State
19:26:23 [0:sync] Downstream - Start collecting changes
19:26:24 [0:sync] Helper - Use inotify as watching method in container
19:26:24 [0:sync] Downstream - Done collecting changes
19:26:24 [0:sync] Initial Sync - Done Retrieving Initial State
19:26:24 [0:sync] Initial Sync - Calculate Delta from Remote State
19:26:24 [0:sync] Initial Sync - Done Calculating Delta (Download: 0, Upload: 0)
19:26:24 [0:sync] Downstream - Initial sync completed
19:26:24 [0:sync] Upstream - Initial sync completed
19:26:24 [0:sync] Initial sync took: 702.3774ms
19:26:24 [info]   Starting log streaming
...

Looking at memory usage again:

$ devspace enter
# selecting the correct container
[info]   Opening shell to pod:container crm-data-main-v1-7ff88b5c75-g98g9:crm-data-main-v1-server
/server $ ps axf | grep devspace
PID   USER     TIME  COMMAND
   79 www-data  0:00 /tmp/devspacehelper sync upstream --override-permissions --exclude .gitignore --exclude .php-cs-fixer.cache --ex
   80 www-data  0:03 /tmp/devspacehelper sync downstream --exclude .gitignore --exclude .php-cs-fixer.cache --exclude .phpunit.result
/server $ cat /proc/79/status
Name:	devspacehelper
Umask:	0022
...
VmPeak:	  720440 kB
VmSize:	  720440 kB
VmLck:	       0 kB
VmPin:	       0 kB
VmHWM:	   18444 kB
VmRSS:	   18444 kB
...

So the initial sync is negligible, yet the peak amount of memory used by devspacehelper sync is around 720 MB. And that is only the upstream process; the downstream one used around 723 MB at its peak.
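
(For reference: VmPeak/VmSize track the peak and current virtual address-space size, while VmHWM/VmRSS are the resident-set high-water mark and current resident size. To pull all four for both helpers in one go, assuming the PIDs from the ps output above:)

/server $ for pid in 79 80; do echo "--- PID $pid"; grep -E 'VmPeak|VmSize|VmHWM|VmRSS' /proc/$pid/status; done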

Currently using:

  • DevSpace Version: devspace version 5.18.3
  • Operating System: linux
  • ARCH of the OS: AMD64
Kubernetes Cluster:
  • Cloud Provider: other
  • Kubernetes Version:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.15", GitCommit:"58178e7f7aab455bc8de88d3bdd314b64141e7ee", GitTreeState:"clean", BuildDate:"2021-09-15T19:23:02Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.15", GitCommit:"58178e7f7aab455bc8de88d3bdd314b64141e7ee", GitTreeState:"clean", BuildDate:"2021-09-15T19:18:00Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

RebliNk17 commented:

This is happening for me too. I am launching a pod with limited resources, and the devspace ssh processes alone are taking about 1.5 GB of RAM:

[screenshot of process memory usage]

This is too much memory. Is it possible to reduce it somehow?

This is the dev section in devspace.yml:

dev:
  frontend:
    labelSelector:
      app: frontend
    env:
      - name: DEVELOPER_SCHEMA
        value: ${devspace.namespace}
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: 30m
        memory: 500Mi
    sync:
      - path: ./:/data
        excludePaths:
          - node_modules
          - main/core.*
          - main/node_modules
          - main/.angular
          - main/dist/
          - .githooks
          - main/.webstorm_nodejs_helpers
          - .stfolder
          - .vscode
          - .idea
          - .git
        startContainer: true
    workingDir: /data/main
    # Open a terminal and use the following command to start it
    terminal:
      command: /data/devspace_start.sh
    ssh:
      enabled: false
    proxyCommands:
      - command: devspace
      - command: kubectl
      - command: helm
      - command: git
      - gitCredentials: true

SudoMishra commented:

Any updates on this issue? I have been facing the same. Even after the pod is initialised and I have access to the shell, /tmp/devspacehelper is still consuming memory.

demetris-manikas commented:

Same here


mnpenner commented Jan 4, 2025

+1. This happens very frequently. I'm not even sure which container it is; some of my pods have multiple. I have lots of RAM on my dev machine and don't mind increasing the limits, so even a better error message would help!

All I see is

dev:app sync  Initial sync completed
start_dev: initial sync: Sync - connection lost to pod my-org/kmb-app-deployment-development-6444cd8d8b-4dt2d:  command terminated with exit code 137
fatal exit status 1

It took me quite a while to figure out what 137 meant.
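
(For the record: 137 is 128 + 9, i.e. the process was killed with SIGKILL, which is what a cgroup OOM kill delivers:)

$ echo $((137 - 128))
9
$ kill -l 9
KILL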

Also, I'm not sure why it kills devspace. When a pod OOMs, shouldn't it just be restarted by k8s?


Getting this in devspace version 6.3.14 BTW. DevSpace 5 was much more stable.
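
In the meantime, to narrow down which container is actually close to its memory limit, per-container usage from metrics-server helps (assuming metrics-server is installed; the pod and namespace below are taken from the log above):

$ kubectl top pod kmb-app-deployment-development-6444cd8d8b-4dt2d --containers -n my-org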
