
fix(hash): recreate container on project config content change #11931

Open · idsulik wants to merge 4 commits into main from issue-11900
Conversation

@idsulik (Collaborator) commented Jun 23, 2024

What I did
Fixed hash.ServiceHash() so that changes to config content are taken into account.

Related issue
#11900

@ndeloof (Contributor) commented Jul 1, 2024

While I understand the intent, I don't like that the config content gets added into the service hash. This also only makes sense when the config content is inlined.
I wonder if we could rely on a label to track the config state at the time the container was created: com.docker.compose.config.name=<config hash>.
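
For illustration only, a minimal sketch of how an inlined config's content could be hashed for such a label, reusing the go-digest call that already appears later in this PR's diff; the helper name is hypothetical:

```go
import "github.com/opencontainers/go-digest"

// configContentHash is a hypothetical helper: it hashes a config's inlined
// content so the result can be stored in a label such as
// com.docker.compose.config.<name>=<config hash>.
func configContentHash(content string) string {
	return digest.SHA256.FromBytes([]byte(content)).Encoded()
}
```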

@idsulik (Collaborator, Author) commented Jul 6, 2024

> I don't like that the config content gets added into the service hash

Why?

> I wonder if we could rely on a label to track the config state at the time the container was created: com.docker.compose.config.name=<config hash>.

I don't get the idea. By "the time it was created", do you mean the config file, or docker-compose.yaml? But docker-compose can refer to an external config file.

@ndeloof (Contributor) commented Jul 8, 2024

The time the container was created, so we can check whether it needs to be recreated if the current config doesn't match.

@ndeloof (Contributor) commented Jul 10, 2024

Some points:

  1. At some point we will have to do the same for secrets (same constraints) and for networks/volumes (which can get labeled with a hash). The solution we adopt here should be extensible enough to allow this in the future.
  2. Configs set by file are implemented as a bind mount, but this may change in the near future (see Enable configs.file's on remote docker hosts #11871), so we also need to consider those.
  3. There's no technical reason the docker engine can't offer secrets/configs natively; this is just guarded by Swarm mode, and AFAIK some discussion happened about making them available in standalone mode as well. I can't tell if/when this would take place, but preferably the logic here should consider that it may happen.
  4. Last but not least, I'd prefer we don't mix the service config hash with the resources it depends on, so my suggestion is to introduce an additional label com.docker.compose.config.xx=hash that we can use to track this relation and the need to recreate the container, without changing the service hash computation (which has an impact on existing installations). A rough sketch of this idea follows below.
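
To make point 4 concrete, here is a hedged sketch of how such per-config labels could be attached at container creation time; the helper name, the exact label key, and the use of the config's Content field are assumptions for illustration, not the PR's final code:

```go
import (
	"fmt"

	"github.com/compose-spec/compose-go/v2/types"
	"github.com/opencontainers/go-digest"
)

// configHashLabels (hypothetical) builds one label per config attached to a
// service, e.g. com.docker.compose.config.<name>=<hash>. Only inlined content
// is hashed here; file-based configs need the separate handling from point 2.
func configHashLabels(project *types.Project, service types.ServiceConfig) map[string]string {
	labels := map[string]string{}
	for _, ref := range service.Configs {
		cfg, ok := project.Configs[ref.Source]
		if !ok || cfg.Content == "" {
			continue
		}
		labels[fmt.Sprintf("com.docker.compose.config.%s", ref.Source)] =
			digest.SHA256.FromBytes([]byte(cfg.Content)).Encoded()
	}
	return labels
}
```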

@idsulik (Collaborator, Author) commented Jul 11, 2024

@ndeloof thanks for the details. Pushed changes:

  1. reverted the old changes
  2. added a new func ServiceDependenciesHash and a label

```go
// ConfigHashDependenciesLabel stores configuration hash for a compose service dependencies
ConfigHashDependenciesLabel = "com.docker.compose.config-hash-dependencies"
```

Let me know if you have a better idea for the label name, because I'm not fully satisfied with mine.
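
For readers following along, a hedged sketch of the rough shape a ServiceDependenciesHash-style function could take, folding a service's config/secret content into a single digest stored under the label above; the field names and overall shape are assumptions, not the PR's exact code:

```go
import (
	"github.com/compose-spec/compose-go/v2/types"
	"github.com/opencontainers/go-digest"
)

// serviceDependenciesHash (illustrative) folds the inlined content of every
// config and secret referenced by a service into one digest that can be
// stored under ConfigHashDependenciesLabel.
func serviceDependenciesHash(project *types.Project, service types.ServiceConfig) string {
	var data []byte
	for _, ref := range service.Configs {
		if cfg, ok := project.Configs[ref.Source]; ok {
			data = append(data, []byte(cfg.Content)...)
		}
	}
	for _, ref := range service.Secrets {
		if sec, ok := project.Secrets[ref.Source]; ok {
			data = append(data, []byte(sec.Content)...)
		}
	}
	return digest.SHA256.FromBytes(data).Encoded()
}
```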

codecov bot commented Oct 6, 2024

Codecov Report

Attention: Patch coverage is 59.30233% with 70 lines in your changes missing coverage. Please review.

Project coverage is 50.15%. Comparing base (6e818b9) to head (21f172e).
Report is 29 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| pkg/utils/tar.go | 58.22% | 22 Missing and 11 partials ⚠️ |
| pkg/compose/convergence.go | 32.00% | 10 Missing and 7 partials ⚠️ |
| pkg/compose/hash.go | 70.21% | 11 Missing and 3 partials ⚠️ |
| pkg/compose/create.go | 68.42% | 4 Missing and 2 partials ⚠️ |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main   #11931      +/-   ##
==========================================
+ Coverage   49.68%   50.15%   +0.47%     
==========================================
  Files         157      158       +1     
  Lines       15428    15681     +253     
==========================================
+ Hits         7665     7865     +200     
- Misses       6985     7014      +29     
- Partials      778      802      +24     
```


@idsulik force-pushed the issue-11900 branch 2 times, most recently from cc14e73 to 309fd59, on October 6, 2024 at 17:41.
```go
	data = append(data, b.Bytes()...)
}

return digest.SHA256.FromBytes(data).Encoded(), nil
```
@ndeloof (Contributor) commented Oct 8, 2024

I'd prefer we have one label per config/secret mount, as that makes it easier to track/debug changes and containers being recreated.
We also need to consider that a config can be mounted from the docker host, i.e. the file is not available for compose to compute a hash, and then it must be excluded from the label / no label created. createTarForConfig could return ErrNotFound and we would ignore it for this specific usage.
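
A small sketch of the suggested error handling, taking createTarForConfig and an ErrNotFound sentinel as given by the comment above; everything else (names, the closure-based signature) is illustrative:

```go
import (
	"bytes"
	"errors"

	"github.com/opencontainers/go-digest"
)

// errNotFound stands in for the ErrNotFound sentinel suggested above.
var errNotFound = errors.New("config file not available on this host")

// hashConfigIfLocal (illustrative) returns the hash of the tarred config
// content, or "" when the file only exists on the docker host and therefore
// should not get a hash label.
func hashConfigIfLocal(tar func() (*bytes.Buffer, error)) (string, error) {
	b, err := tar()
	if errors.Is(err, errNotFound) {
		return "", nil // excluded from labels, as suggested above
	}
	if err != nil {
		return "", err
	}
	return digest.SHA256.FromBytes(b.Bytes()).Encoded(), nil
}
```

Passing the tar step as a closure keeps this sketch from having to guess createTarForConfig's exact signature.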

@idsulik (Collaborator, Author)

Do you mean something like this?

com.docker.compose.service.configs-hash-{configName}={hash}
com.docker.compose.service.configs-hash-{serviceName}={hash}

@ndeloof (Contributor)

Indeed, or maybe, to follow the dot-notation style used for labels: com.docker.compose.service.configs.{configName}.hash

@idsulik (Collaborator, Author)

It seems to me that this will complicate the logic: first you need to generate a hash for each item separately, then you need to go through all labels whose names start with "com.docker.compose.service.configs." to check if any hash has changed.

@ndeloof (Contributor) commented Oct 8, 2024

That doesn't look like such a pain to me, as it would allow us to trace the reason we recreate a container and make it easier to diagnose potential regressions (this sometimes happened :P)

```go
for _, c := range service.Configs {
	hash := labels["com.docker.compose.configs."+c.Source+".hash"]
	expected := ConfigHash(project.Configs[c.Source])
	if hash != expected {
		log.Debugf("container has to be recreated after config %s has been updated", c.Source)
		return DIVERGED
	}
}
```

@idsulik (Collaborator, Author)

@ndeloof, but we don't have a config/service name that we can use to build the label name. I pushed changes that create a hash of the configs/secrets of each service, so it will be easy to figure out which service's config/secret caused the change.

Comment on lines +127 to +131:

```go
if err != nil {
	return nil, err
}

return b, nil
```
@ndeloof (Contributor)

nit: make it simpler as return b, err
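
For clarity, the simplification the nit asks for on the block quoted above (lines +127 to +131):

```go
// Callers should ignore b whenever err is non-nil, so the explicit
// "if err != nil { return nil, err }" check above adds nothing.
return b, err
```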

Signed-off-by: Suleiman Dibirov <idsulik@gmail.com>
This reverts commit 64c37bf.

Signed-off-by: Suleiman Dibirov <idsulik@gmail.com>
…older support

Signed-off-by: Suleiman Dibirov <idsulik@gmail.com>

stale bot commented Dec 15, 2024

This issue has been automatically marked as not stale anymore due to the recent activity.

@stale stale bot removed the stale label Dec 15, 2024
Signed-off-by: Suleiman Dibirov <idsulik@gmail.com>