Rancher Kubernetes Engine (RKE) is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It works on bare-metal and virtualized servers. With RKE, the installation and operation of Kubernetes are both simplified and easily automated, and they are entirely independent of the operating system and platform you're running.
What's Changed
Introduced v1.30.4, v1.29.8 and v1.28.13
Introduced calico and canal v3.28.1 for v1.30.4
Introduced calico and canal v3.27.4 for v1.29.8 and v1.28.13
Introduced nginx v1.11.2 for v1.30.4, v1.29.8 and v1.28.13
Introduced ACI v6.0.4.3 for v1.30.4, v1.29.8 and v1.28.13
RKE Kubernetes versions
v1.27.16-rancher1-1
v1.28.13-rancher1-1
v1.29.8-rancher1-1
v1.30.4-rancher1-1 (default)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
If you want to rebase/retry this PR, check this box
The various Is methods (IsPrivate, IsLoopback, etc) did not work as expected for IPv4-mapped IPv6 addresses, returning false for addresses which would return true in their traditional IPv4 forms.
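For illustration, a minimal Go sketch (not taken from the advisory) of the behavior described: on affected versions the IPv4-mapped form of a private address fails the check that its plain IPv4 form passes, and calling Unmap() first restores the expected result.

```go
// Minimal sketch: an IPv4-mapped IPv6 address can slip past Is* checks that
// its plain IPv4 form would not, on affected Go versions.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	plain := netip.MustParseAddr("192.168.0.1")         // RFC 1918 private address
	mapped := netip.MustParseAddr("::ffff:192.168.0.1") // same address, IPv4-mapped IPv6 form

	fmt.Println(plain.IsPrivate())          // true
	fmt.Println(mapped.IsPrivate())         // false on affected Go versions
	fmt.Println(mapped.Unmap().IsPrivate()) // true: Unmap() normalizes to the IPv4 form
}
```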
Affected range
<1.22.7
Fixed version
1.22.7
Description
Calling Parse on a "// +build" build tag line with deeply nested expressions can cause a panic due to stack exhaustion.
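The Parse in question is go/build/constraint.Parse, which turns a build tag line into an expression tree. A hedged sketch of ordinary usage follows; the vulnerable path is this same call fed an expression with adversarial nesting depth.

```go
// Hedged sketch of the API the advisory refers to (go/build/constraint.Parse),
// shown with an ordinary input. On affected versions, a build tag line with
// extremely deep nesting exhausts the stack while this tree is built.
package main

import (
	"fmt"
	"go/build/constraint"
)

func main() {
	expr, err := constraint.Parse("// +build linux,amd64 darwin")
	if err != nil {
		panic(err)
	}
	// Evaluate the parsed constraint against a set of enabled tags.
	ok := expr.Eval(func(tag string) bool { return tag == "linux" || tag == "amd64" })
	fmt.Println(expr.String(), ok)
}
```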
Affected range
<1.22.7
Fixed version
1.22.7
Description
Calling Decoder.Decode on a message which contains deeply nested structures can cause a panic due to stack exhaustion. This is a follow-up to CVE-2022-30635.
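For context, a minimal sketch of the Decoder.Decode call path involved, using a hypothetical self-referential Node type: a shallow value round-trips normally, while a stream encoding an extremely deep chain exhausts the stack on affected versions.

```go
// Hedged sketch: decoding a gob stream into a recursive type. A legitimate,
// shallow value round-trips fine; on affected versions a stream encoding a
// deeply nested chain panics with stack exhaustion.
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// Node is a self-referential type, the kind of structure that can nest deeply.
type Node struct {
	Value int
	Next  *Node
}

func main() {
	var buf bytes.Buffer
	in := &Node{Value: 1, Next: &Node{Value: 2}}

	if err := gob.NewEncoder(&buf).Encode(in); err != nil {
		panic(err)
	}

	var out Node
	// Decoder.Decode is the entry point named in the advisory.
	if err := gob.NewDecoder(&buf).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Value, out.Next.Value) // 1 2
}
```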
Affected range
>=1.22.0-0 <1.22.5
Fixed version
1.22.5
Description
The net/http HTTP/1.1 client mishandled the case where a server responds to a request with an "Expect: 100-continue" header with a non-informational (200 or higher) status. This mishandling could leave a client connection in an invalid state, where the next request sent on the connection will fail.
An attacker sending a request to a net/http/httputil.ReverseProxy proxy can exploit this mishandling to cause a denial of service by sending "Expect: 100-continue" requests which elicit a non-informational response from the backend. Each such request leaves the proxy with an invalid connection, and causes one subsequent request using that connection to fail.
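A minimal sketch of the affected component, a net/http/httputil.ReverseProxy in front of a backend; the backend address below is a placeholder for illustration only.

```go
// Minimal sketch of the affected component: a ReverseProxy forwarding to a backend.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("http://127.0.0.1:9000") // hypothetical upstream
	if err != nil {
		log.Fatal(err)
	}

	// On affected Go versions, a client request carrying "Expect: 100-continue"
	// that the backend answers with a final (>=200) status leaves the reused
	// upstream connection in an invalid state, failing a later request through it.
	proxy := httputil.NewSingleHostReverseProxy(backend)
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```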
Affected range
>=1.22.0-0 <1.22.4
Fixed version
1.22.4
Description
The archive/zip package's handling of certain types of invalid zip files differs from the behavior of most zip implementations. This misalignment could be exploited to create a zip file with contents that vary depending on the implementation reading the file. The archive/zip package now rejects files containing these errors.
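A hedged sketch of the reading path concerned; the file name is a placeholder. On fixed versions, archives with the inconsistent metadata described above are rejected with an error rather than silently interpreted.

```go
// Hedged sketch: reading an archive with archive/zip. Patched Go versions
// return an error here for archives whose metadata is inconsistent in the
// ways the advisory describes.
package main

import (
	"archive/zip"
	"fmt"
	"log"
)

func main() {
	r, err := zip.OpenReader("example.zip") // hypothetical input file
	if err != nil {
		log.Fatal(err) // invalid/ambiguous archives are rejected on fixed versions
	}
	defer r.Close()

	for _, f := range r.File {
		fmt.Println(f.Name, f.UncompressedSize64)
	}
}
```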
Affected range
<1.22.7
Fixed version
1.22.7
Description
Calling any of the Parse functions on Go source code which contains deeply nested literals can cause a panic due to stack exhaustion.
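The functions referenced are go/parser's Parse* entry points (ParseFile, ParseExpr, and friends). A hedged sketch of normal usage follows; the panic arises only when the source contains extremely deeply nested literals.

```go
// Hedged sketch of the entry point involved (go/parser.ParseFile). Ordinary
// source parses as expected; adversarially deep literal nesting drives the
// recursive-descent parser into stack exhaustion on affected versions.
package main

import (
	"fmt"
	"go/parser"
	"go/token"
)

func main() {
	src := `package demo

var x = [][]int{{1, 2}, {3, 4}} // a shallow nested literal
`
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(file.Name.Name) // "demo"
}
```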
A security vulnerability has been detected in certain versions of Docker Engine, which could allow an attacker to bypass authorization plugins (AuthZ) under specific circumstances. The base likelihood of this being exploited is low. This advisory outlines the issue, identifies the affected versions, and provides remediation steps for impacted users.
Impact
Using a specially-crafted API request, an Engine API client could make the daemon forward the request or response to an authorization plugin without the body. In certain circumstances, the authorization plugin may allow a request which it would have otherwise denied if the body had been forwarded to it.
A security issue was discovered in 2018 where an attacker could bypass AuthZ plugins using a specially crafted API request. This could lead to unauthorized actions, including privilege escalation. Although this issue was fixed in Docker Engine v18.09.1 in January 2019, the fix was not carried forward to later major versions, resulting in a regression. Anyone who depends on authorization plugins that introspect the request and/or response body to make access control decisions is potentially impacted.
Docker EE v19.03.x and all versions of Mirantis Container Runtime are not vulnerable.
Vulnerability details
AuthZ bypass and privilege escalation: An attacker could exploit a bypass using an API request with Content-Length set to 0, causing the Docker daemon to forward the request without the body to the AuthZ plugin, which might approve the request incorrectly.
Initial fix: The issue was fixed in Docker Engine v18.09.1 in January 2019.
Regression: The fix was not included in Docker Engine v19.03 or newer versions. This was identified in April 2024 and patches were released for the affected versions on July 23, 2024. The issue was assigned CVE-2024-41110.
Patches
docker-ce v27.1.1 contains patches to fix the vulnerability.
Patches have also been merged into the master, 19.03, 20.10, 23.0, 24.0, 25.0, 26.0, and 26.1 release branches.
Remediation steps
If you are running an affected version, update to the most recent patched version.
Mitigation if unable to update immediately:
Avoid using AuthZ plugins.
Restrict access to the Docker API to trusted parties, following the principle of least privilege.
The classic builder cache system is prone to cache poisoning if the image is built FROM scratch.
Also, changes to some instructions (most important being HEALTHCHECK and ONBUILD) would not cause a cache miss.
An attacker with the knowledge of the Dockerfile someone is using could poison their cache by making them pull a specially crafted image that would be considered as a valid cache candidate for some build steps.
For example, an attacker could create an image that is considered as a valid cache candidate for:
FROM scratch
MAINTAINER Pawel
when in fact the malicious image used as a cache would be an image built from a different Dockerfile.
In the second case, the attacker could for example substitute a different HEALTHCHECK command.
Impact
23.0+ users are only affected if they explicitly opted out of Buildkit (DOCKER_BUILDKIT=0 environment variable) or are using the /build API endpoint (which uses the classic builder by default).
All users on versions older than 23.0 could be impacted. An example could be a CI with a shared cache, or just a regular Docker user pulling a malicious image due to misspelling/typosquatting.
The image build API endpoint (/build) and the ImageBuild function from github.com/docker/docker/client are also affected, as they use the classic builder by default.
Patches
Patches are included in Moby releases:
v25.0.2
v24.0.9
v23.0.10
Workarounds
Use --no-cache, or use BuildKit if possible (DOCKER_BUILDKIT=1; it is the default on 23.0+ provided the buildx plugin is installed).
Use Version = types.BuilderBuildKit or NoCache = true in ImageBuildOptions for the ImageBuild call.
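For API users, a hedged sketch of the second workaround, assuming the Go client packages from the moby v24/v25 era; the empty build context and image tag below are placeholders.

```go
// Hedged sketch of the workaround for API users: request BuildKit (or disable
// the cache) via ImageBuildOptions when calling ImageBuild.
package main

import (
	"bytes"
	"context"
	"io"
	"log"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	buildContext := bytes.NewReader(nil) // placeholder: normally a tar stream of the build context

	resp, err := cli.ImageBuild(context.Background(), buildContext, types.ImageBuildOptions{
		Tags:    []string{"example:latest"}, // placeholder tag
		Version: types.BuilderBuildKit,      // prefer BuildKit over the classic builder
		NoCache: true,                       // or, on the classic builder, skip the cache entirely
	})
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // stream build output
}
```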
Incorrect Resource Transfer Between Spheres
Affected range
<23.0.11
Fixed version
23.0.11
CVSS Score
5.9
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N
Description
Moby is an open source container framework originally developed by Docker Inc. as Docker. It is a key component of Docker Engine, Docker Desktop, and other distributions of container tooling or runtimes. As a batteries-included container runtime, Moby comes with a built-in networking implementation that enables communication between containers, and between containers and external resources.
Moby's networking implementation allows for creating and using many networks, each with their own subnet and gateway. This feature is frequently referred to as custom networks, as each network can have a different driver, set of parameters, and thus behaviors. When creating a network, the --internal flag is used to designate a network as internal. The internal attribute in a docker-compose.yml file may also be used to mark a network internal, and other API clients may specify the internal parameter as well.
When containers with networking are created, they are assigned unique network interfaces and IP addresses (typically from a non-routable RFC 1918 subnet). The root network namespace (hereafter referred to as the 'host') serves as a router for non-internal networks, with a gateway IP that provides SNAT/DNAT to/from container IPs.
Containers on an internal network may communicate between each other, but are precluded from communicating with any networks the host has access to (LAN or WAN) as no default route is configured, and firewall rules are set up to drop all outgoing traffic. Communication with the gateway IP address (and thus appropriately configured host services) is possible, and the host may communicate with any container IP directly.
In addition to configuring the Linux kernel's various networking features to enable container networking, dockerd directly provides some services to container networks. Principal among these is serving as a resolver, enabling service discovery (looking up other containers on the network by name), and resolution of names from an upstream resolver.
When a DNS request for a name that does not correspond to a container is received, the request is forwarded to the configured upstream resolver (by default, the host's configured resolver). This request is made from the container network namespace: the level of access and routing of traffic is the same as if the request was made by the container itself.
As a consequence of this design, containers solely attached to internal network(s) will be unable to resolve names using the upstream resolver, as the container itself is unable to communicate with that nameserver. Only the names of containers also attached to the internal network are able to be resolved.
Many systems will run a local forwarding DNS resolver, typically present on a loopback address (127.0.0.0/8), such as systemd-resolved or dnsmasq. Common loopback address examples include 127.0.0.1 or 127.0.0.53. As the host and any containers have separate loopback devices, a consequence of the design described above is that containers are unable to resolve names from the host's configured resolver, as they cannot reach these addresses on the host loopback device.
To bridge this gap, and to allow containers to properly resolve names even when a local forwarding resolver is used on a loopback address, dockerd will detect this scenario and instead forward DNS requests from the host/root network namespace. The loopback resolver will then forward the requests to its configured upstream resolvers, as expected.
Impact
Because dockerd will forward DNS requests to the host loopback device, bypassing the container network namespace's normal routing semantics entirely, internal networks can unexpectedly forward DNS requests to an external nameserver.
By registering a domain for which they control the authoritative nameservers, an attacker could arrange for a compromised container to exfiltrate data by encoding it in DNS queries that will eventually be answered by their nameservers. For example, if the domain evil.example was registered, the authoritative nameserver(s) for that domain could (eventually and indirectly) receive a request for this-is-a-secret.evil.example.
Docker Desktop is not affected, as Docker Desktop always runs an internal resolver on an RFC 1918 address.
Patches
Moby releases 26.0.0-rc3, 25.0.5 (released) and 23.0.11 (to be released) are patched to prevent forwarding DNS requests from internal networks.
Workarounds
Run containers intended to be solely attached to internal networks with a custom upstream address (--dns argument to docker run, or API equivalent), which will force all upstream DNS queries to be resolved from the container network namespace.
Background
Yair Zak originally reported this issue to the Docker security team.
The official documentation states that the --internal flag "will completely isolate containers on a network from any communications external to that network," which necessitated this advisory and CVE.
Affected range
<20.10.27
Fixed version
24.0.7
Description
Intel's RAPL (Running Average Power Limit) feature, introduced by the Sandy Bridge microarchitecture, provides software insights into hardware energy consumption. To facilitate this, Intel introduced the powercap framework in Linux kernel 3.13, which reads values via relevant MSRs (model specific registers) and provides unprivileged userspace access via sysfs. As RAPL is an interface to access a hardware feature, it is only available when running on bare metal with the module compiled into the kernel.
By 2019, it was realized that in some cases unprivileged access to RAPL readings could be exploited as a power-based side-channel against security features including AES-NI (potentially inside a SGX enclave) and KASLR (kernel address space layout randomization). Also known as the PLATYPUS attack, Intel assigned CVE-2020-8694 and CVE-2020-8695, and AMD assigned CVE-2020-12912.
Several mitigations were applied; Intel reduced the sampling resolution via a microcode update, and the Linux kernel prevents access by non-root users since 5.10. However, this kernel-based mitigation does not apply to many container-based scenarios:
Unless using user namespaces, root inside a container has the same level of privilege as root outside the container, but with a slightly more narrow view of the system
sysfs is mounted inside containers read-only; however only read access is needed to carry out this attack on an unpatched CPU
While this is not a direct vulnerability in container runtimes, defense in depth and safe defaults are valuable and preferred, especially as this poses a risk to multi-tenant container environments running directly on affected hardware. This is provided by masking /sys/devices/virtual/powercap in the default mount configuration, and adding an additional set of rules to deny it in the default AppArmor profile.
While sysfs is not the only way to read from the RAPL subsystem, other ways of accessing it require additional capabilities such as CAP_SYS_RAWIO which is not available to containers by default, or perf paranoia level less than 1, which is a non-default kernel tunable.
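As a quick check of the masking described above, a hedged Go sketch that can be run inside a container to see whether the powercap tree is reachable; the path is the one named in this advisory, and treating an empty directory as "masked" is an assumption about how the runtime hides it.

```go
// Hedged verification sketch: from inside a container, check whether the
// powercap sysfs tree is reachable. With the masking applied, the directory
// is hidden or empty; on older defaults it may expose energy counters.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const root = "/sys/devices/virtual/powercap"

	entries, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("powercap not accessible:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("powercap present but masked (empty)")
		return
	}
	for _, e := range entries {
		fmt.Println("exposed:", filepath.Join(root, e.Name()))
	}
}
```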
OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities
Affected range
<20.10.27
Fixed version
v24.0.7
Description
Intel's RAPL (Running Average Power Limit) feature, introduced by the Sandy Bridge microarchitecture, provides software insights into hardware energy consumption. To facilitate this, Intel introduced the powercap framework in Linux kernel 3.13, which reads values via relevant MSRs (model specific registers) and provides unprivileged userspace access via sysfs.
OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities
Affected range
<1.2.0-rc.1
Fixed version
1.2.0-rc.1
CVSS Score
7.2
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H
Description
Withdrawn Advisory
This advisory has been withdrawn because it was incorrectly attributed to runc. Please see the issue here for more information.
Original Description
A flaw was found in cri-o, where an arbitrary systemd property can be injected via a Pod annotation. Any user who can create a pod with an arbitrary annotation may perform an arbitrary action on the host system. This issue has its root in how runc handles Config Annotations lists.
Race Condition Enabling Link Following
Affected range
<1.1.14
Fixed version
1.1.14
CVSS Score
3.6
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:N/I:L/A:N
Description
Impact
runc 1.1.13 and earlier as well as 1.2.0-rc2 and earlier can be tricked into
creating empty files or directories in arbitrary locations in the host
filesystem by sharing a volume between two containers and exploiting a race
with os.MkdirAll. While this can be used to create empty files, existing
files will not be truncated.
An attacker must have the ability to start containers using some kind of custom
volume configuration. Containers using user namespaces are still affected, but
the scope of places an attacker can create inodes can be significantly reduced.
Sufficiently strict LSM policies (SELinux/Apparmor) can also in principle block
this attack -- we suspect the industry standard SELinux policy may restrict
this attack's scope but the exact scope of protection hasn't been analysed.
This is exploitable using runc directly as well as through Docker and
Kubernetes.
The CVSS score for this vulnerability is
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:N/I:L/A:N (Low severity, 3.6).
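To illustrate the underlying property (not runc's actual code path): os.MkdirAll resolves symlinked path components, so a component swapped for a symlink results in entries being created at the link target. The sketch below uses a pre-existing symlink; in the attack the swap happens concurrently with the call.

```go
// Hedged illustration of the property the race abuses: os.MkdirAll follows
// symlinked path components and creates directories at the link target.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	work, _ := os.MkdirTemp("", "vol")    // stands in for the shared volume
	target, _ := os.MkdirTemp("", "host") // stands in for an unrelated host location
	defer os.RemoveAll(work)
	defer os.RemoveAll(target)

	// "sub" inside the volume is actually a symlink pointing elsewhere.
	link := filepath.Join(work, "sub")
	if err := os.Symlink(target, link); err != nil {
		panic(err)
	}

	// MkdirAll walks through the symlink and creates "escaped" at the target.
	if err := os.MkdirAll(filepath.Join(work, "sub", "escaped"), 0o755); err != nil {
		panic(err)
	}

	_, err := os.Stat(filepath.Join(target, "escaped"))
	fmt.Println("created outside the volume:", err == nil) // true
}
```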
Workarounds
Using user namespaces restricts this attack fairly significantly such that the
attacker can only create inodes in directories that the remapped root
user/group has write access to. Unless the root user is remapped to an actual
user on the host (such as with rootless containers that don't use
/etc/sub[ug]id), this in practice means that an attacker would only be able to
create inodes in world-writable directories.
A strict enough SELinux or AppArmor policy could in principle also restrict the
scope if a specific label is applied to the runc runtime, though we haven't
thoroughly tested to what extent the standard existing policies block this
attack nor what exact policies are needed to sufficiently restrict this attack.
The Go AWS S3 Crypto SDK contains vulnerabilities that can permit an attacker with write access to a bucket to decrypt files in that bucket.
Files encrypted by the V1 EncryptionClient using either the AES-CBC content cipher or the KMS key wrap algorithm are vulnerable. Users should migrate to the V1 EncryptionClientV2 API, which will not create vulnerable files. Old files will remain vulnerable until re-encrypted with the new client.
Affected range
>=0
Fixed version
Not Fixed
Description
The Go AWS S3 Crypto SDK contains vulnerabilities that can permit an attacker with write access to a bucket to decrypt files in that bucket.
Files encrypted by the V1 EncryptionClient using either the AES-CBC content cipher or the KMS key wrap algorithm are vulnerable. Users should migrate to the V1 EncryptionClientV2 API, which will not create vulnerable files. Old files will remain vulnerable until re-encrypted with the new client.
k8s.io/apiserver 0.30.1 (golang)
pkg:golang/k8s.io/apiserver@0.30.1
OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities
Affected range
<1.15.10
Fixed version
1.15.10, 1.16.7, 1.17.3
CVSS Score
4.3
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L
Description
The Kubernetes API server component has been found to be vulnerable to a denial of service attack via successful API requests.
k8s.io/kubernetes 1.30.1 (golang)
pkg:golang/k8s.io/kubernetes@1.30.1
Incorrect Default Permissions
Affected range
>=1.30.0 <1.30.3
Fixed version
1.30.3
CVSS Score
6.1
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:N
Description
A security issue was discovered in Kubernetes clusters with Windows nodes where BUILTIN\Users may be able to read container logs and NT AUTHORITY\Authenticated Users may be able to modify container logs.
This PR contains the following updates:
1.6.1 -> 1.6.2
Warning
Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
rancher/rke (rancher/rke)
v1.6.2
Compare Source
This PR has been generated by Renovate Bot.