This repository was archived by the owner on Apr 3, 2023. It is now read-only.
forked from kubernetes/kubernetes
chore: Get version v1.13.7 from upstream K8s #24
Merged
Conversation
`elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once, leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time". This can happen when using an AWS NLB with multiple listeners pointing to different node ports. When k8s creates an NLB, it creates a target group per listener and installs security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately, if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small change assigns tags one resource at a time instead of batching them as before. Signed-off-by: Brice Figureau <brice@daysofwonder.com>
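A minimal sketch of the one-at-a-time tagging loop, using the aws-sdk-go elbv2 client; the function name and error handling are illustrative, not the exact upstream patch:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// addTagsOneByOne assigns the same tag set to each resource ARN in a
// separate AddTags call, since the ELBv2 API rejects batched calls with
// "ValidationError: Only one resource can be tagged at a time".
func addTagsOneByOne(client *elbv2.ELBV2, arns []string, tags []*elbv2.Tag) error {
	for _, arn := range arns {
		_, err := client.AddTags(&elbv2.AddTagsInput{
			ResourceArns: []*string{aws.String(arn)}, // one resource per call
			Tags:         tags,
		})
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Wiring up a real client needs an AWS session and credentials;
	// this sketch only shows the call shape.
	_ = addTagsOneByOne
}
```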
If an iSCSI target is down while a volume is attached, reading from /sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address fails with an error. Kubelet should assume that such a target is not available / not logged in and try to re-login. Eventually, if the error persists, it should continue mounting the volume if the other paths are healthy instead of failing the whole WaitForAttach().
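A sketch of the tolerant per-path check described above; pathIsLoggedIn is a hypothetical helper, not the kubelet's actual function:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pathIsLoggedIn treats a read error on the session's sysfs address file
// as "target not logged in" instead of propagating it, so the caller can
// re-login or fall back to the remaining healthy paths.
func pathIsLoggedIn(sysfsAddressFile string) bool {
	data, err := os.ReadFile(sysfsAddressFile)
	if err != nil {
		// Target down or session gone: report not-logged-in, don't fail.
		return false
	}
	return strings.TrimSpace(string(data)) != ""
}

func main() {
	path := "/sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address"
	fmt.Println("logged in:", pathIsLoggedIn(path))
}
```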
Apply fix for MSI and fix test failure
Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because ServeHostname sleeps after terminating, which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer a rolling upgrade takes to complete. Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
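A sketch of scaling the e2e timeout with cluster size; the base and per-node constants are illustrative, not the upstream values:

```go
package main

import (
	"fmt"
	"time"
)

// rollingUpgradeTimeout grows with the node count, since each node's
// daemon pod must be replaced in turn during a rolling upgrade.
func rollingUpgradeTimeout(numNodes int) time.Duration {
	const base = 5 * time.Minute    // fixed overhead (illustrative)
	const perNode = 1 * time.Minute // per-node budget (illustrative)
	return base + time.Duration(numNodes)*perNode
}

func main() {
	fmt.Println(rollingUpgradeTimeout(50)) // 55m0s
}
```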
Always report 0 cpu/memory usage for exited containers to make metrics-server work as expected. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
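A sketch of the reporting rule, with a hypothetical stats struct standing in for the kubelet's internal types:

```go
package main

import "fmt"

// containerStats is a hypothetical stand-in for the kubelet's stats type.
type containerStats struct {
	CPUUsageNano uint64
	MemoryBytes  uint64
}

// statsFor reports zero usage for exited containers so consumers such as
// metrics-server see consistent values instead of missing fields.
func statsFor(running bool, live containerStats) containerStats {
	if !running {
		return containerStats{} // always 0 cpu/memory for exited containers
	}
	return live
}

func main() {
	fmt.Println(statsFor(false, containerStats{CPUUsageNano: 123, MemoryBytes: 456}))
}
```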
While cleaning up IPVS mode, flush the iptables chains first and then remove the chains. This avoids trying to remove chains that are still referenced by rules in other chains. Fixes kubernetes#70615
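A sketch of the flush-then-delete ordering; kube-proxy goes through its iptables utility interface rather than shelling out, and the chain names here are illustrative:

```go
package main

import (
	"log"
	"os/exec"
)

// cleanupChains flushes every chain before deleting any of them, so no
// chain is removed while a rule in another chain still jumps to it
// (iptables -X fails on chains that are still referenced).
func cleanupChains(table string, chains []string) {
	// Pass 1: flush all rules, dropping cross-chain jump references.
	for _, chain := range chains {
		if err := exec.Command("iptables", "-t", table, "-F", chain).Run(); err != nil {
			log.Printf("flush %s: %v", chain, err)
		}
	}
	// Pass 2: the chains are now unreferenced and can be deleted.
	for _, chain := range chains {
		if err := exec.Command("iptables", "-t", table, "-X", chain).Run(); err != nil {
			log.Printf("delete %s: %v", chain, err)
		}
	}
}

func main() {
	// Illustrative chain names; jump rules from built-in chains are
	// assumed to have been removed already.
	cleanupChains("nat", []string{"KUBE-SERVICES", "KUBE-NODEPORTS"})
}
```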
…er that requests any device plugin resource. If not, re-issue Allocate gRPC calls. This allows us to handle the edge case that a pod got assigned to a node even before the node populates its extended resource capacity.
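A sketch of the re-issue logic; the manager type and its methods are hypothetical stand-ins for the device-plugin manager's internals:

```go
package main

import "fmt"

// manager is a hypothetical stand-in for the device-plugin manager.
type manager struct {
	cached map[string]bool // container name -> has allocation state
}

// allocate stands in for the Allocate gRPC call to the device plugin.
func (m *manager) allocate(container string) error {
	fmt.Println("re-issuing Allocate for", container)
	m.cached[container] = true
	return nil
}

// ensureAllocated re-issues Allocate for any container that requests a
// device-plugin resource but has no cached allocation state, covering
// pods scheduled before the node populated its extended resource capacity.
func (m *manager) ensureAllocated(containers []string) error {
	for _, c := range containers {
		if !m.cached[c] {
			if err := m.allocate(c); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	m := &manager{cached: map[string]bool{}}
	_ = m.ensureAllocated([]string{"gpu-container"})
}
```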
…pick-of-#74636-upstream-release-1.13 Automated cherry pick of kubernetes#74636: Remove reflector metrics as they currently cause a memory
…k-of-#73968-upstream-release-1.13 Automated cherry pick of kubernetes#73968: record event on endpoint update failure
…y-pick-of-#74371-upstream-release-1.13 Automated cherry pick of kubernetes#74371: add health plugin in the DNS tests
Rebase docker image on debian-base:0.4.1
…ata from metadata.
This is a cherry-pick of the following commit: https://github.com/kubernetes/kubernetes/pull/74290/commits
This is to deal with the flaky session affinity test.
Signed-off-by: Lantao Liu <lantaol@google.com>
…pick-of-#77426-upstream-release-1.13 Automated cherry pick of kubernetes#77426: Remove terminated pod from summary api.
…y-pick-of-#77619-upstream-release-1.13 Automated cherry pick of kubernetes#77619: In GuaranteedUpdate, retry on any error if we are working
…pick-of-#77613-upstream-release-1.13 Automated cherry pick of kubernetes#77613 upstream release 1.13
…ck-of-#77029-upstream-release-1.13 Automated cherry pick of kubernetes#77029: Update k8s-dns-node-cache image version
…ick-of-#77874-github-release-1.13 Automated cherry pick of kubernetes#77874: fix CVE-2019-11244: `kubectl --http-cache=<world-accessible
…-of-#77656-upstream-release-1.13 Automated cherry pick of kubernetes#77656: check if Memory is not nil for container stats
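A sketch of the nil guard, with hypothetical stats structs:

```go
package main

import "fmt"

type MemoryStats struct{ WorkingSetBytes uint64 }

// ContainerStats mirrors the shape of a stats object whose Memory block
// may be nil for some container states (hypothetical types).
type ContainerStats struct {
	Memory *MemoryStats
}

// workingSet checks that Memory is not nil before dereferencing it,
// instead of panicking on containers without memory stats.
func workingSet(cs *ContainerStats) uint64 {
	if cs == nil || cs.Memory == nil {
		return 0
	}
	return cs.Memory.WorkingSetBytes
}

func main() {
	fmt.Println(workingSet(&ContainerStats{})) // 0, no nil-pointer panic
}
```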
…ted-cherry-pick-of-#76060-upstream-release-1.13 Automated cherry pick of kubernetes#76060: Delete only unscheduled pods if node doesn't exist anymore.
Small code refactor
This reverts commit 26e3c86.
fix comments
…ck-of-#76969-upstream-release-1.13 Automated cherry pick of kubernetes#76969: Fix eviction dry-run
…pick-of-#77722-upstream-release-1.13 Automated cherry pick of kubernetes#77722: fix incorrect prometheus metrics
…pick-of-#78298-upstream-release-1.13 Automated cherry pick of kubernetes#78298: fix azure retry issue when return 2XX with error
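A sketch of the retry decision implied by the title; shouldRetry is a hypothetical helper, not the actual Azure client code:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// shouldRetry treats any non-nil error as retryable, even when the HTTP
// status is 2XX — a 2XX status alone does not mean the call succeeded.
func shouldRetry(resp *http.Response, err error) bool {
	if err != nil {
		return true // retry on any error, 2XX or not
	}
	if resp == nil {
		return true
	}
	return resp.StatusCode < 200 || resp.StatusCode >= 300
}

func main() {
	resp := &http.Response{StatusCode: 200}
	fmt.Println(shouldRetry(resp, errors.New("read failure"))) // true
}
```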
…k-of-#78029-upstream-release-1.13 Automated cherry pick of kubernetes#78029: Terminate watchers when watch cache is destroyed
…ick-of-#78261-upstream-release-1.13 Revert "Use consistent imageRef during container startup"
…01-1.13 Automated cherry pick of kubernetes#78012: Upgrade Azure network API version to 2018-07-01
…k-of-#77802-upstream-release-1.13 Automated cherry pick of kubernetes#77802 upstream release 1.13
Kubernetes official release v1.13.7
jadarsie approved these changes on Jun 20, 2019
Testing in progress