This repository was archived by the owner on Apr 3, 2023. It is now read-only.
forked from kubernetes/kubernetes
Update code to get 1.13.6 code changes from Upstream #21
Merged
Conversation
`elbv2.AddTags` doesn't seem to support assigning the same set of tags to multiple resources at once, leading to the following error: Error adding tags after modifying load balancer targets: "ValidationError: Only one resource can be tagged at a time". This can happen when using an AWS NLB with multiple listeners pointing to different node ports. When k8s creates an NLB it creates a target group per listener, along with installing security group ingress rules allowing the traffic to reach the k8s nodes. Unfortunately, if those target groups are not tagged, k8s will not manage them, thinking it is not the owner. This small change assigns tags one resource at a time instead of batching them as before. Signed-off-by: Brice Figureau <brice@daysofwonder.com>
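A minimal sketch of the one-resource-at-a-time tagging described above, assuming the aws-sdk-go `elbv2` client; the function name and the example tag are illustrative, not the actual cloud-provider code:

```go
package sketch

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elbv2"
)

// addTagsOneByOne tags each ELBv2 resource in its own AddTags call, since
// passing multiple ResourceArns in one request fails with
// "ValidationError: Only one resource can be tagged at a time".
func addTagsOneByOne(client *elbv2.ELBV2, resourceARNs []string, tags []*elbv2.Tag) error {
	for _, arn := range resourceARNs {
		// One ResourceArn per call instead of batching all ARNs together.
		_, err := client.AddTags(&elbv2.AddTagsInput{
			ResourceArns: []*string{aws.String(arn)},
			Tags:         tags,
		})
		if err != nil {
			return fmt.Errorf("error adding tags to %s: %v", arn, err)
		}
	}
	return nil
}

// Example tag set (placeholder cluster name):
// []*elbv2.Tag{{Key: aws.String("kubernetes.io/cluster/my-cluster"), Value: aws.String("owned")}}
```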
If an iSCSI target is down while a volume is attached, reading from /sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address fails with an error. Kubelet should assume that such a target is not available / logged in and try to relogin. Eventually, if such an error persists, it should continue mounting the volume if the other paths are healthy instead of failing the whole WaitForAttach().
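A rough sketch of the tolerant sysfs read described above; the helper name and logging are assumptions, not kubelet's actual iSCSI code:

```go
package iscsisketch

import (
	"io/ioutil"
	"log"
	"strings"
)

// portalAddress returns the target address for a session, or "" if the
// sysfs "address" file cannot be read (e.g. the target is down). Treating
// the read error as "not logged in" lets the caller try a relogin or fall
// back to the remaining healthy paths instead of failing WaitForAttach().
func portalAddress(sysfsPath string) string {
	data, err := ioutil.ReadFile(sysfsPath)
	if err != nil {
		log.Printf("failed to read %s, assuming target is not logged in: %v", sysfsPath, err)
		return ""
	}
	return strings.TrimSpace(string(data))
}
```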
Apply fix for MSI and fix test failure
Use Nginx as the DaemonSet image instead of the ServeHostname image. This was changed because ServeHostname has a sleep after terminating, which makes it incompatible with the DaemonSet Rolling Upgrade e2e test. In addition, make the DaemonSet Rolling Upgrade e2e test timeout a function of the number of nodes that make up the cluster. This is required because the more nodes there are, the longer a rolling upgrade takes to complete. Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
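An illustrative sketch of scaling the timeout with cluster size; the constants and names are assumptions, not the real e2e test values:

```go
package e2esketch

import "time"

const (
	baseTimeout    = 5 * time.Minute // fixed overhead per test run (assumed)
	perNodeTimeout = 1 * time.Minute // extra budget per node (assumed)
)

// rollingUpgradeTimeout grows linearly with the number of nodes, since each
// node's daemon pod must be replaced during the rolling upgrade.
func rollingUpgradeTimeout(numNodes int) time.Duration {
	return baseTimeout + time.Duration(numNodes)*perNodeTimeout
}
```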
Always report 0 cpu/memory usage for exited containers to make metrics-server work as expected. Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
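A hypothetical sketch of the idea: zero out usage for containers that have exited so metrics-server still gets a value; the `ContainerStats` type here is a stand-in, not the kubelet stats API:

```go
package statssketch

// ContainerStats is a simplified stand-in for per-container usage data.
type ContainerStats struct {
	Name            string
	Running         bool
	CPUUsageNano    uint64
	MemoryUsageByte uint64
}

// normalize reports zero CPU/memory usage for exited containers instead of
// leaving stale or missing values.
func normalize(stats []ContainerStats) []ContainerStats {
	for i := range stats {
		if !stats[i].Running {
			stats[i].CPUUsageNano = 0
			stats[i].MemoryUsageByte = 0
		}
	}
	return stats
}
```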
While cleaning up IPVS mode, flush iptables chains first and then remove the chains. This avoids trying to remove chains that are still referenced by rules in other chains. Fixes kubernetes#70615
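A minimal sketch of the flush-then-delete ordering, assuming a hypothetical iptables wrapper interface rather than kube-proxy's actual helper:

```go
package ipvssketch

// Interface is a stand-in for an iptables helper.
type Interface interface {
	FlushChain(table, chain string) error
	DeleteChain(table, chain string) error
}

// cleanupChains flushes every chain before deleting any of them, so no
// chain is removed while rules in another chain still reference it.
func cleanupChains(ipt Interface, table string, chains []string) error {
	// Pass 1: flush all rules, removing any cross-chain references.
	for _, c := range chains {
		if err := ipt.FlushChain(table, c); err != nil {
			return err
		}
	}
	// Pass 2: every chain is now unreferenced and safe to delete.
	for _, c := range chains {
		if err := ipt.DeleteChain(table, c); err != nil {
			return err
		}
	}
	return nil
}
```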
…er that requests any device plugin resource. If not, re-issue Allocate gRPC calls. This allows us to handle the edge case where a pod gets assigned to a node even before it populates its extended resource capacity.
…pick-of-#74636-upstream-release-1.13 Automated cherry pick of kubernetes#74636: Remove reflector metrics as they currently cause a memory
…k-of-#73968-upstream-release-1.13 Automated cherry pick of kubernetes#73968: record event on endpoint update failure
…y-pick-of-#74371-upstream-release-1.13 Automated cherry pick of kubernetes#74371: add health plugin in the DNS tests
Rebase docker image on debian-base:0.4.1
…ata from metadata.
This is a cherry-pick of the following commit: https://github.com/kubernetes/kubernetes/pull/74290/commits
This is to deal with the flaky session affinity test.
…ck-of-#76341-upstream-release-1.13 Automated cherry pick of kubernetes#76341: Fix concurrent map access in Portworx create volume call
…manager This PR fixes issue kubernetes#75345. The fix modifies the check of the volume's actual state when validating whether a volume can be removed from the desired state. Only if the volume is already mounted in the actual state can it be removed from the desired state. For the case where mounting always fails, this still works because the check also validates whether the pod still exists in the pod manager. When the mount fails, the pod can be removed from the pod manager so that the volume can also be removed from the desired state.
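An illustrative sketch of the removal check described above; the function and callback names are assumptions, not the actual volume-manager code:

```go
package volsketch

type podExistsFunc func(podUID string) bool
type volumeMountedFunc func(podUID, volumeName string) bool

// canRemoveFromDesiredState mirrors the condition in the fix: keep the
// volume in the desired state while a mount may still be in progress.
func canRemoveFromDesiredState(podUID, volumeName string,
	podExists podExistsFunc, volumeMounted volumeMountedFunc) bool {
	if !podExists(podUID) {
		// The pod was deleted from the pod manager; safe to drop the
		// volume even if the mount never succeeded.
		return true
	}
	// Otherwise only drop the volume once it is mounted in the actual
	// state, so an in-flight mount is not lost from the reconciler's view.
	return volumeMounted(podUID, volumeName)
}
```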
…of-#76773-upstream-release-1.13 Automated cherry pick of kubernetes#76773: Create the "internal" firewall rule for kubemark master.
…k-of-#76788-upstream-release-1.13 Automated cherry pick of kubernetes#76788: Test kubectl cp escape
…pick-of-#76988-upstream-release-1.13 Automated cherry pick of kubernetes#76988: add shareName param in azure file storage class
…o-1.13 Automated cherry pick of kubernetes#76656: Switch to instance-level update APIs for Azure VMSS loadbalancer operations
…ick-of-#76762-upstream-release-1.13 Automated cherry pick of kubernetes#76762: Pick up security patches for fluentd-gcp-scaler by upgrading
…pick-of-#71471-upstream-release-1.13 Automated cherry pick of kubernetes#71471: Fix nil pointer dereference panic in attachDetachController
…-pick-of-#77224-upstream-release-1.13 Automated cherry pick of kubernetes#77224: Upgrade Stackdriver Logging Agent addon image from 1.6.0 to
…ck-of-#76977-upstream-release-1.13 Update the dynamic volume limit in GCE PD
…pick-of-#75087-upstream-release-1.13 Automated cherry pick of kubernetes#75087: fix smb unmount issue on Windows
…pick-of-#70645-upstream-release-1.13 Automated cherry pick of kubernetes#70645: if ephemeral-storage not exist in initialCapacity, don't
…k-of-#75072-upstream-release-1.13 Automated cherry pick of kubernetes#75072: Check for required name parameter in dynamic client
Update Cluster Autoscaler to 1.13.4
…ick-of-#76665-upstream-release-1.13 Automated cherry pick of kubernetes#76665 upstream release 1.13
…ck-of-#75458-upstream-release-1.13 Fix race condition between actual and desired state in kubelet volume …
…k-of-#76675-upstream-release-1.13 Automated cherry pick of kubernetes#76675: Error when etcd3 watch finds delete event with nil prevKV
…y-pick-of-#72534-kubernetes#74394-upstream-release-1.13 Automated cherry pick of kubernetes#72534: kube-proxy: rename internal field for clarity kubernetes#74394: Fix small race in e2e
Kubernetes official release v1.13.6
jadarsie approved these changes on May 22, 2019
Update code to get 1.13.6 code changes from Upstream
Test cases run:
K8s deployment with 3 masters and 3 nodes.
Tomcat and WordPress apps passed.