This repository was archived by the owner on Apr 3, 2023. It is now read-only.

chore: Get version v1.13.7 from upstream K8s #24

Merged

merged 205 commits into release-1.13 from test-release-1.13 on Jun 20, 2019
Conversation

rjaini

@rjaini rjaini commented Jun 19, 2019

Get version v1.13.7 from upstream K8s

Testing in progress

Brice Figureau and others added 30 commits February 8, 2019 14:40
`elbv2.AddTags` doesn't seem to support assigning the same set of
tags to multiple resources at once leading to the following error:
  Error adding tags after modifying load balancer targets:
  "ValidationError: Only one resource can be tagged at a time"

This can happen when using AWS NLB with multiple listeners pointing
to different node ports.

When k8s creates an NLB, it creates a target group per listener and
installs security group ingress rules allowing the traffic to
reach the k8s nodes.

Unfortunately, if those target groups are not tagged, k8s will not
manage them, thinking it is not their owner.

This small change assigns tags to one resource at a time instead of
batching them as before.

Signed-off-by: Brice Figureau <brice@daysofwonder.com>
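The one-resource-at-a-time tagging described above can be sketched as follows. This is an illustrative Python model, not the actual Go change; the client class is a hypothetical stand-in for the AWS ELBv2 API, which rejects multi-resource AddTags calls with the quoted ValidationError.

```python
class FakeELBv2Client:
    """Records AddTags calls; rejects batches, like the real ELBv2 API."""
    def __init__(self):
        self.calls = []

    def add_tags(self, resource_arns, tags):
        if len(resource_arns) != 1:
            raise ValueError(
                "ValidationError: Only one resource can be tagged at a time")
        self.calls.append((tuple(resource_arns), dict(tags)))

def tag_resources(client, resource_arns, tags):
    """Assign the same tag set to each resource, one call per resource."""
    for arn in resource_arns:
        client.add_tags([arn], tags)

client = FakeELBv2Client()
arns = ["arn:aws:elasticloadbalancing:tg/a",
        "arn:aws:elasticloadbalancing:tg/b"]
tag_resources(client, arns, {"kubernetes.io/cluster/mycluster": "owned"})
```

Each target group ends up with the cluster ownership tag, so k8s recognizes the resources it created.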
If an iSCSI target is down while a volume is attached, reading from
/sys/class/iscsi_host/host415/device/session383/connection383:0/iscsi_connection/connection383:0/address
fails with an error. Kubelet should assume that such a target is not
available / logged in and try to log in again. Eventually, if such an
error persists, it should continue mounting the volume if the other
paths are healthy instead of failing the whole WaitForAttach().
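The multipath tolerance described above can be sketched like this. It is an illustrative Python model (the real code is kubelet Go): each reader callable stands in for a read of the sysfs address file, a failing read marks that path for re-login, and the attach fails only when no path is healthy.

```python
def collect_portals(path_readers):
    """Return addresses from healthy paths; fail only if all paths fail."""
    portals, errors = [], []
    for read in path_readers:
        try:
            portals.append(read())
        except OSError as err:
            errors.append(err)   # this path needs a re-login; keep going
    if not portals:              # every path failed: surface the errors
        raise OSError("no healthy iSCSI paths: %s" % errors)
    return portals

def healthy():
    return "10.0.0.1"

def broken():
    raise OSError("reading iscsi_connection address failed")

# One broken path does not fail the whole attach:
print(collect_portals([healthy, broken]))
```

With one healthy and one broken path the mount proceeds with the healthy portal; only when every path errors does the whole operation fail.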
Apply fix for MSI and fix test failure
Use Nginx as the DaemonSet image instead of the ServeHostname image.
This was changed because ServeHostname sleeps after terminating,
which makes it incompatible with the DaemonSet Rolling Upgrade e2e test.

In addition, make the DaemonSet Rolling Upgrade e2e test timeout a
function of the number of nodes in the cluster. This is required
because the more nodes there are, the longer a rolling upgrade takes
to complete.

Signed-off-by: Alexander Brand <alexbrand09@gmail.com>
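The node-count-scaled timeout above amounts to a simple linear formula. A minimal sketch, with illustrative base and per-node values (not the constants used in the actual e2e test):

```python
def rolling_upgrade_timeout(num_nodes, base_seconds=300, per_node_seconds=100):
    """Timeout grows linearly with cluster size: a fixed base plus a
    per-node allowance, since each node's pod must be rolled in turn."""
    return base_seconds + per_node_seconds * num_nodes

print(rolling_upgrade_timeout(3))   # small cluster
print(rolling_upgrade_timeout(50))  # large cluster gets proportionally longer
```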
Always report 0 cpu/memory usage for exited containers to make
metrics-server work as expected.

Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
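The reporting rule above can be sketched as a small Python model (names are illustrative, not the actual stats provider API): exited containers report explicit zeros instead of omitting the fields, so consumers such as metrics-server always see well-formed usage data.

```python
def container_usage(state, cpu_nanocores, memory_bytes):
    """Report zero usage for exited containers instead of stale or
    missing values; pass real usage through for everything else."""
    if state == "exited":
        return {"cpu": 0, "memory": 0}
    return {"cpu": cpu_nanocores, "memory": memory_bytes}

print(container_usage("running", 125000, 64 << 20))
print(container_usage("exited", 125000, 64 << 20))
```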
While cleaning up IPVS mode, flush iptables chains first and then
remove them. This avoids trying to remove chains that are still
referenced by rules in other chains.

fixes kubernetes#70615
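The flush-then-delete ordering above can be shown with a toy model: a chain cannot be deleted while a rule in another chain still jumps to it, so the cleanup first empties every chain and only then deletes them. The dict is a hypothetical stand-in for an iptables table; the KUBE-* names are illustrative.

```python
def delete_chain(table, chain):
    """Delete a chain; fails if any rule in the table still jumps to it,
    mirroring iptables -X semantics."""
    for rules in table.values():
        if chain in rules:
            raise RuntimeError("chain %s is still referenced" % chain)
    del table[chain]

def flush_then_delete(table, chains):
    for chain in chains:        # pass 1: flush, removing all cross-references
        table[chain] = []
    for chain in chains:        # pass 2: deletion can no longer hit a reference
        delete_chain(table, chain)

# KUBE-SVC jumps to KUBE-SEP, so deleting KUBE-SEP first would fail.
table = {"KUBE-SVC": ["KUBE-SEP"], "KUBE-SEP": []}
flush_then_delete(table, ["KUBE-SVC", "KUBE-SEP"])
print(table)  # all chains gone
```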
…er that requests any device plugin resource. If not, re-issue Allocate grpc calls. This allows us to handle the edge case that a pod got assigned to a node even before it populates its extended resource capacity.
…pick-of-#74636-upstream-release-1.13

Automated cherry pick of kubernetes#74636: Remove reflector metrics as they currently cause a memory
…k-of-#73968-upstream-release-1.13

Automated cherry pick of kubernetes#73968: record event on endpoint update failure
…y-pick-of-#74371-upstream-release-1.13

Automated cherry pick of kubernetes#74371: add health plugin in the DNS tests
Rebase docker image on debian-base:0.4.1
This is to deal with the flaky session affinity test.
feiskyer and others added 23 commits May 17, 2019 14:43
Signed-off-by: Lantao Liu <lantaol@google.com>
…pick-of-#77426-upstream-release-1.13

Automated cherry pick of kubernetes#77426: Remove terminated pod from summary api.
…y-pick-of-#77619-upstream-release-1.13

Automated cherry pick of kubernetes#77619: In GuaranteedUpdate, retry on any error if we are working
…pick-of-#77613-upstream-release-1.13

Automated cherry pick of kubernetes#77613 upstream release 1.13
…ck-of-#77029-upstream-release-1.13

Automated cherry pick of kubernetes#77029: Update k8s-dns-node-cache image version
…ick-of-#77874-github-release-1.13

Automated cherry pick of kubernetes#77874: fix CVE-2019-11244: `kubectl --http-cache=<world-accessible
…-of-#77656-upstream-release-1.13

Automated cherry pick of kubernetes#77656: check if Memory is not nil for container stats
…ted-cherry-pick-of-#76060-upstream-release-1.13

Automated cherry pick of kubernetes#76060: Delete only unscheduled pods if node doesn't exist anymore.
little code refactor
…ck-of-#76969-upstream-release-1.13

Automated cherry pick of kubernetes#76969: Fix eviction dry-run
…pick-of-#77722-upstream-release-1.13

Automated cherry pick of kubernetes#77722: fix incorrect prometheus metrics
…pick-of-#78298-upstream-release-1.13

Automated cherry pick of kubernetes#78298: fix azure retry issue when return 2XX with error
…k-of-#78029-upstream-release-1.13

Automated cherry pick of kubernetes#78029: Terminate watchers when watch cache is destroyed
…ick-of-#78261-upstream-release-1.13

Revert "Use consistent imageRef during container startup"
…01-1.13

Automated cherry pick of kubernetes#78012: Upgrade Azure network API version to 2018-07-01
…k-of-#77802-upstream-release-1.13

Automated cherry pick of kubernetes#77802 upstream release 1.13
Kubernetes official release v1.13.7
@rjaini rjaini added the enhancement New feature or request label Jun 19, 2019
@rjaini rjaini requested review from jadarsie and a team June 19, 2019 20:30
@rjaini rjaini self-assigned this Jun 19, 2019
@rjaini rjaini merged commit 2b232d5 into release-1.13 Jun 20, 2019
@rjaini rjaini deleted the test-release-1.13 branch June 20, 2019 00:10