Google Summer of Code 2025

aburdenthehand edited this page Feb 11, 2025 · 7 revisions

"Google Summer of Code (GSoC) is a global, online program that brings new contributors into open source software organizations." - Google Summer of Code Contributor Guide

The KubeVirt community is applying to be a Google Summer of Code organization, to provide mentorship opportunities to applicants interested in learning about open source software development in the cloud native ecosystem.

See the Google Summer of Code website for more information about the program.

Key Dates

Feb 27: List of accepted organizations announced
Feb 27 - Mar 24: Potential contributors discuss project application ideas with organizations
Apr 8: Contributor application deadline
May 8 - June 1: Community Bonding Period
June 2 - Sep 1: The Summer of Code!

See the Google Summer of Code timeline for more detailed timeline information.

Project Ideas

KubeVirt is proposing the following project ideas as starting points for GSoC contributors to develop their own project applications.

1. Dynamic attachment and removal of filesystem volumes

GitHub issue: https://github.com/kubevirt/community/issues/384

Description
Filesystem devices enable direct directory sharing between a virtual machine and the host. Thanks to virtiofs, pods and virtual machines can share the same Persistent Volume Claim. For instance, the ability to hotplug an extra directory could be used to obtain diagnostic information from the VM and examine it afterwards.

While dynamically attaching a disk to a running virtual machine is a standard operation for VMs, it is uncommon for pods. KubeVirt already supports hotplugging disks and LUNs, but it does not yet support adding or removing filesystems on a running virtual machine. Kubernetes does not natively support volume hotplug/unplug; KubeVirt's mechanism relies on an additional pod, known as the attachment pod, to schedule and attach storage on the specific node where the VM is running. The storage is then hotplugged through the Libvirt API.

In addition, virtiofs is deployed in a separate container that is usually started together with the pod. In the case of hotplug, however, the pod cannot be dynamically extended with extra containers. This is a further challenge that needs to be taken into account in the design proposal.
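For context, a statically configured virtiofs share already looks roughly like the following in a VirtualMachineInstance spec (an illustrative sketch; `shared-data` and `my-pvc` are placeholder names):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: example-vmi
spec:
  domain:
    devices:
      filesystems:
        # Expose the volume below to the guest via virtiofs
        - name: shared-data
          virtiofs: {}
      # ... disks, interfaces, etc.
  volumes:
    # Backing storage shared between the pod and the VM
    - name: shared-data
      persistentVolumeClaim:
        claimName: my-pvc
```

Inside the guest, such a share can be mounted with `mount -t virtiofs shared-data /mnt`. Today this configuration is only possible at creation time; the project is about making it possible to add or remove a `filesystems` entry like this while the VM is running.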

Expected outcomes
The project goal is to propose and develop a solution, based on the current attachment-pod approach, that supports filesystem volume types.

Project requirements
Project size: 350 hours
Difficulty: Hard
Required skills: Kubernetes knowledge and GoLang programming skills
Mentors: Alice Frosi afrosi@redhat.com, German Maglione gmaglion@redhat.com, Javier Cano Cano jcanocan@redhat.com

See the GitHub issue for more information on the project, how to get started, and to ask questions.

2. Adding emulated BMC support to KubeVirt (KubeVirtBMC)

GitHub issue: https://github.com/kubevirt/community/issues/386

Description
KubeVirt is a virtualization API for Kubernetes that allows running virtual machine-based workloads on Kubernetes. [1]

Oftentimes, developers need to deploy applications or systems in local virtual environments that resemble bare-metal ones. Existing solutions involve libvirt domains or QEMU VMs with Baseboard Management Controller (BMC) emulators, which are not directly compatible with KubeVirt, necessitating a Kubernetes-native solution. The original RFE [2] was followed up by an implementation of a BMC emulator for KubeVirt named KubeVirtBMC [3].

KubeVirtBMC facilitates the deployment of software/applications/platforms such as OpenShift and OpenStack - whose installers typically require communication with bare-metal out-of-band management protocols like IPMI and Redfish - in KubeVirt VMs for development, testing, and debugging purposes, similar to the functionality provided by VirtualBMC [4] and sushy-emulator [5] but within a Kubernetes context.

As a result, a KubeVirt feature proposal was created and accepted [6], which now needs to be implemented. The proposal is divided into four phases, with work on phase one having already begun.

Expected outcomes
The project goal is to transfer KubeVirtBMC into the KubeVirt organization and to continue the development of a native BMC emulator for KubeVirt as laid out in the accepted proposal. Phases one to three of the proposal should be completed, while phase four is optional.

Project requirements
Project size: 350 hours
Difficulty: Hard
Required skills: Kubernetes knowledge, GoLang programming skills, possibly experience with BMCs and the IPMI/Redfish protocols
Mentors: Felix Matouschek fmatouschek@redhat.com, Zespre Chang starbops@zespre.com

See the GitHub issue for more information on the project, how to get started, and to ask questions.

3. Self-sufficient virt-handler

GitHub issue: https://github.com/kubevirt/community/issues/388

Description
KubeVirt is a Kubernetes extension to run virtual machines on Kubernetes clusters, leveraging the Libvirt + QEMU & KVM stack. It does this by exposing a custom resource called VirtualMachine, which is then translated into a Pod. This Pod is treated like any other application Pod and includes a monitoring process, virt-launcher, that manages the Libvirt + QEMU processes. virt-launcher exposes a gRPC command server for managing the virtual machine and has a notify client (see the notify server below) through which it sends domain (virtual machine state) events and Kubernetes events.

Each node in the cluster runs a node agent called virt-handler. virt-handler uses the command servers of the virt-launchers to manage virtual machines. It also provides a notify server that collects domain and Kubernetes events from the launchers to track the internal state of virtual machines.

The hard dependencies on the OS, the file system, and the presence of the virt-launcher Pod and gRPC servers make it hard to run virt-handler independently inside an unprivileged Pod without virt-launcher. The goal of this project is to run virt-handler inside an unprivileged Pod and simulate a virt-launcher so that no virt-launcher Pod needs to exist.

Expected outcomes
The main goal of this project is to create a proof of concept that runs virt-handler in an unprivileged Pod without virt-launcher Pods running on the same host. This will enable scalability testing with significantly fewer resources.

Project requirements
Project size: 350 hours
Difficulty: Hard
Required skills: Golang
Desirable skills: Kubernetes, GRPC, Unix
Mentor: Ľuboslav Pivarč lpivarc@redhat.com, Co-mentor: Victor Toso victortoso@redhat.com

See the GitHub issue for more information on the project, how to get started, and to ask questions.

4. Early enablement of CBOR

GitHub issue: https://github.com/kubevirt/community/issues/389

Description
KubeVirt is a Kubernetes extension to run virtual machines on Kubernetes clusters, leveraging the Libvirt + QEMU & KVM stack. It does this by exposing custom resources (defined by Custom Resource Definitions, also known as CRDs) called VirtualMachine and VirtualMachineInstance, as well as resources for backups and other features.

From the beginning, Kubernetes supported only the JSON and YAML formats for custom resources; in fact, these were the default for core API types as well. Support for Protocol Buffers (protobuf) was later introduced for core API types, while CRDs were left with JSON/YAML because protobuf requires a predefined schema. Protobuf helped Kubernetes scale beyond earlier limitations. Kubernetes 1.32 introduced alpha support of CBOR (Concise Binary Object Representation) for CRDs, promising a more compact format and further aiding the scalability of Kubernetes and related projects.

The goal of this project is to build a proof of concept, integrating CBOR for our client-go, as well as enabling SIG-scale testing, paving the way for adoption once the feature graduates in Kubernetes.
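As a sketch of what guarded enablement could look like: the upstream alpha feature is controlled by feature gates (the server-side gate is named CBORServingAndStorage in the Kubernetes enhancement proposal; treat the exact gate names as an assumption to verify against the release notes of the Kubernetes version in use). In a kind-based development cluster it might be switched on roughly like this:

```yaml
# kind cluster configuration (illustrative; the gate is alpha and may change)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  # Serve and store resources as CBOR on the API server
  CBORServingAndStorage: true
```

On the client side, client-go has corresponding alpha client feature gates (ClientsAllowCBOR and ClientsPreferCBOR, per the same enhancement proposal) that a KubeVirt proof of concept would also need to enable.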

Expected outcomes
The main goal of this project is to create a proof of concept, integrating CBOR into KubeVirt in a way that can be used to run our scalability jobs. This integration will need to be guarded, as the feature is not widely available, and should include a comparison of CBOR and JSON, visual aids, and a presentation to the community about the work and findings.

Going forward, we expect guidance for enabling the feature, as well as a summary of the benefits of this adoption.

Project requirements
Project size: 350 hours
Difficulty: Medium
Required skills: Golang
Desirable skills: Kubernetes
Mentor: Ľuboslav Pivarč lpivarc@redhat.com, Co-mentor: Pending

See the GitHub issue for more information on the project, how to get started, and to ask questions.

Custom project proposals

You can submit your own project idea by emailing the kubevirt-dev Google Group and CC'ing Andrew Burden aburden@redhat.com and Petr Horáček phoracek@redhat.com.

If a mentor from the KubeVirt community supports the proposed project idea, we can add it to the KubeVirt project ideas list.

How and where to find help

First, check the KubeVirt documentation; we cover many topics and you might already find some of the answers there. If something is unclear, feel free to open an issue and a PR. This is already a great start to getting in touch with the process.
For questions related to KubeVirt and not strictly to the GSoC program, try to use the #kubevirt-dev Slack channel in the Kubernetes workspace and GitHub issues as much as possible. Your question can be useful to other people, and the mentors may have limited time. It is also important to interact with the community as much as possible.
You can also search the Slack channel archive to see whether others have previously encountered the same issue.

If something doesn't work, document the steps and how to reproduce the issue as clearly as possible. The more information you provide, the easier it is for us to help you. If you open an issue in KubeVirt, a template already guides you through the kind of information we generally need.

Tips on how to begin

  1. Install KubeVirt and deploy KubeVirt VMs following the getting started guide
  2. Look for good-first issues and try to solve one to get familiar with the project (if there isn’t a PR linked to it, feel free to pick it)
  3. Read through our General contributing guide and our Developer contributing guide to understand community expectations and for further tips on how to get started with the project.

How to submit the proposal

The preferred way is to create a Google Doc and share it with the mentors (Slack or email both work). If for any reason Google Docs doesn't work for you, please share your proposal by email. Early submissions have higher chances, as they will be reviewed over multiple iterations and can be further improved.

What the proposal should contain

The proposal should concisely explain the design and your strategy for solving the challenge. Which components you anticipate touching and an example of an API are good starting points. The updates or APIs are merely a draft of what the candidate intends to expand and change, rather than being final. Details and possible issues can be discussed during the project with the mentors, who can help refine the proposal.

It is not necessary to provide an introduction to Kubernetes or KubeVirt; instead, candidates should demonstrate their familiarity with KubeVirt by describing in detail how they intend to approach the task.

A schematic drawing of the flows, together with examples, can help mentors better grasp the solution. At the end of the selection period, the mentors will select a couple of good proposals, followed by an interview with each candidate.

The proposal can be free-form, or you can take inspiration from the KubeVirt design proposals and template. However, it should contain a draft schedule of the project phases, with some extra time planned to overcome potential difficulties.