Can't start VM with qemu 9.2: Required binary "virtfs-proxy-helper" couldn't be found #1543

Open · 6 tasks done
cjwatson opened this issue Dec 23, 2024 · 5 comments · May be fixed by #1547
Labels: Bug (Confirmed to be a bug), Easy (Good for new contributors)
Milestone: incus-6.9
Comments

@cjwatson (Contributor)

Required information

  • Distribution: Debian
  • Distribution version: testing
  • The output of "incus info":
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- images_all_projects
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
- network_state_ovn_lr
- image_template_permissions
- storage_bucket_backup
- storage_lvm_cluster
- shared_custom_block_volumes
- auth_tls_jwt
- oidc_claim
- device_usb_serial
- numa_cpu_balanced
- image_restriction_nesting
- network_integrations
- instance_memory_swap_bytes
- network_bridge_external_create
- network_zones_all_projects
- storage_zfs_vdev
- container_migration_stateful
- profiles_all_projects
- instances_scriptlet_get_instances
- instances_scriptlet_get_cluster_members
- instances_scriptlet_get_project
- network_acl_stateless
- instance_state_started_at
- networks_all_projects
- network_acls_all_projects
- storage_buckets_all_projects
- resources_load
- instance_access
- project_access
- projects_force_delete
- resources_cpu_flags
- disk_io_bus_cache_filesystem
- instances_lxcfs_per_instance
- disk_volume_subpath
- projects_limits_disk_pool
- network_ovn_isolated
- qemu_raw_qmp
- network_load_balancer_health_check
- oidc_scopes
- network_integrations_peer_name
- qemu_scriptlet
- instance_auto_restart
- storage_lvm_metadatasize
- ovn_nic_promiscuous
- ovn_nic_ip_address_none
- instances_state_os_info
- network_load_balancer_state
- instance_nic_macvlan_mode
- storage_lvm_cluster_create
- network_ovn_external_interfaces
- instances_scriptlet_get_instances_count
- cluster_rebalance
- custom_volume_refresh_exclude_older_snapshots
- storage_initial_owner
- storage_live_migration
- instance_console_screenshot
- image_import_alias
- authorization_scriptlet
- console_force
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: cjwatson
auth_user_method: unix
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB/jCCAYSgAwIBAgIRANpcj0KEa+Ijfwn2bz8z/WgwCgYIKoZIzj0EAwMwMTEZ
    MBcGA1UEChMQTGludXggQ29udGFpbmVyczEUMBIGA1UEAwwLcm9vdEBjYW1vcnIw
    HhcNMjQwMzE1MTcwODQ2WhcNMzQwMzEzMTcwODQ2WjAxMRkwFwYDVQQKExBMaW51
    eCBDb250YWluZXJzMRQwEgYDVQQDDAtyb290QGNhbW9ycjB2MBAGByqGSM49AgEG
    BSuBBAAiA2IABLU8nMSeZ77+D3Mc1j0ZGZQvxDNl5vsKFa/O9F6tiYVg+NM1E5pj
    V46NYp0BEZIPLPF6+kLLGhIfiU5FNfpaJYycVvjNuYZ6a+WX4O1AHCo7rbIeOt9R
    Hnu5PDN3Dq67V6NgMF4wDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUF
    BwMBMAwGA1UdEwEB/wQCMAAwKQYDVR0RBCIwIIIGY2Ftb3JyhwR/AAABhxAAAAAA
    AAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMH1D08mm2p4vYMqQq3hv/WmO
    wp/bQ1oiMxxFBARrg9AEeE2bmZx/aNdeM2gikn/aYAIxAMZu22LwrtfGbDUiVGdB
    T1vlnoqnFJL5142470MzPnf5mWtIFfOTmU4NzqyNKXX6TQ==
    -----END CERTIFICATE-----
  certificate_fingerprint: 28f715b34a5e129ffba7c1e739d446097d7997e61dabfd8240a95b2e79c12d60
  driver: qemu | lxc
  driver_version: 9.2.0 | 6.0.3
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.12.5-amd64
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: ""
  project: default
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: camorr
  server_pid: 52063
  server_version: 6.0.3
  storage: zfs
  storage_version: 2.2.7-1
  storage_supported_drivers:
  - name: zfs
    version: 2.2.7-1
    remote: false
  - name: btrfs
    version: 6.6.3
    remote: false
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.27(2) (2024-10-02) / 1.02.201 (2024-10-02) / 4.48.0
    remote: false

Issue description

$ incus start bookworm-debusine-worker
Error: Failed to start device "home": Failed to setup virtfs-proxy-helper for device "home": Required binary "virtfs-proxy-helper" couldn't be found
Try `incus info --show-log bookworm-debusine-worker` for more info

Looking in /usr/share/doc/qemu-system-common/changelog.Debian.gz, I see:

qemu (1:9.2.0+ds-1~exp1) experimental; urgency=medium
...
  * d/qemu-system-common.*: virtfs-proxy-helper is gone

And indeed I see in https://gitlab.com/qemu-project/qemu/-/commit/ed76671888676792493320db53ed773a108cbd45 that this was removed upstream.

Steps to reproduce

Try to start an Incus VM that mounts a directory from the host system. (There may be other conditions; I don't know the Incus code here well enough.)
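For concreteness, a minimal sequence that should exercise this code path (a sketch; the image alias, instance name, and paths are illustrative, not taken from the original report):

# Assumes an Incus host whose qemu-system is 9.2; names are hypothetical.
incus init images:debian/12 repro-vm --vm
incus config device add repro-vm home disk source=/home/$USER path=/home/$USER shift=true
incus start repro-vm
# On qemu 9.2 this is expected to fail with:
# Error: Failed to start device "home": ... Required binary "virtfs-proxy-helper" couldn't be found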

Information to attach

  • Any relevant kernel output (dmesg): nothing relevant
  • Container log (incus info NAME --show-log)
Name: bookworm-debusine-worker
Status: STOPPED
Type: virtual-machine
Architecture: x86_64
Created: 2024/03/15 20:32 GMT
Last Used: 2024/10/10 12:19 BST
Error: open /var/log/incus/bookworm-debusine-worker/qemu.log: no such file or directory
  • Container configuration (incus config show NAME --expanded)
architecture: x86_64
config:
  cloud-init.user-data: |
    #cloud-config
    runcmd:
      - "groupadd cjwatson --gid 1000"
      - "useradd cjwatson --uid 1000 --gid cjwatson --groups adm,sudo --shell /bin/bash"
      - "echo 'cjwatson ALL=(ALL) NOPASSWD:ALL' >/etc/sudoers.d/90-cloud-init-users"
      - "chmod 0440 /etc/sudoers.d/90-cloud-init-users"
  image.architecture: amd64
  image.description: Debian bookworm amd64 (20240315_05:24)
  image.os: Debian
  image.release: bookworm
  image.serial: "20240315_05:24"
  image.type: disk-kvm.img
  image.variant: cloud
  volatile.base_image: db744150188feca000407a92fad0b8447391fe15d933baac0995cd171a89a76f
  volatile.cloud-init.instance-id: d6da1ff5-a26a-4ca3-a81a-f9a50f204270
  volatile.eth0.hwaddr: 00:16:3e:b5:66:33
  volatile.last_state.power: STOPPED
  volatile.uuid: 4f8f5aa6-fc95-4614-8564-0104c89891f6
  volatile.uuid.generation: 4f8f5aa6-fc95-4614-8564-0104c89891f6
  volatile.vsock_id: "1397067162"
devices:
  eth0:
    name: eth0
    network: incusbr0
    type: nic
  home:
    path: /home/cjwatson
    shift: "true"
    source: /home/cjwatson
    type: disk
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
- cjwatson
stateful: false
description: ""
  • Main daemon log (at /var/log/incus/incusd.log): time="2024-12-23T20:31:35Z" level=warning msg="Unable to use virtio-fs for device, using 9p as a fallback" device=home driver=disk err="Virtiofsd missing" instance=bookworm-debusine-worker project=default
  • Output of the client with --debug: nothing relevant beyond the "Required binary" message above
  • Output of the daemon with --debug (alternatively output of incus monitor --pretty while reproducing the issue)
WARNING[2024-12-23T20:39:31Z] Unable to use virtio-fs for device, using 9p as a fallback  device=home driver=disk err="Virtiofsd missing" instance=bookworm-debusine-worker project=default
DEBUG  [2024-12-23T20:39:31Z] Instance operation lock finished              action=start err="Failed to start device \"home\": Failed to setup virtfs-proxy-helper for device \"home\": Required binary \"virtfs-proxy-helper\" couldn't be found" instance=bookworm-debusine-worker project=default reusable=false

I tried installing virtiofsd, which made the "Virtiofsd missing" message go away but otherwise didn't fix the problem.

@stgraber (Member)

Okay, we'll need to look at using that local backend if it supports writing (which, last I checked, it didn't...); otherwise we'd be forced to use virtiofsd, which may come with its own problems...
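For context, here is roughly what the two 9p backends look like at the QEMU level (an illustrative sketch of upstream QEMU flags, not the command line Incus actually builds):

# proxy backend (removed in QEMU 9.2): needed the external helper process
# virtfs-proxy-helper -p /home/cjwatson -s /run/virtfs.sock
qemu-system-x86_64 ... \
  -fsdev proxy,id=fs0,socket=/run/virtfs.sock \
  -device virtio-9p-pci,fsdev=fs0,mount_tag=home

# local backend: QEMU opens the files itself, no helper involved
qemu-system-x86_64 ... \
  -fsdev local,id=fs0,path=/home/cjwatson,security_model=passthrough \
  -device virtio-9p-pci,fsdev=fs0,mount_tag=home

Note that security_model=passthrough needs QEMU to run with enough privilege to set arbitrary file ownership; mapped-xattr is the usual unprivileged alternative.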

@stgraber (Member)

I think we actually use the local backend for basic read-only sharing; for read-write we had to use the proxy. So if that's no longer an option, we'll have to require virtiofsd in those cases.
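If virtiofsd does become a hard requirement for read-write shares, the moving parts would be roughly the following (a hedged sketch of the Rust virtiofsd and QEMU vhost-user-fs setup; socket path and memory size are made up):

# one virtiofsd process per shared directory (Debian installs it under /usr/libexec)
/usr/libexec/virtiofsd --socket-path /run/virtiofsd-home.sock --shared-dir /home/cjwatson

# QEMU side: vhost-user-fs requires shareable guest memory
qemu-system-x86_64 ... \
  -object memory-backend-memfd,id=mem,size=4G,share=on \
  -numa node,memdev=mem \
  -chardev socket,id=char0,path=/run/virtiofsd-home.sock \
  -device vhost-user-fs-pci,chardev=char0,tag=home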

stgraber added the Bug (Confirmed to be a bug) and Easy (Good for new contributors) labels Dec 24, 2024
stgraber added this to the incus-6.9 milestone Dec 24, 2024
@stgraber (Member)

@bensmrs let me know if you'd like to look at this one too, since you already did some QEMU 9.2 fixes; otherwise I'll get a QEMU 9.2 dev environment up in the next week or so to look at this.

@bensmrs (Contributor) commented Dec 24, 2024

I’m having a (very quick) look. The local backend seems to work just fine in my few tests. I’ll open a draft as soon as I can.
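For the read-write case specifically, a quick sanity check from the guest could look like this (hypothetical, reusing the instance and path from this report):

incus exec bookworm-debusine-worker -- sh -c 'touch /home/cjwatson/.rw-test && rm /home/cjwatson/.rw-test && echo read-write OK'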

bensmrs linked a pull request (#1547) Dec 24, 2024 that will close this issue
@bensmrs (Contributor) commented Dec 24, 2024

I made a very rough attempt in #1547. It will obviously need further review.
