
[BUG] nvme disk not showing with smartctl --scan #209

Closed
tommyalatalo opened this issue Dec 23, 2021 · 2 comments
Labels
bug Something isn't working

Comments

@tommyalatalo

This isn't really a bug or a feature request; I just didn't find any other way to contact you about this.

I have a container set up with the sys_admin and sys_rawio capabilities, and I've mounted a number of disks into it as devices. All of the standard hard drives are found by smartctl --scan, except for an NVMe disk.

Have you come across this before? I feel like I followed the instructions, but for some reason the NVMe device doesn't show up in smartctl. It is, however, present in the container as /dev/nvme0n1.

I'm running this with HashiCorp Nomad, so the file is in HCL format, but it works the same way as with docker-compose:

Container configuration:

      config {
        image   = "analogj/scrutiny:latest"
        cap_add = ["sys_admin", "sys_rawio"]
        ports   = ["http"]

        volumes = [
          "/run/udev:/run/udev:ro",
          "/secrets/scrutiny/scrutiny.yaml:/scrutiny/config/scrutiny.yaml",
        ]

        devices = [
          {
            host_path      = "/dev/nvme0n1"
            container_path = "/dev/nvme0n1"
          },
          {
            host_path      = "/dev/sda"
            container_path = "/dev/sda"
          },
          {
            host_path      = "/dev/sdb"
            container_path = "/dev/sdb"
          },
          {
            host_path      = "/dev/sdc"
            container_path = "/dev/sdc"
          },
          {
            host_path      = "/dev/sdd"
            container_path = "/dev/sdd"
          },
          {
            host_path      = "/dev/sde"
            container_path = "/dev/sde"
          },
          {
            host_path      = "/dev/sdf"
            container_path = "/dev/sdf"
          },
        ]
      }

smartctl --scan in the container:

smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d scsi # /dev/sdb, SCSI device
/dev/sdc -d scsi # /dev/sdc, SCSI device
/dev/sdd -d scsi # /dev/sdd, SCSI device
/dev/sde -d scsi # /dev/sde, SCSI device
/dev/sdf -d scsi # /dev/sdf, SCSI device

smartctl --scan on the host:

❯ smartctl --scan
/dev/sda -d scsi # /dev/sda, SCSI device
/dev/sdb -d scsi # /dev/sdb, SCSI device
/dev/sdc -d scsi # /dev/sdc, SCSI device
/dev/sdd -d scsi # /dev/sdd, SCSI device
/dev/sde -d scsi # /dev/sde, SCSI device
/dev/sdf -d scsi # /dev/sdf, SCSI device
/dev/nvme0 -d nvme # /dev/nvme0, NVMe device
@tommyalatalo added the bug label on Dec 23, 2021
@tommyalatalo changed the title from "[QUESTION] nvme disk not showing with smartctl --scan" to "[BUG] nvme disk not showing with smartctl --scan" on Dec 23, 2021
@dannycjones

I experienced this issue today as well. However, I found that it works fine if I bind /dev/nvme0 instead of /dev/nvme0n1.

I found the following information on Server Fault, which hopefully explains it:

The character device /dev/nvme0 is the NVME device controller, and block devices like /dev/nvme0n1 are the NVME storage namespaces: the devices you use for actual storage, which will behave essentially as disks.

In enterprise-grade hardware, there might be support for several namespaces, thin provisioning within namespaces and other features. For now, you could think namespaces as sort of meta-partitions with extra features for enterprise use.

https://serverfault.com/a/892135

I'd guess many users will run into this, binding the block device rather than the controller itself.
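
For reference, a minimal sketch of a corrected devices stanza for the Nomad config above, assuming the same paths as in the report; the only change is binding the controller /dev/nvme0 rather than the namespace /dev/nvme0n1:

        devices = [
          {
            # NVMe controller character device; smartctl --scan talks to this,
            # not to the /dev/nvme0n1 namespace block device
            host_path      = "/dev/nvme0"
            container_path = "/dev/nvme0"
          },
          # ... keep the /dev/sdX entries from the original config unchanged
        ]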

@tommyalatalo
Author

tommyalatalo commented Jan 30, 2022

I experienced this issue today as well. However, I found that it works fine if I bind /dev/nvme0 instead of /dev/nvme0n1.

I found the following information on Server Fault, which hopefully explains it:

The character device /dev/nvme0 is the NVME device controller, and block devices like /dev/nvme0n1 are the NVME storage namespaces: the devices you use for actual storage, which will behave essentially as disks.
In enterprise-grade hardware, there might be support for several namespaces, thin provisioning within namespaces and other features. For now, you could think namespaces as sort of meta-partitions with extra features for enterprise use.

https://serverfault.com/a/892135

I'd guess many users will run into this, binding the block device rather than the controller itself.

I tried this and it works. This peculiarity of how the NVMe device needs to be mounted should be documented.
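
For anyone running this with docker-compose instead of Nomad, a rough equivalent sketch (service name, image tag, and volume paths assumed from the Nomad config above; the key line is the /dev/nvme0 device mapping):

    services:
      scrutiny:
        image: analogj/scrutiny:latest
        cap_add:
          - SYS_ADMIN
          - SYS_RAWIO
        volumes:
          - /run/udev:/run/udev:ro
        devices:
          # bind the controller device, not the /dev/nvme0n1 namespace
          - /dev/nvme0:/dev/nvme0
          - /dev/sda:/dev/sda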

@AnalogJ closed this as completed in e243d55 on May 9, 2022