What's new in Proxmox Virtual Environment 6.4

Enterprise software developer Proxmox Server Solutions GmbH (or “Proxmox”) has today released version 6.4 of its server virtualization management platform Proxmox Virtual Environment. This latest version comes with important new features such as live restore and single-file restore, support for Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20, many enhancements to KVM/QEMU, and notable bug fixes. The usability of Proxmox VE has improved significantly with the addition of many features and management options to the web interface.

The new version is based on Debian Buster 10.9, but uses a newer, long-term supported Linux kernel 5.4. Optionally, the 5.11 kernel can be installed, providing support for the latest hardware; the 5.4 kernel remains the default in the Proxmox VE 6.x series. The latest versions of leading open-source virtualization technologies such as QEMU 5.2, LXC 4.0, and OpenZFS 2.0.4 have been included. The Proxmox maintainers support two versions of Ceph, the massively scalable, distributed storage system, in their virtualization platform. During the installation process, users can select their preferred version, either Ceph Octopus 15.2.11 or Ceph Nautilus 14.2.20.
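
To check which versions of these components are actually installed on a node after the update, the pveversion tool provides a quick overview (the exact output depends on the installed packages):

    # show Proxmox VE and the versions of its key components (kernel, QEMU, LXC, ZFS, Ceph)
    pveversion -v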

Single-File Restore and Live Restore for KVM

Proxmox Virtual Environment 6.4 brings Live Restore and Single-File Restore, enabling users to simplify restore tasks and further improve recovery time objectives (RTO).

  • Single-File Restore: Quite often, users only need to recover a single file. This feature is now available for virtual machine and container backup archives stored on a Proxmox Backup Server, meaning an individual file or directory can be selected for restore without having to download the entire archive. To restore a file via the Proxmox VE web interface, users can open a file browser directly via the ‘File Restore’ button. A ‘Download’ button then allows the user to download files and directories, the latter being compressed into a zip archive on the fly. If users want to download data from a VM image, which might contain untrusted content, Proxmox VE starts a temporary VM to extract the data, avoiding direct exposure of the hypervisor system to potentially malicious data.
  • Live Restore: The new live restore feature can be enabled via the GUI or through the command ‘qmrestore’. The restore of a selected VM starts immediately after activation. This feature currently works for all VMs stored on a Proxmox Backup Server storage. It is especially useful for large VMs, for example a web server, where only a small amount of data is required for initial operation. The VM becomes operational as soon as the operating system and all necessary services have started, while the less frequently used data is continuously restored in the background. An example invocation is shown after this list.
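
As a rough sketch of the command-line side (the storage name, VM ID, snapshot timestamp, and repository below are placeholders, and the exact flag names should be verified against the qmrestore and proxmox-file-restore man pages), the two new restore modes can be driven like this:

    # live restore: VM 130 boots immediately while the remaining data is fetched in the background
    qmrestore pbs-store:backup/vm/130/2021-04-28T07:25:30Z 130 --live-restore 1

    # single-file restore: list the contents of a VM backup snapshot stored on a Proxmox Backup Server
    proxmox-file-restore list "vm/130/2021-04-28T07:25:30Z" / --repository root@pam@pbs.example.com:store1
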
Support for Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20

Proxmox Virtual Environment supports two versions of the massively scalable, distributed storage system Ceph. Users can select their preferred Ceph version, Ceph Octopus 15.2.11 or Ceph Nautilus 14.2.20, during the installation process. The integration of the placement group (PG) auto-scaler has improved in version 6.4: administrators can configure Target Size or Target Ratio settings and see the optimal number of PGs in the GUI. For easier usage, the Ceph pool view has been optimized, making it possible to show the columns related to the auto-scaler, as well as to configure the major pool properties from the web interface.
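
The Target Size and Target Ratio hints exposed in the GUI correspond to standard Ceph pool properties. As an illustration (the pool name ‘vm-pool’ is a placeholder), the same settings can be inspected and changed with the native Ceph tooling:

    # show the auto-scaler's view of all pools, including the suggested number of PGs
    ceph osd pool autoscale-status

    # hint the expected final size of a pool, either absolutely or as a ratio (set one or the other)
    ceph osd pool set vm-pool target_size_bytes 500G
    ceph osd pool set vm-pool target_size_ratio 0.5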

Further enhancements

Proxmox VE API Proxy Daemon: pveproxy listens on both IPv4 and IPv6 addresses by default. The listening IP address is configurable in /etc/default/pveproxy. This can help limit exposure to the outside, e.g., by binding only to an internal IP.
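
For example, to bind the web interface only to a management address (192.168.1.5 is a placeholder), set LISTEN_IP in /etc/default/pveproxy and restart the service:

    # /etc/default/pveproxy
    LISTEN_IP="192.168.1.5"

    # apply the new bind address
    systemctl restart pveproxy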

  • Container: Appliance templates and support for Alpine Linux 3.13, Devuan 3, Fedora 34, and Ubuntu 21.04. Improved handling of cgroup v2 (control groups).
  • External metric server: In Proxmox VE, you can define external metric servers, providing you with various statistics about your hosts, virtual guests, and storages. The new version supports the InfluxDB 1.8 and 2.0 HTTP(S) API and instances of InfluxDB behind a reverse proxy.
  • Improved ISO installer: The boot setup for ZFS installations is now better equipped for legacy hardware. Installations on ZFS now install the boot-loader to all selected disks, instead of only to the first mirror vdev, improving the experience with hardware where the boot-device is not easily selectable. Before installation, an NTP synchronization is attempted.
  • Storage: Proxmox VE 6.4 now allows for adding backup notes on any CephFS, CIFS, or NFS storage. Users can also configure a namespace for accessing a Ceph pool.
  • VMs (KVM/QEMU):
    • Support pinning a VM to a specific QEMU machine version (an example command follows this list).
    • Automatically pin VMs with Windows as OS type to the current QEMU machine on VM creation. This improves stability and guarantees that the hardware layout stays the same, even with newer QEMU versions.
    • cloud-init: re-add Stateless Address Autoconfiguration (SLAAC) option to IPv6 configuration.
  • Enhancements to the GUI
    • Show current usage of host memory and CPU resources by each guest in the node search-view.
    • Use binary (1 KiB equals 1024 B instead of 1 KB equals 1000 B) as base in the node and guest memory usage graphs, ensuring it is consistent with the current usage gauge.
    • Firewall rules: Columns are more responsive and flexible by default.
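
As an illustration of machine-version pinning (the VM ID and machine type below are examples only), the setting can also be applied from the command line:

    # pin VM 100 to the QEMU 5.2 i440fx machine type so its virtual hardware layout stays stable
    qm set 100 --machine pc-i440fx-5.2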

Based on Debian Buster (10.9)

  • Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20
  • Kernel 5.4 default
  • Kernel 5.11 opt-in
  • LXC 4.0
  • QEMU 5.2
  • ZFS 2.0.4 – new major version
  • Virtual Machines (KVM/QEMU):
    • Support pinning a VM to a specific QEMU machine version.
    • Automatically pin VMs with Windows as OS type to the current QEMU machine on VM creation.
    This improves stability and guarantees that the hardware layout can stay the same even with newer QEMU versions.
    • Address issues with hanging QMP commands, which caused VMs to freeze on disk resize and in other non-deterministic edge cases.
    Note that some QMP timeout log messages are still being investigated; they are harmless and purely informational.
    • cloud-init: re-add Stateless Address Autoconfiguration (SLAAC) option to IPv6 configuration.
    • Improve output in task log for mirroring drives and VM live-migration.
  • Container
    • Improved cgroup v2 (control group) handling.
    • Support and provide appliance templates for Alpine Linux 3.13, Devuan 3, Fedora 34, Ubuntu 21.04.
  • Backup and Restore
    • Implement unified single-file restore for virtual machine (VM) and container (CT) backup archives located on a Proxmox Backup Server.
    The file-restore is available in the GUI and in a new command line tool proxmox-file-restore.
    • Live-Restore of VM backup archives located on a Proxmox Backup Server.
    No more watching the task log, waiting for a restore to finish; VMs can now be brought up while the restore runs in the background.
    • Consistent handling of excludes for container backups across the different backup modes and storage types.
    • Container restores now default to the privilege setting from the backup archive.
  • Ceph Server
    • Improve integration for placement group (PG) auto-scaler status and configuration.
      Allow configuration of the CRUSH rule, Target Size, and Target Ratio settings, and automatically calculate the optimal number of PGs based on them.
  • Storage
    • Support editing of backup notes on any CephFS, CIFS or NFS storage.
    • Support configuring a namespace for accessing a Ceph pool (a configuration sketch follows this list).
    • Improve ZFS pool handling by performing separate checks for imported and mounted states.
    This separation helps in situations where a pool was imported but not mounted and executing another import command failed.
  • Disk Management
    • Return partitions and display them in tree format.
    • Improve detection of disk and partition usage.
  • Enhancements in the web interface (GUI)
    • Show current usage of host memory and CPU resources by each guest in a node’s search-view.
    • Use binary (1 KiB equals 1024 B instead of 1 KB equals 1000 B) as base in the node and guest memory usage graphs, providing consistency with the units used in the current usage gauge.
    • Make columns in the firewall rule view more responsive and flexible by default.
    • Improve Ceph pool view, show auto-scaler related columns.
    • Support editing existing Ceph pools, adapting the CRUSH-rule, Target Size and Target Ratio, among other things.
  • External metric servers:
    • Support the InfluxDB 1.8 and 2.0 HTTP(s) API.
    • Allow use of InfluxDB instances placed behind a reverse-proxy.
  • Proxmox VE API Proxy Daemon (pveproxy)
    • Make listening IP configurable (in /etc/default/pveproxy). This can help to limit exposure to the outside (e.g. by only binding to an internal IP).
    • pveproxy now listens on both IPv4 and IPv6 by default.
  • Installation ISO:
    • Installation on ZFS:
      • If booted with legacy BIOS (non-UEFI), the kernel images are now also copied to the second VFAT partition (ESP), allowing the system to boot from there with GRUB and making it possible to enable all ZFS features on such systems.
      • Set up the boot partition and boot loader on all selected disks, instead of only on the first mirror vdev, improving the experience with hardware where the boot device is not easily selectable.
    • The installer environment attempts an NTP time synchronization before actually starting the installation, avoiding telemetry and cluster issues if the RTC had a large time drift.
  • pve-zsync
    • Improved snapshot handling allowing for multiple sync intervals for a source and destination pair.
    • Better detection of aborted syncs, which previously caused pve-zsync to stop the replication.
  • Firewall
    • Fixes in the API schema to prevent storing rules with large IP-address lists, which get rejected by iptables-restore due to its size limitations. We recommend creating and using IPSets for that use case.
    • Improvements to the command-line parameter handling.
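
The Ceph namespace support mentioned above is configured per storage definition. The following sketch of an /etc/pve/storage.cfg entry uses placeholder names for the storage, pool, and namespace; the exact option names should be verified against the storage documentation:

    # /etc/pve/storage.cfg: an RBD storage entry limited to a single Ceph namespace
    rbd: guest-images
        pool vm-pool
        namespace tenant-a
        content images
        krbd 0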

Known Issues

  • Please avoid using zpool upgrade on the “rpool” (root pool) itself when upgrading to ZFS 2.0 on a system booted by GRUB in legacy mode, as that will break pool import by GRUB.
    See the documentation for determining the bootloader used, if you’re unsure.
    Setups installed with the Proxmox VE 6.4 ISO are not affected, as the installer always sets up an easier-to-handle, VFAT-formatted ESP to boot from.
  • New default bind address for pveproxy and spiceproxy, unifying the default behavior with Proxmox Backup Server
    • With LISTEN_IP now configurable, the daemons bind to both wildcard addresses (IPv4 0.0.0.0:8006 and IPv6 [::]:8006) by default.
    Should you wish to prevent it from listening on IPv6, simply configure the IPv4 wildcard as LISTEN_IP in /etc/default/pveproxy:
    LISTEN_IP="0.0.0.0"
    • Additionally, the logged IP address format for IPv4 clients changed in pveproxy’s access log (/var/log/pveproxy/access.log); they are now logged as IPv4-mapped IPv6 addresses. Instead of:
    192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
    the line now looks like:
    ::ffff:192.168.16.68 - root@pam [10/04/2021:12:35:11 +0200] "GET /api2/json/cluster/tasks HTTP/1.1" 200 854
    If you want to restore the old logging format, also set LISTEN_IP="0.0.0.0".
  • Resolving the Ceph `insecure global_id reclaim` Health Warning
    With Ceph Octopus 15.2.11 and Ceph Nautilus 14.2.20 we released an update to fix a security issue (CVE-2021-20288) where Ceph was not ensuring that reconnecting/renewing clients were presenting an existing ticket when reclaiming their global_id value.
    Updating from an earlier version will result in the above health warning.
    See the Proxmox forum post for more details and instructions to address this warning; a brief command outline follows.
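
    The warning is commonly cleared by first updating all Ceph clients and daemons and only then disallowing the insecure reclaim behavior; treat the following as an outline and follow the linked instructions for the full procedure:

    # list the clients and daemons that still rely on insecure global_id reclaim
    ceph health detail

    # only after every client and daemon has been updated, disallow the insecure behavior,
    # which removes the health warning
    ceph config set mon auth_allow_insecure_global_id_reclaim false
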
Source: https://www.proxmox.com/en/news/press-releases/proxmox-virtual-environment-6-4-available