XFS is the default file system in Red Hat Enterprise Linux — it became the default starting with RHEL 7 — while the Debian, Ubuntu, and Fedora Workstation installers default to ext4. One practical difference is the ability to shrink a file system: ext4 can, XFS cannot. ZFS goes further than either by offering data integrity through checksums, not just physical redundancy.

Proxmox VE ships a Linux kernel with KVM and LXC support. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system was introduced as an optional file system, including as an additional selection for the root file system, and since version 4.2 the installer creates the "data" LV as a thin pool, to provide snapshots and native performance of the disk. The Proxmox Backup Server installer likewise partitions the local disk(s) with ext4, XFS, or ZFS and installs the operating system. On hosts that boot from ZFS, run proxmox-boot-tool refresh after kernel or bootloader changes (the equivalent of running update-grub on systems with ext4 or XFS on root). During the installation wizard you simply format the disk to ext4 and end up with two storage entries, "local" and "local-lvm"; the local-lvm storage can later be removed in the GUI and its space added back to Proxmox. To snapshot a guest, enter a name and description in the Create Snapshot dialog box. The boot-time file system check is triggered by /etc/rc.sysinit or by udev rules.

Some practical community notes: for a while, MySQL (not MariaDB) had performance issues on XFS with default settings, but even that is a thing of the past. Running Docker inside an LXC container can produce a storage-driver warning on the Proxmox host even though docker info inside the container shows overlay2 in use. There are also a lot of posts and blogs warning about extreme SSD wear on Proxmox when using ZFS. All of which feeds a common question: for a smaller, single-SSD server, is XFS a better choice than the default ext4 for the boot disk?
A hardware RAID controller functions the same regardless of whether the file system on top of it is NTFS, ext4, XFS, or anything else; the significant differences are in the file systems themselves. Ext4 is a journaling file system, while Btrfs uses copy-on-write (CoW) and brings snapshots, transparent compression and, quite importantly, block-level checksums; XFS supports larger files and file systems than ext4. LVM snapshots work under any of them: one Arch user keeps /var, /home, and /boot on separate partitions and has a pacman hook that snapshots the root partition on every upgrade, install, or package removal (it takes about two seconds).

Ext4 is the default file system on most Linux distributions for a reason: it is more mature than the others, which is why it sits at the top of our list. Since we used Filebench workloads for testing, our idea was to find the best file system for each test rather than a single overall winner. Still, some of us use XFS exclusively wherever there is no diverse media under the system (SATA/SAS only, or SSD only) and have had no real problems for decades, since it is simple and it is fast.

A typical ZFS layout after installation partitions the SSD three ways: a 32 GB root, 16 GB of swap, and a 512 MB boot partition. For plain file systems, you can create an ext4 or XFS file system on a disk using fs create, or by navigating to Administration -> Storage/Disks -> Directory in the web interface and creating one from there; after enlarging a volume, the last step is to resize the file system to grow all the way into the added space.

On Btrfs versus ZFS, the usual objections are that Btrfs is not integrated into the PVE web interface (for many good reasons), that its development pace is slower with fewer developers than ZFS has (compare how many updates each received over the last year), and that ZFS is cross-platform (Linux, BSD, Unix) while Btrfs runs only on Linux.
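If you want to rehearse the mkfs step without touching a real disk, the same tools run happily against a loopback image file — a minimal sketch, with an invented file name and label:

```shell
# Create a 512 MiB sparse file and format it as ext4.
# -F tells mkfs.ext4 that a regular file (not a block device) is fine.
truncate -s 512M demo-ext4.img
mkfs.ext4 -q -F -b 4096 -L pvedemo demo-ext4.img

# Inspect the superblock: shows the label and the 4096-byte block size.
dumpe2fs -h demo-ext4.img 2>/dev/null | grep -E 'volume name|Block size'
```

Against a real disk you would point mkfs.ext4 at the partition device instead, then mount it and add it as a directory storage.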
Btrfs trails the other options for a database workload in terms of both latency and throughput. Our test machine has oodles of RAM and more than enough CPU horsepower to chew through these storage tests without breaking a sweat, and in Proxmox we tried ext4, ZFS, and XFS in RAW and QCOW2 combinations; XFS is really nice and reliable. (It is also fair to say Proxmox beats FreeNAS for virtualization, since it uses KVM.)

To add storage, select Datacenter, then Storage, then Add. The storage entries are merely tracking things — as per the Proxmox wiki, on file-based storages snapshots are possible with the qcow2 format; otherwise you would have to partition and format the disk yourself using the CLI. One layering rule: CoW on top of CoW should be avoided — ZFS on ZFS, qcow2 on ZFS, Btrfs on ZFS, and so on.

As modern computing gets more advanced, data files get larger, so growing a volume is routine: use the lvextend command to extend the LV, then resize the file system. Here is the basic command for ext4:

# resize2fs /dev/vg00/sales-lv 3T

Reducing capacity is rarer but real — think of a 3 TB root volume where software in /opt routinely chews up disk space. And regardless of your choice of volume manager, you can always use both LVM and ZFS to manage your data across disks and servers when you move onto a VPS platform as well.
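The grow path can be rehearsed safely against a loopback image instead of a real LV — a sketch with a made-up file name and sizes:

```shell
# Make a 64 MiB ext4 image, then simulate "the disk grew"
# by extending the backing file and re-running resize2fs.
truncate -s 64M grow-demo.img
mkfs.ext4 -q -F -b 4096 grow-demo.img

truncate -s 128M grow-demo.img   # the volume just doubled
e2fsck -f -p grow-demo.img       # resize2fs insists on a clean fs
resize2fs grow-demo.img          # grows the fs to fill the file
```

On a live system the truncate step corresponds to lvextend, and resize2fs is pointed at the LV device; the XFS equivalent is xfs_growfs, which works only on a mounted file system.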
"EXT4 does not support concurrent writes, XFS does" is a claim you will see often — but ext4 is more mainline, and as always your specific use case affects this greatly; there are corner cases where any of these file systems comes out ahead, and you might just as well use ext4. The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. By far, XFS can handle large data better than any other file system on this list and do it reliably too, and it has been recommended by many for MySQL/MariaDB for some time. File-size limits matter as well: SnapRAID has no limitations if the disks are below 16 TB, but above 16 TB the parity drive has to be XFS, because the parity is a single file and ext4 has a 16 TB file-size limit.

In the preceding screenshot, we selected zfs (RAID1) for mirroring, and the two drives, Harddisk 0 and Harddisk 1, to install Proxmox. After searching the net, watching YouTube tutorials, and reading manuals for hours, many people still cannot articulate the difference between LVM and Directory storage — in short, a Directory storage is a path on an existing file system, while LVM hands out raw logical volumes. Note that ext4 and XFS are not CoW file systems, so they sit outside CoW-specific comparisons. You do get full ZFS integration in PVE, meaning native snapshots with ZFS but not with XFS. For LXC, Proxmox uses ZFS subvols, and ZFS subvols cannot be formatted with a different file system. One reader planning three hosts wants 52 TB dedicated to GlusterFS, linked to k8s nodes running on the VMs through a storage class; another asks whether ZFS is worth using for the Proxmox boot HDD when the original plan was LVM across two SSDs for the VMs themselves.
Some ZFS features do use a fair bit of RAM (like automatic deduplication), but those are features that most other file systems lack entirely. These quick benchmarks are intended as a reference for how the different file systems compare on a recent Linux kernel across the popular mainline choices: Btrfs, ext4, F2FS, and XFS. (Install Proxmox itself on the NVMe, or on another SATA SSD.)

Sorry to revive an old thread, but one question deserves a straight answer: yes, the main reason ZFS never got into the Linux kernel is a license problem — its CDDL license is considered incompatible with the GPL — which is also why Ubuntu 19.10 relies upon various back-ports from ZFS On Linux 0.8 rather than in-tree code. XFS was surely a slow file system on metadata operations, but that has been fixed recently as well.

Let's go through the different features. To remove existing partitions with fdisk, choose d to delete a partition (you might need to do it several times, until there is no partition anymore), then w to write the deletion. Storage replication brings redundancy for guests using local storage and reduces migration time. To organize its data, ZFS uses a flexible tree in which each new dataset is a child of an existing one. The main trade-off is pretty simple to understand: Btrfs has better data safety, because the checksumming lets it identify which copy of a block is wrong when only one is wrong, and means it can tell if both copies are bad; Btrfs is also working on per-subvolume settings. Any changes done to a VM's disk contents are stored separately from the base image. And, as a German poster put it: if you are struggling with high IO delay, arrange for more IOPS — for example by spreading the load across more spindles.
If you are sure there is no data on a disk that you want to keep, you can wipe it using the web UI: Datacenter -> YourNode -> Disks -> select the disk -> Wipe Disk. Storages, similarly, are added or deleted through the Datacenter view. In our stress testing, XFS showed thread_running jitter at 72 concurrent threads while ext4 remained stable; at lower thread counts, though, XFS is as much as 50% faster than ext4 (a simple 4-thread run gave ext4 74 MiB/sec against 97 MiB/sec for XFS).

The pvesr command-line tool manages the Proxmox VE storage replication framework. Proxmox VE tightly integrates the KVM hypervisor and Linux Containers (LXC), software-defined storage, and networking functionality on a single platform, and replication uses snapshots to minimize the traffic sent over the network.

Whether to use XFS or ext4 inside KVM VMs comes down to the same trade-offs as on the host. ZFS gives you snapshots, flexible subvolumes, and zvols for VMs, and if you have something with a large ZFS disk you can use it for easy backups with native send/receive abilities; XFS, meanwhile, does not support shrinking as such. Running ZFS on top of hardware RAID is discouraged, but it shouldn't lead to any more data loss than using something like ext4. When creating a datastore, the ID should be a name by which you can easily identify the store — we use the same name as the directory itself.

Smaller notes: a native Gluster mount from a client provided an up/down speed of about 4 MB/s, so we added nfs-ganesha-gluster (3.x or newer); Proxmox 6 now has SSD TRIM support on ZFS, which might help with wear; and up to Proxmox VE 4.1, the installer created a standard logical volume called "data", mounted at /var/lib/vz. ZFS is faster than ext4 in many configurations and is a great file system candidate even for boot partitions — I would go with ZFS and not look back.
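The interactive fdisk dance (d to delete, w to write) can be rehearsed on a loopback image with sfdisk, which scripts the same partition-table edits — a sketch with an invented image name:

```shell
# Work on a 16 MiB image file instead of a real disk.
truncate -s 16M part-demo.img

# Create one Linux partition spanning the whole image.
echo ',,L' | sfdisk -q part-demo.img

# Delete every partition (the scripted equivalent of fdisk's d + w).
sfdisk -q --delete part-demo.img

# Dump the table: only the empty label header remains.
sfdisk -d part-demo.img
```

Against a real disk you would pass the device node instead of the image; the web UI's Wipe Disk goes further and also clears file-system signatures.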
Clean installs of Ubuntu 19.10 were done with both ext4 and ZFS, using the stock mount options and settings each time; the defaults to compare are ext4 and XFS, and XFS is also the default file system suggested by the CentOS 7 installer. In the ext4-vs-XFS argument, one camp holds that XFS is absolutely better than ext4 in just about every way — that's right, XFS "repairs" errors on the fly, whereas ext4 requires you to remount read-only and fsck. The other camp, per a post by Sabuj Pattanayek: ext4 has better random I/O performance than XFS, especially on small reads and writes. In general practice, XFS is used for large file systems, and not so much for /, /boot, and /var; despite some capacity limitations, ext4 is a very reliable and robust system to work with, and it does not require extensive reading to administer.

Copy-on-write (CoW): ZFS is a copy-on-write file system and works quite differently from a classic file system like FAT32 or NTFS; with a decent CPU, its transparent compression can even improve performance. ZFS can detect data corruption, but it cannot correct it on a single disk — for that you would need a mirror.

A few procedural notes. When using the backup file-restore feature, you should see a single process with an argument that contains 'file-restore' in the '-kernel' parameter of the restore VM. To reclaim local-lvm for other uses, select the local-lvm storage, click the Remove button, and remount a zvol (or other storage) at /var/lib/vz. Step 4 is to resize the / partition to fill all available space; after growing the partition, grow an XFS file system with:

# xfs_growfs -d /dev/sda1

For anything other than a normal file system like ext4, use a directory storage instead. And budget some patience: you may fight with ZFS automount for three hours, because it doesn't always remount ZFS datasets on startup.
The common commands for ext3/ext4 have XFS equivalents rather than identical twins — resize2fs vs xfs_growfs, e2fsck vs xfs_repair, dump vs xfsdump. An example deployment: Proxmox OS on hardware RAID1, six disks in a ZFS RAIDZ1, plus two SSDs in a ZFS RAID1 mirror. If you have a NAS or home server, Btrfs or XFS can offer benefits, but then you'll have to do some extensive reading first; I'd still choose ZFS. One hard constraint to know: Docker's overlay2 driver only supports ext4 and XFS as backing file systems, not ZFS.

We are evaluating ZFS for our future Proxmox VE installations over the currently used LVM: three identical nodes, each with 256 GB NVMe plus 256 GB SATA. Proxmox actually creates the datastore in an LVM volume, so you're good there. Storage definitions tell PVE where it can put disk images of virtual machines, where ISO files or container templates for VM/CT creation may be, which storage may be used for backups, and so on; adding or deleting a storage through Datacenter changes only the definition, not the data. Reducing storage space is a less common task, but worth noting: shrinking is no problem for ext4 or Btrfs, whereas XFS cannot shrink. To drop a storage, click Remove and confirm.

See the Proxmox VE reference documentation about ZFS root file systems and host bootloaders. As PBS can also check for data integrity on the software level, ext4 with a single SSD is a fine choice for a Proxmox Backup Server. With Proxmox you need a reliable OS/boot drive more than a fast one. Other helpful info: ZFS features are hard to beat; disk images for VMs are stored in ZFS volume (zvol) datasets, which provide block-device functionality; and in the future, Linux distributions may gradually shift towards Btrfs. One contributor was contemplating using a PERC H730 to configure six of the physical disks as a RAID10 virtual disk — but as noted elsewhere, a plain controller with ZFS handling redundancy is usually the better route.
Proxmox VE is a complete operating system (Debian Linux, 64-bit) with a Proxmox Linux kernel that includes ZFS support. A typical plan: dedicate 1 TB of a zpool as storage for two VMs, create the VMs inside Proxmox with qcow2 as the VM disk format, on NVMe drives formatted with a 4096-byte block size.

Ext4 has a more robust fsck and runs faster on low-powered systems, and with LVM you can have snapshots even with ext4. Virtual machine storage performance is a hot topic — after all, one of the main problems when virtualizing many OS instances is to correctly size the I/O subsystem, in terms of both space and speed. Ext4 seems better suited for lower-spec configurations, although it will work just fine on faster ones as well, and performance-wise it is still better than Btrfs in most cases; XFS, by contrast, performs best on fast storage and better hardware allowing more parallelism — it has some real advantages over ext4. The installer's default file system is ext4; if you want XFS for performance, change it at the "Select the Target Harddisk" step — but don't change the file system there unless you know what you are doing and specifically want ZFS, Btrfs, or XFS. (In any later command that assumes ext4, substitute xfs if that is what you used.)

On storage pool types: with lvmthin, LVM normally allocates blocks when you create a volume, whereas a thin pool allocates them only as they are written. ZFS has dataset- (or pool-) wide snapshots; with XFS this has to be done per file system via LVM, which is not as fine-grained. Btrfs, finally, was born as the natural successor to ext4: its goal is to replace ext4 by removing as many of its limitations as possible, above all those related to size. Proxmox also has the ability to automatically do zfs send and receive between nodes.
Please note that Proxmox VE currently only supports one technology for local software-defined RAID storage: ZFS, which is a file system and volume manager combined. The Proxmox VE installer partitions the local disk(s) with ext4, XFS, Btrfs (technology preview), or ZFS and installs the operating system. Redundancy cannot be achieved by one huge disk drive plugged into your project; on the plus side, ZFS works really well with different-sized disks and with pool expansion.

On size limits, the main consideration is the support levels available: ext4 is supported up to 50 TB, XFS up to 500 TB (this is addressed in Red Hat's knowledge base article on the topic). XFS is a robust and mature 64-bit journaling file system that supports very large files and file systems on a single host. Backup tooling is per-file-system: it's xfsdump/xfsrestore for XFS, dump/restore for ext2/3/4 — and if the file system is damaged, unfortunately you will probably lose a few files in both cases.

A recurring container scenario: the container has two disks (raw format), the rootfs and an additional mount point, both ext4, and the owner wants to reformat the second mount point as XFS; following the wiki, the first step is to go to the VM page via the Proxmox web browser control panel. For storage used for files no larger than 10 GB — many small files, Time Machine backups, movies, books, music — XFS or ext4 should work fine; I'm always in favor of ZFS because it just has so many features, but it's up to you. Finally, comparing direct XFS/ext4 numbers against Longhorn, which has distribution built into its design, may set an incorrect expectation.
To answer the LVM-vs-ZFS question: LVM is just an abstraction layer that would have ext4 or XFS on top, whereas ZFS is an abstraction layer, RAID orchestrator, and file system in one big stack. One caveat when migrating to a ZFS root: /etc/fstab and some other things are somewhat different there, so they should probably not be transferred over as-is. Note also that ESXi does not support software RAID implementations. (One reader sidesteps the whole question with a SansDigital EliteRAID unit doing on-device RAID 5, passed through over USB to a Windows Server VM.)

In terms of XFS vs ext4, XFS is superior in maximum partition and file size, though both are enormous: ext4 supports partitions up to 1 EiB, and the ext4 file system is 48-bit with a maximum file size of 1 exbibyte, depending on the host operating system. Which should you run? The one your distribution recommends — you probably don't want to pick either purely for speed. Compared to ext4, XFS has relatively poor performance for single-threaded, metadata-intensive workloads, and it still has some reliability gaps, but it could be good for a large data store where speed matters and rare data loss (e.g., from power failure) could be acceptable. ZFS zvols support snapshots, dedup, and compression, and I personally haven't noticed any difference in RAM consumption since switching from ext4 about a year ago. For containers, privileged vs unprivileged doesn't matter to this choice.
On ZFS over hardware RAID: so yes, you can do it, but it's not recommended and could potentially cause data loss. ZFS storage uses ZFS volumes, which can be thin provisioned; on an installation upgraded from PVE 3 to PVE 4, the default compression is "on". (You can also use RAW or something else, but this removes a lot of the benefits of things like thin provisioning.) Alternatively, turn the HDDs into LVM and then create the VM disk there, or you could go with Btrfs — even though it's still in beta and not recommended for production yet. You can even have a VM configured with LVM partitions inside a qcow2 file, though qcow2 inside LVM doesn't really make sense. One of us is in the middle of exactly this kind of work, increasing a VM's disk from 750 GB.

Ext4 is an improved version of the older ext3 file system. In the 4-thread benchmark, XFS managed 97 MiB/sec (versus 74 MiB/sec for ext4), but the two mostly perform differently on specific workloads, like creating or deleting tens of thousands of files and folders. So you should have no strong preference, except to consider what you are familiar with and what is best documented. To collect your own numbers, install Performance Co-Pilot: # yum install pcp.

Back in the GUI: once you have selected Directory, it is time to fill out some info — in the directory option, input the directory we created — and select VZDump backup file as the content. Finally, schedule backups by going to Datacenter -> Backups. The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
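For reference, the Directory storage created above ends up as a small section in /etc/pve/storage.cfg. This is a hedged sketch — the storage id and path are invented examples, so check your own file for the authoritative syntax:

```
dir: vzdump-store
        path /mnt/backup
        content backup
```

Block-device storages (LVM, ZFS, Ceph) get analogous sections under their own type prefixes (lvmthin:, zfspool:, and so on).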
Ext4 is the default file system of Red Hat Enterprise Linux 6, supporting up to 16 TB both for single files and for the whole file system. XFS vs ext4 is a very common Linux file system question, so here is a quick summary: XFS is a high-performance file system originally developed by Silicon Graphics in the early 1990s, and it will generally have better allocation-group parallelism; on the other hand, XFS doesn't support shrinking as such. All have pros and cons; unless you're doing something crazy, ext4 or Btrfs would both be fine. (Our own target here is the best file system for RAID1 host partitions.)

Head over to the Proxmox download page and grab yourself the Proxmox VE 6.x ISO. Note that when adding a directory as a Btrfs storage which is not itself also the mount point, it is highly recommended to specify the actual mount point via the is_mountpoint option, and to create a dedicated directory to mount it to. If you are using ZFS with Proxmox, the role of the LVM thin pool is taken by a ZFS pool; and if you installed Proxmox on a single disk with ZFS on root, then you just have a pool with a single, single-disk vdev. To fold the thin pool back into root, issue the following commands from the shell (choose the node > Shell), then grow the root file system to match (resize2fs /dev/pve/root on ext4):

# lvremove /dev/pve/data
# lvresize -l +100%FREE /dev/pve/root

Storage replication replicates guest volumes to another node so that all data is available without using shared storage; it was pretty nice when I last used it, even with only two nodes. For now, the PVE hosts store backups both locally and on a single-disk PBS datastore. On the other hand, if you install Proxmox Backup Server on ext4 inside a VM hosted directly on the Proxmox VE host's ZFS, you can snapshot the whole backup server, or even use ZFS replication, for maintenance purposes. Fair warning: a KVM guest may even freeze when high IO traffic is done on the guest, and you may swear at your screen while figuring out why a VM doesn't start. Happy server building!
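A replication job created with pvesr lands in /etc/pve/replication.cfg. The snippet below is a from-memory sketch — the job id 100-0, node name pve-node2, and 15-minute schedule are all invented, and the exact syntax should be verified against the pvesr documentation:

```
local: 100-0
        target pve-node2
        schedule */15
```

Such a job would be created with something like pvesr create-local-job 100-0 pve-node2 --schedule '*/15', after which pvesr status shows its progress.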
Note the use of '--' in partitioning commands, to prevent the following '-1s' last-sector indicator from being interpreted as an option. A typical mkfs run looks like this:

# mkfs.ext4 /dev/sdc
mke2fs 1.44.5 (15-Dec-2018)
Creating filesystem with 117040640 4k blocks and 29261824 inodes
Filesystem UUID: bb405991-4aea-4fe7-b265-cc644ea5e770

When dealing with multi-disk configurations and RAID, the ZFS file system on Linux can begin to outperform ext4, at least in some configurations — you can see several XFS vs ext4 benchmarks on Phoronix, and for really large sequential workloads XFS is traditionally strong. Ext4 can claim historical stability, while the consumer advantage of Btrfs is snapshots (the ease of subvolumes is nice too, rather than having to partition); Btrfs is still developmental and has some deficiencies that need to be worked out, but it has made a fair amount of progress. Remember that neither ext4 nor XFS is a copy-on-write file system. At boot, /etc/rc.sysinit or udev rules will normally run a vgchange -ay to automatically activate any LVM logical volumes. For snapshots in the GUI, select the VM or container and click the Snapshots tab.

On hardware: you're better off using a regular SAS controller and then letting ZFS do RAIDZ (the RAID5 analogue) than putting ZFS behind a RAID card. For spinning-rust data storage on single disks over 4 TB, I would consider XFS over ZFS or ext4. The first step for any of the CLI work here is to log in to PVE via SSH. One last quirk to expect: an SSD may show up as, say, 223.57 GiB under Datacenter -> pve -> Disks, because the GUI reports binary GiB rather than decimal GB.
Select your country, time zone, and keyboard layout, and the installer does the rest. On a fresh install of Proxmox with Btrfs, be aware that containers install by default with a loop device formatted as ext4, instead of using a Btrfs subvolume, even when the disk is configured using the Btrfs storage backend. (Btrfs is the default file system in the Red Hat family only on Fedora.)

TL;DR — should you use ext4 or ZFS for a file server / media server? A few closing rules of thumb. Putting ZFS on hardware RAID is a bad idea. Cheaper SSD/USB/SD cards tend to get eaten up by Proxmox, hence the recommendation for high-endurance media; beyond that, probably just mount with noatime. The /var/lib/vz path is now included in the root LV. From the documentation: the choice of a storage type will determine the format of the hard disk image — storages which present block devices (LVM, ZFS, Ceph) will require the raw disk image format, whereas file-based storages (ext4, NFS, CIFS, GlusterFS) will let you choose either the raw disk image format or the QEMU image format (qcow2). As one commenter put it about a data-integrity claim: "I hope that's a typo, because XFS offers zero data integrity protection." For raw throughput, though, both ext4 and XFS should be able to handle it.