Is there a way to make ext-filesystems use less space for themselves in Linux?

I have a bunch of external and internal HDDs that I use on a Linux system. I only have Linux systems, so using a Linux filesystem would make the most sense, right? However, I'm currently using NTFS everywhere, because it gives me the most usable space out of the HDDs.

I would like to switch to Linux filesystems now, mostly because of permissions and compatibility (e.g. I can't get my LUKS-encrypted NTFS partition to resize under Linux; it keeps telling me to run chkdsk under Windows).

However, when I formatted those HDDs I tried out a bunch of different filesystems, and every Linux filesystem, even ext2, which as far as I know has no journaling, used a lot of space for itself. I don't recall exact values, but NTFS gave me over 100GB more usable space on a 2TB HDD, which is a lot.

So my question is: is there a way to make ext filesystems use less space for themselves? Or is there another filesystem with full Linux support and good usable space? (I've tried ext2, ext3, ext4, NTFS and vfat; none of them came even close to the usable space NTFS offered me.)

I'd love to hear how and why filesystems (especially ext2, which has no journaling) use that much more space than NTFS, and I don't know where else to ask. Ideally I'd like a way to use ext4 without journaling and without anything else that eats up this much space, if that's possible.
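
For reference, ext4 can indeed be created without a journal. A minimal sketch using standard e2fsprogs options, with /dev/sdX1 standing in for the target partition:

mkfs.ext4 -O ^has_journal /dev/sdX1       # create ext4 with the journal feature turned off
tune2fs -O ^has_journal /dev/sdX1         # or strip the journal from an existing, unmounted ext4

The journal is small relative to the disk (typically well under a gigabyte), so removing it does not explain a ~100GB gap; the reserved-blocks setting discussed in the accepted answer below is the far larger factor.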







  • Have you seen this thread? – JakeGould, yesterday

  • I have, and it explained what uses up the extra space, but the difference between NTFS and ext is MUCH bigger than between reiserfs and ext, and I'm wondering if there is any way to make it smaller. For example, on a 1TB HDD I'm able to use 989GB with NTFS; ext4 would give me around 909GB. – confetti, yesterday

  • Fair enough. Decent question and the answer is enlightening too. – JakeGould, yesterday

  • How do you actually measure what space is available? This is important because, depending on which values you look at, you may or may not see the effect of the 5% reservation, as stated in the linked question. – eMBee, 21 hours ago

  • I used to check the "Free space" properties in the file manager, df and gnome-system-monitor. However, the latter seems to have another column called "free" that actually shows the free space including the 5%, which I only just found out about. – confetti, 21 hours ago
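
To make the measurement concrete, a hedged example of checking both figures from a shell (device and mount point are placeholders):

df -h /mnt/data                                              # "Avail" already excludes the root reservation, so Used + Avail < Size
sudo tune2fs -l /dev/sdX1 | grep -i 'reserved block count'   # the reservation as a raw block count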
2 Answers

Accepted answer (score 65), by harrymc:

By default, ext2 and its successors reserve 5% of the filesystem for use by the root user. This reduces fragmentation, and makes it less likely that the administrator or any root-owned daemons will be left with no space to work in.

These reserved blocks also prevent programs not running as root from filling your disk. Whether these considerations justify the loss of capacity depends on what the filesystem is used for.

The 5% amount was set in the 1980s, when disks were much smaller, but was simply left as-is. Nowadays 1% is probably enough for system stability.

The reservation can be changed using the -m option of the tune2fs command:

tune2fs -m 0 /dev/sda1

This will set the reserved blocks percentage to 0% (0 blocks).

To see the current value (among other filesystem parameters), use:

tune2fs -l <device>
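
The same reservation can also be chosen when the filesystem is created; as a hedged sketch (the device name below is a placeholder), the -m option of mkfs.ext4 takes the reserved-blocks percentage directly:

mkfs.ext4 -m 1 /dev/sdX1    # keep a 1% reservation, as suggested above
mkfs.ext4 -m 0 /dev/sdX1    # no reservation at all, reasonable for a pure data disk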





  • This would explain the immense difference in usable space perfectly (as 5% of 2TB is 100GB). The disks won't be used for anything root- or system-file-related, so I think it would be safe to disable this. I have a question though: how do root-owned programs know there is more free space than non-root programs? Running df as non-root vs. root shows no difference. – confetti, yesterday

  • @confetti: Because the VFS doesn't reject their attempts to write to the disk with an error (until the volume is actually full, of course). – Ignacio Vazquez-Abrams, yesterday

  • tune2fs -l <device> should give this value among others. The 5% amount was set in the 1980s when disks were much smaller, but was just left as-is. Nowadays 1% is probably enough for system stability. – harrymc, yesterday

  • XFS reserves the smaller of 5% or 8192 blocks (32 MiB), so the reserved amount is generally tiny compared to the size of the filesystem. – Michael Hampton, yesterday

  • Thank you very much everyone for the explanations. This helped me understand greatly. My disk used to fill up entirely, to its last byte, yet my system did not fail completely; now I understand why. – confetti, 21 hours ago
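
As a hedged illustration of the df question above: an ext filesystem reports both the total free blocks and the blocks available to unprivileged users, and GNU stat exposes the two figures side by side (the mount point is a placeholder):

stat -f --format='blocks=%b free=%f available=%a' /mnt/data

df's "Avail" column is the "available" figure, which excludes the reservation regardless of who runs it; root simply isn't refused when it writes into the gap between "free" and "available".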

Answer (score 2), by hanshenrik:

If the data you intend to store on it is compressible, btrfs mounted with compress=zstd (or compress-force=zstd) would probably use significantly less disk space than ext*.

  • This will make btrfs transparently compress your data before writing it to disk, and transparently decompress it when reading it back. Also, ext4 pre-allocates all inodes at filesystem creation, while btrfs creates them as needed; I guess that might save some space too.
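
A minimal sketch of what that looks like in practice (device and mount point are placeholders):

mkfs.btrfs /dev/sdX1
mount -o compress=zstd /dev/sdX1 /mnt/data

or as a persistent /etc/fstab entry:

/dev/sdX1  /mnt/data  btrfs  compress=zstd  0  0

compress-force=zstd compresses even data that btrfs's heuristic would otherwise skip; zstd support requires kernel 4.14 or newer.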





  • Do you mind adding more information to this answer? (How it works, what it does, maybe a reference, ...) – confetti, 17 hours ago

  • @confetti like this? patchwork.kernel.org/patch/9817875 – hanshenrik, 17 hours ago

  • I really like this idea, but more information about how this would impact speed and performance and such would be nice. – confetti, 5 hours ago









