Is there a way to make ext-filesystems use less space for themselves in Linux?
I have a bunch of external and internal HDDs that I use on a Linux system. I only have Linux systems, so using a Linux filesystem would make sense, right? However, I'm currently using NTFS everywhere, because it gives me the most usable space out of my HDDs.
I would like to switch to Linux filesystems now, mostly because of permissions and compatibility (e.g. I can't resize my LUKS-encrypted NTFS partition under Linux; it keeps telling me to run chkdsk under Windows).
However, when I formatted those HDDs I tried out a bunch of different filesystems, and every Linux filesystem, even ext2, which as far as I know has no journaling, used a lot of space for itself. I don't recall the exact values, but NTFS got me over 100 GB more on a 2 TB HDD, which is a lot.
So my question is: Is there a way to make ext filesystems use less space for themselves? Or is there another filesystem with solid Linux support and high usable space? (I've tried ext2, ext3, ext4, NTFS and vfat; none of them came close to the usable space NTFS offered me.)
I'd love to hear how and why Linux filesystems (especially ext2, which has no journaling) use that much more space than NTFS, and I don't know where else to ask. I'd prefer a way to use ext4 without journaling and without anything else that uses up this much space, if that's possible.
Tags: linux, filesystems, ext4, ext3, ext2
Comments:
Have you seen this thread? – JakeGould
I have, and it explained what uses up the extra space, but the difference between NTFS and ext is MUCH bigger than between reiserfs and ext, and I'm wondering if there is any way to make it smaller. For example, on a 1 TB HDD I'm able to use 989 GB with NTFS; ext4 would give me around 909 GB. – confetti
Fair enough. Decent question and the answer is enlightening too. – JakeGould
How do you actually measure what space is available? This is important because, depending on which values you look at, you may or may not see the effect of the 5% reservation, as stated in the linked question. – eMBee
I used to check the "Free space" properties in the file manager, df, and gnome-system-monitor. However, the latter seems to have another column called free that actually shows the free space including the 5%, which I just found out about. – confetti
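The measurement question matters in practice. As a hedged illustration (the mount point `/` is just an example, and the exact numbers depend on your system), GNU `df` can show size, used, and available side by side; on ext2/3/4, `size - used` exceeds `avail` because the root reservation is counted in the size but excluded from what non-root users may fill:

```shell
# Print total, used, and available bytes for the filesystem holding /.
# On ext filesystems, size - used > avail: the gap is the root reservation.
df -B1 --output=size,used,avail,target /
```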
asked by confetti; edited by Braiam
2 Answers
Accepted answer (score 65), by harrymc:
By default, ext2 and its successors reserve 5% of the filesystem for use by the root user. This reduces fragmentation, and makes it less likely that the administrator or any root-owned daemons will be left with no space to work in.
These reserved blocks prevent programs not running as root from filling your disk.
Whether these considerations justify the loss of capacity depends on what the filesystem is used for.
The 5% amount was set in the 1980s when disks were much smaller, but was just left as-is. Nowadays 1% is probably enough for system stability.
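A quick back-of-the-envelope check shows this reservation accounts for the gap the asker observed (plain arithmetic, assuming only the 5% default):

```python
# Default ext reservation: 5% of the filesystem is held back for root.
disk_bytes = 2 * 10**12                  # a "2 TB" drive, decimal as marketed
reserved_bytes = disk_bytes * 5 // 100   # blocks reserved for root
print(reserved_bytes / 10**9)            # 100.0 -> about 100 GB, matching the question
```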
The reservation can be changed using the -m option of the tune2fs command:
tune2fs -m 0 /dev/sda1
This will set the reserved blocks percentage to 0% (0 blocks).
To get the current value (among others), use:
tune2fs -l <device>
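A safe way to try this end to end without touching a real disk is against a scratch image file (a sketch; the path and size are arbitrary, and e2fsprogs must be installed):

```shell
# Build a small ext4 filesystem inside an ordinary file (no root or block device needed).
truncate -s 256M /tmp/demo.img
mkfs.ext4 -q -F /tmp/demo.img

# Default reservation: about 5% of the block count.
tune2fs -l /tmp/demo.img | grep -i 'reserved block count'

# Shrink the reservation to 1% (use -m 0 for a pure data disk).
tune2fs -m 1 /tmp/demo.img
tune2fs -l /tmp/demo.img | grep -i 'reserved block count'
```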
Comments:
This would explain the immense difference in usable space perfectly (as 5% of 2 TB is 100 GB). The disks won't be used for anything root- or system-file-related, so I think it would be safe to disable this. One question though: how do root-owned programs know there is more free space than non-root programs? Running df as non-root vs. root shows no difference. – confetti
@confetti: Because the VFS doesn't reject their attempts to write to the disk with an error (until the volume is actually full, of course). – Ignacio Vazquez-Abrams
tune2fs -l <device> should give this value among others. The 5% amount was set in the 1980s when disks were much smaller, but was just left as-is. Nowadays 1% is probably enough for system stability. – harrymc
XFS reserves the smaller of 5% or 8192 blocks (32 MiB), so the reserved amount is generally tiny compared to the size of the filesystem. – Michael Hampton
Thank you very much everyone for the explanations. This helped me understand greatly. My disk used to fill up entirely to its last byte before, yet my system did not fail completely; now I understand why. – confetti
Answer (score 2), by hanshenrik:
If the data you intend to store on it is compressible, btrfs mounted with compress=zstd (or compress-force=zstd) would probably use significantly less disk space than ext*. This makes btrfs transparently compress your data before writing it to disk, and transparently decompress it when reading it back. Also, ext4 pre-allocates all inodes at filesystem creation, while btrfs creates them as needed; that might save some space too.
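A minimal sketch of what that looks like in practice (hedged: /dev/sdX1 and the mount point are placeholders, and mkfs.btrfs destroys any existing data on the device):

```shell
# Create a btrfs filesystem and mount it with transparent zstd compression.
mkfs.btrfs /dev/sdX1
mkdir -p /mnt/data
mount -o compress=zstd /dev/sdX1 /mnt/data

# compress-force compresses everything, even data btrfs samples as incompressible:
#   mount -o compress-force=zstd /dev/sdX1 /mnt/data
```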
Comments:
Do you mind adding more information to this answer? (How it works, what it does, maybe a reference...) – confetti
@confetti Like this? patchwork.kernel.org/patch/9817875 – hanshenrik
I really like this idea, but more information about how this would impact speed and performance would be nice. – confetti