This article explains what kibibytes, mebibytes, gibibytes, and tebibytes are, and how these storage units relate to kilobytes, megabytes, gigabytes, and terabytes.
… So What are They?
Simply, they are a different set of storage units used to express/measure file sizes.
They are analogous to kilobytes, megabytes, gigabytes, and terabytes – and often used interchangeably – but they are not the same.
Kilobytes, megabytes, etc., all measure units in thousands – 1000 kilobytes to a megabyte, 1000 megabytes to a gigabyte, and so on. This is the decimal measurement system of file storage.
Kibibytes, mebibytes, gibibytes, etc., are the binary measurement system – 1024 kibibytes to a mebibyte, 1024 mebibytes to a gibibyte, and so on. Everything is measured in a power of 2 to be consistent with other binary computer standards.
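The split between the two systems is easy to see in code. Here's a minimal Python sketch (the helper names `to_decimal_units` and `to_binary_units` are just illustrative) that formats the same byte count both ways:

```python
def to_decimal_units(n_bytes):
    """Format a byte count using decimal units: 1 kB = 1000 bytes."""
    for unit in ("B", "kB", "MB", "GB", "TB"):
        if n_bytes < 1000 or unit == "TB":
            return f"{n_bytes:.1f} {unit}"
        n_bytes /= 1000

def to_binary_units(n_bytes):
    """Format a byte count using binary units: 1 KiB = 1024 bytes."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n_bytes < 1024 or unit == "TiB":
            return f"{n_bytes:.1f} {unit}"
        n_bytes /= 1024

print(to_decimal_units(1_500_000))  # 1.5 MB
print(to_binary_units(1_500_000))   # 1.4 MiB
```

Same file, same number of bytes, two different labels, which is exactly where the confusion comes from.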
Prior to the introduction of names for the binary units, kilobytes, megabytes, etc., were used interchangeably for both measurements. A megabyte could have meant 1000 kilobytes or 1024 kilobytes depending on who you were talking to, so new terms were created to reduce the confusion.
People still use the terms interchangeably sometimes, which is probably why your search brought you here.
Why are Binary Units Used (And Becoming More Widely Used)?
Simple – because they’re a more accurate unit of measurement. Computers work in binary, so a binary measurement system will always provide a better representation of file sizes.
Just Show Me a Table
Decimal units are always smaller than their corresponding binary units – a kibibyte is bigger than a kilobyte because a kilobyte is 1000 bytes while a kibibyte is 1024 bytes.
Sure, that’s a small difference when talking about a single kilobyte – but as sizes increase, the difference increases quickly with it.
|Binary Name|Symbol|Bytes|Decimal Name|Symbol|Bytes|
|---|---|---|---|---|---|
|Kibibyte|KiB|1,024|Kilobyte|kB|1,000|
|Mebibyte|MiB|1,048,576|Megabyte|MB|1,000,000|
|Gibibyte|GiB|1,073,741,824|Gigabyte|GB|1,000,000,000|
|Tebibyte|TiB|1,099,511,627,776|Terabyte|TB|1,000,000,000,000|
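To put numbers on how fast that gap widens, here's a short Python sketch that computes the percentage difference between each decimal unit and its binary counterpart:

```python
# Compare each decimal unit (powers of 1000) with its binary
# counterpart (powers of 1024) and print the percentage gap.
units = [("kB", "KiB"), ("MB", "MiB"), ("GB", "GiB"), ("TB", "TiB")]

for exp, (dec, binu) in enumerate(units, start=1):
    decimal_bytes = 1000 ** exp
    binary_bytes = 1024 ** exp
    gap = (binary_bytes - decimal_bytes) / decimal_bytes * 100
    print(f"1 {binu} is {gap:.1f}% larger than 1 {dec}")
```

A kibibyte is only 2.4% larger than a kilobyte, but by the terabyte/tebibyte level the gap is roughly 10% – which is why a "1 TB" drive shows up as about 931 GiB.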
Where You’ll See Kibibytes, Mebibytes, Gibibytes, Tebibytes Used
Using binary measurements for file sizes has sort-of-but-not-completely caught on. You'll see both binary and decimal measurements used in code examples and documentation (depending on the developer's preference) and even in operating systems.
macOS and many Linux distributions use binary notation when displaying file sizes. Windows uses decimal. Some Linux desktop environments/GUIs will use decimal while binary will be displayed on the command line.
It’s a bit of a mess, but now that you know the difference, I hope there’s less confusion!