# Bit

We explain what a bit is, its different uses, and how this basic unit of computing is calculated.

### What is a bit?

In computer science, a **bit** (a contraction of the English words *binary digit*) is **a value of the binary numbering system**. The system owes its name to the fact that it comprises only two basic values, 1 and 0, with which an infinite number of binary conditions can be represented: on and off, true and false, present and absent, and so on.

A bit, then, is **the minimum unit of information used in computing**, whose systems all rest on that binary code. Each bit of information represents a specific value, 1 or 0, but by combining several bits many more combinations can be obtained. For example:

A model of 2 bits (4 combinations):

- 00: both off
- 01: first off, second on
- 10: first on, second off
- 11: both on

With these two bits **we can represent four distinct values**. Now suppose we have 8 bits (one octet, equivalent in most systems to one *byte*): 256 different values are obtained.
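The growth in combinations can be checked with a short Python sketch (the helper name `bit_patterns` is ours, for illustration):

```python
from itertools import product

def bit_patterns(n):
    """Enumerate every pattern of n bits; there are 2**n of them."""
    return ["".join(bits) for bits in product("01", repeat=n)]

print(bit_patterns(2))        # the four 2-bit combinations listed above
print(len(bit_patterns(8)))   # 256 values for one octet
```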

In this way, the binary system operates by attending to the value of each bit (1 or 0) and its position in the represented chain: each position one place to the left doubles in weight, and each position to the right halves it. For example:

To represent the number 20 in binary:

Binary value: **1 0 1 0 0**

Numerical value per position: 16 8 4 2 1

Result: 16 + 0 + 4 + 0 + 0 = **20**
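The positional sum above can be reproduced in a few lines of Python (a minimal sketch; the function name `to_decimal` is our own):

```python
def to_decimal(bits):
    """Sum the positional weights of a binary string, left to right."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)  # moving left doubles every prior weight
    return total

print(to_decimal("10100"))  # 16 + 0 + 4 + 0 + 0 = 20
print(bin(20))              # '0b10100', Python's built-in confirmation
```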

Another example: to represent the number 2.75 in binary, assuming the binary point falls in the middle of the figure:

Binary value: **0 1 0 1 1**

Numerical value per position: 4 2 1 0.5 0.25

Result: 0 + 2 + 0 + 0.5 + 0.25 = **2.75**
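The fractional case works the same way once a position for the binary point is fixed. A sketch, assuming the point sits after the third bit as in the example (the function name `fixed_point_value` is ours):

```python
def fixed_point_value(bits, point):
    """Read `bits` with the binary point placed after index `point`."""
    total = 0.0
    for i, bit in enumerate(bits):
        total += int(bit) * 2.0 ** (point - 1 - i)  # weights: 4, 2, 1, 0.5, 0.25
    return total

print(fixed_point_value("01011", point=3))  # 2 + 0.5 + 0.25 = 2.75
```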

Bits with value 0 (off) are not counted; only those with value 1 (on) contribute, each with the numerical equivalent of its position in the chain. This representation mechanism is then applied to alphanumeric characters as well, for example in the ASCII encoding.
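In Python, the ASCII code of a character and its bit pattern can be inspected directly with the built-ins `ord` and `format`:

```python
# Each ASCII character corresponds to a 7-bit number, shown here
# padded to 8 bits (one byte per character).
for ch in "Bit":
    code = ord(ch)                      # numeric ASCII code
    print(ch, code, format(code, "08b"))
# B 66 01000010
# i 105 01101001
# t 116 01110100
```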

This is also how the operations of a computer's microprocessor are organized: there **are architectures of 4, 8, 16, 32 and 64 bits**. This means the microprocessor works with internal registers of that width, which determines the calculation capacity of its Arithmetic Logic Unit.
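On a given machine, the native word size can be checked from Python via the size of a pointer (a quick sketch; the result depends on the interpreter build, typically 32 or 64 bits):

```python
import struct

# "P" is the struct format code for a native pointer; its size in
# bytes times 8 gives the architecture's word width in bits.
word_bits = struct.calcsize("P") * 8
print(word_bits)  # typically 32 or 64
```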

For example, the first computers in the x86 series (the Intel 8086 and the Intel 8088) both had 16-bit processors, and the noticeable difference in their speeds had less to do with processing capacity than with their external data buses: 16 bits in the 8086 and 8 bits in the 8088.

Similarly, bits are used to measure the storage capacity of a digital memory.
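Capacity figures are usually quoted in bytes, so bit counts get divided by 8, rounding up (a small sketch; the helper name `bytes_needed` is ours):

```python
BITS_PER_BYTE = 8

def bytes_needed(n_bits):
    """Bytes required to store n_bits, rounded up."""
    return -(-n_bits // BITS_PER_BYTE)  # ceiling division

print(bytes_needed(8))    # 1 byte
print(bytes_needed(12))   # 2 bytes: 12 bits do not fit in a single byte
```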

See also: HTML.