This article is part of the sequence The Basics You Won’t Learn in the Basics aimed at eager people striving to gain a deeper understanding of programming and computer science.

Computers store data using numbers, and last time we covered how they store positive numbers in binary. But our adventure would be incomplete if we didn’t cover how to store negative numbers. This time, we will explore different ways of storing negative binary numbers, and we shall see why we store them that way.

### How do we compute negative numbers?

Normally, for doing any kind of arithmetic, we have 4 basic operations – addition, subtraction, multiplication and division. But the fun part is that we can express all of those operations using only addition. That is why, at the low level, the machine implements only addition, and all other operations use numerous additions to do their job. That is the case with negative numbers as well.

But in order to use addition as a means of subtraction, a special kind of storage is required for the numbers. Before we head into that, however, we should explore how we actually add binary numbers.

### How do we add binary numbers?

The first time I realized this, it seemed pretty mind-blowing to me, but **adding binary numbers follows the exact same rules as adding decimal numbers**. Let’s take an example.

If we want to compute 15 + 7, we would do the following:

- Calculate the digit in the units place. We calculate 5 + 7 and get a number which is bigger than 9, so we have **exceeded the base of our numeral system** (which is 10). Therefore, we perform a **carry**: we get 2 and a carry of 1 (since 5 + 7 = 12).
- Next, we calculate the digit in the tens place, which is 1 + 0. But note that we **have a carry of 1** from our previous computation. Therefore, we calculate 1 + 1 + 0, which equals 2.
- Our final answer is 22.

That’s how we learned to perform addition in the first years of school. The same algorithm is used when adding binary numbers. The only difference is that this time we **perform a carry** once we exceed 1 (not 9), since 2 is the base of the binary numeral system.

If we want to compute 01 + 11 in binary, we would do the following:

- Calculate the digit in the first position. (If we were in decimal, we would call this **the units place**, but the naming differs for binary numbers: in binary these are the **ones, twos, fours… places**.) We calculate 1 + 1, which is bigger than 1. So, again, we have exceeded the base of our numeral system (which is 2) and we **perform a carry**. The result is 0 and a carry of 1 (since 1 + 1 = 10 in binary).
- Calculate the digit in the second position: 0 + 1. But since we have a carry of 1 from the previous operation, we calculate 1 + 0 + 1 instead. This, again, results in a carry. We get 0 and a carry of 1 (10 again).
- Calculate the digit in the third position. Normally it’s 0 + 0, but we have a carry again, and therefore we calculate 1 + 0 + 0, equaling 1.
- Our final answer is 100 (in binary). If you convert the numbers back to decimal, you will see that this algorithm correctly produced the final result (1 + 3 = 4).
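The steps above translate directly into code. Here is a minimal sketch in Python (the function name `add_binary` is just an illustrative choice), adding two bit strings column by column while tracking the carry:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary numbers given as bit strings, digit by digit,
    carrying whenever a column sum exceeds 1 (the base minus one)."""
    # Pad both numbers to the same width.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)

    result = []
    carry = 0
    # Walk from the rightmost (ones) position to the left.
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        column = int(bit_a) + int(bit_b) + carry
        result.append(str(column % 2))  # digit that stays in this column
        carry = column // 2             # digit carried to the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("01", "11"))  # 100, matching the worked example above
```

Note that the loop body is identical to what we do with decimal digits; only the base changed.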

Our task now is to think of a way of storing numbers, so that the above algorithm works for negative numbers as well.

### Storing negative numbers using signed magnitude – a naive approach

When we write negative numbers in our notebook, we normally mark them by putting a minus sign in front of them. When we see -5, we know that this is the number negative five. Well, we don’t really have a convenient way of storing a minus sign in computer memory and treating it as part of the number. But what we can do is use one of the digits of the number to indicate whether the number is negative.

So if we have the 4 bit binary number 0011 (which equals 3), we can indicate that we are using the negative version of it by setting the leftmost bit to 1. So 1011 would equal -3 in decimal. This notation is pretty simple and easy to remember. It is called **signed magnitude**. But it is not very practical. In particular, it has 2 main issues.
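As a sketch, signed magnitude encoding for 4 bit numbers can be written in Python like this (the helper name `to_signed_magnitude` is hypothetical):

```python
def to_signed_magnitude(n: int) -> str:
    """Encode n as a 4-bit signed magnitude string: the leftmost bit
    is the sign (1 for negative), the other 3 bits hold the magnitude."""
    if not -7 <= n <= 7:
        raise ValueError("magnitude must fit in 3 bits")
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), "03b")  # 3 magnitude bits, zero-padded

print(to_signed_magnitude(3))   # 0011
print(to_signed_magnitude(-3))  # 1011
```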

The first one is that we have 2 representations of zero: 1000 represents -0 and 0000 represents 0. But -0 and 0 are the same number. That means that we are not fully utilizing the capacity of our 4 bit number. Instead of being able to hold 16 distinct values (4 bits give 16 different patterns, from 0000 to 1111), we hold only 15. This might not seem like a huge drawback, but there is a bigger issue.

This notation does not allow us to use the above algorithm for adding positive and negative numbers alike.

If we take 0011 (3 in decimal) and we add to it 1001 (-1 in decimal, using our notation) we will get faulty results.

      0011 —>  3
    + 1001 —> -1 (signed magnitude)
    --------
      1100 —> -4 (signed magnitude)

So it seems that 3 + (-1) = -4. That, of course, is not a valid result. So we should think of improving our notation.

### Ones’ complement – getting closer

Another idea is, instead of just setting the leftmost bit to 1, to invert all the bits of the original positive number.

The negative counterpart of the 4 bit binary number 0010, which equals 2, is thus represented as 1101. This notation is called **ones’ complement**.
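A sketch of ones’ complement encoding in Python (the helper name is again illustrative); inverting every bit of an n-bit pattern is the same as subtracting it from the all-ones pattern:

```python
def to_ones_complement(n: int, bits: int = 4) -> str:
    """Encode n in ones' complement: non-negative numbers are plain
    binary; negative numbers invert every bit of the positive version."""
    if n >= 0:
        return format(n, f"0{bits}b")
    # Flipping all bits of x equals (2**bits - 1) - x.
    return format((2 ** bits - 1) - abs(n), f"0{bits}b")

print(to_ones_complement(2))   # 0010
print(to_ones_complement(-2))  # 1101
```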

As in signed magnitude, we still have the issue of two representations of 0: one is 0000 and the other is 1111. But this notation is a major improvement when it comes to the addition algorithm.

If we add 3 and -1, we get:

      0011 —>  3
    + 1110 —> -1 (ones’ complement)
    --------
    (1)0001 —> 1

The actual number we get is 10001 in binary. But since we are using a 4 bit number to store the result, the leftmost 1 does not fit and is discarded, leaving 0001 as the final result. This event is called **overflow**, and we will talk more about its effect on our programs in another article.

So 1 is still not an accurate result, but it gets very close. The accurate result is 2, so there is only a difference of 1 between the result we need and the result we get. That calls for a **hotfix**.
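The off-by-one behaviour is easy to reproduce in Python. Masking with 0b1111 keeps only 4 bits, which plays the role of dropping the overflowing 1:

```python
MASK = 0b1111  # keep only the lowest 4 bits (drops the overflowing carry)

three = 0b0011      # 3
minus_one = 0b1110  # -1 in ones' complement

result = (three + minus_one) & MASK
print(format(result, "04b"))  # 0001, one short of the expected 0010
```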

### Two’s complement

The last notation we shall explore is a minor modification of ones’ complement. We store negative binary numbers by inverting the positive version of the number **and adding 1** to the result. So if we want to store 0011 (3) as a negative number, we invert it to get 1100 and then add 1. The result is 1101. This notation is called **two’s complement**.
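Following the same sketch style, two’s complement encoding is just the ones’ complement inversion plus one (the helper name is illustrative):

```python
def to_twos_complement(n: int, bits: int = 4) -> str:
    """Encode n in two's complement: invert the bits of the positive
    version, then add 1, keeping only the lowest `bits` bits."""
    if n >= 0:
        return format(n, f"0{bits}b")
    inverted = (2 ** bits - 1) - abs(n)  # flip every bit
    return format((inverted + 1) % (2 ** bits), f"0{bits}b")  # add 1

print(to_twos_complement(3))   # 0011
print(to_twos_complement(-3))  # 1101
print(to_twos_complement(0))   # 0000, a single representation of zero
```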

This time, the notation fixes the issue of having two representations of zero. Positive 0 is stored as 0000. If we convert it to two’s complement:

    Invert              —> 1111
    Add 1 to the result —> 1111 + 0001 = (1)0000

We discard the overflowing 1 and get 0000: the same representation as positive zero!

Let’s now explore the issue with adding values:

      0011 —>  3
    + 1111 —> -1 (two’s complement)
    --------
    (1)0010 —> 2

The leftmost 1 overflows and is discarded; the result is 0010.

So now 3 + (-1) = 2. That is the correct answer we were looking for.
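We can check this in Python: adding the raw bit patterns and masking to 4 bits (which discards the overflowing carry) gives exactly the right answer:

```python
MASK = 0b1111  # keep only the lowest 4 bits

def add_4bit(a: int, b: int) -> int:
    """Add two 4-bit patterns; masking discards the carry out of the
    leftmost bit, just like removing the (1) in the worked example."""
    return (a + b) & MASK

three = 0b0011
minus_one = 0b1111  # -1 in two's complement
print(format(add_4bit(three, minus_one), "04b"))  # 0010, i.e. 2
```

This is, incidentally, the same behaviour fixed-width integer arithmetic exhibits in hardware: the adder circuit never needs to know whether its operands are signed.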

**Two’s complement** efficiently solves the two main issues we explored in the previous notations. That is why it is used in virtually all modern computers to represent negative numbers. Signed magnitude and ones’ complement could be implemented in a computer system as well, but they would require additional modifications to resolve their erroneous behavior.

### Conclusion

Instead of bothering with all those notations, we could, of course, have stored all numbers in the computer in their positive variants along with a separate sign, but that would require additional hardware in modern processors to support such an architecture. That is why computer scientists have agreed on the standard of storing negative numbers using such notations.

Apart from that, in this article I have chosen a more novice-friendly approach to introducing the notations for negative binary numbers. Because of this, we haven’t answered the question of **why these notations work**. We shall leave that for an upcoming article.

Furthermore, two’s complement has its effect on working with numbers in our programs. Again, **what effect this has on our programs** we shall leave for another article.

Next time, we shall explore an even more interesting topic – how we represent real numbers in a computer.