# A Brilliant Hack: Why Does the Layer 2/3 Checksum Use 1’s Complement, Not 2’s?

A super quick recap: one’s complement represents negative x by inverting every bit of x, while two’s complement represents negative x as the one’s complement of x plus 1. Symbolically,

```
one’s complement    -x = ~x
two’s complement    -x = ~x + 1
```

Two’s complement has taken over essentially the entire computing world. Major reasons for its dominance include that the adder in your processor’s ALU works identically for positive and negative operands, that the same adder can easily double as a subtractor, and that there is exactly one way to represent the value of zero.

Nonetheless, the error-detection checksum makes the conscious decision to use 1’s complement even when its host ISA uses 2’s. A refresher on the checksum algorithm:

```
Send:
    calculate the 1’s complement sum
    over the payload on a 16-bit word boundary
    negate the sum to use as the checksum

Receive:
    calculate the 1’s complement sum
    on a 16-bit word boundary
    over the payload INCLUDING the sender’s checksum
    assert the result has bits of all ones
```

An example implementation might look like the following:

```c
#include <sys/types.h> /* u_short, u_long */

u_short cksum(u_short *buf, int count) {
    register u_long sum = 0;
    while (count--) {
        sum += *buf++;
        if (sum & 0xFFFF0000) {
            /* carry out of the low 16 bits: wrap it around (end-around carry) */
            sum &= 0xFFFF;
            sum++;
        }
    }
    return ~(sum & 0xFFFF);
}
```

Perhaps such an implementation is slightly faster than one using two’s complement, but I found that explanation unsatisfactory, so I did some more digging. I realized the real payoff of the 1’s complement sum is endianness independence. Endianness matters because the bytes of a 16-bit checksum word are swapped when read from the link into a CPU of the opposite byte order, and different hosts use different endianness. Using a 2’s complement sum would mandate calling `ntohs` and `htons` on EVERY router along the way. The 1’s complement sum does not need to reorder bytes: whichever byte order a host sums in, the receiver simply asserts that the final sum has all bits set to one. Blazing fast, and a brilliant hack.