1. Bit representation in ordinal data types
Every bit has a corresponding value in an ordinal data type:
BIT_0 -> 1
BIT_1 -> 2
BIT_2 -> 4
BIT_3 -> 8
BIT_4 -> 16
BIT_5 -> 32
BIT_6 -> 64
BIT_7 -> 128
BIT_n -> 2^n
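as a minimal sketch in C, such constants could be defined like this (the BIT_n macro names follow the table above; the macro form itself is just one common convention, not a fixed API):

    /* each constant is 2^n, matching the table above */
    #define BIT_0 (1u << 0)   /*   1 */
    #define BIT_1 (1u << 1)   /*   2 */
    #define BIT_2 (1u << 2)   /*   4 */
    #define BIT_3 (1u << 3)   /*   8 */
    #define BIT_4 (1u << 4)   /*  16 */
    #define BIT_5 (1u << 5)   /*  32 */
    #define BIT_6 (1u << 6)   /*  64 */
    #define BIT_7 (1u << 7)   /* 128 */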
2. How to check if a bit is set
bitN = (value & BIT_N) == BIT_N
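a short C sketch of this check, using the illustrative BIT_n constants from section 1:

    unsigned value = 0x0C;                 /* binary 0000 1100 */
    int bit3 = (value & BIT_3) == BIT_3;   /* 1: BIT_3 (8) is set   */
    int bit0 = (value & BIT_0) == BIT_0;   /* 0: BIT_0 (1) is clear */

for a single-bit mask, (value & BIT_N) != 0 gives the same result.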
3. How to set a bit
value |= BIT_N
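for example, in C (again using the illustrative BIT_n constants):

    unsigned value = 0;
    value |= BIT_2;     /* value is now  4 (0000 0100) */
    value |= BIT_5;     /* value is now 36 (0010 0100) */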
4. How to unset/toggle a bit
unset:  value &= ~BIT_N
toggle: value ^= BIT_N
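a small C sketch of both operations:

    unsigned value = 0xFF;   /* 1111 1111 */
    value &= ~BIT_0;         /* unset:  1111 1110 (254) */
    value ^= BIT_7;          /* toggle: 0111 1110 (126) */
    value ^= BIT_7;          /* toggling again restores: 1111 1110 (254) */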
5. Most/least significant bit
MSB          LSB
7 6 5 4 3 2 1 0
the most significant bit in a byte is BIT_7 with a value of 128
the least significant bit in a byte is BIT_0 with a value of 1
6. Shifting
int <<= 1
will multiply an int by 2 by moving all bits one position to the left, towards the most significant bit
int >>= 1
will divide an int by 2 by moving all bits one position to the right, towards the least significant bit
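for example (unsigned values are used here because shifting negative signed values is not portable in C):

    unsigned x = 3;
    x <<= 1;   /* x is now  6 (3 * 2)   */
    x <<= 2;   /* x is now 24 (6 * 2^2) */
    x >>= 1;   /* x is now 12 (24 / 2)  */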
7. What else can you do
mask more than one bit
byte = int & 0xFF
will return only the lowest 8 bits (the sum of BIT_0 … BIT_7 = 0xFF) of an integer
combined mask and shifting
byte0ofint = int & 0xFF
byte1ofint = (int & (0xFF << 8)) >> 8
byte2ofint = (int & (0xFF << 16)) >> 16
byte3ofint = (int & (0xFF << 24)) >> 24
…
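a small C sketch of extracting the bytes of a 32-bit value this way (the value and variable names are illustrative; the u suffix on 0xFFu avoids shifting a signed constant into the sign bit):

    #include <stdint.h>

    uint32_t v = 0x11223344;

    uint8_t byte0 =  v & 0xFFu;                 /* 0x44, lowest byte  */
    uint8_t byte1 = (v & (0xFFu << 8))  >> 8;   /* 0x33 */
    uint8_t byte2 = (v & (0xFFu << 16)) >> 16;  /* 0x22 */
    uint8_t byte3 = (v & (0xFFu << 24)) >> 24;  /* 0x11, highest byte */

shifting first and masking afterwards, e.g. (v >> 8) & 0xFF, gives the same result and is a common alternative.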
8. Byte order
each CPU stores ordinal data types bigger than one byte in a specific byte order
Intel: little endian, the byte order begins with the least significant byte
PowerPC: big endian, the byte order begins with the most significant byte
ARM: bi-endian, it can run with either byte order as far as I know
…
so whenever you store or read ordinal data types bigger than one byte in binary form, you should care about byte order
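as a rough sketch, the byte order of the machine you are running on can be checked at runtime by looking at the byte stored at the lowest address of a multi-byte integer:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t v = 0x01020304;
        unsigned char first = *(const unsigned char *)&v;   /* byte at the lowest address */

        if (first == 0x04)
            puts("little endian: the least significant byte comes first");
        else if (first == 0x01)
            puts("big endian: the most significant byte comes first");
        return 0;
    }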
9. Why?
- you will most likely need to work with bits and bytes when doing
- hardware-related programming
- network programming
- other stuff
- e.g. you can implement sets using bits (see the sketch below)
- …