I am reading Sams's Learn C++ in 21 Days, and this sentence doesn't make sense to me: "Because you have the same number of bytes for both signed and unsigned integers, the largest number you can store in an unsigned integer is twice as big as the largest positive number you can store in a signed integer." Please clear up this signed/unsigned issue and, if possible, explain how an "unsigned short int" holds values from 0 to 65,535 while a "short int" holds values from -32,768 to 32,767.
Don't get too hung up on this. The bits of a signed integer store data exactly the same way as the bits of an unsigned integer. On most systems a short integer uses 2 bytes (16 bits) of memory, so it can hold any one of 2^16 = 65,536 distinct bit patterns.
The only difference is interpretation: for signed integers, C logically treats the bit patterns that would represent 32,768 through 65,535 as the values -32,768 through -1 instead (this is two's-complement representation). Internally, everything is binary. But by having C and C++ treat signed numbers differently for some operations, you get both signed and unsigned integers. So an unsigned short spends all 65,536 patterns on 0 to 65,535, while a signed short spends half of them on negatives, leaving 0 to 32,767 for the positive side, which is why the largest unsigned value is (roughly) twice the largest signed one.