Hipkiss said that we are humans, and he's right. In fact the first computers we tried to build were based on the human-friendly decimal system. Wikipedia knows more.
So why didn't we keep using those 10 digits, which are so easy to understand? Because in electronics it didn't work well.
Let's say you were to build one of these computers. You have a copper wire, and you have to transmit, using a single current "shot", a signal that can represent 10 values.
You might map no voltage to 0, anything up to 10V to 1, up to 20V to 2, up to 30V to 3, and so forth.
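To see how fragile this is, here is a toy sketch of such a decoder. The function name and the exact band boundaries are my own illustration, not from any real machine; I simply follow the 10V-wide bands above:

```python
import math

def decode_decimal(voltage):
    """Map a measured voltage to a decimal digit, following the
    bands above: 0V -> 0, (0V, 10V] -> 1, (10V, 20V] -> 2, ..."""
    if voltage <= 0:
        return 0
    return math.ceil(voltage / 10)

# A digit "2" is sent as a nominal 20V pulse.
print(decode_decimal(20.0))   # clean signal: decodes as 2
print(decode_decimal(20.5))   # +0.5V of noise: decodes as 3, a wrong digit
```

Half a volt of noise near a band edge silently corrupts the digit, and with ten bands there is always an edge nearby.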
The problem with this scheme is that keeping the voltage stable is very hard. It fluctuates all the time. You would end up making wrong readings so frequently that your computer would be useless. Now, there are ways to improve the signal quality, but you risk adding more complexity to keep the signal stable than to do the processing itself.
So we fell back to binary. Approximately 0V is read as binary 0 (there's tolerance up to 0.8V, IIRC) and approximately 5V is read as 1 (anything above 2V, actually). There is a no-fly zone between 0.8V and 2V: any signal in that range cannot be interpreted reliably. What happens next is implementation dependent. Reboot? Retransmit? Halt?
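The same toy decoder for binary shows why this scheme is robust. The 0.8V/2V thresholds are the TTL input levels mentioned above; `read_bit` is a hypothetical name for illustration:

```python
def read_bit(voltage):
    """Decode a TTL-style input level: at or below 0.8V reads as 0,
    at or above 2.0V reads as 1; anything in between is undefined."""
    if voltage <= 0.8:
        return 0
    if voltage >= 2.0:
        return 1
    return None  # no-fly zone: unreliable, handling is implementation dependent

print(read_bit(0.3))  # noisy "0V": still reads 0
print(read_bit(4.5))  # sagging "5V": still reads 1
print(read_bit(1.4))  # forbidden zone: None
```

With only two valid levels, the signal can wobble by nearly a volt in either direction and still decode correctly, which is exactly the noise margin the decimal scheme lacked.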