Background: Not a CS graduate.
Scenario: Reading this book, found a code snippet that looks like this:
# A power function in Python using the & and >> bitwise operators
def numpower(x, n):
    res = 1
    while n > 0:
        if n & 1:
            res = res * x
        x = x * x
        n = n >> 1
    return res
Question: If someone had asked me to implement a power function, I'd have used recursion with a dirty even/odd check, something like this.
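(For illustration, here is a sketch of the kind of recursive, even/odd-checking implementation I mean; it is my own reconstruction, not exact code, and `recpower` is a made-up name. It assumes a non-negative integer exponent.)

```python
# A recursive power function with an even/odd check.
# Illustrative sketch only; assumes n is a non-negative integer.
def recpower(x, n):
    if n == 0:
        return 1
    half = recpower(x, n // 2)   # compute x^(n//2) once and reuse it
    if n % 2 == 0:               # even exponent: x^n = (x^(n/2))^2
        return half * half
    else:                        # odd exponent: one extra factor of x
        return half * half * x
```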
Even though I traced the above Python example with even and odd powers (n) of various numbers (x), I still couldn't understand how one would actually come up with this logic for this particular problem in the first place.
What I Tried: By tracing the even and odd cases, I could see that the bitwise & operator is doing a gateway check of sorts, making each pass of the while loop behave differently depending on whether the power n is even or odd, and that the >> shift operator halves n (rounding down), but I couldn't put it all together.
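To make those two operators concrete, here is a small demonstration (my own, not from the book) of what n & 1 and n >> 1 do to an exponent:

```python
# n & 1 reads the lowest bit (1 for odd, 0 for even);
# n >> 1 shifts all bits one place right, i.e. floor-divides by 2.
n = 26
print(bin(n))     # 0b11010
print(n & 1)      # 0  -> 26 is even
print(n >> 1)     # 13 -> binary 1101
print(13 & 1)     # 1  -> 13 is odd
print(13 >> 1)    # 6  -> binary 110
```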
I would feel: "the programmer who writes this script is a pro" :D
Playing with bitwise operators is one of my weaknesses.
It looks like an implementation of the "Square And Multiply" algorithm, which is based on the bitwise representation of the exponent. That algorithm is a very fast way to compute powers and you would not simply come up with that solution by thinking; you would probably already know that algorithm from a book when you use it.
I did not learn about it in CS, but in my current master's studies (Applied IT Security). It is detailed on Wikipedia; however, here is the short and easy explanation from my book for the vanilla algorithm:
In your case, the algorithm walks from right to left and does a "Multiply and Square" operation. That means that if, for example, you want to compute x^26 (with 0d26 = 0b11010), all you have to do is check the rightmost exponent bit: if the bit is 0 (even number), you only square the base; if the bit is 1 (odd number; in code: if n & 1:), you additionally multiply the result by the base before the squaring. In order to walk left, the algorithm shifts all bits to the right (n = n >> 1). Then it starts anew, until there are no more bits left to shift, in which case n contains the value 0 and the loop ends. Using x^26, that would result in:

    n (binary) | lowest bit | res after step | x after step
    11010      | 0          | 1              | x^2
    1101       | 1          | x^2            | x^4
    110        | 0          | x^2            | x^8
    11         | 1          | x^10           | x^16
    1          | 1          | x^26           | x^32
    0          | (loop ends with res = x^26)

The vanilla algorithm, just as a comparison, reads the exponent bits from left to right instead: for every bit you square the result, and whenever the bit is 1 you additionally multiply the result by the base. For x^26 it would work like that:

    step | exponent bit | operation        | result | exponent so far (binary)
    1    | 1            | square, multiply | x^1    | 1
    2    | 1            | square, multiply | x^3    | 11
    3    | 0            | square           | x^6    | 110
    4    | 1            | square, multiply | x^13   | 1101
    5    | 0            | square           | x^26   | 11010
If you compare the exponent bits of the current power you calculate (right-most column) with the current exponent bit (second column from the left), you can see that you slowly build the correct exponent bit by bit, just by using the multiplication operation of your CPU (squaring is also just a multiplication of a number with itself). So calculating a very large power of a number takes only a few multiplications, which a CPU can do very quickly.
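For comparison, the vanilla left-to-right variant can be sketched in Python like this (my own illustration, not from the book; `vanilla_power` is a made-up name):

```python
# Left-to-right "Square and Multiply": walk the exponent's bits from the
# most significant to the least significant one.
def vanilla_power(x, n):
    res = 1
    for bit in bin(n)[2:]:   # '0b11010' -> '11010'
        res = res * res      # square for every bit
        if bit == '1':       # multiply only for 1-bits
            res = res * x
    return res
```

Both variants do the same number of multiplications; the right-to-left version in the book just avoids having to extract the bits as a string.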
I think the vanilla algorithm is easier to understand, and I wonder why they did not introduce and use it in that book. It's something I would have expected from such a book.
"Square and Multiply" is used in current crypto-systems, for example for RSA encryption, which can be used for HTTPS.