Zach, you're also just plain wrong. 128-bit integers only get you up to around f(186) (that's where an unsigned 128-bit value runs out, since f(n) is roughly 0.69*n bits long). Fibonacci numbers are *big*.
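For reference, here's a quick way to check that bound (a throwaway Python sketch using its built-in bignums; the helper name is mine, not from any library):

    # Find the largest n such that f(n) still fits in an unsigned integer
    # of the given width. Python ints are arbitrary precision, so no overflow.
    def last_fib_fitting(bits=128):
        limit = 1 << bits            # 2**bits; unsigned max is limit - 1
        a, b, n = 0, 1, 0            # invariant: a == f(n), b == f(n+1)
        while b < limit:
            a, b, n = b, a + b, n + 1
        return n                     # largest n with f(n) < 2**bits

    print(last_fib_fitting(128))     # 186 -- f(187) already needs 129 bits
    print(last_fib_fitting(64))      # 93  -- same story for 64-bit ints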
Anyway, any decent iterative implementation will have a short linear part at the beginning, while the numbers still fit in a machine word, and then an n^2 part after that, because each bignum addition takes time proportional to the number of digits (see the timing sketch below).
Unless, of course, you can play tricks to get the per-addition time back down to constant. No existing processor can do this, but a processor with a configurable word size could stay linear for as long as it had computation units to throw at the additions. Again, the difference between theory and practice.
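To put rough numbers on that linear-then-n^2 shape, here's another small Python sketch of my own (exact timings will obviously vary by machine):

    import time

    def fib(n):
        # Plain iterative Fibonacci on Python's arbitrary-precision ints.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Each of the n additions touches numbers up to ~0.69*n bits wide, so once
    # the values outgrow a machine word the total work grows roughly like n^2:
    # doubling n should roughly quadruple the time in that regime.
    for n in (10_000, 20_000, 40_000, 80_000):
        start = time.perf_counter()
        bits = fib(n).bit_length()
        elapsed = time.perf_counter() - start
        print(f"fib({n}) has {bits} bits, computed in {elapsed:.4f} s")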