The most exciting thing about this world is its ever-changing quality.

Sunday, April 09, 2006

Unsolved problems in Computer Science

At the level of subatomic particles, the universe is stochastic, not deterministic.

The biggest unsolved problem in computer science is how to apply massively parallel computers to solve fundamentally serial problems. There are four physical limits which, between them, impose a ceiling: the speed of light, the size of an atom, the time it takes for an electron to change state, and Planck's constant. The speed of light limits the rate at which signals can propagate. (In half a nanosecond, light travels fifteen centimeters.) The size of an atom imposes a minimum size on a gate. The electron state-change time sets a floor on how fast a transistor can switch. Planck's constant is also a hard limit, but a more esoteric one: it governs quantum "tunneling", and it is what forces you up against the other three walls.
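To make the speed-of-light figure concrete, here is a minimal Go sketch (my own illustration, not something from the post) that works out how far a signal can travel, at best, during one clock cycle at a few sample clock rates:

```go
package main

import "fmt"

func main() {
	const c = 299_792_458.0 // speed of light in metres per second

	// Best-case distance a signal could cover in one clock cycle.
	// At 2 GHz the period is half a nanosecond, which gives the
	// fifteen centimetres mentioned above.
	for _, ghz := range []float64{1, 2, 3, 5} {
		period := 1.0 / (ghz * 1e9) // seconds per cycle
		fmt.Printf("%.0f GHz: %4.1f cm per cycle\n", ghz, c*period*100)
	}
}
```

The numbers shrink fast: by 5 GHz a signal can cross only about six centimeters per cycle, which is one reason pushing clock speed alone stops paying off.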

(Bill Gates's college tour visited Columbia yesterday, and the computer science faculty there presented Bill with a list of the top five remaining computer science problems.) See Kevin Schofield's blog: http://spaces.msn.com/kevinonthejob/Blog/cns!1ptWhvkELF9sf9D96FilMvRg!127.entry

1. How do we prove that certain problems are hard to solve? This actually relates to security and cryptography, which rely on algorithms that are provably expensive to compute.

2. How can we make truly reliable software? Is there something analogous to Shannon's work on error correction and von Neumann's work on reliability through redundancy?

3. How should we architect systems so that they can be more easily maintained and evolved?

4. How can we make computers understandable and usable for people of all backgrounds, ages, and abilities, in the many different situations we encounter in life?

5. How can we program computers to have human qualities such as consciousness, intelligence and emotion?

6. Concurrency. Since CPU clock speeds seem likely to max out at around 5 GHz, Intel and AMD are moving towards multi-core and other multi-processor architectures. But we won't be able to fully utilize that extra power until we really solve the problems of how to write solid, reliable concurrent programs. Computer scientists have dabbled in this for more than twenty years, but it's never been a priority. That is quickly changing. (A small sketch of the basic hazard follows below.)
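As promised above, here is a minimal Go sketch (my own toy example, not anything from the post or from Intel/AMD) of the classic shared-counter hazard and the lock that tames it:

```go
package main

import (
	"fmt"
	"sync"
)

// Many goroutines update one shared counter. Without the mutex, the
// counter++ operations race and some updates are silently lost.
func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++ // read-modify-write made safe by the lock
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println(counter) // reliably 1000 only because of the mutex
}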
