Friday, 14 March 2008

Microsoft's top visionary sees a parallel world

At last someone has found the courage to spell out their vision in public, and it is none other than Microsoft's top visionary, chief research and strategy officer Craig Mundie.


Craig Mundie says:
“Parallel computing has been hyped for years as the next big thing in technology, allowing computers to run faster by dividing up tasks over multiple microprocessors instead of using a single processor to perform one task at a time.

The technology's full potential is almost unfathomable today, but it could lead to major advances in robotics or software applications that can translate documents in real time in multiple languages.

The computer industry has taken its first steps toward parallel computing in recent years by using "multi-core" chips, but Mundie said this is the "tip of the iceberg."

To maximize computing horsepower, software makers will need to change how software programmers work. Only a handful of programmers in the world know how to write software code to divide computing tasks into chunks that can be processed at the same time instead of a traditional, linear, one-job-at-a-time approach.

A new programming language would be required, and could affect how almost every piece of software is written.

"This problem will be hard," admitted Mundie, who worked on parallel computing as the head of supercomputer company Alliant Computer Systems before joining Microsoft. "This challenge looms large over the next 5 to 10 years."

The shift to parallel computing was born out of necessity after processor speeds ran into heat and power limitations, forcing the semiconductor industry to assemble multiple cores, or electronic brains, on a single chip.” (link)


At this point I could not resist showing off to the public my priceless work, done 15 years ago, in 1993.


For my 1993 MSc computer science project, titled “Distributed Parallel Computing”, I designed a scheme that lets ordinary C programmers run any number of functions in parallel on Sun workstations. The end product was PPG (Parallel Program Generator). It used non-blocking I/O, TCP/IP and RPC (remote procedure calls), along with LWP (lightweight processes), now called threads.
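To give a flavour of the idea, here is a minimal sketch of running two ordinary C functions in parallel using POSIX threads, the modern descendant of the LWP library I used then. It is only an illustration under that assumption: it is not PPG's actual interface, and the task functions are hypothetical stand-ins for the programmer's own code.

/*
 * Minimal sketch: run two ordinary C functions at the same time using
 * POSIX threads.  PPG targeted SunOS LWP and also distributed work across
 * workstations over TCP/IP and RPC; none of that is reproduced here.
 * task_a and task_b are hypothetical placeholder functions.
 */
#include <pthread.h>
#include <stdio.h>

static void *task_a(void *arg) {        /* an ordinary C function ...        */
    printf("task_a running on its own thread\n");
    return NULL;
}

static void *task_b(void *arg) {        /* ... and another, independent one  */
    printf("task_b running on its own thread\n");
    return NULL;
}

int main(void) {
    pthread_t ta, tb;

    /* Launch both functions in parallel instead of calling them in turn. */
    pthread_create(&ta, NULL, task_a, NULL);
    pthread_create(&tb, NULL, task_b, NULL);

    /* Wait for both to finish before the program exits. */
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return 0;
}

Compiled with -lpthread, the two functions run concurrently; the same pattern extends to any number of independent functions.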


Working on this project also resulted in the invention of a new method for solving systems of linear equations using distributed parallel computing. The Jacobi and Gauss-Seidel methods are two other well-known linear stationary methods for solving systems of linear equations.
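For readers who have not met these methods, the attraction for parallel computing is easy to see: a Jacobi sweep updates every unknown from the previous iterate only, so the row updates are independent of one another and can be farmed out to separate processors or workstations. A minimal sketch of one sweep of the textbook Jacobi method (not the new method from my project) looks roughly like this:

/*
 * One sweep of the classical Jacobi iteration for A x = b.  Each row update
 * reads only the previous iterate x_old, so the n updates are independent
 * and could be divided among separate processors or workstations.
 */
void jacobi_sweep(int n, const double a[n][n], const double b[n],
                  const double x_old[n], double x_new[n])
{
    for (int i = 0; i < n; i++) {           /* each i is an independent task */
        double sum = b[i];
        for (int j = 0; j < n; j++)
            if (j != i)
                sum -= a[i][j] * x_old[j];  /* subtract off-diagonal terms   */
        x_new[i] = sum / a[i][i];           /* requires a non-zero diagonal  */
    }
}

Gauss-Seidel, by contrast, uses each newly computed value as soon as it is available, which often converges faster but makes the updates depend on one another and therefore harder to run in parallel.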


The purpose of writing all this is to let people know that writing parallel programs is not as difficult as Craig Mundie describes. What was possible on one particular hardware and software platform in 1993 is possible on almost every computing platform now.


I will describe the model that I used in my MSc project, “Distributed Parallel Computing”, in the next post.


Perhaps if the operating system itself is transformed from single-core to multi-core operation, there may be no need for programmers to write parallel applications in order to utilize multi-core processors.