Python is a powerful, flexible and elegant dynamic programming language, used pervasively for everything from system administration to web applications. However, its modest performance, the price we pay for that versatility, means Python is seldom seen in high-performance computing.
Things have started to change recently. GPGPUs, Cell and ClearSpeed have emerged on the horizon as new candidates thanks to their outstanding performance/power ratio. They all work in the accelerator-board manner: the host processor prepares the data, pushes it to the co-processor, does some lightweight work of its own, and waits until the accelerator returns results. Python and other dynamic languages can glue the different pieces together. Here are some approaches under way:
StarP targets scientists who demand high performance but are reluctant to parallelize their code. The framework works its magic by hiding the whole parallel programming model behind a *p suffix attached to variables. On the client, the job is decomposed into basic linear-algebra operations, which are distributed to the server; results are fetched back only when necessary. Here is an exploration of the StarP magic in Python.
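As a rough illustration of the idea (this is a toy sketch, not the real StarP client; the class and method names are hypothetical), a Python proxy object can hide distribution behind ordinary operators, much as StarP's *p suffix does:

```python
# Toy sketch of StarP-style transparency: the user writes plain arithmetic,
# and the proxy decides where the work actually runs. Here the "server"
# side is simulated locally; in StarP it would be a remote parallel backend.

class DistArray:
    """Hypothetical proxy for an array whose data lives on a server."""

    def __init__(self, data):
        self.data = list(data)  # stands in for data held on the server

    def __add__(self, other):
        # In a real client this would be shipped to the server as a
        # linear-algebra operation; here we evaluate locally to show
        # that the user-facing syntax stays ordinary Python.
        return DistArray(a + b for a, b in zip(self.data, other.data))

    def fetch(self):
        # Explicitly pull the result back to the client.
        return self.data

a = DistArray([1, 2, 3])
b = DistArray([10, 20, 30])
c = a + b            # looks like ordinary Python addition...
print(c.fetch())     # ...but could have run on a parallel server
```

The point of the sketch is only that operator overloading lets a framework intercept the computation without any new language constructs, which is exactly what makes the StarP approach attractive for Python.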
The essential problems in parallel computing are how to decompose the job across different working sets and how to minimize the communication between the workers. StarP provides some built-in distributions and gives users options, but there is no silver bullet in this field.
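The simplest built-in distribution is a contiguous block split. A minimal sketch (my own helper, not a StarP API) of how n elements might be divided as evenly as possible among p workers:

```python
# Block distribution: split an index range of length n into p contiguous
# blocks whose sizes differ by at most one element. This is the kind of
# default decomposition a framework must choose on the user's behalf.

def block_ranges(n, p):
    """Return (start, stop) pairs giving each of p workers a near-equal block."""
    base, extra = divmod(n, p)
    ranges, start = [], 0
    for rank in range(p):
        size = base + (1 if rank < extra else 0)  # first `extra` workers get one more
        ranges.append((start, start + size))
        start += size
    return ranges

print(block_ranges(10, 3))   # → [(0, 4), (4, 7), (7, 10)]
```

A block split keeps each worker's data contiguous, which minimizes communication for elementwise operations; other access patterns (say, matrix transposes) may favor cyclic or block-cyclic distributions instead, which is why frameworks expose the choice.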
Global Arrays is another approach. It sits on top of MPI but provides a PGAS programming model through a set of API calls. Although the API is very MPI/Fortran-like, it may still arouse interest in the Python community as a way to implement a PGAS programming model without introducing new language constructs.
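The core of the PGAS model is a global index space partitioned over nodes, with one-sided put/get access to any element regardless of ownership. A toy single-process sketch (the class below is hypothetical and purely illustrative; it is not the Global Arrays API, whose real calls are along the lines of create/put/get):

```python
# Toy PGAS model: a global array whose storage is partitioned into
# per-"node" blocks (plain lists here). Any caller may put or get any
# element: the library, not the programmer, resolves which node owns it.

class ToyGlobalArray:
    """Hypothetical sketch of a partitioned global address space."""

    def __init__(self, n, nodes):
        self.block = -(-n // nodes)  # ceiling division: elements per node
        self.parts = [[0.0] * self.block for _ in range(nodes)]

    def _locate(self, i):
        # Map a global index to (owning node, local offset).
        return divmod(i, self.block)

    def put(self, i, value):
        node, off = self._locate(i)
        self.parts[node][off] = value   # one-sided write, no matching receive

    def get(self, i):
        node, off = self._locate(i)
        return self.parts[node][off]    # one-sided read, no matching send

g = ToyGlobalArray(8, nodes=2)
g.put(5, 3.14)          # global index 5 lives on node 1, offset 1
print(g.get(5))         # → 3.14
```

The appeal for Python is visible even in this sketch: the global/local translation is hidden inside an ordinary class, so no new syntax is needed, only an API, which is exactly the trade-off Global Arrays makes on top of MPI.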
The current PGAS implementations (UPC, CAF and Titanium) share the same communication backbone, GASNet. Would it be possible to build a Python binding for GASNet and take the same approach as Global Arrays to construct a Python PGAS library?