I've been following a series of posts that started with this one, moved on to this one and, so far, has reached this one. To save you the bother of reading them all: the chain starts with a post on how Ruby 1.9 appears to provide an enormous performance improvement over its predecessor, and this is followed by two posts on Haskell that show its power when it comes to parallelism on multicore systems.
I have to say that I'm greatly impressed by what is achieved in the Haskell postings. There's only one problem - I have a strong aversion to functional programming languages. I took a short course on Miranda in college and I detested it. This might seem unreasonable, but that one small foray into functional programming has left me with a great distaste for the whole field. While I'm not ruling out the possibility that I'll pick up such a language in the future, they are not at the top of my list of new languages to learn. I know they are currently enjoying a degree of vogue, but the field still recalls bad memories for me.
Anyway, that being said, the point about the convenience of exploiting the CPUs in a multicore system is not lost on me. The example is a simple one, but the point is a strong one. The currently favoured technique for exploiting multicore systems in other languages is threading, but creating viable and bug-free multithreaded systems is widely regarded as difficult (some would even say impossible). Having acknowledged this point, I have, for some time now, been trawling around to find out whether alternatives exist.
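To make that difficulty concrete, here's a minimal Python sketch (my own illustration, not from the posts above) of the classic shared-state problem: an unsynchronised counter can silently lose updates when several threads race on it, while the locked version is reliably correct.

```python
import threading

def unsafe_increment(counter, n):
    # read-modify-write with no synchronisation: two threads can both read
    # the same value, both add one, and one of the updates is lost
    for _ in range(n):
        counter["value"] += 1

def safe_increment(counter, lock, n):
    # the lock makes each read-modify-write atomic with respect to the others
    for _ in range(n):
        with lock:
            counter["value"] += 1

def run_threads(worker, args, threads=4):
    ts = [threading.Thread(target=worker, args=args) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()

counter = {"value": 0}
lock = threading.Lock()
run_threads(safe_increment, (counter, lock, 10000))
# with the lock, 4 threads x 10000 increments always totals 40000;
# swap in unsafe_increment and the total can come up short, but only sometimes
```

The nastiness is that the unsafe version usually works in testing, which is exactly why these bugs are so hard to hunt down in large systems.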
To date I haven't had much success. I'm not even sure whether alternatives are under consideration and, if they are, they have so far kept a very low profile. Obviously, the examples detailed in the Haskell postings outlined above show that it's possible to have simpler alternatives, but I'm sure these examples are too small to be representative of real application development.
All of the big chip manufacturers have reached a consensus that multiple cores are the way of the future, so ways to exploit this are unquestionably needed. Current threading approaches are undoubtedly painful to implement when applied to large and complex real-world systems. To a certain degree there are things that application developers can do to aid in the adoption of multicore systems. For example, we could split an application into isolated 'streams' that would be suitable for execution on individual cores. In many ways this is reflective of a multiprocessing (as opposed to multithreaded) approach, and you're back to needing something similar to interprocess communication technologies.
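A rough Python sketch of that coarse-grained approach (my own; the function names are invented for illustration): each 'stream' runs as a separate process, shares nothing, and hands its result back over a queue, with the queue playing the role of the interprocess communication layer.

```python
from multiprocessing import Process, Queue

def stream_worker(name, data, results):
    # each isolated "stream" runs in its own process, free to occupy its
    # own core; it reports back only via the queue (message-passing IPC)
    results.put((name, sum(x * x for x in data)))

def run_streams(chunks):
    results = Queue()
    procs = [Process(target=stream_worker, args=(i, chunk, results))
             for i, chunk in enumerate(chunks)]
    for p in procs:
        p.start()
    # collect one result per stream, then wait for the processes to finish
    totals = dict(results.get() for _ in procs)
    for p in procs:
        p.join()
    return sum(totals.values())

# two coarse-grained streams covering disjoint halves of the work
total = run_streams([range(0, 500), range(500, 1000)])
```

Because the streams share no memory, whole classes of threading bugs simply can't occur - but the split has to be coarse and fairly obvious, which is the limitation discussed below.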
This kind of coarse-grained split really suits today's multicore systems, where 2, 4, 8 and 16 cores are standard. It won't, however, make optimal use of systems as the number of cores rises into the hundreds. To exploit such systems a more fine-grained split in functionality will be needed. This can, in part, be handled by source-level annotations like those seen in the Haskell examples, but I don't think this is where the biggest gains are to be made. When it comes to performance issues, developers are notoriously bad at guessing where application bottlenecks occur. That job is best done with a profiler, and I believe something similar will be needed to best exploit large-scale multicore machines. Some form of automated assessment of code that allows it to be broken into independent execution streams probably offers the best option along these lines. I'd say there's good money potential in such a tool - if only I had the brains to write it!
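On the profiler point, a quick Python sketch (the function names are made up for illustration) of how the standard cProfile module surfaces the real hotspot rather than leaving it to guesswork - the same kind of measurement a hypothetical core-splitting tool would need to perform automatically.

```python
import cProfile
import io
import pstats

def hot_loop(n):
    # the genuine bottleneck: almost all the runtime is spent here
    return sum(i * i for i in range(n))

def innocent_setup():
    # cheap code that a developer might wrongly suspect
    return [0] * 10

def app():
    innocent_setup()
    for _ in range(50):
        hot_loop(20000)

profiler = cProfile.Profile()
profiler.enable()
app()
profiler.disable()

# render the stats to a string, heaviest cumulative time first
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
# hot_loop dominates the listing; innocent_setup barely registers
```

The profiler removes the guessing; an analogous tool for multicore would have to go a step further and decide which of those measured hotspots can safely run as independent streams.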