As someone who spends quite a lot of time explaining and debugging multithreaded applications, it's nice to see someone else have a go at it for a change.
Future Chips (Mmm, chips) has an article on what makes parallel programming hard. It goes over why not all programs get a speedup with more cores and why it's hard to write and debug multithreaded (MT) programs.
They have also posted a follow-up article on writing and optimizing parallel programs: a complete example. Which is nice...
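A big part of why more cores doesn't automatically mean more speed is the serial fraction, better known as Amdahl's law. A quick back-of-the-envelope illustration (mine, not necessarily how the article frames it): if a fraction P of the work can run in parallel, the best speedup on N cores is 1 / ((1 - P) + P/N). With P = 0.9 and 8 cores that's 1 / (0.1 + 0.9/8), which is about 4.7x, nowhere near 8x, and even with infinite cores you top out at 10x.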
2 comments:
Hi Andrew,
Nice blog! I especially like the post on user interface books. I need to learn UIs better. My only experience has been web UIs and the in-house tools I write to help myself (which are anything but user-friendly or pretty).
Thank you for reading my article at FutureChips.org. I am glad you liked it. You mention that you have experience explaining this stuff. I need an opinion. I wrote a post about work-queues last night (http://bit.ly/jNKTju) and I can't seem to decide if the next post should be on branch-and-bound algorithms or pipeline parallelism. Any suggestions?
Well, if you want to learn more about UIs, the books I mentioned in that article are definitely a good place to start.
If I have to choose between branch-and-bound algorithms and pipeline parallelism, I would definitely go with pipeline parallelism. I think pipelining in general is a fascinating concept and has all sorts of applications, both in code and in life. I've always thought of my washing machine and dryer as a two-stage pipeline. The way the CPU and GPU interact to build a scene also works as a two-stage pipeline. Once you understand pipelines, you start to see them all over the place: at the car wash, at the checkout counter, going through security at an airport.
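Just to make the washer/dryer analogy concrete, here's a tiny C++ sketch of a two-stage pipeline (my own toy example, nothing to do with the FutureChips code): one thread plays the washer, another plays the dryer, and the little queue between them is what lets the two stages overlap.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

int main() {
    std::queue<std::string> washed;   // hand-off buffer between the stages
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Stage 1: "wash" each load, then hand it off to the dryer thread.
    std::thread washer([&] {
        for (int i = 1; i <= 4; ++i) {
            std::string load = "load " + std::to_string(i);
            {
                std::lock_guard<std::mutex> lock(m);
                washed.push(load);
            }
            cv.notify_one();
        }
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_one();
    });

    // Stage 2: "dry" loads as they arrive; this overlaps with stage 1.
    std::thread dryer([&] {
        while (true) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !washed.empty() || done; });
            if (washed.empty() && done)
                break;
            std::string load = washed.front();
            washed.pop();
            lock.unlock();
            std::cout << "dried " << load << "\n";
        }
    });

    washer.join();
    dryer.join();
    return 0;
}

Add more stages (and a bounded queue so a slow stage pushes back on a fast one) and the same shape covers the car wash and airport security examples above.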
Ars Technica did a wonderful article about pipelining as it relates to processors.
http://arstechnica.com/old/content/2004/09/pipelining-1.ars
I liked your article too:
http://www.futurechips.org/tips-for-power-coders/one-lane-highways-programming-new-in-order-cores.html
I suspect desktop processors won't switch to an in-order setup for a while, at least not unless they hit a thermal limit of some sort. History has shown out-of-order designs beating in-order ones, even in-order designs running at higher clock frequencies. The POWER6 architecture is a weird aberration in that sense. It should be noted that IBM decided to go back to an out-of-order core for POWER7.
http://en.wikipedia.org/wiki/POWER7
For low-power processors, ditching OOO and the instruction window is a massive win, so it makes a great deal of sense for them to be in-order. Apparently those structures are big power hogs. Ars Technica talked about this when Bobcat was introduced.
http://arstechnica.com/business/news/2010/08/amds-bobcat-plays-it-straight.ars
Love the articles. I've been reading them all.