Mar-29: A couple of things...
There is a new repository among our COSC-365 collection on gitlab, named thread-test. Go check it out. It contains the example code for creating (and then resolving) contention on a simple, multithreaded toy program.
I have posted instructions for writing the draft of your paper on sorttest. Hop to it.
Feb-28: Even though we are not having class today, many of you seem quite active, which is excellent. A few quick notes about what I'm seeing...
Adding things to the repo as a branch is just fine. I'm going to try to merge those into the master branch as soon as I see them (and verify that the files look normal and reasonable). Be sure to git pull with some frequency.
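If you haven't used the branch flow before, it looks roughly like this (the branch and file names below are just examples, not anything actually in our repo):

```
# start from an up-to-date master
git checkout master
git pull

# do your work on your own branch
git checkout -b my-results            # example branch name
git add results.csv                   # example file
git commit -m "Add sorttest results"
git push -u origin my-results         # I'll merge it into master after a look

# later, pick up everyone's merged work
git checkout master
git pull
```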
Keep the spreadsheet going. I learned, by mistake, that perhaps we should not click on the Resolve button if we want to keep the thread of comments available to read. Use your discretion.
What seems to be emerging from our sorttest experiments are two observations: (1) the initial run of sorttest is indeed slower than subsequent runs, although the difference becomes less pronounced as the array size grows; and (2) algorithms that create additional arrays (counting sort and merge sort) exhibit more consistently slow first runs compared to subsequent runs. My hypothesis about (1) is that instruction cache misses slow the first execution of the sorting algorithm code itself; for (2), the creation of additional arrays by the sorting algorithm adds data cache misses on the first iteration.
PAPI, and thus the hardware performance counters, would allow us to test these hypotheses. I am trying to install and prepare the PAPI tools so that we can repeat these runs and gather cache-miss counts, letting us determine whether the differences we are observing in running times correlate with the expected cache misses.
Feb-27: Between my own explorations and the emerging bits of results from the work that y'all are doing on our experiments, I have decided not to have a single, focused thing for you to prepare for tomorrow. That said, there are a number of things for us to explore; in preparation for those things, here's a list of smaller but important things to do (and be ready to do more of):
We will see where this brings us, check out perf (and whatever else we can find about using the performance counters), and try to step towards a large run of SPEC benchmarks on the cluster.
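Until PAPI is ready, perf can read the same counters from the command line. For example (event names vary by machine, and the sorttest invocation here is just a placeholder):

```
# count instructions and cache misses over a whole run
perf stat -e instructions,cache-references,cache-misses ./sorttest

# repeat 5 times and report mean and standard deviation
perf stat -r 5 -e L1-dcache-load-misses,LLC-load-misses ./sorttest
```

Run `perf list` on the machine you're using to see which events its counters actually support.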
Feb-18: Things work again! And so we will try to resume getting things done. Tomorrow, bring your laptops, and be ready to dig into some code. Shukry has made progress on the sort-test project (be sure to git pull from the repository to see the new work), and I want to bring that onto the Condor cluster (in order to see how that works), and we can look at and consider the results and how we view/analyze them.
Feb-05: Check the Assignments page for two new things, coming soon.
Jan-13: Welcome to Performance Evaluation and Optimization! A couple of things to know before classes begin:
Our first class meeting will be on Tuesday, Jan-29, at 11:00 am, in SCCE A131. We're going to dive right into the good stuff, so be prompt!
Before our first class meeting, read the Course Information.