The project, timetable, notes, and such
  It's going... OK. Most people/teams have a chance of getting it to work.

Complexity of solutions
  Complex solutions are often better.
  We'll try to avoid stereotypes by using concrete examples: suspension, tanks, and airplanes.
  Sometimes a simple algorithm is best.
  Why is a complex algorithm not just as reliable?
    Each piece has a probability of containing a bug (debug it? An OS works...)
    Bugs are not all confined to a specific piece of code. (I played a duet with my daughter yesterday.)
    Interactions between n components: there are n(n-1)/2 possible pairwise interactions, so the bug surface grows quadratically with the number of pieces.
  Sometimes a simple solution will NOT work! (History of operating systems... there's a reason for AI!)
  Don't overestimate problems.

Map/Reduce paradigm
  Google used to use it, and had a proprietary system with that name.
  It's a programming paradigm, not really a single technology.

Large shared memory (Cray XMT and such):
  Convenient! The traditional "supercomputer" setup.
  Tends to have limited size.
  EXPENSIVE (in all caps).

Clusters:
  Not expensive in all caps.
  Can be very large.

Cluster with a special network:
  Better than a regular cluster.
  Expensive (capital E), depending on how "special."

Message Passing (MPI):
  Latency and network size vs. bandwidth. (See the message-passing sketch at the end of these notes.)

MapReduce and Hadoop:
  map operation (available in Python!)
  reduce operation (I won't say anything isn't available in Python)
  data locality
  Wikipedia has some good "standard" examples, e.g. word count. (See the word-count sketch at the end of these notes.)

Data structure in the center:
  Hash table? Tree? List?
  Often some sorting is required.
  It sort of depends what you're using it for.

Checkpointing with Map/Reduce:
  Straightforward: map tasks are independent, so a failed task can simply be re-run on its input split.

What model does a GPU fit into?
  They have shared memory, but many independent units that cannot easily confer with one another.

Applying this to GPU processing of images:
  Can we figure out the average contrast variance of an image? Of course... (See the variance sketch at the end of these notes.)
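Word-count sketch. A minimal illustration of the map and reduce operations using Python's built-in map and functools.reduce on the standard word-count example; the document list and the helper names (count_words, merge_counts) are just for illustration, not anyone's real API.

    # Word count in the Map/Reduce style. A real Hadoop job would shard the
    # input across machines; here each string stands in for one input split.
    from collections import Counter
    from functools import reduce

    documents = [
        "the quick brown fox",
        "the lazy dog",
        "the quick dog",
    ]

    def count_words(doc):
        # Map phase: each document is processed independently.
        return Counter(doc.split())

    def merge_counts(a, b):
        # Reduce phase: combine partial results; Counter addition sums counts.
        return a + b

    word_counts = reduce(merge_counts, map(count_words, documents), Counter())
    print(word_counts)  # Counter({'the': 3, 'quick': 2, 'dog': 2, ...})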
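Message-passing sketch. A minimal point-to-point example, assuming the mpi4py bindings (one common Python interface to MPI; the class may use something else). Sending many small messages pays the per-message latency cost over and over, which is the latency-vs-bandwidth trade-off noted above.

    # Run with something like: mpirun -n 2 python ping.py
    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    n_messages = 1000
    start = time.perf_counter()
    for i in range(n_messages):
        if rank == 0:
            comm.send(i, dest=1, tag=0)   # tiny payload: latency-dominated
        elif rank == 1:
            comm.recv(source=0, tag=0)
    elapsed = time.perf_counter() - start

    if rank == 1:
        print(f"~{elapsed / n_messages * 1e6:.1f} microseconds per message")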
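Variance sketch. One way the image-variance question fits the model: each independent unit computes (count, sum, sum of squares) over its own tile without conferring with the others, and a reduce step combines the partials. NumPy stands in for the GPU here; the image and tile size are made up for the example.

    from functools import reduce
    import numpy as np

    image = np.random.rand(512, 512)  # stand-in grayscale image

    def tile_stats(tile):
        # "Map": per-tile sufficient statistics, computed independently.
        return (tile.size, tile.sum(), np.square(tile).sum())

    def combine(a, b):
        # "Reduce": sufficient statistics combine associatively.
        return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

    tiles = [image[r:r + 64, c:c + 64]
             for r in range(0, 512, 64) for c in range(0, 512, 64)]

    n, s, ss = reduce(combine, map(tile_stats, tiles))
    variance = ss / n - (s / n) ** 2   # E[x^2] - E[x]^2
    print(variance, image.var())       # the two should agree

The design point is that variance looks like it needs the global mean first, but rewriting it as E[x^2] - E[x]^2 turns it into two independent sums, which is exactly the shape the many-independent-units GPU model wants.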