
Teaching Introductory Computer Architecture and Programming: What, When, How? (Part 3)

01/24/2010 15:40
01/24/2010 16:20
Europe/Paris
Location: 
Palazzo dei Congressi "LUNGARNO BUOZZI - Facoltà di economia e commercio", Pisa, Italy
Speaker(s): 
Sean Halle


The top-down approach to teaching introductory computer architecture and programming

Abstract

The assumptions upon which introductory architecture and programming courses have been based in the past are no longer valid. Most importantly, it is a fallacy to teach programming based upon a model of a physical machine. Out-of-order pipelines, modern compilers, higher-level languages, and now the move to portable high-performance parallelism have divorced code, even C code, from any notion of performance other than the number of memory accesses. Due to compilers and out-of-order pipelines, knowledge of registers is meaningless at the level of C code. Further, PHPP (Portable High-Performance Parallelism) requires code to be divorced from any notion of processor architecture other than that of multiple memory spaces. Therefore, future programmers should be taught only a simple model of the processor, as a memory-to-memory device.

The final PHPP languages have not yet settled out of the dust, but fundamentally, due to the twin constraints of being portable across parallel architectures and being high performance on each, such languages will have no notion of thread, nor of communication primitive. They will, however, have the notion of a scheduler, the notion of a work-unit (also known as a task or a job), and the notion of a dependency.
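
To make these three notions concrete, here is a minimal sketch in C using OpenMP tasks, assuming a compiler with OpenMP support (e.g. gcc -fopenmp). OpenMP is not one of the PHPP languages the abstract describes; it is used only because its task and depend constructs happen to expose a work-unit, a dependency, and a runtime scheduler without any explicit thread or communication primitive appearing in the source.

    /* Illustrative only: OpenMP tasks standing in for the work-unit,
       dependency, and scheduler notions described above. */
    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 0, c = 0;
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task depend(out: a)                   /* work-unit producing a       */
            a = 1;
            #pragma omp task depend(out: b)                   /* independent work-unit       */
            b = 2;
            #pragma omp task depend(in: a, b) depend(out: c)  /* runs only after both finish */
            c = a + b;
        }   /* all tasks complete at the implicit barrier ending the region */
        printf("%d\n", c);  /* prints 3; the runtime scheduler chose where and when each task ran */
        return 0;
    }

Without the OpenMP flag the pragmas are ignored and the program simply runs sequentially; with it, the order and placement of the work-units are entirely the scheduler's business, which is the point of the model.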

Therefore, in this talk, I examine the old assumptions, show how they are no longer valid, propose a simple memory-to-memory processor model that captures the elements that still matter, and propose that an introduction to architecture and programming give only that simple processor model plus a simple machine model indicating the cache levels, the peripheral bus, and the network. I further propose that programming be taught outside-in: start with the simple machine model at one end and the use-case method of specifying applications at the other, then proceed inward from those two endpoints.

Outside-in shows how languages layer on top of the machine model at one end, and at the other how algorithms and abstract data types fit the patterns found in use-cases. It then joins the two ends by implementing the algorithms and abstract data types in terms of a language. The introductory course should make heavy use of libraries that hide most of the machine details, because in the real world that is exactly what most programmers do. Finally, throughout the course, dependencies should be pointed out wherever they arise (a small code sketch illustrating the first two cases follows the list below):

* In the basic machine model a dependency is introduced any time different instructions access the same memory location.
* In the language, a dependency is introduced each time the same variable is used, or each time the same memory location is accessed via a pointer.
* In an algorithm, a dependency is any communication between data structures.
* In an abstract data type, a dependency is introduced any time state created by one access to an instance is modified or read by a different access.
* In the use-cases, a dependency is introduced any time an action in one portion of a use-case has an effect on any other action in any portion of any use-case.
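
As a concrete illustration of the first two bullets, here is a minimal C sketch; the variable names are purely illustrative and not taken from the talk.

    /* Dependencies arise whenever two statements touch the same storage.
       Names are illustrative only. */
    void sketch(void)
    {
        int x = 1, y = 2;
        int *p = &x;     /* p aliases x                                       */

        int a = x + y;   /* reads x: depends on the write x = 1 above         */
        *p = 5;          /* writes x through a pointer: must follow that read */
        int b = x;       /* reads x again: must follow the write through p    */
        (void)a; (void)b;   /* silence unused-variable warnings               */
    }

Statements that touch only y could be reordered or run in parallel with this chain; the statements that touch x cannot, and that distinction is exactly what the course would keep highlighting at every level.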

The essence of parallelism is dependencies. Therefore, teaching dependencies at all the different levels is the best preparation for parallel programming.

Bio: 

Sean Halle is a fearless entrepreneur from Silicon Valley. He founded a fabless chip company around a MIMD-SIMD, low-power, massively parallel processor for graphics in 1997 (ProSide), then switched over and worked his way up the software engineering food chain, ending as Chief Software Architect at Nevik. Along the way, he has taught courses on Object-Oriented Programming, as well as in-house seminars on emerging technologies at DEC, SRI, and Sun's JavaSoft. He is currently finishing his PhD on Portable High-Performance Parallelism, with a research focus on techniques for "write once, run high performance anywhere" parallel code.

Affiliation: 
INRIA
Event Type: 
Seminar
Attachment: HiPEAC-Pisa-2010-halle-top-down.wmv (216.11 MB)