Roughly 40 years ago, the computing industry’s leading professional organizations, ACM (Association for Computing Machinery) and IEEE (Institute of Electrical and Electronics Engineers), began a long-term collaboration to offer periodic guidance to universities offering undergraduate degree programs in Computer Science. Then, as now, the collaboration’s goal was to define a curriculum mixing theory with contemporary practice in appropriate balance – letting graduating CS majors pursue meaningful careers in an increasingly diverse industry or continue confidently on the path to more advanced degrees, both in computing science per se, and in allied fields (e.g. computational biology).
In February 2012, the Joint Task Force on Computing Curricula of the ACM and the IEEE Computer Society released the long-awaited ‘strawman’ draft of their 2013 curriculum model (http://ai.stanford.edu/users/sahami/CS2013/strawman-draft/cs2013-strawman.pdf) – the first complete revision since 2001. Comments on the draft will be accepted until July 15 of this year, after which an ‘ironman’ draft will be produced and opened for comment before the proposed curriculum is released in final form in summer 2013.
The 17-person task force responsible for the draft comprises an ACM delegation chaired by Mehran Sahami, of Stanford, and an IEEE-CS delegation chaired by Steve Roach of the University of Texas at El Paso. Sahami, who maintains the CS2013 project website at http://ai.stanford.edu/users/sahami/CS2013/, is an expert in probabilistic analysis for machine learning. A former doctoral advisee of the formidable Daphne Koller and a former Google scientist, Sahami recently undertook a complete revamp of Stanford’s undergraduate CS curriculum. Roach, too, returned to academia after some years of work in scientific and aerospace computing (emphasizing engineering of high-reliability systems for NASA), and has for more than a decade been involved in CS curriculum development around computer architecture and related fields.
In preparing the draft, Sahami, Roach, and their colleagues surveyed more than 3,000 universities and institutions in the US and abroad, seeking recommendations for new knowledge areas (KAs) needing representation. As a result, several new KAs have been added to those in the 2001 curriculum, including information security and parallel and distributed computing (PD).
As the draft’s authors indicate, the treatment of parallel and distributed computing as its own KA reflects the discipline’s emergent importance to all aspects of computing, and to all potential student career paths in industry and academia. The 15 total course hours recommended in parallel and distributed computing focus on “parallelism as a computing primitive and the complications that arise in parallel and concurrent programming.”
Five introductory course hours cover the fundamentals, including introductions to decomposition, communication/coordination of parallel tasks, and parallel system architecture. These are followed by 10 more advanced core-requirement hours, largely devoted to decomposition, communication, and algorithms. Electives extend coverage of the KA to performance, distributed systems, and semantics.
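The draft doesn’t prescribe a language or library for this material, but the flavor of those introductory hours is easy to suggest in code. Here is a minimal Java sketch (mine, not the task force’s) of the basic pattern: decompose a summation into independent chunks, run the chunks as tasks on a thread pool, and coordinate the partial results through futures.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        final int[] data = new int[1_000_000];
        Arrays.fill(data, 1);

        int nTasks = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(nTasks);
        List<Future<Long>> partials = new ArrayList<Future<Long>>();

        // Decomposition: one task per roughly equal chunk of the array.
        int chunk = (data.length + nTasks - 1) / nTasks;
        for (int t = 0; t < nTasks; t++) {
            final int lo = t * chunk;
            final int hi = Math.min(data.length, lo + chunk);
            partials.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (int i = lo; i < hi; i++) sum += data[i];
                    return sum;
                }
            }));
        }

        // Coordination: get() blocks until each task completes; combine the partial sums.
        long total = 0;
        for (Future<Long> f : partials) total += f.get();
        pool.shutdown();
        System.out.println("total = " + total);   // 1000000
    }
}

Swap real work in for the toy summation and the structure stays the same: decompose, compute in parallel, combine.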
The pure PD coursework thus complements, rather than duplicates, the ‘systems’ view of parallelism offered under the systems fundamentals KA, which “provide(s) a unified view of the system support for simultaneous execution at multiple levels of abstraction…” (in gates, processors, operating systems, servers, etc.).
At lower levels of abstraction, meanwhile, parallelism has been worked into other areas of the proposed core curriculum and electives in an intrinsic way – it’s hard to see a path through the required coursework that doesn’t give graduates both theory and hands-on experience, in high-level parallel software development as well as in its lower-level applications to performance enhancement. The architecture KA offers strong grounding in parallelism as intrinsic to modern chip design at the assembly and instruction-set level (SIMD, MIMD). The processing KA covers task and data parallelism. KAs on programming tools treat vectorization, thread APIs, and the like as supported under IDEs; the software development KA treats strategies for concurrency at a practical level that will be familiar to developers working in the field. And a wide range of proposed elective courses offer impressively deep, specific, and concrete treatments, both of the critical discipline of performance optimization through parallel strategies and of GPUs, compute accelerators, and other special-purpose computing engines.
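None of this is expressed as code in the draft itself, but the practical ‘complications’ the software development material alludes to are easy to illustrate. Here is a small, hypothetical Java example of the classic lost-update race on a shared counter, alongside one common remedy, an atomic variable:

import java.util.concurrent.atomic.AtomicLong;

public class CounterRace {
    static long unsafeCount = 0;                          // shared, unsynchronized
    static final AtomicLong safeCount = new AtomicLong(); // shared, updated atomically

    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100_000; i++) {
                    unsafeCount++;                // read-modify-write race: increments can be lost
                    safeCount.incrementAndGet();  // atomic update: no lost increments
                }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();

        // unsafeCount typically comes up short of 200000; safeCount is always exactly 200000.
        System.out.println("unsafe = " + unsafeCount + ", safe = " + safeCount.get());
    }
}

The unsynchronized counter loses updates whenever two threads interleave their read-modify-write cycles, which is exactly the kind of failure the core hours on communication and coordination are meant to explain.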
The delegations have done, in short, what looks to be a fine job of creating a curriculum that both qualifies graduates for work in the current field and supports more advanced study. It’s instructive, too, to read the strawman draft and grasp the intrinsic difficulty of the task: mapping the whole exploding, diverse field of computing – what the authors call the ‘Big Tent’ model, embracing pure computer science and engineering as well as ‘Computational X’ topics – down into a set of required courses that fits the time constraints of four-year undergraduate degree programs (two years following major declaration), while still providing sensible families of electives that enable useful specialization and hands-on experience at the undergraduate level.
John Jainschigg is a Geeknet contributing editor, and is CEO of World2Worlds, Inc., a digital agency focused on immersive technology and gaming. John’s first introduction to concurrency came via interrupt handling and re-entrant programming at the assembler level on Z80- and 68000-based systems. He wrote concurrent, time-critical packet-switching applications on HP-UX RISC machines in the late 1980s, and since then has worked up and down the client-server stack in Java, C++, PHP, and other conventional and scripting languages, and more recently, in task-specific, state-based, radically concurrent languages like LSL.