A new golden age for computer architecture
High-level, domain-specific languages and architectures, along with freeing architects from the chains of proprietary instruction sets, will usher in a new golden age. David Patterson explains why, despite the end of Moore's law, he expects an outpouring of codesigned ML-specific chips and supercomputers that will improve even faster than Moore's original 1965 prediction.
Talk Title | A new golden age for computer architecture
Speakers | David Patterson (UC Berkeley)
Conference | Artificial Intelligence Conference
Conf Tag | Put AI to Work
Location | San Francisco, California
Date | September 5-7, 2018
URL | Talk Page
Slides |
Video | Talk Video
In the 1980s, Mead and Conway democratized chip design, and high-level language programming surpassed assembly language programming, which made instruction set advances viable. Innovations like reduced instruction set computers (RISC), superscalar execution, and speculation ushered in a golden age of computer architecture, when performance doubled every 18 months.

Unfortunately, that golden age has ended: microprocessor performance improved only 3% last year. The ending of Dennard scaling and Moore's law and the deceleration of performance gains for standard microprocessors are not problems that must be solved but facts that, if accepted, offer breathtaking opportunities. The good news is that our ravenous ML colleagues want the fastest computers we can build; ML researchers at the forefront increase their computational appetite for training by 10x per year.

High-level, domain-specific languages and architectures, along with freeing architects from the chains of proprietary instruction sets, will usher in a new golden age. David Patterson explains why, despite the end of Moore's law, he expects an outpouring of codesigned ML-specific chips and supercomputers that will improve even faster than Moore's original 1965 prediction. Like the 1980s, the next decade will be exciting for computer architects in academia and industry alike.
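To make the growth rates in the abstract concrete, here is a small illustrative sketch (the rates come from the text above; the decade-long projections are just compound arithmetic, not figures from the talk):

```python
# Compound annual growth implied by the rates quoted in the abstract.

def annual_factor(doubling_months: float) -> float:
    """Annual growth factor for performance that doubles every `doubling_months`."""
    return 2 ** (12 / doubling_months)

golden_age = annual_factor(18)  # doubling every 18 months ~ 1.59x/year (~59% annual gain)
recent = 1.03                   # 3% microprocessor improvement last year
ml_training = 10.0              # ML training compute appetite: 10x per year

# Over a decade, the gap between these paces compounds dramatically.
print(f"golden-age pace over 10 years:  {golden_age ** 10:.0f}x")
print(f"recent pace over 10 years:      {recent ** 10:.2f}x")
print(f"ML-appetite pace over 10 years: {ml_training ** 10:.0e}x")
```

The contrast is the point of the talk: a ~100x decade at the golden-age pace versus roughly 1.3x at last year's pace, while ML demand grows far faster than either.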