Mark Horowitz, Stanford University
For the past half century the world has enjoyed the benefits of many innovations enabled by Moore’s Law scaling of silicon technology. While Intel claims that scaling is still healthy, most other organizations see issues today, and many more ahead. Regardless of whether it has already started to happen, scaling will eventually stop, and that point is not far away.
This talk will quickly review the basics behind silicon scaling, the current power problem, and current approaches to continue Moore’s Law after scaling slows (think 3-D and new technologies). I will then describe why I am not optimistic about any of the new technologies rescuing Moore’s Law (though there has been some interesting progress on the quantum side), and why I think that computing will be CMOS based for the foreseeable future. The net effect, which already exists today, is that the value of electronic technology has moved from being technology driven to being application driven. In an application-driven world, successful products include many “cupholders”, small low-cost additions that improve the user experience, so enabling them is essential.
The rest of the talk is my view of how the design process and the industry must adapt if it wants to continue to create high-value products. In application-driven value scenarios, the technologies that win are those that have low development costs, since most ideas fail. This has profound ramifications both for how we design chips and for how we design systems using chips. In both areas we need to enable people to try to create innovative new hardware solutions, and doing that requires creating enough design scaffolding to enable the equivalent of Apple’s App Store or Google Play for hardware design.
Mark Horowitz received his BS and MS in Electrical Engineering from MIT in 1978, and his PhD from Stanford in 1984. Since 1984 he has been a professor at Stanford working in the area of digital integrated circuit design. While at Stanford he has led a number of processor designs including: MIPS-X, one of the first processors to include an on-chip instruction cache; Torch, a statically-scheduled, superscalar processor; Flash, a flexible distributed shared memory (DSM) machine; and Smash, a reconfigurable polymorphic manycore processor. He has also worked in a number of other chip design areas including high-speed memory design, high-bandwidth interfaces, and fast floating point. In 1990 he took leave from Stanford to help start Rambus Inc., a company designing high-bandwidth memory interface technology.
Ion Stoica, UC Berkeley
As machine learning matures, the standard supervised learning setup is no longer sufficient. Instead of making and serving a single prediction as a function of a data point, machine learning applications increasingly must operate in dynamic environments, react to changes in the environment, and take sequences of actions to accomplish a goal. These modern applications are better framed within the context of reinforcement learning (RL), which deals with learning to operate within an environment. RL-based applications have already led to remarkable results, such as Google’s AlphaGo beating the Go world champion, and are finding their way into self-driving cars, UAVs, and surgical robotics.
These applications have very demanding computational requirements: at the high end, they may need to execute millions of tasks per second with millisecond-level latencies, and support heterogeneous and dynamic computation graphs. In this talk, we present Ray, a new cluster computing framework that meets these requirements, give some application examples, and discuss how it can be integrated with Apache Spark.
Ion Stoica is a Professor in the EECS Department at the University of California, Berkeley. He does research on cloud computing and networked computer systems. Past work includes Apache Spark, Apache Mesos, Tachyon, the Chord DHT, and Dynamic Packet State (DPS). He is an ACM Fellow and has received numerous awards, including the SIGOPS Hall of Fame Award (2015), the SIGCOMM Test of Time Award (2011), and the ACM Doctoral Dissertation Award (2001). In 2013, he co-founded Databricks, a startup to commercialize technologies for Big Data processing, and in 2006 he co-founded Conviva, a startup to commercialize technologies for large-scale video distribution.