We live in an era of public clouds composed of millions of servers spread across more than 100 datacenters worldwide. A growing share of the workloads running in Azure rely on deep neural networks (DNNs) as a fundamental component. Every week, a novel DNN emerges that surpasses conventional algorithms for image classification, natural language understanding, translation, and other tasks. Achieving higher performance on these workloads requires novel chips and systems: new instructions for CPUs, enhanced GPUs, FPGAs, AI processors, and disruptive distributed systems. This demand, combined with the higher power density encountered as lithography shrinks below 10 nm, is driving a renaissance in computer architecture. Innovation up and down the stack is yielding greater year-over-year performance improvements than we saw during the golden era of instruction-level parallelism and fast clock rates in the 1990s. The path is not without challenges, especially in packaging, cooling, networking, and software; regardless, we shall see a 1000x improvement in the next 5 years.