University of Michigan Professor, Computer Science and Engineering
The rising role of AI and data-driven analytics in the enterprise, along with growing volumes of collected data, has significantly increased the computational footprint of most IT departments. In this talk, we will briefly explain some of the inherent inefficiencies across the software and hardware stack that hinder cost-efficient and timely interactions between analysts and their datasets. We will then turn our attention to statistical techniques as the only viable option for overcoming these inefficiencies in the near and long term. We will conclude by introducing an open-source framework, called VerdictDB, that enables organizations to reduce both their cloud and computational costs and their time-to-insight by using state-of-the-art approximate query processing strategies.
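To make the idea concrete, the sketch below illustrates the core principle behind approximate query processing in general, not VerdictDB's actual API or internals: answer an aggregate query from a small uniform sample, with a statistical error bound, instead of scanning the full table. The table data and sample size are hypothetical.

```python
import random
import statistics

random.seed(42)

# Hypothetical "table" of 1,000,000 order amounts (illustrative data only).
table = [random.uniform(10, 500) for _ in range(1_000_000)]

# Exact answer: requires a full scan of the table.
exact_avg = statistics.fmean(table)

# Approximate answer: scan only a 1% uniform sample, then attach a
# 95% confidence interval derived from the Central Limit Theorem.
sample = random.sample(table, 10_000)
est_avg = statistics.fmean(sample)
stderr = statistics.stdev(sample) / len(sample) ** 0.5
lo, hi = est_avg - 1.96 * stderr, est_avg + 1.96 * stderr

print(f"exact   = {exact_avg:.2f}")
print(f"approx  = {est_avg:.2f}  (95% CI: [{lo:.2f}, {hi:.2f}])")
```

The sample is 100x smaller than the table, so the aggregate costs roughly 100x less to compute, while the confidence interval tells the analyst how far the estimate can plausibly be from the exact answer.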