Time | Session |
---|---|
0850-0900 | Welcome |
0900-0955 | Sudipto Guha "Reflections on Streams: Representation, Randomization and Access" |
0955-1050 | Suresh Venkatasubramanian "Accountability in Data Analysis" |
1100-1130 | Tea Break |
1130-1230 | Ryan Williams "Algorithms for Circuits and Circuits for Algorithms: Connecting the Tractable and the Intractable" |
1230-1400 | Lunch |
1400-1500 | Inauguration of Building by Director and Asha Jadeja Motwani |
1500-1600 | Asha Jadeja Motwani "Entrepreneurship in Computer Science, Rajeev's perspective" |
1600-1620 | Tea |
1620-1720 | Reminiscences |
1735-1830 | Manindra Agrawal "Polynomial Identity Testing for Small Depth Circuits" |
1930- | Dinner (PBCEC Lawns (Guest House)) |
Title: Algorithms for Circuits and Circuits for Algorithms: Connecting the Tractable and the Intractable
Abstract:
The title of this talk highlights an emerging duality between two basic topics in algorithms and complexity theory.
Algorithms for circuits refers to the design of algorithms that take finite logical circuits or Boolean functions as input and check a simple property of the complexity of the underlying function. For instance, an algorithm that determines whether a given logical circuit C has an input making C output true would solve the NP-complete Circuit-SAT problem. Such an algorithm is unlikely to run in polynomial time, but it could still be more efficient than exhaustively trying all possible inputs to the circuit.
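The exhaustive-search baseline that the abstract measures against can be sketched in a few lines; here the circuit is represented abstractly as any Boolean-valued function of its inputs (a stand-in for a gate-level description, chosen for brevity):

```python
from itertools import product

def circuit_sat_bruteforce(circuit, num_inputs):
    """Decide Circuit-SAT by trying all 2^n input assignments.

    `circuit` is any function mapping a tuple of booleans to a boolean;
    it stands in for an explicit gate-level circuit description.
    Returns a satisfying assignment, or None if the circuit is
    unsatisfiable. Running time is exponential in num_inputs.
    """
    for assignment in product([False, True], repeat=num_inputs):
        if circuit(assignment):
            return assignment  # satisfying input found
    return None  # no assignment makes the circuit output true

# Example: C(x0, x1, x2) = (x0 OR x1) AND NOT x2
c = lambda x: (x[0] or x[1]) and not x[2]
print(circuit_sat_bruteforce(c, 3))  # prints (False, True, False)
```

The algorithmic question the talk raises is precisely whether this 2^n loop can be beaten, even slightly, for interesting circuit classes.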
Circuits for algorithms refers to the modeling of "complex" uniform algorithms with "simple" Boolean circuit families, or proving that such modeling is impossible. For example, can every exponential-time algorithm be simulated using Boolean circuit families of only polynomial size? It is widely conjectured that the answer is no, but the present mathematical tools available are still too crude to resolve this kind of separation problem.
This talk surveys these two generic subjects and the connections that have been developed between them, focusing on connections between non-trivial circuit-analysis algorithms and proofs of circuit complexity lower bounds.
Bio:
Ryan Williams is an assistant professor in the computer science department at Stanford University. Previously he was a postdoctoral fellow at IBM Almaden Research Center in San Jose, California, and a member of the Institute for Advanced Study in Princeton, New Jersey. He received his PhD from Carnegie Mellon in 2007 under Manuel Blum, and B.A. and M.Eng. degrees from Cornell. His honors include several best paper awards, an Alfred P. Sloan Fellowship (2013), and a Microsoft Research Faculty Fellowship (2013).
Title: Reflections on Streams: Representation, Randomization and Access
Abstract:
The field of Data Streams emerged in the late 1970s as a method of computing under limited memory. Much has changed in the last four decades, but the basic premise motivating Data Streams has only become more important. Over the last two decades there has been enormous progress in the field, yet we have perhaps seen only a minuscule glimpse of the transition to the internet of everything. In that emerging world, the basic primitives of computation are up for question – for example, the traditional view of computation as a function mapping input to output does not always hold. The input is not readily accessible, and a program has to simultaneously decide which data to access or materialize and what to compute. The amount of access to the input has been a traditional measure of resource, but we are witnessing the emergence of "how" a program accesses or materializes its input as the key resource.
Data Stream algorithms, loosely defined, constitute the study of how the nature of access to input affects our computational capabilities. This talk will reflect on both early and recent developments in Data Streams, and how the field brings together the threads of randomization, communication and approximation. Each of these threads, as well as the overall theme of Data Streams, has been a significant part of the research profile of Rajeev Motwani.
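Not drawn from the talk itself, but a minimal illustration of the premise above, computing over a stream in one pass with memory far smaller than the input, is the classic Misra-Gries frequent-items algorithm:

```python
def misra_gries(stream, k):
    """One-pass frequent-items sketch using at most k-1 counters.

    Any item occurring more than len(stream)/k times is guaranteed to
    survive in the counter set; the stored counts are underestimates of
    the true frequencies (by at most len(stream)/k each).
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # no free counter: decrement all, dropping those that hit zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
print(misra_gries(stream, 3))  # "a" (5 of 9 items) is guaranteed to appear
```

The point is the resource accounting: the stream is read once, in order, and the working state is O(k) regardless of the stream's length.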
Bio:
Sudipto Guha is a graduate of IIT Kanpur, class of 1995. He completed his Ph.D. in 2000 at Stanford University under the supervision of Rajeev Motwani. Subsequently he spent a year as a senior member of technical staff in the Network Optimizations and Analysis Research department at AT&T Shannon Labs Research. He is currently an Associate Professor in the Department of Computer and Information Sciences at the University of Pennsylvania. His research interests are in the area of Algorithms, specifically Approximation Algorithms and Data Streams.
He is a recipient of the NSF CAREER award and the Alfred P. Sloan Foundation fellowship. He was also one of the awardees of the President’s Gold Medal, IIT Kanpur.
Title: Polynomial Identity Testing for Small Depth Circuits
Abstract:
The Polynomial Identity Testing (PIT) problem is one of the fundamental problems in complexity theory. The solutions of a number of other problems depend on it, including proving strong arithmetic circuit lower bounds. Over the last decade, significant progress has been made towards solving the problem, perhaps the most important step being the reduction to PIT for small-depth circuits. In this talk, I will outline the history of the problem and its current status.
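For orientation: the randomized solution to PIT, via the Schwartz-Zippel lemma, is simple to state, and the open question the talk addresses is derandomizing it. A sketch, treating the polynomial as a black box we can evaluate:

```python
import random

def schwartz_zippel_is_zero(poly, num_vars, trials=20, field_size=10**9 + 7):
    """Randomized identity test for a black-box polynomial.

    By the Schwartz-Zippel lemma, a nonzero polynomial of total degree d
    vanishes at a uniformly random point of [0, field_size)^num_vars with
    probability at most d / field_size. So any nonzero evaluation proves
    the polynomial is not identically zero; all-zero evaluations mean it
    is zero with high probability.
    """
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(num_vars)]
        if poly(point) % field_size != 0:
            return False  # definitely not the zero polynomial
    return True  # identically zero, with high probability

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero
p = lambda v: (v[0] + v[1]) ** 2 - (v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2)
print(schwartz_zippel_is_zero(p, num_vars=2))  # prints True
```

Crucially, `poly` may be given as an arithmetic circuit whose expanded form has exponentially many monomials; the test only needs evaluations, which is what makes a deterministic replacement difficult.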
Bio:
Manindra Agrawal is the Dean of Faculty Affairs and a Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Kanpur. He works in the theory of computation; specifically, in complexity theory, algorithmic number theory, and algebra. He has worked on the Isomorphism Conjecture for NP-complete sets, a polynomial-time algorithm for testing primality of numbers, and Polynomial Identity Testing.
He is a fellow of the Indian science and engineering academies and of TWAS. He is a recipient of the Clay Research Award, the ICTP Prize, the Gödel Prize, the Fulkerson Prize, the Shanti Swarup Bhatnagar Award, the Infosys Mathematics Prize, and the Padma Shri.
Title: Accountability in Data Analysis
Abstract:
Can we trust our algorithms? Should we?
We live in a world run by algorithms that we can no longer comprehend. This is no longer the realm of science fiction -- not if you use a mailbox sorted by "relevance", look at recommendations for movies online, or even use your social media feed as a news filter. This state of affairs is a consequence of our successes: in data mining, in the design of algorithms that work at mind-boggling scale, and in the building of giant clusters of servers that distribute, communicate and process data in the blink of an eye.
The challenge for us today is no longer how to compute something, but how to trust what we ask a system to compute for us. In this talk I'll outline some new ideas on the problems of trust, verification and accountability. I'll connect a core idea in the theory of computing -- verification is (possibly) more powerful than computation -- with new ways to verify the results of an outsourced computation. And I'll describe ways to use algorithms (and information theory) to detect whether a system is being biased in how it makes decisions.
Bio:
Suresh Venkatasubramanian is an associate professor in the School of Computing at the University of Utah. His interests include algorithms, computational geometry, data analysis, and the challenges of large data problems. He is a recipient of a Warnock Endowed Chair at the University of Utah, as well as the NSF CAREER award. He also writes the Geomblog, a blog on algorithms, computational geometry and data.