National Institute of Informatics (NII), Director General / The University of Tokyo.
Director General of the National Institute of Informatics and Professor at the Institute of Industrial Science, the University of Tokyo. He received his Ph.D. from the University of Tokyo in 1983. He has served in various positions, including President of the Information Processing Society of Japan (2013–2015) and Chairman of the Committee for Informatics, Science Council of Japan (2014–2016). His research interests are wide-ranging, with a particular focus on database engineering. He has received many awards, including the ACM SIGMOD E. F. Codd Innovations Award, the IEICE Contribution Award, the IPSJ Contribution Award, the 21st Century Invention Award of the National Commendation for Invention, Japan, the C&C Prize, the IEEE Innovation in Societal Infrastructure Award, and the Japan Academy Prize. In 2013 he was awarded the Medal with Purple Ribbon, and in 2016 the Chevalier de la Légion d'Honneur. He is a fellow of ACM, IEICE, and IPSJ, an honorary member of CCF, and an IEEE Life Fellow.
Professor, Dept. of Computer Science & Engineering, Arizona State University.
Getting AI Agents to Interact and Collaborate with Us on Our Terms
As AI technologies enter our everyday lives at an ever-increasing pace, there is a growing need for AI systems to work synergistically with humans. This requires AI systems to exhibit behavior that is explainable to humans. Synthesizing such behavior requires AI systems to reason not only with their own models of the task at hand, but also about the mental models of their human collaborators. At a minimum, AI agents need approximations of the human's task and goal models, as well as of the human's model of the AI agent's task and goal models. The former guides the agent in anticipating and managing the needs, desires, and attention of the humans in the loop, while the latter allows it to act in ways that are interpretable to humans (by conforming to their mental models of it) and to provide customized explanations when needed. Using several case studies from our ongoing research, I will discuss how such multi-model reasoning forms the basis for explainable behavior in human-aware AI systems.
Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He was president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conferences on Artificial Intelligence, and a founding board member of the Partnership on AI. Kambhampati's research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He writes a column on the societal and policy implications of advances in Artificial Intelligence for The Hill. He can be followed on Twitter @rao2z.
Institute Chair Professor, Computer Science and Engineering, IIT Bombay.
Machine Learning as a Service: The Challenges of Serving a Million Client Distributions
The increasing concentration of big data and computing resources has led to the widespread adoption of machine learning as a service (MLaaS). The best-performing NLP, speech, image, and video recognition tools are now provided as network services. In such settings, the labeled data used for training may be proprietary, and different clients may be interested in different data distributions, often violating the core ML assumption that the training and test distributions match. This talk will discuss techniques for reducing such mismatch. First, we discuss ways in which the server can exploit multi-client training data to train ML models that generalize better to client distributions, without explicit parameter adaptation. Next, we call for a more detailed specification of a server's accuracy, where accuracy is not a single number but a surface over interpretable properties of client data. Such an interpretable surface would allow a client to make a more informed choice of model from the burgeoning marketplace of cloud services. Finally, we discuss methods for lightweight and heavyweight adaptation of a black-box service, in the context of NLP models for topic adaptation and speech models for accent adaptation.
Sunita Sarawagi researches in the fields of databases and machine learning. She is Institute Chair Professor at IIT Bombay. She received her Ph.D. in databases from the University of California, Berkeley, and a bachelor's degree from IIT Kharagpur. She has also worked at Google Research (2014–2016), CMU (2004), and the IBM Almaden Research Center (1996–1999). She was awarded the Infosys Prize in Engineering and Computer Science in 2019, and the Distinguished Alumnus Award from IIT Kharagpur. Her publications include best paper awards at the ACM SIGMOD, VLDB, ICDM, NIPS, and ICML conferences. She has served on the boards of directors of ACM SIGKDD and the VLDB Endowment. She was program chair for the ACM SIGKDD 2008 conference and research-track co-chair for the VLDB 2011 conference, has served as a program committee member for the SIGMOD, VLDB, SIGKDD, ICDE, and ICML conferences, and has served on the editorial boards of the ACM TODS and ACM TKDD journals.
Istituto di Scienza e Tecnologie dell'Informazione, Consiglio Nazionale delle Ricerche (ISTI-CNR), Italy.
Fabrizio Sebastiani is a Senior Researcher at the Institute for the Science and Technologies of Information of the National Council of Research (ISTI-CNR), Italy. He has also been a Principal Scientist at the Qatar Computing Research Institute and an Associate Professor at the Department of Pure and Applied Mathematics of the University of Padova. His research interests lie at the intersection of information retrieval and machine learning, with particular emphasis on text mining, text classification, information extraction, opinion mining, quantification, and their applications in fields such as medical informatics, market research, and customer relationship management.
Director, Computational Social Science, Facebook.
The Geography of Social Ties
This talk will describe how social networks crisscross the world. Geographic patterns emerge from historical borders, as well as from migrations past and present. They correlate with myriad phenomena, from trade to COVID cases. Furthermore, having social connections to regions experiencing changes, for example in COVID cases, is tied to corresponding responses in social distancing, beyond what would be expected from changes within one's own region.
Lada Adamic leads the Computational Social Science Team at Facebook. Prior to joining Facebook, she was an associate professor at the University of Michigan's School of Information and Center for the Study of Complex Systems. Her research interests center on information dynamics in networks. She has received an NSF CAREER award, a University of Michigan Henry Russel Award, and the 2012 Lagrange Prize in Complex Systems.
Paper Submission Deadline
Paper Acceptance Notification
Camera Ready Papers Due