NASIT 2019 Schedule

Schedule for the 2019 IEEE North American School of Information Theory (NASIT 2019)


Abstracts and Biographies


Kannan Ramchandran
UC Berkeley


Tara Javidi
UC San Diego


Adam Smith
Boston University


Alexander Barg
University of Maryland, College Park

Information, Concentration, and Learning
Maxim Raginsky
University of Illinois at Urbana-Champaign

Abstract: During the last two decades, concentration of measure has been a subject of various exciting developments in convex geometry, functional analysis, statistical physics, high-dimensional statistics, probability theory, information theory, communications and coding theory, computer science, and learning theory. One common theme that emerges in these fields is probabilistic stability: complicated, nonlinear functions of a large number of independent or weakly dependent random variables often tend to concentrate sharply around their expected values. Information theory plays a key role in the derivation of concentration inequalities. Indeed, the entropy method and the approach based on transportation-cost inequalities are two major information-theoretic paths toward proving concentration.
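
As a concrete instance of the "sharp concentration" statement above, the following sketch records McDiarmid's bounded-differences inequality; it is a standard textbook result, not material taken from the tutorial abstract:

```latex
% A standard illustration of the concentration phenomenon described
% above (a textbook fact, not specific to this tutorial): McDiarmid's
% bounded-differences inequality. If changing any single argument moves
% f by at most c_i, then for independent X_1, ..., X_n the value
% f(X_1, ..., X_n) concentrates sharply around its expected value:
\[
  \Pr\big( \lvert f(X_1,\dots,X_n) - \mathbb{E} f(X_1,\dots,X_n) \rvert \ge t \big)
  \;\le\; 2 \exp\!\left( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \right).
\]
% The entropy method mentioned in the abstract proves bounds of this
% type by controlling the moment generating function of f through the
% entropy functional
\[
  \operatorname{Ent}\big(e^{\lambda f}\big)
  = \mathbb{E}\big[\lambda f \, e^{\lambda f}\big]
  - \mathbb{E}\big[e^{\lambda f}\big] \log \mathbb{E}\big[e^{\lambda f}\big].
\]
```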

Machine learning algorithms can be viewed as stochastic transformations (or channels, in information-theoretic parlance) that map training data to hypotheses. Following the classic paper of Bousquet and Elisseeff, we say that such an algorithm is stable if its output does not depend too much on any individual training example. Since stability is closely connected to the generalization capabilities of learning algorithms, it is of theoretical and practical interest to obtain sharp quantitative estimates on the generalization bias of machine learning algorithms in terms of their stability properties. In this tutorial, I will survey a recent line of work aimed at deriving stability and/or generalization guarantees for learning algorithms based on mutual information, erasure mutual information, and related information-theoretic quantities.
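
A representative result from this line of work is the mutual-information generalization bound of Xu and Raginsky (2017); the sketch below states it under the standard sub-Gaussian assumption, with notation introduced here rather than taken from the abstract:

```latex
% Sketch of the Xu--Raginsky (2017) mutual-information generalization
% bound. Assumptions (standard in this line of work): S = (Z_1,...,Z_n)
% is an i.i.d. sample from \mu; the learning algorithm is a channel
% P_{W|S} producing hypothesis W; and the loss \ell(w, Z), Z ~ \mu,
% is \sigma-sub-Gaussian for every fixed w.
\[
  L_\mu(w) = \mathbb{E}_{Z \sim \mu}\big[\ell(w, Z)\big],
  \qquad
  L_S(w) = \frac{1}{n} \sum_{i=1}^n \ell(w, Z_i),
\]
\[
  \big\lvert \mathbb{E}\big[ L_\mu(W) - L_S(W) \big] \big\rvert
  \;\le\; \sqrt{ \frac{2\sigma^2}{n} \, I(S; W) }.
\]
% This makes the stability intuition quantitative: an algorithm whose
% output W carries little information about the training sample S
% (small I(S; W)) cannot overfit much on average.
```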

Bio: Maxim Raginsky received the B.S. and M.S. degrees in 2000 and the Ph.D. degree in 2002 from Northwestern University, all in Electrical Engineering. He has held research positions with Northwestern, the University of Illinois at Urbana-Champaign (where he was a Beckman Foundation Fellow from 2004 to 2007), and Duke University. In 2012, he returned to UIUC, where he is currently an Associate Professor and William L. Everitt Fellow with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory. He also holds a courtesy appointment with the Department of Computer Science. He received the CAREER Award from the National Science Foundation in 2013. Prof. Raginsky's interests cover probability and stochastic processes, deterministic and stochastic control, machine learning, optimization, and information theory. Much of his recent research is motivated by fundamental questions in the modeling, learning, and simulation of nonlinear dynamical systems, with applications to advanced electronics, autonomy, and artificial intelligence.


Monday, July 1
08:30 – 10:00   Students arrive and check in

Tuesday, July 2
08:30 – 10:00   Tutorial: TBA (Tara Javidi, UC San Diego)
10:00 – 10:30   Coffee Break
10:30 – 12:00   Tutorial: TBA (Tara Javidi, UC San Diego)
12:00 – 1:00    Lunch
1:00 – 2:30     Padovani Lecture: TBA (Kannan Ramchandran, UC Berkeley)
2:30 – 3:00     Break
3:00 – 4:30     Padovani Lecture: TBA (Kannan Ramchandran, UC Berkeley)
4:30 – 6:00     Poster Session I
6:00 – 7:00     Break
7:00 – 9:00     Banquet

Wednesday, July 3
08:30 – 10:00   Tutorial: TBA (Adam Smith, Boston University)
10:00 – 10:30   Coffee Break
10:30 – 12:00   Tutorial: TBA (Adam Smith, Boston University)
12:00 – 1:00    Lunch
1:00 – 3:00     Poster Session II
3:00 – 4:30     TBA

Thursday, July 4
08:30 – 10:00   Tutorial: TBA (Alexander Barg, University of Maryland, College Park)
10:00 – 10:30   Coffee Break
10:30 – 12:00   Tutorial: TBA (Alexander Barg, University of Maryland, College Park)
12:00 – 1:00    Lunch
1:00 – 2:30     Free Time; 4th of July Fireworks Viewing

Friday, July 5
08:30 – 10:00   Tutorial: Information, Concentration, and Learning (Maxim Raginsky, University of Illinois at Urbana-Champaign)
10:00 – 10:30   Coffee Break
10:30 – 12:00   Tutorial: Information, Concentration, and Learning (Maxim Raginsky, University of Illinois at Urbana-Champaign)
12:00 – 1:00    Lunch
1:00 – 2:30     TBA
2:30 – 3:00     TBA