---
layout: default
title: Schedule
hide: false
navigation_weight: 8
---
<style type="text/css">
  .schedule_table {
    margin-bottom: 2em;
  }
  .schedule_table th {
    border-top: 1px solid black;
    border-bottom: 1px solid black;
  }
  .schedule_table td {
    padding: 0.5em;
  }
  .schedule_table td:nth-child(1) {
    min-width: 80px;
    white-space: nowrap;
  }
  .schedule_table td:nth-child(2) {
    border-left: 1px solid black;
    border-bottom: 1px dashed black;
  }
  .schedule_table tr:last-child td {
    border-bottom: 1px solid black;
  }
</style>
<!--<h1>{{ site.conference.name }} {{ site.conference.year }} Program of Events</h1>-->
<!--<b>VENUE MAP</b>: A plan of the hotel and surrounding areas can be found <a href="Hyatt66.plan.pdf">here</a>.<br><br>-->
<h1>Schedule</h1>
<h2>Information for presenters</h2>
<p>
Each presenter has 20 minutes: a 16-minute presentation followed by a 4-minute Q&A session.
</p><p>
All oral presentations also have a poster presentation slot. Poster boards are 0.90 m wide × 2.10 m high; we recommend A0 portrait as the poster size.
</p>
<br><br>
<h2>Registration desk hours</h2>
<ul>
<li>Sunday April 8: 17:00 to 20:00</li>
<li>Monday April 9: 7:30 to 13:30 </li>
<li>Tuesday April 10: 7:30 to 13:30 </li>
<li>Wednesday April 11: 7:30 to 10:30 </li>
</ul>
<h2>April 9 (Monday)</h2>
<table class="schedule_table" cellspacing="0">
<tr>
<th>Time</th>
<th>Schedule</th>
</tr>
<tr>
<td>9:00 - 10:00</td>
<td>Invited speaker: <a href="jeniffer_hill.html">Jennifer Hill</a></td>
</tr>
<tr>
<td>10:10 - 11:30</td>
<td><u>Oral Session 1.1: Statistics</u>
<br> Session chair: Dirk Husmeier
<ul>
<!--302-->
<li><b>Statistically Efficient Estimation for Non-Smooth Probability Densities</b><br>
Masaaki Imaizumi, Takanori Maehara, Yuichi Yoshida
</li>
<!--370-->
<li><b> Stochastic Zeroth-order Optimization in High Dimensions</b><br>
Yining Wang, Arindam Banerjee, Simon Du, Sivaraman Balakrishnan, Aarti Singh
</li>
<!--76-->
<li><b> Sparse Linear Isotonic Models</b><br>
Sheng Chen, Arindam Banerjee
</li>
<!--21-->
<li><b> Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs</b><br>
Lawrence Murray, Daniel Lundén, Jan Kudlicka, David Broman, Thomas Schön
</li>
</ul>
</td>
</tr>
<tr>
<td>11:30 - 14:00</td>
<td><a href="poster_sessions.html">Poster session 1</a></td>
</tr>
<tr>
<td>14:00 - 15:30</td>
<td>Lunch (on your own)</td>
</tr>
<tr>
<td>15:30 - 16:50</td>
<td><u>Oral Session 1.2: Online Learning</u>
<br> Session chair: Mark Deisenroth
<ul>
<!--147-->
<li><b> Combinatorial Semi-Bandits with Knapsacks</b><br>
Karthik Abinav Sankararaman, Aleksandrs Slivkins
</li>
<!--381-->
<li><b> Online Continuous Submodular Maximization</b><br>
Lin Chen, Hamed Hassani, Amin Karbasi
</li>
<!--305-->
<li><b> Convergence of Value Aggregation for Imitation Learning</b><br>
Ching-An Cheng, Byron Boots
</li>
<!--37-->
<li><b> Competing with Automata-based Expert Sequences</b><br>
Scott Yang, Mehryar Mohri
</li>
</ul>
</td>
</tr>
<tr>
<td>16:50 - 17:20</td>
<td>Coffee break</td>
</tr>
<tr>
<td>17:20 - 18:40</td>
<td><u>Oral Session 1.3: Learning and Estimation</u>
<br> Session chair: Isabel Valera Martinez
<ul>
<!--159-->
<li><b> A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary Convex Regularizer </b><br>
Tianbao Yang, Zhe Li, Lijun Zhang
</li>
<!--10-->
<li><b> Learning linear structural equation models in polynomial time and sample complexity</b><br>
Asish Ghoshal, Jean Honorio
</li>
<!--477-->
<li><b> Consistent Algorithms for Classification under Complex Losses and Constraints</b><br>
Harikrishna Narasimhan
</li>
<!--361-->
<li><b> Subsampling for Ridge Regression via Regularized Volume Sampling</b><br>
Michal Derezinski, Manfred Warmuth
</li>
</ul>
</td>
</tr>
<tr>
<td>19:30</td>
<td>Welcome reception in the Canary (exit at the bottom of the building and turn right at the pool: the Canary is the building near the end of the pool).</td>
</tr>
</table>
<h2>April 10 (Tuesday)</h2>
<table class="schedule_table" cellspacing="0">
<tr>
<th>Time</th>
<th>Schedule</th>
</tr>
<tr>
<td>9:00 - 10:00</td>
<td>Invited speaker: <a href="david_blei.html">David Blei</a></td>
</tr>
<tr>
<td>10:10 - 11:30</td>
<td><u>Oral Session 2.1: Bayesian Methods</u>
<br> Session chair: Barnabas Poczos
<ul>
<!--106-->
<li><b> Fast Threshold Tests for Detecting Discrimination</b><br>
Emma Pierson, Sam Corbett-Davies, Sharad Goel
</li>
<!--13-->
<li><b> Parallelised Bayesian Optimisation via Thompson Sampling</b><br>
Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos
</li>
<!--495-->
<li><b> Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition</b><br>
Pavel Izmailov, Dmitry Kropotov, Alexander Novikov
</li>
<!--390-->
<li><b> Factorial HMM with Collapsed Gibbs Sampling for optimizing long-term HIV Therapy</b><br>
Amit Gruber, Chen Yanover, Tal El-Hay, Yaara Goldschmidt, Anders Sönnerborg, Vanni Borghi, Francesca Incardona
</li>
</ul>
</td>
</tr>
<tr>
<td>11:30 - 14:00</td>
<td><a href="poster_sessions.html">Poster session 2</a></td>
</tr>
<tr>
<td>14:00 - 15:30</td>
<td>Lunch (on your own)</td>
</tr>
<tr>
<td>15:30 - 16:30</td>
<td><u>Oral Session 2.2: Large-Scale Learning</u>
<br> Session chair: Adrian Weller
<ul>
<!--326-->
<li><b> Sketching for Kronecker Product Regression and P-splines</b><br>
Huaian Diao, Zhao Song, Wen Sun, David Woodruff
</li>
<!--439-->
<li><b> Towards Provable Learning of Polynomial Neural Networks Using Low-Rank Matrix Estimation</b><br>
Mohammadreza Soltani, Chinmay Hegde
</li>
<!--148-->
<li><b> Convergence diagnostics for stochastic gradient descent</b><br>
Jerry Chee, Panos Toulis
</li>
<!--50-->
<!--<li><b> Learning Sparse Additive Models with Interactions in High Dimensions</b><br>
Hemant Tyagi, Anastasios Kyrillidis, Bernd Gärtner, Andreas Krause
</li>-->
</ul>
</td>
</tr>
<tr>
<td>16:30 - 19:00</td>
<td><a href="poster_sessions.html">Poster session 3</a></td>
</tr>
<tr>
<td>19:30</td>
<td>Conference Dinner at Monumento al Campesino: the bus leaves at 19:30 from the front of the hotel.</td>
</tr>
</table>
<h2>April 11 (Wednesday)</h2>
<table class="schedule_table" cellspacing="0">
<tr>
<th>Time</th>
<th>Schedule</th>
</tr>
<tr>
<td>9:00 - 10:00</td>
<td>Invited speaker: <a href="andreas_krause.html">Andreas Krause</a></td>
</tr>
<tr>
<td>10:10 - 11:30</td>
<td><u>Oral Session 3.1: Approximate Inference</u>
<br> Session chair: Matt Hoffman
<ul>
<!--190-->
<li><b> Variational Sequential Monte Carlo</b><br>
Christian Naesseth, Scott Linderman, Rajesh Ranganath, David Blei
</li>
<!--374-->
<li><b> VAE with a VampPrior</b><br>
Jakub Tomczak, Max Welling
</li>
<!--346-->
<li><b> Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes</b><br>
Hyunjik Kim, Yee Whye Teh
</li>
<!--321-->
<li><b> Multimodal Prediction and Personalization of Photo Edits with Deep Generative Models</b><br>
Ardavan Saeedi, Matthew Hoffman, Stephen DiVerdi, Asma Ghandeharioun, Matthew Johnson, Ryan Adams
</li>
</ul>
</td>
</tr>
<tr>
<td>11:30 - 14:00 </td>
<td><a href="poster_sessions.html">Poster session 4</a></td>
</tr>
<tr>
<td>14:00 - 15:30</td>
<td>Lunch (on your own)</td>
</tr>
<tr>
<td>15:30 - 16:30</td>
<td><u>Oral Session 3.2: Kernel Methods</u>
<br> Session chair: Andrew Gordon Wilson
<ul>
<!--23-->
<li><b> Random Warping Series: A Random Features Method for Time-Series Embedding</b><br>
Lingfei Wu, Ian En-Hsu Yen, Jinfeng Yi, Fangli Xu, Qi Lei, Michael Witbrock
</li>
<!--154-->
<li><b> Efficient and principled score estimation with Nyström kernel exponential families</b><br>
Dougal Sutherland, Heiko Strathmann, Michael Arbel, Arthur Gretton
</li>
<!--423-->
<li><b> Multi-scale Nystrom Method</b><br>
Woosang Lim, Rundong Du, Bo Dai, Kyomin Jung, Le Song, Haesun Park
</li>
<!--476-->
<!--<li><b> Differentially Private Causal Inference</b><br>
Matt Kusner, Yu Sun, Karthik Sridharan, Kilian Weinberger
</li>-->
</ul>
</td>
</tr>
<tr>
<td>16:30 - 17:00</td>
<td>Coffee break</td>
</tr>
<tr>
<td>17:00 - 18:40</td>
<td><u>Oral Session 3.3: Optimization</u>
<br> Session chair: Simon Lacoste-Julien
<ul>
<li><b> Batch-Expansion Training: An Efficient Optimization Framework</b><br>
Michal Derezinski, Dhruv Mahajan, Sathiya Keerthi, S. V. N. Vishwanathan, Markus Weimer
</li>
<li><b> Adaptive balancing of gradient and update computation times using global geometry and approximate subproblems</b><br>
Sai Praneeth Reddy Karimireddy, Sebastian Stich, Martin Jaggi
</li>
<li><b> Frank-Wolfe Splitting via Augmented Lagrangian Method</b><br>
Gauthier Gidel, Fabian Pedregosa, Simon Lacoste-Julien
</li>
<li><b> Structured Optimal Transport</b><br>
David Alvarez Melis, Tommi Jaakkola, Stefanie Jegelka
</li>
<li><b> Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods</b><br>
Robert Gower, Nicolas Le Roux, Francis Bach
</li>
</ul>
</td>
</tr>
</table>
<br>
<!--<style type="text/css">-->
<!--.tg {border-collapse:collapse;border-spacing:0;}-->
<!--.tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}-->
<!--.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;}-->
<!--.tg .tg-baqh{text-align:center;vertical-align:top}-->
<!--.tg .tg-amwm{font-weight:bold;text-align:center;vertical-align:top}-->
<!--.tg .tg-yw4l{vertical-align:top}-->
<!--</style>-->
<!--<table class="tg" style="undefined;table-layout: fixed; width: 895px">-->
<!--<colgroup>-->
<!--<col style="width: 85px">-->
<!--<col style="width: 113px">-->
<!--<col style="width: 271px">-->
<!--<col style="width: 427px">-->
<!--</colgroup>-->
<!--<tr>-->
<!--<th class="tg-amwm">Day</th>-->
<!--<th class="tg-amwm">Time</th>-->
<!--<th class="tg-amwm">Event Name</th>-->
<!--<th class="tg-amwm">Details</th>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-amwm" rowspan="7">Monday</td>-->
<!--<td class="tg-baqh">9:00-10:00</td>-->
<!--<td class="tg-baqh">Invited Speaker Talk: Prof. Jeniffer Hill</td>-->
<!--<td class="tg-yw4l">Talk Title: Causal inference at the intersection of machine learning and statistics: opportunities and challenges</td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">10:10-11:30</td>-->
<!--<td class="tg-baqh">Oral Session: Statistics</td>-->
<!--<td class="tg-yw4l">-->
<!--<ul><li>Statistically Efficient Estimation for Non-Smooth Probability Densities</li>-->
<!--<li>Stochastic Zeroth-order Optimization in High Dimensions</li>-->
<!--<li>Sparse Linear Isotonic Models</li>-->
<!--<li>Delayed Sampling and Automatic Rao-Blackwellization of Probabilistic Programs</li></ul></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">11:30-14:00</td>-->
<!--<td class="tg-baqh">Poster Session 1</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">14:00-15:30</td>-->
<!--<td class="tg-baqh">Lunch (on your own)</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">15:30-16:50</td>-->
<!--<td class="tg-baqh">Oral Session: Online learning</td>-->
<!--<td class="tg-yw4l"><ul><li>Combinatorial Semi-Bandits with Knapsacks</li>-->
<!--<li>Online Continuous Submodular Maximization</li>-->
<!--<li>Convergence of Value Aggregation for Imitation Learning</li>-->
<!--<li>Competing with Automata-based Expert Sequences</li></ul></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">16:50-17:20</td>-->
<!--<td class="tg-baqh">Coffee Break</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">17:20-18:40</td>-->
<!--<td class="tg-baqh">Oral Session: Computational Learning theory</td>-->
<!--<td class="tg-yw4l"><ul><li>A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary Convex Regularizer</li>-->
<!--<li>Learning linear structural equation models in polynomial time and sample complexity</li>-->
<!--<li>Consistent Algorithms for Classification under Complex Losses and Constraints</li>-->
<!--<li>Subsampling for Ridge Regression via Regularized Volume Sampling</li></ul></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-amwm" rowspan="6">Tuesday</td>-->
<!--<td class="tg-baqh">09:00-10:00<br></td>-->
<!--<td class="tg-baqh">Invited Speaker: David Blei</td>-->
<!--<td class="tg-yw4l">Talk Title: Black Box Variational Inference and Deep Exponential Families</td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">10:10-11:30</td>-->
<!--<td class="tg-baqh">Oral Session: Bayesian Methods</td>-->
<!--<td class="tg-yw4l"><ul><li>Fast Threshold Tests for Detecting Discrimination</li>-->
<!--<li>Parallelised Bayesian Optimisation via Thompson Sampling</li>-->
<!--<li>Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition</li>-->
<!--<li>Factorial HMM with Collapsed Gibbs Sampling for optimizing long-term HIV Therapy</li></ul></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">11:30-14:00</td>-->
<!--<td class="tg-baqh">Poster Session 2</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">14:00-15:30</td>-->
<!--<td class="tg-baqh">Lunch (on your own)</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">15:30-16:30</td>-->
<!--<td class="tg-baqh">Oral Session: Online learning</td>-->
<!--<td class="tg-yw4l"><ul><li>Random Warping Series: A Random Features Method for Time-Series Embedding</li>-->
<!--<li>Efficient and principled score estimation with Nyström kernel exponential families</li>-->
<!--<li>Multi-scale Nystrom Method</li></ul></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">16:30-19:00</td>-->
<!--<td class="tg-baqh">Poster Session 3</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-amwm">Wednesday</td>-->
<!--<td class="tg-baqh">09:00-10:00</td>-->
<!--<td class="tg-baqh">Invited Talk: Andreas Krasuse</td>-->
<!--<td class="tg-yw4l">Talk Title: Towards Safe Reinforcement Learning</td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-amwm" rowspan="6"></td>-->
<!--<td class="tg-baqh">10:10-11:30</td>-->
<!--<td class="tg-baqh">Oral Session: Approximate Inference</td>-->
<!--<td class="tg-yw4l"><ul><li>Variational Sequential Monte Carlo</li>-->
<!--<li>VAE with a VampPrior</li>-->
<!--<li>Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes</li>-->
<!--<li>Multimodal Prediction and Personalization of Photo Edits with Deep Generative Models</li></ul></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">11:30-14:00</td>-->
<!--<td class="tg-baqh">Poster Session 4</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">14:00-15:30</td>-->
<!--<td class="tg-baqh">Lunch (on your own)</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh"></td>-->
<!--<td class="tg-baqh">Oral Session: Large Scale Learning</td>-->
<!--<td class="tg-yw4l"><ul><li>Sketching for Kronecker Product Regression and P-splines</li>-->
<!--<li>Towards Provable Learning of Polynomial Neural Networks Using Low-Rank Matrix Estimation</li>-->
<!--<li>Convergence diagnostics for stochastic gradient descent</li></ul></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">16:30-17:00</td>-->
<!--<td class="tg-baqh">Coffee Break</td>-->
<!--<td class="tg-yw4l"></td>-->
<!--</tr>-->
<!--<tr>-->
<!--<td class="tg-baqh">17:00-18:40</td>-->
<!--<td class="tg-baqh">Oral Session: Optimization</td>-->
<!--<td class="tg-yw4l"><ul><li>Batch-Expansion Training: An Efficient Optimization Framework</li>-->
<!--<li>Adaptive balancing of gradient and update computation times using global geometry and approximate subproblems</li>-->
<!--<li>Frank-Wolfe Splitting via Augmented Lagrangian Method</li>-->
<!--<li>Structured Optimal Transport</li>-->
<!--<li>Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods</li></ul></td>-->
<!--</tr>-->
<!--</table>-->
<!--<h2>Best Paper Awards</h2>-->
<!--<a href="http://proceedings.mlr.press/v54/newling17a.html">A Sub-Quadratic Exact Medoid Algorithm</a> <br>-->
<!--<font color=red>James Newling, Francois Fleuret</font><br><br>-->
<!--<a href="http://proceedings.mlr.press/v54/bahmani17a.html">Phase Retrieval Meets Statistical Learning Theory: A Flexible Convex Relaxation</a><br>-->
<!--<font color=red>Sohail Bahmani, Justin Romberg</font><br><br>-->
<!--<a href="http://proceedings.mlr.press/v54/naesseth17a.html">Reparameterization Gradients through Acceptance- Rejection Sampling Algorithms</a><br>-->
<!--<font color=red>Christian Naesseth, Francisco Ruiz, Scott Linderman, David Blei</font><br><br>-->
<!--{% for post in site.posts reversed %}-->
<!--{% if post.layout == "singletrack" %}-->
<!--{% include listsingle.html %}-->
<!--{% endif %}-->
<!--{% endfor %}-->