"Qingyuan Academic" Lecture Series: Machine Learning Session

2023-05-11

The "Qingyuan Academic" lecture series is hosted by the Qingyuan Research Institute of Shanghai Jiao Tong University and co-organized with the Shanghai Artificial Intelligence Laboratory and the Global University Artificial Intelligence Academic Alliance. The series invites experts and scholars in artificial intelligence to share cutting-edge research and recent results, with the aim of broadening academic horizons and providing a platform for exchange.


Machine Learning Academic Colloquium

Time: 13:30-15:30, Thursday, May 11

Venue: Room 3-200, SEIEE Buildings


Host: Cewu Lu
Assistant Dean, Qingyuan Research Institute, Shanghai Jiao Tong University
Professor and Doctoral Supervisor, Department of Computer Science and Engineering


Talk 1: Theory of Deep Learning
Abstract: While deep learning has achieved great success in many applications, a solid theoretical understanding of it is still lacking. In this talk I will present our recent work on the representation power, optimization, and generalization of deep learning. I will first show that deep neural networks with bounded width are universal approximators. I will then turn to the training of deep neural networks. Conventional wisdom holds that training deep nets is a highly nonconvex optimization problem; empirically, however, one can often find global minima simply by using gradient descent. I will show that if the network is sufficiently wide, then starting from a random initialization, gradient descent provably finds a global optimum at a linear convergence rate. Finally, I will discuss why overparameterized deep neural networks can generalize well.
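For context, this convergence guarantee typically takes the following form in the over-parameterization literature (a sketch of the standard statement via the neural tangent kernel; the exact result presented in the talk may differ): for a sufficiently wide network trained on the squared loss with step size \(\eta\), gradient descent from random initialization satisfies

\[
L\big(W(k)\big) \;\le\; \Big(1 - \frac{\eta \lambda_0}{2}\Big)^{k} \, L\big(W(0)\big),
\]

where \(\lambda_0 > 0\) is the least eigenvalue of the Gram (NTK) matrix at initialization. The geometric factor \((1 - \eta\lambda_0/2)^k\) is precisely the "linear convergence rate" referred to in the abstract.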

Bio: Liwei Wang is a professor at the School of Intelligence Science and Technology, Peking University. He has worked on machine learning for many years and has obtained a series of results in machine learning theory. He has published more than 200 papers in leading international machine learning journals and conferences, and serves on the editorial board of TPAMI, a premier journal in artificial intelligence. He received the ICLR 2023 Outstanding Paper Award, and was named to AI's 10 to Watch, the first scholar from Asia to receive this honor since the award was established.

Talk 2: Understanding Self-Supervised Contrastive Learning
Abstract: Self-supervised learning has recently attracted great attention since it requires only unlabeled data for model training. Contrastive learning is a popular method for self-supervised learning and has achieved promising empirical performance, but the theoretical understanding of its generalization ability is still limited. In this talk, I will analyze self-supervised contrastive learning from a theoretical perspective and show that its generalization ability is related to three key factors: alignment of positive samples, divergence of class centers, and concentration of augmented data. Moreover, I will show that self-supervised contrastive learning fails to learn domain-invariant features, which limits its transferability. To address this issue, I will introduce Augmentation-robust Contrastive Learning (ArCL) and show how it significantly improves the transferability of self-supervised contrastive learning.
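As a concrete reference point for the factors above, the "alignment of positive samples" and the contrast against other samples can both be read off the standard InfoNCE-style loss used in contrastive learning. Below is a minimal NumPy sketch of that loss (a generic illustration, not ArCL itself; the function name and toy data are ours):

    import numpy as np

    def info_nce_loss(z1, z2, temperature=0.5):
        # z1, z2: (n, d) L2-normalized embeddings of two augmented views of
        # the same n inputs; row i of z1 and row i of z2 form a positive pair.
        n = z1.shape[0]
        z = np.concatenate([z1, z2], axis=0)           # stack both views: (2n, d)
        sim = z @ z.T / temperature                    # scaled cosine similarities
        np.fill_diagonal(sim, -np.inf)                 # never contrast a view with itself
        pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
        row_max = sim.max(axis=1, keepdims=True)       # stabilize the log-sum-exp
        lse = row_max[:, 0] + np.log(np.exp(sim - row_max).sum(axis=1))
        # the positive term rewards alignment; the log-sum-exp spreads all other pairs apart
        return -(sim[np.arange(2 * n), pos] - lse).mean()

    # toy usage: the second view is a mild perturbation standing in for data augmentation
    rng = np.random.default_rng(0)
    z1 = rng.normal(size=(8, 16))
    z2 = z1 + 0.1 * rng.normal(size=(8, 16))
    z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
    z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
    print(info_nce_loss(z1, z2))

The positive term pulls the two augmented views of each input together (alignment), while the log-sum-exp denominator pushes embeddings of different inputs apart (divergence); ArCL, as described in the abstract, additionally asks the learned representation to be robust across augmentations.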

Bio: Weiran Huang is an associate professor and doctoral supervisor at the Qingyuan Research Institute, Shanghai Jiao Tong University. His research interests include machine learning theory, self-supervised learning, and few-shot learning. He received his bachelor's degree from the Department of Electronic Engineering, Tsinghua University, and his Ph.D. from the Institute for Interdisciplinary Information Sciences, Tsinghua University. His work has been published at top international AI conferences such as ICML, NeurIPS, ICLR, AISTATS, CVPR, and ICCV, and he serves as a reviewer for ICML, NeurIPS, ICLR, and other conferences.


The talks will also be livestreamed on Bilibili: https://live.bilibili.com/23005856
or search for 【上海交大清源研究院】 on Bilibili.

The Qingyuan Research Institute of Shanghai Jiao Tong University was founded on December 20, 2019. It is committed to building a world-class team for artificial intelligence research and teaching, focusing on fundamental theory and technological innovation in AI, with the aim of producing innovative results of internationally leading caliber, promoting the integration of the university with industry, and contributing to both the theoretical study and the industrial development of artificial intelligence. This lecture series runs on an ongoing basis; follow our official WeChat account for future announcements.

Contact Us

Address: Room 301, Building 3, SEIEE Buildings, 800 Dongchuan Road, Minhang District, Shanghai
Postcode: 200240
Tel: 021-34204113
Email: qingyuan@sjtu.edu.cn
