Asymmetric Person Re-Identification

Speaker: Prof. Wei-Shi Zheng (Professor, Sun Yat-sen University)

Affiliation: Sun Yat-sen University

Time: 10:20–11:10 AM, Monday, March 20, 2017

Venue: Room 216, Engineering Building No. 1, Guangdong University of Technology

Speaker Biography

Dr. Wei-Shi Zheng is a Professor in the School of Data and Computer Science at Sun Yat-sen University and Deputy Director of the Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education. His research centers on video and image processing for large-scale intelligent video surveillance, together with algorithms and theory for large-scale machine learning. His current application areas are person re-identification and behavior understanding in video surveillance. Addressing the problem of tracking people across large-scale surveillance camera networks, he was among the earliest researchers, both in China and internationally, to pursue sustained work on cross-view person re-identification, publishing a line of research centered on cross-view metric learning; the relative-comparison modeling approach he proposed has been widely and deeply studied in person re-identification. He has published more than 90 major academic papers, over 60 of which appeared in leading international journals on image recognition and pattern classification such as IEEE TPAMI, IEEE TIP, IEEE TNN, PR, IEEE TCSVT, and IEEE TSMC-B, and in CCF-recommended Class A international conferences such as ICCV, CVPR, and IJCAI. Over the past five years, together with colleagues at home and abroad, he has given tutorials at the CCF Class A international conferences ICCV and CVPR as well as at other well-known international conferences. He has served as Area Chair and Publication Chair of IEEE AVSS, and as Co-Program Chair of the Chinese Conference on Biometric Recognition in 2012, 2015, and 2016. His research has been supported by the National Science Fund for Excellent Young Scholars, the Royal Society Newton Advanced Fellowship, and the Guangdong Natural Science Funds for Distinguished Young Scholars, among others, and he was selected for the Microsoft Research Asia young faculty StarTrack Program. He has received a Second Prize of the Guangdong Science and Technology Progress Award and a First Prize of the Guangzhou Science and Technology Progress Award.

Dr. Wei-Shi Zheng joined Sun Yat-sen University in Jan. 2011 under the One Hundred Talents Program and has been a full professor since Jan. 2016. He has been Deputy Director of the Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, since July 2016. His early research was on face recognition based on subspace methods, while he was working toward his Ph.D. degree in Applied Mathematics with Prof. Jianhuang Lai and Prof. Pong C. Yuen at Sun Yat-sen University and Hong Kong Baptist University. He was a visiting student working with Prof. Stan Z. Li at the Institute of Automation, Chinese Academy of Sciences, where he started research on sparsity learning for pattern recognition with sparse one-sided non-negative matrix factorization. After 2008, he worked as a postdoctoral researcher on the European SAMURAI project for person association with Prof. Shaogang Gong and Dr. Tao Xiang at Queen Mary University of London, where he proposed using relative comparison to overcome the ill-posed problem of matching the appearance of people across non-overlapping camera views; the idea was also extended to the association of groups of people. He also created an i-LIDS dataset for evaluating this matching problem. Dr. Zheng now focuses on three research areas: 1) person re-identification under challenging scenarios, 2) action prediction and group activity recognition, and 3) related large-scale machine learning research. He has published more than 90 papers in prestigious journals and top conferences, including IEEE TPAMI, IEEE TIP, IEEE TNN, PR, IEEE TCSVT, IEEE TSMC-B, ICCV, CVPR, and IJCAI.

Abstract

Person re-identification is fundamentally challenging because of the large visual appearance changes caused by variations in view angle, pose, lighting, background clutter, and occlusion. Person re-identification makes it possible to match people across non-overlapping camera views at different locations and different times in a large distributed space over a prolonged period. In this talk, we will introduce our recent work on asymmetric modeling for person re-identification, which learns view-specific feature transformations, an under-studied approach. In particular, we formulate a novel view-specific person re-identification framework from the feature augmentation point of view, called Camera coRrelation Aware Feature augmenTation (CRAFT). Specifically, CRAFT performs cross-view adaptation by automatically measuring camera correlation from the cross-view visual data distribution and adaptively conducting feature augmentation to transform the original features into a new adaptive space. Through this augmentation framework, view-generic learning algorithms can be readily generalized to learn and optimize view-specific sub-models while simultaneously modeling view-generic discrimination information. We conducted extensive comparative experiments on contemporary challenging person re-identification datasets to validate the superiority and advantages of the proposed framework over state-of-the-art competitors.
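For readers who want a concrete picture of the adaptive feature augmentation described above, the following is a minimal sketch in Python/NumPy. It assumes a Daume-style augmented representation in which a shared feature block is weighted by a camera-correlation coefficient estimated from the principal angles between the two views' feature subspaces; the function names (camera_correlation, augment) and the exact weighting scheme are illustrative assumptions, not the precise CRAFT formulation from the talk.

    import numpy as np

    def camera_correlation(X_a, X_b, n_components=32):
        # X_a, X_b: (n_samples, d) feature matrices from camera views a and b.
        # Orthonormal bases for each view's dominant feature subspace via SVD
        # (n_components must not exceed min(d, n_samples)).
        U_a, _, _ = np.linalg.svd(X_a.T, full_matrices=False)
        U_b, _, _ = np.linalg.svd(X_b.T, full_matrices=False)
        U_a = U_a[:, :n_components]
        U_b = U_b[:, :n_components]
        # Singular values of U_a^T U_b are the cosines of the principal
        # angles between the subspaces; their mean is a correlation in [0, 1].
        cosines = np.linalg.svd(U_a.T @ U_b, compute_uv=False)
        return float(cosines.mean())

    def augment(x, view, omega):
        # Correlation-weighted feature augmentation (two-view case):
        #   view 0 -> [omega * x, x, 0],  view 1 -> [omega * x, 0, x]
        # The shared block is scaled by the camera correlation omega, so
        # highly correlated views share more of the representation.
        zero = np.zeros_like(x)
        if view == 0:
            return np.concatenate([omega * x, x, zero])
        return np.concatenate([omega * x, zero, x])

    # Example: estimate omega from unlabeled features of both cameras, then
    # augment each sample before training any standard view-generic learner.
    rng = np.random.default_rng(0)
    X_cam0 = rng.normal(size=(200, 64))
    X_cam1 = rng.normal(size=(200, 64))
    omega = camera_correlation(X_cam0, X_cam1)
    x_aug = augment(X_cam0[0], view=0, omega=omega)  # shape: (192,)

The sketch mirrors the design choice stated in the abstract: because the adaptation happens at the feature level, the downstream learner remains view-generic, while the view-dependent blocks of the augmented vectors implicitly carry the view-specific sub-models.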
