
Ziliang Ren, male, Ph.D., Lecturer. He received his Ph.D. in Circuits and Systems from the School of Electronic and Information Engineering, South China University of Technology, in June 2017. He has successively worked in technology development, scientific research, and teaching at the Samsung Guangzhou Research Institute, the Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences, and Dongguan University of Technology, and conducted postdoctoral research at SIAT from June 2018 to June 2020. His current research focuses on computer vision, deep learning, and human action recognition. He has published more than thirty papers in domestic and international journals, including IEEE Transactions on Circuits and Systems for Video Technology, Neurocomputing, Multimedia Tools and Applications, Electronics Letters, Frontiers in Neurorobotics, Mathematics, Acta Physica Sinica, Journal of University of Electronic Science and Technology of China, and Computer Engineering, and at international conferences such as ICIP, and has filed or been granted more than twenty patents. He is the principal investigator of one Guangdong-Dongguan Joint Fund regional cultivation project, one Guangdong Natural Science Foundation general project, and one Dongguan science and technology commissioner project, and has participated as a core member or technical backbone in several projects funded by the Ministry of Science and Technology key R&D program, the National Natural Science Foundation of China, provincial and municipal science foundations, and industry partners.
Research interests: computer vision, deep learning, human action recognition, object detection
Office: Room 8A410
Contact: Ext. 6180; E-mail: renzl@dgut.edu.cn
Education:
(1) 2014/09-2017/06, South China University of Technology, Circuits and Systems, Ph.D.
(2) 2009/09-2012/06, South China University of Technology, Communication and Information Systems, M.S.
(3) 2005/09-2009/06, Xuchang University, Electronic Information Engineering, B.S.
Work Experience:
(1) 2021/12-present, Dongguan University of Technology, Lecturer
(2) 2020/07-2021/11, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Engineer
(3) 2018/06-2020/06, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Postdoctoral Researcher
(4) 2012/07-2013/06, Samsung Guangzhou Research Institute, Engineer
Teaching:
(1) Spring 2022, Linux Programming, Computer Science classes 5 and 6, 2020 cohort;
(2) Fall 2022, Fundamentals of Programming, Computer Science class 6, 2022 cohort;
(3) Fall 2022, Computer Vision and Artificial Intelligence, university-wide elective.
Publications:
[1] Ziliang Ren, Huaqiang Yuan, Wenhong Wei, Tiezhu Zhao, Qieshi Zhang*. Convolutional non-local spatial-temporal learning for multi-modality action recognition, Electronics Letters, 58(20): 765-767, 2022.
[2] Huaigang Yang, Ziliang Ren*, Huaqiang Yuan, Wenhong Wei, Qieshi Zhang and Zhaolong Zhang. Multi-scale and attention enhanced graph convolution network for skeleton-based violence action recognition, Frontiers in Neurorobotics, 2022.
[3] Xiongjiang Xiao, Ziliang Ren*, Wenhong Wei, Huan Li, Hua Tan. Shift Swin Transformer Multimodal Networks for Action Recognition in Videos, ICSMD, 2022.
[4] Qingxia Li, Dali Gao, Qieshi Zhang, Wenhong Wei and Ziliang Ren*. Interactive Learning of a Dual Convolution Neural Network for Multi-Modal Action Recognition, Mathematics, 2022.
[5] Jiaojie Yan, Qieshi Zhang*, Jun Cheng, Ziliang Ren, Tian Li, Zhuo Yang. Indoor Target-Driven Visual Navigation Based On Spatial Semantic Information, ICIP, 571-575, 2022.
[6] Qin Cheng, Ziliang Ren, Zhen Liu, Jun Cheng, Qieshi Zhang and Jianming Liu. MIAM: Motion information aggregation module for action recognition, Electronics Letters, 58(10): 396-398, 2022.
[7] Zhen Liu, Jun Cheng*, Libo Liu, Ziliang Ren, Qieshi Zhang, Chengqun Song. Dual-stream cross-modality fusion transformer for RGB-D action recognition, Knowledge-Based Systems, 255: 109741, 2022.
[8] Yicheng Liu, Fuxiang Wu, Qieshi Zhang, Ziliang Ren, and Jun Cheng. EEP-Net: Enhancing Local Neighborhood Features and Efficient Semantic Segmentation of Scale Point Clouds, PRCV, 112-123, 2022.
[9] Qin Cheng, Zhen Liu, Ziliang Ren, Jun Cheng, Jianming Liu. Spatial-temporal Information Aggregation and Cross-Modality Interactive Learning for RGB-D-based Human Action Recognition, IEEE Access, 10: 104190-104201, 2022.
[10] Jun Cheng, Ziliang Ren*, Qieshi Zhang, Xiangyang Gao, and Fusheng Hao. Cross-modality compensation convolutional neural networks for RGB-D action recognition, IEEE Transactions on Circuits and Systems for Video Technology, 32(3): 1498-1509, 2022.
[11] Ziliang Ren, Qieshi Zhang, Jun Cheng*, Fusheng Hao, Xiangyang Gao. Segment spatial-temporal representation and cooperative learning of Convolution Neural Networks for multimodal-based action recognition, Neurocomputing, 433: 142-153, 2021.
[12] Ziliang Ren, Qieshi Zhang, Xiangyang Gao, Pengyi Hao, Jun Cheng*. Multi-modality Learning for Human Action Recognition, Multimedia Tools and Applications, 80: 16185-16203, 2021.
[13] Ziliang Ren, Qieshi Zhang, Piye Qiao, Maolong Niu, Xiangyang Gao, and Jun Cheng*. Joint learning of convolution neural networks for RGB-D-based human action recognition, Electronics Letters, 56(21): 1112-1115, 2020.
[14] Qin Cheng, Ziliang Ren, Jun Cheng, Qieshi Zhang, Hao Yan and Jianming Liu. Skeleton-based Action Recognition with Multi-scale Spatial-temporal Convolutional Neural Network, IEEE International Conference on Real-time Computing and Robotics, 957-962, 2021.
[15] Hao Yan, Jun Cheng, Qieshi Zhang, Ziliang Ren, Shijie Sun, Qin Cheng. Two Stream Dynamic Threshold Network for Weakly-Supervised Temporal Action Localization, IEEE International Conference on Real-time Computing and Robotics, 963-967, 2021.
[16] Qin Cheng, Ziliang Ren, Jianming Liu, Jun Cheng*. Multiple Time Scale Motion Images for Action Recognition, IEEE International Conference on E-health Networking, Application & Services, 1-5, 2021.
[17] Shijie Sun, Qingsong Zhao, Ziliang Ren, Lei Wang, Jun Cheng*, Phase-Sensitive Model for Temporal Action Proposal Generation, IEEE International Conference on e-Health Networking, Applications and Services, 1-5, 2021.
[18] Guangxi Chen, Ling Hu, Qieshi Zhang*, Ziliang Ren, Xiangyang Gao and Jun Cheng. ST-LSTM: Spatio-Temporal Graph Based Long Short-Term Memory Network For Vehicle Trajectory Prediction, IEEE International Conference on Image Processing, 608-612, 2020.
[19] Yuan Liu, Rong Xiang, Qieshi Zhang*, Ziliang Ren and Jun Cheng. Loop Closure Detection Based on Improved Hybrid Deep Learning Architecture, IEEE International Conferences on Ubiquitous Computing & Communications and Data Science and Computational Intelligence and Smart Computing, Networking and Services, 312-317, 2019.
Patents (granted and pending):
[1] Human action recognition and intention understanding method, terminal device, and storage medium, invention patent application, CN202210675830.9.
[2] Action detection method, apparatus, terminal device, and storage medium, granted invention patent, ZL202110889116.5.
[3] Behavior recognition method, apparatus, and terminal device, granted invention patent, ZL201910718037.0.
[4] Action recognition method based on feature interactive learning, and terminal device, invention patent application, CN202011078182.6.
[5] Wearable data acquisition system for multimodal human-machine interaction in driving environments, invention patent application, CN201911284080.7.
[6] Visual place recognition method and apparatus, computer device, and readable storage medium, invention patent application, CN202011436657.4.
[7] Human motion capture system and method, invention patent application, CN202010268190.0.
[8] Image depth estimation method, terminal device, and computer-readable storage medium, invention patent application, CN202010863390.0.
[9] Action interval localization method based on video features, and computer device, invention patent application, CN202011331039.3.
[10] Multi-target tracking re-localization method based on trajectory similarity metric learning, invention patent application, CN202011435920.8.
[11] Model training method and apparatus, electronic device, and machine-readable storage medium, invention patent application, CN202011449091.9.
Research Projects:
[1] Guangdong-Dongguan Joint Fund, regional cultivation project, Research on behavior intention understanding based on human-robot collaboration relationship representation, 2023/01-2025/12, 300,000 CNY, Principal Investigator;
[2] Guangdong Natural Science Foundation, general project, Research on action sequence recognition and understanding based on multimodal feature relation embedding representation, 2023/01-2025/12, 100,000 CNY, Principal Investigator;
[3] Dongguan science and technology commissioner project, Research on object detection, abnormal behavior analysis, and early-warning systems for complex smart-park scenes, 2022/09-2023/08, 100,000 CNY, Principal Investigator;
[4] National Natural Science Foundation of China joint project, Key technologies for intelligent security robots in large-scale complex dynamic scenes, 2020/01-2023/12, 2.53 million CNY, ongoing, participant;
[5] Ministry of Science and Technology key R&D program, Application demonstration of domestically developed robot systems for hardware-industry manufacturing, 2019/06-2022/05, 12.83 million CNY, ongoing, core member;
[6] Shenzhen science and technology program, Research on key vision-based trajectory prediction algorithms for autonomous vehicles, 2019/03-2022/12, 2 million CNY, ongoing, core member;
[7] China General Nuclear Power Engineering Co., Ltd., Research on an unsafe-behavior analysis and early-warning system based on video human motion capture, 2020/05-2021/08, 997,000 CNY, completed, technical lead.
Awards:
[1] 2022 Shenzhen Science and Technology Award (Technological Invention Award), Second Prize, for "Human action recognition and interaction technology and applications", ranked 5/6.