Abstract
Traffic sign detection is one of the key components in autonomous driving. Advanced autonomous vehicles equipped with high-quality sensors capture high-definition images for further analysis. Detecting traffic signs, moving vehicles, and lanes is important for localization and decision making. Traffic signs, especially those that are far from the camera, are small, and thus pose a challenge for traditional object detection methods. In this work, to reduce computational cost and improve detection performance, we split the large input images into small blocks and then recognize traffic signs within the blocks using another detection module. Therefore, this paper proposes a three-stage traffic sign detector, which connects a Block Net with an RPN–RCNN detection network. Block Net, which is composed of a set of CNN layers, performs block-level foreground detection with an inference time of less than 1 ms. The RPN–RCNN two-stage detector then identifies traffic sign objects in each block; it is trained on a derived dataset named TT100K-Patch. Experiments show that our framework achieves both state-of-the-art accuracy and recall; its fastest detection speed is 102 fps.
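The coarse-to-fine pipeline described in the abstract can be illustrated with a short, hypothetical sketch: a lightweight CNN first scores each block of the input image as foreground (likely to contain a sign) or background, and the full detector is run only on the blocks that pass a threshold. The `BlockNet` architecture, the 128-pixel block size, the 0.5 threshold, and the `detector` callable below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a block-then-detect pipeline (assumptions noted above).
import torch
import torch.nn as nn

BLOCK = 128  # assumed block size in pixels


class BlockNet(nn.Module):
    """Tiny CNN that scores each block as foreground (sign present) or background."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, blocks):                    # blocks: (N, 3, BLOCK, BLOCK)
        x = self.features(blocks).flatten(1)      # (N, 32)
        return torch.sigmoid(self.classifier(x))  # (N, 1) foreground probability


def split_into_blocks(image):
    """Cut a (3, H, W) image into non-overlapping BLOCK x BLOCK patches."""
    _, H, W = image.shape
    blocks, coords = [], []
    for y in range(0, H - BLOCK + 1, BLOCK):
        for x in range(0, W - BLOCK + 1, BLOCK):
            blocks.append(image[:, y:y + BLOCK, x:x + BLOCK])
            coords.append((x, y))
    return torch.stack(blocks), coords


def detect(image, block_net, detector, threshold=0.5):
    """Run the (expensive) detector only on blocks Block Net marks as foreground."""
    blocks, coords = split_into_blocks(image)
    with torch.no_grad():
        scores = block_net(blocks).squeeze(1)
    results = []
    for blk, (x, y), score in zip(blocks, coords, scores):
        if score.item() >= threshold:
            # `detector` is a stand-in for the second-stage RPN-RCNN detector; it is
            # assumed to return (x1, y1, x2, y2, label) boxes in block coordinates,
            # which are shifted back into image coordinates here.
            for bx1, by1, bx2, by2, label in detector(blk):
                results.append((bx1 + x, by1 + y, bx2 + x, by2 + y, label))
    return results
```

The motivation for this structure, as stated in the abstract, is cost: most blocks of a large street-scene image contain no signs, so the cheap block-level filter lets the expensive second-stage detector process only a small fraction of the image.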
Article Source
Type: Journal article
Authors: Yizhi Song, Ruochen Fan, Sharon Huang, Zhe Zhu, Ruofeng Tong
Source: Computational Visual Media, 2019, Issue 4
Year: 2019
Category: Information Science & Technology; Engineering Science & Technology II
Subject: Automotive Industry
Affiliations: Department of Computer Science, Purdue University; Department of Computer Science and Technology, Tsinghua University; College of Information Sciences and Technology, Penn State University; Department of Radiology, Duke University; College of Computer Science and Technology, Zhejiang University
Funding: Supported by the National Natural Science Foundation of China (No. 61832016) and the Science and Technology Project of Zhejiang Province (No. 2018C01080)
Classification Number (CLC): U463.6
Pages: 403-416
Page Count: 14
File Size: 3798 KB
Downloads: 17