Abstract
Information Fusion Based Learning for Frugal Traffic State Sensing
Vikas Joshi, Nithya Rajamani, Takayuki Katsuki, Naveen Prathapaneni, L. V. Subramaniam
Traffic sensing is a key baseline input for sustainable cities to plan and administer demand-supply management through better road networks, public transportation, urban policies, etc. Humans sense the environment frugally, using a combination of complementary information signals from different sensors; for example, by viewing and/or hearing traffic, one can identify the state of traffic on a road. In this paper, we demonstrate a fusion-based learning approach that classifies traffic states through low-cost audio and image analysis on a real-world dataset. Roadside traffic acoustic signals and traffic image snapshots obtained from a fixed camera are used to classify the traffic condition into three broad classes, viz. Jam, Medium, and Free. Classification is performed on {10 sec audio, image snapshot within that 10 sec} data tuples. We extract traffic-relevant features from the audio and image data to form a composite feature vector. In particular, we extract audio features comprising MFCC (Mel-Frequency Cepstral Coefficient) classifier-based features, honk events, and energy peaks. A simple heuristic-based image classifier is used, where vehicular density and the number of corner points within the road segment are estimated and used as features for traffic sensing. Finally, the composite feature vector is tested for its ability to discriminate the traffic classes using decision tree, SVM, discriminant, and logistic regression classifiers. Information fusion at multiple levels (audio, image, overall) shows consistently better performance than decision making at any individual level. Low-cost sensor fusion based on complementary weak classifiers and noisy features still produces high-quality results, with an overall accuracy of 93-96%.
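The sketch below illustrates the kind of pipeline the abstract describes: per-tuple audio features (MFCCs plus a crude energy-peak count) concatenated with a simple image feature (corner-point count on the snapshot) into a composite vector, which is then fed to a standard classifier. This is not the authors' implementation; the libraries (librosa, OpenCV, scikit-learn), file paths, thresholds, and the peak/corner heuristics are illustrative assumptions only.

```python
# Minimal sketch (not the paper's code): fuse simple audio and image features
# for 3-class traffic state classification (Jam / Medium / Free).
import numpy as np
import librosa                              # audio loading + MFCCs (assumed tool)
import cv2                                  # corner detection (assumed tool)
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def audio_features(wav_path, clip_sec=10):
    """Mean/std MFCCs plus a rough energy-peak count for one ~10 s clip."""
    y, sr = librosa.load(wav_path, sr=None, duration=clip_sec)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    rms = librosa.feature.rms(y=y)[0]
    # Hypothetical proxy for honk/energy-peak events: frames well above median RMS.
    peaks = int(np.sum(rms > 2.0 * np.median(rms)))
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), [peaks]])

def image_features(img_path):
    """Corner-point count on the snapshot as a rough vehicular-density cue."""
    gray = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=1000,
                                      qualityLevel=0.01, minDistance=5)
    n_corners = 0 if corners is None else len(corners)
    return np.array([n_corners], dtype=float)

def composite_vector(wav_path, img_path):
    """Concatenate audio and image features into one fused feature vector."""
    return np.concatenate([audio_features(wav_path), image_features(img_path)])

# Usage (placeholder paths/labels): build X from {10 sec audio, image snapshot}
# tuples and evaluate one of the candidate classifiers, e.g. an SVM.
# tuples = [("clip_001.wav", "snap_001.jpg"), ...]
# y = np.array([...])            # 0 = Free, 1 = Medium, 2 = Jam
# X = np.stack([composite_vector(a, i) for a, i in tuples])
# print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```

Any of the classifiers mentioned in the abstract (decision tree, discriminant, logistic regression) could be swapped in place of the SVM in the last line.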