Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors


Limin Wang, Yu Qiao, and Xiaoou Tang

Abstract

Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features and deep-learned features. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. Our features have two advantages: (i) TDDs are automatically learned and are more discriminative than hand-crafted features; (ii) TDDs take into account the intrinsic characteristics of the temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features and deep-learned features. Our method also achieves superior performance to the state of the art on these datasets.

Method

As shown in the figure above, the whole process consists of three steps: (1) extracting trajectories, (2) learning convolutional feature maps, and (3) constructing trajectory-pooled deep-convolutional descriptors (TDDs).

  • Extracting trajectories: We choose improved trajectories due to their good performance on action recognition. However, we track on a single scale only, to speed up trajectory extraction (a simplified tracking sketch is given after this list).

  • Learning convolutional feature maps: We exploit two-stream ConvNets to learn discriminative feature maps from both RGB images and optical flow fields. Meanwhile, we construct a multi-scale pyramid representation of both modalities and thus obtain multi-scale convolutional feature maps.

  • Constructing TDD: After extracting trajectories and convolutional feature maps, we perform trajectory-constrained pooling to aggregate the feature maps into effective descriptors, called TDDs. To enhance the discriminative capacity of TDD, we propose two normalization methods: (i) spatiotemporal normalization and (ii) channel normalization (see the pooling sketch after this list).

  • For video representation, we resort to Fisher vector encoding to transform the TDDs of a video clip into a high-dimensional representation. Before Fisher vector encoding, we apply PCA to de-correlate the dimensions of TDD and reduce them to 64 dimensions (an encoding sketch follows below).
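
The single-scale tracking in the first step can be illustrated with a short sketch. The Python snippet below is a minimal approximation, not the authors' released code: it densely samples points on a grid and follows them through Farneback optical flow for 15 frames. The grid stride, the choice of OpenCV's Farneback flow, and the omission of camera-motion compensation, flow median filtering, and trajectory pruning are simplifying assumptions.

    # Minimal single-scale point tracking in the spirit of improved trajectories.
    # Assumptions: dense grid sampling with a fixed stride, Farneback optical flow,
    # no camera-motion compensation, and no trajectory pruning.
    import cv2
    import numpy as np

    STRIDE = 5        # sampling stride in pixels (illustrative value)
    TRAJ_LEN = 15     # trajectory length in frames, as in improved trajectories

    def track_trajectories(frames):
        """frames: list of >= TRAJ_LEN + 1 grayscale uint8 images.
        Returns an array of shape (num_points, TRAJ_LEN + 1, 2) with (x, y) coords."""
        h, w = frames[0].shape
        ys, xs = np.mgrid[STRIDE // 2:h:STRIDE, STRIDE // 2:w:STRIDE]
        points = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
        tracks = [points.copy()]
        for t in range(TRAJ_LEN):
            flow = cv2.calcOpticalFlowFarneback(frames[t], frames[t + 1], None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            # move each point by the flow vector at its (rounded) location
            xi = np.clip(np.rint(points[:, 0]).astype(int), 0, w - 1)
            yi = np.clip(np.rint(points[:, 1]).astype(int), 0, h - 1)
            points = points + flow[yi, xi]
            points[:, 0] = np.clip(points[:, 0], 0, w - 1)
            points[:, 1] = np.clip(points[:, 1], 0, h - 1)
            tracks.append(points.copy())
        return np.stack(tracks, axis=1)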
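
The two normalizations and the trajectory-constrained pooling of the third step can be sketched in NumPy as follows. The (T, H, W, N) feature-map layout, the helper names, and the simplification that every trajectory starts at frame 0 are assumptions made for illustration; a TDD is obtained by sum-pooling normalized map values at the trajectory points, with coordinates scaled by the map-to-frame size ratio.

    # Sketch of spatiotemporal / channel normalization and trajectory-constrained pooling.
    # maps:   video-level feature maps, shape (T, H, W, N) = (frames, height, width, channels)
    # tracks: trajectory coordinates in frame resolution, shape (K, P, 2) as (x, y)
    import numpy as np

    def spatiotemporal_normalize(maps, eps=1e-8):
        # divide each channel by its maximum over all positions and frames
        return maps / (maps.max(axis=(0, 1, 2), keepdims=True) + eps)

    def channel_normalize(maps, eps=1e-8):
        # divide each spatio-temporal position by its maximum over channels
        return maps / (maps.max(axis=3, keepdims=True) + eps)

    def trajectory_pool(norm_maps, tracks, frame_hw):
        """Sum-pool normalized feature-map values along each trajectory;
        assumes every trajectory starts at frame 0 for simplicity."""
        T, H, W, N = norm_maps.shape
        ry, rx = H / frame_hw[0], W / frame_hw[1]   # map-to-frame size ratios
        K, P, _ = tracks.shape
        descriptors = np.zeros((K, N), dtype=np.float32)
        for k in range(K):
            for p in range(P):
                x, y = tracks[k, p]
                t = min(p, T - 1)                   # frame index of the p-th point
                xi = int(np.clip(np.rint(x * rx), 0, W - 1))
                yi = int(np.clip(np.rint(y * ry), 0, H - 1))
                descriptors[k] += norm_maps[t, yi, xi]
        return descriptors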
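
Finally, the video-level encoding can be sketched as PCA reduction to 64 dimensions followed by Fisher vector encoding on a diagonal-covariance GMM. The component count (256), the PCA whitening, and the power/L2 normalization below follow common Fisher-vector practice and are assumptions rather than the exact training setup of the paper.

    # Sketch of the encoding step: PCA to 64 dims, then a Fisher vector built
    # from a diagonal-covariance GMM (component count and normalizations are
    # common-practice assumptions, not the paper's exact settings).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def fisher_vector(x, gmm):
        """x: (n, d) descriptors; gmm: fitted GaussianMixture with 'diag' covariances."""
        n, d = x.shape
        gamma = gmm.predict_proba(x)                      # (n, K) posteriors
        w, mu = gmm.weights_, gmm.means_                  # (K,), (K, d)
        sigma = np.sqrt(gmm.covariances_)                 # (K, d) std deviations
        parts = []
        for k in range(gmm.n_components):
            diff = (x - mu[k]) / sigma[k]
            u_k = (gamma[:, k, None] * diff).sum(0) / (n * np.sqrt(w[k]))
            v_k = (gamma[:, k, None] * (diff ** 2 - 1)).sum(0) / (n * np.sqrt(2 * w[k]))
            parts.extend([u_k, v_k])
        fv = np.concatenate(parts)
        fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalization
        return fv / (np.linalg.norm(fv) + 1e-8)           # L2 normalization

    # usage sketch: fit PCA and GMM on TDDs sampled from training videos,
    # then encode each video's TDDs into one high-dimensional representation
    # pca = PCA(n_components=64, whiten=True).fit(train_tdds)
    # gmm = GaussianMixture(n_components=256, covariance_type='diag').fit(pca.transform(train_tdds))
    # video_repr = fisher_vector(pca.transform(video_tdds), gmm)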

Results

  • Examples of Feature Maps

  • PCA Dimension and Normalization Methods

  • Performance of Different Layers

  • Evaluation of TDD

  • Comparison with the State of the Art

Downloads

References

If you use our trained model or TDD code, please cite the following paper:

L. Wang, Y. Qiao, and X. Tang, "Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

Last Updated on 16th June, 2015