dc.contributor.author |
Kumawat, Sudhakar |
|
dc.contributor.author |
Verma, Manisha |
|
dc.contributor.author |
Nakashima, Yuta |
|
dc.contributor.author |
Raman, Shanmuganathan |
|
dc.date.accessioned |
2020-07-31T11:31:05Z |
|
dc.date.available |
2020-07-31T11:31:05Z |
|
dc.date.issued |
2020-07 |
|
dc.identifier.citation |
Kumawat, Sudhakar; Verma, Manisha; Nakashima, Yuta and Raman, Shanmuganathan, "Depthwise spatio-temporal STFT convolutional neural networks for human action recognition", arXiv, Cornell University Library, arXiv:2007.11365, Jul. 2020. |
en_US |
dc.identifier.uri |
http://arxiv.org/abs/2007.11365 |
|
dc.identifier.uri |
https://repository.iitgn.ac.in/handle/123456789/5588 |
|
dc.description.abstract |
Conventional 3D convolutional neural networks (CNNs) are computationally expensive, memory intensive, prone to overfitting, and, most importantly, in need of improved feature learning capabilities. To address these issues, we propose spatio-temporal short-term Fourier transform (STFT) blocks, a new class of convolutional blocks that can serve as an alternative to the 3D convolutional layer and its variants in 3D CNNs. An STFT block consists of non-trainable convolution layers that capture spatially and/or temporally local Fourier information using an STFT kernel at multiple low-frequency points, followed by a set of trainable linear weights that learn channel correlations. STFT blocks significantly reduce the space-time complexity of 3D CNNs: in general, they use 3.5 to 4.5 times fewer parameters and incur 1.5 to 1.8 times lower computational cost than the state-of-the-art methods. Furthermore, their feature learning capabilities are significantly better than those of the conventional 3D convolutional layer and its variants. Our extensive evaluation on seven action recognition datasets (Something-Something v1 and v2, Jester, Diving-48, Kinetics-400, UCF101, and HMDB51) demonstrates that STFT-block-based 3D CNNs achieve performance on par with or even better than the state-of-the-art methods. |
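The abstract describes an STFT block as a non-trainable convolution with fixed Fourier (STFT) kernels at a few low-frequency points, followed by trainable linear (pointwise) weights over the channels. The sketch below illustrates that two-stage structure in the 2D spatial case for brevity (the paper operates on spatio-temporal 3D volumes); all function names, kernel choices, and shapes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stft_kernels(k=3, freqs=((0, 1), (1, 0), (1, 1))):
    """Fixed, non-trainable k x k Fourier kernels at a few low-frequency
    points; real and imaginary parts are kept as separate kernels."""
    n = np.arange(k) - k // 2
    kernels = []
    for u, v in freqs:
        phase = np.exp(-2j * np.pi * (u * n[:, None] + v * n[None, :]) / k)
        kernels.append(phase.real)
        kernels.append(phase.imag)
    return np.stack(kernels)          # shape: (2 * len(freqs), k, k)

def stft_block(x, point_w):
    """One illustrative STFT block.
    x:       input feature map, shape (C, H, W)
    point_w: trainable pointwise weights, shape (C_out, C * n_kernels)
    """
    kernels = stft_kernels()
    n_k, k, _ = kernels.shape
    C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    # Stage 1: depthwise correlation of every channel with every fixed
    # STFT kernel (no trainable parameters here).
    feats = np.empty((C * n_k, H, W))
    for c in range(C):
        for j in range(n_k):
            for i in range(H):
                for l in range(W):
                    feats[c * n_k + j, i, l] = np.sum(
                        xp[c, i:i + k, l:l + k] * kernels[j])
    # Stage 2: trainable 1x1 convolution, i.e. a linear mix across the
    # STFT feature channels to learn channel correlations.
    return np.einsum('oc,chw->ohw', point_w, feats)

x = np.random.rand(2, 8, 8)           # 2 input channels
w = np.random.rand(4, 2 * 6)          # 4 output channels, 6 kernels/channel
y = stft_block(x, w)
print(y.shape)                        # (4, 8, 8)
```

Because the Fourier kernels are fixed, only the pointwise matrix `point_w` is learned, which is where the parameter and FLOP savings over a full 3D convolution come from.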
|
dc.description.statementofresponsibility |
by Sudhakar Kumawat, Manisha Verma, Yuta Nakashima and Shanmuganathan Raman |
|
dc.language.iso |
en_US |
en_US |
dc.publisher |
Cornell University Library |
en_US |
dc.title |
Depthwise spatio-temporal STFT convolutional neural networks for human action recognition |
en_US |
dc.type |
Pre-Print |
en_US |
dc.relation.journal |
arXiv |
|