Stereoscopic Video Deblurring Transformer

Article


Imani, H., Islam, M. B., Junayed, M. S. and Ahad, M. A. R. 2024. Stereoscopic Video Deblurring Transformer. Scientific Reports. 14 (Art. 14342). https://doi.org/10.1038/s41598-024-63860-9
Authors: Imani, H., Islam, M. B., Junayed, M. S. and Ahad, M. A. R.
Abstract

Stereoscopic cameras, such as those in mobile phones and various recent intelligent systems, are becoming increasingly common. Multiple factors can degrade stereo video quality, e.g., blur distortion due to camera or object movement. Monocular image/video deblurring is a mature research field, whereas research on deblurring stereoscopic content remains limited. This paper introduces a new Transformer-based stereo video deblurring framework with two crucial new components: a self-attention layer and a feed-forward layer that capture and align the correlations among video frames. The traditional fully connected (FC) self-attention layer fails to exploit data locality effectively, as it relies on linear layers to compute attention maps. The Vision Transformer shares this limitation, as it takes image patches as inputs to model global spatial information. In our framework, 3D convolutional neural networks (3D CNNs) process successive frames to correct motion blur in the stereo video. In addition, our method exploits information from the other stereo viewpoint to assist deblurring: the parallax attention module (PAM) is substantially improved to combine stereo and cross-view information for stronger deblurring. Experiments on two publicly available stereo video datasets, together with an extensive ablation study, validate that our method deblurs stereo videos effectively and outperforms existing image and video deblurring techniques by a large margin.
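
A minimal sketch of a parallax-attention-style cross-view block is given below to make the idea concrete. This is not the authors' implementation: the class name PAMSketch, the tensor names feat_left/feat_right, and the channel and feature-map sizes are illustrative assumptions. It only shows how attention computed along the horizontal (epipolar) direction can transfer features from one stereo view to the other.

```python
# Illustrative sketch (PyTorch), not the paper's code: a parallax-attention-style
# block that attends along the horizontal (epipolar) axis of the other view.
import torch
import torch.nn as nn


class PAMSketch(nn.Module):
    """Cross-view attention over horizontal positions, one scanline at a time."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce query/key/value features (assumed design choice).
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_left: torch.Tensor, feat_right: torch.Tensor) -> torch.Tensor:
        # feat_left, feat_right: (B, C, H, W) feature maps of the two views.
        q = self.query(feat_left).permute(0, 2, 3, 1)   # (B, H, W, C)
        k = self.key(feat_right).permute(0, 2, 1, 3)    # (B, H, C, W)
        attn = torch.softmax(q @ k, dim=-1)             # (B, H, W, W): per-scanline attention
        v = self.value(feat_right).permute(0, 2, 3, 1)  # (B, H, W, C)
        warped = (attn @ v).permute(0, 3, 1, 2)         # (B, C, H, W): right features aligned to left
        # Fuse cross-view information with the reference-view features.
        return feat_left + warped


# Usage with random stereo features (all sizes are assumptions):
# pam = PAMSketch(channels=64)
# fused = pam(torch.randn(1, 64, 128, 256), torch.randn(1, 64, 128, 256))
```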

Keywords: Stereoscopic; Video Deblurring; Transformer; AI
Journal: Scientific Reports
Journal citation: 14 (Art. 14342)
ISSN: 2045-2322
Year: 2024
Publisher: Nature Research
Publisher's version: s41598-024-63860-9.pdf
License: CC BY 4.0
File access level: Anyone
Digital Object Identifier (DOI): https://doi.org/10.1038/s41598-024-63860-9
Publication dates
Online: 21 Jun 2024
Publication process dates
Accepted: 03 Jun 2024
Deposited: 19 Jul 2024
Funder: Scientific and Technological Research Council of Turkey (TÜBİTAK)
Copyright holder: © 2024, The authors
Permalink: https://repository.uel.ac.uk/item/8y06z

Related outputs

E2ETCA: End-to-end training of CNN and attention ensembles for rice disease diagnosis
Uddin, M. Z., Mahamood, M. N., Ray, A., Pramanik, M. I., Alnajjar, F. and Ahad, M. A. R. 2024. E2ETCA: End-to-end training of CNN and attention ensembles for rice disease diagnosis. Journal of Integrative Agriculture. In Press. https://doi.org/10.1016/j.jia.2024.03.075
Learn Programming with C: An Easy Step-by-Step Self-Practice Book for Learning C
Imran, S. M. S. and Ahad, M. A. R. 2024. Learn Programming with C: An Easy Step-by-Step Self-Practice Book for Learning C. CRC Press: Taylor & Francis Group.
Deep learning with image-based autism spectrum disorder analysis: A systematic review
Uddin, M. Z., Shahriar, M. A., Mahamood, M. N., Alnajjar, F., Pramanik, M. I. and Ahad, M. A. R. 2024. Deep learning with image-based autism spectrum disorder analysis: A systematic review. Engineering Applications of Artificial Intelligence. 127 (Art. 107185). https://doi.org/10.1016/j.engappai.2023.107185
Annotator-dependent uncertainty-aware estimation of gait relative attributes
Shehata, A., Makihara, Y., Muramatsu, D., Ahad, M. and Yagi, Y. 2023. Annotator-dependent uncertainty-aware estimation of gait relative attributes. Pattern Recognition. 136 (Art. 109197). https://doi.org/10.1016/j.patcog.2022.109197
HID 2022: The 3rd International Competition on Human Identification at a Distance
Yu, S., Huang, Y., Wang, L., Makihara, Y., Wang, S., Ahad, M. and Nixon, M. 2022. HID 2022: The 3rd International Competition on Human Identification at a Distance. IJCB 2022: IEEE International Joint Conference on Biometrics. Abu Dhabi, UAE 10 - 13 Dec 2023 IEEE. https://doi.org/10.1109/IJCB54206.2022.10007993
Automated detection approaches to autism spectrum disorder based on human activity analysis: A review
Rahman, S., Ahmed, S. F., Shahid, O., Arrafi, M. A. and Ahad, M. A. R. 2022. Automated detection approaches to autism spectrum disorder based on human activity analysis: A review. Cognitive Computation. 14, pp. 1773-1800. https://doi.org/10.1007/s12559-021-09895-w
A Sleep Monitoring System Using Ultrasonic Sensors
Shammi, U. A. and Ahad, M. 2022. A Sleep Monitoring System Using Ultrasonic Sensors. International Journal of Biomedical Soft Computing and Human Sciences. 27 (1), pp. 13-20. https://doi.org/10.24466/ijbschs.27.1_13
Can Ensemble of Classifiers Provide Better Recognition Results in Packaging Activity?
Nazmus Sakib, A. H. M., Basak, P., Doha Uddin, S., Mustavi Tasin, S. and Ahad, M. 2022. Can Ensemble of Classifiers Provide Better Recognition Results in Packaging Activity? 3rd International Conference on Activity and Behavior Computing (ABC 2021). Online 22 - 23 Oct 2021 Springer Singapore. https://doi.org/10.1007/978-981-19-0361-8_10
Identification of Food Packaging Activity Using MoCap Sensor Data
Anwar, A., Islam Tapotee, M., Saha, P. and Ahad, M. 2022. Identification of Food Packaging Activity Using MoCap Sensor Data. 3rd International Conference on Activity and Behavior Computing (ABC 2021). Online 22 - 23 Oct 2021 Springer Singapore. https://doi.org/10.1007/978-981-19-0361-8_11
Lunch-Box Preparation Activity Understanding from Motion Capture Data Using Handcrafted Features
Pritom, Y. A., Rahman, M. S., Rahman, H. R., Kowshik, M. A. and Ahad, M. 2022. Lunch-Box Preparation Activity Understanding from Motion Capture Data Using Handcrafted Features. 3rd International Conference on Activity and Behavior Computing (ABC 2021). Online 22 - 23 Oct 2021 Springer Singapore. https://doi.org/10.1007/978-981-19-0361-8_12
Bento Packaging Activity Recognition Based on Statistical Features
Rakib Sayem, F., Sheikh, M. M. and Ahad, M. 2022. Bento Packaging Activity Recognition Based on Statistical Features. 3rd International Conference on Activity and Behavior Computing (ABC 2021). Online 22 - 23 Oct 2021 Springer Singapore. https://doi.org/10.1007/978-981-19-0361-8_13
MUMAP: Modified Ultralightweight Mutual Authentication protocol for RFID enabled IoT networks
Raju, M. H., Ahmed, M. U. and Ahad, M. A. R. 2021. MUMAP: Modified Ultralightweight Mutual Authentication protocol for RFID enabled IoT networks. Journal of the Institute of Industrial Applications Engineers. 9 (2), pp. 33-39. https://doi.org/10.12792/JIIAE.9.33
Emotion Recognition from EEG Signal Focusing on Deep Learning and Shallow Learning Techniques
Islam, M. R., Moni, M. A., Islam, M. M., Rashed-Al-Mahfuz, M., Islam, M. S., Hasan, M. K., Hossain, M. S., Ahmad, M., Uddin, S., Azad, A., Alyami, S. A., Ahad, M. A. R. and Lió, P. 2021. Emotion Recognition from EEG Signal Focusing on Deep Learning and Shallow Learning Techniques. IEEE Access. 9, pp. 94601-94624. https://doi.org/10.1109/ACCESS.2021.3091487
Static Postural Transition-based Technique and Efficient Feature Extraction for Sensor-based Activity Recognition
Ahmed, M., Das Antar, A. and Ahad, M. 2021. Static Postural Transition-based Technique and Efficient Feature Extraction for Sensor-based Activity Recognition. Pattern Recognition Letters. 147, pp. 25-33. https://doi.org/10.1016/j.patrec.2021.04.001
Activity Recognition from Accelerometer Data Based on Supervised Learning for Wireless Sensor Network
Israt, F. A., Hossain, T., Inoue, S. and Ahad, M. A. R. 2021. Activity Recognition from Accelerometer Data Based on Supervised Learning for Wireless Sensor Network. International Journal of Biomedical Soft Computing and Human Sciences. 26 (2), pp. 73-86. https://doi.org/10.24466/ijbschs.26.2_73
Action recognition using Kinematics Posture Feature on 3D skeleton joint locations
Ahad, M. A. R., Ahmed, M., Antar, A. D., Makihara, Y. and Yagi, Y. 2021. Action recognition using Kinematics Posture Feature on 3D skeleton joint locations. Pattern Recognition Letters. 145, pp. 216-224. https://doi.org/10.1016/j.patrec.2021.02.013
Exploring Human Activities Using eSense Earable Device
Islam, M. S., Hossain, T., Ahad, M. and Inoue, S. 2021. Exploring Human Activities Using eSense Earable Device. in: Ahad, M., Inoue, S., Roggen, D. and Fujinami, K. (ed.) Activity and Behavior Computing Springer Singapore. pp. 169–185
Contactless Human Monitoring: Challenges and Future Direction
Mahbub, U., Rahman, T. and Ahad, M. 2021. Contactless Human Monitoring: Challenges and Future Direction. in: Ahad, M., Mahbub, U. and Ahad, M. (ed.) Contactless Human Activity Analysis Springer, Cham. pp. 335-364
Contactless Human Emotion Analysis Across Different Modalities
Nahid, N., Rahman, A. and Ahad, M. 2021. Contactless Human Emotion Analysis Across Different Modalities. in: Ahad, M., Mahbub, U. and Rahman, T. (ed.) Contactless Human Activity Analysis Springer, Cham. pp. 237-269
Contactless Fall Detection for the Elderly
Nahian, M. J. A., Raju, M. H., Tasnim, Z., Mahmud, M., Ahad, M. and Kaiser, M. S. 2021. Contactless Fall Detection for the Elderly. in: Ahad, M., Mahbub, U. and Rahman, T. (ed.) Contactless Human Activity Analysis Springer, Cham. pp. 203-235
Signal Processing for Contactless Monitoring
Billah, M. S., Ahad, M. and Mahbub, U. 2021. Signal Processing for Contactless Monitoring. in: Ahad, M., Mahbub, U. and Rahman, T. (ed.) Contactless Human Activity Analysis Springer, Cham. pp. 113-144
Skeleton-Based Activity Recognition: Preprocessing and Approaches
Sarker, S., Rahman, S., Hossain, T., Faiza Ahmed, S., Jamal, L. and Ahad, M. 2021. Skeleton-Based Activity Recognition: Preprocessing and Approaches. in: Ahad, M., Mahbub, U. and Rahman, T. (ed.) Contactless Human Activity Analysis Springer, Cham. pp. 48-81
IoT Sensor-Based Activity Recognition: Human Activity Recognition
Ahad, M., Antar, A. D. and Ahmed, M. 2021. IoT Sensor-Based Activity Recognition: Human Activity Recognition. Springer, Cham.
A Method for Sensor-Based Activity Recognition in Missing Data Scenario
Hossain, T., Ahad, M. A. R. and Inoue, S. 2020. A Method for Sensor-Based Activity Recognition in Missing Data Scenario. Sensors. 20 (14), pp. 1-23. https://doi.org/10.3390/s20143811
An AI-based Visual Aid with Integrated Reading Assistant for the Completely Blind
Khan, M. A., Paul, P., Rashid, M., Hossain, M. and Ahad, M. 2020. An AI-based Visual Aid with Integrated Reading Assistant for the Completely Blind. IEEE Transactions on Human-Machine Systems. 50 (6), pp. 507-517. https://doi.org/10.1109/THMS.2020.3027534