Adapting MIMO video restoration networks to low latency constraints

Valéry Dewil
Zhe Zheng
Arnaud Barral
Lara Raad
Nao Nicolas
Ioannis Cassagne
Jean-Michel Morel
Gabriele Facciolo
Bruno Galerne
Pablo Arias


[Paper]
[GitHub]
[Dataset]
[Supplementary material]


Recently, MIMO (Multiple Input, Multiple Output) architectures have been proposed for video restoration. They offer a better performance/running-time trade-off than single-output architectures (MISO: Multiple Input, Single Output). In this paper, we focus on three state-of-the-art architectures for denoising in the low-latency setting (a limited number of frames in the output stack, typically 5 or 7). We show that the PSNR of MIMO networks is not uniform within the output stack, and propose to use recurrence across stacks (RAS) and output stack overlap (OSO) to smooth this non-uniformity. See Figure (a).

We also show that MIMO networks are temporally consistent within output stacks; however, they exhibit strong, unwanted changes at stack transitions. See Figure (b).

The proposed contributions (abbreviated to ROSO when applied together) significantly reduce these changes.
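To make the two mechanisms concrete, here is a minimal inference sketch in PyTorch. The network interface net(stack, rec), returning a denoised stack and a recurrent state, and the triangular blending weights are assumptions for illustration; the actual architectures and weighting used in the paper may differ.

```python
import torch

def roso_inference(frames, net, stack_size=7, overlap=2):
    """frames: (T, C, H, W) noisy video; returns the denoised video.
    Assumes the stacks tile the whole sequence."""
    T = frames.shape[0]
    step = stack_size - overlap            # stride between consecutive stacks
    out = torch.zeros_like(frames)
    acc = torch.zeros(T, 1, 1, 1)          # accumulated blending weights
    rec = None                             # RAS: state carried across stacks
    # triangular cross-fade over the overlapped frames (an assumption)
    w = torch.ones(stack_size)
    for i in range(overlap):
        w[i] = w[-(i + 1)] = (i + 1) / (overlap + 1)
    w = w.view(-1, 1, 1, 1)
    for t0 in range(0, T - stack_size + 1, step):
        denoised, rec = net(frames[t0:t0 + stack_size], rec)  # RAS
        out[t0:t0 + stack_size] += w * denoised               # OSO blending
        acc[t0:t0 + stack_size] += w
    return out / acc                       # normalize the overlapped frames
```

With stack_size = 7 and overlap = 2, each network call advances by 5 frames instead of 7, which accounts for the roughly 1.4× increase in running time (7/5 = 1.4) discussed below.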


Project developed at Centre Borelli, ENS Paris-Saclay, and accepted at BMVC 2024.

Abstract

MIMO (multiple input, multiple output) approaches are a recent trend in neural network architectures for video restoration problems, where each network evaluation produces multiple output frames. The video is split into non-overlapping stacks of frames that are processed independently, resulting in a very appealing trade-off between output quality and computational cost. In this work we focus on the low-latency setting by limiting the number of available future frames. We find that MIMO architectures suffer from problems that have received little attention so far, namely (1) the performance drops significantly due to the reduced temporal receptive field, particularly for frames at the boundaries of the stack, (2) there are strong temporal discontinuities at stack transitions which induce a step-wise motion artifact. We propose two simple solutions to alleviate these problems: recurrence across MIMO stacks to boost the output quality by implicitly increasing the temporal receptive field, and overlapping of the output stacks to smooth the temporal discontinuity at stack transitions. These modifications can be applied to any MIMO architecture. We test them on three state-of-the-art video denoising networks with different computational cost. The proposed contributions result in a new state-of-the-art for low-latency networks, both in terms of reconstruction error and temporal consistency. As an additional contribution, we introduce a new benchmark consisting of drone footage that highlights temporal consistency issues that are not apparent in the standard benchmarks.


Proposed framework

Our proposed ROSO mechanism. We propose two contributions to drastically alleviate the problems mentioned above: Output Stack Overlap (OSO) and Recurrence Across Stacks (RAS). In the three figures, the red dashed lines indicate the RAS.
(a) Baseline MIMO network applied to two non-overlapping frame stacks. (b) OSO: the output in the overlapped frames is computed as a weighted average of the two denoised versions. (c) OSO with RAS, using a specific overlapped recurrence based on the output overlapped frames from the previous stack.
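The weighted average in (b) can be sketched as a linear cross-fade between the two denoised versions of the overlapped frames. The function name and the linear weights are illustrative assumptions; the exact weighting scheme is described in the paper.

```python
import torch

def blend_overlap(prev_tail, next_head):
    """prev_tail, next_head: (n, C, H, W), the n overlapped frames as denoised
    by the previous stack and by the next stack, respectively."""
    n = prev_tail.shape[0]
    # weights fading out the previous stack and fading in the next one
    alpha = torch.linspace(1.0, 0.0, steps=n + 2)[1:-1].view(-1, 1, 1, 1)
    return alpha * prev_tail + (1.0 - alpha) * next_head
```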


Visual results

Visual results at stack transitions. We display the results of the baseline network and of our proposed RAS & OSO (abbreviated to ROSO). The network architecture is M2Mnet. Around a stack transition at time t, we show the last two frames of one stack and the first frame of the next stack. Between consecutive frames, we display the warping error. The contrast has been enhanced for better visualization.
The strong transition is clearly visible with the baseline network. Using RAS alone slightly alleviates it, while with ROSO the transition across two stacks (from t-1 to t) is no larger than the transitions inside a stack (from t-2 to t-1).
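The warping error displayed between consecutive frames can be computed as below: warp frame t-1 towards frame t using a backward optical flow and take the absolute difference. This is a minimal sketch; the flow must come from an external estimator, and the (dx, dy) channel order of the flow is an assumption.

```python
import torch
import torch.nn.functional as F

def warping_error(prev, cur, flow):
    """prev, cur: (1, C, H, W); flow: (1, 2, H, W), (dx, dy) from cur to prev."""
    _, _, H, W = cur.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float() + flow[0].permute(1, 2, 0)
    # normalize pixel coordinates to [-1, 1] as expected by grid_sample
    grid[..., 0] = 2 * grid[..., 0] / (W - 1) - 1
    grid[..., 1] = 2 * grid[..., 1] / (H - 1) - 1
    warped = F.grid_sample(prev, grid.unsqueeze(0), align_corners=True)
    return (cur - warped).abs()            # per-pixel warping error map
```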

Video results of M2Mnet, BasicVSR++ and ReMoNet. In this video, we present in a split view: the clean ground truth, the noisy sequence (AWGN σ = 40), the results of the baseline networks, and the results of the baselines with our ROSO extensions. The video is played twice, the second time with enhanced contrast to highlight the differences between the methods. The sequence is a crop from one of the videos of the proposed drone benchmark, showing an urban scene. The step-wise motion effect induced by the stack transitions can be noticed in low-contrast textures (e.g. building facades), as well as in the motion of the vehicles. With the extensions proposed in the paper (+ROSO), the textures are much more stable and the cars on the road follow a more fluid motion.


Landscapes of low-latency video denoising networks

In the following two plots, we map several video denoising networks according to a measure of performance (PSNR) or a measure of temporal consistency at stack transitions, versus per-frame running time. The landscapes were computed on the drone benchmark, with running times measured on an Nvidia A100 GPU, and the values averaged over the noise levels σ = 10, 20, 30, 40, 50. The methods plotted with a star are variants proposed in this paper.

Using this visualization, we can identify a Pareto frontier showing the optimal trade-off between computational cost and PSNR or temporal consistency at stack transitions. Methods in the gray region are sub-optimal, in the sense that better results can be achieved at an equal or lower computational cost. The proposed strategy (ROSO) defines a new Pareto frontier by improving both the PSNR and the temporal consistency at stack transitions, in spite of an increase in running time by a factor of about 1.4.
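For reference, a Pareto frontier over (running time, PSNR) points can be extracted with a simple sweep: keep the methods for which no other method is both faster and better. The values below are hypothetical examples, not the measurements from the paper.

```python
def pareto_frontier(points):
    """points: iterable of (runtime, psnr, name) with runtime in ms and psnr
    in dB; returns the non-dominated methods sorted by running time."""
    frontier, best_psnr = [], float("-inf")
    for runtime, psnr, name in sorted(points):
        if psnr > best_psnr:               # better than everything faster
            frontier.append((runtime, psnr, name))
            best_psnr = psnr
    return frontier

# hypothetical example values
print(pareto_frontier([(12.0, 35.1, "A"), (20.0, 34.8, "B"), (30.0, 36.2, "C")]))
# -> [(12.0, 35.1, 'A'), (30.0, 36.2, 'C')]  (B is dominated by A)
```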

(left) PSNR vs running time landscape. The proposed ROSO improves over the current Pareto frontier with a significant increase in PSNR.

(right) Inter-TC vs running time landscape. We use a metric based on the warping error, designed to measure the temporal consistency at output stack transitions. This metric is called the inter-TC. Our contributions largely improve the inter-TC among low-latency networks.
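A hedged sketch of such an inter-stack consistency measure, reusing the warping_error sketch above: average the warping error only over the frame pairs that straddle a stack transition. The exact definition and normalization used in the paper may differ, and with OSO the transition times shift according to the overlap.

```python
def inter_tc(video, flows, stack_size):
    """video: (T, C, H, W) denoised output; flows: (T, 2, H, W) backward flows;
    lower values indicate better consistency at stack transitions."""
    errs = []
    for t in range(stack_size, video.shape[0], stack_size):
        # frames t-1 and t belong to consecutive output stacks (no-overlap case)
        e = warping_error(video[t - 1:t], video[t:t + 1], flows[t:t + 1])
        errs.append(e.mean().item())
    return sum(errs) / len(errs)
```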


Dataset

Video drone dataset. Temporal consistency issues are masked in existing benchmark datasets. Consequently, we introduce a new evaluation dataset of 14 stabilized videos taken with drone-mounted cameras. The dataset features smooth motion that highlights temporal consistency issues. We hope this will encourage research towards better restoration of stabilized videos, which are a very relevant use case. To that end, we make this validation dataset publicly available.

 [Download our drone dataset]


Code

In the following GitHub repository, we provide training and testing code for the three state-of-the-art video denoising networks: BasicVSR++ [1], M2Mnet [2] and ReMoNet [3]. For BasicVSR++, we adapted the publicly available code provided by the authors. For M2Mnet and ReMoNet, we implemented the architectures ourselves following the corresponding papers.
In particular, we provide a recurrent implementation of BasicVSR++ and M2Mnet.

We also provide the evaluation code, including our temporal consistency metric.

Our code uses Python 3 and PyTorch.


[1] "On the Generalization of BasicVSR++ to Video Deblurring and Denoising", Chan et al., 2022
[2] "Multiframe-to-Multiframe Network for Video Denoising", Chen et al., 2021
[3] "ReMoNet: Recurrent Multi-Output Network for Efficient Video Denoising", Xiang et al., 2022


Paper

V. Dewil, Z. Zheng, A. Barral, L. Raad, N. Nicolas, I. Cassagne, J.-M. Morel, G. Facciolo, B. Galerne and P. Arias.
Adapting MIMO video restoration networks to low latency constraints.
In BMVC 2024.
(hosted on ArXiv)


To cite us

@article{dewil2024adapting,
title={Adapting MIMO video restoration networks to low latency constraints},
author={Dewil, Val{\'e}ry and Zheng, Zhe and Barral, Arnaud and Raad, Lara and Nicolas, Nao and Cassagne, Ioannis and Morel, Jean-michel and Facciolo, Gabriele and Galerne, Bruno and Arias, Pablo},
journal={arXiv preprint arXiv:2408.12439},
year={2024}
}



Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.