## Abstract

Successful single-frame fringe pattern preprocessing, comprising high-frequency noise minimization and low-frequency background removal, is often the crucial step of fringe-pattern-based full-field optical metrology (i.e., interferometry, moiré, structured light), as it directly determines the measurement accuracy. Data-driven decomposition by means of the 2D empirical mode decomposition (EMD) serves the filtering purpose in an adaptive and detail-preserving manner. This process is significantly limited, however, by the mode-mixing phenomenon, which hinders automatic grouping of modes into the three main fringe pattern components (background, information part and noise). In this paper we introduce the unsupervised variational image decomposition (uVID) model, tailored specifically to overcome this preprocessing problem and ensure successful sparse three-component fringe pattern decomposition. A comprehensive analysis and detailed studies of the advancements ensuring automation, versatility and robustness of the proposed approach are provided. The main advancements are: (1) tailoring the VID calculation scheme to the fringe pattern preprocessing purpose by focusing on accurate fringe extraction with a tolerance parameter and custom-made decomposition parameter values; (2) a fringe-pattern-tailored BM3D denoising algorithm with fixed parameter values. Numerical and experimental investigations corroborate that the demonstrated uVID method compares favorably with the reference 2D EMD algorithm and the classical VID model. The remarkable range of acceptable local variations of the fringe pattern orientation, period, noise, contrast and background terms is to be highlighted.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

The measurement result in contemporary full-field optical metrology is not given in a straightforward way. Many techniques, e.g., interferometry [1–4], fringe projection [5] or the moiré technique [6], lead to the recording of a fringe pattern, whilst the information about the measured object is encoded in the intensity distribution of the fringes - their shape or contrast. In general, the intensity distribution of a fringe pattern can be modelled by:

$$I(x,y) = a(x,y) + b(x,y)\cos[\varphi(x,y)] + n(x,y),$$

where *a(x,y)* denotes background illumination, *b(x,y)* denotes amplitude modulation, *φ(x,y)* denotes phase distribution and *n(x,y)* denotes noise. Considering this feature of full-field optical studies, we need to calculate the phase function in order to retrieve the measurand. The fringe pattern intensity distribution is described by an equation with many unknowns; we can find the phase function by constructing a set of such equations. This is how multi-frame fringe pattern analysis works [7]. Every equation needed for the analysis is constructed by introducing a phase shift (*Δφ*), either known [8] or unknown [9,10]:

$$I_k(x,y) = a(x,y) + b(x,y)\cos[\varphi(x,y) + \Delta\varphi_k] + n(x,y).$$
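As a minimal numerical illustration of this intensity model, a synthetic fringe pattern with the three components can be generated as follows (a sketch in Python/NumPy; the grid size, Gaussian background and linear carrier phase are illustrative choices, not the paper's simulation):

```python
import numpy as np

def simulate_fringe_pattern(size=256, period=16, noise_std=0.1, seed=0):
    """Synthesize I = a + b*cos(phi) + n with illustrative component choices."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]
    # Smooth Gaussian background illumination a(x, y)
    a = np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * (size / 3) ** 2))
    b = 1.0                               # constant amplitude modulation
    phi = 2 * np.pi * x / period          # linear (carrier) phase
    n = noise_std * rng.standard_normal((size, size))
    return a + b * np.cos(phi) + n

I = simulate_fringe_pattern()
```

The preprocessing task discussed throughout the paper is precisely the recovery of the zero-mean `b*cos(phi)` term from such a mixture.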

The multi-frame fringe pattern analysis algorithms mentioned above are the most accurate and computationally efficient methods used for phase demodulation. Nevertheless, they exhibit large errors when the background illumination, amplitude modulation or phase distribution change over the phase-shifting recording time, impeding their use in dynamic experiments. Therefore, recording the phase-shifted interferograms in a single snapshot (one moment) is desirable. Utilizing polarization approaches, one can employ a multi-camera system to record phase-shifted interferograms on different cameras [11] or use a pixelated phase mask [12]. However, both approaches require specific hardware, optical system implementation and polarization control; hence methods enabling retrieval of the phase function from a single frame (single fringe pattern) are wanted and required.

The historically first and well-established example of such a method is the Fourier transform [13]. Although the Fourier transform is an excellent tool for phase demodulation, there is a meaningful limitation to take into account: the background and information parts need to be well separated in the Fourier domain. To fulfill that requirement, the carrier spatial frequency of the recorded fringes must be high enough to ensure full separation of all spectral lobes, and favorably the phase and background should be described by uncomplicated, slowly varying functions. The appearance of closed fringes in the analyzed fringe pattern makes it impossible to correctly demodulate the phase function with the pure Fourier transform method due to strong local fringe orientation variations (overlapped spectrum). An answer to that problem can be provided by the Hilbert spiral transform (HST) [14], as it is able to numerically introduce a phase shift equal to 0.5π into the fringe pattern (generate the quadrature function). Together, the HST output data and the HST input data create the analytic 2D signal with easy access to its phase and amplitude modulation. It is important to emphasize that the input data needs to be zero-mean-valued for the Hilbert transform to perform well; hence background term rejection is essential. Additionally, unfiltered noise transfers itself into the demodulated phase function. Slowly varying amplitude modulation does not damage the phase retrieval (according to the Bedrosian theorem).
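The quadrature generation at the heart of the HST can be sketched in a few lines: the spectrum is multiplied by the spiral phase function exp(iβ), where β is the polar angle in the spatial-frequency plane. This is a simplified sketch (it omits the fringe-direction map used in the full HST for closed fringes), shown here for straight carrier fringes:

```python
import numpy as np

def hilbert_spiral_quadrature(s):
    """Quadrature of a zero-mean 2D fringe signal via the spiral phase function.

    Sketch: q = IFFT{ -i * exp(i*beta) * FFT{s} }, where beta is the polar
    angle of the spatial-frequency coordinates.
    """
    u = np.fft.fftfreq(s.shape[1])
    v = np.fft.fftfreq(s.shape[0])
    U, V = np.meshgrid(u, v)
    spf = np.exp(1j * np.arctan2(V, U))   # spiral phase function
    spf[0, 0] = 0.0                       # direction undefined at DC
    return np.fft.ifft2(-1j * spf * np.fft.fft2(s)).real

# For straight carrier fringes the quadrature of cos is sin:
x = np.arange(64)
s = np.cos(2 * np.pi * x / 8)[None, :].repeat(64, axis=0)
q = hilbert_spiral_quadrature(s)
analytic = s + 1j * q     # phase = np.angle(analytic), modulation = np.abs(analytic)
```

For this zero-mean, noise-free input the recovered modulation |analytic| is unity everywhere, illustrating why background removal (zero-mean input) is a prerequisite.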

Generally, both in multi-frame and single-frame fringe pattern analysis, preprocessing is needed. By preprocessing we mean removal of the background term and noise from the fringe pattern before phase extraction, to improve its efficiency and robustness. To serve this purpose we can use the empirical mode decomposition introduced by Norden Huang [15], further extended to the bidimensional empirical mode decomposition [16,17] and modified [18–25] in order to accelerate and improve the calculation process. The main advantage of this algorithm is that the image is decomposed into a set of bidimensional intrinsic mode functions (BIMFs) driven by the data itself, without setting any parameter values. On the other hand, the large number of modes produced by empirical mode decomposition is problematic when we would like to break down the fringe pattern intensity distribution into the three main components defined in Eq. (1), i.e., background, information part and noise. Furthermore, we are dealing with the mode-mixing phenomenon, especially when the background and phase distributions are described by complicated functions with high dynamic range. Mode mixing means that energy connected with different image components can be stored together in a single mode (making them impossible to set apart), or that the energy of a single component can dissipate over a number of modes (requiring stitching of the information). Attempts to solve these problems were undertaken by reducing the number of BIMFs by tailoring the decomposition processing path and signal envelope estimation [23], or by automatic selective reconstruction (ASR) [19,20]. Other reported methods proposed adding white noise to the analyzed data [24] or introducing the bidimensional sinusoids-assisted empirical mode decomposition [25].

A solution to the decomposition-based fringe pattern preprocessing problem can also be provided in a straightforward way by variational image decomposition (VID) – first used for image denoising [26–28] and then adapted to fringe pattern background and/or noise removal [29–34]. Great interest in this technique has resulted in its use also for denoising and segmenting optical coherence tomography images [35]. There are generally two main models of variational image decomposition: (1) two-component decomposition into denoised image and noise, and (2) three-component decomposition into structure, texture and noise. For fringe pattern preprocessing we are more interested in the second model, where structure can be regarded as the background illumination term and texture can be regarded as the information part, understood as the background-free and noise-free fringe component oscillating around zero. Every VID component is modelled in a different function space and the result is estimated by minimization of the corresponding functional. The obvious gain of straightforward fringe pattern decomposition into three components is established at the expense of elongated computation time and the problematic choice of numerous VID parameter values, i.e., number of iterations, projection steps, hard threshold etc.

Considering all of the mentioned advantages and disadvantages of the reviewed contemporary fringe pattern preprocessing methods, we set ourselves the goal of developing an algorithm which automatically provides straightforward decomposition of a fringe pattern into three components (structure - background, texture - information part, and noise) without the need for laborious estimation of proper parameter values by the user. In this paper we propose the unsupervised variational image decomposition algorithm (uVID) with two meaningful advancements implemented to accomplish this goal.

First of all, we employ the state-of-the-art image denoising algorithm known as block-matching 3D (BM3D) [36,37] for the fringe pattern noise estimation in our VID model. This algorithm provides a considerable improvement compared with the wavelet-based denoising algorithms [38] widely used in variational image decomposition so far. It was recently found [39,40] that BM3D with a fixed default set of parameters can ensure similarly high noise reduction efficiency for a wide variety of data. This study is expanded here with a detailed analysis of different fringe pattern characteristics (image domain noise-based errors) and the retrieved phase accuracy (phase domain noise-based errors).

Our second main advancement is connected with the algorithm used for estimating the fringe pattern information part (texture), called the Chambolle projection algorithm [41,42]. This is an iterative algorithm, and setting the appropriate number of iterations is critical. Some of the information part may not be qualified as texture (fringe part) if the number of iterations is too low. On the other hand, too many iterations will lead to the appearance of structure (fringe pattern background) in the texture (fringe part). In this way the fringe component would have a locally and globally non-zero mean value, and the HST could be severely jeopardized. An automatic stopping criterion [40,43] based on a tolerance parameter is proposed and studied here, both on simulated and experimental data. A detailed analysis of the influence of the Chambolle projection parameter values on the convergence of the algorithm is also conducted, and optimal values especially tailored for fringe analysis are recommended.

The paper is constructed as follows: section 2 provides a review of the function spaces applicable for modelling individual VID components and identifies the decomposition model chosen for further tailoring, section 3 describes the denoising method implemented in the uVID, section 4 introduces the new automatic stopping criterion for the iterative minimization, section 5 contains experimental evaluation of the proposed fringe pattern preprocessing, whereas section 6 concludes the paper.

## 2. Review of the variational image decomposition models

The key role in the variational image decomposition calculation is played by the appropriate choice of proper function spaces to model each of the decomposition components (structure, texture and noise). We briefly describe the various function spaces used up to now and highlight the decomposition model we have chosen for further advancements and special tailoring to form a state-of-the-art fringe pattern preprocessing method.

#### 2.1 Structure part of the decomposition

In the case of the cartoon part (decomposition structure; background in the meaning of fringe pattern analysis), which can be described as piecewise smooth, the most commonly used function space is the BV space of functions with bounded variation. The BV space is a subspace of functions $f\in {L}^{1}(\Omega)$ for which the Total Variation (TV) is finite. The TV can be described as:

$$TV(f) = \int_{\Omega} \left| \nabla f \right| dx\, dy,$$

where, in the case of 2D image processing, Ω stands for the image domain - the range of the image pixels *(x,y)*. On BV the following norm can be defined:

$$\|f\|_{BV} = \|f\|_{L^{1}} + TV(f).$$

Generally, the first element can be neglected, and in the minimization process we can use the Total Variation of the function as its norm. The functions belonging to the BV space are smooth but can have discontinuities along lines, which in image processing can be regarded as edges.
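The discrete TV used in such minimization can be computed with finite differences; a minimal sketch (isotropic TV with forward differences, one of several common discretization choices):

```python
import numpy as np

def total_variation(f):
    """Isotropic discrete total variation: sum over pixels of |grad f|."""
    gx = np.diff(f, axis=1, append=f[:, -1:])  # forward difference in x
    gy = np.diff(f, axis=0, append=f[-1:, :])  # forward difference in y
    return np.sqrt(gx ** 2 + gy ** 2).sum()

# A piecewise-constant image with one vertical edge of height 1 and length 8:
f = np.zeros((8, 8))
f[:, 4:] = 1.0
tv = total_variation(f)   # 8.0 = edge length x jump height
```

Note how the edge contributes a finite TV (edge length times jump height) while a smooth ramp of the same total rise would contribute the same value: TV penalizes oscillation, not sharp edges, which is why BV suits the background component.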

The Beppo-Levi (BL) space was also used for modelling the cartoon part in the case of fringe pattern filtration [33]. The BL space is defined by second-order derivatives, and the (semi)norm in this space can be described as:

$$\|f\|_{BL}^{2} = \int_{\Omega} \left( \left|\frac{\partial^{2} f}{\partial x^{2}}\right|^{2} + 2\left|\frac{\partial^{2} f}{\partial x \partial y}\right|^{2} + \left|\frac{\partial^{2} f}{\partial y^{2}}\right|^{2} \right) dx\, dy.$$

Because of the second-order derivative calculation, functions in this space are smoother than those in the BV space.

In the decomposition model proposed in [34] for fringe pattern denoising, the cartoon part was modelled with the use of the Sobolev space H^{1}, where the norm can be defined with the use of first-order derivatives as:

$$\|f\|_{H^{1}}^{2} = \int_{\Omega} \left( |f|^{2} + \left| \nabla f \right|^{2} \right) dx\, dy.$$

The H^{1} space contains functions which are also smoother than those belonging to the BV space. Given that in real measurements the fringe pattern background (cartoon part) is generally smooth and non-oscillatory but can also contain some discontinuities (edges), we decided to stay with the BV space in our variational decomposition model.

#### 2.2 Texture part of the decomposition

In order to model the oscillatory part of the image (decomposition texture; fringe pattern information part - fringes), Meyer proposed a new decomposition model [28,44] by introducing a new function space called the G space. The G space is the Banach space consisting of functions which can be described by:

$$v = div(g) = \frac{\partial g_{1}}{\partial x} + \frac{\partial g_{2}}{\partial y},$$

where $g=\left({g}_{1},{g}_{2}\right)$ is a vector field and $div(g)$ represents a texture component of the decomposition. The norm in the G space is defined as:

$$\|v\|_{G} = \inf_{g}\left\{ \left\| \sqrt{g_{1}^{2}+g_{2}^{2}} \right\|_{L^{\infty}} : v = div(g) \right\}.$$

The feature of functions in the G space is that they preserve local characteristics well. Oscillating patterns (in the meaning of our analysis, the fringe pattern information part) have a small norm in this function space, which means that they can be easily caught by an energy minimization process. The minimization problem in the G space was solved with the use of the Chambolle projection algorithm [41,42].

Another function space proposed for texture modelling [45] is the adaptive Hilbert space. Its norm is built from a decomposition Ψ on the local Fourier frame, a vector field ξ representing the instantaneous frequencies of the image, and weights Γ(ξ) depending on the ξ value. The advantage of the adaptive Hilbert space over the G space is improved differentiation between noise and high-frequency fringes: the adaptive Hilbert norm is low for locally parallel structures but high for generally random and uncorrelated noise. However, in the three-component decomposition models considered by us, noise is removed by minimization in a different function space, and we are not obligated to put so much attention into differentiating noise from high-frequency fringes. In order to avoid the troublesome process of finding the adaptive vector field representing the instantaneous frequencies of the image, and the calculation of its local Fourier frame decomposition, we decided to use the G space to describe the texture part (fringe image information part) in our decomposition model.

#### 2.3 Noise part of the decomposition

In the first model of variational image decomposition [26], noise was described with the help of the very simple and frequently used L^{2} space. The norm in this function space is described by:

$$\|f\|_{L^{2}} = \left( \int_{\Omega} |f|^{2}\, dx\, dy \right)^{1/2}.$$

We do not expect this function space to provide satisfactory differentiation of high-frequency fringes from noise. Improvement of this quality can be provided by the homogeneous Besov space (E space). This space is connected with wavelet coefficients, and the minimization problem can be solved by wavelet shrinkage thresholding [38]. In wavelet notation the function *f* can be described as:

$$f = \sum_{a,\theta,t} c_{a,\theta,t}\, \psi_{a,\theta,t},$$

where *a*, *θ*, *t* denote scale, rotation and translation parameters, respectively, *c_{a,θ,t}* denotes the wavelet coefficient and *ψ_{a,θ,t}* denotes the daughter wavelet. There are two approaches for noise minimization in the wavelet domain. First, we can remove the daughter wavelets with frequency (scale parameter) higher than some threshold. This is connected with the assumption that we are dealing with high-frequency noise. The disadvantage of this approach is that we can remove high-frequency texture components (fringes) together with the noise. The other way to deal with noise minimization is connected with the assumption that the noise has generally random and uncorrelated values. Because of that, the wavelet coefficient values for noise are significantly smaller than for texture (fringes). Therefore, in the meaning of differentiating noise from high-frequency texture components, it is better to apply the threshold to the wavelet coefficient values.
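The second, preferable approach (thresholding coefficient magnitudes rather than discarding whole scales) can be sketched with a single-level Haar transform; Haar is chosen here only for brevity, whereas [38] employs more elaborate wavelets:

```python
import numpy as np

def haar2(f):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = (f[0::2, :] + f[1::2, :]) / 2.0   # vertical average
    d = (f[0::2, :] - f[1::2, :]) / 2.0   # vertical detail
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (perfect reconstruction)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1])); d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    f = np.empty((2 * a.shape[0], a.shape[1]))
    f[0::2, :], f[1::2, :] = a + d, a - d
    return f

def hard_threshold_denoise(f, thr):
    """Zero out small detail coefficients (coefficient-magnitude thresholding)."""
    ll, lh, hl, hh = haar2(f)
    lh, hl, hh = (np.where(np.abs(c) > thr, c, 0.0) for c in (lh, hl, hh))
    return ihaar2(ll, lh, hl, hh)
```

Large detail coefficients (fringes) survive the threshold while small, randomly signed noise coefficients are suppressed, which is exactly the differentiation argument made above.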

A natural improvement of the wavelet transform is the shearlet transform [46,47], in which the directional information is given not by the rotation of the basis function but by changing its shear *s*. Thanks to that, even for isotropic basis functions we have access to directional information. The shearlets can be described in the shearlet smoothness space as:

$$\psi_{a,s,t}(x) = \left|\det A_{a}\right|^{1/2} \psi\!\left(S_{s} A_{a} x - t\right),$$

where

$$A_{a} = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad S_{s} = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix},$$

and *A_{a}* denotes the scaling matrix and *S_{s}* denotes the shearing matrix. Minimization in this function space is performed in the same way as in the case of the homogeneous Besov space, by shearlet shrinkage thresholding (SST). Having in mind the good differentiation between noise and high-frequency texture (fringes) components and the better directional information, we decided to use the shearlet smoothness space in our variational decomposition model.

To sum up, the model of variational image decomposition chosen for further enhancement is TV-G-Shearlet, which was already adapted for fringe pattern filtration [30]. The minimization problem is constructed as follows:

$$\min_{u,v,w} \left\{ \lambda\, TV(u) + \mu \|v\|_{G} + \delta \|w\|_{Shear} + \left\| f - u - v - w \right\|_{L^{2}}^{2} \right\},$$

where *λ*, *μ* and *δ* are energy distribution coefficients, *f* denotes the analyzed fringe pattern, and *u*, *v* and *w* denote the cartoon (background) part, the texture (information) part and the noise, respectively.

## 3. Denoising in the tailored variational image decomposition

For simulation purposes in this and subsequent sections we used the Matlab computational environment. The synthetic interferogram can be described as:

$$I(x,y) = a(x,y) + \cos\!\left(\varphi(x,y) + \frac{2\pi x}{T}\right) + n_{std} \cdot randn(512),$$

where *x, y* denote image coordinates simulated here with the use of the Matlab function *meshgrid(1:512)*, *a(x,y)* denotes the background simulated as a Gauss function, *φ* denotes the phase function, *T* denotes the fringe period, *n_{std}* denotes the standard deviation of the noise and *randn(512)* is a Matlab function which simulates normally distributed pseudorandom numbers. The phase function (φ) is described by the shape of an unstressed red blood cell (*z*) simulated according to the equation of Evans and Skalak (1980) [48]:

$$z(x,y) = D_{0}\sqrt{1 - \frac{4\left(x^{2}+y^{2}\right)}{D_{0}^{2}}}\left[ a_{0} + a_{1}\frac{x^{2}+y^{2}}{D_{0}^{2}} + a_{2}\frac{\left(x^{2}+y^{2}\right)^{2}}{D_{0}^{4}} \right],$$

where *D_{0}* = 7.82 μm denotes the average cell diameter, and *a_{0}* = 0.0518, *a_{1}* = 2.0026 and *a_{2}* = −4.491 are experimentally derived constants. The fringe pattern period value used for the simulations in Fig. 1 is equal to *T* = 12 px.
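The paper's simulations are implemented in Matlab; an equivalent sketch in Python/NumPy of the described interferogram generation is given below. The mapping of the cell diameter D0 to the pixel grid and the phase scaling factor are illustrative assumptions of this sketch:

```python
import numpy as np

def rbc_phase(size=512, d0_px=300.0, scale=20.0):
    """Phase from the Evans-Skalak red blood cell profile.

    d0_px maps D0 = 7.82 um onto the pixel grid and `scale` sets the phase
    amplitude; both are illustrative choices, not the paper's values.
    """
    a0, a1, a2 = 0.0518, 2.0026, -4.491   # experimentally derived constants
    y, x = np.mgrid[0:size, 0:size] - size / 2.0
    r2 = (x ** 2 + y ** 2) / d0_px ** 2   # (r / D0)^2
    z = np.sqrt(np.clip(1.0 - 4.0 * r2, 0.0, None)) * (a0 + a1 * r2 + a2 * r2 ** 2)
    return scale * z

def simulate_interferogram(T=12, noise_std=0.2, size=512, seed=0):
    """I = a + cos(phi + 2*pi*x/T) + noise, with a Gaussian background."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]
    a = np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * (size / 2) ** 2))
    n = noise_std * rng.standard_normal((size, size))
    return a + np.cos(rbc_phase(size) + 2 * np.pi * x / T) + n
```

Varying `T` and `noise_std` reproduces the kind of parameter sweeps reported in Figs. 1 and 2.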

As mentioned in the previous section, denoising with the use of wavelet/shearlet-based thresholding provides great differentiation of noise from high-frequency fringes. However, calculation of the shearlet transform and selection of proper thresholding parameter values is a challenging and complicated task. Natural improvement of the wavelet/shearlet-based denoising algorithms led to the development of the block-matching 3D denoising algorithm (BM3D) [36,37]. The 2D signal (image) is grouped into 3D blocks of similar regions before calculating the 3D wavelet coefficients. In this way, differentiation between signal and noise is even more accurate, with the whole process simplified and accelerated. We propose to use it in our VID model as the fringe pattern denoising procedure and to take advantage of fringe pattern spatial self-similarity. BM3D as a fringe pattern denoising algorithm in VID was already briefly introduced in [32], however without detailed analysis of its performance in the denoising of complex fringe patterns and focusing only on the specific application of projection fringes. BM3D denoising was also highlighted as a very efficient and capable denoising technique in the case of different fringe pattern analysis algorithms, e.g., the windowed Fourier transform [49,50], empirical mode decomposition [23] and noise reduction in digital holography [51,52]. Due to the structured self-similarity of a fringe pattern, which can be treated as a spatially quasi-periodic cosinusoidal intensity distribution, it is easy to create well-established 3D blocks merging subsets encapsulating fringes of similar period and orientation.
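The grouping step that makes BM3D so effective on quasi-periodic fringes can be illustrated in isolation. This is a deliberately simplified sketch of the block-matching stage only (exhaustive L2 search in a window); the real BM3D additionally transforms, thresholds and aggregates the groups:

```python
import numpy as np

def group_similar_blocks(img, ref_yx, block=8, search=16, k=8):
    """Stack the k patches most similar (L2 distance) to a reference patch
    into a 3D group, searching a window around the reference position."""
    ry, rx = ref_yx
    ref = img[ry:ry + block, rx:rx + block]
    candidates = []
    for y in range(max(0, ry - search), min(img.shape[0] - block, ry + search) + 1):
        for x in range(max(0, rx - search), min(img.shape[1] - block, rx + search) + 1):
            patch = img[y:y + block, x:x + block]
            candidates.append((np.sum((patch - ref) ** 2), y, x))
    candidates.sort(key=lambda c: c[0])   # most similar first
    return np.stack([img[y:y + block, x:x + block] for _, y, x in candidates[:k]])
```

On a quasi-periodic fringe image the top matches are patches shifted by whole fringe periods, so the 3D group is highly correlated along its third axis; 3D-transform coefficients of the signal then concentrate strongly, while noise stays spread out.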

The analysis of parameter values for BM3D as a fringe pattern denoising algorithm was conducted in [39]. The most important one considering denoising quality is the sigma parameter (σ), which denotes the standard deviation of the noise estimate. In the case of simulated data, the logical assumption would be to set this parameter equal to the actual value of the simulated standard deviation of the noise. However, for real-life data the exact assessment of the sigma value is cumbersome. Moreover, the exact sigma may not always be the best solution for fringe pattern denoising: a high level of noise leads to a high sigma parameter value and this, in turn, could lead to the removal of high-frequency fringes. It is shown that setting the constant σ = 50 provides high-quality denoising results for a variety of fringe pattern characteristics, see Fig. 1. The rest of the parameters were set to the default values proposed by the authors of the original BM3D algorithm. This denoising approach will be applied in the further calculations.

To highlight the advantage of the BM3D denoising approach, we compare it with the shearlet shrinkage thresholding method and with denoising performed by the fast adaptive bidimensional empirical mode decomposition (FABEMD) used for interferogram/hologram enhancement in pre-filtering [19,20]. After decomposition of the fringe pattern into a set of bidimensional intrinsic mode functions, we simply neglected the first mode as the one containing the locally highest spatial frequencies present in the fringe pattern.

The comprehensive analysis of the results retrieved for different levels of noise is presented in Fig. 1. It is worth noticing that in the case of FABEMD-based denoising, the first mode neglected during the denoising procedure contains a significant amount of the energy belonging to the fringe pattern information part (dense fringes present in the error map). On the other hand, in the case of a high level of noise, rejecting only the first mode did not remove all the noise from the fringe pattern (noise is present in the higher modes). The denoising quality of the shearlet shrinkage thresholding algorithm is also insufficient in the case of a high level of noise: the clearly visible rapid increase of the RMS error value for a noise standard deviation higher than 0.2 is caused by the difficulty of selecting the hard-thresholding values for different scales of the shearlet transform. The denoising results obtained with BM3D are satisfactory both when setting the sigma parameter value to the simulated standard deviation of the noise and when setting it to the fixed value of 50. However, it can be seen that it is better to use the fixed sigma value than to adjust it to the level of the noise present in the filtered image. This is caused by the specific nature of the analyzed images, with their broad range of fringe spatial frequencies (low-frequency closed fringes and high-frequency parallel ones present in a single interferogram): with a high value of sigma, high-frequency fringes may be removed from the image together with the noise. In the meaning of the parameter choice process, it is also easier and more convenient to use a fixed sigma value for each analyzed fringe pattern; versatility is gained with simplicity preserved.

In order to prove the robustness and flexibility of the proposed BM3D denoising approach, we analyzed how the denoising quality changes for different carrier spatial frequencies of fringes. From now on, we consider only the performance of the BM3D algorithm with fixed sigma equal to 50. The charts presented in Fig. 2 prove that the BM3D algorithm provides better and more stable results than the other studied algorithms for decreasing carrier spatial frequency of the interferograms spoiled with low-level (std = 0.1, Fig. 2(a)) and high-level (std = 0.9, Fig. 2(b)) noise. Additionally, both the SST and FABEMD algorithms qualify high-frequency fringes as noise, which can be observed as an increase of the RMS value for lower fringe period values. Although SST seems to cope better than FABEMD in this case, since shearlet/wavelet transform based algorithms generally provide satisfactory differentiation of high-frequency fringes from noise, the increase of the RMS error value for the highest analyzed carrier spatial frequencies can be seen in both cases. It could be minimized for the SST denoising algorithm by adding more scales to the transform calculation, but that in turn would elongate the computation time and make the whole process more complicated. Independently of the fringe carrier spatial frequency value, the BM3D algorithm consistently provides better results than the SST and FABEMD.

A comprehensive analysis of the decomposition results for different carrier spatial frequencies is presented in Fig. 2(c). In the case of the high spatial frequency carrier fringes (red frame) it is crucial to distinguish fringes from noise. It can be seen that both the SST and FABEMD qualified the majority of fringes as noise and still did not manage to remove all the noise from the image. The BM3D left fringes untouched while removing the majority of the noise, even with strong noise present. The situation looks similar for the 'middle' frequency case (green frame), with the difference that this time SST did not qualify any fringes as noise. It is interesting that even in the case of low carrier spatial frequency fringes (blue frame), FABEMD qualified the fringes with the highest visible frequency as noise. Empirical mode decomposition does not allow distinguishing fringes and noise in a straightforward way, always storing the locally highest frequency in the first mode, resulting in fringe information leakage.

Figure 3 highlights the importance of the denoising procedure in fringe-pattern-based measurements. In most cases the ultimate goal of fringe pattern analysis is phase retrieval, since the phase function stores the information about the measured object. The analysis presented in Fig. 3 shows the influence of the noise on the retrieved phase function. The period of the carrier fringes was equal to 24 px and the phase was demodulated with the use of the Hilbert spiral transform [14,22]. In order to separate the influence of the noise on the retrieved phase accuracy from other error sources, the zero-mean value of the Hilbert transform input signal (fringe pattern background removal) was obtained by subtraction of the known, simulated background. We can neglect the influence of the intrinsic Hilbert transform error as long as the characteristic of the fringes is the same for every compared image. One can notice (Fig. 3) that for generally low levels of noise it is not crucial to perform denoising before phase demodulation: noise transfers directly from the intensity domain into the phase domain and can be removed from the phase function in the same way as from the fringes. If the phase function is smooth, then signal and noise differentiation should be much easier to perform in the phase domain, especially in the case of high-frequency fringes. On the other hand, it should also be noted that a high level of noise introduces errors to the phase function caused by incorrect phase unwrapping, and therefore it is strongly advised to remove it before phase demodulation. It is also worth mentioning that the BM3D denoising algorithm is much more stable than the other analyzed algorithms and allows for good phase demodulation results regardless of the noise level.

## 4. Proposed stopping criterion for the Chambolle projection algorithm

The minimization problem in the G space can be solved with the use of the Chambolle projection algorithm. This is an iterative, gradient-based minimization algorithm, which can be described in the following steps:

- ✓ initialization of the auxiliary vector field $g^{0} = 0$ for the image *f*, where *f* denotes the initial image and the *µ* parameter is responsible for the energy distribution between fringes and noise (fringe estimation case), or between background and fringes (background estimation case),
- ✓ *g* value estimation for the next step:

$$g^{n+1} = \frac{g^{n} + \tau \nabla\!\left[ div\left(g^{n}\right) - f/\mu \right]}{1 + \tau \left| \nabla\!\left[ div\left(g^{n}\right) - f/\mu \right] \right|},$$

where *τ* is the projection step value, and ∇ and *div* denote the gradient and divergence operators used in the total variation (*TV*) based minimization,
- ✓ calculations are ended after N iterations and the final result (fringes) can be defined as:

$$v = \mu\, div\left(g^{N}\right).$$

By analogy, the background can be estimated as:

$$u = f - \mu\, div\left(g^{N}\right).$$

The parameters which need to be set for the minimization process are the iteration number *N*, the projection step value *τ* and the energy distribution parameter *μ*. The influence of these parameter values on the quality of the calculated result is considered in the following subsections and Figs. 4-8.

#### 4.1. Automatic termination of the iterative process

The simplest solution for defining the end of the calculation is to define a specific iteration number. This kind of stopping criterion is called the exhaustion-based criterion [53]. In the case of fringe pattern analysis we are dealing with spatially and temporally varying interferogram/hologram characteristics and, as can be seen in Fig. 4, the minimum error of VID fringe filtering (green dots in Fig. 4) is obtained after a different number of iterations for each simulated fringe pattern (with different carrier spatial frequency). Therefore, the exhaustion-based stopping criterion is not a profitable solution in the case of fringe pattern analysis, because we cannot define a single universal number of iterations for every considered pattern.

Another solution used in the case of iterative algorithms is called the reference-based criterion [53]. The calculations are terminated when the difference between the obtained result and the ground truth is small enough. Although the idea is simple and should provide an excellent outcome, we cannot predict the ground truth result in the case of an experimentally recorded interferogram/hologram; therefore this kind of stopping criterion does not apply to real measurements.

The next type of stopping criterion is called the improvement-based criterion [53] and is connected with the difference between two consecutive steps (Δ, Eq. (19)):

$$\Delta = \left\| x_{n+1} - x_{n} \right\|.$$

If the difference between the result obtained after *n + 1* iterations (*x_{n+1}*) and the result obtained after *n* iterations (*x_{n}*) is small enough, the calculations should be ended. Although this idea can be correctly applied in the case of real measurements, the simple difference between steps is not sufficient to properly end calculations in the case of variable fringe pattern characteristics (spatially non-uniform fringe period): at the minimum RMS value points, Δ = 0.0275 for the high carrier spatial frequency of fringes (T = 6 px, blue plot line and box in Fig. 4), Δ = 0.5382 for fringes with the carrier period of 24 px (red plot line and box in Fig. 4), and Δ = 2.0159 for the low spatial frequency of fringes (T = 150 px, yellow plot line and box in Fig. 4). To make our stopping criterion automatic and robust we propose to introduce critical advancements.

To precisely define our problem: the number of iterations sufficient for successful background and fringe differentiation changes with the fringe pattern complexity. We understand complexity here as a radical local variation of fringe orientation and period. For a given underlying phase function, complexity generally increases with the decrease of the fringe carrier frequency. In general, the lower the carrier spatial frequency of the fringes, the more iterations are needed. This statement can be corroborated by the analysis presented in Fig. 5. A higher carrier fringe period in the case of the fringe patterns simulated in this paper (Eq. (16)) means the appearance of closed fringes in the analyzed image. With the increase of the carrier fringe period (decrease of the carrier spatial frequency) the baseband (autocorrelation term) and sidebands (cross-correlation terms) in the Fourier spectrum overlap, and therefore more iterations are required for successful background and fringe-information differentiation. One could simply assume that setting a high number of iterations as a default should solve the automation problem and provide good results regardless of the complexity of the fringe pattern. This assumption is wrong for two reasons. First, too many unnecessary iterations elongate the calculation time. Second, and more importantly, too many iterations may cause leakage of the low-frequency background into the fringe part. Therefore, the goal is to stop our iterative process at the right point, both for the sake of the calculation time and the quality of the decomposition result. Successful background removal is especially important with regard to subsequent phase retrieval with the use of the Hilbert spiral transform, where one of the main requirements for the input data is the local and global zero-mean value of the signal (image).

Based on the iterative nature of the Chambolle projection algorithm, and assuming the convergence of the algorithm provided by a proper projection step *τ* value (as will be proven in the following subsection), the tolerance parameter (*tol*) is proposed as a universal determinant for the end of calculations. First, the obtained difference between two consecutive steps (Eq. (19)) should be divided by the initial image (*f*) norm to remove the influence of the initial image pixel range on the final result, (Δ_{norm}, Eq. (20)):

Δ_{norm} = ||*x*_{n + 1} − *x*_{n}|| / ||*f*|| (20)

Then we propose to calculate the derivative of Δ_{norm}, (dΔ, Eq. (20)), because when searching for the optimal point to end the calculations it is more informative to know how the retrieved step difference value is changing than to study the value evolution itself. Lastly, the derivative value should be divided by the difference obtained for the first two steps, (tol, Eq. (20)). This way the tolerance parameter [40,43] was brought in as a universal determinant for the end of calculations, ensuring greatly enhanced versatility and automation of the VID.
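The steps above can be sketched as follows (an illustrative, non-authoritative implementation: we assume the inter-iteration difference is measured in the ℓ2 norm and approximate the derivative with a backward difference; all names are ours):

```python
import numpy as np

def tolerance(iterates, f):
    """Tolerance parameter (Eq. (20)), sketched under the assumption
    that Delta is the l2 norm of the inter-iteration difference.

    iterates : successive Chambolle projection results x_1, x_2, ...
    f        : initial (denoised) image; dividing by its norm removes
               the pixel-range dependence (Delta_norm).
    """
    # Normalized differences between consecutive iterates (Delta_norm).
    d_norm = [np.linalg.norm(b - a) / np.linalg.norm(f)
              for a, b in zip(iterates, iterates[1:])]
    if len(d_norm) < 2:
        return np.inf  # not enough history yet
    # Backward-difference derivative of Delta_norm (dDelta) ...
    d_delta = d_norm[-1] - d_norm[-2]
    # ... divided by the difference obtained for the two first steps (tol).
    return abs(d_delta) / d_norm[0]
```

The Chambolle iterations would then be stopped once this value drops below the chosen tolerance (10^{−5} by default, lowered to, e.g., 10^{−6} for sparse fringes).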

Analyzing the results presented in Fig. 4, one can observe after how many iterations the calculations were ended for two exemplary tolerance values: 10^{−5} and 10^{−6}. The results of background and fringe differentiation estimated for every highlighted point in the chart are presented in the frames with corresponding colors. With the higher tolerance the Chambolle projection algorithm was stopped sooner, which is beneficial in the case of high-frequency fringes: the estimated RMS was already close to the minimum RMS value and at the same time we gained acceleration. However, in the case of the more complicated sparse fringes (*T* = 150), the higher tolerance was insufficient for correct background and fringe differentiation. As can be seen, a lot of low-frequency fringes are present in the background part. When dealing with complex fringe patterns with a wide, overlapped Fourier spectrum it is therefore recommended to lower the tolerance value. The improvement of the filtration result is obtained at the expense of elongated calculation time, but in some cases it is necessary for precise phase estimation. We use a fixed tolerance value of 10^{−5}, lowering it only if necessary.

#### 4.2. Chambolle projection μ and τ parameters analysis

In this subsection the two remaining Chambolle projection parameters (*μ*, the energy parameter, and *τ*, the projection step) will be taken under consideration. In general, the *µ* value in the case of fringe estimation is different than in the case of background estimation. In the literature [29,30] both the fringe and background parts are estimated iteratively; we adopt a novel and more straightforward approach with iterative fringe estimation and background calculation by subtracting the fringe term from the denoised input image.

The *μ* parameter value is responsible for the energy distribution between the two remaining decomposition components (fringes and background). The higher the *μ* value, the more fringes are qualified as texture (the higher-frequency part to be dissected from the image background). When a gradient-based minimization process is used for denoising (estimating the high-frequency part to be removed from the image), setting a high *μ* value causes removal of the majority of the image details, which is neither expected nor recommended. This parameter value should be high enough to remove the noise from the image but at the same time low enough to keep the image details. In our case we simply want to set it high enough to enable efficient background and fringe differentiation. Noise is already successfully removed by the BM3D algorithm, therefore we need not be concerned with a high *μ* value. Corroboration of this statement can be seen in Fig. 6, where we study the relation between the fringe estimation error (defined in the intensity domain), the *μ* parameter value and the number of iterations (projections) N. A low *μ* value makes background and fringe differentiation impossible regardless of the iteration number. On the other hand, increasing its value above 10 does not introduce any meaningful changes in the retrieved result, which is very similar even for *μ* = 1000. The line drawn for the case *μ* = 100 (green line) overlaps with the line drawn for the case *μ* = 1000 (black, dotted line).

The second analysis concerning the *μ* parameter checks whether the recommended *μ* value varies with the carrier spatial frequency of the fringe pattern. Analyzing the results presented in Fig. 7, we can notice that the *μ* value beyond which the filtration results stay stable (the RMS error does not change significantly) is higher in the case of low-frequency fringes than for high-frequency fringes. Boundary lines were set as *μ* = 1 for a fringe period of 6 px (green line), *μ* = 10 for a fringe period of 24 px (blue line) and *μ* = 100 for a fringe period of 150 px (purple line). When analyzing the obtained filtration results (frames with corresponding colors) one can notice that the optimal *μ* value for a dense fringe pattern is insufficient for fringes with lower carrier spatial frequencies (there is a leakage of fringe content into the background part). However, as was proven, increasing the *μ* value does not introduce any additional errors into the fringe estimation result, and therefore we should not be concerned with setting a high *μ* value even in the case of dense fringes. It can be seen in Fig. 7 that *μ* = 100 ensures remarkable fringe estimation quality over a very wide range of carrier spatial frequencies. This value is recommended as the default in our algorithm; it ensures versatility in terms of the analyzed fringe pattern shape and is not to be further tailored.

The next parameter important for the Chambolle projection algorithm performance is the projection step *τ*. Its value is responsible for the convergence of the algorithm and determines how quickly the result obtained during the calculation approaches the correct expected fringe estimation result. It can be seen in Fig. 8 that the higher the *τ* value, the lower the number of iterations needed for estimating the fringe part (characterized by the minimum RMS error). The iterations needed for reaching the same minimum RMS error value (0.06) were 2797 in the case of *τ* = 0.1 and 1116 in the case of *τ* = 0.25 (see the left chart in Fig. 8). However, it can also be noted that for *τ* = 0.26 the algorithm provides incorrect results: the RMS value increases with the increasing number of iterations and the estimation result is no longer convergent to the expected one. In the literature [41] the value *τ* = 0.25 is considered the limit of the algorithm's convergence, and larger steps do not lead to the correct result. We corroborated this statement using numerical simulations, see the right chart in Fig. 8. Summarizing, the *τ* parameter value should be set as high as possible in order to accelerate the calculations but without jeopardizing the convergence of the algorithm. Therefore, in our analysis *τ* = 0.25 will be considered the selected and recommended value. It ensures versatility in terms of the analyzed fringe pattern shape and is not to be further tailored.
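For completeness, a compact sketch of the Chambolle projection with the recommended *μ* = 100 and *τ* = 0.25 (a schematic implementation of the fixed-point iteration from [40,41]; the discrete gradient/divergence conventions and the fixed iteration count are our illustrative choices, not the authors' code):

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad.
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_projection(f, mu=100.0, tau=0.25, n_iter=500):
    """Fixed-point Chambolle iteration: returns the oscillatory (fringe)
    part v = mu * div(p); the background is then f - v. tau = 0.25 is
    the largest projection step that still converges."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / mu)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return mu * div(px, py)
```

Note that with these boundary conventions the returned fringe part sums to zero by construction, which is convenient for the subsequent Hilbert spiral transform.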

Considering the performance of the variational image decomposition in terms of computation time and the accuracy of the retrieved results, we need to examine the calculation scheme itself in detail. In the variational image decomposition model first adapted for fringe pattern filtration [29] it was assumed that both the background and fringe parts should be calculated with the use of the Chambolle projection algorithm. When variational image decomposition serves as a preprocessing algorithm preparing the data for the Hilbert spiral transform, the most critical aspect is the determination of the fringe part. Keeping that in mind, the improvement we incorporated into our modified variational image decomposition scheme is to run the Chambolle projection algorithm only once, extracting just the fringes, and then to estimate the background as the difference between the denoised initial image and the fringes. That way we gained acceleration without loss of accuracy.

The second thing we noticed was that, once the number of projection iterations is properly determined, repeating the calculations (applying another decomposition to the texture part) to improve the result is unreasonable considering the calculation time.

To sum up, the calculation scheme for the newly proposed modified variational image decomposition algorithm can be described as

noise estimation: *n* = *f* − BM3D(*f*, *σ*),

fringes estimation: *v* = P(*f* − *n*),

background estimation: *b* = (*f* − *n*) − *v*,

where *σ* is the standard deviation of the noise (set to 50) and P is the Chambolle projection algorithm with the parameter values defined in this subsection (0.25 projection step and 100 energy parameter).
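The three-step scheme can be sketched end-to-end as follows (a hedged illustration only: BM3D and the Chambolle projection are replaced by simple numpy stand-ins, and all function and parameter names are ours):

```python
import numpy as np

def box_blur(u, k=5):
    # Separable box blur; a crude stand-in for the smoothing effect of
    # BM3D (noise removal) or of a low-pass step (background extraction).
    kernel = np.ones(k) / k
    smooth = lambda r: np.convolve(r, kernel, mode="same")
    return np.apply_along_axis(smooth, 1, np.apply_along_axis(smooth, 0, u))

def uvid_decompose(f, denoise, project):
    """uVID scheme: noise as the denoiser residual, fringes by a single
    projection pass, background by subtraction (no second iterative
    minimization for the background part)."""
    f_denoised = denoise(f)          # BM3D with sigma = 50 in the paper
    noise = f - f_denoised
    fringes = project(f_denoised)    # Chambolle projection, mu=100, tau=0.25
    background = f_denoised - fringes
    return noise, fringes, background

# Stand-in components, for illustration only.
f = np.random.default_rng(1).normal(size=(64, 64))
noise, fringes, background = uvid_decompose(
    f,
    denoise=lambda u: box_blur(u, 3),
    project=lambda u: u - box_blur(u, 15),  # high-pass proxy for fringes
)
```

By construction the three components sum back to the input image exactly, so no energy is lost in the decomposition.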

## 5. Experimental verification

To verify the performance of our unsupervised variational image decomposition (uVID) filtration algorithm on experimental data we decided to consider two cases. In the first case a complicated, low-frequency fringe pattern recorded during a temporal phase shifting (TPS) series is selected. Having the complete TPS series, we were able to calculate the “reference fringes” (Fig. 9(b)) as the cosine of the TPS-retrieved phase map. This is justified by the fact that the TPS algorithm (as a multi-frame fringe pattern analysis algorithm) is the most accurate phase demodulation method, and we can only try to approach its accuracy with single-frame fringe pattern demodulation algorithms, especially in the case of sparse closed fringes.

The single frame from the TPS series chosen for further analysis is presented in Fig. 9(a). The uVID filtration result is shown in Fig. 9(c) for a tolerance value of 10^{−5} and in Fig. 9(d) for a tolerance value of 10^{−6}, while the comparative FABEMD result is presented in Fig. 9(e). It can be clearly seen that in the case of the lower tolerance value the fringes have higher contrast; more energy is qualified as fringes. This result supports our previous statement that for more complicated fringe patterns it is sometimes advisable to lower the tolerance value. In Fig. 9(f) the change of the RMS error value with increasing iteration number is presented. The RMS error value was calculated against the “reference fringes” (Fig. 9(b)). It should be noted that the dynamic ranges of the “reference fringes” and the fringes retrieved by uVID or FABEMD filtration are different. We intentionally did not normalize the estimated results because the normalization process, as a highly nonlinear operation, introduces significant errors into the characteristic of the filtration result (the shape of the function may be disturbed), and one of the main advantages of our algorithmic solution is the preservation of the filtered function characteristics. The RMS error value is offset by the dynamic-range mismatch of the compared images, but the character of its change is preserved. The RMS error value decreases as long as more information is added to the result. After some number of iterations the change is so small that continuing the calculations is not justified. It can be seen that after reaching the tolerance value 10^{−5} point (pink dot in Fig. 9(f)) the change of the RMS value is still evident and it is clear that the calculations should be continued. On the other hand, it is debatable whether the filtration process should be prolonged after reaching the tolerance value 10^{−6} point (green dot in Fig. 9(f)). Comparing the uVID estimated results with the FABEMD one visually, we can clearly see that they look similar. The RMS error value estimated for the uVID tolerance value 10^{−6} is slightly lower than for the FABEMD filtration. The difference may be caused not by errors present in the result but by the dynamic-range mismatch. Nevertheless, we can conclude that the accuracy level of the compared algorithm was successfully approached.

The second analyzed fringe pattern is the evaporating droplet interferogram [53–55] shown in Fig. 10(a). In this case we are dealing with a dynamic event and therefore temporal phase shifting registration was not possible in a straightforward way. The retrieved results highlight the main advantage of our approach, which is the preservation of details. The uVID filtration result is presented in Fig. 10(b) and the FABEMD filtration result in Fig. 10(c). The most interesting area of the analyzed data is marked by the red rectangle on the retrieved phase maps (Figs. 10(d) and 10(e)). The small inclusions clearly visible in the recorded interferogram were well preserved by both the variational image decomposition and the empirical mode decomposition filtration. To prove our statement and to show that our algorithmic solution did not introduce any errors into the analyzed interferogram characteristic, we moved to the phase domain and enlarged the regions of interest. Although in the image domain both the uVID and the FABEMD filtration results look similar, in the phase domain the predominance of the uVID filtration can be clearly appreciated.

## 6. Conclusion and discussion

In this paper we propose the automatized and universal fringe pattern filtration algorithm called unsupervised variational image decomposition (uVID). The merger of previously proposed modifications [39,40,43] makes it possible to design a compact end-to-end fringe pattern filtration algorithm. Additionally, for the first time all variational image decomposition parameters were analyzed and tailored for the specific task of fringe pattern filtration. The whole calculation process was also meaningfully accelerated by a change in the calculation scheme. One iteration of the Chambolle projection algorithm lasts approximately 0.06 s for an image size of 512 × 512 px, and for a complex fringe pattern a few thousand iterations may be needed; avoiding the repetition of the iterative minimization process for the background part is therefore beneficial.

While tailoring our algorithmic solution to the challenging problem of fringe pattern filtration, we noticed that it is unnecessary to calculate the Chambolle projection for both the fringes (texture decomposition component) and the background (structure decomposition component). In fringe pattern analysis we are focused on the accurate estimation of the zero-mean-valued information part (fringes), and therefore we shortened the three-step calculation scheme (noise estimation, texture estimation and structure estimation) to an essentially two-step one. We minimize the noise with the BM3D algorithm, calculate the information part (texture) with the Chambolle projection algorithm, and estimate the background (structure) by subtracting the noise and texture from the initial image. The presented comprehensive analysis of the additive Gaussian noise allowed us to tailor the denoising method to the fringe pattern denoising task and to understand the noise influence on the retrieved phase map. The outstanding performance of the proposed algorithm was verified using simulated and experimental data. In both cases we proved the usefulness of the proposed tolerance parameter to automatically stop the iteration process at a meaningful moment.

To sum up, the novelties of this paper are: (1) a new end-to-end, automatic and universal fringe pattern preprocessing algorithm, uVID; (2) a new variational image decomposition calculation scheme; (3) a wide-ranging analysis and specific tailoring of the variational image decomposition parameters for fringe pattern filtration; (4) a comprehensive analysis of fringe patterns spoiled with additive Gaussian noise and its influence on the retrieved phase quality.

As for future work, we are planning to introduce adjustments into the BM3D denoising algorithm. In this paper we analyzed additive Gaussian noise, while in real measurements the noise can be different. A specific kind of noise, such as speckle noise, may disturb the BM3D denoising results. We plan to conduct separate comparative studies discussing the results of several state-of-the-art image denoising methods, e.g., nonlocal means [56], SPADEDH [57], learning dictionary K-SVD [58] or low-rank WNNM [59]. The other direction of investigation considers further acceleration of the algorithm by speeding up the iterative, gradient-based minimization algorithm.

## Funding

National Science Center, Poland (OPUS 13 2017/25/B/ST7/02049), Faculty of Mechatronics statutory funds (Dean funds), Polish National Agency for Academic Exchange.

## Acknowledgments

We acknowledge the support of Faculty of Mechatronics Warsaw University of Technology statutory funds. The authors would like to thank Dr Zofia Sunderland for the access to experimental data presented in Fig. 9 and Dr Sam Dehaeck for the access to experimental data presented in Fig. 10.

## References

**1. **J. Schwider, “Advanced evaluation techniques in interferometry,” in Progress in Optics, E. Wolf, ed., (North Holland, 1990).

**2. **D. Malacara, *Optical Shop Testing* (John Wiley, 2007).

**3. **D. W. Robinson and G. Reid, *Interferogram Analysis: Digital Fringe Pattern Measurement* (Institute of Physics Publishing, 1993).

**4. **D. Malacara, M. Servin, and Z. Malacara, *Interferogram Analysis for Optical Testing* (Marcel Dekker, 1998).

**5. **S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. **48**(2), 133–140 (2010). [CrossRef]

**6. **K. Patorski and M. Kujawińska, *Handbook of the Moiré Fringe Technique* (Elsevier, 1993).

**7. **R. Juarez-Salazar, C. Mendoza-Rodriguez, J. E. Hernandez-Beltran, and C. Robledo-Sanchez, “How do phase-shifting algorithms work?” Eur. J. Phys. **39**(6), 065302 (2018). [CrossRef]

**8. **J. H. Bruning, D. R. Herriott, J. E. Gallagher, D. P. Rosenfeld, A. D. White, and D. J. Brangaccio, “Digital wavefront measuring interferometer for testing optical surfaces and lenses,” Appl. Opt. **13**(11), 2693–2703 (1974). [CrossRef] [PubMed]

**9. **J. E. Greivenkamp, “Generalized data reduction for heterodyne interferometry,” Opt. Eng. **23**(4), 234350 (1984). [CrossRef]

**10. **Z. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,” Opt. Lett. **29**(14), 1671–1673 (2004). [CrossRef] [PubMed]

**11. **R. Smythe and R. Moore, “Instantaneous Phase Measuring Interferometry,” Opt. Eng. **23**(4), 361–364 (1984). [CrossRef]

**12. **J. E. Millerd, N. J. Brock, J. B. Hayes, M. B. North-Morris, M. Novak and J. C. Wyant, “Pixelated phase-mask dynamic interferometer,” Proc. SPIE 5531, Interferometry XII: Techniques and Analysis (2004).

**13. **M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. **72**(1), 156–160 (1982). [CrossRef]

**14. **K. G. Larkin, D. J. Bone, and M. A. Oldfield, “Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform,” J. Opt. Soc. Am. A **18**(8), 1862–1870 (2001). [CrossRef] [PubMed]

**15. **N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, W. H. Shih, Q. Zheng, N. C. Yen, C. C. Tung, and H. H. Liu, “The empirical mode decomposition and the Hilbert spectrum for non-linear and non-stationary time series analysis,” Proc. R. Soc. A **454**(1971), 903–995 (1998). [CrossRef]

**16. **J. C. Nunes, Y. Bouaoune, E. Delechelle, O. Niang, and P. Bunel, “Image analysis by bidimensional empirical mode decomposition,” Image Vis. Comput. **21**(12), 1019–1026 (2003). [CrossRef]

**17. **M. Wielgus and K. Patorski, “Evaluation of amplitude encoded fringe patterns using the bidimensional empirical mode decomposition and the 2D Hilbert transform generalizations,” Appl. Opt. **50**(28), 5513–5523 (2011). [CrossRef] [PubMed]

**18. **S. M. A. Bhuiyan, R. R. Adhami, and J. F. Khan, “Fast and adaptive bidimensional empirical mode decomposition using order-statistics filter based envelope estimation,” EURASIP J. Adv. Signal Process. **2008**(164), 725356 (2008).

**19. **M. Trusiak, K. Patorski, and M. Wielgus, “Adaptive enhancement of optical fringe patterns by selective reconstruction using FABEMD algorithm and Hilbert spiral transform,” Opt. Express **20**(21), 23463–23479 (2012). [CrossRef] [PubMed]

**20. **M. Trusiak, M. Wielgus, and K. Patorski, “Advanced processing of optical fringe patterns by automated selective reconstruction and enhanced fast empirical mode decomposition,” Opt. Lasers Eng. **52**(1), 230–240 (2014). [CrossRef]

**21. **M. Trusiak, K. Patorski, and K. Pokorski, “Hilbert-Huang processing for single-exposure two-dimensional grating interferometry,” Opt. Express **21**(23), 28359–28379 (2013). [CrossRef] [PubMed]

**22. **M. Trusiak, Ł. Służewski, and K. Patorski, “Single shot fringe pattern phase demodulation using Hilbert-Huang transform aided by the principal component analysis,” Opt. Express **24**(4), 4221–4238 (2016). [CrossRef] [PubMed]

**23. **X. Zhou, A. G. Podoleanu, Z. Yang, T. Yang, and H. Zhao, “Morphological operation-based bi-dimensional empirical mode decomposition for automatic background removal of fringe patterns,” Opt. Express **20**(22), 24247–24262 (2012). [CrossRef] [PubMed]

**24. **Z. Wu and N. E. Huang, “Ensemble empirical mode decomposition: a noise-assisted data analysis method,” Adv. Adapt. Data Anal. **1**(1), 1–41 (2009). [CrossRef]

**25. **C. Wang, Q. Kemao, and F. Da, “Automatic fringe enhancement with novel bidimensional sinusoids-assisted empirical mode decomposition,” Opt. Express **25**(20), 24299–24311 (2017). [CrossRef] [PubMed]

**26. **L. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D **60**(1–4), 259–268 (1992). [CrossRef]

**27. **L. Vese and S. Osher, “Modeling Textures with Total Variation Minimization and Oscillating Patterns in Image Processing,” J. Sci. Comput. **19**(1–3), 553–572 (2003). [CrossRef]

**28. **A. Aujol and A. Chambolle, “Dual norms and image decomposition models,” Int. J. Comput. Vis. **63**(1), 85–104 (2005). [CrossRef]

**29. **X. Zhu, Z. Chen, and C. Tang, “Variational image decomposition for automatic background and noise removal of fringe patterns,” Opt. Lett. **38**(3), 275–277 (2013). [CrossRef] [PubMed]

**30. **X. Zhu, C. Tang, B. Li, C. Sun, and L. Wang, “Phase retrieval from single frame projection fringe pattern with variational image decomposition,” Opt. Lasers Eng. **59**(8), 25–33 (2014). [CrossRef]

**31. **X. Chen, C. Tang, B. Li, and Y. Su, “Variational image decomposition for estimation of fringe orientation and density from electronic speckle pattern interferometry fringe patterns with greatly variable density,” Opt. Lasers Eng. **86**(11), 197–205 (2016). [CrossRef]

**32. **B. Li, C. Tang, X. Zhu, X. Chen, Y. Su, and Y. Cai, “A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition,” Opt. Lasers Eng. **86**(11), 345–355 (2016). [CrossRef]

**33. **B. Li, C. Tang, G. Gao, M. Chen, S. Tang, and Z. Lei, “General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition,” Appl. Opt. **56**(16), 4843–4853 (2017). [CrossRef] [PubMed]

**34. **S. Fu and C. Zhang, “Fringe pattern denoising via image decomposition,” Opt. Lett. **37**(3), 422–424 (2012). [CrossRef] [PubMed]

**35. **K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. **16**(8), 2080–2095 (2007). [CrossRef] [PubMed]

**36. **M. Lebrun, “An Analysis and Implementation of the BM3D Image Denoising Method,” Image Process. On Line **2**, 175–213 (2012). [CrossRef]

**37. **A. Chambolle, R. A. DeVore, N. Y. Lee, and B. J. Lucier, “Nonlinear Wavelet Image Processing: Variational Problems, Compression, and Noise Removal Through Wavelet Shrinkage,” IEEE Trans. Image Process. **7**(3), 319–335 (1998). [CrossRef] [PubMed]

**38. **M. Cywińska, M. Trusiak, V. Mico and K. Patorski, “Single-frame fringe pattern analysis using modified variational image decomposition aided by the Hilbert transform for fast full-field quantitative phase imaging,” Proc. SPIE 10677, Unconventional Optical Imaging, 106772B (2018).

**39. **M. Cywińska, M. Trusiak, and K. Patorski, “Biological phase sample study using variational Hilbert imaging technique,” Proc. SPIE 10887, Quantitative Phase Imaging **V**, 108872F (2019).

**40. **A. Chambolle, “An Algorithm for Total Variation Minimization and Applications,” J. Math. Imaging Vis. **20**(1), 89–97 (2004).

**41. **J. Duran, B. Coll, and C. Sbert, “Chambolle’s Projection Algorithm for Total Variation Denoising,” Image Process. On Line **3**, 311–331 (2013). [CrossRef]

**42. **M. Cywińska, M. Trusiak, K. Patorski, J. A. Picazo-Bueno, and V. Mico, “Modified variational image decomposition algorithm aided by the Hilbert transform as an alternative to 2D Hilbert-Huang transform for fringe pattern phase retrieval,” Proc. SPIE 10834, Speckle 2018: VII International Conference on Speckle Metrology, 1083422 (2018). [CrossRef]

**43. **Y. Meyer, *Oscillating Patterns in Image Processing and Nonlinear Evolution Equations* (American Mathematical Society, 2001).

**44. **P. Maurel, J.-F. Aujol, and G. Peyré, “Locally Parallel Texture Modeling,” SIAM J. Imaging Sci. **4**(1), 413–447 (2011). [CrossRef]

**45. **G. Kutyniok and T. Sauer, “From Wavelets to Shearlets and back again,” in Approximation Theory XII, M. Neamtu and L. L. Schumaker, eds., (Nashboro Press, 2008).

**46. **W. Q. Lim, “The Discrete Shearlet Transform: A New Directional Transform and Compactly Supported Shearlet Frames,” IEEE Trans. Image Process. **19**(5), 1166–1180 (2010). [CrossRef] [PubMed]

**47. **E. A. Evans and R. Skalak, Mechanics and Thermodynamics of Biomembranes, (CRC Press, 1980).

**48. **M. Zhao and Q. Kemao, “WFF-BM3D: A hybrid denoising scheme for fringe patterns,” Proc. SPIE 9524, International Conference on Optical and Photonic Engineering (icOPEN 2015), 952423 (2015).

**49. **M. Zhao and Q. Kemao, “A comparison study of denoising techniques in fringe pattern analysis,” Proc. SPIE 9302, International Conference on Experimental Mechanics 2014, 930208 (2015).

**50. **V. Bianco, P. Memmolo, M. Leo, S. Montresor, C. Distante, M. Paturzo, P. Picart, B. Javidi, and P. Ferraro, “Strategies for reducing speckle noise in digital holography,” Light Sci. Appl. **7**(48), 48 (2018). [CrossRef] [PubMed]

**51. **S. Montresor and P. Picart, “Quantitative appraisal for noise reduction in digital holographic phase imaging,” Opt. Express **24**(13), 14322–14343 (2016). [CrossRef] [PubMed]

**52. **K. Zielinski, D. Peters, and R. Laur, “Stopping Criteria for Single-Objective Optimization,” in Proceedings of the Third International Conference on Computational Intelligence, Robotics and Autonomous Systems, Singapore, 1–6 (2005).

**53. **Y. Tsoumpas, S. Dehaeck, A. Rednikov, and P. Colinet, “Effect of Marangoni Flows on the Shape of Thin Sessile Droplets Evaporating into Air,” Langmuir **31**(49), 13334–13340 (2015). [CrossRef] [PubMed]

**54. **S. Dehaeck, Y. Tsoumpas, and P. Colinet, “Analyzing closed-fringe images using two-dimensional Fan wavelets,” Appl. Opt. **54**(10), 2939–2952 (2015). [CrossRef] [PubMed]

**55. **S. Dehaeck and P. Colinet, “Improving speed and precision of local frequency analysis using Gaussian ridge interpolation for wavelet and windowed Fourier ridge algorithms,” Opt. Lasers Eng. **77**, 54–63 (2016). [CrossRef]

**56. **Y. Tounsi, M. Kumar, A. Nassim, and F. Mendoza-Santoyo, “Speckle noise reduction in digital speckle pattern interferometric fringes by nonlocal means and its related adaptive kernel-based methods,” Appl. Opt. **57**(27), 7681–7690 (2018). [CrossRef] [PubMed]

**57. **P. Memmolo, M. Iannone, M. Ventre, P. A. Netti, A. Finizio, M. Paturzo, and P. Ferraro, “Quantitative phase maps denoising of long holographic sequences by using SPADEDH algorithm,” Appl. Opt. **52**(7), 1453–1460 (2013). [CrossRef] [PubMed]

**58. **M. Elad and M. Aharon, “Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries,” IEEE Trans. Image Process. **15**(12), 3736–3745 (2006). [CrossRef] [PubMed]

**59. **S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng, and L. Zhang, “Weighted Nuclear Norm Minimization and Its Applications to Low Level Vision,” Int. J. Comput. Vis. **121**(2), 183–208 (2017). [CrossRef]