
Sparse signal reconstruction via recurrent neural networks with hyperbolic tangent function. (English) Zbl 1522.94017

Summary: In this paper, several recurrent neural networks (RNNs) for solving the \(L_1\)-minimization problem are proposed. First, a one-layer RNN based on the hyperbolic tangent function and a projection matrix is designed, and its stability and global convergence are proved by the Lyapunov method. Then, the sliding mode control technique is introduced into this RNN to design a finite-time RNN (FTRNN). Under the condition that the projection matrix satisfies the restricted isometry property (RIP), a suitable Lyapunov function is constructed to prove that the FTRNN is stable in the Lyapunov sense and converges in finite time. Finally, the proposed RNN and FTRNN are compared with existing RNNs through experiments on sparse signal reconstruction and image reconstruction; the results demonstrate the effectiveness and superior performance of the proposed RNN and FTRNN.
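The summary does not reproduce the paper's exact network dynamics, but the general neurodynamic idea can be sketched: replace the nonsmooth subgradient \(\operatorname{sign}(x)\) of \(\|x\|_1\) with the hyperbolic tangent \(\tanh(x/\varepsilon)\), and project the resulting flow onto the feasible set \(\{x : Ax = b\}\). The following minimal Python sketch is an illustration of this scheme, not the paper's RNN or FTRNN model; the step size `eta`, smoothing parameter `eps`, iteration count, and problem sizes are all assumed for illustration.

```python
# Euler-discretized sketch of a tanh-smoothed, projected flow for
# min ||x||_1 subject to A x = b. Illustrative only; not the paper's model.
import numpy as np

def l1_rnn_flow(A, b, eta=0.01, eps=0.01, steps=50_000):
    """Approximate argmin ||x||_1 s.t. A x = b via a projected smoothed flow."""
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)
    x = A.T @ AAt_inv @ b                    # least-norm feasible starting point
    P_null = np.eye(n) - A.T @ AAt_inv @ A   # projector onto the nullspace of A
    for _ in range(steps):
        # tanh(x/eps) is a smooth surrogate for sign(x); projecting the update
        # onto null(A) keeps A x = b satisfied at every step.
        x = x - eta * (P_null @ np.tanh(x / eps))
    return x

# Demo: recover a 3-sparse signal from 15 random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 30, 15, 3
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian matrix (RIP w.h.p.)
b = A @ x_true
x_hat = l1_rnn_flow(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Because each update is projected onto the nullspace of \(A\), the iterate remains feasible throughout, which mirrors the role of the projection matrix in the summarized networks; the finite-time variant (FTRNN) additionally uses sliding mode control, which this gradient-flow sketch does not attempt to model.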

MSC:

94A12 Signal theory (characterization, reconstruction, filtering, etc.)
94A08 Image processing (compression, reconstruction, etc.) in information and communication theory
68T07 Artificial neural networks and deep learning

Software:

ICALAB
Full Text: DOI

References:

[1] Afonso, M. V.; Bioucas-Dias, J. M.; Figueiredo, M. A., An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems, IEEE Transactions on Image Processing, 20, 681-695 (2010) · Zbl 1372.94004
[2] Akl, A.; Feng, C.; Valaee, S., A novel accelerometer-based gesture recognition system, IEEE Transactions on Signal Processing, 59, 6197-6205 (2011) · Zbl 1393.94095
[3] Balavoine, A.; Romberg, J.; Rozell, C. J., Convergence and rate analysis of neural networks for sparse approximation, IEEE Transactions on Neural Networks and Learning Systems, 23, 1377-1389 (2012)
[4] Bao, C., Wu, Y., Ling, H., & Ji, H. (2012). Real time robust l1 tracker using accelerated proximal gradient approach. In IEEE conference on computer vision and pattern recognition (pp. 1830-1837).
[5] Bertsekas, D. P., Nonlinear programming, Journal of the Operational Research Society, 48, 334 (1997)
[6] Bruckstein, A. M.; Donoho, D. L.; Elad, M., From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Review, 51, 34-81 (2009) · Zbl 1178.68619
[7] Byrd, R. H.; Hribar, M. E.; Nocedal, J., An interior point algorithm for large-scale nonlinear programming, SIAM Journal on Optimization, 9, 877-900 (1999) · Zbl 0957.65057
[8] Candes, E. J., The restricted isometry property and its implications for compressed sensing, Comptes Rendus Mathematique, 346, 589-592 (2008) · Zbl 1153.94002
[9] Candes, E. J.; Romberg, J.; Tao, T., Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509 (2006) · Zbl 1231.94017
[10] Candes, E.; Tao, T., The Dantzig selector: Statistical estimation when \(p\) is much larger than \(n\), The Annals of Statistics, 35, 2313-2351 (2007) · Zbl 1139.62019
[11] Candes, E. J.; Wakin, M. B., An introduction to compressive sampling, IEEE Signal Processing Magazine, 25, 21-30 (2008)
[12] Cheng, L.; Hou, Z. G.; Lin, Y.; Tan, M.; Zhang, W. C.; Wu, F. X., Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks, IEEE Transactions on Neural Networks, 22, 714-726 (2011)
[13] Cichocki, A.; Amari, S. I., Adaptive blind signal and image processing: learning algorithms and applications (2002), John Wiley & Sons
[14] Cichocki, A.; Unbehauen, R., Neural networks for optimization and signal processing, 253 (1993), John Wiley & Sons, Inc · Zbl 0824.68101
[15] Elhamifar, E.; Vidal, R., Sparse subspace clustering: Algorithm, theory, and applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, 39, 2765-2781 (2013)
[16] Figueiredo, M. A.T.; Nowak, R. D.; Wright, S. J., Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems, IEEE Journal of Selected Topics in Signal Processing, 1, 586-597 (2008)
[17] Guo, C.; Yang, Q., A neurodynamic optimization method for recovery of compressive sensed signals with globally converged solution approximating to \(l_0\) minimization, IEEE Transactions on Neural Networks and Learning Systems, 26, 1363-1374 (2014)
[18] Han, B.; Liu, C. L.; Zhang, W. J., A method to measure the resilience of algorithm for operation management, IFAC PapersOnLine, 49, 1442-1447 (2016)
[19] He, X.; Wen, H.; Huang, T., A fixed-time projection neural network for solving \(L_1\)-minimization problem, IEEE Transactions on Neural Networks and Learning Systems (2021)
[20] Ju, X.; Hu, D.; Li, C.; He, X.; Feng, G., A novel fixed-time converging neurodynamic approach to mixed variational inequalities and applications, IEEE Transactions on Cybernetics (2021)
[21] LaSalle, J. P., An invariance principle in the theory of stability, (Differential equations and dynamical systems stability and control (1966), Academic: Academic New York, NY, USA) · Zbl 0183.09401
[22] Li, Y.; Cichocki, A.; Amari, S. I., Analysis of sparse representation and blind source separation, Neural Computation, 16, 1193-1234 (2004) · Zbl 1089.68101
[23] Li, Y.; Cichocki, A.; Amari, S. I., Blind estimation of channel parameters and source components for EEG signals: A sparse factorization approach, IEEE Transactions on Neural Networks, 17, 419-431 (2006)
[24] Li, G.; Yan, Z., Reconstruction of sparse signals via neurodynamic optimization, International Journal of Machine Learning and Cybernetics, 10, 15-26 (2019)
[25] Liu, Q.; Wang, J., A one-layer projection neural network for non-smooth optimization subject to linear equalities and bound constraints, IEEE Transactions on Neural Networks and Learning Systems, 24, 812-824 (2013)
[26] Liu, Q.; Wang, J., \(L_1\)-minimization algorithms for sparse signal reconstruction based on a projection neural network, IEEE Transactions on Neural Networks and Learning Systems, 27, 698-707 (2015)
[27] Mairal, J., Bach, F., Ponce, J., Sapiro, G., & Zisserman, A. (2009). Non-local sparse models for image restoration. In Proc. IEEE 12th international conference on computer vision (pp. 2272-2279).
[28] Malioutov, D. M., Cetin, M., & Willsky, A. S. (2005). Homotopy continuation for sparse signal representation. In IEEE international conference on acoustics, speech, and signal processing, vol. 5 (pp. v/733-v/736). · Zbl 1370.94191
[29] Natarajan, B. K., Sparse approximate solutions to linear systems, SIAM Journal on Computing, 24, 227-234 (1995) · Zbl 0827.68054
[30] Ren, J.; Yu, L.; Lyu, C.; Zheng, G.; Barbot, J. P.; Sun, H., Dynamical sparse signal recovery with fixed-time convergence, Signal Processing, 162, 65-74 (2019)
[31] Rozell, C. J.; Johnson, D. H.; Baraniuk, R. G.; Olshausen, B. A., Sparse coding via thresholding and local competition in neural circuits, Neural Computation, 20, 2526-2563 (2008)
[32] Tomioka, R.; Sugiyama, M., Dual-augmented lagrangian method for efficient sparse reconstruction, IEEE Signal Processing Letters, 16, 1067-1070 (2009)
[33] Wagner, A.; Wright, J.; Ganesh, A.; Zhou, Z.; Mobahi, H.; Ma, Y., Toward a practical face recognition system: Robust alignment and illumination by sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34, 372-386 (2011)
[34] Wang, Y.; Cheng, L.; Hou, Z. G.; Yu, J.; Tan, M., Optimal formation of multirobot systems based on a recurrent neural network, IEEE Transactions on Neural Networks and Learning Systems, 27, 322-333 (2015)
[35] Wang, J.; Yang, J.; Yu, K.; Lv, F.; Huang, T.; Gong, Y., Locality-constrained linear coding for image classification, (2010 IEEE computer society conference on computer vision and pattern recognition (2010), IEEE), 3360-3367
[36] Wang, Y.; Zhou, G.; Caccetta, L.; Liu, W., An alternative Lagrange-dual based algorithm for sparse signal reconstruction, IEEE Transactions on Signal Processing, 59, 1895-1901 (2010)
[37] Wen, H.; Wang, H.; He, X., A neurodynamic algorithm for sparse signal reconstruction with finite-time convergence, Circuits, Systems, and Signal Processing, 39, 6058-6072 (2020) · Zbl 1517.94032
[38] Xia, Y.; Wang, J., A recurrent neural network for solving linear projection equations, Neural Networks, 13, 337-350 (2000)
[39] Xu, C.; Liu, Q., An inertial neural network approach for robust time-of-arrival localization considering clock asynchronization, Neural Networks, 146, 98-106 (2022)
[40] Yu, L.; Zheng, G.; Barbot, J. P., Dynamical sparse recovery with finite-time convergence, IEEE Transactions on Signal Processing, 65, 6146-6157 (2017) · Zbl 1415.94295
[41] Zhang, W. J.; Lin, Y., On the principle of design of resilient systems-application to enterprise information systems, Enterprise Information Systems, 4, 99-110 (2010)
[42] Zhao, Y.; Liao, X.; He, X.; Tang, R., Centralized and collective neurodynamic optimization approaches for sparse signal reconstruction via \(L_1\)-minimization, IEEE Transactions on Neural Networks and Learning Systems (2021)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases that data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or a perfect matching.