
Tensor Q-rank: new data dependent definition of tensor rank. (English) Zbl 07465658

Summary: Recently, Tensor Nuclear Norm (TNN) regularization based on the t-SVD has been widely used in various low tubal-rank tensor recovery tasks. However, these models usually require the data to vary smoothly along the third dimension to ensure a low-rank structure. In this paper, we propose a new data-dependent definition of tensor rank, named the tensor Q-rank, based on a learnable orthogonal matrix \(\mathbf{Q}\), and further introduce a unified data-dependent low-rank tensor recovery model. Under the low-rank hypothesis, we introduce two interpretable methods for selecting \(\mathbf{Q}\), under which the data tensor may exhibit a more pronounced low tensor Q-rank structure than low tubal-rank structure. Specifically, maximizing the variance of the singular value distribution leads to the Variance Maximization Tensor Q-Nuclear norm (VMTQN), while minimizing the nuclear norm via manifold optimization leads to the Manifold Optimization Tensor Q-Nuclear norm (MOTQN). We then apply both models to the low-rank tensor completion problem, give an effective algorithm, and briefly analyze why our method works better than TNN-based methods on complex data at low sampling rates. Finally, experimental results on real-world datasets demonstrate the superiority of the proposed models over other tensor-rank regularization models in the tensor completion problem.
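To make the definition concrete, the following is a minimal NumPy sketch, assuming (in line with the t-SVD family, where \(\mathbf{Q}\) is a fixed DFT matrix) that the tensor Q-rank of \(\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}\) is the sum of the matrix ranks of the frontal slices of \(\mathcal{X}\times_3\mathbf{Q}\). The SVD-based choice of \(\mathbf{Q}\) below is only an illustrative stand-in for the paper's learned \(\mathbf{Q}\); the helpers `mode3_product` and `tensor_q_rank` are hypothetical, not the authors' code.

```python
import numpy as np

def mode3_product(X, Q):
    """Mode-3 transform: frontal slice l of the result is sum_k X[:, :, k] * Q[k, l]."""
    return np.einsum('ijk,kl->ijl', X, Q)

def tensor_q_rank(X, Q, tol=1e-8):
    """Assumed Q-rank: sum of the matrix ranks of the frontal slices of X x_3 Q."""
    Xh = mode3_product(X, Q)
    return sum(int(np.linalg.matrix_rank(Xh[:, :, i], tol=tol))
               for i in range(Xh.shape[2]))

rng = np.random.default_rng(0)
n1, n2, n3 = 10, 10, 8

# Toy tensor whose mode-3 fibers lie in a 2-dimensional subspace,
# while every frontal slice is a generic rank-2 matrix:
M1 = np.outer(rng.standard_normal(n1), rng.standard_normal(n2))
M2 = np.outer(rng.standard_normal(n1), rng.standard_normal(n2))
v1, v2 = rng.standard_normal(n3), rng.standard_normal(n3)
X = M1[:, :, None] * v1[None, None, :] + M2[:, :, None] * v2[None, None, :]

# Fixed transform (identity, i.e. no mixing along the third mode):
rank_id = tensor_q_rank(X, np.eye(n3))

# Data-dependent orthogonal Q: left singular vectors of the mode-3 unfolding,
# an illustrative stand-in for the learned Q. It compresses the mode-3 fibers
# so that all but two frontal slices of X x_3 Q vanish.
X3 = X.reshape(n1 * n2, n3).T            # mode-3 unfolding, shape (n3, n1*n2)
U, _, _ = np.linalg.svd(X3, full_matrices=True)
rank_learned = tensor_q_rank(X, U)
```

On this toy example, the data-dependent \(\mathbf{Q}\) concentrates the tensor into two frontal slices of rank at most 2 each, so the Q-rank is at most 4, whereas the fixed transform sees one rank-2 slice per frontal index; this mirrors the advantage the summary attributes to VMTQN/MOTQN when the data do not vary smoothly along the third dimension.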

MSC:

68T05 Learning and adaptive systems in artificial intelligence

Software:

COIL-20; CIFAR; tproduct
