
Bibliography

Published online by Cambridge University Press:  30 June 2022

Romain Couillet, Université Grenoble Alpes
Zhenyu Liao, Huazhong University of Science and Technology, China

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2022


References

Adamczak, Radoslaw. On the Marcenko-Pastur and Circular Laws for Some Classes of Random Matrices with Dependent Entries. Electronic Journal of Probability, 16:1065–95, 2011. ISSN 1083-6489. https://doi.org/10.1214/ejp.v16-899.
Adamic, Lada A. and Glance, Natalie. The Political Blogosphere and the 2004 U.S. Election: Divided They Blog. In LinkKDD'05: Proceedings of the 3rd International Workshop on Link Discovery, pages 36–43. ACM, 2005. ISBN 9781595932151. https://doi.org/10.1145/1134271.1134277.
Adlam, Ben, Levinson, Jake, and Pennington, Jeffrey. A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning. 2019. https://arxiv.org/abs/1912.00827.
Adlam, Ben and Pennington, Jeffrey. The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 74–84. PMLR, 2020. http://proceedings.mlr.press/v119/adlam20a.html.
Advani, Madhu S., Saxe, Andrew M., and Sompolinsky, Haim. High-Dimensional Dynamics of Generalization Error in Neural Networks. Neural Networks, 132:428–46, 2020. ISSN 0893-6080. https://doi.org/10.1016/j.neunet.2020.08.022.
Ajanki, Oskari, Erdos, Laszlo, and Kruger, Torben. Quadratic Vector Equations on Complex Upper Half-Plane. Memoirs of the American Mathematical Society, 261(1261), 2019. ISSN 0065-9266. https://doi.org/10.1090/memo/1261.
Akhiezer, Naum Ilich and Glazman, Izrail Markovich. Theory of Linear Operators in Hilbert Space. Dover Books on Mathematics. Dover Publications, 2013. ISBN 9780486677484. http://cds.cern.ch/record/2009887.
Ali, Hafiz Tiomoko and Couillet, Romain. Improved Spectral Community Detection in Large Heterogeneous Networks. Journal of Machine Learning Research, 18(225):1–49, 2018. http://jmlr.org/papers/v18/17-247.html.
Allen-Zhu, Zeyuan, Li, Yuanzhi, and Liang, Yingyu. Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. In NIPS'19: Advances in Neural Information Processing Systems, volume 32, pages 6158–69. Curran Associates, Inc., 2019. https://proceedings.neurips.cc/paper/2019/file/62dad6e273d32235ae02b7d321578ee8-Paper.pdf.
Amini, Arash A., Chen, Aiyou, Bickel, Peter J., and Levina, Elizaveta. Pseudo-likelihood Methods for Community Detection in Large Sparse Networks. The Annals of Statistics, 41(4):2097–122, 2013. ISSN 0090-5364. https://doi.org/10.1214/13-aos1138.
Anderson, Greg W., Guionnet, Alice, and Zeitouni, Ofer. An Introduction to Random Matrices, volume 118 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2010. ISBN 9780511801334. https://doi.org/10.1017/cbo9780511801334.
Anderson, Theodore Wilbur. Asymptotic Theory for Principal Component Analysis. Annals of Mathematical Statistics, 34(1):122–48, 1963.
Andrzejak, Ralph G., Lehnertz, Klaus, Mormann, Florian et al. Indications of Nonlinear Deterministic and Finite-Dimensional Structures in Time Series of Brain Electrical Activity: Dependence on Recording Region and Brain State. Physical Review E, 64(6):061907, 2001. ISSN 1539-3755. https://doi.org/10.1103/physreve.64.061907.
Arnold, Ludwig, Gundlach, Volker Matthias, and Demetrius, Lloyd. Evolutionary Formalism for Products of Positive Random Matrices. The Annals of Applied Probability, 4(3):859–901, 1994. ISSN 1050-5164. https://doi.org/10.1214/aoap/1177004975.
Arora, Sanjeev, Du, Simon S., Hu, Wei, Li, Zhiyuan, and Wang, Ruosong. Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 322–32. 2019a. http://proceedings.mlr.press/v97/arora19a.html.
Arora, Sanjeev, Du, Simon S., Hu, Wei et al. On Exact Computation with an Infinitely Wide Neural Net. In NIPS'19: Advances in Neural Information Processing Systems, volume 32, pages 8141–50. Curran Associates, Inc., 2019b. https://proceedings.neurips.cc/paper/2019/file/dbc4d84bfcfe2284ba11beffb853a8c4-Paper.pdf.
Arora, Sanjeev, Ge, Rong, Ma, Tengyu, and Moitra, Ankur. Simple, Efficient, and Neural Algorithms for Sparse Coding. In Proceedings of the 28th Conference on Learning Theory, volume 40 of Proceedings of Machine Learning Research, pages 113–49, Paris, France, 2015. http://proceedings.mlr.press/v40/Arora15.html.
Arous, Gerard Ben and Peche, Sandrine. Universality of Local Eigenvalue Statistics for Some Sample Covariance Matrices. Communications on Pure and Applied Mathematics, 58(10):1316–57, 2005. ISSN 1097-0312. https://doi.org/10.1002/cpa.20070.
Au, Benson, Cebron, Guillaume, Dahlqvist, Antoine, Gabriel, Franck, and Male, Camille. Large Permutation Invariant Random Matrices Are Asymptotically Free Over the Diagonal. 2018. https://arxiv.org/abs/1805.07045.
Auguin, Nicolas, Morales-Jimenez, David, McKay, Matthew R., and Couillet, Romain. Large-Dimensional Behavior of Regularized Maronna's M-Estimators of Covariance Matrices. IEEE Transactions on Signal Processing, 66(13):3529–42, 2018. ISSN 1053-587X. https://doi.org/10.1109/tsp.2018.2831629.
Avrachenkov, Konstantin, Mishenin, Alexey, Gonçalves, Paulo, and Sokol, Marina. Generalized Optimization Framework for Graph-Based Semi-supervised Learning. In SDM'12: Proceedings of the 2012 SIAM International Conference on Data Mining, pages 966–74. SIAM, 2012. ISBN 9781611972320. https://doi.org/10.1137/1.9781611972825.83.
Bai, Zhidong and Silverstein, Jack W. No Eigenvalues Outside the Support of the Limiting Spectral Distribution of Large-Dimensional Sample Covariance Matrices. The Annals of Probability, 26(1):316–45, 1998. ISSN 0091-1798. https://doi.org/10.1214/aop/1022855421.
Bai, Zhidong and Silverstein, Jack W. Exact Separation of Eigenvalues of Large Dimensional Sample Covariance Matrices. The Annals of Probability, 27(3):1536–55, 1999. ISSN 0091-1798. https://doi.org/10.1214/aop/1022677458.
Bai, Zhidong and Silverstein, Jack W. CLT for Linear Spectral Statistics of Large-Dimensional Sample Covariance Matrices. The Annals of Probability, 32(1A):553–605, 2004. ISSN 0091-1798. https://doi.org/10.1214/aop/1078415845.
Bai, Zhidong and Silverstein, Jack W. Spectral Analysis of Large Dimensional Random Matrices, volume 20 of Springer Series in Statistics. Springer-Verlag New York, 2nd edition, 2010. ISBN 9781441906601. https://doi.org/10.1007/978-1-4419-0661-8.
Bai, Zhidong and Yao, Jian-feng. Central Limit Theorems for Eigenvalues in a Spiked Population Model. Annales de l'Institut Henri Poincare, Probabilites et Statistiques, 44(3):447–74, 2008. ISSN 0246-0203. https://doi.org/10.1214/07-aihp118.
Bai, Zhidong, Silverstein, Jack W., and Yin, Y. Q. A Note on the Largest Eigenvalue of a Large Dimensional Sample Covariance Matrix. Journal of Multivariate Analysis, 26(2):166–8, 1988. ISSN 0047-259X. https://doi.org/10.1016/0047-259x(88)90078-4.
Baik, Jinho and Silverstein, Jack W. Eigenvalues of Large Sample Covariance Matrices of Spiked Population Models. Journal of Multivariate Analysis, 97(6):1382–408, 2006. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2005.08.003.
Baik, Jinho, Arous, Gerard Ben, and Peche, Sandrine. Phase Transition of the Largest Eigenvalue for Nonnull Complex Sample Covariance Matrices. The Annals of Probability, 33(5):1643–97, 2005. ISSN 0091-1798. https://doi.org/10.1214/009117905000000233.
Baldi, Pierre, Sadowski, Peter, and Lu, Zhiqin. Learning in the Machine: Random Backpropagation and the Deep Learning Channel. Artificial Intelligence, 260:1–35, 2018. ISSN 0004-3702. https://doi.org/10.1016/j.artint.2018.03.003.
Bandeira, Afonso S., Lodhia, Asad, and Rigollet, Philippe. Marcenko-Pastur Law for Kendall's Tau. Electronic Communications in Probability, 22, 2017. ISSN 1083-589X. https://doi.org/10.1214/17-ecp59.
Baraniuk, Richard G. Compressive Sensing. IEEE Signal Processing Magazine, 24(4):118–21, 2007. ISSN 1053-5888. https://doi.org/10.1109/msp.2007.4286571.
Bartlett, Peter L., Foster, Dylan J., and Telgarsky, Matus J. Spectrally-normalized Margin Bounds for Neural Networks. In NIPS'17: Advances in Neural Information Processing Systems, volume 30, pages 6240–9. Curran Associates, Inc., 2017. https://proceedings.neurips.cc/paper/2017/file/b22b257ad0519d4500539da3c8bcf4dd-Paper.pdf.
Bauschke, Heinz H. and Combettes, Patrick L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Number 2 in CMS Books in Mathematics. Springer International Publishing, 2nd edition, 2017. ISBN 9783319483108. https://doi.org/10.1007/978-3-319-48311-5.
Bejaoui, Amine, Elkhalil, Khalil, Kammoun, Abla, Alouini, Mohamed-Slim, and Al-Naffouri, Tareq. Improved Design of Quadratic Discriminant Analysis Classifier in Unbalanced Settings. 2020. https://arxiv.org/abs/2006.06355.
Belkin, Mikhail and Niyogi, Partha. Semi-supervised Learning on Riemannian Manifolds. Machine Learning, 56(1-3):209–39, 2004. ISSN 0885-6125. https://doi.org/10.1023/b:mach.0000033120.25363.1e.
Belkin, Mikhail, Matveeva, Irina, and Niyogi, Partha. Regularization and Semi-supervised Learning on Large Graphs. In International Conference on Computational Learning Theory (COLT), COLT'04, pages 624–38. Springer, 2004. https://doi.org/10.1007/978-3-540-27819-1_43.
Belkin, Mikhail, Hsu, Daniel, Ma, Siyuan, and Mandal, Soumik. Reconciling Modern Machine-Learning Practice and the Classical Bias-Variance Trade-off. Proceedings of the National Academy of Sciences, 116(32):15849–54, 2019. ISSN 0027-8424. https://doi.org/10.1073/pnas.1903070116.
Benaych-Georges, Florent and Couillet, Romain. Spectral Analysis of the Gram Matrix of Mixture Models. ESAIM: Probability and Statistics, 20:217–37, 2016. ISSN 1292-8100. https://doi.org/10.1051/ps/2016007.
Benaych-Georges, Florent and Nadakuditi, Raj Rao. The Eigenvalues and Eigenvectors of Finite, Low Rank Perturbations of Large Random Matrices. Advances in Mathematics, 227(1):494–521, 2011. ISSN 0001-8708. https://doi.org/10.1016/j.aim.2011.02.007.
Benaych-Georges, Florent and Nadakuditi, Raj Rao. The Singular Values and Vectors of Low Rank Perturbations of Large Rectangular Random Matrices. Journal of Multivariate Analysis, 111:120–35, 2012. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2012.04.019.
Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–828, 2013. ISSN 0162-8828. https://doi.org/10.1109/tpami.2013.50.
Benigni, Lucas and Peche, Sandrine. Eigenvalue Distribution of Nonlinear Models of Random Matrices. 2019. https://arxiv.org/abs/1904.03090.
Bianchi, Pascal, Debbah, Merouane, and Najim, Jamal. Asymptotic Independence in the Spectrum of the Gaussian Unitary Ensemble. Electronic Communications in Probability, 15:376–95, 2010. ISSN 1083-589X. https://doi.org/10.1214/ecp.v15-1568.
Bianchi, Pascal, Debbah, Merouane, Maida, Mylene, and Najim, Jamal. Performance of Statistical Tests for Single-Source Detection using Random Matrix Theory. IEEE Transactions on Information Theory, 57(4):2400–19, 2011. ISSN 0018-9448. https://doi.org/10.1109/tit.2011.2111710.
Biane, Philippe. Free Probability for Probabilists. 1998. https://arxiv.org/abs/math/9809193.
Bietti, Alberto and Mairal, Julien. On the Inductive Bias of Neural Tangent Kernels. In NIPS'19: Advances in Neural Information Processing Systems, volume 32, pages 12893–904. Curran Associates, Inc., 2019. https://proceedings.neurips.cc/paper/2019/file/c4ef9c39b300931b69a36fb3dbb8d60e-Paper.pdf.
Billingsley, Patrick. Probability and Measure. Wiley Series in Probability and Statistics. John Wiley & Sons, Ltd, 3rd edition, 2012. ISBN 9781118122372. www.wiley.com/en-us/Probability+and+Measure%2C+Anniversary+Edition-p-9781118122372.
Bishop, Christopher M. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer-Verlag New York, 1st edition, 2006. ISBN 0387310738. www.springer.com/cn/book/9780387310732.
Bordenave, Charles. Eigenvalues of Euclidean Random Matrices. Random Structures & Algorithms, 33(4):515–32, 2008. ISSN 1098-2418. https://doi.org/10.1002/rsa.20228.
Bordenave, Charles and Chafai, Djalil. Modern Aspects of Random Matrix Theory. Proceedings of Symposia in Applied Mathematics, pages 1–34, 2014. ISSN 0160-7634. https://doi.org/10.1090/psapm/072/00617.
Bordenave, Charles and Lelarge, Marc. Resolvent of Large Random Graphs. Random Structures & Algorithms, 37(3):332–52, 2010. ISSN 1098-2418. https://doi.org/10.1002/rsa.20313.
Bordenave, Charles, Lelarge, Marc, and Salez, Justin. The Rank of Diluted Random Graphs. The Annals of Probability, 39(3):1097–121, 2011. ISSN 0091-1798. https://doi.org/10.1214/10-aop567.
Borgs, Christian, Chayes, Jennifer T., Cohn, Henry, and Zhao, Yufei. An $L^p$ Theory of Sparse Graph Convergence I: Limits, Sparse Random Graph Models, and Power Law Distributions. Transactions of the American Mathematical Society, 372(5):3019–62, 2019. ISSN 0002-9947. https://doi.org/10.1090/tran/7543.
Boucheron, Stephane, Lugosi, Gabor, and Massart, Pascal. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013. ISBN 9780199535255. https://doi.org/10.1093/acprof:oso/9780199535255.001.0001.
Boyd, Stephen and Vandenberghe, Lieven. Convex Optimization. Cambridge University Press, 2004.
Bray, Alan J. and Dean, David S. Statistics of Critical Points of Gaussian Fields on Large-Dimensional Spaces. Physical Review Letters, 98(15):150201, 2007. ISSN 0031-9007. https://doi.org/10.1103/physrevlett.98.150201.
Brock, Andrew, Donahue, Jeff, and Simonyan, Karen. Large Scale GAN Training for High Fidelity Natural Image Synthesis. In International Conference on Learning Representations, ICLR'19, 2019. https://openreview.net/forum?id=B1xsqj09Fm.
Bun, Joel, Bouchaud, Jean-Philippe, and Potters, Marc. Cleaning Large Correlation Matrices: Tools from Random Matrix Theory. Physics Reports, 666:1–109, 2017. ISSN 0370-1573. https://doi.org/10.1016/j.physrep.2016.10.005.
Canaday, Daniel M. Modeling and Control of Dynamical Systems with Reservoir Computing. PhD thesis, 2019.
Candes, Emmanuel J. and Tao, Terence. Decoding by Linear Programming. IEEE Transactions on Information Theory, 51(12):4203–15, 2005.
Candes, Emmanuel J. The Restricted Isometry Property and Its Implications for Compressed Sensing. Comptes Rendus Mathematique, 346(9-10):589–92, 2008. ISSN 1631-073X. https://doi.org/10.1016/j.crma.2008.03.014.
Candes, Emmanuel J. and Sur, Pragya. The Phase Transition for the Existence of the Maximum Likelihood Estimate in High-Dimensional Logistic Regression. The Annals of Statistics, 48(1):27–42, 2020. ISSN 0090-5364. https://doi.org/10.1214/18-AOS1789.
Candes, Emmanuel J., Li, Xiaodong, and Soltanolkotabi, Mahdi. Phase Retrieval via Wirtinger Flow: Theory and Algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015. ISSN 0018-9448. https://doi.org/10.1109/tit.2015.2399924.
Capitaine, Mireille. Exact Separation Phenomenon for the Eigenvalues of Large Information-plus-Noise Type Matrices. Application to Spiked Models. Indiana University Mathematics Journal, 63(6):1875–910, 2014. ISSN 0022-2518. https://doi.org/10.1512/iumj.2014.63.5432.
Chen, Minmin, Pennington, Jeffrey, and Schoenholz, Samuel. Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 873–82, Stockholm, Sweden, 2018. PMLR. http://proceedings.mlr.press/v80/chen18i.html.
Chen, Yuxin and Candes, Emmanuel J. Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems. Communications on Pure and Applied Mathematics, 70(5):822–83, 2017. ISSN 1097-0312. https://doi.org/10.1002/cpa.21638.
Cheng, Xiuyuan and Singer, Amit. The Spectrum of Random Inner-Product Kernel Matrices. Random Matrices: Theory and Applications, 02(04):1350010, 2013. ISSN 2010-3263. https://doi.org/10.1142/s201032631350010x.
Chiani, Marco. Distribution of the Largest Eigenvalue for Real Wishart and Gaussian Random Matrices and a Simple Approximation for the Tracy-Widom Distribution. Journal of Multivariate Analysis, 129:69–81, 2014. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2014.04.002.
Chizat, Lénaïc, Oyallon, Edouard, and Bach, Francis. On Lazy Training in Differentiable Programming. In NIPS'19: Advances in Neural Information Processing Systems, volume 32, pages 2937–47. Curran Associates, Inc., 2019. https://proceedings.neurips.cc/paper/2019/file/ae614c557843b1df326cb29c57225459-Paper.pdf.
Choromanska, Anna, Henaff, Mikael, Mathieu, Michael, Arous, Gerard Ben, and LeCun, Yann. The Loss Surfaces of Multilayer Networks. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine Learning Research, pages 192–204, San Diego, California, USA, 2015. PMLR. http://proceedings.mlr.press/v38/choromanska15.html.
Chung, Fan R. K. Spectral Graph Theory. CBMS Regional Conference Series in Mathematics, 1996. ISSN 0160-7642. https://doi.org/10.1090/cbms/092.
Clanuwat, Tarin, Bober-Irizar, Mikel, Kitamoto, Asanobu et al. Deep Learning for Classical Japanese Literature. 2018. https://doi.org/10.20676/00000341. https://arxiv.org/abs/1812.01718.
Coja-Oghlan, Amin and Lanka, Andre. Finding Planted Partitions in Random Graphs with General Degree Distributions. SIAM Journal on Discrete Mathematics, 23(4):1682–714, 2010. ISSN 0895-4801. https://doi.org/10.1137/070699354.
Coste, Simon and Zhu, Yizhe. Eigenvalues of the Non-backtracking Operator Detached from the Bulk. Random Matrices: Theory and Applications, page 2150028, 2020. ISSN 2010-3263. https://doi.org/10.1142/s2010326321500283.
Couillet, Romain. Robust Spiked Random Matrices and a Robust G-MUSIC Estimator. Journal of Multivariate Analysis, 140:139–61, 2015. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2015.05.009.
Couillet, Romain and Benaych-Georges, Florent. Kernel Spectral Clustering of Large Dimensional Data. Electronic Journal of Statistics, 10(1):1393–454, 2016. ISSN 1935-7524. https://doi.org/10.1214/16-ejs1144.
Couillet, Romain and Debbah, Merouane. Random Matrix Methods for Wireless Communications. Cambridge University Press, 2011. ISBN 9780511994746. https://doi.org/10.1017/cbo9780511994746.
Couillet, Romain and Hachem, Walid. Fluctuations of Spiked Random Matrix Models and Failure Diagnosis in Sensor Networks. IEEE Transactions on Information Theory, 59(1):509–25, 2013. ISSN 0018-9448. https://doi.org/10.1109/tit.2012.2218572.
Couillet, Romain and Hachem, Walid. Analysis of the Limiting Spectral Measure of Large Random Matrices of the Separable Covariance Type. Random Matrices: Theory and Applications, 03(04):1450016, 2014. ISSN 2010-3263. https://doi.org/10.1142/s2010326314500166.
Couillet, Romain and Kammoun, Abla. Random Matrix Improved Subspace Clustering. In 2016 50th Asilomar Conference on Signals, Systems and Computers, pages 90–4. IEEE, 2016. ISBN 9781538639559. https://doi.org/10.1109/acssc.2016.7869000.
Couillet, Romain and McKay, Matthew. Large Dimensional Analysis and Optimization of Robust Shrinkage Covariance Matrix Estimators. Journal of Multivariate Analysis, 131:99–120, 2014. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2014.06.018.
Couillet, Romain, Debbah, Merouane, and Silverstein, Jack W. A Deterministic Equivalent for the Analysis of Correlated MIMO Multiple Access Channels. IEEE Transactions on Information Theory, 57(6):3493–514, 2011. ISSN 0018-9448. https://doi.org/10.1109/tit.2011.2133151.
Couillet, Romain, Hoydis, Jakob, and Debbah, Merouane. Random Beamforming over Quasi-static and Fading Channels: A Deterministic Equivalent Approach. IEEE Transactions on Information Theory, 58(10):6392–425, 2012. ISSN 0018-9448. https://doi.org/10.1109/tit.2012.2201913.
Couillet, Romain, Pascal, Frederic, and Silverstein, Jack W. The Random Matrix Regime of Maronna's M-Estimator with Elliptically Distributed Samples. Journal of Multivariate Analysis, 139:56–78, 2015. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2015.02.020.
Couillet, Romain, Kammoun, Abla, and Pascal, Frederic. Second Order Statistics of Robust Estimators of Scatter. Application to GLRT Detection for Elliptical Signals. Journal of Multivariate Analysis, 143:249–74, 2016a. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2015.08.021.
Couillet, Romain, Wainrib, Gilles, Sevi, Harry, and Ali, Hafiz Tiomoko. The Asymptotic Performance of Linear Echo State Neural Networks. Journal of Machine Learning Research, 17(178):1–35, 2016b. http://jmlr.org/papers/v17/16-076.html.
Couillet, Romain, Tiomoko, Malik, Zozor, Steeve, and Moisan, Eric. Random Matrix-Improved Estimation of Covariance Matrix Distances. Journal of Multivariate Analysis, 174:104531, 2019. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2019.06.009.
Couillet, Romain, Cinar, Yagmur Gizem, Gaussier, Eric, and Imran, Muhammad. Word Representations Concentrate and This Is Good News! In CoNLL'20: Proceedings of the 24th Conference on Computational Natural Language Learning, pages 325–34. Association for Computational Linguistics, 2020. https://doi.org/10.18653/v1/2020.conll-1.25. www.aclweb.org/anthology/2020.conll-1.25.
Cox, Michael A. A. and Cox, Trevor F. Multidimensional Scaling. In Handbook of Data Visualization, pages 315–47. Springer, 2008.
Dalal, Navneet and Triggs, Bill. Histograms of Oriented Gradients for Human Detection. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 1:886–93, 2005. https://doi.org/10.1109/cvpr.2005.177.
Dall'Amico, Lorenzo, Couillet, Romain, and Tremblay, Nicolas. Revisiting the Bethe-Hessian: Improved Community Detection in Sparse Heterogeneous Graphs. In NIPS'19: Advances in Neural Information Processing Systems, volume 32, pages 4037–47. Curran Associates, Inc., 2019. https://proceedings.neurips.cc/paper/2019/file/3e6260b81898beacda3d16db379ed329-Paper.pdf.
Dall'Amico, Lorenzo, Couillet, Romain, and Tremblay, Nicolas. A Unified Framework for Spectral Clustering in Sparse Graphs. Journal of Machine Learning Research, 22(217):1–56, 2021. http://jmlr.org/papers/v22/20-261.html.
Dauphin, Yann N., Pascanu, Razvan, Gulcehre, Caglar et al. Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-convex Optimization. In NIPS'14: Advances in Neural Information Processing Systems, volume 27, pages 2933–41. Curran Associates, Inc., 2014. https://proceedings.neurips.cc/paper/2014/file/17e23e50bedc63b4095e3d8204ce063b-Paper.pdf.
Davis, Chandler. All Convex Invariant Functions of Hermitian Matrices. Archiv der Mathematik, 8(4):276–8, 1957. ISSN 0003-889X. https://doi.org/10.1007/bf01898787.
Debbah, Merouane, Hachem, Walid, Loubaton, Philippe, and De Courville, Marc. MMSE Analysis of Certain Large Isometric Random Precoded Systems. IEEE Transactions on Information Theory, 49(5):1293, 2003. ISSN 0018-9448. https://doi.org/10.1109/tit.2003.810641.
Decelle, Aurelien, Krzakala, Florent, Moore, Cristopher, and Zdeborova, Lenka. Inference and Phase Transitions in the Detection of Modules in Sparse Networks. Physical Review Letters, 107(6):065701, 2011. ISSN 0031-9007. https://doi.org/10.1103/physrevlett.107.065701.
Deng, Jia, Dong, Wei, Socher, Richard et al. ImageNet: A Large-Scale Hierarchical Image Database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–55, 2009. ISSN 1063-6919. https://doi.org/10.1109/cvpr.2009.5206848.
Deng, Zeyu, Kammoun, Abla, and Thrampoulidis, Christos. A Model of Double Descent for High-Dimensional Binary Linear Classification. Information and Inference: A Journal of the IMA, 2021. https://doi.org/10.1093/imaiai/iaab002.
Do, Yen and Vu, Van. The Spectrum of Random Kernel Matrices: Universality Results for Rough and Varying Kernels. Random Matrices: Theory and Applications, 02(03):1350005, 2013. ISSN 2010-3263. https://doi.org/10.1142/s2010326313500056.Google Scholar
Dokmanic, Ivan, Parhizkar, Reza, Ranieri, Juri, and Vetterli, Martin. Euclidean Distance Matrices: Essential Theory, Algorithms, and Applications. IEEE Signal Processing Magazine, 32(6): 12-30, 2015. ISSN 1053-5888. https://doi.org/10.1109/msp.2015.2398954.Google Scholar
Domingos, Pedro. A Few Useful Things to Know about Machine Learning. Communications of the ACM, 55(10):78-87, 2012. ISSN 0001-0782. https://doi.org/10.1145/2347736.2347755.Google Scholar
Donoho, David and Montanari, Andrea. High Dimensional Robust M-Estimation: Asymptotic Variance via Approximate Message Passing. Probability Theory and Related Fields, 166 (3-4):935-69, 2016. ISSN 0178-8051. https://doi.org/10.1007/s00440-015-0675-z.Google Scholar
Donoho, David, Gavish, Matan, and Iain, M. Johnstone. Optimal Shrinkage of Eigenvalues in the Spiked Covariance Model. The Annals of Statistics, 46(4):1742-78, 2018. ISSN 0090-5364. https://doi.org/10.1214/17-aos1601.Google Scholar
Donoho, David L. Compressed Sensing. IEEE Transactions on Information Theory, 52(4): 1289-306, 2006. ISSN 0018-9448. https://doi.org/10.1109/tit.2006.871582.Google Scholar
Brent, Dozier, R. and Silverstein, Jack W.. On the Empirical Distribution of Eigenvalues of Large Dimensional Information-Plus-Noise-Type Matrices. Journal of Multivariate Analysis, 98 (4):678-94, 2007. ISSN 0047-259X. https://doi.org/10.1016/jjmva.2006.09.006.Google Scholar
Du, Simon, Lee, Jason, Li, Haochuan, Wang, Liwei, and Zhai, Xiyu. Gradient Descent Finds Global Minima of Deep Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 1675-85. PMLR, 2019. http://proceedings.mlr.press/v97/du19c.html.Google Scholar
Dumont, Julien, Hachem, Walid, Lasaulce, Samson, Loubaton, Philippe, and Najim, Jamal. On the Capacity Achieving Covariance Matrix for Rician MIMO Channels: An Asymptotic Approach. IEEE Transactions on Information Theory, 56(3):1048-69, 2010. ISSN 0018-9448. https://doi.org/10.1109/tit.2009.2039063.Google Scholar
El Karoui, Noureddine. Spectrum Estimation for Large Dimensional Covariance Matrices using Random Matrix Theory. The Annals of Statistics, 36(6):2757-90, 2008. ISSN 0090-5364. https://doi.org/10.1214/07-aos581.Google Scholar
El Karoui, Noureddine. Concentration of Measure and Spectra of Random Matrices: Applications to Correlation Matrices, Elliptical Distributions and Beyond. The Annals of Applied Probability, 19(6):2362-405, 2009. ISSN 1050-5164. https://doi.org/10.1214/08-aap548.Google Scholar
El Karoui, Noureddine, Bean, Derek, Bickel, Peter J., Lim, Chinghway, and Yu, Bin. On Robust Regression with High-Dimensional Predictors. Proceedings of the National Academy of Sciences, 110(36):14557-62, 2013. ISSN 0027-8424. https://doi.org/10.1073/pnas.1307842110.Google Scholar
Elkhalil, Khalil, Kammoun, Abla, Zhang, Xiangliang, Mohamed-Slim Alouini, and Tareq Al- Naffouri. Risk Convergence of Centered Kernel Ridge Regression with Large Dimensional Data. IEEE Transactions on Signal Processing, 68:1574–88, 2019. ISSN 1053-587X. https://doi.org/10.1109/tsp.2020.2975939.Google Scholar
Elkhalil, Khalil, Kammoun, Abla, Couillet, Romain, Tareq Y Al-Naffouri, and Mohamed-Slim Alouini. A Large Dimensional Study of Regularized Discriminant Analysis. IEEE Transactions on Signal Processing, 68:2464–79, 2020. ISSN 1053-587X. https://doi.org/10.1109/tsp.2020.2984160.Google Scholar
Erdos, Laszlo. Universality of Wigner Random Matrices: A Survey of Recent Results. Russian Mathematical Surveys, 66(3):507-626, 2011. ISSN 0036-0279. https://doi.org/10.1070/rm2011v066n03abeh004749.Google Scholar
Erdos, Laszlo, Peche, Sandrine, Ramirez, Jose A., Schlein, Benjamin, Horng-Tzer, and Yau. Bulk Universality for Wigner Matrices. Communications on Pure and Applied Mathematics, 63 (7):895-925, 2010. ISSN 1097-0312. https://doi.org/10.1002/cpa.20317.Google Scholar
Fan, Zhou and Montanari, Andrea. The Spectral Norm of Random Inner-Product Kernel Matrices. Probability Theory and Related Fields, 173(1-2):27-85, 2019. ISSN 0178-8051. https://doi.org/10.1007/s00440-018-0830-4.Google Scholar
Fan, Zhou and Wang, Zhichao. Spectra of the Conjugate Kernel and Neural Tangent Kernel for Linear-Width Neural Networks. In Advances in Neural Information Processing Systems, volume 33, pages 7710-21. Curran Associates, Inc., 2020. https://proceedings.neurips.cc/paper/2020/file/572201a4497b0b9f02d4f279b09ec30d-Paper.pdf.Google Scholar
Fienup, James R. Phase Retrieval Algorithms: A Comparison. Applied Optics, 21(15):2758, 1982. ISSN 1539-4522. https://doi.org/10.1364/ao.21.002758.Google Scholar
Fix, Evelyn and Hodges, J. L.. Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties. International Statistical Review/Revue Internationale de Statistique, 57 (3):238-47, 1989. ISSN 0306-7734. https://doi.org/10.2307/1403797. www.jstor.org/stable/1403797.Google Scholar
Frankle, Jonathan and Carbin, Michael. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In International Conference on Learning Representations, ICLR’19, 2019. https://openreview.net/forum?id=rJl-b3RcF7.Google Scholar
Frenkel, Charlotte, Lefebvre, Martin, and Bol, David. Learning Without Feedback: Direct Random Target Projection as a Feedback-Alignment Algorithm with Layerwise Feedforward Training. 2019. https://arxiv.org/abs/1909.01311.Google Scholar
Friedman, Jerome, Hastie, Trevor, and Tibshirani, Robert. The Elements of Statistical Learning, volume 1 of Springer Series in Statistics. Springer-Verlag New York, 1 edition, 2001. ISBN 9781489905192. https://doi.org/10.1007/978-0-387-21606-5.Google Scholar
Ganguli, Surya, Huh, Dongsung, and Sompolinsky, Haim. Memory Traces in Dynamical Systems. Proceedings of the National Academy of Sciences, 105(48):18970-75, 2008. ISSN 0027-8424. https://doi.org/10.1073/pnas.0804451105.Google Scholar
Gelenbe, Erol. Learning in the Recurrent Random Neural Network. Neural Computation, 5(1):154-64, 1993. ISSN 0899-7667. https://doi.org/10.1162/neco.1993.5.1.154.Google Scholar
Gilboa, Dar, Chang, Bo, Chen, Minmin et al. Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs. 2019. https://arxiv.org/abs/1901.08987.Google Scholar
Girko, V. L. Circular Law. Theory of Probability & Its Applications, 29(4):694-706, 1985. ISSN 0040-585X. https://doi.org/10.1137/1129095.Google Scholar
Girko, Vyacheslav L. Theory of Stochastic Canonical Equations, volume 535 of Mathematics and Its Applications. Springer Netherlands, 1 edition, 2001. ISBN 978-94-010-0989-8. https://doi.org/10.1007/978-94-010-0989-8. www.springer.com/cn/book/9781402000751.Google Scholar
Glass, Leon and Mackey, Michael C. A Simple Model for Phase Locking of Biological Oscillators. Journal of Mathematical Biology, 7(4):339-352, 1979. ISSN 0303-6812. https://doi.org/10.1007/BF00275153.Google Scholar
Glorot, Xavier and Bengio, Yoshua. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249-56. JMLR Workshop and Conference Proceedings, 2010. http://proceedings.mlr.press/v9/glorot10a.html.Google Scholar
Goldberg, Andrew, Zhu, Xiaojin, Singh, Aarti, Xu, Zhiting, and Nowak, Robert. Multi-Manifold Semi-Supervised Learning. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of Machine Learning Research, pages 169-76, Hilton Clearwater Beach Resort, Clearwater Beach, Florida, USA, 2009. PMLR. http://proceedings.mlr.press/v5/goldberg09a.html.Google Scholar
Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi et al. Generative Adversarial Nets. In NIPS’14: Advances in Neural Information Processing Systems, volume 27, pages 2672-80. Curran Associates, Inc., 2014. https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf.Google Scholar
Gordon, Yehoram. Some Inequalities for Gaussian Processes and Applications. Israel Journal of Mathematics, 50(4):265-89, 1985. ISSN 0021-2172. https://doi.org/10.1007/bf02759761.Google Scholar
Gray, Robert M. Toeplitz and Circulant Matrices: A Review. Foundations and Trends in Communications and Information Theory, 2:155–239, 2006. ISSN 1567-2190. http://dx.doi.org/10.1561/0100000006.Google Scholar
Gulikers, Lennart, Lelarge, Marc, and Massoulié, Laurent. A Spectral Method for Community Detection in Moderately Sparse Degree-Corrected Stochastic Block Models. Advances in Applied Probability, 49(3):686-721, 2017. ISSN 0001-8678. https://doi.org/10.1017/apr.2017.18.Google Scholar
Haasdonk, Bernard. Feature Space Interpretation of SVMs with Indefinite Kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(4):482-92, 2005. ISSN 0162-8828. https://doi.org/10.1109/tpami.2005.78.Google Scholar
Hachem, Walid, Loubaton, Philippe, and Najim, Jamal. Deterministic Equivalents for Certain Functionals of Large Random Matrices. The Annals of Applied Probability, 17(3):875-930, 2007. ISSN 1050-5164. https://doi.org/10.1214/105051606000000925.Google Scholar
Hachem, Walid, Loubaton, Philippe, and Najim, Jamal. A CLT for Information-Theoretic Statistics of Gram Random Matrices with a Given Variance Profile. Annals of Applied Probability, 18(6):2071-130, 2008. ISSN 1050-5164. https://doi.org/10.1214/08-aap515.Google Scholar
Hachem, Walid, Moustakas, Aris, and Pastur, Leonid A. The Shannon’s Mutual Information of a Multiple Antenna Time and Frequency Dependent Channel: An Ergodic Operator Approach. Journal of Mathematical Physics, 56(11):113501, 2015.Google Scholar
Hamilton, James Douglas. Time Series Analysis. Princeton University Press, Princeton, 1994. ISBN 978-0-691-21863-2. https://doi.org/10.1515/9780691218632. www.degruyter.com/princetonup/view/title/592052.Google Scholar
Han, Donghyeon, Lee, Jinsu, Lee, Jinmook, and Yoo, Hoi-Jun. A 1.32 TOPS/W Energy Efficient Deep Neural Network Learning Processor with Direct Feedback Alignment based Heterogeneous Core Architecture. In 2019 Symposium on VLSI Circuits, volume 00 of 2019 Symposium on VLSI Circuits, pages C304-C305, 2019. ISBN 9781728109145. https://doi.org/10.23919/vlsic.2019.8778006.Google Scholar
Hanin, Boris and Nica, Mihai. Finite Depth and Width Corrections to the Neural Tangent Kernel. In International Conference on Learning Representations, 2020. https://openreview.net/forum?id=SJgndT4KwB.Google Scholar
Hastie, Trevor, Montanari, Andrea, Rosset, Saharon, and Tibshirani, Ryan J. Surprises in High-Dimensional Ridgeless Least Squares Interpolation. 2019. https://arxiv.org/abs/1903.08560.Google Scholar
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In 2015 IEEE International Conference on Computer Vision (ICCV), 2015 IEEE International Conference on Computer Vision (ICCV), pages 1026-34, 2015. https://doi.org/10.1109/iccv.2015.123.Google Scholar
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-8. IEEE, 2016. ISBN 9781467388528. https://doi.org/10.1109/cvpr.2016.90.Google Scholar
Hiai, Fumio and Petz, Dénes. The Semicircle Law, Free Random Variables and Entropy. Mathematical Surveys and Monographs. American Mathematical Society, 2006. ISBN 9780821841358. https://doi.org/10.1090/surv/077.Google Scholar
Hinton, Geoffrey E. and Roweis, Sam. Stochastic Neighbor Embedding. In NIPS’03: Advances in Neural Information Processing Systems, volume 15, pages 857-64. MIT Press, 2003. https://proceedings.neurips.cc/paper/2002/file/6150ccc6069bea6b5716254057a194ef-Paper.pdf.Google Scholar
Hochreiter, Sepp and Schmidhuber, Jürgen. Long Short-Term Memory. Neural Computation, 9(8):1735-80, 1997. ISSN 0899-7667. https://doi.org/10.1162/neco.1997.9.8.1735.Google Scholar
Horn, Roger A. and Johnson, Charles R. Matrix Analysis. Cambridge University Press, 2 edition, 2012. ISBN 9780521548236. www.cambridge.org/9780521548236.Google Scholar
Houben, Sebastian, Stallkamp, Johannes, Salmen, Jan, Schlipsing, Marc, and Igel, Christian. Detection of Traffic Signs in Real-World Images: The German Traffic Sign Detection Benchmark. The 2013 International Joint Conference on Neural Networks (IJCNN), pages 1-8, 2013. https://doi.org/10.1109/ijcnn.2013.6706807.Google Scholar
Huang, Gao, Liu, Zhuang, van der Maaten, Laurens, and Weinberger, Kilian Q. Densely Connected Convolutional Networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261-9. IEEE, 2017. ISBN 9781538604588. https://doi.org/10.1109/cvpr.2017.243.Google Scholar
Huang, Guang-Bin, Zhou, Hongming, Ding, Xiaojian, and Zhang, Rui. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, 42(2):513-29, 2012. ISSN 1083-4419. https://doi.org/10.1109/tsmcb.2011.2168604.Google Scholar
Huber, Peter J. Robust Statistics. Wiley Series in Probability and Statistics. John Wiley & Sons, Ltd, 2011. ISBN 9780471725251. https://doi.org/10.1002/0471725250.Google Scholar
Ioffe, Sergey and Szegedy, Christian. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2015. https://arxiv.org/abs/1502.03167.Google Scholar
Jacot, Arthur, Gabriel, Franck, and Hongler, Clement. Neural Tangent Kernel: Convergence and Generalization in Neural Networks. In NIPS’18: Advances in Neural Information Processing Systems, volume 31, pages 8571-80. Curran Associates, Inc., 2018. https://proceedings.neurips.cc/paper/2018/file/5a4be1fa34e62bb8a6ec6b91d2462f5a-Paper.pdf.Google Scholar
Jain, Prateek, Netrapalli, Praneeth, and Sanghavi, Sujay. Low-Rank Matrix Completion Using Alternating Minimization. In STOC’13: Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 665-74, New York, NY, USA, 2013. Association for Computing Machinery. ISBN 9781450320290. https://doi.org/10.1145/2488608.2488693.Google Scholar
Joachims, Thorsten. Transductive Learning via Spectral Graph Partitioning. In ICML’03: Proceedings of the Twentieth International Conference on Machine Learning, volume 3, pages 290-297. AAAI Press, 2003. www.aaai.org/Library/ICML/2003/icml03-040.php.Google Scholar
Johnstone, Iain M. On the Distribution of the Largest Eigenvalue in Principal Components Analysis. The Annals of Statistics, 29(2):295-327, 2001. ISSN 0090-5364. https://doi.org/10.1214/aos/1009210544.Google Scholar
Johnstone, Iain M. Multivariate Analysis and Jacobi Ensembles: Largest Eigenvalue, Tracy-Widom Limits and Rates of Convergence. The Annals of Statistics, 36(6):2638-716, 2008. ISSN 0090-5364. https://doi.org/10.1214/08-aos605.Google Scholar
Joseph, Antony and Yu, Bin. Impact of Regularization on Spectral Clustering. The Annals of Statistics, 44(4):1765-91, 2016. ISSN 0090-5364. https://doi.org/10.1214/16-aos1447.Google Scholar
Kammoun, Abla and Alouini, Mohamed. On the Precise Error Analysis of Support Vector Machines. IEEE Open Journal of Signal Processing, pages 1-1, 2021. https://doi.org/10.1109/ojsp.2021.3051849.Google Scholar
Kammoun, Abla and Couillet, Romain. Subspace Kernel Spectral Clustering of Large Dimensional Data. 2017. www.laneas.com/sites/default/files/attachments-186/paper_kernel.pdf.Google Scholar
Kammoun, Abla, Kharouf, Malika, Hachem, Walid, and Najim, Jamal. A Central Limit Theorem for the SINR at the LMMSE Estimator Output for Large-Dimensional Signals. IEEE Transactions on Information Theory, 55(11):5048-63, 2009. ISSN 0018-9448. https://doi.org/10.1109/tit.2009.2030463.Google Scholar
Kammoun, Abla, Couillet, Romain, Pascal, Frédéric, and Alouini, Mohamed-Slim. Optimal Design of the Adaptive Normalized Matched Filter Detector Using Regularized Tyler Estimators. IEEE Transactions on Aerospace and Electronic Systems, 54(2):755-69, 2017. ISSN 0018-9251. https://doi.org/10.1109/taes.2017.2766538.Google Scholar
Kar, Purushottam and Karnick, Harish. Random Feature Maps for Dot Product Kernels. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, volume 22 of Proceedings of Machine Learning Research, pages 583-91, La Palma, Canary Islands, 2012. PMLR. http://proceedings.mlr.press/v22/kar12.html.Google Scholar
Karrer, Brian and Newman, Mark E. J. Stochastic Blockmodels and Community Structure in Networks. Physical Review E, 83(1):016107, 2011. ISSN 1539-3755. https://doi.org/10.1103/physreve.83.016107.Google Scholar
Kendall, Maurice G. A New Measure of Rank Correlation. Biometrika, 30(1/2):81-93, 1938. ISSN 0006-3444. https://doi.org/10.2307/2332226.Google Scholar
Keshavan, Raghunandan H., Montanari, Andrea, and Oh, Sewoong. Matrix Completion from a Few Entries. IEEE Transactions on Information Theory, 56(6):2980-98, 2010. ISSN 0018-9448. https://doi.org/10.1109/tit.2010.2046205.Google Scholar
Khorunzhy, Alexei M. and Pastur, Leonid A. On the Eigenvalue Distribution of the Deformed Wigner Ensemble of Random Matrices. Spectral Operator Theory and Related Topics, pages 97-127, 1994. ISSN 1051-8037. https://doi.org/10.1090/advsov/019/05.Google Scholar
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet Classification with Deep Convolutional Neural Networks. Communications of the ACM, 60(6):84-90, 2017. ISSN 0001-0782. https://doi.org/10.1145/3065386.Google Scholar
Krzakala, Florent, Moore, Cristopher, Mossel, Elchanan et al. Spectral Redemption in Clustering Sparse Networks. Proceedings of the National Academy of Sciences, 110(52):20935-40, 2013. ISSN 0027-8424. https://doi.org/10.1073/pnas.1312486110.Google Scholar
Laloux, Laurent, Cizeau, Pierre, Potters, Marc, and Bouchaud, Jean-Philippe. Random Matrix Theory and Financial Correlations. International Journal of Theoretical and Applied Finance, 3(3):391-7, 2000. ISSN 0219-0249. https://doi.org/10.1142/s0219024900000255.Google Scholar
LeCun, Yann, Bottou, Léon, Bengio, Yoshua, and Haffner, Patrick. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278-324, 1998. ISSN 0018-9219. https://doi.org/10.1109/5.726791.Google Scholar
Ledoit, Olivier and Péché, Sandrine. Eigenvectors of Some Large Sample Covariance Matrix Ensembles. Probability Theory and Related Fields, 151(1-2):233-64, 2011. ISSN 0178-8051. https://doi.org/10.1007/s00440-010-0298-3.Google Scholar
Ledoit, Olivier and Wolf, Michael. Nonlinear Shrinkage Estimation of Large-Dimensional Covariance Matrices. The Annals of Statistics, 40(2):1024-60, 2012. ISSN 0090-5364. https://doi.org/10.1214/12-aos989.Google Scholar
Ledoux, Michel. The Concentration of Measure Phenomenon. Mathematical Surveys and Monographs. American Mathematical Society, 2005. ISBN 9780821837924. https://doi.org/10.1090/surv/089.Google Scholar
Lee, Jaehoon, Xiao, Lechao, Schoenholz, Samuel S. et al. Wide Neural Networks of Any Depth Evolve as Linear Models under Gradient Descent. Journal of Statistical Mechanics: Theory and Experiment, 2020(12):124002, 2020. https://doi.org/10.1088/1742-5468/abc62b.Google Scholar
Lee, Kiryung, Li, Yanjun, Junge, Marius, and Bresler, Yoram. Blind Recovery of Sparse Signals From Subsampled Convolution. IEEE Transactions on Information Theory, 63(2):802-21, 2017. ISSN 0018-9448. https://doi.org/10.1109/tit.2016.2636204.Google Scholar
Lelarge, Marc and Miolane, Léo. Asymptotic Bayes Risk for Gaussian Mixture in a Semi-supervised Setting. 2019. https://arxiv.org/abs/1907.03792.Google Scholar
Leskovec, Jure and Krevl, Andrej. SNAP Datasets: Stanford Large Network Dataset Collection, 2014. http://snap.stanford.edu/data.Google Scholar
Li, Jian and Stoica, Petre. MIMO Radar with Colocated Antennas. IEEE Signal Processing Magazine, 24(5):106-14, 2007. ISSN 1053-5888. https://doi.org/10.1109/msp.2007.904812.Google Scholar
Li, Ker-Chau. On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein’s Lemma. Journal of the American Statistical Association, 87(420):1025-39, 1992. ISSN 0162-1459. https://doi.org/10.1080/01621459.1992.10476258.Google Scholar
Liao, Zhenyu and Couillet, Romain. The Dynamics of Learning: A Random Matrix Approach. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3072-81, Stockholmsmässan, Stockholm, Sweden, 2018a. PMLR. http://proceedings.mlr.press/v80/liao18b.html.Google Scholar
Liao, Zhenyu and Couillet, Romain. On the Spectrum of Random Features Maps of High Dimensional Data. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3063-71, Stockholmsmässan, Stockholm, Sweden, 2018b. PMLR. http://proceedings.mlr.press/v80/liao18a.html.Google Scholar
Liao, Zhenyu and Couillet, Romain. Inner-Product Kernels Are Asymptotically Equivalent to Binary Discrete Kernels. 2019a. https://arxiv.org/abs/1909.06788.Google Scholar
Liao, Zhenyu and Couillet, Romain. A Large Dimensional Analysis of Least Squares Support Vector Machines. IEEE Transactions on Signal Processing, 67(4):1065-74, 2019b. ISSN 1053-587X. https://doi.org/10.1109/tsp.2018.2889954.Google Scholar
Liao, Zhenyu and Mahoney, Michael W. Hessian Eigenspectra of More Realistic Nonlinear Models. 2021.Google Scholar
Liao, Zhenyu, Couillet, Romain, and Mahoney, Michael W. A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent. In Advances in Neural Information Processing Systems, volume 33, pages 13939-50. Curran Associates, Inc., 2020. https://proceedings.neurips.cc/paper/2020/file/a03fa30821986dff10fc66647c84c9c3-Paper.pdf.Google Scholar
Liao, Zhenyu, Couillet, Romain, and Mahoney, Michael W. Sparse Quantized Spectral Clustering. In The Ninth International Conference on Learning Representations (ICLR’2021), 2021.Google Scholar
Lillicrap, Timothy P., Cownden, Daniel, Tweed, Douglas B., and Akerman, Colin J. Random Synaptic Feedback Weights Support Error Backpropagation for Deep Learning. Nature Communications, 7(1):13276, 2016. https://doi.org/10.1038/ncomms13276.Google Scholar
Lim, Lek-Heng. Singular Values and Eigenvalues of Tensors: A Variational Approach. In 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2005, 2005 IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pages 129-32. IEEE, 2005. ISBN 9780780393228. https://doi.org/10.1109/camap.2005.1574201.Google Scholar
Ling, Zenan and Qiu, Robert C. Spectrum Concentration in Deep Residual Learning: A Free Probability Approach. IEEE Access, 7:105212–23, 2019. ISSN 2169-3536. https://doi.org/10.1109/access.2019.2931991.Google Scholar
Louart, Cosme and Couillet, Romain. Concentration of Measure and Large Random Matrices with an Application to Sample Covariance Matrices. 2018. https://arxiv.org/pdf/1805.08295.Google Scholar
Louart, Cosme, Liao, Zhenyu, and Couillet, Romain. A Random Matrix Approach to Neural Networks. Annals of Applied Probability, 28(2):1190-248, 2018. ISSN 1050-5164. https://doi.org/10.1214/17-AAP1328.Google Scholar
Loubaton, Philippe and Vallet, Pascal. Almost Sure Localization of the Eigenvalues in a Gaussian Information Plus Noise Model. Application to the Spiked Models. Electronic Journal of Probability, 16(70):1934-59, 2011. ISSN 1083-6489. https://doi.org/10.1214/EJP.v16-943.Google Scholar
Lozier, Daniel W. NIST Digital Library of Mathematical Functions. Annals of Mathematics and Artificial Intelligence, 38(1-3):105-19, 2003. ISSN 1012-2443. https://doi.org/10.1023/a:1022915830921.Google Scholar
Lu, Lu, Li, Geoffrey Ye, Swindlehurst, A. Lee, Ashikhmin, Alexei, and Zhang, Rui. An Overview of Massive MIMO: Benefits and Challenges. IEEE Journal of Selected Topics in Signal Processing, 8(5):742-58, 2014. ISSN 1932-4553. https://doi.org/10.1109/jstsp.2014.2317671.Google Scholar
Lu, Yue M. and Li, Gen. Phase Transitions of Spectral Initialization for High-Dimensional Non-convex Estimation. Information and Inference: A Journal of the IMA, 9(3):507-41, 2019. ISSN 2049-8772. https://doi.org/10.1093/imaiai/iaz020.Google Scholar
Luss, Ronny and D’Aspremont, Alexandre. Support Vector Machine Classification with Indefinite Kernels. In NIPS’08: Advances in Neural Information Processing Systems, volume 20, pages 953-60. Curran Associates, Inc., 2008. https://proceedings.neurips.cc/paper/2007/file/c0c7c76d30bd3dcaefc96f40275bdc0a-Paper.pdf.Google Scholar
von Luxburg, Ulrike. A Tutorial on Spectral Clustering. Statistics and Computing, 17(4):395-416, 2007. ISSN 0960-3174. https://doi.org/10.1007/s11222-007-9033-z.Google Scholar
von Luxburg, Ulrike, Belkin, Mikhail, and Bousquet, Olivier. Consistency of Spectral Clustering. The Annals of Statistics, 36(2):555-86, 2008. ISSN 0090-5364. https://doi.org/10.1214/009053607000000640.Google Scholar
Lytova, Anna and Pastur, Leonid. Central Limit Theorem for Linear Eigenvalue Statistics of Random Matrices with Independent Entries. The Annals of Probability, 37(5):1778-840, 2009. ISSN 0091-1798. https://doi.org/10.1214/09-aop452.Google Scholar
Ma, Zongming. Accuracy of the Tracy-Widom Limits for the Extreme Eigenvalues in White Wishart Matrices. Bernoulli, 18(1):322-59, 2012. ISSN 1350-7265. https://doi.org/10.3150/10-bej334.Google Scholar
Maas, Andrew L., Hannun, Awni Y., and Andrew, Y. Ng. Rectifier Nonlinearities Improve Neural Network Acoustic Models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing, ICML Workshop, page 3, 2013.Google Scholar
van der Maaten, Laurens and Hinton, Geoffrey. Visualizing Data using t-SNE. Journal of Machine Learning Research, 9:2579–605, 2008. http://jmlr.org/papers/v9/vandermaaten08a.html.Google Scholar
Mai, Xiaoyi. Methods of Random Matrices for Large Dimensional Statistical Learning. PhD thesis, 2019.Google Scholar
Mai, Xiaoyi and Couillet, Romain. A Random Matrix Analysis and Improvement of Semi-supervised Learning for Large Dimensional Data. Journal of Machine Learning Research, 19(79):1-27, 2018. http://jmlr.org/papers/v19/17-421.html.Google Scholar
Mai, Xiaoyi and Couillet, Romain. Consistent Semi-supervised Graph Regularization for High Dimensional Data. Journal of Machine Learning Research, 22(94):1-48, 2021. http://jmlr.org/papers/v22/19-081.html.Google Scholar
Mai, Xiaoyi and Liao, Zhenyu. High Dimensional Classification via Regularized and Unregularized Empirical Risk Minimization: Precise Error and Optimal Loss. 2019. https://arxiv.org/abs/1905.13742.Google Scholar
Manning, Christopher D., Schütze, Hinrich, and Raghavan, Prabhakar. Introduction to Information Retrieval. Cambridge University Press, 2008. ISBN 9780511809071. https://doi.org/10.1017/cbo9780511809071.Google Scholar
Marcenko, Vladimir A. and Pastur, Leonid Andreevich. Distribution of Eigenvalues for Some Sets of Random Matrices. Mathematics of the USSR-Sbornik, 1(4):457, 1967. ISSN 0025-5734. https://doi.org/10.1070/sm1967v001n04abeh001994.Google Scholar
Maronna, Ricardo A., Martin, R. Douglas, Yohai, Victor J., and Salibián-Barrera, Matías. Robust Statistics: Theory and Methods (with R). Wiley Series in Probability and Statistics. John Wiley & Sons, Ltd, 2 edition, 2018. ISBN 9781119214656. https://doi.org/10.1002/9781119214656.Google Scholar
Maronna, Ricardo Antonio. Robust M-Estimators of Multivariate Location and Scatter. The Annals of Statistics, 4(1):51-67, 1976. ISSN 0090-5364. https://doi.org/10.1214/aos/1176343347.Google Scholar
Massoulié, Laurent. Community Detection Thresholds and the Weak Ramanujan Property. In STOC’14: Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, pages 694-703, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450327107. https://doi.org/10.1145/2591796.2591857.Google Scholar
Mehta, Madan Lal and Gaudin, Michel. On the Density of Eigenvalues of a Random Matrix. Nuclear Physics, 18:420–27, 1960. ISSN 0029-5582. https://doi.org/10.1016/0029-5582(60)90414-4.Google Scholar
Mei, Song and Montanari, Andrea. The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve. Communications on Pure and Applied Mathematics, 2021. ISSN 0010-3640. https://doi.org/10.1002/cpa.22008.Google Scholar
Mestre, Xavier. Improved Estimation of Eigenvalues and Eigenvectors of Covariance Matrices Using Their Sample Estimates. IEEE Transactions on Information Theory, 54(11):5113-29, 2008. ISSN 0018-9448. https://doi.org/10.1109/tit.2008.929938.Google Scholar
Mestre, Xavier and Lagunas, Miguel Angel. Modified Subspace Algorithms for DoA Estimation with Large Arrays. IEEE Transactions on Signal Processing, 56(2):598-614, 2008. ISSN 1053-587X. https://doi.org/10.1109/tsp.2007.907884.Google Scholar
Mika, Sebastian, Rätsch, Gunnar, Weston, Jason, Schölkopf, Bernhard, and Müller, Klaus-Robert. Fisher Discriminant Analysis with Kernels. In Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468), pages 41-8, 1999. ISBN 078035673X. https://doi.org/10.1109/nnsp.1999.788121.Google Scholar
Mikolov, Tomas, Chen, Kai, Corrado, Greg, and Dean, Jeffrey. Efficient Estimation of Word Representations in Vector Space. 2013. https://arxiv.org/abs/1301.3781.Google Scholar
Mingo, James A. and Speicher, Roland. Free Probability and Random Matrices, volume 35 of Fields Institute Monographs. Springer-Verlag New York, 1 edition, 2017. ISBN 9781493969418. https://doi.org/10.1007/978-1-4939-6942-5.Google Scholar
Mirza, Mehdi and Osindero, Simon. Conditional Generative Adversarial Nets. 2014. https://arxiv.org/abs/1411.1784.Google Scholar
Miyato, Takeru, Kataoka, Toshiki, Koyama, Masanori, and Yoshida, Yuichi. Spectral Normalization for Generative Adversarial Networks. In International Conference on Learning Representations, ICLR’18, 2018. https://openreview.net/forum?id=B1QRgziT-.Google Scholar
Mondelli, Marco and Montanari, Andrea. Fundamental Limits of Weak Recovery with Applications to Phase Retrieval. Foundations of Computational Mathematics, 19(3):703-73, 2019. ISSN 1615-3375. https://doi.org/10.1007/s10208-018-9395-y.Google Scholar
Morales-Jimenez, David, Couillet, Romain, and McKay, Matthew R. Large Dimensional Analysis of Robust M-Estimators of Covariance with Outliers. IEEE Transactions on Signal Processing, 63(21):5784-97, 2015. ISSN 1053-587X. https://doi.org/10.1109/tsp.2015.2460225.Google Scholar
Moscovich, Amit, Jaffe, Ariel, and Nadler, Boaz. Minimax-Optimal Semi-supervised Regression on Unknown Manifolds. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 933-42. PMLR, 2016. http://proceedings.mlr.press/v54/moscovich17a.html.Google Scholar
Mossel, Elchanan, Neeman, Joe, and Sly, Allan. Reconstruction and Estimation in the Planted Partition Model. Probability Theory and Related Fields, 162(3-4):431-61, 2015. ISSN 0178-8051. https://doi.org/10.1007/s00440-014-0576-6.Google Scholar
Mézard, M., Parisi, G., and Zee, A. Spectra of Euclidean Random Matrices. Nuclear Physics B, 559(3):689-701, 1999. ISSN 0550-3213. https://doi.org/10.1016/s0550-3213(99)00428-9.Google Scholar
Mézard, Marc and Montanari, Andrea. Information, Physics, and Computation. Oxford University Press, 2009. ISBN 9780198570837. https://doi.org/10.1093/acprof:oso/9780198570837.001.0001.Google Scholar
Najim, Jamal and Yao, Jianfeng. Gaussian Fluctuations for Linear Spectral Statistics of Large Random Covariance Matrices. The Annals of Applied Probability, 26(3):1837-87, 2016. ISSN 1050-5164. https://doi.org/10.1214/15-aap1135.Google Scholar
Nakkiran, Preetum, Kaplun, Gal, Bansal, Yamini et al. Deep Double Descent: Where Bigger Models and More Data Hurt. In International Conference on Learning Representations, ICLR’20, 2020. https://openreview.net/forum?id=B1g5sA4twr.Google Scholar
Nelder, John Ashworth and Wedderburn, R. W. M. Generalized Linear Models. Journal of the Royal Statistical Society: Series A (General), 135(3):370-84, 1972. ISSN 0035-9238. https://doi.org/10.2307/2344614.Google Scholar
Newman, Mark E. J. Modularity and Community Structure in Networks. Proceedings of the National Academy of Sciences, 103(23):8577-82, 2006. ISSN 0027-8424. https://doi.org/10.1073/pnas.0601602103.Google Scholar
Ng, Andrew, Jordan, Michael I., and Weiss, Yair. On Spectral Clustering: Analysis and an Algorithm. In NIPS’02: Advances in Neural Information Processing Systems, volume 14, pages 849-56. MIT Press, 2002. https://proceedings.neurips.cc/paper/2001/file/801272ee79cfde7fa5960571fee36b9b-Paper.pdf.Google Scholar
Nica, Alexandru and Speicher, Roland. Lectures on the Combinatorics of Free Probability, volume 13 of London Mathematical Society Lecture Note Series. Cambridge University Press, 2006. ISBN 9780511735127. https://doi.org/10.1017/cbo9780511735127.Google Scholar
Novak, Roman, Xiao, Lechao, Bahri, Yasaman et al. Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes. In International Conference on Learning Representations, ICLR’19, 2019. https://openreview.net/forum?id=B1g30j0qF7.Google Scholar
Nøkland, Arild. Direct Feedback Alignment Provides Learning in Deep Neural Networks. In NIPS’16: Advances in Neural Information Processing Systems, volume 29, pages 1037-45. Curran Associates, Inc., 2016. https://proceedings.neurips.cc/paper/2016/file/d490d7b4576290fa60eb31b5fc917ad1-Paper.pdf.Google Scholar
Chapelle, Olivier, Schölkopf, Bernhard, and Zien, Alexander. Semi-supervised Learning. The MIT Press, 2006. ISBN 9780262033589. https://doi.org/10.7551/mitpress/9780262033589.001.0001.Google Scholar
O’Rourke, Sean. A Note on the Marcenko-Pastur Law for a Class of Random Matrices with Dependent Entries. Electronic Communications in Probability, 17(0):13, 2012. ISSN 1083-589X. https://doi.org/10.1214/ecp.v17-2020.Google Scholar
Pajor, Alain and Pastur, Leonid. On the Limiting Empirical Measure of Eigenvalues of the Sum of Rank One Matrices with Log-Concave Distribution. Studia Mathematica, 195(1):11-29, 2009. ISSN 0039-3223. https://doi.org/10.4064/sm195-1-2.Google Scholar
Papazafeiropoulos, Anastasios K. and Ratnarajah, Tharmalingam. Deterministic Equivalent Performance Analysis of Time-Varying Massive MIMO Systems. IEEE Transactions on Wireless Communications, 14(10):5795-809, 2015. ISSN 1536-1276. https://doi.org/10.1109/twc.2015.2443040.Google Scholar
Pastur, Leonid. On Random Matrices Arising in Deep Neural Networks. Gaussian Case. 2020. https://arxiv.org/abs/2001.06188.Google Scholar
Pastur, Leonid and Figotin, Alexander. Spectra of Random and Almost-Periodic Operators. Springer-Verlag, Berlin, 1992.Google Scholar
Pastur, Leonid and Slavin, Victor. On Random Matrices Arising in Deep Neural Networks: General I.I.D. Case. 2020. https://arxiv.org/abs/2011.11439.Google Scholar
Pastur, Leonid A. A Simple Approach to the Global Regime of Gaussian Ensembles of Random Matrices. Ukrainian Mathematical Journal, 57(6):936-66, 2005. ISSN 0041-5995. https://doi.org/10.1007/s11253-005-0241-4.Google Scholar
Pastur, Leonid Andreevich and Shcherbina, Mariya. Eigenvalue Distribution of Large Random Matrices, volume 171 of Mathematical Surveys and Monographs. American Mathematical Society, 2011. https://doi.org/10.1090/surv/171.Google Scholar
Paul, Debashis. Asymptotics of Sample Eigenstructure for a Large Dimensional Spiked Covariance Model. Statistica Sinica, 17(4):1617-42, 2007. www.jstor.org/stable/24307692.Google Scholar
Paul, Debashis and Silverstein, Jack W. No Eigenvalues Outside the Support of the Limiting Empirical Spectral Distribution of a Separable Covariance Matrix. Journal of Multivariate Analysis, 100(1):37-57, 2009. ISSN 0047-259X. https://doi.org/10.1016/j.jmva.2008.03.010.Google Scholar
Pearl, Judea. Fusion, Propagation, and Structuring in Belief Networks. Artificial Intelligence, 29(3):241-88, 1986. ISSN 0004-3702. https://doi.org/10.1016/0004-3702(86)90072-x.Google Scholar
Pennington, Jeffrey and Bahri, Yasaman. Geometry of Neural Network Loss Surfaces via Random Matrix Theory. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2798-806, International Convention Centre, Sydney, Australia, 2017. PMLR. http://proceedings.mlr.press/v70/pennington17a.html.Google Scholar
Pennington, Jeffrey and Worah, Pratik. Nonlinear Random Matrix Theory for Deep Learning. In NIPS’17: Advances in Neural Information Processing Systems, volume 30, pages 2637-46. Curran Associates, Inc., 2017. https://proceedings.neurips.cc/paper/2017/file/0f3d014eead934bbdbacb62a01dc4831-Paper.pdf.Google Scholar
Pennington, Jeffrey, Schoenholz, Samuel, and Ganguli, Surya. Resurrecting the Sigmoid in Deep Learning through Dynamical Isometry: Theory and Practice. In NIPS’17: Advances in Neural Information Processing Systems, volume 30, pages 4785-95. Curran Associates, Inc., 2017. https://proceedings.neurips.cc/paper/2017/file/d9fc0cdb67638d50f411432d0d41d0ba-Paper.pdf.Google Scholar
Prabhu, Vinay Uday. Kannada-MNIST: A New Handwritten Digits Dataset for the Kannada Language. 2019. https://arxiv.org/abs/1908.01242.Google Scholar
Qin, Tai and Rohe, Karl. Regularized Spectral Clustering under the Degree-Corrected Stochastic Blockmodel. In NIPS’13: Advances in Neural Information Processing Systems, volume 26, pages 3120-28. Curran Associates, Inc., 2013. https://proceedings.neurips.cc/paper/2013/file/0ed9422357395a0d4879191c66f4faa2-Paper.pdf.Google Scholar
Rahimi, Ali and Recht, Benjamin. Random Features for Large-Scale Kernel Machines. In NIPS’08: Advances in Neural Information Processing Systems, volume 20, pages 1177-84. Curran Associates, Inc., 2008. https://proceedings.neurips.cc/paper/2007/file/013a006f03dbc5392effeb8f18fda755-Paper.pdf.Google Scholar
Rosasco, Lorenzo, Vito, Ernesto De, Caponnetto, Andrea, Piana, Michele, and Verri, Alessandro. Are Loss Functions All the Same? Neural Computation, 16(5):1063-76, 2004. ISSN 08997667. https://doi.org/10.1162/089976604773135104.Google Scholar
Rosenblatt, Frank. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review, 65(6):386-408, 1958. ISSN 0033-295X. https://doi.org/10.1037/h0042519.Google Scholar
Rozanov, Yu. A. Stationary Random Processes. Holden-Day Series in Time Series Analysis. Holden-Day, San Francisco, 1967. https://openlibrary.org/books/OL21849368M/.Google Scholar
Rudelson, Mark and Vershynin, Roman. Hanson-Wright Inequality and Sub-gaussian Concentration. Electronic Communications in Probability, 18, 2013. ISSN 1083-589X. https://doi.org/10.1214/ecp.v18-2865.Google Scholar
Rudin, Walter. Principles of Mathematical Analysis. International Series in Pure and Applied Mathematics. McGraw-Hill Education, 3rd edition, 1976. ISBN 9780070542358. www.mheducation.com/highered/product/principles-mathematical-analysis-rudin/M9780070542358.html.Google Scholar
Saade, Alaa, Krzakala, Florent, and Zdeborova, Lenka. Spectral Clustering of Graphs with the Bethe Hessian. In NIPS’14: Advances in Neural Information Processing Systems, volume 27, pages 406-14. Curran Associates, Inc., 2014. https://proceedings.neurips.cc/paper/2014/file/63923f49e5241343aa7acb6a06a751e7-Paper.pdf.Google Scholar
Salez, Justin. Some Implications of Local Weak Convergence for Sparse Random Graphs. PhD thesis, 2011.Google Scholar
Salez, Justin. Spectral Atoms of Unimodular Random Trees. Journal of the European Mathematical Society, 22(2):345-63, 2019. ISSN 1435-9855. https://doi.org/10.4171/jems/923.Google Scholar
Scardapane, Simone and Wang, Dianhui. Randomness in Neural Networks: An Overview. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 7(2):e1200, 2017. ISSN 1942-4787. https://doi.org/10.1002/widm.1200.Google Scholar
Schapire, Robert E. A Brief Introduction to Boosting. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, IJCAI-99, volume 99 of IJCAI’99, pages 1401-06. International Joint Conferences on Artificial Intelligence Organization, 1999. www.ijcai.org/Proceedings/99-2/Papers/103.pdf.Google Scholar
Schmidt, Ralph. Multiple Emitter Location and Signal Parameter Estimation. IEEE Transactions on Antennas and Propagation, 34(3):276-80, 1986. ISSN 0018-926X. https://doi.org/10.1109/tap.1986.1143830.Google Scholar
Schmidt, Wouter, Kraaijveld, Martin, and Duin, Robert. Feedforward Neural Networks with Random Weights. In 11th IAPR International Conference on Pattern Recognition, volume 1 of ICPR, pages 1-4. IEEE, 1992. https://doi.ieeecomputersociety.org/10.1109/ICPR.1992.201708.Google Scholar
Scholkopf, Bernhard and Smola, Alexander J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, 2018. ISBN 9780262256933. https://doi.org/10.7551/mitpress/4175.001.0001.Google Scholar
Seddik, Mohamed El Amine, Tamaazousti, Mohamed, and Couillet, Romain. Kernel Random Matrices of Large Concentrated Data: The Example of GAN-Generated Images. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), pages 7480-84. IEEE, 2019. ISBN 9781479981328. https://doi.org/10.1109/icassp.2019.8683333.Google Scholar
Seddik, Mohamed El Amine, Louart, Cosme, Tamaazousti, Mohamed, and Couillet, Romain. Random Matrix Theory Proves that Deep Learning Representations of GAN-Data Behave as Gaussian Mixtures. In Proceedings of the 37th International Conference on Machine Learning, Proceedings of Machine Learning Research, pages 8573-82. PMLR, 2020. http://proceedings.mlr.press/v119/seddik20a.html.Google Scholar
Silverstein, Jack W. The Limiting Eigenvalue Distribution of a Multivariate F Matrix. SIAM Journal on Mathematical Analysis, 16(3):641-6, 1985. ISSN 0036-1410. https://doi.org/10.1137/0516047.Google Scholar
Silverstein, Jack W. and Bai, Zhidong. On the Empirical Distribution of Eigenvalues of a Class of Large Dimensional Random Matrices. Journal of Multivariate Analysis, 54(2):175-92, 1995. ISSN 0047-259X. https://doi.org/10.1006/jmva.1995.1051.Google Scholar
Silverstein, Jack W. and Choi, Sang-Il. Analysis of the Limiting Spectral Distribution of Large Dimensional Random Matrices. Journal of Multivariate Analysis, 54(2):295-309, 1995. ISSN 0047-259X. https://doi.org/10.1006/jmva.1995.1058.Google Scholar
Simonyan, Karen and Zisserman, Andrew. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations, ICLR’14, 2014. http://arxiv.org/abs/1409.1556.Google Scholar
Soshnikov, Alexander. Universality at the Edge of the Spectrum in Wigner Random Matrices. Communications in Mathematical Physics, 207(3):697-733, 1999. ISSN 0010-3616. https://doi.org/10.1007/s002200050743.Google Scholar
Soshnikov, Alexander B. Gaussian Fluctuation for the Number of Particles in Airy, Bessel, Sine, and Other Determinantal Random Point Fields. Journal of Statistical Physics, 100 (3-4):491-522, 2000. ISSN 0022-4715. https://doi.org/10.1023/a:1018672622921.Google Scholar
Soudry, Daniel, Hoffer, Elad, Nacson, Mor Shpigel, Gunasekar, Suriya, and Srebro, Nathan. The Implicit Bias of Gradient Descent on Separable Data. Journal of Machine Learning Research, 19(70):1-57, 2018. http://jmlr.org/papers/v19/18-188.html.Google Scholar
Spearman, Charles. The Proof and Measurement of Association between Two Things. The American Journal of Psychology, 100(3/4):441-71, 1987. ISSN 0002-9556. https://doi.org/10.2307/1422689.Google Scholar
Speicher, Roland and Vargas, Carlos. Free Deterministic Equivalents, Rectangular Random Matrix Models, and Operator-Valued Free Probability Theory. Random Matrices: Theory and Applications, 01(02):1150008, 2012. ISSN 2010-3263. https://doi.org/10.1142/s2010326311500080.Google Scholar
Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(56):1929-58, 2014. http://jmlr.org/papers/v15/srivastava14a.html.Google Scholar
Stein, Charles M. Estimation of the Mean of a Multivariate Normal Distribution. The Annals of Statistics, 9(6):1135-51, 1981. ISSN 0090-5364. https://doi.org/10.1214/aos/1176345632.Google Scholar
Stein, Elias M. and Shakarchi, Rami. Complex Analysis, volume 2 of Princeton Lectures in Analysis. Princeton University Press, 2003. ISBN 9780691113852.Google Scholar
Sur, Pragya and Candes, Emmanuel J. A Modern Maximum-Likelihood Theory for High-Dimensional Logistic Regression. Proceedings of the National Academy of Sciences, 116(29):14516-25, 2019. www.pnas.org/content/116/29/14516.Google Scholar
Suykens, Johan A. K. and Vandewalle, Joos. Least Squares Support Vector Machine Classifiers. Neural Processing Letters, 9(3):293-300, 1999. ISSN 1370-4621. https://doi.org/10.1023/a:1018628609742.Google Scholar
Szummer, Martin and Jaakkola, Tommi. Partially Labeled Classification with Markov Random Walks. In NIPS’02: Advances in Neural Information Processing Systems, volume 14, pages 945-52. MIT Press, 2002. https://proceedings.neurips.cc/paper/2001/file/a82d922b133be19c1171534e6594f754-Paper.pdf.Google Scholar
Taheri, Hossein, Pedarsani, Ramtin, and Thrampoulidis, Christos. Sharp Guarantees for Solving Random Equations with One-Bit Information. In 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 765-72, 2019. ISBN 9781728131528. https://doi.org/10.1109/allerton.2019.8919905.Google Scholar
Taheri, Hossein, Pedarsani, Ramtin, and Thrampoulidis, Christos. Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in High Dimensions. 2020a. https://arxiv.org/abs/2006.08917.Google Scholar
Taheri, Hossein, Pedarsani, Ramtin, and Thrampoulidis, Christos. Optimality of Least-Squares for Classification in Gaussian-Mixture Models. In 2020 IEEE International Symposium on Information Theory (ISIT), pages 2515-20, 2020b. https://doi.org/10.1109/isit44484.2020.9174267.Google Scholar
Taheri, Hossein, Pedarsani, Ramtin, and Thrampoulidis, Christos. Sharp Asymptotics and Optimal Performance for Inference in Binary Models. 2020c. https://arxiv.org/abs/2002.07284.Google Scholar
Talagrand, Michel. Concentration of Measure and Isoperimetric Inequalities in Product Spaces. Publications Mathematiques de l’Institut des Hautes Etudes Scientifiques, 81(1):73-205, 1995. ISSN 0073-8301. https://doi.org/10.1007/bf02699376.Google Scholar
Tanaka, Gouhei, Yamane, Toshiyuki, Heroux, Jean Benoit et al. Recent Advances in Physical Reservoir Computing: A Review. Neural Networks, 115:100-23, 2019. ISSN 0893-6080. https://doi.org/10.1016/j.neunet.2019.03.005.Google Scholar
Tao, Terence. Topics in Random Matrix Theory, volume 132 of Graduate Studies in Mathematics. American Mathematical Society, 2012. ISBN 9780821874301. https://doi.org/10.1090/gsm/132. www.ams.org/books/gsm/132/.Google Scholar
Tao, Terence and Vu, Van. Random Matrices: The Circular Law. Communications in Contemporary Mathematics, 10(02):261-307, 2008. ISSN 0219-1997. https://doi.org/10.1142/s0219199708002788.Google Scholar
Thrampoulidis, Christos, Abbasi, Ehsan, and Hassibi, Babak. Precise Error Analysis of Regularized M-Estimators in High Dimensions. IEEE Transactions on Information Theory, 64(8): 5592-628, 2018. ISSN 0018-9448. https://doi.org/10.1109/tit.2018.2840720.Google Scholar
Tiomoko, Malik and Couillet, Romain. Estimation of Covariance Matrix Distances in the High Dimension Low Sample Size Regime. In 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pages 341-5. IEEE, 2019a.Google Scholar
Tiomoko, Malik and Couillet, Romain. Random Matrix-Improved Estimation of the Wasserstein Distance between Two Centered Gaussian Distributions. In 2019 27th European Signal Processing Conference (EUSIPCO), pages 1-5, 2019b. ISBN 9781538673003. https://doi.org/10.23919/eusipco.2019.8902795.Google Scholar
Tiomoko Ali, Hafiz, Kammoun, Abla, and Couillet, Romain. Random Matrix-Improved Kernels For Large Dimensional Spectral Clustering. In 2018 IEEE Statistical Signal Processing Workshop (SSP), pages 453-7. IEEE, 2018. ISBN 9781538615720. https://doi.org/10.1109/ssp.2018.8450705.Google Scholar
Titchmarsh, E. C. The Theory of Functions. Oxford University Press, New York, NY, USA, 1939.Google Scholar
Tracy, Craig A. and Widom, Harold. On Orthogonal and Symplectic Matrix Ensembles. Communications in Mathematical Physics, 177(3):727-54, 1996. ISSN 0010-3616. https://doi.org/10.1007/bf02099545.Google Scholar
Tracy, Craig A. and Widom, Harold. The Distribution of the Largest Eigenvalue in the Gaussian Ensembles: β = 1, 2, 4. In Calogero-Moser-Sutherland Models, pages 461-72. Springer New York, 2000. ISBN 978-1-4612-1206-5. https://doi.org/10.1007/978-1-4612-1206-5_29.Google Scholar
Tropp, Joel A. An Introduction to Matrix Concentration Inequalities. Foundations and Trends in Machine Learning, 8(1-2):1-230, 2015. ISSN 1935-8237. https://doi.org/10.1561/2200000048.Google Scholar
Tulino, Antonia M. and Verdu, Sergio. Random Matrix Theory and Wireless Communications. Foundations and Trends in Communications and Information Theory, 1(1):1-182, 2004. ISSN 1567-2190. https://doi.org/10.1561/0100000001.Google Scholar
Tyler, David E. Robustness and Efficiency Properties of Scatter Matrices. Biometrika, 70(2): 411, 1983. ISSN 0006-3444. https://doi.org/10.2307/2335555.Google Scholar
van der Vaart, Aad W. Asymptotic Statistics, volume 3 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2000. ISBN 9780521784504. https://doi.org/10.1017/cbo9780511802256.Google Scholar
Vallet, Pascal, Mestre, Xavier, and Loubaton, Philippe. Performance Analysis of an Improved Music DOA Estimator. IEEE Transactions on Signal Processing, 63(23):6407-22, 2015.Google Scholar
Vapnik, Vladimir. Principles of Risk Minimization for Learning Theory. In NIPS’92: Advances in Neural Information Processing Systems, volume 4, pages 831-8. Morgan-Kaufmann, 1992. https://proceedings.neurips.cc/paper/1991/file/ff4d5fbbafdf976cfdc032e3bde78de5-Paper.pdf.Google Scholar
Vedaldi, Andrea and Zisserman, Andrew. Efficient Additive Kernels via Explicit Feature Maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):480-92, 2012. ISSN 0162-8828. https://doi.org/10.1109/tpami.2011.153.Google Scholar
Vershynin, Roman. Introduction to the Non-asymptotic Analysis of Random Matrices. In Yonina C. Eldar and Gitta Kutyniok, editors, Compressed Sensing: Theory and Applications, pages 210-68. Cambridge University Press, 2012. https://doi.org/10.1017/cbo9780511794308.006.Google Scholar
Vershynin, Roman. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2018. ISBN 9781108415194. https://doi.org/10.1017/9781108231596.001.Google Scholar
Voiculescu, Dan, Dykema, Kenneth, and Nica, Alexandru. Free Random Variables. CRM Monograph Series, pages 55-66, 1992. ISSN 1065-8599. https://doi.org/10.1090/crmm/001/05.Google Scholar
Wagner, Sebastian, Couillet, Romain, Debbah, Merouane, and Slock, Dirk T. M.. Large System Analysis of Linear Precoding in Correlated MISO Broadcast Channels Under Limited Feedback. IEEE Transactions on Information Theory, 58(7):4509-37, 2012. ISSN 0018-9448. https://doi.org/10.1109/tit.2012.2191700.Google Scholar
Wax, Mati and Kailath, Thomas. Detection of Signals by Information Theoretic Criteria. IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(2):387-92, 1985.Google Scholar
Wen, Chao-Kai, Pan, Guangming, Wong, Kai-Kit, Guo, Meihui, and Chen, Jung-Chieh. A Deterministic Equivalent for the Analysis of Non-Gaussian Correlated MIMO Multiple Access Channels. IEEE Transactions on Information Theory, 59(1):329-52, 2013. ISSN 0018-9448. https://doi.org/10.1109/tit.2012.2218571.Google Scholar
Wigner, Eugene P. Characteristic Vectors of Bordered Matrices with Infinite Dimensions. The Annals of Mathematics, 62(3):548, 1955. ISSN 0003-486X. https://doi.org/10.2307/1970079.Google Scholar
Williams, Christopher. Computing with Infinite Networks. In NIPS’97: Advances in Neural Information Processing Systems, volume 9, pages 295-301. MIT Press, 1997. https://proceedings.neurips.cc/paper/1996/file/ae5e3ce40e0404a45ecacaaf05e5f735-Paper.pdf.Google Scholar
Wishart, John. The Generalised Product Moment Distribution in Samples from a Normal Multivariate Population. Biometrika, 20A(1/2):32-52, 1928. ISSN 0006-3444. https://doi.org/10.2307/2331939.Google Scholar
Wold, Svante, Esbensen, Kim, and Geladi, Paul. Principal Component Analysis. Chemometrics and Intelligent Laboratory Systems, 2(1-3):37-52, 1987. ISSN 0169-7439. https://doi.org/10.1016/0169-7439(87)80084-9.Google Scholar
Xiao, Han, Rasul, Kashif, and Vollgraf, Roland. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. 2017. https://arxiv.org/abs/1708.07747.Google Scholar
Xiao, Lechao, Bahri, Yasaman, Sohl-Dickstein, Jascha, Schoenholz, Samuel, and Pennington, Jeffrey. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5393-402, Stockholm, Sweden, 2018. PMLR. http://proceedings.mlr.press/v80/xiao18a.html.Google Scholar
Yang, Liusha, Couillet, Romain, and McKay, Matthew R. A Robust Statistics Approach to Minimum Variance Portfolio Optimization. IEEE Transactions on Signal Processing, 63(24):6684-97, 2015. ISSN 1053-587X. https://doi.org/10.1109/tsp.2015.2474298.Google Scholar
Yao, Jianfeng, Couillet, Romain, Najim, Jamal, and Debbah, Merouane. Fluctuations of an Improved Population Eigenvalue Estimator in Sample Covariance Matrix Models. IEEE Transactions on Information Theory, 59(2):1149-63, 2013. ISSN 0018-9448. https://doi.org/10.1109/tit.2012.2222862.Google Scholar
Yin, Y. Q., Bai, Z. D., and Krishnaiah, P. R.. Limiting Behavior of the Eigenvalues of a Multivariate F Matrix. Journal of Multivariate Analysis, 13(4):508-16, 1983. ISSN 0047-259X. https://doi.org/10.1016/0047-259x(83)90036-2.Google Scholar
Zarrouk, Tayeb, Couillet, Romain, Chatelain, Florent, and Bihan, Nicolas Le. Performance-Complexity Trade-off in Large Dimensional Statistics. In 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1-6. IEEE, 2020. ISBN 9781728166636. https://doi.org/10.1109/mlsp49062.2020.9231568.Google Scholar
Zhang, Chiyuan, Bengio, Samy, Hardt, Moritz, Recht, Benjamin, and Vinyals, Oriol. Understanding Deep Learning Requires Rethinking Generalization. In 5th International Conference on Learning Representations, ICLR’17, 2017. https://openreview.net/forum?id=Sy8gdB9xx.Google Scholar
Zhang, Teng, Cheng, Xiuyuan, and Singer, Amit. Marcenko-Pastur Law for Tyler’s M-Estimator. 2014. https://arxiv.org/abs/1401.3424.Google Scholar
Zheng, Shurong, Bai, Zhidong, and Yao, Jianfeng. CLT for Eigenvalue Statistics of Large-Dimensional General Fisher Matrices with Applications. Bernoulli, 23(2):1130-78, 2017. ISSN 1350-7265. https://doi.org/10.3150/15-bej772.Google Scholar
Zhou, Dengyong, Bousquet, Olivier, Lal, Thomas Navin, Weston, Jason, and Scholkopf, Bernhard. Learning with Local and Global Consistency. In NIPS’04: Advances in Neural Information Processing Systems, volume 16, pages 321-8. MIT Press, 2004. https://proceedings.neurips.cc/paper/2003/file/87682805257e619d49b8e0dfdc14affa-Paper.pdf.Google Scholar
Zhou, Xueyuan and Belkin, Mikhail. Semi-supervised Learning by Higher Order Regularization. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pages 892-900, Fort Lauderdale, FL, USA, 2011. JMLR Workshop and Conference Proceedings. http://proceedings.mlr.press/v15/zhou11b.html.Google Scholar
Zhu, Xiaojin. Semi-supervised Learning Literature Survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 9 2005. https://minds.wisconsin.edu/bitstream/handle/1793/60444/TR1530.pdf.Google Scholar
Zhu, Xiaojin and Ghahramani, Zoubin. Learning from Labeled and Unlabeled Data with Label Propagation. Technical Report, Citeseer, 2002. http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=46D8FA6E43437BB5FAE59DF68862E758?doi=10.1.1.13.8280&rep=rep1&type=pdf.Google Scholar
Zhu, Xiaojin, Ghahramani, Zoubin, and Lafferty, John. Semi-supervised Learning Using Gaussian Fields and Harmonic Functions. In Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), volume 3 of ICML’03, pages 912-9. AAAI Press, 2003. www.aaai.org/Library/ICML/2003/icml03-118.php.Google Scholar