
Bibliography

Published online by Cambridge University Press:  28 September 2018

Ankur Moitra
Affiliation:
Massachusetts Institute of Technology

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2018


References

Achlioptas, D. and McSherry, F. On spectral learning of mixtures of distributions. In COLT, pages 458–469, 2005.
Agarwal, A., Anandkumar, A., Jain, P., Netrapalli, P., and Tandon, R. Learning sparsely used overcomplete dictionaries via alternating minimization. arXiv:1310.7991, 2013.
Agarwal, A., Anandkumar, A., and Netrapalli, P. Exact recovery of sparsely used overcomplete dictionaries. arXiv:1309.1952, 2013.
Aharon, M. Overcomplete dictionaries for sparse representation of signals. PhD thesis, 2006.
Aharon, M., Elad, M., and Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process., 54(11):4311–4322, 2006.
Ahlswede, R. and Winter, A. Strong converse for identification via quantum channels. IEEE Trans. Inf. Theory, 48(3):569–579, 2002.
Alon, N. Tools from higher algebra. In Handbook of Combinatorics, editors: Graham, R. L., Grötschel, M., and Lovász, L. Cambridge, MA: MIT Press, 1996, pages 1749–1783.
Anandkumar, A., Foster, D., Hsu, D., Kakade, S., and Liu, Y. A spectral algorithm for latent Dirichlet allocation. In NIPS, pages 926–934, 2012.
Anandkumar, A., Ge, R., Hsu, D., and Kakade, S. A tensor spectral approach to learning mixed membership community models. In COLT, pages 867–881, 2013.
Anandkumar, A., Hsu, D., and Kakade, S. A method of moments for hidden Markov models and multi-view mixture models. In COLT, pages 33.1–33.34, 2012.
Anderson, J., Belkin, M., Goyal, N., Rademacher, L., and Voss, J. The more the merrier: The blessing of dimensionality for learning large Gaussian mixtures. arXiv:1311.2891, 2013.
Arora, S., Ge, R., Halpern, Y., Mimno, D., Moitra, A., Sontag, D., Wu, Y., and Zhu, M. A practical algorithm for topic modeling with provable guarantees. In ICML, pages 280–288, 2013.
Arora, S., Ge, R., Kannan, R., and Moitra, A. Computing a nonnegative matrix factorization – provably. In STOC, pages 145–162, 2012.
Arora, S., Ge, R., and Moitra, A. Learning topic models – going beyond SVD. In FOCS, pages 1–10, 2012.
Arora, S., Ge, R., and Moitra, A. New algorithms for learning incoherent and overcomplete dictionaries. arXiv:1308.6273, 2013.
Arora, S., Ge, R., Ma, T., and Moitra, A. Simple, efficient, and neural algorithms for sparse coding. In COLT, pages 113–149, 2015.
Arora, S., Ge, R., Moitra, A., and Sachdeva, S. Provable ICA with unknown Gaussian noise, and implications for Gaussian mixtures and autoencoders. In NIPS, pages 2384–2392, 2012.
Arora, S., Ge, R., Sachdeva, S., and Schoenebeck, G. Finding overlapping communities in social networks: Towards a rigorous approach. In EC, 2012.
Arora, S. and Kannan, R. Learning mixtures of separated nonspherical Gaussians. Ann. Appl. Probab., 15(1A):69–92, 2005.
Balcan, M., Blum, A., and Gupta, A. Clustering under approximation stability. J. ACM, 60(2):1–34, 2013.
Balcan, M., Blum, A., and Srebro, N. On a theory of learning with similarity functions. Mach. Learn., 72(1–2):89–112, 2008.
Balcan, M., Borgs, C., Braverman, M., Chayes, J., and Teng, S.-H. Finding endogenously formed communities. In SODA, 2013.
Bandeira, A., Rigollet, P., and Weed, J. Optimal rates of estimation for multi-reference alignment. arXiv:1702.08546, 2017.
Barak, B., Hopkins, S., Kelner, J., Kothari, P., Moitra, A., and Potechin, A. A nearly tight sum-of-squares lower bound for the planted clique problem. In FOCS, pages 428–437, 2016.
Barak, B., Kelner, J., and Steurer, D. Dictionary learning and tensor decomposition via the sum-of-squares method. In STOC, pages 143–151, 2015.
Barak, B. and Moitra, A. Noisy tensor completion via the sum-of-squares hierarchy. In COLT, pages 417–445, 2016.
Belkin, M. and Sinha, K. Toward learning Gaussian mixtures with arbitrary separation. In COLT, pages 407–419, 2010.
Belkin, M. and Sinha, K. Polynomial learning of distribution families. In FOCS, pages 103–112, 2010.
Berthet, Q. and Rigollet, P. Complexity theoretic lower bounds for sparse principal component detection. In COLT, pages 1046–1066, 2013.
Bhaskara, A., Charikar, M., and Vijayaraghavan, A. Uniqueness of tensor decompositions with applications to polynomial identifiability. In COLT, pages 742–778, 2014.
Bhaskara, A., Charikar, M., Moitra, A., and Vijayaraghavan, A. Smoothed analysis of tensor decompositions. In STOC, pages 594–603, 2014.
Bilu, Y. and Linial, N. Are stable instances easy? Combinatorics, Probability and Computing, 21(5):643–660, 2012.
Bittorf, V., Recht, B., Re, C., and Tropp, J. Factoring nonnegative matrices with linear programs. In NIPS, 2012.
Blei, D. Introduction to probabilistic topic models. Commun. ACM, 55(4):77–84, 2012.
Blei, D. and Lafferty, J. A correlated topic model of science. Ann. Appl. Stat., 1(1):17–35, 2007.
Blei, D., Ng, A., and Jordan, M. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, 2003.
Blum, A., Kalai, A., and Wasserman, H. Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM, 50:506–519, 2003.
Blum, A. and Spencer, J. Coloring random and semi-random k-colorable graphs. J. Algorithms, 19(2):204–234, 1995.
Borgwardt, K. The Simplex Method: A Probabilistic Analysis. New York: Springer, 2012.
Brubaker, S. C. and Vempala, S. Isotropic PCA and affine-invariant clustering. In FOCS, pages 551–560, 2008.
Candes, E. and Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717–772, 2008.
Candes, E., Romberg, J., and Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207–1223, 2006.
Candes, E. and Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory, 51(12):4203–4215, 2005.
Candes, E., Li, X., Ma, Y., and Wright, J. Robust principal component analysis? J. ACM, 58(3):1–37, 2011.
Chandrasekaran, V. and Jordan, M. Computational and statistical tradeoffs via convex relaxation. Proc. Natl. Acad. Sci. U.S.A., 110(13):E1181–E1190, 2013.
Chandrasekaran, V., Recht, B., Parrilo, P., and Willsky, A. The convex geometry of linear inverse problems. Found. Comput. Math., 12(6):805–849, 2012.
Chang, J. Full reconstruction of Markov models on evolutionary trees: Identifiability and consistency. Math. Biosci., 137(1):51–73, 1996.
Chaudhuri, K. and Rao, S. Learning mixtures of product distributions using correlations and independence. In COLT, pages 9–20, 2008.
Chaudhuri, K. and Rao, S. Beyond Gaussians: Spectral methods for learning mixtures of heavy-tailed distributions. In COLT, pages 21–32, 2008.
Chen, S., Donoho, D., and Saunders, M. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput., 20(1):33–61, 1998.
Cohen, A., Dahmen, W., and DeVore, R. Compressed sensing and best k-term approximation. J. AMS, 22(1):211–231, 2009.
Cohen, J. and Rothblum, U. Nonnegative ranks, decompositions and factorizations of nonnegative matrices. Linear Algebra Appl., 190:149–168, 1993.
Comon, P. Independent component analysis: A new concept? Signal Processing, 36(3):287–314, 1994.
Dasgupta, A. Asymptotic Theory of Statistics and Probability. New York: Springer, 2008.
Dasgupta, A., Hopcroft, J., Kleinberg, J., and Sandler, M. On learning mixtures of heavy-tailed distributions. In FOCS, pages 491–500, 2005.
Dasgupta, S. Learning mixtures of Gaussians. In FOCS, pages 634–644, 1999.
Dasgupta, S. and Schulman, L. J. A two-round variant of EM for Gaussian mixtures. In UAI, pages 152–159, 2000.
Davis, G., Mallat, S., and Avellaneda, M. Greedy adaptive approximations. Constr. Approx., 13:57–98, 1997.
De Lathauwer, L., Castaing, J., and Cardoso, J. Fourth-order cumulant-based blind identification of underdetermined mixtures. IEEE Trans. Signal Process., 55(6):2965–2973, 2007.
Deerwester, S., Dumais, S., Landauer, T., Furnas, G., and Harshman, R. Indexing by latent semantic analysis. J. Assoc. Inf. Sci. Technol., 41(6):391–407, 1990.
Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Series B Stat. Methodol., 39(1):1–38, 1977.
Donoho, D. and Elad, M. Optimally sparse representation in general (non-orthogonal) dictionaries via ℓ1-minimization. Proc. Natl. Acad. Sci. U.S.A., 100(5):2197–2202, 2003.
Donoho, D. and Huo, X. Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inf. Theory, 47(7):2845–2862, 1999.
Donoho, D. and Stark, P. Uncertainty principles and signal recovery. SIAM J. Appl. Math., 49(3):906–931, 1989.
Donoho, D. and Stodden, V. When does nonnegative matrix factorization give the correct decomposition into parts? In NIPS, 2003.
Downey, R. and Fellows, M. Parameterized Complexity. New York: Springer, 2012.
Elad, M. Sparse and Redundant Representations. New York: Springer, 2010.
Engan, K., Aase, S., and Hakon-Husoy, J. Method of optimal directions for frame design. Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 5:2443–2446, 1999.
Erdős, P., Steel, M., Székely, L., and Warnow, T. A few logs suffice to build (almost) all trees. I. Random Struct. Algorithms, 14:153–184, 1999.
Fazel, M. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
Feige, U. and Kilian, J. Heuristics for semirandom graph problems. J. Comput. Syst. Sci., 63(4):639–671, 2001.
Feige, U. and Krauthgamer, R. Finding and certifying a large hidden clique in a semirandom graph. Random Struct. Algorithms, 16(2):195–208, 2000.
Feldman, J., Servedio, R. A., and O’Donnell, R. PAC learning axis-aligned mixtures of Gaussians with no separation assumption. In COLT, pages 20–34, 2006.
Frieze, A., Jerrum, M., and Kannan, R. Learning linear transformations. In FOCS, pages 359–368, 1996.
Garnaev, A. and Gluskin, E. The widths of a Euclidean ball. Sov. Math. Dokl., 277(5):200–204, 1984.
Ge, R. and Ma, T. Decomposing overcomplete 3rd order tensors using sum-of-squares algorithms. In RANDOM, pages 829–849, 2015.
Gilbert, A., Muthukrishnan, S., and Strauss, M. Approximation of functions over redundant dictionaries using coherence. In SODA, pages 243–252, 2003.
Gillis, N. Robustness analysis of hotttopixx, a linear programming model for factoring nonnegative matrices. arXiv:1211.6687, 2012.
Goyal, N., Vempala, S., and Xiao, Y. Fourier PCA. In STOC, pages 584–593, 2014.
Gross, D. Recovering low-rank matrices from few coefficients in any basis. arXiv:0910.1879, 2009.
Gross, D., Liu, Y.-K., Flammia, S., Becker, S., and Eisert, J. Quantum state tomography via compressed sensing. Phys. Rev. Lett., 105(15):150401, 2010.
Guruswami, V., Lee, J., and Razborov, A. Almost Euclidean subspaces of ℓ1^N via expander codes. Combinatorica, 30(1):47–68, 2010.
Hardt, M. Understanding alternating minimization for matrix completion. In FOCS, pages 651–660, 2014.
Harshman, R. Foundations of the PARAFAC procedure: Model and conditions for an “explanatory” multi-mode factor analysis. UCLA Working Papers in Phonetics, 16:1–84, 1970.
Håstad, J. Tensor rank is NP-complete. J. Algorithms, 11(4):644–654, 1990.
Hillar, C. and Lim, L.-H. Most tensor problems are NP-hard. arXiv:0911.1393v4, 2013.
Hofmann, T. Probabilistic latent semantic analysis. In UAI, pages 289–296, 1999.
Horn, R. and Johnson, C. Matrix Analysis. New York: Cambridge University Press, 1990.
Hsu, D. and Kakade, S. Learning mixtures of spherical Gaussians: Moment methods and spectral decompositions. In ITCS, pages 11–20, 2013.
Huber, P. J. Projection pursuit. Ann. Stat., 13:435–475, 1985.
Hummel, R. A. and Gidas, B. C. Zero crossings and the heat equation. Courant Institute of Mathematical Sciences, TR-111, 1984.
Impagliazzo, R. and Paturi, R. On the complexity of k-SAT. J. Comput. Syst. Sci., 62(2):367–375, 2001.
Jain, P., Netrapalli, P., and Sanghavi, S. Low rank matrix completion using alternating minimization. In STOC, pages 665–674, 2013.
Kalai, A. T., Moitra, A., and Valiant, G. Efficiently learning mixtures of two Gaussians. In STOC, pages 553–562, 2010.
Karp, R. Probabilistic analysis of some combinatorial search problems. In Algorithms and Complexity: New Directions and Recent Results. New York: Academic Press, 1976, pages 1–19.
Kashin, B. and Temlyakov, V. A remark on compressed sensing. Manuscript, 2007.
Khachiyan, L. On the complexity of approximating extremal determinants in matrices. J. Complexity, 11(1):138–153, 1995.
Koller, D. and Friedman, N. Probabilistic Graphical Models. Cambridge, MA: MIT Press, 2009.
Kruskal, J. Three-way arrays: Rank and uniqueness of trilinear decompositions with applications to arithmetic complexity and statistics. Linear Algebra Appl., 18(2):95–138, 1977.
Kumar, A., Sindhwani, V., and Kambadur, P. Fast conical hull algorithms for near-separable non-negative matrix factorization. In ICML, pages 231–239, 2013.
Lee, D. and Seung, H. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999.
Lee, D. and Seung, H. Algorithms for non-negative matrix factorization. In NIPS, pages 556–562, 2000.
Leurgans, S., Ross, R., and Abel, R. A decomposition for three-way arrays. SIAM J. Matrix Anal. Appl., 14(4):1064–1083, 1993.
Lewicki, M. and Sejnowski, T. Learning overcomplete representations. Neural Comput., 12:337–365, 2000.
Li, W. and McCallum, A. Pachinko allocation: DAG-structured mixture models of topic correlations. In ICML, pages 633–640, 2007.
Lindsay, B. Mixture Models: Theory, Geometry and Applications. Hayward, CA: Institute for Mathematical Statistics, 1995.
Logan, B. F. Properties of high-pass signals. PhD thesis, Columbia University, 1965.
Lovász, L. and Saks, M. Communication complexity and combinatorial lattice theory. J. Comput. Syst. Sci., 47(2):322–349, 1993.
McSherry, F. Spectral partitioning of random graphs. In FOCS, pages 529–537, 2001.
Mallat, S. A Wavelet Tour of Signal Processing. New York: Academic Press, 1998.
Mallat, S. and Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process., 41(12):3397–3415, 1993.
Moitra, A. An almost optimal algorithm for computing nonnegative rank. In SODA, pages 1454–1464, 2013.
Moitra, A. Super-resolution, extremal functions and the condition number of Vandermonde matrices. In STOC, pages 821–830, 2015.
Moitra, A. and Valiant, G. Settling the polynomial learnability of mixtures of Gaussians. In FOCS, pages 93–102, 2010.
Mossel, E. and Roch, S. Learning nonsingular phylogenies and hidden Markov models. In STOC, pages 366–375, 2005.
Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course. New York: Springer, 2004.
Olshausen, B. and Field, D. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
Papadimitriou, C., Raghavan, P., Tamaki, H., and Vempala, S. Latent semantic indexing: A probabilistic analysis. J. Comput. Syst. Sci., 61(2):217–235, 2000.
Pati, Y., Rezaiifar, R., and Krishnaprasad, P. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Asilomar Conference on Signals, Systems, and Computers, pages 40–44, 1993.
Pearson, K. Contributions to the mathematical theory of evolution. Philos. Trans. Royal Soc. A, 185:71–110, 1894.
Rabani, Y., Schulman, L., and Swamy, C. Learning mixtures of arbitrary distributions over large discrete domains. In ITCS, pages 207–224, 2014.
Raz, R. Tensor-rank and lower bounds for arithmetic formulas. In STOC, pages 659–666, 2010.
Recht, B. A simpler approach to matrix completion. J. Mach. Learn. Res., 12:3413–3430, 2011.
Recht, B., Fazel, M., and Parrilo, P. Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471–501, 2010.
Redner, R. A. and Walker, H. F. Mixture densities, maximum likelihood and the EM algorithm. SIAM Rev., 26(2):195–239, 1984.
Renegar, J. On the computational complexity and geometry of the first-order theory of the reals. J. Symb. Comput., 13(1):255–352, 1991.
Rockafellar, R. T. Convex Analysis. Princeton, NJ: Princeton University Press, 1996.
Seidenberg, A. A new decision method for elementary algebra. Ann. Math., 60(2):365–374, 1954.
de Silva, V. and Lim, L.-H. Tensor rank and the ill-posedness of the best low rank approximation problem. SIAM J. Matrix Anal. Appl., 30(3):1084–1127, 2008.
Spielman, D. and Teng, S.-H. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. J. ACM, 51(3):385–463, 2004.
Spielman, D., Wang, H., and Wright, J. Exact recovery of sparsely-used dictionaries. J. Mach. Learn. Res., 23:1–18, 2012.
Srebro, N. and Shraibman, A. Rank, trace-norm and max-norm. In COLT, pages 545–560, 2005.
Steel, M. Recovering a tree from the leaf colourations it generates under a Markov model. Appl. Math. Lett., 7:19–24, 1994.
Tarski, A. A Decision Method for Elementary Algebra and Geometry. Berkeley and Los Angeles: University of California Press, 1951.
Teicher, H. Identifiability of mixtures. Ann. Math. Stat., 31(1):244–248, 1961.
Tropp, J. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory, 50(10):2231–2242, 2004.
Tropp, J., Gilbert, A., Muthukrishnan, S., and Strauss, M. Improved sparse approximation over quasi-incoherent dictionaries. In IEEE International Conference on Image Processing, 1:37–40, 2003.
Valiant, L. A theory of the learnable. Commun. ACM, 27(11):1134–1142, 1984.
Vavasis, S. On the complexity of nonnegative matrix factorization. SIAM J. Optim., 20(3):1364–1377, 2009.
Vempala, S. and Xiao, Y. Structure from local optima: Learning subspace juntas via higher order PCA. arXiv:1108.3329, 2011.
Vempala, S. and Wang, G. A spectral algorithm for learning mixture models. J. Comput. Syst. Sci., 68(4):841–860, 2004.
Wainwright, M. and Jordan, M. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, 2008.
Wedin, P. Perturbation bounds in connection with singular value decompositions. BIT Numer. Math., 12:99–111, 1972.
Yannakakis, M. Expressing combinatorial optimization problems by linear programs. J. Comput. Syst. Sci., 43(3):441–466, 1991.

  • Bibliography
  • Ankur Moitra, Massachusetts Institute of Technology
  • Book: Algorithmic Aspects of Machine Learning
  • Online publication: 28 September 2018
  • Chapter DOI: https://doi.org/10.1017/9781316882177.010