
Prototype-based Models for the Supervised Learning of Classification Schemes

Published online by Cambridge University Press: 30 May 2017

Michael Biehl
Affiliation:
Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen, The Netherlands email: m.biehl@rug.nl
Barbara Hammer
Affiliation:
CITEC Center of Excellence, Bielefeld University, Univ.-Str. 21-23, 33594 Bielefeld, Germany email: bhammer@techfak.uni-bielefeld.de
Thomas Villmann
Affiliation:
Computational Intelligence Group, Univ. of Applied Sciences, Technikumplatz 17, 09648 Mittweida, Germany email: villmann@hs-mittweida.de

Abstract

An introduction is given to the use of prototype-based models in supervised machine learning. The central idea of the framework is to represent previously observed data in terms of so-called prototypes, which reflect typical properties of the data. Together with a suitable, discriminative distance or dissimilarity measure, prototypes can be used for the classification of complex, possibly high-dimensional data. We illustrate the framework in terms of the popular Learning Vector Quantization (LVQ). Most frequently, the standard Euclidean distance is employed as the distance measure. We discuss how LVQ can be equipped with more general dissimilarities. Moreover, we introduce relevance learning as a tool for the data-driven optimization of parameterized distances.
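The nearest-prototype classification and the basic LVQ training step summarized in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the function names, learning rate, and the toy relevance-distance helper are our own assumptions, and only plain LVQ1 with a (possibly relevance-weighted) squared Euclidean distance is shown.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """Basic LVQ1: move the closest prototype towards a sample carrying the
    same label, and away from a sample carrying a different label."""
    rng = np.random.default_rng(seed)
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.sum((W - X[i]) ** 2, axis=1)   # squared Euclidean distances
            k = int(np.argmin(d))                 # index of the winning prototype
            sign = 1.0 if proto_labels[k] == y[i] else -1.0
            W[k] += sign * lr * (X[i] - W[k])     # attract or repel the winner
    return W

def lvq_predict(X, W, proto_labels):
    """Nearest-prototype classification: assign the label of the closest prototype."""
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[np.argmin(d, axis=1)]

def relevance_distance(x, w, lam):
    """Parameterized distance of the relevance-learning type:
    d_lambda(x, w) = sum_j lambda_j * (x_j - w_j)^2, with lambda_j >= 0.
    In relevance LVQ the weights lambda_j are themselves optimized from
    the training data alongside the prototypes."""
    return float(np.sum(lam * (x - w) ** 2))
```

As a usage sketch, training on two well-separated Gaussian clusters with one prototype per class yields a nearest-prototype classifier that recovers the cluster labels; replacing the plain squared Euclidean distance by `relevance_distance` with adaptive weights is the step that relevance learning automates.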

Type
Contributed Papers
Copyright
Copyright © International Astronomical Union 2017 

