Show full item record

dc.contributor.author: Corchado Rodríguez, Emilio Santiago
dc.contributor.author: MacDonald, Donald
dc.contributor.author: Fyfe, Colin
dc.date.accessioned: 2017-09-05T11:02:24Z
dc.date.available: 2017-09-05T11:02:24Z
dc.date.issued: 2004
dc.identifier.citation: Data Mining and Knowledge Discovery. Volume 8 (3), pp. 203-225. Springer Science + Business Media.
dc.identifier.issn: 1384-5810 (Print)
dc.identifier.uri: http://hdl.handle.net/10366/134458
dc.description.abstract: In this paper, we review an extension of the learning rules in a Principal Component Analysis network which has been derived to be optimal for a specific probability density function. We note that this probability density function is one of a family of pdfs and investigate the learning rules formed in order to be optimal for several members of this family. We show that, whereas we have previously (Lai et al., 2000; Fyfe and MacDonald, 2002) viewed the single member of the family as an extension of PCA, it is more appropriate to view the whole family of learning rules as methods of performing Exploratory Projection Pursuit. We illustrate this on both artificial and real data sets.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Springer Science + Business Media
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Unported
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/3.0/
dc.subject: Computer Science
dc.title: Maximum and Minimum Likelihood Hebbian Learning for Exploratory Projection Pursuit
dc.type: info:eu-repo/semantics/article
dc.rights.accessRights: info:eu-repo/semantics/openAccess
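The abstract describes a family of Hebbian learning rules in a negative-feedback (residual) network, where the exponent applied to the residual selects which member of the pdf family the rule is optimal for. The following is a minimal illustrative sketch, not the authors' implementation; the function name, toy data, and hyperparameter values are assumptions. With exponent p = 2 the update reduces to the standard negative-feedback PCA rule; other values of p correspond to other members of the family and yield exploratory-projection-pursuit-style directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ml_hebbian(X, n_out=1, p=2.0, eta=0.005, epochs=50):
    """Sketch of a Maximum Likelihood Hebbian rule (hypothetical helper).

    Feedforward:  y = W x
    Feedback:     e = x - W^T y      (residual after subtracting projection)
    Update:       dW = eta * outer(y, sign(e) * |e|**(p-1))
    """
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_out, n_in))  # small random init
    for _ in range(epochs):
        for x in X:
            y = W @ x                  # feedforward activation
            e = x - W.T @ y            # residual fed back from outputs
            W += eta * np.outer(y, np.sign(e) * np.abs(e) ** (p - 1.0))
    return W

# Toy data (assumed for illustration): dimensions 0 and 1 are correlated.
X = rng.normal(size=(500, 4))
X[:, 1] += 0.9 * X[:, 0]
W = ml_hebbian(X, n_out=1, p=2.0)
```

Varying `p` away from 2 changes the implicit noise model on the residual and thereby the projection index being optimized, which is the sense in which the family performs Exploratory Projection Pursuit rather than plain PCA.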


Files in this item


This item appears in the following collections

Attribution-NonCommercial-NoDerivs 3.0 Unported
Except where otherwise noted, the item's license is described as Attribution-NonCommercial-NoDerivs 3.0 Unported