Show simple item record

dc.contributor.author: Lopez-Martin, Manuel
dc.contributor.author: Sánchez-Esguevillas, Antonio
dc.contributor.author: Arribas, Juan Ignacio
dc.contributor.author: Carro, Belen
dc.date.accessioned: 2024-01-25T09:25:25Z
dc.date.available: 2024-01-25T09:25:25Z
dc.date.issued: 2022-03
dc.identifier.issn: 1566-2535
dc.identifier.uri: http://hdl.handle.net/10366/154669
dc.description.abstract: Contrastive learning makes it possible to establish similarities between samples by comparing their distances in an intermediate representation space (embedding space), using loss functions designed to attract similar samples and repel dissimilar ones. The distance comparison is based exclusively on the sample features. We propose a novel contrastive learning scheme that includes the labels in the same embedding space as the features and performs the distance comparison between features and labels in this shared embedding space. Under this scheme, the features of a sample should be close to its ground-truth (positive) label and far from the other (negative) labels. This makes it possible to implement supervised classification based on contrastive learning. Each embedded label assumes the role of a class prototype in embedding space, with the sample features that share the label gathering around it. The aim is to separate the label prototypes while minimizing the distance between each prototype and its same-class samples. A novel set of loss functions is proposed with this objective. Loss minimization drives the allocation of sample features and labels in embedding space. Loss functions and their associated training and prediction architectures are analyzed in detail, along with different strategies for label separation. The proposed scheme drastically reduces the number of pair-wise comparisons, thus improving model performance. To further reduce the number of pair-wise comparisons, this initial scheme is extended by replacing the set of negative labels with its best single representative: either the negative label nearest to the sample features or the centroid of the cluster of negative labels. This idea creates a new subset of models, which are analyzed in detail. The outputs of the proposed models are the distances (in embedding space) between each sample and the label prototypes. These distances can be used to perform classification (minimum-distance label), feature dimensionality reduction (using the distances and the embeddings instead of the original features), and data visualization (with 2D or 3D embeddings). Although the proposed models are generic, their application and performance evaluation are done here for network intrusion detection, which is characterized by noisy and unbalanced labels and a challenging classification of the various types of attacks. Empirical results of the model applied to intrusion detection are presented in detail for two well-known intrusion detection datasets, together with a thorough set of classification and clustering performance evaluation metrics. (See the illustrative sketch after this record.)
dc.language.iso: eng
dc.subject: Label embedding
dc.subject: Contrastive learning
dc.subject: Max margin loss
dc.subject: Deep learning
dc.subject: Embeddings fusion
dc.subject: Network intrusion detection
dc.title: Supervised contrastive learning over prototype-label embeddings for network intrusion detection
dc.type: info:eu-repo/semantics/article
dc.relation.publishversion: https://doi.org/10.1016/j.inffus.2021.09.014
dc.subject.unesco: Neurosciences
dc.subject.unesco: Telecommunications engineering
dc.identifier.doi: 10.1016/j.inffus.2021.09.014
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.journal.title: Information Fusion
dc.volume.number: 79
dc.page.initial: 200
dc.page.final: 228
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion
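
The scheme summarized in the abstract lends itself to a compact illustration. Below is a minimal PyTorch sketch, assuming a generic feed-forward encoder and a hinge-style max-margin loss; it is an illustrative reconstruction under those assumptions, not the paper's exact architecture or loss functions, and all names (PrototypeContrastiveNet, prototype_margin_loss, dim=32, margin=1.0) are hypothetical. Labels are learnable prototype vectors living in the same embedding space as the encoded features; the loss pulls each sample toward its ground-truth prototype and pushes it beyond a margin from the negative prototypes. The "nearest" option mirrors the reduced-comparison variant described in the abstract (the centroid variant is omitted for brevity).

    # Minimal sketch of supervised contrastive learning over
    # prototype-label embeddings (illustrative, not the paper's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PrototypeContrastiveNet(nn.Module):
        def __init__(self, num_features: int, num_classes: int, dim: int = 32):
            super().__init__()
            # Encoder mapping sample features into the shared embedding space.
            self.encoder = nn.Sequential(
                nn.Linear(num_features, 64), nn.ReLU(), nn.Linear(64, dim))
            # One learnable label prototype per class, in the same space.
            self.prototypes = nn.Embedding(num_classes, dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Output: distances from each sample to every label prototype,
            # usable for classification, dimensionality reduction, or plotting.
            z = self.encoder(x)                              # (batch, dim)
            return torch.cdist(z, self.prototypes.weight)    # (batch, classes)

    def prototype_margin_loss(dists, labels, margin=1.0, negatives="all"):
        # Pull each sample toward its ground-truth prototype; push it at
        # least `margin` away from negative prototypes (max-margin hinge).
        # negatives="all"      -> compare against every negative label;
        # negatives="nearest"  -> only the closest negative (reduced variant).
        batch = torch.arange(dists.size(0))
        pos = dists[batch, labels]                 # distance to own label
        neg = dists.clone()
        neg[batch, labels] = float("inf")          # mask out the positive
        if negatives == "nearest":
            push = F.relu(margin - neg.min(dim=1).values)
        else:
            # relu(margin - inf) == 0, so the masked positive contributes nothing.
            push = F.relu(margin - neg).sum(dim=1)
        return (pos + push).mean()

Toy usage, classifying by minimum distance to a prototype:

    model = PrototypeContrastiveNet(num_features=20, num_classes=5)
    x = torch.randn(8, 20)
    y = torch.randint(0, 5, (8,))
    dists = model(x)
    loss = prototype_margin_loss(dists, y, negatives="nearest")
    pred = dists.argmin(dim=1)                     # minimum-distance label

As the abstract notes, the per-class distance vector produced by the forward pass can also serve directly as a low-dimensional representation of the sample, and the embeddings themselves can be visualized when dim is 2 or 3.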


Files in this item


This item appears in the following collection(s)
