Use this identifier to reference this record: https://hdl.handle.net/1822/79434

Full record
DC Field | Value | Language
dc.contributor.author | Santos, Flávio Arthur Oliveira | por
dc.contributor.author | Zanchettin, Cleber | por
dc.contributor.author | Silva, José Vitor Santos | por
dc.contributor.author | Matos, Leonardo Nogueira | por
dc.contributor.author | Novais, Paulo | por
dc.date.accessioned | 2022-09-07T13:37:48Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Santos, F.A.O., Zanchettin, C., Silva, J.V.S., Matos, L.N., Novais, P. (2021). A Hybrid Post Hoc Interpretability Approach for Deep Neural Networks. In: Sanjurjo González, H., Pastor López, I., García Bringas, P., Quintián, H., Corchado, E. (eds) Hybrid Artificial Intelligent Systems. HAIS 2021. Lecture Notes in Computer Science, vol 12886. Springer, Cham. https://doi.org/10.1007/978-3-030-86271-8_50 | -
dc.identifier.isbn | 978-3-030-86270-1 | -
dc.identifier.issn | 0302-9743 | -
dc.identifier.uri | https://hdl.handle.net/1822/79434 | -
dc.description.abstract | Researchers publish state-of-the-art results with deep learning models every day; however, as these models become common even in production, ensuring their fairness is a major concern. One way to analyze model fairness is through interpretability, identifying the features essential to the model's decision. Many interpretability methods exist for producing such interpretations, such as Saliency, GradCAM, Integrated Gradients, and Layer-wise Relevance Propagation. Although all of these methods produce feature importance maps, different methods yield different interpretations, and their evaluation relies on qualitative analysis. In this work, we propose the Iterative Post Hoc Attribution approach, which casts the interpretability problem as an optimization problem guided by two objective definitions of what our solution considers important. We solve this optimization problem with a hybrid approach combining the optimization algorithm and the deep neural network model. The results show that our approach selects the features essential to the model prediction more accurately than traditional interpretability methods. | por
dc.description.sponsorship | FCT - Fundação para a Ciência e a Tecnologia (UIDB/00319/2020) | por
dc.language.iso | eng | por
dc.publisher | Springer, Cham | -
dc.relation | info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDB%2F00319%2F2020/PT | por
dc.rights | restrictedAccess | por
dc.subject | Deep learning | por
dc.subject | Optimization | por
dc.subject | Interpretability | por
dc.subject | Fairness | por
dc.title | A hybrid post hoc interpretability approach for deep neural networks | por
dc.type | conferencePaper | por
dc.peerreviewed | yes | por
oaire.citationStartPage | 600 | por
oaire.citationEndPage | 610 | por
oaire.citationVolume | 12886 LNAI | por
dc.date.updated | 2022-08-29T17:24:23Z | -
dc.identifier.doi | 10.1007/978-3-030-86271-8_50 | por
dc.date.embargo | 10000-01-01 | -
dc.identifier.eisbn | 978-3-030-86271-8 | -
sdum.export.identifier | 12332 | -
sdum.journal | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | por
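The abstract describes casting post hoc attribution as an optimization problem over which input features matter to the model's prediction. As an illustration only — this is not the paper's algorithm, and the model, baseline, and greedy selection rule below are all assumptions for the sketch — a minimal iterative mask-optimization example in NumPy:

```python
import numpy as np

# Hypothetical toy setup (not from the paper): a fixed random
# two-layer network stands in for a trained model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4,))

def model(x):
    # ReLU MLP with a scalar output.
    return np.maximum(x @ W1, 0.0) @ W2

def iterative_attribution(x, baseline, k):
    """Greedy sketch of optimization-based attribution: repeatedly
    mask (to the baseline) the remaining feature whose removal
    changes the current prediction the most, so later scores account
    for features already removed."""
    remaining = list(range(x.size))
    masked = x.copy()
    order = []
    for _ in range(k):
        ref = model(masked)
        scores = {}
        for i in remaining:
            trial = masked.copy()
            trial[i] = baseline[i]
            scores[i] = abs(model(trial) - ref)
        best = max(remaining, key=lambda i: scores[i])
        order.append(best)
        masked[best] = baseline[best]
        remaining.remove(best)
    return order

x = rng.normal(size=8)
baseline = np.zeros(8)
top = iterative_attribution(x, baseline, k=3)
print("most influential features (toy sketch):", top)
```

The greedy loop is one simple way to make the selection iterative; the actual hybrid approach in the paper combines an optimization algorithm with the network itself and is defined by the two objectives described in the abstract.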
Appears in collections: CAlg - Artigos em livros de atas/Papers in proceedings

Files in this record:
File | Description | Size | Format
HAIS54.pdf | Restricted access | 765.02 kB | Adobe PDF
