Use this identifier to reference this record: https://hdl.handle.net/1822/86333

Full record
DC field: value (language)

dc.contributor.author: Oliveira Santos, Flavio Arthur (por)
dc.contributor.author: Zanchettin, Cleber (por)
dc.contributor.author: Matos, Leonardo Nogueira (por)
dc.contributor.author: Novais, Paulo (por)
dc.date.accessioned: 2023-09-12T14:33:42Z
dc.date.issued: 2022
dc.identifier.citation: Flávio Arthur Oliveira Santos, Cleber Zanchettin, Leonardo Nogueira Matos, Paulo Novais, On the Impact of Interpretability Methods in Active Image Augmentation Method, Logic Journal of the IGPL, Volume 30, Issue 4, August 2022, Pages 611–621, https://doi.org/10.1093/jigpal/jzab006 (por)
dc.identifier.issn: 1367-0751
dc.identifier.uri: https://hdl.handle.net/1822/86333
dc.description.abstract: Robustness is a significant constraint on machine learning models: their performance must not deteriorate when the training and testing data differ slightly. Deep neural network models achieve impressive results in a wide range of computer vision applications. Still, in the presence of noise or region occlusion, some models perform inaccurately even on data seen during training. Moreover, some experiments suggest that deep learning models sometimes use the wrong parts of the input to perform inference. Active image augmentation (ADA) is an augmentation method that uses interpretability methods to augment the training data and improve model robustness against the problems described above. Although ADA presented interesting results, its original version used only vanilla backpropagation interpretability to train the U-Net model. In this work, we propose an extensive experimental analysis of the interpretability method's impact on ADA. We use five interpretability methods: vanilla backpropagation, guided backpropagation, gradient-weighted class activation mapping (GradCam), guided GradCam and InputXGradient. The results show that all methods achieve similar performance at the end of training, but when ADA is combined with GradCam, the U-Net model converges impressively fast. (por)
dc.description.sponsorship: This work has been supported by Fundação para a Ciência e a Tecnologia within the Project Scope: UIDB/00319/2020. The authors also thank Coordenação de Aperfeiçoamento de Pessoal de Nível Superior and Conselho Nacional de Desenvolvimento Científico e Tecnológico (Brazilian research agencies) for the financial support. (por)
dc.language.iso: eng (por)
dc.publisher: Oxford University Press (por)
dc.relation: info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDB%2F00319%2F2020/PT (por)
dc.rights: restrictedAccess (por)
dc.subject: Data augmentation (por)
dc.subject: robustness (por)
dc.subject: interpretability (por)
dc.title: On the impact of interpretability methods in active image augmentation method (por)
dc.type: article (por)
dc.peerreviewed: yes (por)
dc.relation.publisherversion: https://academic.oup.com/jigpal/article/30/4/611/6123345 (por)
oaire.citationStartPage: 611 (por)
oaire.citationEndPage: 621 (por)
oaire.citationIssue: 4 (por)
oaire.citationVolume: 30 (por)
dc.date.updated: 2023-07-31T23:14:45Z
dc.identifier.doi: 10.1093/jigpal/jzab006 (por)
dc.date.embargo: 10000-01-01
dc.subject.wos: Science & Technology
sdum.export.identifier: 12656
sdum.journal: Logic Journal of the IGPL (por)
oaire.version: AM (por)
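The abstract above describes augmenting training data with interpretability attributions: compute a saliency map for each input, then use it to decide which regions the model should be forced to rely on. The sketch below is hypothetical and not the paper's code (which trains a U-Net on images); it illustrates the idea on a toy linear model where gradients have a closed form, with InputXGradient as the attribution and a simple keep-top-k mask as the augmentation. All function names are illustrative.

```python
# Hypothetical sketch of saliency-guided augmentation in the spirit of ADA.
# For a toy linear model f(x) = sum(w_i * x_i), the gradient d f / d x_i
# is simply w_i, so attributions can be computed without autodiff.

def vanilla_saliency(weights):
    # Vanilla-backpropagation attribution: |d f / d x_i| = |w_i|.
    return [abs(w) for w in weights]

def input_x_gradient(x, weights):
    # InputXGradient attribution: |x_i * (d f / d x_i)|.
    return [abs(xi * wi) for xi, wi in zip(x, weights)]

def ada_style_augment(x, saliency, keep_ratio=0.5):
    # Keep the most salient inputs and zero out the rest, producing an
    # augmented sample that emphasizes the regions the attribution
    # deems important (a crude stand-in for ADA's image-level masking).
    k = max(1, int(len(x) * keep_ratio))
    threshold = sorted(saliency, reverse=True)[k - 1]
    return [xi if s >= threshold else 0.0 for xi, s in zip(x, saliency)]

x = [0.2, 0.9, 0.1, 0.7]
w = [1.0, 0.1, 0.8, 0.5]
sal = input_x_gradient(x, w)
aug = ada_style_augment(x, sal)
print(aug)  # -> [0.2, 0.0, 0.0, 0.7]
```

Swapping `input_x_gradient` for a different attribution (e.g. a GradCam-style map in the real image setting) changes which regions survive the mask, which is exactly the axis the paper's experiments vary.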
Appears in collections: CAlg - Artigos em revistas internacionais / Papers in international journals

Files in this record:
File: 2102.12354.pdf (restricted access), 2.55 MB, Adobe PDF
