Use this identifier to reference this record: https://hdl.handle.net/1822/85724

Full record
DC Field: Value (Language)
dc.contributor.author: Rodrigues, Nelson Ricardo Pereira (por)
dc.contributor.author: Costa, Nuno Miguel Cerqueira (por)
dc.contributor.author: Melo, César Gonçalo Macedo (por)
dc.contributor.author: Abbasi, Ali (por)
dc.contributor.author: Fonseca, Jaime C. (por)
dc.contributor.author: Cardoso, Paulo (por)
dc.contributor.author: Borges, João (por)
dc.date.accessioned: 2023-07-26T11:09:28Z
dc.date.available: 2023-07-26T11:09:28Z
dc.date.issued: 2023-06-15
dc.identifier.citation: Rodrigues, N.R.P.; da Costa, N.M.C.; Melo, C.; Abbasi, A.; Fonseca, J.C.; Cardoso, P.; Borges, J. Fusion Object Detection and Action Recognition to Predict Violent Action. Sensors 2023, 23, 5610. https://doi.org/10.3390/s23125610 (por)
dc.identifier.issn: 1424-8220 (por)
dc.identifier.uri: https://hdl.handle.net/1822/85724
dc.description.abstract: In the context of Shared Autonomous Vehicles, the need to monitor the environment inside the car will be crucial. This article focuses on the application of deep learning algorithms to present a fusion monitoring solution which combines three different algorithms: a violent action detection system, which recognizes violent behaviors between passengers; a violent object detection system; and a lost items detection system. Public datasets (COCO and TAO) were used to train state-of-the-art object detection algorithms such as YOLOv5. For violent action detection, the MoLa InCar dataset was used to train state-of-the-art algorithms such as I3D, R(2+1)D, SlowFast, TSN, and TSM. Finally, an embedded automotive solution was used to demonstrate that both methods run in real time. (por)
dc.description.sponsorship: Work has been supported by FCT—Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020. This work was partly financed by European social funds through the Portugal 2020 program, and via national funds through FCT—Foundation for Science and Technology, within the scope of projects POCH-02-5369-FSE-000006. The author would also like to acknowledge FCT for the attributed Doctoral grant PD/BDE/150500/2019. (por)
dc.language.iso: eng (por)
dc.publisher: Multidisciplinary Digital Publishing Institute (MDPI) (por)
dc.relation: info:eu-repo/grantAgreement/FCT/6817 - DCRRNI ID/UIDB%2F00319%2F2020/PT (por)
dc.relation: POCH-02-5369-FSE-000006 (por)
dc.relation: info:eu-repo/grantAgreement/FCT/POR_NORTE/PD%2FBDE%2F150500%2F2019/PT (por)
dc.rights: openAccess (por)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/ (por)
dc.subject: Machine learning (por)
dc.subject: Visual intelligence (por)
dc.subject: Object detection (por)
dc.subject: Image processing (por)
dc.subject: Action recognition (por)
dc.subject: Autonomous vehicles (por)
dc.title: Fusion object detection and action recognition to predict violent action (por)
dc.type: article (por)
dc.peerreviewed: yes (por)
dc.relation.publisherversion: https://www.mdpi.com/1424-8220/23/12/5610 (por)
oaire.citationStartPage: 1 (por)
oaire.citationEndPage: 21 (por)
oaire.citationIssue: 12 (por)
oaire.citationVolume: 23 (por)
dc.date.updated: 2023-06-27T13:23:26Z
dc.identifier.eissn: 1424-8220
dc.identifier.doi: 10.3390/s23125610 (por)
dc.identifier.pmid: 37420776 (por)
sdum.journal: Sensors (por)
oaire.version: VoR (por)
dc.identifier.articlenumber: 5610 (por)
Appears in collections: BUM - MDPI

Files in this record:
File: sensors-23-05610.pdf (3.55 MB, Adobe PDF)

This work is licensed under a Creative Commons License
