Title: Recognizing touch gestures for social human-robot interaction
Authors: Altuğlu, Tuğçe Ballı; Altun, Kerem (ORCID: 0000-0002-5493-8921)
Publication type: Conference Object
Conference: 2015 ACM International Conference on Multimodal Interaction, November 9-13, 2015, Seattle, WA
Year: 2015
Pages: 407-413
ISBN: 978-1-4503-3912-4
DOI: 10.1145/2818346.2830600 (https://doi.org/10.1145/2818346.2830600)
Handle: https://hdl.handle.net/20.500.12939/617
Scopus ID: 2-s2.0-84959258854
WoS ID: WOS:000380609500071
Language: English
Access rights: info:eu-repo/semantics/closedAccess
Keywords: Gesture Recognition; Human-Robot Interaction; Random Forests; Feature Selection; Sequential Floating Forward Search

Abstract: In this study, we performed touch gesture recognition on two datasets provided by the "Recognition of Social Touch Gestures Challenge 2015". In the first dataset, the Corpus of Social Touch (CoST), touch is performed on a mannequin arm, whereas in the second dataset, Human-Animal Affective Robot Touch (HAART), touch is performed in a human-pet interaction setting. CoST includes 14 gestures and HAART includes 7 gestures. We used the pressure data, image features, the Hurst exponent, Hjorth parameters, and autoregressive model coefficients as features, and performed feature selection using sequential floating forward search. We obtained classification results of around 60%-70% for the HAART dataset. For the CoST dataset, the results range from 26% to 95% depending on the selection of the training/test sets.
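The abstract lists Hjorth parameters among the extracted features. Below is a minimal sketch, not the authors' code, of how these parameters are conventionally computed from a 1-D signal; the function name and the idea of averaging the pressure frames into a single time series are illustrative assumptions, as the record does not describe the exact pipeline.

```python
# Hjorth parameter extraction sketch (illustrative; not from the paper).
import numpy as np

def hjorth_parameters(x):
    """Return (activity, mobility, complexity) for a 1-D signal x."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)           # first difference (derivative estimate)
    ddx = np.diff(dx)         # second difference
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x                                    # signal power
    mobility = np.sqrt(var_dx / var_x)                  # mean-frequency proxy
    complexity = np.sqrt(var_ddx / var_dx) / mobility   # bandwidth proxy
    return activity, mobility, complexity

# Example usage: collapse a hypothetical 8x8 pressure grid over 100 frames
# into a mean-pressure time series (placeholder data, assumed layout).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.random((100, 8, 8))
    mean_pressure = frames.mean(axis=(1, 2))
    print(hjorth_parameters(mean_pressure))
```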