The MUG Facial Expression Database was created by the MUG group. It contains image sequences from a sufficient number of subjects for the development and evaluation of facial expression recognition systems that use posed and induced expressions. The sequences were captured in a laboratory environment at high resolution and without occlusions. In the first part of the database, eighty-six subjects performed the six basic expressions according to the ‘emotion prototypes’ defined in the Investigator’s Guide of the FACS manual. Each sequence starts and ends at the neutral state and follows the onset-apex-offset temporal pattern. The publicly available part comprises 1462 sequences, and landmark point annotations are available for 1222 images. In the second part of the database, eighty-two of the subjects were recorded while watching an emotion-inducing video, so that authentic (induced) expressions were also captured.

The Parkinson’s Disease Postural Tremor Dataset contains IMU signals captured in the wild via the accelerometer embedded in modern smartphones, for the purpose of detecting tremorous episodes related to Parkinson’s Disease (PD). A group of 31 PD patients and 14 healthy controls contributed accelerometer data using their personal smartphones over a period spanning several months. Tri-axial acceleration values were recorded automatically whenever a phone call took place, with each recording lasting at most 75 seconds. Each phone call thus resulted in one recorded accelerometer signal, also referred to as a session. Each subject contributed a different number of sessions, depending on the number of phone calls they made during the data collection period as well as on their participation time (they were free to drop out at any time). A detailed description of the capturing process, as well as analysis results, can be found in the related research article.
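The dataset's on-disk format is not specified here, but a typical first step with such sessions is to estimate the dominant oscillation frequency of the acceleration signal, since parkinsonian tremor concentrates in a low-frequency band (roughly 3–7 Hz). The sketch below is illustrative only: the function name, the assumed 50 Hz sampling rate, and the synthetic 5 Hz signal are all assumptions, not part of the dataset.

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) with the largest DFT magnitude,
    ignoring the DC component (naive O(n^2) DFT for illustration;
    a real pipeline would use an FFT, e.g. numpy.fft)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [v - mean for v in signal]
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

# Synthetic stand-in for a session: a 5 Hz "tremor-like" oscillation
# sampled at an assumed 50 Hz for 2 seconds.
fs = 50
sig = [math.sin(2 * math.pi * 5 * t / fs) for t in range(fs * 2)]
print(dominant_frequency(sig, fs))  # 5.0
```

A tremor detector would compare such band estimates against the session's context (e.g. posture during the call), which is exactly what the 75-second call-triggered recordings make possible.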

The Parkinson’s Disease Smartphone Sensor Dataset contains accelerometer recordings and keyboard typing data contributed by Parkinson’s Disease patients and healthy controls. The accelerometer data consist of acceleration values recorded during phone calls, while the typing data consist of virtual keyboard press and release timestamps. The dataset is divided into two parts: the first, called SData, contains data from a small, medically evaluated set of users, while the second, called GData, contains recordings from a large body of users with self-reported PD labels.
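From press and release timestamps, two standard keystroke-dynamics features can be derived: hold time (release minus press of the same keystroke) and flight time (next press minus previous release). The tuple-based event format below is an assumption for illustration; the dataset's actual schema may differ.

```python
def typing_features(events):
    """Compute hold and flight times from a list of
    (press_ts, release_ts) tuples, timestamps in seconds.
    The (press, release) tuple layout is an assumed format."""
    holds = [r - p for p, r in events]
    flights = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return holds, flights

# Three hypothetical keystrokes
events = [(0.00, 0.12), (0.30, 0.41), (0.62, 0.70)]
holds, flights = typing_features(events)
print([round(h, 2) for h in holds])    # [0.12, 0.11, 0.08]
print([round(f, 2) for f in flights])  # [0.18, 0.21]
```

Distributions of such intervals are the kind of signal that self-reported PD labels in GData could be correlated against.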

The Food Intake Cycle (FIC) dataset was created by the MUG group towards the investigation of in-meal eating behavior. FIC contains 6-DoF IMU recordings from 21 eating sessions of 12 unique subjects, captured in the restaurant of the Aristotle University of Thessaloniki, with an average duration of 11.7 minutes per session. In more detail, FIC contains the 3D accelerometer and gyroscope signals originating from off-the-shelf commercial smartwatches such as the Microsoft Band 2 and the Sony Smartwatch 2. As ground truth, the hand-labeled start and end points of each in-meal hand micromovement are also provided.
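With hand-labeled start and end points, a common preprocessing step is to slice each recording into labeled segments for training a micromovement classifier. The annotation layout, label names, and 10 Hz sampling rate below are assumptions for illustration, not the dataset's documented format.

```python
def extract_segments(samples, fs, annotations):
    """Slice a recording into ground-truth segments.
    `samples`: list of per-sample rows (e.g. 6-DoF IMU values),
    `fs`: sampling rate in Hz (assumed),
    `annotations`: list of (start_sec, end_sec, label) tuples (assumed layout)."""
    segments = []
    for start, end, label in annotations:
        i, j = int(start * fs), int(end * fs)
        segments.append((label, samples[i:j]))
    return segments

fs = 10                              # assumed sampling rate, Hz
samples = list(range(100))           # stand-in for 6-DoF IMU rows
ann = [(0.5, 1.0, "pick_food"),      # hypothetical micromovement labels
       (2.0, 2.5, "mouthful")]
segs = extract_segments(samples, fs, ann)
print([(lbl, len(s)) for lbl, s in segs])  # [('pick_food', 5), ('mouthful', 5)]
```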

The Free-living Food Intake Cycle (FreeFIC) dataset was created by the Multimedia Understanding Group towards the investigation of in-the-wild eating behavior. This is achieved by recording the subjects’ meals as a small part of their everyday, unscripted activities. The FreeFIC dataset contains the 3D acceleration and angular velocity signals (6 DoF) from 16 in-the-wild sessions provided by 6 unique subjects. All sessions were recorded using a commercial smartwatch (Ticwatch™) while the participants performed their everyday activities. In addition, FreeFIC contains the start and end moments of each meal session, as reported by the participants.
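Because meals occupy only a small fraction of a free-living session, the self-reported start/end moments are useful for quantifying class imbalance before training a detector. The helper below is a minimal sketch under an assumed (start_sec, end_sec) report format.

```python
def meal_coverage(session_len_sec, meals):
    """Fraction of an in-the-wild session that the participant reported
    as eating, given (start_sec, end_sec) self-reports (assumed format)."""
    eaten = sum(end - start for start, end in meals)
    return eaten / session_len_sec

# Hypothetical 2-hour session containing one 20-minute reported meal
print(round(meal_coverage(7200, [(1800, 3000)]), 4))  # 0.1667
```

Ratios like this motivate treating in-the-wild meal detection as a rare-event problem rather than a balanced classification task.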

The SPLENDID chewing detection challenge dataset was created in the context of the EU-funded SPLENDID project. It contains approximately 60 hours of recordings from a prototype chewing detection system. The sensor signals include photoplethysmography (PPG) and processed audio from the ear-worn chewing sensor, as well as signals from a belt-mounted 3D accelerometer. The recording sessions involved 14 participants and were conducted at Wageningen University, The Netherlands, during the summer of 2015. The purpose of the dataset is to help develop effective algorithms for chewing detection based on PPG, audio, and accelerometer signals.