<meta property="og:image" content="Path to my teaser.png"/><!-- Facebook automatically scrapes this. Go to https://developers.facebook.com/tools/debug/ if you update and want to force Facebook to rescrape. -->
<meta property="og:title" content="Creative and Descriptive Paper Title."/>
We present a dataset of Functional Capacity Evaluation (FCE) movements to train and evaluate machine learning models for FCE tasks. The dataset encompasses 728 RGB videos of 11 subjects performing a variety of movements frequently repeated in FCE tests, such as external and internal shoulder rotation, heel-toe gait, head extension, and low-back pain movements. Skeletal data for these sequences were extracted using two alternative approaches. The first approach employs an RGB-D (Kinect v2) sensor that provides the 3D coordinates and orientations of each joint, along with the corresponding projected 2D keypoints. The second approach utilizes a learning-based model to estimate an accurate 3D human pose and the body bounding box from a single RGB image. The sequences were annotated by medical experts in the field of occupational health, who provided a continuous quality score. The dataset additionally includes relevant information about the subjects, such as age, gender, height, and weight.
This template was originally made by <a href="http://web.mit.edu/phillipi/">Phillip Isola</a> and <a href="http://richzhang.github.io/">Richard Zhang</a> for a <a href="http://richzhang.github.io/colorization/">colorful</a> ECCV project; the code can be found <a href="https://github.com/richzhang/webpage-template">here</a>.