From A-Pose to AR-Pose: Animating Characters in Mobile AR
Abstract
We present AR-Pose, a mobile AR app for generating keyframe-based animations of rigged humanoid characters. The smartphone's positional and rotational degrees of freedom are used for two purposes: (i) as a 3D cursor to interact with inverse kinematics (IK) controllers placed on or near the character's joints; and (ii) as a virtual camera that enables users to freely move around the character. Through the touch screen, users can activate or deactivate actions such as selecting an IK controller or pressing animation control buttons placed in a hovering 3D panel. By successively repositioning the IK controllers and saving each resulting pose, users build the keyframes from which a 3D animation is generated.
Video
Takeaways
A smartphone's 6DoF makes a surprisingly capable 3D cursor
By mounting a virtual pointer at a fixed point in camera coordinates, AR-Pose converts a commodity smartphone into a spatial input device. Users could select, reposition, and release IK handles simply by moving the phone through space and tapping a thumb button — no specialist controller required.
Off-the-shelf mobile hardware already embeds the sensing needed for expressive 3D interaction; the challenge is mapping those degrees of freedom to meaningful actions without overwhelming the user.
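The core mapping is simple to sketch: the cursor lives at a fixed offset in the camera's coordinate frame, so the phone's tracked 6DoF pose directly carries it through world space. A minimal illustration of that transform (the quaternion convention and the 0.5 m forward offset are assumptions for illustration, not AR-Pose's actual parameters):

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    # Standard quaternion-vector rotation: v' = v + 2u x (u x v + w v)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def cursor_world_position(cam_pos, cam_rot,
                          offset=np.array([0.0, 0.0, 0.5])):
    """Place the 3D cursor at a fixed offset in camera coordinates.

    cam_pos: tracked camera position in world space, shape (3,)
    cam_rot: tracked camera orientation, unit quaternion (w, x, y, z)
    offset:  cursor offset in the camera frame (here: 0.5 m along
             the camera's local z axis, a hypothetical value)
    """
    return cam_pos + quat_rotate(cam_rot, offset)
```

Because the offset is constant in camera coordinates, every translation and rotation the tracker reports moves the cursor one-to-one, which is what turns the phone itself into the spatial input device.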
Getting close enough to interact means losing the full picture
To activate a joint handle the user had to physically approach it, which narrowed the field of view and cut off the overall character silhouette. Fine-grained control over one limb came at the cost of losing gestalt awareness of the pose — a fundamental tension in handheld AR animation tools.
Future systems should explore variable pointer reach (decoupling the cursor distance from the phone distance) to let users zoom in on a joint without committing their body to that viewing angle.
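One way such variable reach could work is to keep the cursor on the camera's forward ray but let a touch control set its distance, so viewpoint and cursor depth are no longer locked together. A minimal sketch, assuming a hypothetical slider value in [0, 1] and made-up reach limits:

```python
import numpy as np

def pointer_reach(slider, min_reach=0.2, max_reach=2.0):
    """Map a slider value in [0, 1] to a cursor distance in metres.

    Exponential interpolation gives fine control at short range and
    coarse control at long range (both limits are assumed values).
    """
    return min_reach * (max_reach / min_reach) ** slider

def cursor_position(cam_pos, cam_forward, slider):
    """Project the cursor along the camera's forward axis at a
    user-controlled distance, independent of the phone's position."""
    return cam_pos + pointer_reach(slider) * cam_forward
```

With this decoupling, a user could stand back far enough to see the whole silhouette and still push the cursor out to a distant joint, addressing the field-of-view trade-off described above.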
Character size is an underexplored design variable in AR animation
AR-Pose worked across scales from tabletop figurine to near life-size, but each scale introduced different problems: miniature characters demanded precise micro-movements at which tracking drift became noticeable, while large characters required wide sweeping motions that risk "gorilla arm" fatigue.
A systematic study comparing scale conditions (e.g., 1:2, 1:1, 2:1) would reveal whether there is a sweet spot that minimises both tracking error and physical fatigue.
Citation
ACM Reference Format
Andreia Valente, Augusto Esteves, and Daniel Lopes. 2021. From A-Pose to AR-Pose: Animating Characters in Mobile AR. In ACM SIGGRAPH 2021 Appy Hour (SIGGRAPH '21). Association for Computing Machinery, New York, NY, USA, Article 4, 1–2. https://doi.org/10.1145/3450415.3464401
BibTeX
@inproceedings{valente2021from,
title = {From {A}-Pose to {AR}-Pose: Animating Characters in Mobile {AR}},
author = {Valente, Andreia and Esteves, Augusto and Lopes, Daniel},
booktitle = {ACM SIGGRAPH 2021 Appy Hour},
articleno = {4},
numpages = {2},
year = {2021},
month = {aug},
day = {9--13},
doi = {10.1145/3450415.3464401},
url = {https://doi.org/10.1145/3450415.3464401},
isbn = {9781450383585},
publisher = {ACM},
address = {New York, NY, USA},
series = {SIGGRAPH '21},
location = {Virtual Event, USA},
keywords = {smartphone, mobile augmented reality, character animation}
}