---
layout: publication
year: 2025
month: 05
selected: false
coming-soon: true
hidden: false
external: false
# link: https://dl.acm.org/doi/10.1145/3613904.3642394
# pdf: https://doi.org/10.1145/3706598.3714328
title: "MiniMates: Miniature Avatars for AR Remote Meetings within Limited Physical Spaces"
authors:
- Akihiro Kiuchi
- Jonathan Wieland
- Takeo Igarashi
- David Lindlbauer
# blog:
# doi: 10.1145/3706598.3714328
venue_location: Yokohama, Japan
venue_url: https://chi2025.acm.org/
venue_tags:
- ACM CHI
type:
- Conference
tags:
- Science
- Augmented Reality
- Communication
- Telepresence
venue: ACM CHI
#video-thumb: 7K3eouLCcSw
#video-30sec: 7K3eouLCcSw
#video-suppl: GAvys0HLqw0
#video-talk-5min: l9ycUrf50TE
#video-talk-15min: gmPoMoTaYAE
bibtex: "@inproceedings{Kiuchi25MiniMates, \n
author = {Kiuchi, Akihiro and Wieland, Jonathan and Igarashi, Takeo and Lindlbauer, David}, \n
title = {MiniMates: Miniature Avatars for AR Remote Meetings within Limited Physical Spaces}, \n
year = {2025}, \n
publisher = {Association for Computing Machinery}, \n
address = {New York, NY, USA}, \n
keywords = {Augmented Reality, Communication, Telepresence}, \n
location = {Yokohama, Japan}, \n
series = {CHI '25} \n
}"
---
Designing notifications in Augmented Reality (AR) that are noticeable yet unobtrusive is challenging since achieving this balance heavily depends on the user’s context. However, current AR systems tend to be context-agnostic and require explicit feedback to determine whether a user has noticed a notification. This limitation restricts AR systems from providing timely notifications that are integrated with users’ activities. To address this challenge, we studied how sensors can infer users’ detection of notifications while they work in an office setting. We collected 98 hours of data from 12 users, including their gaze, head position, computer interactions, and engagement levels. Our findings showed that combining gaze and engagement data most accurately classified noticeability (AUC = 0.81). Even without engagement data, the accuracy was still high (AUC = 0.76). Our study also examines time windowing methods and compares general and personalized models.