---
layout: publication
year: 2025
month: 05
selected: false
coming-soon: true
hidden: false
external: false
# link: https://dl.acm.org/doi/10.1145/3613904.3642394
# pdf: https://dl.acm.org/doi/10.1145/3613904.3642394
title: "Sensing Noticeability in Ambient Information Environments"
authors:
- Yi Fei Cheng
- David Lindlbauer
# blog:
# doi: 10.1145/3613904.3642394
venue_location: Yokohama, Japan
venue_url: https://chi2025.acm.org/
venue_tags:
- ACM CHI
type:
- Conference
tags:
- Science
- Extended Reality
- Adaptive User Interfaces
venue: ACM CHI
#video-thumb: 7K3eouLCcSw
#video-30sec: 7K3eouLCcSw
#video-suppl: GAvys0HLqw0
#video-talk-5min: l9ycUrf50TE
#video-talk-15min: gmPoMoTaYAE
bibtex: "@inproceedings {Cheng25SensingNoticeability, \n
author = {Cheng, Yi Fei and Lindlbauer, David}, \n
title = {Sensing Noticeability in Ambient Information Environments}, \n
year = {2025}, \n
publisher = {Association for Computing Machinery}, \n
address = {New York, NY, USA}, \n
keywords = {Ambient displays, noticeability, computational interaction}, \n
location = {Yokohama, Japan}, \n
series = {CHI '25} \n
}"
---
Designing notifications in Augmented Reality (AR) that are noticeable yet unobtrusive is challenging because striking this balance depends heavily on the user’s context. Current AR systems, however, tend to be context-agnostic and require explicit feedback to determine whether a user has noticed a notification. This limitation prevents AR systems from delivering timely notifications that integrate with users’ activities. To address this challenge, we studied how sensor data can be used to infer whether users notice notifications while they work in an office setting. We collected 98 hours of data from 12 users, including their gaze, head position, computer interactions, and engagement levels. Our findings show that combining gaze and engagement data classifies noticeability most accurately (AUC = 0.81), and that accuracy remains high even without engagement data (AUC = 0.76). We additionally examine time-windowing methods and compare general and personalized models.
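
To make the classification setup concrete, below is a minimal, purely illustrative sketch of training and evaluating a noticeability classifier on windowed sensor features. It assumes scikit-learn and uses synthetic placeholder data; the feature names, the logistic-regression model, and all values are assumptions for illustration, not the pipeline or results reported in the paper.

```python
# Hypothetical sketch: classifying whether a notification was noticed
# from sensor features aggregated over a fixed time window per event.
# Features, model, and data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: one row per notification event.
n = 500
X = np.column_stack([
    rng.normal(size=n),  # mean gaze distance to notification (assumed feature)
    rng.normal(size=n),  # head-orientation change (assumed feature)
    rng.normal(size=n),  # keyboard/mouse activity rate (assumed feature)
    rng.normal(size=n),  # engagement level (assumed feature)
])
# Synthetic labels: noticed (1) vs. missed (0), loosely tied to gaze.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

AUC is used here, as in the abstract, because it summarizes classifier quality across decision thresholds; a personalized variant of this sketch would simply fit one model per user instead of pooling all events.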