
Validation and Pilots

Evidence from real-world studies assessing usability, deployment acceptability, and disciplined signal output during everyday interaction.

Exploratory studies. Quiet by default. Reliable when active.

2 studies completed

In-home natural-use and public-setting acceptability evaluations.

3 public sites

Multi-site library evaluation in real shared environments.

100+ participants engaged

Hands-on sessions with structured interviews and surveys.


Methodology

How we evaluate surface-based contact-activated physiological sensing in real environments.

We run exploratory field evaluations focused on usability, deployment acceptability, and disciplined output during everyday interaction. Our aim is to understand what holds up in real workflows, what requires iteration, and what integration partners should expect at pilot stage.

What we evaluate

Usability and learnability

Can people use the system naturally with minimal instruction, and does it fit into normal routines?

Deployment acceptability

Would users accept the experience in the intended environment, including shared or public settings?

Disciplined output during interaction

When contact occurs, can the system deliver a stable, quality-screened physiological signal stream suitable for downstream use, without requiring the user to perform explicit measurement behavior?
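The gating behavior described above can be sketched in code. This is a minimal illustrative sketch, not the production implementation: the field names (`contact`, `snr_db`) and the thresholds (`snr_floor_db`, `min_run`) are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    contact: bool   # sensor indicates skin contact is present
    snr_db: float   # per-sample signal-to-noise estimate, in dB

def screened_stream(samples, snr_floor_db=10.0, min_run=32):
    """Yield only runs of samples that pass the quality screen:
    contact present, SNR above a floor, and the run long enough to be
    usable downstream. Everything else is silently dropped, so the
    system stays quiet by default and emits only when active."""
    run = []
    for s in samples:
        if s.contact and s.snr_db >= snr_floor_db:
            run.append(s)
        else:
            if len(run) >= min_run:
                yield run
            run = []
    if len(run) >= min_run:  # flush a trailing qualifying run
        yield run
```

The key design point is that the user never triggers anything: qualifying contact runs are emitted, and sub-threshold or too-short runs simply produce no output.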

Format A: In-home natural-use evaluation

Designed to observe real interaction patterns over time.

Protocol summary

  • Participants replace their existing device with a tcs-enabled device and continue normal daily use.

  • No prescribed measurement steps are provided, to avoid biasing behavior.

  • After the evaluation period, participants complete structured feedback surveys, with an optional follow-up interview to contextualize responses.

What this format is best for

  • Natural dwell time and motion-light windows

  • Day-to-day usability

  • Operational reliability signals that only appear over time

Format B: Public-setting acceptability evaluation

Designed to test perception, trust, and usability in shared environments.

Protocol summary

  • Participants are recruited on site, interact hands-on with the system, and complete structured surveys.

  • Sessions are run as short interviews with optional recording and researcher notes.

  • Responses are collected digitally for analysis and aggregation.

What this format is best for

  • Privacy comfort and trust triggers

  • Acceptability and perceived value

  • UX clarity and language comprehension in diverse populations

Measures and instruments

System Usability Scale (SUS)

We use SUS as a standardized usability metric to benchmark experience and track improvement across iterations.
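SUS scoring itself is standardized: ten items on a 1-5 scale, with positively worded odd items contributing (response − 1) and negatively worded even items contributing (5 − response), summed and scaled to 0-100. A short sketch of that computation:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses, each in 1-5. Odd-numbered items are positively
    worded (response - 1); even-numbered items are negatively
    worded (5 - response)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses in the range 1-5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# All-neutral responses (3 on every item) give the midpoint score of 50.
print(sus_score([3] * 10))  # 50.0
```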

USE questionnaire and structured feedback

Where appropriate, we use additional structured questions to identify specific friction points and guide follow-up interviews.

Qualitative insight capture

We collect written notes and, where consent is given, recordings to understand why users respond the way they do, not just what they rate.

Interaction patterns

In natural-use settings, we examine how contact occurs during routine use and whether contact windows are sufficient to support passive capture without explicit user effort.
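One way to operationalize "sufficient contact windows" is to filter observed contact episodes by a minimum duration. The sketch below assumes timestamped (start, end) episodes; the 8-second minimum is an illustrative placeholder, not a measured requirement of the system.

```python
def usable_windows(contact_events, min_duration_s=8.0):
    """Given (start, end) timestamps in seconds for observed contact
    episodes, return those long enough to support passive capture.
    Shorter incidental touches are excluded from the analysis."""
    return [(a, b) for a, b in contact_events if (b - a) >= min_duration_s]

# A 3-second touch is excluded; the two longer dwells qualify.
windows = usable_windows([(0.0, 3.0), (10.0, 25.0), (30.0, 45.5)])
```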

Data handling and reporting posture

Aggregate reporting

We report results in aggregate and focus on design and deployment insights.

Purpose-bound capture

Data is collected to evaluate system behavior and user experience, and to inform iteration and pilot readiness.

Case snapshots

Evidence from real environments