The increasing cognitive demands of digital systems highlight the need for structured ways to assess attention-aware design. In collaboration with the Calm Technology Institute, we have developed a framework that examines human-computer attention relationships across six dimensions: attentional load, peripheral awareness, system reliability, light, sound, and material interaction.
This presentation covers the development of an 81-point assessment framework that provides specific criteria for evaluating how technological products and services affect human attention. The framework builds on existing usability evaluation approaches by considering both immediate cognitive demands and background processing requirements, and it helps analyze how technologies distribute attention demands between focused and peripheral awareness.
By treating attention as a limited resource, the framework proposes metrics for assessing how technical systems affect cognitive load. These criteria provide concrete checkpoints for evaluating design choices that either support or burden human attention.
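As a purely hypothetical illustration of how such checkpoint scoring might work, the sketch below assumes each of the six dimensions contributes a set of pass/fail criteria toward the 81-point total; the example criteria and the scoring model are assumptions, not the Institute's actual instrument.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch: the real 81-point instrument is not reproduced here;
# the criteria, pass/fail model, and per-dimension tallies are assumptions.
DIMENSIONS = [
    "attentional load", "peripheral awareness", "system reliability",
    "light", "sound", "material interaction",
]

@dataclass
class Criterion:
    dimension: str    # one of DIMENSIONS
    description: str  # e.g. "alerts can be moved to the periphery"
    met: bool         # did the product satisfy this checkpoint?

def score(criteria: list[Criterion]) -> dict[str, int]:
    """Tally satisfied checkpoints per dimension; the grand total is out of 81."""
    per_dim: dict[str, int] = defaultdict(int)
    for c in criteria:
        if c.met:
            per_dim[c.dimension] += 1
    per_dim["total"] = sum(per_dim.values())
    return dict(per_dim)

# Example audit: a wearable that signals via a peripheral LED rather than sound
audit = [
    Criterion("light", "status is conveyed by a dim peripheral LED", True),
    Criterion("sound", "no audible alert is required for routine events", True),
    Criterion("attentional load", "setup completes without a manual", False),
]
print(score(audit))  # {'light': 1, 'sound': 1, 'total': 2}
```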
Early applications of this assessment system have helped identify design elements that influence cognitive demands. These initial findings suggest structured evaluation of attention impacts could support development of more attention-respectful technical systems.
Underneath each of the key principles of successful commons approaches lies the efficacy and efficiency of communication. As the complexity of the boundaries, resources, and rules increases, and as the diversity of the perspectives, ontologies, and goals of participating organizations grows, communication becomes strained. We tend to reconcile the challenges of noisy inputs through standardization, but standardization trades away expression and flexibility, and one organization's noise may be another organization's signal. In practice, these tradeoffs are often revealed when organizations work together to collaboratively produce formal information products, such as multi-party grant applications. It is not uncommon for even well-resourced organizations to fail to resolve their differences in time to meet proposal deadlines, or to simply externalize the process entirely, often at great cost.
This presentation examines insights from a five-year exploration of the development and implementation of protocols for the collaborative production of complex information products within an interdisciplinary ecosystem of small firms, community and university labs, nonprofits and NGOs, communities of practice, and government agencies. The exploration was originally driven by two questions: "What happens when the diversity of perspectives is a community's strength and the standardization of communications might undermine it?" and "How can we provide knowledge infrastructure that maintains genuine variety and differences in approach while still reliably allowing for eventual convergence?" The results show that, despite the complexity of natural language, best practices from Social Systems Engineering, Memetic Analysis, and Model-Based Systems Engineering allow for protocolization and reliability in communication. The resulting frameworks and insights have broad implications for collaborative knowledge production, safe use of natural language AI, and information exchange between firms with varied requirements and contexts of operation.
The Institutional Grammar (IG) provides a data model for reasoning about how an organization “moves” through time via the underlying behavioral strategies, norms and rules – its institutions. If the position of an organization is described by its team members, activities, resources, and the relationships between them, then a data set of institutional statements can serve as an estimate of that organization’s velocity. It’s possible to extrapolate an informed estimate of an organization’s future position from knowledge about its current position and its velocity (i.e. its rules, norms and strategies for allocating resources), along with beliefs or projections about its future (external) circumstances. The choices that affect an organization’s velocity thus constitute the form of “steering” commonly called “governance”: the application of forces that directly or indirectly induce changes in organizational space.
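To make the "velocity" analogy concrete, the following minimal sketch represents institutional statements using the classic ADICO decomposition (Attributes, Deontic, aIm, Conditions, Or else) from the Institutional Grammar literature; the example statements and their classification are illustrative, not the initiative's actual data or tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstitutionalStatement:
    """One ADICO-style statement: who (Attributes) may/must (Deontic)
    do what (Aim) under which Conditions, with what sanction (Or else)."""
    attributes: str         # the actor the statement applies to
    deontic: Optional[str]  # "must", "may", "must not"; None => strategy
    aim: str                # the action or outcome
    conditions: str         # when/where the statement applies
    or_else: Optional[str]  # sanction; present only for rules

    @property
    def kind(self) -> str:
        # Per Crawford & Ostrom: strategies lack a deontic, norms add one,
        # and rules additionally carry an "or else" sanction.
        if self.deontic is None:
            return "strategy"
        return "rule" if self.or_else else "norm"

# A toy "velocity" dataset: how a firm tends to allocate effort and resources
statements = [
    InstitutionalStatement("engineers", "must", "log hours to a project code",
                           "at the end of each week", "payroll is delayed"),
    InstitutionalStatement("project leads", "may", "reassign unallocated budget",
                           "when a milestone closes", None),
    InstitutionalStatement("anyone", None, "posts meeting notes to the wiki",
                           "after client calls", None),
]

for s in statements:
    print(f"{s.kind:8} | {s.attributes}: {s.aim} ({s.conditions})")
```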
This presentation explores recent developments in the formalization of the Institutional Grammar for computational methods, and shares lessons from the design and deployment of an ongoing research initiative applying these developments in a real-world context: the experimental implementation of organizational state estimation at an engineering firm. The initiative involves wiring up a feedback system that continuously senses data about the organization through the firm's internal data infrastructure, parses that data through the IG lens, and makes it visible to members of the firm, who are then asked to reflect on its accuracy and usefulness. If the view proves useful, research will shift to questions of dynamics: What is a sensible time scale for such self-observation and modeling? Can we move from a comparative-static view to one that captures processes of change and exposes subtle trends that are not overtly observable? And can a continuous view of an organization's "velocity" encourage or enable better "steering" and "navigation" of a real-world organization, that is, better governance?
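A minimal sketch of the sense, parse, and display loop described above might look like the following; the data source, the IG parser, and the dashboard hook are all placeholder stubs, since the firm's actual infrastructure and parsing pipeline are not specified here.

```python
import time
from typing import Iterable

def sense() -> Iterable[dict]:
    """Placeholder for reading the firm's internal data sources
    (tickets, calendars, repos); here we return canned records."""
    return [
        {"actor": "engineers", "action": "log hours", "when": "weekly"},
        {"actor": "leads", "action": "review budget", "when": "per milestone"},
    ]

def parse_ig(record: dict) -> dict:
    """Stand-in for an IG parser: tag each observed regularity as a
    candidate institutional statement (a strategy until a deontic is known)."""
    return {"attributes": record["actor"], "aim": record["action"],
            "conditions": record["when"], "kind": "strategy"}

def publish(statements: list[dict]) -> None:
    """Stand-in for the dashboard that members reflect on."""
    for s in statements:
        print(f"[{s['kind']}] {s['attributes']}: {s['aim']} ({s['conditions']})")

def feedback_loop(period_s: float, cycles: int) -> None:
    # The sensible observation time scale is itself one of the research
    # questions above; here it is simply a parameter.
    for _ in range(cycles):
        publish([parse_ig(r) for r in sense()])
        time.sleep(period_s)

feedback_loop(period_s=0.0, cycles=1)
```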