iLab Invited Talk Series

2022-04-08

Into the theatreverse we go! Bits and pieces and other metaverse paraphernalia at the crossroads of live performance and XR technologies

Abstract

As the metaverse hype grows, so does the interest in creating and developing artistic activities within it. Even if one does not clearly know what the metaverse is or will become, it is quite clear that in recent years, and with particular intensity since the Covid-19 pandemic hit, we are witnessing the rise of emergent formats and practices of live performance combined with XR technologies and platforms. From medium- to large-scale live acting performances in XR, such as those produced by the Royal Shakespeare Company, Ferryman Collective, Double-Eye Studios, or La Cuarta Pared VR, to seemingly spontaneous performative moments such as poetry slams on VRChat or Rec Room, live performance is an essential part of a post-pandemic future in which digital performance will play a key role in giving life to the metaverse. In this keynote presentation, XR researcher and performer António Baía Reis will provide an overview of the main concepts and ideas related to live performance in XR, as well as the main potentialities and challenges surrounding the future of the performing arts in the metaverse experience economy.

Bio

António Baía Reis is a researcher, professor, and digital artist. His work is interdisciplinary, combining areas such as emergent media, communication, and performing arts, with a strong focus on immersive media (VR, AR, MR), film, collaborative practices, practice-based research, creativity studies, innovation in education, and social impact.

2022-03-18

What do we know about humanitarian visualizations?

Abstract

Humanitarian issues such as extreme poverty and refugee crises have gained worldwide attention over the past few decades. Organizations often run campaigns built on emotional strategies, such as showing pictures of starving children or dead animals, to persuade people to act. Another common tactic, used by news media, is reporting these issues through data visualizations. The psychology literature has extensively investigated the former approach, but there is still a lack of research on the ability of data visualizations to evoke emotions such as compassion or prosocial behaviors such as donating. This presentation shows recent contributions in that direction and discusses future perspectives for research.

Bio

Luiz Morais holds a Ph.D. in Computer Science from the Universidade Federal de Campina Grande, Brazil, and did an internship at Sorbonne Université, France. During his Ph.D. studies, he investigated how visualization design can affect people's compassion towards others. He is currently a postdoctoral fellow at Inria Bordeaux, France, where he investigates the use of situated visualizations to support housework management. Morais also collaborates with other institutions around the globe, such as Inria Paris-Saclay, the University of Toronto, and Monash University, working on projects about the use of visualization for humanitarian issues and for sustainability.

2022-03-04

From delegation to education: using technology to increase access to surgery

Abstract

Surgery is an indivisible and indispensable part of healthcare, yet 70% of the world's population lacks access to safe and affordable surgical care. Having an expert surgeon either travel to the patient or operate remotely through surgical robots is not scalable given the staggering shortfall of more than 140 million surgeries annually. I propose to move away from surgical _delegation_ and towards surgical _education_, with the goal of studying and building interactive systems that innovate how surgery is learned and, ultimately, of reversing the alarming trend of fewer surgeons per population. I will talk about two of my research directions. First, surgical telementoring, in which an expert surgeon remotely assists and teaches another surgeon in real time while they operate. My aim is to move telementoring beyond the mere transmission of audio and video by conceptualizing novel mechanisms that shatter the boundaries of face-to-face settings. Second, interactive videos as learning material both produced and used by surgeons. My aim is to move the creation and consumption of videos away from classic video editing and playback tools by conceptualizing novel mechanisms that rely on semantic information embedded in various dimensions. Together, these two directions target the alarmingly decreasing rate of surgeons per population and the difficult access to surgical care that goes with it, supporting expert-to-expert remote training and expert-to-novice asynchronous training.

Bio

Ignacio Avellino is a permanent researcher (Chargé de Recherche) for the CNRS at Sorbonne Université, France. He earned his PhD from Université Paris-Saclay, after completing his MSc at RWTH Aachen. Ignacio specializes in Human–Computer Interaction (HCI) and Computer-Supported Cooperative Work (CSCW) research in the health domain, focusing on collaborative systems for telementoring as well as interactive systems for creating and consuming video as learning material. He served as Technical Program Chair Assistant for ACM CHI 2020, and is a regular Associate Chair at both CHI and CSCW.

2022-02-19

Repurposing Commodity RFID Tags for HCI

Abstract

In this talk, I will share our recent developments in repurposing commodity RFID tags to enable batteryless and wireless wearable and tangible user interfaces for human-computer interaction. The presentation will cover several research publications, including GaussRFID (CHI '16), RFIBricks (CHI '18, CHI '21), RFIMatch (UIST '18), RFTouchPad (UIST '19), and NFCSense (CHI '21). I will also discuss the challenges and opportunities for future work in this direction.

Bio

Rong-Hao Liang is an assistant professor in the Department of Industrial Design and the Department of Electrical Engineering at Eindhoven University of Technology, the Netherlands. He received his Ph.D. in Computer Science (2014) and his M.S. in Electrical Engineering (2010) from National Taiwan University. He also worked as an assistant research fellow at the Intel-NTU Research Center (2014-2016) and co-founded a company, GaussToys Inc., in 2015. His research focuses on sensing systems and user-interface technology for HCI and ubiquitous computing, and he has received Best Paper and Honorable Mention awards at the ACM CHI and DIS conferences. As of 2021, he has more than 50 technical research publications and holds more than 10 granted or published user-interface hardware patents. (Personal Website: https://ronghaoliang.page)

2022-02-09

Supporting makers while creating physical computing prototypes

Abstract

This talk is about tools for supporting makers while they create physical computing prototypes - specifically when they are engaged in the creation of electronic circuits. I will present several systems, based on smart breadboards and voice-based conversational agents, that can support makers in their creative explorations of physical computing prototypes and assist them while assembling circuits. I will also show how this research can bridge the gap with education in the classroom and highlight possible future directions for the field.

Bio

Andrea Bianchi is an associate professor in the Department of Industrial Design at KAIST, Korea, where he directs the MAKinteract lab. His research in the field of Human-Computer Interaction (HCI) focuses on building tools for prototyping and devices for body augmentation. Before joining KAIST, he worked at Sungkyunkwan University (Korea) as a faculty member in the Department of Computer Science, and as a video game programmer for a New York startup. Andrea received a Ph.D. from KAIST (Korea) in 2012, an MS in Computer Science from NYU (USA), and a Laurea (BSc+MS) in business administration from Bocconi University (Italy).

2021-09-13

Integrating Interactive Devices with the User’s Body

Abstract

When we look back to the early days of computing, user and device were distant, often located in separate rooms. Then, in the '70s, personal computers 'moved in' with users. In the '90s, mobile devices moved computing into users' pockets. More recently, wearable devices brought computing into constant physical contact with the user's skin. These transitions proved useful: moving closer to users allowed interactive devices to sense more of their user and to act more personally. The main question that drives my research is: what is the next interface paradigm that supersedes wearable devices? The primary way researchers have investigated this is by asking where future interactive devices will be located with respect to the user's body. Many posit that the next generation of interfaces will be implanted inside the user's body. However, I argue that their location with respect to the user's body is not the primary factor; in fact, implanted devices already exist in the form of pacemakers, insulin pumps, and so on. Instead, I argue that the key factor is how devices will integrate with the user's biological senses and actuators. This body-device integration has allowed us to engineer interactive devices that intentionally borrow parts of the body for input and output, rather than adding more technology to the body. For example, one such type of body-integrated device, which I have advanced recently, is the interactive system based on electrical muscle stimulation. These devices move their user's muscles using computer-controlled electrical impulses, achieving the functionality of robotic exoskeletons without the bulky motors. The key insight is that engineering devices that intentionally borrow parts of the user's biology puts forward a new generation of miniaturized devices, allowing us to circumvent traditional physical constraints. For instance, our devices based on electrical muscle stimulation demonstrate how body-device integration circumvents the constraints imposed by the ratio of electrical power to motor size (i.e., the stronger or larger a motor is, the more current is needed to actuate it). Taking this further, we demonstrate how our body-device integration approach also allowed us to miniaturize thermal feedback (hot/cold sensations) without the need for power-hungry devices like Peltier elements, air conditioners, or heaters. We believe that these bodily-integrated devices are the natural succession to wearable interfaces and allow us to investigate how interfaces might connect to our bodies in a more direct and personal way.

Bio

Pedro Lopes is an Assistant Professor in Computer Science at the University of Chicago. Pedro focuses on integrating computer interfaces with the human body, exploring the interface paradigm that supersedes wearable computing. Some of these new integrated devices include muscle-stimulation wearables that allow users to manipulate tools they have never seen before or that accelerate their reaction time, and a device that leverages the sense of smell to create an illusion of temperature. Pedro's work has received a number of academic awards, such as four ACM CHI/UIST Best Papers. It has also captured the interest of the media, such as The New York Times and Wired, and has been exhibited at Ars Electronica and the World Economic Forum. (More: https://lab.plopes.org)

2021-08-30

Sensing Physical Input Devices

Abstract

Physical input devices are the ultimate bridge between humans and computers, as they translate human actions into digital commands. My work pushes for a world where such devices fit a specific user’s needs for a particular time, place, and task, culminating in deeply custom interfaces designed, fabricated, or simply picked up to solve a problem. I will discuss my thesis work, which embeds cutting-edge sensing techniques into 3D design tools, allowing fast, cheap, and flexible prototyping for physical devices. I will also describe some in-progress work which measures human anatomy to richly sense user interactions with existing objects, removing the need for dedicated input devices. I'll finish with a brief sketch of what I want to work on next here in Copenhagen: examining how sensing the body’s dynamic capabilities can enhance input device design.

Bio

Valkyrie Savage is a newly-minted Assistant Professor at the University of Copenhagen in the Human-Centred Computing Section of the Department of Computer Science. She received her PhD from UC Berkeley in 2016, where she worked with Björn Hartmann; her thesis there was entitled 'Fabbed to Sense: Integrated Design of Geometry and Sensing for Interactive Objects.' Valkyrie has lived many lives before and after that, including founding and working for startups, interning at Google and CERN, presenting work at SXSW, organizing the March for Science Toronto, and becoming an internationally renowned player in the sport of jugger. She is curious about all kinds of things.

2021-08-23

Embodied Data Interaction

Abstract

Visualising data is fundamental to understanding complex phenomena and patterns, predicting future trends, and eventually making informed decisions. The data visualisation process requires human input, and therefore significant amounts of interaction are needed to support effective exploration of data (e.g., selecting data, filtering, specifying views, etc.). Interaction with data and their visual representation has largely been designed and developed for 2D screen/mouse/keyboard setups, which creates a barrier between the data and the people who seek to understand them. Modern VR/AR technology allows for more natural, rich 3D interaction and presents unique opportunities to redesign the data interaction pipeline so that data can be visually explored in immersive environments, enhancing data understanding. In this talk I will present my work on embodied data interaction in mixed and virtual reality. I will present the emerging design space of spatial interaction with complex data, along with some exemplar work that aims at reducing the gap between people and their data.

Bio

Maxime Cordeil is a Lecturer in Human-Computer Interaction at Monash University and a member of the Data Visualisation and Immersive Analytics Lab. His research focuses on Immersive Analytics, Human-Computer Interaction, and Data Science. Prior to this, he was a Postdoctoral Research Fellow at Monash University and a Software Engineer in the French Civil Aviation R&D department, and he received his PhD from the Higher French Institute of Aeronautics and Space (ISAE). He is actively involved in the IEEE VIS and VR research communities.

2021-08-16

Designing Not Knowing

Abstract

As a design researcher and educator working in human-computer interaction, I often find myself in the business of 'empowering' students by teaching them design. As a professor, I write grant proposals that use the magic of design to bring forth preferable futures. Yet, within the present socio-environ-political context, I find myself increasingly conflicted by these claims and asking myself: what, really, can design do? I will not be able to answer any of these questions during this talk, because I don't know, but I will argue that the position of not-knowing, humility, and non-expertise is useful for critically reflecting on the relevance of our practices. I will present ways that I, my collaborators, and the students with whom I work have been using weaving (sometimes with circuits, sometimes without) as a practice through which to probe, question, and understand what counts as design and the kinds of narratives we must take on in order to be 'designers'. I aim for this talk to inspire reflection and offer a few tactics for unknowing in order to think otherwise.

Bio

Laura Devendorf is an assistant professor in Information Science and the ATLAS Institute at the University of Colorado Boulder. Her research questions the role of design and making in the wake of increasingly pressing global challenges. She directs the Unstable Design Lab, where she works closely with students across engineering, information science, and art to speculate on alternative futures for technology. The lab currently focuses on weaving smart textiles and on how themes of slowness, presence, and material negotiation can be used as both practice and metaphor to formulate these visions. She earned bachelor's degrees in studio art and computer science from the University of California Santa Barbara before earning her Ph.D. at the UC Berkeley School of Information. Her research has been featured on National Public Radio and has received multiple best paper awards at top conferences in the field of human-computer interaction.

2021-08-09

The Power of Representation in Human-Computer Interaction

Abstract

From the dawn of computing, we have been striving to leverage computation to augment our productivity and creativity. While interactive technologies have become increasingly powerful, employing computation for creativity support and problem solving is still a rigid and laborious process. As a Human-Computer Interaction researcher, I seek to enable users to directly and flexibly express their intentions to computers, effectively integrating computation into their thinking processes. My research takes a fundamental approach: I focus on inventing new representations and primitives for graphical user interfaces that better match our mental processes. Coupled with the interaction techniques they afford, I will demonstrate how these lead to novel user-interface paradigms that allow us to directly express our intentions to computation and to quickly perform interactions that were previously tedious, or even impossible.

Bio

Haijun Xia is an Assistant Professor in Cognitive Science, Computer Science and Engineering, and the Design Lab at the University of California, San Diego. His research area is Human-Computer Interaction, in which he focuses on augmenting our productivity and creativity through the development of novel representations and interaction techniques. He received his Ph.D. from the DGP Lab at the Department of Computer Science, University of Toronto. For more of his recent work, please visit https://haijunxia.ucsd.edu/

2021-07-19

Password Sharing in Bangladesh

Abstract

Although forbidden by almost all password policies, password sharing is a common practice worldwide. However, the specific ways in which people manage shared passwords, in combination with culturally based expectations, are not well understood. In this work, we interviewed 30 Bangladeshi users about their password-sharing practices. We found that password sharing is a near-ubiquitous practice in Bangladesh, and that a variety of cultural factors affect people's expectations, practices, and experiences in sharing passwords.

Bio

Elizabeth Stobert is an assistant professor in the School of Computer Science at Carleton University, where she teaches in the computer science and human-computer interaction programs. Her research is in usable security, examining the human factors in security systems. She has worked extensively on the usability of passwords and authentication systems, and has also done recent research in security education and healthcare security. Prior to joining Carleton in 2019, she held research roles at the National Research Council of Canada, Concordia University, and ETH Zurich.

2021-07-12

Beyond Shape: 3D Printing Kinetic Objects for Interactivity

Abstract

Emerging 3D printing technology has enabled the rapid creation of physical shapes. However, 3D-printed objects are typically static, with limited or no moving parts, and creating 3D-printable objects with kinetic behaviors such as deformation and motion is inherently challenging. To enrich the literature on making movable 3D-printed parts and to support a wider spectrum of applications, I introduce the concept of the "print driver": a class of parametric mechanisms that use uniquely designed mechanical elements and are printed in place to augment 3D-printed objects with the ability to deform, actuate, and sense. In this talk, I will present a series of my research projects to showcase how print drivers can lower the barrier to making 3D-printed kinetic objects and support augmented 3D-printable behaviors for interactivity. I will also share my personal thoughts on how to incorporate print drivers into objects, the human body, and space for good.

Bio

Liang He is a Ph.D. candidate in Computer Science & Engineering at the University of Washington, advised by Jon E. Froehlich. He works at the intersection of Human-Computer Interaction (HCI) and digital fabrication. He takes a mechanical perspective to create novel design techniques that exploit parametric mechanical properties, and develops computational design tools for the design, control, and fabrication of 3D-printable augmented behaviors. Prior to joining UW, Liang received his M.S. in Computational Design from Carnegie Mellon University, his M.S. in Computer Science from the University of Chinese Academy of Sciences, and his B.S. in Software Engineering from Beihang University. He has also worked at HP Labs, in the VIBE group at Microsoft Research (Redmond), and at the Keio-NUS CUTE Center. Liang publishes at top HCI venues such as CHI, UIST, TEI, and ASSETS, and has received two best paper awards and one best paper nomination.

2021-07-05

Portable Laser Cutting

Abstract

A portable format for laser cutting would enable millions of users to benefit from laser-cut models, as opposed to the thousands of tech enthusiasts who engage with laser cutting today. What holds wider adoption back is the limited ability to modify and fabricate existing models. It may seem like a portable format already exists, as laser-cut models are already widely shared in the form of 2D cutting plans. However, such files are susceptible to variations in cutter properties (aka kerf) and do not allow modifying the model in any meaningful way. I consider this format machine-specific. In computing, this problem was solved in the '50s by developing compilers, which allowed developers to abstract away from the hardware and, as a result, write code that remains relevant to this day. The resulting code is portable, i.e., it can be transferred from one machine to another. This transition revolutionized not only computing but all fields that use digital formats, like desktop publishing, digital video, and digital audio. I believe that by transitioning towards a portable format for laser cutting we can make a similar transition, from thousands of users and one-off models towards millions of users and advanced models developed by multiple creators. My first take on the challenge is to see how far we get by building on the de facto standard, i.e., 2D cutting plans. I wrote software tools that modify 2D cutting plans, replacing non-portable elements with portable counterparts. This makes the models portable, but it is still hard to modify them. I thus take a more radical approach, which is to move to a 3D exchange format (kyub). This guarantees portability by generating a new machine-specific 2D cutting plan for the local machine on export, and the models inherently allow for parametric modifications. It raises, however, the question of compatibility: files already exist in 2D, so how do we get them into 3D? I demonstrate a software tool, assembler3, that reconstructs the 3D geometry of the model encoded in a 2D cutting plan, allows modifying it using a 3D editor, and re-encodes it into a 2D cutting plan. I demonstrate how this approach allows me to make a much wider range of modifications, including scaling, changing material thickness, and even remixing models.
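To make the compiler analogy concrete, here is a minimal, hypothetical sketch (not the actual kyub or assembler3 code) of the machine-specific export step the abstract describes: a portable parametric model, here a finger-jointed edge, is "compiled" for a particular cutter by compensating tab widths for that cutter's kerf. All names and values are illustrative assumptions.

```python
# Hypothetical sketch only: not the actual kyub or assembler3 implementation.
# A portable parametric model (a finger-jointed edge) is "compiled" for a
# specific cutter by compensating tab widths for that cutter's kerf.

from dataclasses import dataclass

@dataclass
class Machine:
    kerf_mm: float  # width of material removed by this cutter's laser beam

def finger_joint_widths(n_fingers: int, edge_mm: float) -> list[float]:
    """Portable model: nominal widths of equal fingers along an edge."""
    return [edge_mm / n_fingers] * n_fingers

def export_for(widths: list[float], machine: Machine) -> list[float]:
    """Machine-specific plan: widen each tab by the kerf so it press-fits.
    The beam removes kerf/2 on each side of a cut line, so a tab cut at
    its nominal width would come out kerf_mm too narrow."""
    return [w + machine.kerf_mm for w in widths]

# The same portable model, exported for two different cutters:
model = finger_joint_widths(n_fingers=5, edge_mm=100.0)
print(export_for(model, Machine(kerf_mm=0.15)))  # cutter A
print(export_for(model, Machine(kerf_mm=0.30)))  # cutter B
```

The point of the sketch is only the separation of concerns: the model stays machine-independent, and kerf handling happens once, at export time, per machine.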

Bio

Thijs Roumen is a PhD candidate in Human-Computer Interaction in the lab of Patrick Baudisch at the Hasso Plattner Institute in Potsdam, Germany. He received his MSc from the University of Southern Denmark, Sønderborg, in 2013 and his BSc from the Eindhoven University of Technology, Netherlands, in 2011. Between his master's and PhD, he worked at the National University of Singapore as a Research Assistant with Shengdong Zhao. His research interests are in personal fabrication, digital collaboration, and enabling increased complexity for laser cutting. His papers are published as full papers at the top-tier ACM conferences CHI and UIST. He serves on several ACM program committees, including ACM UIST.

2021-06-28

Where did this @#$^@#$%# AV learn to drive?

Abstract

Drivers communicate and negotiate with other drivers, pedestrians, and road users implicitly and explicitly through the movement of their cars, as well as through honking, verbal communication, body language, and gaze. It is widely recognized that these interaction patterns vary culturally; the advent of autonomy will necessitate a more explicit understanding of the complex manner in which drivers interact. Mismatches in perception, understanding, and action between road users can easily cause accidents. We are exploring how drivers implicitly communicate and coordinate with others on the road, and assessing how these driving interactions differ across cultures. By staging situations that demand negotiation, such as ambiguous four-way stops, we can capture how participants communicate with other drivers or pedestrians to coordinate joint action, implicitly through the movements of their virtual car or their bodily movement, or explicitly through verbal or gestural exchange. By comparing how people from different cultures coordinate in comparable situations, we can better understand cultural differences in driving interaction.

Bio

Wendy Ju is an Associate Professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and in the Information Science field at Cornell University. Her work in the areas of human-robot interaction and automated-vehicle interfaces highlights the ways that interactive devices can be designed to be safer, more predictable, and more socially appropriate. Professor Ju has innovated numerous methods for early-stage prototyping of automated systems to understand how people will respond to them before the systems are built. She has a PhD in Mechanical Engineering from Stanford and a Master's in Media Arts and Sciences from MIT. Her monograph, The Design of Implicit Interactions, was published in 2015.

2021-06-07

Human-Centered Design for Connected Teams and Communities: A Multilingual Perspective

Abstract

It is increasingly common for people today to interact with others who do not speak the same native language as they do. Despite the wide adoption of English as the lingua franca, extensive research has shown that information exchange in multilingual settings frequently happens in 'a cocktail of languages'. In this talk, I will present a series of studies investigating 1) when people shift between different languages to satisfy their situational needs, and 2) how people manage the costs and benefits of their language choices through daily communication practices. I will conclude by describing several technical solutions (e.g., human-centered translation tools) my group is currently exploring for building connected teams and communities.

Bio

Ge Gao is an Assistant Professor in the College of Information Studies (iSchool) at the University of Maryland, with a joint appointment at the University of Maryland Institute for Advanced Computer Studies (UMIACS). She obtained her PhD in Communication at Cornell University. Ge's research interests cover the behavioural aspects of human-computer interaction (HCI). Her recent projects focus on understanding and designing for computer-based work communication, information seeking, and knowledge sharing across language boundaries. Findings of this research have been published at top-tier HCI venues such as ACM CHI, CSCW, and UbiComp.

2021-05-03

Advancing Personal Fabrication by Making Physical Objects as Reprogrammable as Digital Data

Abstract

Computing has revolutionized how we process and interact with data. Unfortunately, these capabilities are constrained to the digital realm and cannot yet be applied to physical matter. For instance, we can already quickly update the appearance of a digital photo by applying a filter or adding and removing elements; updating physical objects in the same way, however, is not yet possible. In this talk, I will show my research group's latest developments that bring us closer to a future in which physical objects are as reprogrammable as data is today. As a first example, I will show our research on a new reprogrammable material that can be applied to the surface of physical objects and allows them to change their appearance within a few minutes. This lets us update the color of clothing, shoes, and even entire rooms in the same way we can update a digital photo today. I will then show additional developments that extend this concept to further integrate computing capabilities into physical objects, present our research on printing functional objects in one go without the need for assembly, and demonstrate how we can create unified prototyping environments that support engineers and designers in fabricating new types of physical objects.

Bio

Stefanie Mueller is the X-Career Development Assistant Professor in the MIT EECS department joint with MIT Mechanical Engineering and Head of the HCI Engineering Group at MIT CSAIL. For her research, Stefanie has received an NSF CAREER Award, an Alfred P. Sloan Fellowship, a Microsoft Research Faculty Fellowship, and was also named a Forbes 30 under 30 in Science. In addition, Stefanie’s work has been awarded several Best Paper and Honorable Mention Awards at the ACM CHI and ACM UIST conferences, the premier venues in Human-Computer Interaction. Stefanie has also served as the Program Chair of the ACM UIST 2020 conference and was a Subcommittee Chair for ACM CHI 2019 and 2020. At MIT, Stefanie served as a Program Co-Chair for the MIT EECS Rising Star Workshop in 2018 and is currently serving as the Head of the Human Computer Interaction Communities of Research (HCI CoR) at MIT CSAIL.

2021-04-26

Beyond Visualization Wizardry: The Role of Interaction in Data Visualization

Abstract

Visualization is not just a way of creating pretty pictures and "intuitive dashboards". It is not a magic wand that you can apply to your dataset to automatically turn a data mess into "actionable insights for transformative results". Far from this wizardry, I argue that understanding data comes at the cost of interacting with it. I will go through several research projects - ranging from manual reordering of matrices to active reading of visualizations to interaction discoverability to composite physicalizations to direct manipulation of graphical encodings - in an attempt to convince you that we can, and should, find better ways for people to interact with data visualizations.

Bio

Charles Perin is an Assistant Professor of Computer Science at the University of Victoria, where he co-leads the Victoria Interactive eXperiences with Information research group, specializing in Human-Computer Interaction and Information Visualization. He and his students are particularly interested in designing and studying new interactions for visualizations; in understanding how people may make use of and interact with visualizations in their everyday lives; in designing visualization tools for authoring personal visualizations and for exploring and communicating open data; in sports visualization; and in visualization beyond the desktop. Before joining the University of Victoria in 2018, Charles was a Lecturer at City, University of London; before that, a post-doctoral researcher at the University of Calgary; before that, a PhD student at Université Paris-Sud/Inria; and long before that, a kid in Brittany.

2021-04-19

Journey through the Design Space of Cross-Device Interactions

Abstract

Designing interfaces or applications that move beyond the bounds of a single device screen enables new ways to engage with digital content. In this talk, I will guide you through the design space of cross-device interactions. In particular, I will give an overview of what the research field of cross-device interactions looks like and what kinds of techniques we can use for designing fluid cross-device interactions. I'll also discuss a few of the open issues in the field and suggest opportunities for where we can go next.

Bio

Nicolai Marquardt is an Associate Professor at University College London, where he is part of the Department of Computer Science (Faculty of Engineering) and the Faculty of Brain Sciences. At the UCL Interaction Centre, he works on projects in the research areas of cross-device interaction, sensor-based systems, prototyping toolkits, and design methods. He received his PhD in Computer Science from the University of Calgary, Canada. Nicolai is co-author of the Sketching User Experiences Workbook (Morgan Kaufmann 2011) and the Proxemic Interactions textbook (Morgan & Claypool 2015).

2021-04-12

The Immersive Canvas: Data Visualization and Interaction for Immersive Analytics

Abstract

This talk explores the role of interactive data visualization in immersive analytics. Immersive analytics is becoming a complex field that combines many areas of expertise: analytics, big data, infrastructure, virtual and augmented systems, image recognition, and many others, as well as human-computer interaction and visualization. In order to make sense of complex data, we need visualization interfaces. In immersive environments, data visualization is freed from the limitations of the traditional desktop screen, able to expand into an infinite canvas and into the third dimension. What potential does this bring for data visualization and immersive visualization? How can we leverage this potential? How is it changing our approach to visualizing and interacting with data?

Bio

Dr Benjamin Bach is a Lecturer in Design Informatics and Visualization at the University of Edinburgh. His research designs and investigates interactive information visualization interfaces to help people explore, communicate, and understand data. Before joining the University of Edinburgh in 2017, Benjamin worked as a postdoc at Harvard University (Visual Computing Group), Monash University, and the Microsoft Research-Inria Joint Centre, and was a visiting researcher at the University of Washington and Microsoft Research in 2015. He obtained his PhD in 2014 from the Université Paris-Sud, where he worked in the Aviz group at Inria. His PhD thesis, Connections, Changes, and Cubes: Unfolding Dynamic Networks for Visual Exploration, was awarded an honorable mention for Best Thesis by the IEEE Visualization Committee.

2021-03-29

Warning, This robot is not what it seems! A discussion on deception and the future of social robots

Abstract

Social robots are designed to interact with people using human- or animal-like language, gestures, or other techniques. This approach promises intuitive interaction and can be designed to shape a person's mood and behavior; social robots can even serve as companions. However, I argue that social robots - by design - are fundamentally rooted in deception, which highlights real potential dangers as these robots enter society. On the flip side, considering this deception also provides a positive way forward: a path for developing social robots that can be successful in our everyday lives.

Bio

Dr. James Young is a professor of computer science at the University of Manitoba.

2021-03-15

On Body & Out of Body Interactions

Abstract

Mobile devices have become ubiquitous over the last decade, changing the way we interact with technology and with one another. Mobile devices were at first personal devices carried in our hands or pockets. They are now changing form to fit our lifestyles and an increasingly large and diverse amount of information to display. My research focuses on the design, development, and evaluation of novel interaction techniques with mobile devices using a human-centered approach. In this presentation, I will focus in particular on two types of mobile technologies: wearables and drones. I will discuss the use of multiple modalities to interact with technology, and in particular how haptics on wearables can support long-term tasks without interrupting the user's attention. I will then discuss how autonomous devices such as drones reinvent our understanding of ubiquitous computing, and present my current research on collocated, natural human-drone interaction.

Bio

Dr. Jessica Cauchard is a lecturer in the Department of Industrial Engineering and Management at Ben-Gurion University of the Negev in Israel, where she recently founded the Magic Lab. Her research is rooted in the fields of Human-Computer and Human-Robot Interaction, with a focus on novel interaction techniques and ubiquitous computing. Previously, she was a faculty member in Computer Science at the Interdisciplinary Center Herzliya between 2017 and 2019. Before moving to Israel, Dr. Cauchard worked as a postdoctoral scholar at Stanford University. She has a strong interest in autonomous vehicles and intelligent devices and how they change our device ecology. She completed her PhD in Computer Science at the University of Bristol, UK, in 2013, and received a Magic Grant from the Brown Institute for Media Innovation in 2015 for her work on interacting with drones.

2021-03-08

Soft Infrastructures

Abstract

The COVID-19 pandemic has exposed the fragility of existing models of housing, collective life, and infrastructure. The 99% have been disproportionately marginalized by shelter-in-place orders and quarantines that assume they have the resources to weather this moment of extreme instability. The transition from a quarantined city to a post-pandemic city will not only be a fight for collective human health and wellbeing, but will also be the staging ground for our last stand to prevent a forthcoming climate catastrophe. New paradigms of urban design and civic infrastructure must be decoupled from society's carbon-intensive practices and archaic fetishes for "solidity" in building. "Soft Infrastructures" present an alternative modality for urban design. This lecture will discuss three ongoing projects by the Center for Civilization: Civic Commons Catalyst, Eternal Ephemera, and Soft City/Soft Haus.

Bio

Alberto de Salvatierra is an assistant professor of urbanism and data in architecture at the University of Calgary's School of Architecture, Planning and Landscape; director of the Center for Civilization (a design research lab and international think tank); the founding principal of PROXIIMA; and a Global Shaper at the Calgary Hub of the Global Shapers Community (an initiative by the World Economic Forum based in Geneva, Switzerland). An interdisciplinary polymath, architectural designer, and landscape urbanist, Alberto focuses his research and work on material flows as infrastructure at the urban and civilizational scales, while his collaborative research agenda centers on fostering, developing, and writing on interdisciplinary pedagogy and practices. His work has been published widely and exhibited both domestically and abroad - in the United States, the United Kingdom, Mexico, Italy, Japan, Sweden, and Serbia - in venues such as the Priscilla Fowler Fine Art Gallery in Las Vegas, NV, the Calatrava-designed Milwaukee Art Museum in Milwaukee, WI, and the National Building Museum in Washington, D.C. In 2019, he was part of the Harvard Kennedy School's inaugural STS (Science, Technology and Society) program on Expertise, Trust and Democracy, and an invited panelist and delegate to the United Nations.

2021-03-01

Design in the Age of Intelligent Machines

Abstract

Bio

Alicia Nahmad Vazquez is the founder of Architecture Extrapolated (R-Ex) and an assistant professor in robotics and AI at the University of Calgary's School of Architecture, Planning and Landscape (SAPL). She is also co-director of the Laboratory for Integrative Design at the University of Calgary. For the past five years, Alicia worked as a studio master in the Architectural Association Design Research Laboratory (DRL) master's program. As a research-based practising architect, Alicia explores materials and digital design and fabrication technologies, along with the digitization of building trades and the wisdom of traditional building cultures. Her projects include the construction of the award-winning 'Knit-Candela' and diverse collaborations with practice and academic institutions. She holds a PhD in human-robot collaborative (HRC) design from Cardiff University and an MArch from the AADRL. Alicia previously worked on developing design tools for practices such as Populous and Zaha Hadid Architects. She has also been an Artist-in-Residence at Autodesk Pier 9 and has taught and lectured extensively in Latin America and Europe. Her research has been widely published internationally in journals and conference proceedings.

2021-02-08

Dynamic Graphics as a Language

Abstract

How can we make animation as easy as sketching? How can we create dynamic content in real time, at the speed of thought? How will dynamic graphics shape our real-time communication and language? In this talk, I'm going to present my research on animation, storytelling, and design, including the design of Sketchbook Motion, which Apple crowned "the best iPad app of the year 2016". Most of us experience the power of animated media every day: animation makes it easy to communicate complex ideas beyond verbal language. However, only a few of us have the skills to express ourselves through this medium. By making animation as easy, accessible, and fluid as sketching, I intend to make dynamic graphics a powerful medium to think, create, and communicate rapidly.

Bio

Rubaiat Habib is a Sr. Research Scientist at Adobe Research. His research interest lies at the intersection of Computer Graphics and HCI for creative thinking, design, and storytelling. His research on dynamic drawings and animations has turned into products that reach a global audience. Rubaiat has received several awards for his work, including Apple's App of the Year 2016, three ACM CHI Best Paper nominations, ACM CHI and ACM UIST People's Choice Best Talk awards, and ACM CHI Golden Mouse awards for best research videos. For his PhD at the National University of Singapore, Rubaiat received a Microsoft Research Asia PhD fellowship. Prior to Adobe, he worked at Autodesk Research and Microsoft Research. (More: rubaiathabib.me)

2021-01-25

Creating Smart Everyday Things

Abstract

In my vision, the user interfaces of the future lie in a blend of smart physical and virtual environments. My research focuses on the physical side: bringing interactivity to everyday things. I believe this vision is only achievable if people with varying backgrounds and abilities can work together in an accessible and collaborative environment. In this talk, I will describe two major threads of my research: interactive everyday things and hardware prototyping tools. The first thread investigates interactive systems that sense the context in which things are used, or that estimate a user's intention when touch input data is noisy. For example, I will demonstrate a tablecloth augmented with a fabric sensor that can sense and recognize non-metallic objects placed on a table, such as food, different types of fruits, liquids, plastic, and paper products, and I will show examples of how this technique can be used for contextual applications. The second thread investigates tools that lower the bar of entry to prototyping electronics, an essential skill for creating smart everyday things. The goal of this line of work is to enable more people, with varying backgrounds and abilities, to create smart everyday things and, eventually, a better user experience of smart environments. For example, I will demonstrate an audio-tactile tutorial system that helps blind or low-vision learners understand circuit diagrams, an important task in the circuit prototyping pipeline. Both threads share a common goal: to create a better user experience in smart environments.
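The abstract does not describe how the tablecloth's noisy sensor data are actually classified, so the following is a purely illustrative sketch under invented assumptions (hypothetical labels, synthetic readings, a simple nearest-centroid rule), not the talk's system: averaging labelled training frames per object and assigning a new frame to the closest centroid.

```python
# Purely illustrative sketch: the real sensing and recognition pipeline is
# not described in the abstract; all names and data here are invented.
# Classify a noisy sensor frame by nearest centroid over labelled frames.

import numpy as np

def train_centroids(frames: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """frames maps an object label to an (n_samples, n_electrodes) array."""
    return {label: samples.mean(axis=0) for label, samples in frames.items()}

def classify(frame: np.ndarray, centroids: dict[str, np.ndarray]) -> str:
    """Return the label whose centroid is closest to the new frame."""
    return min(centroids, key=lambda label: np.linalg.norm(frame - centroids[label]))

# Synthetic example: two objects, 16-electrode readings.
rng = np.random.default_rng(0)
training = {
    "mug":   rng.normal(1.0, 0.1, size=(20, 16)),
    "apple": rng.normal(0.4, 0.1, size=(20, 16)),
}
centroids = train_centroids(training)
print(classify(rng.normal(1.0, 0.1, size=16), centroids))  # expected: mug
```

Averaging many training frames is one simple way to tolerate the per-frame noise the abstract mentions; a deployed system would likely use richer features and a stronger classifier.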

Bio

Xing-Dong Yang is an Assistant Professor of Computer Science at Dartmouth College. His research is broadly in Human-Computer Interaction (HCI), where he creates interactive systems using sensing techniques and haptics to enable new applications in smart physical and virtual environments. Xing-Dong's work has been recognized with a Best Paper award at UIST 2019 and eight Honorable Mention awards: one at UIST 2020, six at CHI (2010, 2016, 2018, 2019 × 2, 2020), and one at MobileHCI 2009. Aside from academic publications, Xing-Dong's work attracts major public interest through news coverage in a variety of media outlets, including TV (e.g., Discovery's Daily Planet), print (e.g., The Wall Street Journal, Forbes), and internet news (e.g., MIT Technology Review, New Scientist).

2021-01-15

'Mechanical Shells' for Actuated Tangible UIs - Hybrid Architecture of Active and Passive Machines for Interaction Design

Abstract

Actuated and shape-changing Tangible User Interfaces (TUIs) have been explored widely in HCI over the past few decades to enrich interaction with digital information in physical and dynamic ways. In this effort, various types of generic actuated-TUI devices have been investigated, including pin-based shape displays, actuated curve interfaces, and swarm user interfaces. While these approaches are intended to be dynamically reconfigurable to offer generic interactivity, each piece of hardware is inherently limited to its fixed configuration. How can we further expand the versatility of actuated TUIs to fully realize their capability for tangible interactions and motion/shape representations? In my talk, I propose the 'mechanical shell', a design concept for actuated TUIs with modular, interchangeable components that extend and convert the shape, motion, and interactivity of the hardware. Compared with the actuated TUI alone, each mechanical shell brings much more specialized and customized interactivity, while the architecture as a whole can adapt to much more versatile interactions. I present two research instances that demonstrate this concept, based on a pin-based shape display and a swarm user interface, and introduce proof-of-concept implementations as well as a range of applications. By introducing this novel interaction architecture, my research envisions a future physical environment where active and passive machines exist together to enrich tangible and embodied interactions.

Bio

Ken is an interaction designer and HCI researcher from Japan. Currently, he is a Ph.D. candidate in the Tangible Media Group at the MIT Media Lab. He is interested in developing interfaces that blend digital information and computational aids into everyday physical tools and materials, creating novel physical and perceptual experiences. His research has been presented at top HCI conferences (ACM CHI, UIST, TEI, etc.), and has been demonstrated in various exhibitions and recognized with awards, including Ars Electronica, the A' Design Award, and the Japan Media Arts Festival.