Design practitioners have become familiar with an array of evolving technologies such as virtual and augmented reality (VR/AR), artificial intelligence (AI), the internet of things (IoT), building information modeling (BIM), and robotics. What we contemplate less often, however, is what happens when these technologies are combined.
Enter the world of teleoperation, which is the control of a machine or system from a physical distance. The concept of a remote-controlled machine is nothing new, but advances in AR and communication technologies are making teleoperation more sophisticated and commonplace. One ultimate goal of teleoperation is telepresence, a term commonly used to describe videoconferencing but one that increasingly also pertains to remote manipulation. Telerobotics refers specifically to the remote operation of semi-autonomous robots. All of these approaches involve a human–machine interface (HMI), and, as one might guess, advances in HMI technology have significant potential to transform building design and construction.
In one example, researchers at the University of Tasmania, in Australia, joined forces with local builder All Brick Tasmania to demonstrate the precision construction of a geometrically intricate brick wall. Using Fologram, an application that allows users to see CAD-based 3D modeling information superimposed over an actual view of a project site, bricklayers installed individual bricks to align with their digital counterparts, expediting a task that would otherwise require constant field measurement and verification. In this case, the mason takes the place of the robot, augmenting their field experience with computer vision. Nevertheless, the Fologram tool anticipates a robot-led construction process in the future.
Combining VR and computer vision with AI and robotics, Tokyo-based SE4 has created a Semantic Control system that can anticipate user choices relative to the robot’s environment. “With semantic-style understanding, a robot in the environment can use its own sensors and interpret human instructions through VR,” said SE4 CEO Lochlainn Wilson in a July interview with The Robot Report.
Developed for construction applications, the system can anticipate potential collisions between physical objects, or between objects and the site, and can determine how to move objects precisely into place (like the “snap” function in drawing software).
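SE4 has not published implementation details, but the “snap” behavior the company describes can be illustrated with a minimal sketch: when a tracked object comes within a set tolerance of its modeled target position, the controller commands the target position directly. The function name and tolerance value below are hypothetical, chosen for illustration only, and do not represent SE4’s actual software.

```python
# Illustrative sketch only; not SE4's API or algorithm.
# Mimics the "snap" behavior of drawing software: if an object is close
# enough to its modeled target position, command the target directly.

import math

SNAP_TOLERANCE_M = 0.02  # hypothetical 2 cm snap radius


def snap_to_target(current_pos, target_pos, tolerance=SNAP_TOLERANCE_M):
    """Return the target position if within tolerance; otherwise return
    the current commanded position unchanged."""
    distance = math.dist(current_pos, target_pos)  # Euclidean distance
    return tuple(target_pos) if distance <= tolerance else tuple(current_pos)


# Example: a brick 1.5 cm from its modeled location snaps into place.
print(snap_to_target((0.10, 0.20, 0.500), (0.10, 0.20, 0.515)))
# -> (0.1, 0.2, 0.515)
```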
Phil Rader, a VR research fellow at the University of Minnesota, says that “the day will come when robots can move freely around and, using AI, will be able to discern the real-world conditions and make real-time decisions.” Rader, an architectural designer who researches VR and telerobotics technologies, imagines that future jobsites will likely be populated by humans as well as humanoids, one working alongside the other.
This story was excerpted from PROSALES’s sister publication ARCHITECT.