CoBots: Collaborative Robots Servicing Multi-Floor Buildings

Manuela Veloso1, Joydeep Biswas2, Brian Coltin2, Stephanie Rosenthal1, Tom Kollar1, Cetin Mericli1, Mehdi Samadi1, Susana Brandão3,4, and Rodrigo Ventura4

1 Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA. {veloso, srosenthal, tkollar, cmericli, msamadi}@cs.cmu.edu
2 The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA. {joydeepb, bcoltin}@cs.cmu.edu
3 Electrical and Computer Engineering Department, Carnegie Mellon University, Pittsburgh, PA 15217, USA. [email protected]
4 Electrical and Computer Engineering Department, Instituto Superior Técnico, Lisbon, Portugal. [email protected] (This work was carried out while visiting the Computer Science Department at Carnegie Mellon University.)

I. RESEARCH OVERVIEW

In this video we briefly illustrate the progress and contributions made with our mobile, indoor, service robots CoBots (Collaborative Robots) since their creation in 2009. Many researchers, present authors included, aim for autonomous mobile robots that robustly perform service tasks for humans in our indoor environments. The efforts towards this goal have been numerous and successful, and we build upon them. However, there clearly remain many research challenges before we can experience intelligent mobile robots that are fully functional and capable in our human environments.

Our research and continuous indoor deployment of the CoBot robots in multi-floor office-style buildings provide multiple contributions, including: robust real-time autonomous localization [1], based on WiFi data [2] and on depth camera information [3]; symbiotic autonomy, in which the deployed robots can overcome their perceptual, cognitive, and actuation limitations by proactively asking for help from humans [4], [5] and, in ongoing experiments, from the web [6], [7] and from other robots [8], [9]; human-centered planning, in which models of humans are explicitly used in robot task and path planning [10]; semi-autonomous telepresence, enabling the combination of rich remote visual and motion control with autonomous robot localization and navigation [11]; web-based user task selection and information interfaces [12]; and creative multi-robot task scheduling and execution [12]. Furthermore, we have developed a 3D simulation of the multi-floor, multi-person environment, which will allow extensive learning experiments to provide approximate initial models and parameters to be refined with the real robots' experiences. Finally, our robot platform is extremely effective, in particular with its stable low-clearance, omnidirectional base. The CoBot robots were designed and built by Michael Licitra ([email protected]), and the base is a scaled-up version of the CMDragons small-size soccer robots [13], also designed and built by Licitra. Remarkably, the robots have operated over 200 km for more than three years without any hardware failures and with minimal maintenance. Our robots purposefully include a modest variety of sensing and computing devices, including the Microsoft Kinect depth camera, vision cameras for telepresence and interaction, a small Hokuyo LIDAR for obstacle avoidance and localization comparison studies (no longer present in the most recent CoBot-4), a touch-screen and speech-enabled tablet, microphones and speakers, as well as wireless signal access and processing.

The CoBot robots can perform multiple classes of tasks:
• A single-destination task, in which the user asks the robot to go to a specific location (the Go-To-Room task) and, in addition, to deliver a specified spoken message (the Deliver-Message task);
• An item-transport task, in which the user requests the robot to retrieve an item at a specified location and to deliver it to a destination location: this Transport task also acts as the task to accompany a person between locations, when the item to transport is a person;
• A task to escort a person to a specified location, the Escort task, in which the robot waits for a person in front of the elevator on the floor of the destination location and guides the person to the location;
• A semi-autonomous telepresence task, the Telepresence task, in which users may request to be remotely present on the mobile robot with autonomous navigation and obstacle avoidance. Users select destination points on the map or on the robot's image view to move to remotely through the telepresence web interface. Furthermore, they can control the robot through a rich motion- and perception-controlled web-based interface [11].

The above tasks are equivalent from a navigational point of view, as they are all achieved by the same navigation planner generating plans to reach destinations in the building. A task planner generates a different interaction plan for each task and its symbiotic autonomy needs.

We are currently focusing on several research directions: multi-modal speech interaction; interactions among our multiple robots; and learning from human demonstration, human observation, and human correction. We continue to investigate depth-camera-based 3D image processing for object and person detection and person following. We further investigate robust execution monitoring, and active learning for learning about the environment and effective factored human-robot interaction plans.

II. VIDEO CONTENT

We organize the video from 2009 to the present.

a) 2009: The initial CoBot in 2009 (later named CoBot-1) was designed to serve as a visitor companion robot.1 CoBot guides the visitor through the building according to the visit's schedule. The robot plans its path between locations and provides information about the places and hosts in the visit, both taking the initiative and responding to requests [4]. For its autonomous localization and navigation, CoBot-1 used the signal strengths of WiFi access points [2]. As we realized that the robot would inevitably have limitations in localization accuracy, even if seldom, we introduced from early on the concept of symbiotic autonomy, in which the robot would proactively ask for help from humans when its localization uncertainty was high [4], [10]. CoBot could navigate and effectively avoid obstacles on a single floor of an office building, with multiple long corridors with different flooring (carpet, tile, and slate), handling different wall construction materials (cement and drywall) which interfered differently with the WiFi data. The portion of the video for 2009 is of low quality, as we did not collect better video at that time.

b) 2010: In 2010, CoBot moved to a new building, quite rich from an architectural point of view, with glass walls and bridges, non-straight turns in corridors, different flooring, and wide lounge areas. CoBot-2 was completed, which included a powerful pan/tilt/zoom camera used for web-based telepresence [11]. CoBot-2 has attended meetings on behalf of remote users, who could effectively communicate and move around physically in the environment. We developed Corrective Gradient Refinement (CGR) localization, a novel localization algorithm which uses CoBot's LIDAR [1]. Furthermore, CoBot escorted visitors at a crowded open house, demonstrating its robust obstacle avoidance and navigation capabilities.

c) 2011: In 2011, we opened a website where users could schedule tasks on CoBots, such as sending messages, escorting visitors, and making deliveries [12]. The robots have completed hundreds of user tasks, such as delivering mail, collecting printouts, and sending messages. A few of these tasks are shown in the video. We also added functionality for CoBots to ride the elevator with human help [5], another application of symbiotic autonomy, in which robots and humans collaborate to complement one another's shortcomings. We developed a 3D simulator for the CoBots, which will be used for larger-scale testing than is possible in the physical world, as well as for rapid prototyping and testing. Furthermore, we extended CGR localization to use the Kinect RGB-D camera [3]. The Kinect is much cheaper than the LIDAR sensor it replaced, bringing indoor robots a step closer to feasible deployment. Finally, we have created an algorithm for CoBot to find and search for arbitrary objects in the building using the web [6]. The central idea behind this research, as with symbiotic autonomy, is that we recognize the limitations of robots as they currently are and devise strategies to work around these limitations with human help (or, in this case, with human-generated data) rather than setting the issues aside and limiting the robots to the lab or strictly controlled conditions.

d) 2012 and Beyond: In 2012, CoBot-3 and CoBot-4 were completed. CoBot-3 is being used offsite for telepresence. Both are much quieter than CoBot-1 and CoBot-2, but retain the same basic design. The CoBots continue to be available to users daily. As we continue to increase the functionality and robustness of individual CoBots, we are beginning to explore the potential of multiple CoBots. In three years, we have made great progress towards our goal of deploying multiple robust and reliable robots in an office building, building upon the past 25 years of research by the robotics community.

1 We initially used CoBot to stand for Companion RoBot, which we later changed to mean Collaborative RoBot after the subsequent introduction of symBiotic autonomy as a collaborative relationship with the humans in the environment. The term "cobot" has been used in a variety of research projects, in particular for an early robot manufacturing helper. We use the term basically for its good sound. Collaborative robots have also recently been termed co-robots.

REFERENCES

[1] J. Biswas, B. Coltin, and M. Veloso, "Corrective gradient refinement for mobile robot localization," in Proceedings of IROS'11, the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011.
[2] J. Biswas and M. Veloso, "WiFi localization and navigation for autonomous indoor mobile robots," in Proceedings of ICRA'10, the IEEE International Conference on Robotics and Automation, 2010, pp. 4379–4384.
[3] J. Biswas and M. Veloso, "Depth camera based indoor mobile robot localization and navigation," in Proceedings of ICRA'12, the IEEE International Conference on Robotics and Automation, 2012.
[4] S. Rosenthal, J. Biswas, and M. Veloso, "An effective personal mobile robot agent through symbiotic human-robot interaction," in Proceedings of AAMAS'10, the 9th International Conference on Autonomous Agents and Multiagent Systems, 2010, pp. 915–922.
[5] S. Rosenthal, M. Veloso, and A. Dey, "Task behavior and interaction planning for a mobile service robot that occasionally requires help," in Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
[6] T. Kollar, M. Samadi, and M. Veloso, "Enabling robots to find and fetch objects by querying the web," in Proceedings of AAMAS'12, the Eleventh International Joint Conference on Autonomous Agents and Multi-Agent Systems, 2012.
[7] M. Samadi, T. Kollar, and M. Veloso, "Using the web to interactively learn to find objects," in Proceedings of AAAI-12, the Twenty-Sixth Conference on Artificial Intelligence, Toronto, Canada, July 2012.
[8] A. Hristoskova, C. Aguero, M. Veloso, and F. Turck, "Personalized guided tour by multiple robots through semantic profile definition and dynamic redistribution of participants," in Proceedings of the 8th International Cognitive Robotics Workshop at AAAI-12, Toronto, Canada, July 2012.
[9] C. Aguero and M. Veloso, "Transparent multi-robot communication exchange for executing robot behaviors," in Proceedings of the 10th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2012), ser. Advances in Intelligent and Soft Computing, vol. 156. Springer, April 2012, pp. 215–222.
[10] S. Rosenthal, M. Veloso, and A. Dey, "Is someone in this office available to help me?" Journal of Intelligent & Robotic Systems, pp. 1–17, 2011.
[11] B. Coltin, J. Biswas, D. Pomerleau, and M. Veloso, "Effective semi-autonomous telepresence," in Proceedings of the RoboCup Symposium, July 2011, pp. 289–300.
[12] B. Coltin, M. Veloso, and R. Ventura, "Dynamic user task scheduling for mobile robots," in Workshop on Automated Action Planning for Autonomous Mobile Robots at the Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011.
[13] J. Bruce, S. Zickler, M. Licitra, and M. Veloso, "CMDragons: Dynamic passing and strategy on a champion robot soccer team," in Proceedings of ICRA'08, the IEEE International Conference on Robotics and Automation, Pasadena, CA, 2008.
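As a toy illustration of the planner split described in Section I (one shared navigation planner for all task classes, with a task planner contributing per-task interaction steps such as symbiotic-autonomy help requests), the sketch below is not from the CoBot codebase: the map, room names, and step vocabulary are invented for illustration only.

```python
from collections import deque

# Invented topological map of one floor: location -> adjacent locations.
GRAPH = {
    "7412": ["corridor7"],
    "7002": ["corridor7"],
    "elevator7": ["corridor7"],
    "corridor7": ["7412", "7002", "elevator7"],
}

def navigation_plan(start, goal, graph=GRAPH):
    """Shared navigation planner: BFS over the building map,
    used unchanged by every task class."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable in this map

def go_to_room(start, room, message=None):
    """Go-To-Room; with a message it becomes Deliver-Message."""
    steps = [("navigate", navigation_plan(start, room))]
    if message:
        steps.append(("speak", message))  # task-specific interaction step
    return steps

def transport(start, pickup, dest, item):
    """Transport: two navigation legs plus symbiotic-autonomy requests,
    since the robot has no arms and asks humans to load/unload the item."""
    return [
        ("navigate", navigation_plan(start, pickup)),
        ("ask_human", f"please place {item} in my basket"),
        ("navigate", navigation_plan(pickup, dest)),
        ("ask_human", f"please take {item}"),
    ]
```

The design point is that only the interaction steps differ between task classes; every navigation leg, whatever the task, is produced by the same planner over the same map.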