Researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute are teaming up with the US Navy to explore the challenges of infusing autonomous robots with a sense of right and wrong, and of the consequences of both.
"Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," said principal investigator Matthias Scheutz, professor of computer science at Tufts School of Engineering and director of the Human-Robot Interaction Laboratory (HRI Lab) at Tufts.
The project, funded by the Office of Naval Research (ONR) in Arlington, Virginia, will first isolate essential elements of human moral competence through theoretical and empirical research.
Based on the results, the researchers will develop formal frameworks for modelling human-level moral reasoning that can be verified. Next, they will implement corresponding mechanisms for moral competence in a computational architecture.
"Our lab will develop unique algorithms and computational mechanisms integrated into an existing and proven architecture for autonomous robots," said Scheutz.
Once the architecture is established, researchers can begin to evaluate how machines perform in human-robot interaction experiments, where robots face various dilemmas, make decisions, and explain those decisions in ways that are acceptable to humans.
Selmer Bringsjord, head of the Cognitive Science Department at RPI, and Naveen Govindarajulu, a post-doctoral researcher working with him, are focused on how to engineer ethics into a robot so that moral logic is intrinsic to these artificial beings.
In Bringsjord's approach, all robot decisions would automatically go through at least a preliminary, lightning-quick ethical check using simple logics inspired by today's most advanced artificially intelligent question-answering computers.
"We're talking about robots designed to be autonomous; hence the main purpose of building them in the first place is that you don't have to tell them what to do," Bringsjord said.
"When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite rule set created ahead of time by humans can anticipate every possible scenario," Bringsjord added.