doi.org/10.1109/JPROC.2019.2898267

Preview meta tags from the doi.org website.

Linked Hostnames (2)

Search Engine Appearance

Google

https://doi.org/10.1109/JPROC.2019.2898267

On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots

Previous work on ethical machine reasoning has largely been theoretical, and where such systems have been implemented, it has, in general, been only initial proofs of principle. Here, we address the question of desirable attributes for such systems to improve their real world utility, and how controllers with these attributes might be implemented. We propose that ethically critical machine reasoning should be proactive, transparent, and verifiable. We describe an architecture where the ethical reasoning is handled by a separate layer, augmenting a typical layered control architecture, ethically moderating the robot actions. It makes use of a simulation-based internal model and supports proactive, transparent, and verifiable ethical reasoning. To do so, the reasoning component of the ethical layer uses our Python-based belief-desire-intention (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification methods. To prove the principles of our approach, we use a case study implementation to experimentally demonstrate its operation. Importantly, it is the first such robot controller where the ethical machine reasoning has been formally verified.



Bing and DuckDuckGo

Both show the same URL, title, and description as the Google preview above.
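The abstract above outlines a layered robot controller in which a separate ethical layer, informed by a simulation-based internal model, moderates the actions proposed by the underlying controller, with the reasoning handled by a Python-based belief-desire-intention (BDI) implementation whose cycle is logged for transparency. The paper's own code is not part of this preview, so the following is only a minimal illustrative sketch, in Python, of what such a logged, plan-based ethical veto loop might look like; the names robot_controller, internal_model, EthicalLayer, and the example plan are invented here for illustration and are not the authors' implementation.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("ethical_layer")


def robot_controller():
    """Stand-in for the underlying layered controller: it simply proposes an action."""
    return "advance"


def internal_model(beliefs, action):
    """Stand-in for the simulation-based internal model: predict the action's outcome."""
    if action == "advance" and "hazard_ahead" in beliefs:
        return {"human_harmed"}
    return set()


class EthicalLayer:
    """Toy BDI-style reasoner that moderates actions proposed by the robot controller."""

    def __init__(self, plans):
        self.beliefs = set()   # facts currently believed
        self.plans = plans     # goal -> (context condition, alternative action)

    def perceive(self, percepts):
        """Update beliefs from (simulated) sensor input and log them."""
        self.beliefs |= set(percepts)
        log.info("beliefs: %s", sorted(self.beliefs))

    def moderate(self, proposed):
        """Approve the proposed action, or substitute one from an applicable plan."""
        predicted = internal_model(self.beliefs, proposed)
        log.info("proposed %r, predicted outcome %s", proposed, sorted(predicted))
        if "human_harmed" not in predicted:
            log.info("approved %r", proposed)
            return proposed
        # Deliberate: adopt the first goal whose context holds in the current beliefs.
        for goal, (context, alternative) in self.plans.items():
            if context <= self.beliefs:
                log.warning("vetoed %r; intention %r -> %r", proposed, goal, alternative)
                return alternative
        log.warning("vetoed %r; no applicable plan, stopping", proposed)
        return "stop"


if __name__ == "__main__":
    plans = {
        "protect_human": ({"human_near_hazard"}, "warn_human"),
    }
    layer = EthicalLayer(plans)
    layer.perceive({"goal_visible", "hazard_ahead", "human_near_hazard"})
    action = layer.moderate(robot_controller())
    log.info("executed %r", action)

Run as-is, the sketch logs the belief update, the proposed action, the harm predicted by the internal model, and the substitution of warn_human for advance. The point is only to show how a declarative plan structure plus per-step logging can make each decision in the reasoning cycle inspectable, loosely echoing the transparency-through-logging idea in the abstract.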

  • General Meta Tags (12)
    • title
      On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots | IEEE Journals & Magazine | IEEE Xplore
    • google-site-verification
      qibYCgIKpiVF_VVjPYutgStwKn-0-KBB6Gw4Fc57FZg
    • Description
      Previous work on ethical machine reasoning has largely been theoretical, and where such systems have been implemented, it has, in general, been only initial pro
    • Content-Type
      text/html; charset=utf-8
    • viewport
      width=device-width, initial-scale=1.0
  • Open Graph Meta Tags (3)
    • og:image
      https://ieeexplore.ieee.org/assets/img/ieee_logo_smedia_200X200.png
    • og:title
      On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots
    • og:description
      Previous work on ethical machine reasoning has largely been theoretical, and where such systems have been implemented, it has, in general, been only initial proofs of principle. Here, we address the question of desirable attributes for such systems to improve their real world utility, and how controllers with these attributes might be implemented. We propose that ethically critical machine reasoning should be proactive, transparent, and verifiable. We describe an architecture where the ethical reasoning is handled by a separate layer, augmenting a typical layered control architecture, ethically moderating the robot actions. It makes use of a simulation-based internal model and supports proactive, transparent, and verifiable ethical reasoning. To do so, the reasoning component of the ethical layer uses our Python-based belief-desire-intention (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification methods. To prove the principles of our approach, we use a case study implementation to experimentally demonstrate its operation. Importantly, it is the first such robot controller where the ethical machine reasoning has been formally verified.
  • Twitter Meta Tags (1)
    • twitter:card
      summary
  • Link Tags (9)
    • canonical
      https://ieeexplore.ieee.org/document/8648363/
    • icon
      /assets/img/favicon.ico
    • stylesheet
      https://ieeexplore.ieee.org/assets/css/osano-cookie-consent-xplore.css
    • stylesheet
      /assets/css/simplePassMeter.min.css?cv=20240305_000000
    • stylesheet
      /assets/dist/ng-new/styles.css?cv=20240305_000000

Links (17)