How can lawyers help roboticists increase their robots’ safety through non-technical methods?
Lawyers can help engineers create value by helping them discover new non-technical methods for reducing risks in robots and other technologies.
While we wait for public regulators to come up with solutions to the challenges posed by AI and advanced robotics, tech lawyers are well equipped to help roboticists create value in their products by fostering trust while clarifying legal risks.
Together, Synch and Teknologisk Institut's COVR project are developing the knowledge needed to identify sources of unforeseeability and distrust in AI and robotics, not only at the technical level, but also at the commercial, legal, and ethical levels.
Building trust and clarifying legal risks are intimately intertwined and there is a way to secure both with the same measures.
- COVR is a Horizon 2020 funded project led by several tech clusters in the European Union. It aims to give cobot producers turnkey solutions for making safer robots, safearoundrobots.com
Roboticists seek the best balance between economy, function, safety, and trust
The main drive for roboticists is to find the most profitable balance between a product's economy, functionality, safety, and trustworthiness.
With increased complexity and more open and unpredictable environments, the price of a safe design, or of an added redundant safety mechanism, can climb very quickly and make a robot either unsellable or dangerous. However, lawyers have a way of thinking about risk that can help reduce those costs.
Governmental regulators have not provided clear answers to the specific needs of roboticists
As physical robots are given more complex tasks in areas where human beings are also present, the possibility of physical accidents creates a greater need to demonstrate that the producer has done everything to offer a product that is safe and efficient. AI is a promising technology, and many producers have advanced, functional pilot projects but decide not to deploy their technologies because of trust and predictability issues.
While the deployment of robotics is likely to reduce the overall occurrence of incidents, the development of artificial intelligence and its integration into robotics increase the likelihood of accidents caused by unforeseeable safety failures. Unforeseeable risks are by nature harder to identify and mitigate, which in turn increases the need for trust. Because these risks are unforeseeable, it also becomes difficult for producers to make predictions about their exposure to legal liability.
Please feel free to get in touch with us at Synch, if you want to talk about your AI and robotics project.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 779966.