
Avoiding Harm in Technology Innovation | MIT Sloan Management Review (2024)

This comprehensive article examines how organisations can responsibly develop and deploy emerging technologies like AI while mitigating potential harms. It provides a framework for ethical innovation and commercialisation.

RESPONSIBLE AI


DID YOU KNOW?

Did you know that in a 2024 case involving an AI chatbot, Air Canada faced legal consequences when its automated customer service system misrepresented company policy on bereavement fares, leading to a tribunal ruling against the airline and negative publicity?

NEED AN EXECUTIVE SUMMARY?

OVERVIEW

The article "Avoiding Harm in Technology Innovation" by Tania Bucic and Gina Colarelli O'Connor explores the critical challenge of responsibly developing and deploying emerging technologies like AI. Based on interviews with executives and managers involved in technology scouting, adoption, and commercialisation, the authors identify common problems in companies' cultures and processes that limit consideration of potential harms. They propose a Responsible Innovation and Commercialisation (RIC) framework adapted from the Responsible Research and Innovation principles to help organisations capitalise on emerging technologies while mitigating unanticipated consequences.

🧩 CONTEXT

Advances in AI and other technologies promise to solve global problems and fuel economic growth. However, they also raise ethical questions and potential adverse consequences that many companies are ill-equipped to address. Innovators are often ahead of regulations that might provide guardrails around commercialisation decisions. This requires organisations to establish systematic processes for considering new technologies' ethical implications and potential harms before releasing them. The article addresses this crucial issue by providing a framework for responsible innovation that can be applied to AI and other emerging technologies.

🔍 WHY IT MATTERS

➡️ Failure to consider potential outcomes can lead to negative consequences—The Air Canada case demonstrates how enthusiasm for AI-powered customer service can override understanding of its limitations, resulting in legal and reputational damage. This highlights the need to carefully consider AI's potential impacts before deployment.

➡️ Few systematic processes exist for ethical consideration—The research revealed that companies often lack formal guidance or processes for questioning the moral implications of the technologies they develop. This gap increases the risk of unintended harm when deploying AI systems.

➡️ Business imperatives often overshadow ethical concerns—Companies focus primarily on using new technologies like AI, their market objectives, and their business model constraints, giving little forethought to potential adverse effects on society or nefarious uses.

➡️ External stakeholder perspectives are often overlooked—Many organisations avoid engaging with external stakeholders during AI development, potentially missing essential insights about societal impacts and public acceptance.

➡️ Risk mitigation often focuses on liability rather than harm prevention—Some companies prioritise offloading liability for AI systems onto other parties rather than avoiding causing harm altogether, which can lead to ethical blind spots.

💡 KEY INSIGHTS

➡️ Anticipation is crucial for proactive risk management—Before deployment, organisations must systematically consider all potential uses and consequences of AI technologies, including positive and negative outcomes for various stakeholder groups.

➡️ Reflexivity helps surface biases and assumptions—Questioning organisational norms and industry standards regarding AI development can reveal ethical shortcomings and lead to more responsible innovation practices.

➡️ Inclusion of diverse stakeholders improves decision-making—Engaging with relevant external groups, including scientists, advocacy organisations, and end users, helps ensure a more comprehensive understanding of AI's potential impacts.

➡️ Responsiveness is critical to addressing unforeseen consequences—Organisations need mechanisms to monitor for unintended effects of AI systems after deployment and swiftly resolve issues that arise.

➡️ A systematic framework can guide ethical innovation—The Responsible Innovation and Commercialisation (RIC) framework provides a structured approach for organisations to consider ethical implications throughout the AI development and deployment process.

🚀 ACTIONS FOR LEADERS

➡️ Implement a systematic review process—Establish a formal procedure for evaluating the ethical implications and potential harms of AI systems before and during development, using the RIC framework as a guide.

➡️ Cultivate diverse expertise—Build teams that include technical experts, ethicists, social scientists, and other professionals who can offer diverse perspectives on AI's potential impacts.

➡️ Engage external stakeholders—To comprehensively understand AI's societal implications, proactively seek input from relevant external groups, including potential users, advocacy organisations, and domain experts.

➡️ Develop clear ethical guidelines—Create and communicate ethical principles and decision-making criteria for AI development and deployment that align with the organisation's values and mission.

➡️ Establish monitoring and response mechanisms—Implement systems to continuously monitor deployed AI technologies for unintended consequences and have clear protocols for addressing issues.

🔗 CONCLUSION

As AI and other emerging technologies advance rapidly, organisations face increasing responsibility to understand and mitigate their potential negative impacts on society. The Responsible Innovation and Commercialisation framework provides a structured approach for companies to navigate these challenges, enabling them to capitalise on technological innovations while minimising harm. By embracing anticipation, reflexivity, inclusion, and responsiveness, organisations can foster a culture of ethical innovation that mitigates risks, builds trust with stakeholders, and contributes positively to society.

🎯 KEY TAKEAWAY

To harness the full potential of AI while minimising risks, organisations must implement systematic processes for ethical consideration throughout the innovation lifecycle, engaging diverse perspectives and remaining responsive to unforeseen consequences.

Explore Our Themed Collections: Uncover a curated selection of current and past articles on our website.

Join 10,000+ Professionals: Follow me on LinkedIn for daily insights.