Opinion
12 December 2023

Businesses must take a human-centric approach to AI development

We stand on the cusp of an artificial intelligence revolution that promises not only to reshape business and society but also to enhance our lives in unprecedented ways. From finance to manufacturing to healthcare, AI's integration across industries automates routine tasks, generates valuable insights from data, and enhances decision-making. However, without a human-centric approach to implementation, this powerful technology risks causing more harm than good.

Several cautionary tales of real-world AI systems that failed to consider human needs and values have already emerged. As more organisations wake up to these dangers, it is becoming imperative that AI development adopt a human-centric approach. Putting people first when designing, deploying and governing AI systems can help maximise their benefits while minimising risks. This requires incorporating ethical thinking at all levels, from the data scientists building the algorithms to the executives mapping out business strategy.

Businesses can only prevent AI from running amok by elevating human considerations over efficiency and optimisation. Those that embrace human-centric AI stand to gain a competitive advantage and public trust. On the other hand, firms that turn a blind eye to AI’s societal impacts risk consumer backlash and a drain of skilled talent. The incentives for an ethical approach are clear, but enacting it requires concrete policies and practices.

The risks of dehumanised AI

Several high-profile cases have revealed the dangers of AI systems that ignore human needs. In 2018, Amazon had to scrap an AI recruiting tool that exhibited bias against women. The system was trained on historical data that reflected male-dominated hiring patterns, so it penalised resumes containing the word “women’s”, as in “women’s chess club captain”. This shows how AI can amplify existing biases when deployment precedes rigorous testing.

ProPublica also found that a risk-assessment algorithm used in US courts falsely labelled black defendants as “high-risk” at nearly twice the rate of white defendants. Such algorithmic bias can deepen discrimination against already marginalised groups, and the lack of transparency around how the system reaches its decisions compounds the problem.

These examples underscore the hazards of “black box” AI. When AI decisions are incomprehensible, they cannot be audited for issues like bias, and key ethical considerations end up sacrificed in the pursuit of optimisation and efficiency.

Prioritising ethics and human oversight

To avoid these pitfalls, businesses must govern AI in an ethical, socially conscious way. Performing impact assessments before deployment can help identify potential harms like biases. Ongoing audits are also essential to ensure models remain fair over time as new data is ingested.
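
To make those ongoing audits concrete, the sketch below shows one minimal form such a check could take in Python, echoing the false-positive-rate disparity in the ProPublica example above. The record fields, group labels and 0.2 tolerance are all illustrative assumptions, not recognised standards.

    # Minimal recurring fairness audit: compare false positive rates
    # across groups and flag any disparity beyond a chosen tolerance.
    # All field names and the tolerance are illustrative assumptions.

    def false_positive_rate(records, group):
        """FPR for one group: share flagged high-risk among those who did not reoffend."""
        negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
        if not negatives:
            return 0.0
        flagged = [r for r in negatives if r["predicted_high_risk"]]
        return len(flagged) / len(negatives)

    def audit_fairness(records, groups, tolerance=0.2):
        rates = {g: false_positive_rate(records, g) for g in groups}
        spread = max(rates.values()) - min(rates.values())
        return {"rates": rates, "spread": spread, "fair": spread <= tolerance}

    # Run on every new batch of decisions so drift in fairness is
    # caught after deployment, not just before it.
    batch = [
        {"group": "A", "reoffended": False, "predicted_high_risk": True},
        {"group": "A", "reoffended": False, "predicted_high_risk": False},
        {"group": "B", "reoffended": False, "predicted_high_risk": False},
        {"group": "B", "reoffended": True, "predicted_high_risk": True},
    ]
    print(audit_fairness(batch, groups=["A", "B"]))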

Keeping humans in the loop, rather than handing AI full autonomy, is equally essential. Humans must continuously monitor outputs and be ready to override incorrect or dangerous AI decisions. For example, even as self-driving cars edge towards full autonomy, most experts advocate supervised operation with a human behind the wheel as a failsafe.
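
One simple way to keep a human in the loop, sketched below with assumed names and an arbitrary 0.9 threshold, is to let the model act autonomously only on high-confidence cases and escalate everything else to a reviewer.

    # Human-in-the-loop gating: automate only high-confidence decisions
    # and escalate the rest. The threshold and names are illustrative.

    AUTO_THRESHOLD = 0.9  # arbitrary placeholder, tuned per application

    def decide(prediction, confidence, review_queue):
        if confidence >= AUTO_THRESHOLD:
            return prediction           # act autonomously
        review_queue.append((prediction, confidence))
        return None                     # defer to a human reviewer

    queue = []
    print(decide("approve", 0.97, queue))  # -> approve (automated)
    print(decide("reject", 0.55, queue))   # -> None (escalated)
    print(queue)                           # -> [('reject', 0.55)]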

Some organisations are establishing ethics boards to instil accountability in AI development. These groups issue guidelines aligned with corporate values and monitor projects to avert unethical applications. In addition to ethics boards, businesses must involve a diverse range of stakeholders in AI development, including ethicists, community representatives and end-users, so that a variety of perspectives and concerns are considered.

Promoting transparency and explainability

Many advanced AI techniques are black boxes whose decisions are opaque. To enable oversight, businesses should prioritise explainable AI (XAI) models whose reasoning can be deciphered. XAI reveals which data features inform predictions, helping to surface underlying issues.
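
As one concrete illustration, the sketch below uses scikit-learn's permutation importance on synthetic data to rank how much each feature drives a model's predictions. It is a generic example of the XAI idea described above, not a complete explainability programme.

    # Permutation importance: shuffle one feature at a time and measure
    # how much the model's accuracy drops; a large drop means the model
    # leans heavily on that feature. Synthetic data for illustration.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: {score:.3f}")  # higher = relied on more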

Interactive visualisations can also open up AI black boxes for non-technical users. Stakeholders must be able to comprehend model logic to build trust in AI systems, and transparency empowers humans to audit AI for fairness and safety, averting the “out of sight, out of mind” deployment of unvetted models.

Moreover, businesses should explore technical aids such as algorithmic auditing tools and AI explainability interfaces; technology itself can support ethical AI development.

Responsible and staged AI implementation

Even with transparency, AI can still cause inadvertent harm if deployed recklessly. Businesses should carefully test systems before full launch to assess benefits and risks. Starting with narrow, low-stakes applications allows for evaluating AI in a controlled setting. AI should also be tailored to business needs to solve problems without unnecessary data collection or autonomy.

For instance, AI systems designed with ethical considerations in mind have shown significant positive impacts in sectors like healthcare, where they assist in diagnosing diseases while respecting patient privacy and data security. In another example, an HR chatbot need not access sensitive employee data irrelevant to its role. Such measured deployment reduces the risk of AI systems becoming too powerful too quickly.
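
A minimal sketch of that data-minimisation principle, using invented field names, is to give the hypothetical HR chatbot an explicit allowlist of fields so that sensitive data never reaches it:

    # Data minimisation for the hypothetical HR chatbot: expose only an
    # explicit allowlist of fields. All field names are invented.

    CHATBOT_ALLOWED_FIELDS = {"name", "department", "vacation_balance"}

    def minimise(employee_record):
        """Strip every field the chatbot has no business seeing."""
        return {k: v for k, v in employee_record.items()
                if k in CHATBOT_ALLOWED_FIELDS}

    record = {
        "name": "A. Jones",
        "department": "Sales",
        "vacation_balance": 12,
        "salary": 80_000,           # never exposed to the bot
        "health_notes": "private",  # never exposed to the bot
    }
    print(minimise(record))
    # -> {'name': 'A. Jones', 'department': 'Sales', 'vacation_balance': 12}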

The way forward

Adopting human-centric AI will be crucial for enterprises to navigate the coming decade responsibly. Businesses should not only align with existing ethical frameworks and international standards, such as the EU’s AI regulations or IEEE’s Ethically Aligned Design, but also actively contribute to shaping these standards.

Businesses should train their data scientists and developers on ethical coding practices and perform impact assessments. They must implement stringent testing protocols for AI systems before launch, repeating evaluations regularly post-deployment. Ongoing audits by ethics boards and external auditors can provide independent oversight.

Transparency and explainability should become non-negotiable requirements for all AI systems. Businesses can also appoint dedicated AI ethics officers to institutionalise ethical thinking at the leadership level.

With a responsible approach and ongoing governance, AI can evolve as an empowering ally to humanity, not an existential threat. We can build an ethical AI future by harnessing AI’s upside while respecting human autonomy and avoiding detrimental impacts. The tools and frameworks are in our hands – it is up to businesses, policymakers, and individuals to use them wisely.

Mark Minevich is a digital cognitive AI strategist, investor, UN Advisor and AI advocate. Mark is also a Co-Founder and Co-Chair of AI for Planet Alliance with UN Agencies, a Senior Advisor to BootstrapLabs Venture Capital, an Executive Advisor to Artefact, President and founding partner of Going Global Ventures, and the author of the newly released award-winning book published by Wiley, “Our Planet Powered by AI.”
