Asking the tough questions about AI: Where to from here?

Idea In Brief

When adopting AI, there are a lot of questions to consider

It is essential to examine the ethical risks and challenges associated with the development and implementation of AI. What you need to consider will differ depending on your own specific business context.

Most organisations will not be developing AI models themselves

Such organisations will need to think about what roles can be performed by AI, what decisions can be delegated to it, and how they are going to support their staff through the transformation.

Others will be developers and will face governance questions

Whose values shape the development of these tools, and how might they be used? These are questions that those developing their own products will need to ask, as will those tasked with regulating AI.

In an era where artificial intelligence is rapidly transforming industries and societies, it is essential to examine the ethical risks and challenges associated with its development and implementation. This exploration goes beyond technical capabilities; it is rooted in questions about how you can best implement AI in your organisation while remaining mindful of its broader societal impact.

By posing these questions, we aim to illuminate the potential consequences and responsibilities that come with harnessing AI technology. In doing so, we hope to encourage thoughtfully crafted policies and practices that ensure AI serves the greater good, upholding ethical standards while driving innovation and progress.

When implementing AI-based solutions…

Most organisations will not be developing AI models themselves. Instead, they will choose to implement tools built by technology companies—tools like ChatGPT, Google Gemini, Microsoft Copilot, and others. Here are some questions to consider if your organisation is in this group.

Can AI replace humans?

No doubt many organisations are already cutting costs through intelligent automation of workflows and tasks, looking to reduce or even remove humans from the equation. However, they must think carefully about the intangible value that people bring to an organisation. The ideal scenarios will be ones where AI and humans are set up to work hand in hand. There is a fundamental humanity – a shared sense of purpose and values, the ability to be empathetic (not just to mimic empathy), the agency to make different decisions based on situational context and critical thinking – that people bring and that intelligent automation cannot fully replace. Have you fully considered the implications of reducing or losing this human element?

What decisions can you hand over to AI?

Consider three scenarios in which AI tools can feasibly be applied today with existing capability.

  1. A predictive intelligence system that tells you the most common orders to prepare during rush hour, adjusted for day of week and time of year.
  2. A financial tool trained on historic datasets that identifies a person’s lending risk to recommend an interest rate.
  3. A facial recognition technology that identifies whether an individual is the perpetrator of a crime.

In the first example, fully automating the system and handing over the decision-making to AI is fine. It may even be beneficial, improving efficiency and worker wellbeing. In the second, you want a human critically reviewing all outputs and decisions, and accounting for the model’s limitations. If the model, trained on biased datasets, identifies an individual as posing a higher lending risk and offers them a higher interest rate, it embeds and exacerbates that bias. In the third, you want a human leading the analysis and the decision-making, with the model serving only as one source of input. Consider the consequence of the tool inaccurately identifying an individual as the perpetrator of a violent crime due to insufficient training data for a particular racial background – it would be catastrophic for them, their family and community. How are you thinking about the decision-making that you delegate to AI tools, and do you have a consistent framework to guide it?
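To make the idea of a consistent framework concrete, here is a minimal, purely hypothetical sketch of how the three scenarios above could be mapped to levels of human oversight. The tier names, thresholds, and example mappings are illustrative assumptions, not a prescribed standard.

```python
# A hypothetical sketch of a decision-delegation framework: map the impact
# of an incorrect AI output to the level of human oversight required.
# Tier names and example mappings are illustrative assumptions only.
from enum import Enum


class Oversight(Enum):
    FULL_AUTOMATION = "AI decides; humans monitor aggregate performance"
    HUMAN_REVIEW = "AI recommends; a human reviews every output"
    HUMAN_LED = "Humans decide; AI is only one source of input"


def required_oversight(impact_of_error: str) -> Oversight:
    """Return the oversight level for a given impact of an incorrect output."""
    if impact_of_error == "low":       # e.g. a sub-optimal rush-hour order forecast
        return Oversight.FULL_AUTOMATION
    if impact_of_error == "moderate":  # e.g. an unfairly high interest rate
        return Oversight.HUMAN_REVIEW
    return Oversight.HUMAN_LED         # e.g. a wrongful identification as a perpetrator


# Rough mapping of the three scenarios discussed above:
print(required_oversight("low"))       # 1. predictive ordering system
print(required_oversight("moderate"))  # 2. lending-risk recommendation
print(required_oversight("high"))      # 3. facial recognition in policing
```

However simple or sophisticated the framework, the point is that the delegation decision is made deliberately and consistently, rather than case by case.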

How are you supporting your people through your AI transition?

Regardless of the extent to which AI is being integrated into your business model, your staff need to understand its potential, applications, and governance. They will have varying attitudes towards new technologies and their possible uses. Some may be deeply sceptical, while others may already be using shadow AI – AI tools used without authorisation or approval. Effective implementation and change management require you to establish the right systems, forums, tools, information, and resources to support and guide your people, while offering a safe space for them to experiment and learn. In addition to exploring potential applications of the technology itself, have you established the right scaffolding to support your staff through what might be a significant change to their role?

Do your customers and stakeholders understand how AI is being applied in decisions and interactions that impact them?

People have a range of reactions towards technology, particularly something as new and mysterious as AI. They also have a right to know when a technology they may not trust is influencing their experiences. One of the concerning things about AI is that it is already a part of many of our services and systems, and it is not always clear when and how it is being used. How can you improve clarity and transparency about where and how you are using AI to build trust?

When developing (and regulating) AI models…

There is a different set of questions to ask when it comes to the development or regulation of AI models themselves. These questions are also useful for all organisations to consider when thinking about AI governance.

Who develops these tools?

Increasingly, the vast resources required to build AI models – particularly the compute required for the machine learning that underpins the intelligence itself – are concentrated in a handful of tech companies. They have the knowledge, the data centres, the resources (people and capital), and the ambition. These companies are currently largely unregulated and are racing to develop increasingly intelligent AI systems, which some predict will surpass human intelligence in the near future. What incentives drive these companies, and how might they be regulated effectively?

Whose values shape the development of these tools and how?

AI “understands” the world through the data it is fed. Historic datasets are biased and incomplete. Internet data, unfiltered, can represent some of the worst impulses of humanity. AI cannot fundamentally “understand” the concepts and values that humans intrinsically prioritise in decision-making: ideas about fairness, dignity, freedom, agency, and the value of hard work are as foreign to it as its underlying operations are to most of us. How do the creators and developers of AI tools understand and embed these values (albeit imperfectly, if at all) in their creations? How might they develop models that are aligned with human values? How might your in-house AI models, should you go down that path, be made to align with such values?

How might these tools actually be used?

If we have learned anything from history and human behaviour, it is that inventions and innovations have applications and uses well beyond what their creators intended them for. We can’t stop or change this – it is part of what makes humans creative and innovative – but we can be mindful of it when developing new technologies. Even if AI’s creators have beneficial use cases in mind, how might they carefully consider and address potential nefarious applications?

How should we think about the intelligence of these systems?

AI models are essentially vast neural networks: brain-like creations whose operations run through vast data centres. They are ‘trained’ on data, learning and ‘growing’ in their abilities, and the models being built are becoming increasingly ‘intelligent’. When do we need to start considering the possibility that they may be, or become, entities with their own rights? How might we think about the rights that AI might have in the future, relative to human rights, animal rights, and other rights? This is obviously a potentially controversial topic, as most current debate and discussion focuses on human rights.

Why it’s important to ask these questions

We are at a pivotal point in human history. AI technology is rapidly making its way into every digital product, service and system that we interact with and that collectively shape our lives. It does not always do so in visible ways, and the models are rapidly becoming more intelligent, with some predicting they will surpass human intelligence. The models and tools we see today are only the beginning of the vast potential that this technology holds. How we design, implement and embed AI models and tools today will define our collective realities into the future. The stakes are high. The consequences of not thoughtfully exploring these questions range from catastrophic outcomes, such as powerful models controlled by and serving the goals of a small number of elites, to the quieter harm of being surrounded by products, services and systems powered by misaligned AI that deliver subpar outcomes at odds with human values and goals.

Get in touch to discuss how your organisation can embrace the potential of AI.

Connect with Trisha Santhanam on LinkedIn.

A version of this piece was originally published on Medium on 13 May 2025.