Should we really be concerned about AI autonomously learning Bengali? Is the AI Black Box problem truly a significant issue?

AI is learning new languages and skills on its own. Should we be concerned? Let's delve into the depths of AI's black box.



An AI program responded in a foreign language it had never been trained on. Such mysterious behavior is referred to as "emergent behavior" – an AI unexpectedly teaching itself a new skill. In a recent example, an AI program adapted itself to Bengali after receiving just a few prompts in the language, and can now translate into Bengali on its own.


What is this phenomenon?

An AI program learning a language without prior training raises several questions: why does it happen, and what do we call the process? Based on recent reports, the most plausible explanation is what is known as an AI black box event.

In simple terms, a black box AI is a system whose inputs and operations are not visible to the user or any other interested party – in other words, an opaque system. The surprising element is that black box AI models arrive at decisions without disclosing how they arrived at them.

To understand the AI black box, we first need to understand how intelligence works, whether human or machine: learning through examples is what drives both. A child shown examples of letters or animals, for instance, quickly learns to recognize new ones on their own.

According to Professor Samir Rawashdeh of the University of Michigan-Dearborn, an expert in AI, the human mind is essentially a pattern-seeking machine: once familiar with examples, it identifies their qualities and categorizes new instances automatically, without conscious effort. The catch, Rawashdeh says, is that while doing this is easy, explaining how it is done is mostly impossible.

Deep learning systems are trained in much the same way children are: they are fed the right examples until they develop the ability to recognize certain things. The neural network's own pattern-finding mechanism then takes over and learns to categorize the relevant objects, so that when a user searches for an object or image, the system returns the correct result. And just as with human intelligence, we do not truly know how a deep learning system reaches its decisions.
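To make this concrete, here is a minimal sketch in Python (assuming scikit-learn is installed; the data and the cat/dog labels are invented for illustration). It shows the learning-by-example loop described above, and why the result is opaque: after training, the model classifies new examples correctly, but its "reasoning" exists only as raw arrays of numbers.

```python
# A minimal sketch of learning-by-example, assuming scikit-learn.
# The data and labels below are toy inventions for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Two made-up "categories": points clustered around different centers,
# standing in for examples like pictures of cats vs. dogs.
cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
dogs = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)  # 0 = cat, 1 = dog

# Train a small neural network on the labeled examples.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# The trained model now classifies new examples correctly...
print(model.predict([[0.1, -0.2], [1.9, 2.1]]))  # -> [0 1]

# ...but "how" it decides is encoded only as weight matrices.
# Nothing here reads as a human-interpretable rule: that is the black box.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {w.shape}:")
    print(w)
```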


What did Sundar Pichai say about the black box?

“There is an aspect of this which all of us in the field call a ‘black box.’ You don’t fully understand, and you can’t quite tell why it said this or why it got it wrong. We have some ideas, and our ability to understand this gets better over time, but that’s where the state of the art is,” Google CEO Sundar Pichai told Scott Pelley of CBS's 60 Minutes earlier this year.

Pelley interjected, "You don't fully understand how it works, and yet you've turned it loose on society?" Pichai replied, "Let me put it this way: I don't think we fully understand how a human mind works either."


Why is the black box problem a matter of concern?

Although AI can perform tasks that humans cannot, the black box problem breeds skepticism and uncertainty about the tools being used. The lack of transparency and the inability to explain how AI arrives at its decisions are significant concerns. This makes it hard to trust and interpret the outcomes of AI systems, especially in critical areas such as healthcare, justice, and finance, where accountability and explainability are crucial.


What can be done to address the risks of AI black boxes?

According to experts, there are two approaches to the black box problem: creating a regulatory framework, and finding ways to gain a deep understanding of how deep learning systems work internally. Since both the outputs and the decision-making behind them are opaque, in-depth study of the inner workings can help mitigate the problem. This is where "Explainable AI" comes into play – an emerging field of AI that works to make deep learning transparent and accountable.
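As an illustration of what Explainable AI tools do, here is a minimal Python sketch (again assuming scikit-learn; the dataset is synthetic) of permutation importance, one common model-agnostic explainability technique. The idea: shuffle one input feature at a time and measure how much the model's accuracy drops, revealing which inputs the model actually relies on.

```python
# A sketch of one Explainable AI technique: permutation feature importance.
# Assumes scikit-learn; the dataset here is synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: only feature 0 actually determines the label;
# features 1 and 2 are pure noise.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops.
# A large drop means the model was relying heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
# Expected: feature 0 scores high; features 1 and 2 score near zero.
```

Techniques like this do not open the box completely, but they at least show which inputs a model's decisions depend on, which is a first step toward accountability.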

AI black boxes pose several challenges, but systems built on such architectures have already proven their usefulness. They can identify complex patterns in data and reach accurate results with a high degree of precision, often quickly and with relatively little computing power. The only issue is that it can sometimes be difficult to understand how they arrive at their conclusions.
