Artificial Intelligence (AI) is a powerful and influential technology, but as with any innovation, it has downsides worth considering. AI relies on statistical techniques, and those techniques carry inherent limitations in how precisely they can model the world. Some of these challenges can be mitigated with ingenuity and modern tooling; others are likely to persist.
Understanding these concerns is essential to grasping the limitations of AI systems. Like many others, I am optimistic about AI's vast potential, but we must also take honest stock of where the technology stands today in order to make well-informed decisions and implement effective strategies.
Understanding the Chinese Room
John Searle, a prominent figure in the philosophy of AI, devised the Chinese Room argument. Originally conceived as a response to the Turing Test, it has since become an influential argument against the idea that machines genuinely understand what they process.
In this thought experiment, a participant who speaks no Mandarin is placed in a room stocked with a vast collection of books written in the language.
The room has two slots. Through the first slot comes a card inscribed with a few Mandarin characters, along with a pen. Bored, the occupant has been leafing through the books and recognises the characters on the card.
Although the characters mean nothing to the occupant, they match a recurring pattern that appears alongside another set of characters in the books. Acting on this hunch, the occupant copies the corresponding characters onto the card and pushes it through the second slot.
To their delight, the second slot opens again and a meal appears: matching the symbols correctly has earned a reward.
Searle then asks: would the person writing the cards, observing only that the occupant responds correctly to the characters, conclude that the occupant is proficient in Mandarin, even though the occupant comprehends nothing of what is happening?
Searle's point is that computers can be programmed to recognise patterns and make decisions based on them without any genuine understanding of the symbols involved, just as the person in the room responds to characters without comprehending them.
Searle's argument implies that although computers can detect the underlying relationship between two variables, they cannot construct a theoretical explanation for the detected association. This inherent inability of Machine Learning is a limitation worth keeping in mind.
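The room can be caricatured in a few lines of code. The sketch below is purely illustrative: the "rule book" is an invented mapping, and the function produces perfectly "correct" replies while understanding nothing about the symbols it shuffles.

```python
# A minimal sketch of Searle's point: a lookup table produces
# correct-looking responses to symbols it does not understand.
# The rule book here is entirely hypothetical.
RULE_BOOK = {
    "你好": "你好吗",   # when this pattern arrives, reply with that one
    "谢谢": "不客气",
}

def room_occupant(card: str) -> str:
    """Match the incoming card against the books and copy out the
    corresponding characters -- no comprehension involved."""
    return RULE_BOOK.get(card, "???")

print(room_occupant("你好"))
```

From outside the room, the replies look fluent; inside, there is only pattern matching.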
Staying Within the Data
Algorithms make their most reliable predictions on data that resembles the data they were trained on. When the input differs substantially from the training set, predictions become less trustworthy.
Every statistical model rests on correlations between sets of data, but those correlations do not hold everywhere.
For instance, the relationship between height and weight weakens as weight increases: people can gain weight without growing any taller.
If I build a model that estimates a person's height from their weight, it should give a reasonable estimate as long as both variables fall within typical ranges. As the weights become more extreme, however, whether underweight or overweight, prediction errors become increasingly likely.
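The height-from-weight example can be sketched with a simple least-squares fit. The data below is synthetic and the coefficients are invented for illustration: within the training range the estimate is plausible, but extrapolating far beyond it produces a height no person has.

```python
import random

random.seed(0)

# Hypothetical training data: weights (kg) from a typical range,
# with heights (cm) loosely tied to weight plus noise.
weights = [random.uniform(55, 95) for _ in range(200)]
heights = [130 + 0.5 * w + random.gauss(0, 4) for w in weights]

# Ordinary least squares by hand: height ≈ a * weight + b
n = len(weights)
mean_w = sum(weights) / n
mean_h = sum(heights) / n
a = sum((w - mean_w) * (h - mean_h) for w, h in zip(weights, heights)) \
    / sum((w - mean_w) ** 2 for w in weights)
b = mean_h - a * mean_w

def predict_height(w: float) -> float:
    return a * w + b

# Within the training range the estimate is plausible (around 165 cm)...
print(round(predict_height(70)))
# ...but at an extreme weight the linear trend just keeps climbing,
# predicting a height well over two metres.
print(round(predict_height(250)))
```

The model never "knows" that the correlation it learned stops applying; it simply extends the line.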
A dataset whose statistical shape resembles a normal distribution is described as "normal", meaning that most data points cluster close to the mean. Algorithms tend to perform better when the underlying variables follow such predictable distributions, which is why exceptional data points can lead to unexpected outcomes.
Individuals versus Populations
Predicting individual human behaviour is notoriously difficult. As noted above, algorithms need training data to make predictions, and while these techniques succeed at predicting general behavioural trends, applying them to specific individuals is far harder.
This limitation is not exclusive to Machine Learning (ML); it applies to all statistical methods. For instance, if we consider the global population as a whole, focus on long-term trends and worldwide averages, and define strength in a measurable way (such as weightlifting performance), we can infer that males are, on average, physically stronger than females.
At the individual level, the picture is more complicated. A female mixed martial artist who trains regularly would have a clear physical advantage over a male athlete with no history of weightlifting. Does this undermine the value of AI?
Not at all. AI-driven strategic decisions are usually broad in nature. Amazon's recommendation algorithm is not making a judgement about any individual; it has simply detected a correlation between certain behaviours and a preference for certain product types.
Personalisation makes individuals A and B more likely to buy the marketed product, though there is no guarantee either of them will. Across many customers, however, the approach significantly improves the company's odds of success.
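The gap between aggregate trends and individual cases can be shown numerically. The strength scores below are entirely made up: the group means differ clearly, yet in a large fraction of head-to-head pairings the individual from the "weaker" group comes out ahead.

```python
import random

random.seed(42)

# Hypothetical strength scores: group means differ,
# but the two distributions overlap heavily.
group_a = [random.gauss(100, 15) for _ in range(10_000)]
group_b = [random.gauss(90, 15) for _ in range(10_000)]

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)
print(mean_a > mean_b)  # the aggregate trend holds

# Yet when individuals are paired at random, the member of the
# lower-mean group wins roughly a third of the time.
upsets = sum(b > a for a, b in zip(group_a, group_b)) / len(group_a)
print(round(upsets, 2))
```

A statement that is reliably true of populations can still be wrong about any given pair of people, which is exactly why broad AI-driven decisions remain useful while individual-level predictions stay risky.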
Susceptibility to Extreme Cases
Outliers are another crucial consideration. Atypical or extreme data can undermine the effectiveness of predictive models, because models are generally trained on ordinary data.
Training AI on outliers creates the opposite problem: if the data reflects a short-term anomaly, the algorithm's reliability becomes questionable once circumstances revert to their conventional state.
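A toy illustration of how a single extreme value distorts an estimate; the shipment counts are hypothetical. One anomalous day roughly doubles the mean, while a robust statistic such as the median barely moves.

```python
import statistics

# Hypothetical daily shipment counts: stable, until one anomalous spike.
normal_days = [102, 98, 100, 101, 99, 103, 97]
with_outlier = normal_days + [1000]  # a single extreme event

print(statistics.mean(normal_days))     # 100: matches the usual regime
print(statistics.mean(with_outlier))    # 212.5: one outlier drags the estimate
print(statistics.median(with_outlier))  # 100.5: a robust alternative
```

The same effect, scaled up, is what happens when a model is trained on crisis-era data: the anomaly dominates the estimate long after conditions normalise.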
The COVID-19 pandemic presented an unprecedented challenge to global supply chains. In the early stages of the crisis, no relevant historical data existed, so no model could be trained to anticipate its consequences accurately.
Collecting current data and retraining models on information from 2023 and 2024 is the most practical way forward. Even so, the accuracy of models built on this data is likely to decline as international borders reopen and supply chains re-establish themselves.
It is important to recognise that AI needs time, and data, to capture the intricacies of a new situation before it can offer informed judgements. That does not mean collecting and applying this data is pointless.
A Futuristic Perspective
My aim is not to criticise Artificial Intelligence; on the contrary, I believe we are on the threshold of a period of exceptional growth in the field. AI research is progressing rapidly, and I am hopeful this will prove a defining era.
It is worth remembering that artificial intelligence is still in its early days and cannot yet wholly replace human creativity. When computers and humans collaborate, remarkable outcomes are possible. Both are indispensable: computers can process colossal amounts of data, while humans can translate the findings into practical applications.