We know that common sense is not that common. Yet it was none other than the French philosopher Voltaire who all but coined the term when he remarked in his Dictionnaire Philosophique (1764) that common sense is very rare.
In the world of artificial intelligence (AI), common sense and even simple language remain major – and perhaps even unachievable – milestones. The bitter reality is that AI code and algorithms do not have common sense. Worse, AI also has a hard time understanding spoken language: Siri could not even understand my name in a recent self-test.
Worse still, algorithms struggle badly when trying to interpret written text – and this is without even mentioning James Joyce’s Ulysses. AI would battle, for example, to understand the difference between a sign that reads “Free Horse Manure” and a sign that says “Free Nelson Mandela”. We humans get it – AI doesn’t.
Things get even worse for AI when our very own and very human thinking remains a rather puzzling series of allegories, metaphors, irony, cynicism, guesswork, inferences, speculations, and so on – something we do every day, often without even noticing it. With that, two rather legitimate questions about AI’s machine intelligence open up:
1. How can we possibly hope to program human guesswork and common sense into a computer?
2. How can inferences, speculations, and the like be put into an algorithm?
Of particular concern for AI are inferences – scientific and otherwise. Commonly, we think of an inference as a step in reasoning, the move from a premise to a logical consequence. The word itself derives from the Latin inferre, to bring or carry in.
Inference is traditionally divided, following Aristotle, into deduction and induction. That sounds complicated, but it is something we do all the time. Put rather simply, a deduction moves from an idea to an observation, while an induction moves from an observation to an idea.
Yet for AI this is a monumental task. One might even argue that, unless an AI system can engage in the process of inferring, it does not deserve to be called intelligent. Here is an example of a very human inference:
If Johnny Depp arrives on the movie set late, the movie’s director might infer that he is not taking things seriously – when, in fact, the traffic was backed up because of an accident.
At a slightly more sophisticated level is probabilistic inference, which draws conclusions from statistical data. For all three – common-sense, scientific, and probabilistic inference – the ability to determine which bits of knowledge are relevant and which are not is a computational skill that AI still finds hard to manage.
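To make the idea of probabilistic inference a little more tangible, here is a minimal sketch in Python. The numbers are purely illustrative assumptions – a made-up prior and made-up likelihoods, not data from any real system – and the example simply shows the mechanical step of drawing a conclusion from statistical input via Bayes’ rule.

```python
# A minimal sketch of probabilistic inference using Bayes' rule.
# All numbers are invented for illustration only.

p_rain = 0.2                    # prior: probability that it rained overnight
p_wet_given_rain = 0.95         # likelihood: streets are wet if it rained
p_wet_given_no_rain = 0.10      # streets can be wet for other reasons

# Total probability of observing wet streets
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Posterior: probability that it rained, given that the streets are wet
p_rain_given_wet = (p_wet_given_rain * p_rain) / p_wet

print(f"P(rain | wet streets) = {p_rain_given_wet:.2f}")   # roughly 0.70
```

Notice that the genuinely hard part pointed to above – deciding which factors are relevant enough to appear in the model at all – was done by the human who invented the three numbers at the top, not by the program.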
In the scientific understanding of inference, researchers use inferences to frame research hypotheses. A positivistic understanding of science then suggests testing such hypotheses. Yet virtually no hypothesis is derived mechanically, nor does a hypothesis simply pop into a scientist’s head. Instead, there is an inextricable link between an inference and a hypothesis.
This is something almost impossible for AI to do, as hypotheses are genuine acts of the human mind. They also remain central to all science. Yet their genesis is all too often not explainable by pointing to data, factual evidence, or anything obvious. Indeed, the development of scientific hypotheses might never be programmable.
Even more problematic for AI is the fact that, in some cases, a new hypothesis can only be formed by deliberately ignoring virtually all previous knowledge or by significantly re-conceptualizing it.
Somewhat akin to Kant’s a priori knowledge, Copernicus, for example, needed to reject the received geocentric model to achieve what he set out to achieve. Copernicus had to infer a radically new model of the solar system. Yet his “leap” launched what we today call the Scientific Revolution.
Unlike AI’s coded algorithms, real detective work, scientific discovery, innovation, and common sense are all workings of the human mind. Yet, these are workings that AI programmers – particularly those in search of generally intelligent machines – must somehow account for and program into their AI algorithms.
In sharp contrast to this, current AI tends to be somewhat locked into a system that thinks like this:
1. if it’s raining, the streets are wet. It is, in fact, raining. Therefore, the streets are wet. The conclusion is the inference the AI draws, and it is perfectly logical;
2. if it’s raining, then pigs will fly. It’s raining. Therefore, pigs will fly. A silly argument but, as far as current AI is concerned, a perfectly valid and perfectly logical one. It again uses reasoning from a hypothetical proposition – mechanically so, as the sketch below illustrates.
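Here is a minimal sketch in Python of a toy rule-based deducer (the rules and facts are invented for illustration). It applies modus ponens blindly and derives the wet streets and the flying pigs with exactly the same confidence, because formal validity is the only thing it checks.

```python
# A toy forward-chaining deducer: it applies "if X then Y" rules mechanically.
# The rules and the starting fact are illustrative only.

rules = [
    ("it is raining", "the streets are wet"),   # sensible rule
    ("it is raining", "pigs will fly"),         # absurd rule, but formally just as valid
]

facts = {"it is raining"}

# Keep applying modus ponens until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# e.g. {'it is raining', 'the streets are wet', 'pigs will fly'}
```

Nothing in this machinery tells the program that one of its premises is nonsense; that judgment – common sense – has to come from outside.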
Perhaps one of the most inconvenient truths for AI is that deductions never really add knowledge. Yet, simple – or perhaps “simplistic” – logic is perfect for AI. This is particularly true when deductions merely confirm what a rational person would conclude from given premises:
1. if we know that all persons are mortal – they die – and that such and such is an actual person, we already know that such and such a person will die;
2. if a rooster crows, the sun is coming up. True enough. But if we were asked why the sun rose, we would be unlikely to dish up the rooster. Unlike AI, we would not attribute much intelligence to an answer like that.
Worse, AI doesn’t even “understand” what was said above. Even more problematic, AI doesn’t know what is relevant, what is irrelevant, and what is comical. In short, observing the sun rise every morning isn’t really that relevant to, say, a turkey’s life expectancy.
Our confidence comes from what is known as the habit of association, as the philosopher David Hume observed. From all this, we arrive at what might be called AI’s turkey fallacy:
A turkey found that, on his first morning at his new farm, he was fed at 9am – on warm days and cold days, on rainy days and dry days. Each day, the turkey added another observation to his list, until he eventually concluded: I am always fed at 9am. Yet one day his conclusion was shown to be false – on Christmas Eve. Instead of being fed, our turkey became the farmer’s Christmas turkey.
This is how many AI programs are trained. They are trained on observations drawn from huge sets of data, from which they reach inductive inferences. From these, premises are formed – and, at times, a false turkey conclusion.
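As a rough sketch of that training pattern, consider the following Python fragment. Both the observations and the naive “learner” are invented for illustration; the point is that a generalization built only from a fixed record of the past says nothing about a future that breaks the pattern.

```python
# A naive inductive "learner": it generalizes from past observations
# and has no way of knowing that the world can change. Data is invented.

observations = [(f"day {d}", "fed at 9am") for d in range(1, 300)]

# Induction: every recorded morning ended in feeding, so generalize.
always_fed = all(outcome == "fed at 9am" for _, outcome in observations)
learned_rule = "I am always fed at 9am" if always_fed else "no rule learned"

print(learned_rule)   # -> "I am always fed at 9am"

# Christmas Eve never appears in the training data, so the learned rule
# predicts feeding as usual -- and is fatally wrong.
```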
Unlike AI’s algorithms and self-learning machines, the real world – including our turkey’s farm – consists of very dynamic environments. These are constantly changing in both predictable (good for AI) and unpredictable ways (bad for AI – and turkeys). Worse, AI programmers cannot enclose our natural environment in a closed system of algorithmic rules.
For the self-driving car – so far still largely nonexistent – this means, for example, that a busy city street is full of exceptions. This is just one reason we do not have robot-cars wandering around Rome, Manhattan, Mumbai, and Sydney.
As much as AI engineers might wish it to be the case, London, for example, refuses to behave like Atari, Go, or chess – nor is it merely a scaled-up version of such games. London cannot be reduced to the kind of enclosed system that AI programmers like so much.
Perhaps even more problematic is the real danger that AI engineers will become trapped in their machine thinking. Yet simply analyzing the past through algorithms is, at times, not much help – particularly when making decisions that reach into the future, as the turkey scenario has shown. This can be just one of the many reasons why super-intelligence might still generate rather super-stupid outcomes.
In other words, it remains rather imperative to know how not to become a turkey. For one, the limitations of machine learning are often already set by the dataset it is given during training. In AI training, these datasets are usually fixed. Yet the real world generates data continuously – 24 hours a day, seven days a week, year in and year out.
This means that any given dataset is, most likely, a very tiny “time slice”, representing no more than partial evidence of a carved-up – not real – world. Worse, machine learning has no actual “understanding” of the real world – carved up or otherwise.
This fact alone is hugely important for deep learning and for artificial general intelligence (AGI). It also raises some very disturbing questions, such as: how, when, and to what extent should people trust AI systems that do not even understand what they are made to analyze?
In the end, even a well-trained AI system using the most sophisticated machine learning imaginable can only ever predict a rather limited range of likely outcomes. At best, it can pretend to “understand” a problem.
But all machine learning might stop rather abruptly when a sudden and unexpected change or event renders the AI simulation – and by inference, the self-driving car – essentially worthless.
Strictly speaking, the very idée fixe of machine learning is something of a misnomer, simply because AI systems do not learn in the sense that we do. Unlike machine-learning systems, human beings gain an increasingly deeper and ever more robust appreciation of what it means to be in the world.
Yet the simulative character of machine learning helps explain why it remains ostensibly stuck on narrowly defined applications. Worse, it has shown – for decades – little or no progress toward artificial general intelligence, despite the recent hype.
In other words, learning involves escaping narrowly defined AI performances. It means gaining a more general understanding of things in the world – something AI, however sophisticated, “still” struggles with.
This content originally appeared on CounterPunch.org and was authored by Thomas Klikauer.