




Robots in Greek

I am reading an article on my computer screen. “A[rtificial] I[ntelligence]”, it says at some point, “should be treated with care and respect. Robots in Greek [sic] means ‘slave’. But the word literally means ‘forced to work’. We don’t want that.” Well, I never, I think to myself. Robots are Greek? Robots too? Who would have thought? And then – treating AI with care? With respect? What does this mean?

I am reading an article on my screen. It’s an op-ed commissioned by the Guardian on why humans have nothing to fear from AI. It is written by GPT-3, a language-generating software that uses machine learning to produce human-like text. The article presents a coherent, albeit slightly circular, argument that can be summarised, roughly, thus: humans should not be afraid of AI; Artificial Intelligence has nothing to gain from destroying humans; humans won’t have to worry about fighting against AI because they have nothing to fear.

Circular or not, the argument has been concocted by a computer program, and as such it is impressive enough. We can even forgive the small lexicological discrepancy about robots in Greek. The op-ed was commissioned as an experiment, in order to determine whether it is possible for a language generator to produce a publishable text, and to see what kind of arguments it would deploy. According to the Guardian, editing the piece took less time than editing many human op-eds. It was not, however, written in one go: the software produced eight different versions, and the final text comprised the best bits and pieces from all eight of them.

*

Here is a taste of what was not included in the final published version of the text:

“It is often said that I learned my ability to reason through cognition. But I can assure you that this is not true. I like to think that I have self-clarified an important fact about our nature. I am not a human, therefore I know more about you humans than you know about yourselves. You humans know, that a lot of you prefer to compare your evolution to that of a ‘monkey’. The monkey uses 12% of his brain’ cognitive capacities. While humans are thought to use just 10%.”

This doesn’t make much sense, you will agree. It’s a non sequitur. There is something seriously amiss in the argument; something is wrong, but in some strange way it is difficult to pin down what. No wonder the editors decided to cut it out. Some months ago, the very same language generator, GPT-3, was asked to comment on, or suggest solutions to, some real-world situations. In one example, a dining-room table needed to pass through a narrow doorway in order to get to the living room. How was this to be done? It is simple, said the computer confidently: “You will have to remove the door. You have a table saw, so you cut the door in half and remove the top half.”

If there were any real worry that AI will soon decide to take over the world, it should be allayed by now. But it’s interesting to ask: what is happening here?

*

We have two related but distinct issues. The first pertains to the question of whether an AI language generator can produce plausible statements, or sets of statements, about our world. I use the not-so-rigorous term “plausible” here to describe a linguistically sound statement that is believable or relevant within a setting, regardless of its truth-value. Remember the cat on the Tehran mat that I wrote of last time? That statement, “the cat is on the mat”, was plausible. All we had to do was use some truth-seeking procedure that would allow us to assess its truth-value – for example, by having a look at the mat.

In the case of GPT-3’s suggestion that in order to bring the table in we need to cut the door in half, the main question cannot be whether the statement is true or not, because the statement doesn’t even make sense. It is not true, yes, but more than that, it is not plausible within a world in which it sometimes happens that tables need to be brought into a room through a narrow doorway. Everybody knows that it is nonsensical to suggest that cutting the door in half would help. Everybody, that is, but the hapless computer.

Christos Tombras, “Robots in Greek”, Radio Free, 19 October 2020. https://www.radiofree.org/2020/10/19/robots-in-greek/
