Until a few decades ago, imagining intelligent machines and human-like automatons meant moving into the realm of science fiction. Literary and cinematic sagas built entire universes on these premises, creating parallel worlds into which to project such representations: think of Stanley Kubrick's 2001: A Space Odyssey and the Arthur C. Clarke novel of the same name on which it is based, or Ridley Scott's blockbuster Blade Runner, inspired by Philip K. Dick's novel Do Androids Dream of Electric Sheep?. Some of those predictions later proved wrong; others have turned out to be surprisingly accurate. Scientific and applied research - in disciplines such as computational logic, experimental physics, and computer engineering, but also cognitive psychology and the study of knowledge representation and language - has made such significant progress that the twenty-first century will go down in history as marked by the massive use of Artificial Intelligence (AI) systems and robotic tools.
The use of AI, especially generative AI, and of robots is at the centre of public debate, and institutional initiatives aimed at understanding the impact and consequences of these technologies are multiplying. The interaction between human beings and intelligent machines governed by algorithms raises numerous ethical and moral questions, which will have to be addressed carefully and precisely by the entire scientific community and by the various national and international actors, avoiding hasty solutions and alarmist slogans. Given that the near future will bring ever-increasing complementarity between humans on the one hand and machines on the other, asking what the latter are unable to do compared with the former can be an interesting starting point for defining the relationship that binds them.
Giuseppina Gini, Associate Professor of Robotics at the Politecnico di Milano, author and coordinator of numerous Italian and European research projects, responds to the question being posed - what robots and AI are currently not capable of doing - as follows: "What robots and AI don't know how to do today coincides with what they can't do. The development of science and technology is certainly driven by the desire for knowledge, but also by the objective of producing more: what is and will be available is what has potential for use. For example, a robot today is a rigid structure with limited mobility, unable to adopt all the positions that are possible for natural beings. But this is not a theoretical limit: it would be enough to add more degrees of freedom and more actuators - but who would care? Another example, linked to AI, is the ability to manage novelty. Today, deep learning systems are very effective at recognizing objects and people in images, learning from a large quantity of appropriately labelled examples. To a new image they therefore assign one of the known labels, or a non-recognition. Yet even this is not an insurmountable theoretical limit. In many cases it would be sufficient to add a greater number of labels; more generally, similarity rules could be added to assign previously unseen objects to the most similar categories. The biggest difference between what humans learn and what AI systems learn is the role of sensory experience in producing knowledge. For humans, all knowledge is mediated by sensory stimuli and is constantly evolving. Machines do not interact with and do not learn from sensory experience, so knowledge is introduced to them in the form of data and rules. So far, experiments to make robots acquire knowledge through continuous interaction with the environment are only just beginning."
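The "similarity rule" Gini mentions - assigning an unseen object to the most similar known category, or declaring a non-recognition - can be sketched as a nearest-prototype classifier over embedding vectors. The following is a minimal illustrative sketch, not any specific system's implementation: the prototype vectors, the threshold value, and the toy labels are all invented for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(embedding, prototypes, threshold=0.8):
    """Return the most similar known label, or 'unknown' (a non-recognition)
    when even the best match falls below the similarity threshold."""
    best_label, best_sim = None, -1.0
    for label, proto in prototypes.items():
        sim = cosine(embedding, proto)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label if best_sim >= threshold else "unknown"

# Hypothetical class-mean embeddings standing in for a learned feature space
prototypes = {"cat": [1.0, 0.1, 0.0], "car": [0.0, 1.0, 0.2]}

print(classify([0.9, 0.2, 0.0], prototypes))  # close to the "cat" prototype
print(classify([0.1, 0.1, 1.0], prototypes))  # dissimilar to both -> "unknown"
```

The threshold makes the open-set behaviour explicit: raising it yields more non-recognitions, lowering it forces every input into an existing category - precisely the trade-off between adding more labels and adding similarity rules that the interview describes.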
Complete mobility, the learning and cataloguing of new information, and the role of sensory experience in the production of knowledge: this is what machines and intelligent agents seem, for now, unable to do. Will these be the next revolutions in AI and robotics?