A.I.
From the birth of artificial intelligence to the Human Brain Project

In a scenario in which institutions and individuals must fight ever harder to grab the (few) available research funds, the recent news that the European Union has included the Human Brain Project [i] among its FET “flagship projects” for the next decade has caused quite a stir. Surprise also greeted the accompanying allocation of money: one billion euros over ten years to create a complete simulation of the human brain through a network of supercomputers.

The project has been described as an unrealistic goal, or even an ill-posed problem [ii], but it and other similar projects do not come from nowhere [iii]: they have about fifty years of history behind them, collected in the discipline known as Artificial Intelligence (AI).

The beginning of AI is commonly and almost unanimously set in 1956, the year in which the American mathematician John McCarthy organized a workshop at Dartmouth College in New Hampshire (USA). It was attended by scientists such as Claude Shannon and Marvin Minsky, both from the world of engineering and the logical-mathematical sciences, but also by Herbert Simon, an economist and future Nobel laureate, by computer scientists such as Allen Newell, and by others. The agenda of the event was organized around a “simple” premise: the conjecture that every aspect of learning, or any other feature of intelligence, can be described so precisely that a machine can be made to simulate it.

The years from 1956 to the mid-60s (conventionally up to 1966, when the federal government of the United States cut funding for AI, judging the results produced until then largely disappointing [iv]) can be regarded as the early period. Attention was focused on software such as parsers and finite-state automata for handling strings (i.e. sequences of symbols). The main results concerned the automatic solution of mathematical problems; it is worth mentioning the “Logic Theorist” created by Herbert Simon and Allen Newell, which was able to prove many of the theorems in Russell and Whitehead's “Principia Mathematica”. The main failures, on the other hand, came from the understanding of natural languages. Even so, remarkable results were achieved by Joseph Weizenbaum's ELIZA, a program that imitated a psychotherapist, and by SHRDLU, developed by Terry Winograd.


From the ashes of the first approach a second wave was born, that of expert systems [v]. Like its predecessor, this was still a matter of software, but the basic concept was very innovative: it abandoned the idea that the programmer had to explicitly encode all possible strategies. The aim was to emulate human experts in various subjects, who know not only a good number of rules but also the cognitive schemas needed to extrapolate from them, and the new software came with two key components.

On the one hand, rules coded in the form of IF-THEN propositions or decision trees (the “knowledge base”); on the other, an “inference engine”, a mechanism that, starting from the INPUT, produces a solution (OUTPUT) through processing flows not explicitly encoded in the program. These were the years in which the main language for implementing AI projects shifted from LISP (invented by McCarthy himself) to PROLOG, which made heavier use of processing techniques little explored until then, recursion and backtracking above all. Among the systems that proved fundamental we may cite MYCIN, MOLGEN and TEIRESIAS. Possible applications ranged from the analysis of noisy systems (diagnosis, interpretation of data), to forecasting, planning and scheduling (e.g. in manufacturing, but not only there), to design (machines that design other machines).
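As a rough, purely illustrative sketch of how a knowledge base and an inference engine interact, the following Python fragment implements a tiny forward-chaining loop; the rules and facts are invented for the example and are not taken from MYCIN or any real system. IF-THEN rules are applied to a set of known facts until nothing new can be derived.

```python
# Minimal forward-chaining inference engine (illustrative sketch only).
# The "knowledge base" is a list of IF-THEN rules; the "inference engine"
# keeps applying them to the known facts until no new fact can be derived.

# Each rule: (set of premises, conclusion). Facts and conclusions are strings.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Derive every conclusion reachable from the initial facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_breath"}, RULES))
# prints the initial facts plus the derived ones: 'flu_suspected', 'see_doctor'
```

The key point is that the final OUTPUT ("see_doctor") is never written anywhere as an explicit input-output pair; it emerges from chaining rules that the expert supplied separately.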

Even though expert systems proved effective in several areas, it later became clear that their purely computational (and associative) nature could be an insurmountable limit to further progress. Humans, in fact, demonstrate their intelligence not only by solving mathematical equations, tackling engineering problems or learning languages, but also by moving, recognizing shapes and colors and, on the basis of all this, producing behavior. In addition, studies during the 80s and 90s began to highlight some interesting facts about the physical functioning of human and animal brains.

It gradually became clear that this system processes electrical signals, which propagate in a network of about 10¹⁰ neurons. Neurons are connected to one another (though not each to all) more or less strongly, via biological transmission lines (axons) and interface points (synapses) that are more or less “open” to the passage of signals, with a degree of opening influenced by individual experience. It had also been shown that neurons activate electrically [vi] according to their input and transmit their state to downstream neurons. In the attempt to bring all of this into a single model [vii], sensory data could be associated with the INPUT of the overall system, while the electrical signals further downstream were linked to the commands sent to the actuators (OUTPUT).


The step from these considerations to software able to implement them was therefore a short one, and these programs were called Artificial Neural Networks (ANN).
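To make the analogy concrete, here is a minimal sketch of a single artificial neuron in Python; the inputs, weights and bias are arbitrary numbers chosen only for illustration. Each input is multiplied by a synaptic weight, the weighted sum is taken, and an activation function decides how strongly the neuron “fires” and what it passes on to the next layer.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs passed through
    a sigmoid activation, a smooth stand-in for 'firing or not firing'."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Arbitrary example values: two "sensory" inputs, two synaptic weights, a bias.
activation = neuron(inputs=[0.8, 0.2], weights=[1.5, -0.7], bias=-0.3)
print(activation)   # a value between 0 and 1: the neuron's activation level
```

A full ANN wires many such units into layers and adjusts the weights from experience (training data), which is exactly where the analogy with synapses that become more or less “open” comes from.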

Another line of thought influenced the AI landscape in those years: the theory of genetic algorithms [viii]. Its shared heritage was the concept of natural selection in biology, developed from the observations of Charles Darwin. The basic idea was that, starting from an initial population and a set of rules for obtaining new genes from old ones, it is possible to trigger the selection mechanisms by which individuals with certain characteristics become prevalent within a species. Was there some intelligence in this selection?

In part yes, because what remains after several generations is generally better suited to living in its environment than its predecessors were. If, instead of genes, we focus on variables that influence the observable properties of objects (e.g. shape, color, size), and understand evolution as the transformation of an initial configuration of the population into a final one, the algorithms in question emerge as a consequence. A graphical intuition is given by the famous “Game of Life” of John Conway.
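The following Python sketch shows the bare bones of a genetic algorithm; the target string, population size and mutation rate are arbitrary choices made for the example, not part of any real system. A population of random candidates is repeatedly selected by fitness, recombined and mutated, and after a number of generations the best-adapted individuals dominate.

```python
import random

TARGET = "DARWIN"                         # arbitrary goal string for the example
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
POP_SIZE = 100

def fitness(candidate):
    """Number of characters that match the target (higher is fitter)."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Randomly replace some characters: 'new genes from old ones'."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    """Combine two parents at a random cut point."""
    cut = random.randint(1, len(TARGET) - 1)
    return a[:cut] + b[cut:]

# Initial random population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                              # a perfect individual has emerged
    parents = population[:POP_SIZE // 2]   # selection: keep the fittest half
    # Next generation: keep a few elites unchanged, fill up with offspring.
    population = parents[:10] + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - 10)
    ]

best = max(population, key=fitness)
print(generation, best, fitness(best))
```

No individual is ever told how to spell the target; the “intelligence” of the result comes entirely from variation plus selection, which is the point the text is making.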

Artificial neural networks and genetic algorithms opened the third phase of AI, whose keyword was the “black box” system. These programs no longer built their transfer function (INPUT → OUTPUT) from a well-defined set of explicit rules, as parsers or expert systems did; instead, knowledge was encoded in a distributed manner, over many elementary memory units (for example, the synaptic weights of an ANN), none of which, taken individually, is associated with a specific piece of information. They let the program, so to speak, build and use its own intelligence, meaning the ability to solve as wide a class of problems as possible, without interference from the programmer.

The fact that such intelligence could be considered an emergent property (something not obtainable as the simple sum of the properties of the elementary constituents) linked this phase to the theory of complex systems, which was quickly gaining importance in those years. It also led to the emergence of a school of thought within AI, so-called connectionism, which for the first time found its most important representatives outside computer science and the logical-mathematical sciences (e.g., the psychologists David Rumelhart and James McClelland and the biologist Gerald Edelman).

The most successful applications of these systems have been in electronics, biology and economics for genetic algorithms, and in audio/video recognition, data mining and forecasting (e.g. meteorology) for ANNs. These results were considered important, in some ways extraordinary, yet none of them has persuaded the scientific community to declare that the first man-made intelligence has been born. We are therefore, today, in the fourth phase of AI, trying to go beyond the earlier partial failures, at least as judged through the so-called maximalist lens. But why should a general AI, one that emulates human intelligence, be impossible?


Some, like John Lucas and Roger Penrose, have focused on the limitations of formal systems arising from Gödel's incompleteness and undecidability theorems, arguing that the human mind owes its decisive advantage to reasoning that is neither symbolic nor algorithmic. Others have focused on the fact that it is the very definition of intelligence that is lacking; in this case what ends up in the dock, rather than AI itself, is the so-called Turing Test, the method for determining, at least at the conceptual level, whether a piece of software is intelligent or not.

A final widely shared objection is based on the apparently one-way interaction in AI systems (software → hardware), which contrasts with the observed bi-directionality of the mind-body relationship, where a decisive role is played not only by physical stimuli (the environment) but also by cultural ones (society) [ix].

In virtue of these doubts, and of the fact that the best results have been obtained only in applications confined to specific domains, researchers' interest is now mostly focused on integrating different forms of intelligent software, albeit for extremely limited purposes (weak AI). Almost no one is trying to build machines or programs that can pass the Turing test or demonstrate understanding of a broad range of behaviors. So we might ask: where did strong AI go? In some way it survived, even though it has changed in various ways from its original form [x].

First of all, it changed its name: probably to distance themselves from the failures of the program described above, some supporters of Strong AI regrouped under the new designation AGI (Artificial General Intelligence) [xi], while others gathered under the banner of WBE (Whole Brain Emulation) [xii].

Today it is difficult to pin down its distinctive features, especially because its nature is strongly and continuously mutating. There is certainly a genuine will to break decisively with the past, devising deeper and more complete measures of intelligence than the Turing test [xiii], and to overcome the traditional approach, centered on information science, which is now felt to be rather suffocating. The core of AGI is, in fact, interdisciplinary, based on the integration of biology, cognitive science, neuroscience and nanotechnology.


Of course, set against this impressive deployment of scientific knowledge, some of the statements of those who continue to believe in Strong AI today may seem quite absurd or unscientific, if not vaguely disturbing.

Take the idea of the singularity, for example: among current supporters of AGI and WBE, nearly all agree with Ray Kurzweil's view [xiv] that the first significant results in the emulation of brain function will emerge between 2015 and 2045. But this would only be the first step in an ever-accelerating progress in which humans, thanks to technology, would keep increasing their potential until they become something that some have called “Man 2.0”.

Questionable ideas? Oversimplification and overconfidence in technology? Indifference to the ethical and safety implications for the human race? Maybe. One thing is certain: to find out whether or not we are getting closer to what Kurzweil expects, we will not have to wait very long.

 

[i] “Human Brain Project”, 2013, URL: http://www.humanbrainproject.eu/

[ii] Vaughan Bell, The human brain is not as simple as we think, The Observer, 2013, URL: http://www.rawstory.com/rs/2013/03/03/the-human-brain-is-not-as-simple-as-we-think/

[iv] Pietro Greco, Einstein e il ciabattino, Dizionario asimmetrico dei concetti scientifici di interesse filosofico, Editori Riuniti, 2002

[v] Paola Mello, Lucidi sui Sistemi Esperti, Laboratorio di Informatica Avanzata, Università di Bologna, URL: http://www.lia.deis.unibo.it/Courses/AI/fundamentalsAI2011-12/lucidi/SistemiEsperti2011.pdf

[vi] Paul Churchland,  Il motore della ragione la sede dell’anima, Il Saggiatore, Milano, 1998

[vii] Giovanni Martinelli, Reti neurali e neurofuzzy, Euroma La Goliardica, 2000

[viii] John H. Holland, Genetic Algorithms: Computer programs that “evolve” in ways that resemble natural selection can solve complex problems even their creators do not fully understand, URL: http://www2.econ.iastate.edu/tesfatsi/holland.GAIntro.htm

[ix] Gerald Edelman, La materia della mente, Adelphi, 1999

[x] Stuart Russell, Peter Norvig, Intelligenza Artificiale: Un Approccio Moderno, Pearson, 2010

[xi] Pei Wang And Ben Goertzel, Introduction: Aspects of Artificial General Intelligence, 2006, URL: https://a316de03-a-62cb3a1a-s-sites.googlegroups.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf

[xii] Sandberg, A. & Bostrom, N. (2008): Whole Brain Emulation: A Roadmap, Technical Report #2008‐3, Future of Humanity Institute, Oxford University, URL: www.fhi.ox.ac.uk/reports/2008‐3.pdf

[xiii] Itamar Arel and Scott Livingston, Beyond the Turing Test, IEEE Computer Society, 2009, URL: http://web.eecs.utk.edu/~itamar/Papers/IEEE_Comp_Turing.pdf

[xiv] Ray Kurzweil, La singolarità è vicina, Apogeo, 2008
