People learn to train machines

That's what it used to look like when people had something to learn. For machines, the input works through other channels. This is an excerpt from Timo Daum's book "The Artificial Intelligence of Capital", which will be released by Nautilus in March.

Significant progress has been made in particular in image recognition, autonomous driving, and speech assistants that can understand and generate natural language. Machine learning keeps improving and has become an economic factor: Amazon makes 35 percent of its revenue from purchases made by clicking on suggested products. Behind Amazon's suggestions is a "machine learning recommendation engine" - a phrase whose German rendering in the original text was itself generated by another AI, namely Google Translate.

In 1959, the American AI pioneer Arthur Samuel taught a computer the game of checkers, implementing the first self-learning procedures. Games have always played a major role in the history of the computer - as test scenarios, but also as publicity. Board games such as checkers or chess, but also Go, are governed by simple rules that can be implemented in a few lines of code. Checkers is one of the simplest: white and black take turns on an 8×8 board, pieces may only move forward in a diagonal direction, an opposing piece can be captured by jumping over it diagonally, and capturing several enemy pieces in successive jumps is allowed. Such rules can be translated into program code without any problem; a computer program that can play checkers according to the rules is quickly written.
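
How little code such rules require is easy to illustrate. The following Python sketch encodes the two basic movement rules; the board representation, directions, and function names are my own illustrative assumptions, not Samuel's original program:

```python
# Minimal sketch of the checkers move rules described above.
# The board is a plain 8x8 grid of ints; all names are illustrative.
EMPTY, WHITE, BLACK = 0, 1, 2

def simple_moves(board, row, col, player):
    """Yield the diagonal forward moves for the piece at (row, col)."""
    direction = 1 if player == WHITE else -1   # white moves "up", black "down"
    for dcol in (-1, 1):
        r, c = row + direction, col + dcol
        if 0 <= r < 8 and 0 <= c < 8 and board[r][c] == EMPTY:
            yield (r, c)

def jump_moves(board, row, col, player):
    """Yield captures: skip diagonally over an enemy piece onto an empty square."""
    direction = 1 if player == WHITE else -1
    enemy = BLACK if player == WHITE else WHITE
    for dcol in (-1, 1):
        over = (row + direction, col + dcol)
        land = (row + 2 * direction, col + 2 * dcol)
        if (0 <= land[0] < 8 and 0 <= land[1] < 8
                and board[over[0]][over[1]] == enemy
                and board[land[0]][land[1]] == EMPTY):
            yield land
```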

But how do I teach the program to choose, among the formally correct moves, the one with the best chance of winning the game? The approach taken historically can be summarized under the term "brute force": all possible moves and all their possible consequences are examined and evaluated according to predetermined criteria, such as the number of one's own pieces and their proximity to the opposite end of the board. The move with the highest rating is the best. However, this method quickly reaches its limits; even at a search depth of a few moves, the number of possibilities grows beyond measure. Therefore, strategies are needed that examine and evaluate only the most promising moves.
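
A minimal sketch of this brute-force idea, assuming hypothetical helpers legal_moves(board, player) and apply_move(board, move): every move is explored to a fixed depth, and positions are scored with a simple hand-written criterion.

```python
# Brute-force search as described: examine all moves to a fixed depth and
# score the resulting positions. legal_moves and apply_move are assumed
# helpers (hypothetical), passed in as parameters.

def evaluate(board, player):
    """Toy criterion: the number of one's own pieces on the board."""
    return sum(cell == player for row in board for cell in row)

def search(board, to_move, depth, player, opponent, legal_moves, apply_move):
    """Score a position from `player`'s point of view, `depth` plies ahead."""
    moves = list(legal_moves(board, to_move))
    if depth == 0 or not moves:
        return evaluate(board, player) - evaluate(board, opponent)
    nxt = opponent if to_move == player else player
    scores = (search(apply_move(board, m), nxt, depth - 1,
                     player, opponent, legal_moves, apply_move)
              for m in moves)
    # the side to move picks whatever is best for itself
    return max(scores) if to_move == player else min(scores)

def best_move(board, player, opponent, depth, legal_moves, apply_move):
    return max(legal_moves(board, player),
               key=lambda m: search(apply_move(board, m), opponent, depth - 1,
                                    player, opponent, legal_moves, apply_move))
```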

This is where the concept of learning comes into play: the results of past games flow into the evaluation of positions. If a game has been won, all the positions that occurred in it can be evaluated more positively, and vice versa. The program keeps getting better, as it becomes more likely to seek out positions that have lain on a winning path in the past. With this method, Arthur Samuel managed to train his program so that after only eight hours of training time it played better than he did himself.
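
The learning step itself can be sketched in a few lines: after each finished game, the stored value of every position that occurred is nudged toward the game's outcome. The table and update rule below are a simplified illustration, not Samuel's actual scheme:

```python
# After a finished game, shift the learned value of every position that
# occurred toward the outcome: +1 for a win, -1 for a loss.
position_values = {}   # board_key -> learned evaluation, defaults to 0.0

def update_from_game(positions, won, step=0.1):
    """positions: hashable keys of all boards seen in one game."""
    target = 1.0 if won else -1.0
    for key in positions:
        old = position_values.get(key, 0.0)
        position_values[key] = old + step * (target - old)
```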

Samuel himself coined the term machine learning for the branch of artificial intelligence that is based on learning methods. This area has little in common with the popular notion of a general artificial intelligence; it is usually about feeding software with large amounts of data so that it copes better with narrowly defined tasks over time. Or, as Arthur Samuel put it, about equipping computers with "the ability to learn without being explicitly programmed". This means that the code contains only the rules, while the quality of play comes from the data, which flows into evaluations of particular constellations that did not exist before. As a result, the program can get better over time without any change to the underlying programming.

Three types of machines

Timo Daum works as a university lecturer in the fields of online, media, and digital economy. He holds a degree in physics and has over two decades of experience in the IT industry. He organizes lectures and seminars on the subject of digital capitalism. His book The Capital We Are: A Critique of the Digital Economy (2017) was awarded the Friedrich Ebert Foundation's prize "The Political Book 2018". Timo Daum lives in Berlin. (Photo: Fabian Grimm, all rights reserved)

There are machines designed for only one specific task: a hair dryer can blow-dry hair, nothing else. Although I can divert it from its intended purpose, for example by de-icing a door lock, it nevertheless has a very limited scope of application - it is, so to speak, a one-dimensional single-purpose machine. These are what I call Type 1 machines.

Then there are machines that can solve not just one problem or a few, but whole classes of problems, because they are programmable: equipped with a new program, they are able to solve new problems as well. These machines we call computers; their theoretical basis, or range of action, was described by Alan Turing. The possible applications are in principle infinite, because there are endless ways to program software - so we are dealing with multi-purpose machines. I call them Type 2 machines. Type 2 machines are a great thing, and every day the tasks they are able to cope with become more numerous. Nevertheless, they have the property of always executing the same code in exactly the same way, which is why they are also called deterministic automata. Even after the umpteenth execution of the program code, their range of functions has not widened; in principle they deliver the same results (leaving aside, say, random numbers). They have not gotten better over time, but not worse either. Ada Lovelace's argument that they cannot outgrow what has been programmed into them is absolutely true.

Type 3 machines are also simulated on computers, but differ from Type 2 machines in that machine learning enables them to evolve over time at the same task and, one hopes, to deliver improved results in conjunction with the data they are fed. In this respect, Type 3 machines are no longer deterministic machines, and Lady Ada's argument no longer holds: given the same starting conditions, the same input is not necessarily followed by the same output or result. Unlike Type 1 and Type 2 machines, Type 3 machines are capable of solving tasks they have not been programmed for. To do this, the software has to learn, and learning here means deriving models from large amounts of data.

Suppose we feed software with a person's transaction data: the person goes to work every working day, shops, takes trips, visits friends. After some time has passed and a lot of data has accumulated, the software will be able to make predictions about future behavior - and predict with a high hit rate that the target person will be in the bathroom at nine o'clock on a Monday. It has developed a model of its subject and can predict almost every one of their steps, without knowing what a human being is and without any understanding of what work or sleep mean. Such a simple AI application can be expected to know its model so well after a while that it is capable of saying: "Don't you have to go to the gym today?" But even the dystopian application suggests itself in this simple example: "I have just told your insurance company that you skipped the gym again."
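
A toy version of such a behavioral model can be surprisingly small. The sketch below (the data format and names are invented for illustration) simply counts where the person was at each weekday and hour, then predicts the most frequent place:

```python
# Count where the person was at each (weekday, hour) and predict the
# most frequent place - a frequency model, no understanding required.
from collections import Counter, defaultdict

observations = defaultdict(Counter)   # (weekday, hour) -> Counter of places

def observe(weekday, hour, place):
    observations[(weekday, hour)][place] += 1

def predict(weekday, hour):
    """Return the most likely place, or None if nothing was ever observed."""
    seen = observations[(weekday, hour)]
    return seen.most_common(1)[0][0] if seen else None

# After enough logged days, predict("Mon", 9) returns the habitual location.
```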

The secret of learning

Marvin Minsky, one of the pioneers of AI and a participant in the founding gathering at Dartmouth College, called words that can carry a variety of meanings "suitcase words": one and the same container (the word) can hold a whole hodgepodge of meanings. "Learning" is such a suitcase word, just like "intelligence": learning to ride a bike, learning a language, learning to program, memorizing a poem, learning to play chess, learning to dance the tango - we always speak of learning, and yet the processes involved could hardly be more different.

When as children we learn our first language, our mother tongue, we speak, imitate, collect, vary, try things out - in short: we learn intuitively. Soon we master our mother tongue without explicitly knowing its rules, without, for example, being able to say what a participial construction is. Quite different, as a rule, is the deliberate learning of all other languages, the foreign languages: we consciously acquire vocabulary and rules and process input and output, much as a deterministic symbol-processing machine would - that is, software executing a program in a formal, logical language. This form of learning takes much more effort, and our command of the foreign language never reaches that of the mother tongue. The distinction between mother tongue and foreign language goes back to the theory of second language acquisition that the linguist Stephen Krashen developed in the 1970s. Krashen distinguishes in principle between the conscious, grammatical process of learning and the acquisition of the mother tongue by children. These are fundamentally different learning processes, which incidentally also take place in different brain regions.

The way machines learn bears no resemblance to the sponge-like absorption of information, its recombination and abstraction, that we know from ourselves. When we hear that a computer has beaten the world chess champion (1997) or the world's best Go player (2016), we tend to think the computer played the game "like a human". Of course, these programs do not really know what a game is. They play it better, but they start out utterly clueless again as soon as the rules are changed a little. For a person, that is in principle not a problem, but the AI has to begin the whole training process anew.

Our brain: the only known general artificial intelligence

The brain is the most complex organ and at the same time the most powerful thinking machine, the only known successful implementation of a general artificial intelligence - one could also call it a Type 4 machine. So it makes sense to attempt a replica. The neurophysiologist and cyberneticist Warren McCulloch and the mathematician Walter Pitts outlined the first concepts for artificial neural networks (ANNs) as early as 1943, and the first implementations on computing machines followed in 1954.

Our brain, the natural neural network par excellence, has 100 billion nerve cells (neurons) whose job is to take in, process, and relay neuroelectric and neurochemical signals. Each neuron is in turn connected via synapses to a proud 7,000 other brain cells. By contrast, the artificial brains from the labs are rather modest. They already operate with some 10 to 100 cells, each in turn connected to 10 to 100 others, and do deliver results. Mind you, these are computer-simulated neural networks, not attempts to recreate the brain out of biomass. AlphaGo, the software developed by Alphabet subsidiary DeepMind that in 2016 was the first to beat one of the world's best players at the complicated game of Go, had 17,328 input neurons, whereas even an average ant has a quarter of a million nerve cells. There is still a lot of room for improvement.

A neural network usually consists of three parts: the so-called input layer, which comprises all neurons that forward input signals; one or more layers of hidden neurons, which mostly show non-linear conduction behavior; and finally the output layer, which combines all the signals of the previous layers and outputs them. The connections between the nodes are weighted, i.e. they are more or less pronounced, and these connection weights can change. The neurons of all layers are connected with their predecessors and successors and, in addition, with other layers. This allows learning, or memory, to be implemented: previous events can have an impact on future behavior - a feedback effect.
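
In code, this three-part structure takes only a few lines. The following sketch shows a forward pass through a tiny network with one hidden layer; the layer sizes and random weights are purely illustrative:

```python
# Forward pass through a minimal network: input layer -> non-linear hidden
# layer -> output layer. Sizes and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))   # weights: 3 inputs -> 4 hidden neurons
W_output = rng.normal(size=(1, 4))   # weights: 4 hidden -> 1 output neuron

def forward(x):
    hidden = np.tanh(W_hidden @ x)   # hidden layer with non-linear activation
    return W_output @ hidden         # output layer combines the signals

print(forward(np.array([0.2, -1.0, 0.5])))
```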

In supervised learning, input and output patterns are given to the neural network: the desired output for a given input is thus known and can serve to check the delivered output. Deviations from the desired result lead to corrections of the connection weights. An example: if we want to develop a program that can recognize pears in pictures, we show it many pictures with pears and possibly also pictures without pears but with, say, apples. The program analyzes the images and tries to find what all the pear images have in common, e.g. the specific pear shape or a stem at the top. At some point, within a certain tolerance, the program will be able to recognize pear images it has never seen before. It has developed a model of what a pear is. From now on it checks new, unknown images against this model. The program still does not know what a pear is, but it has developed a model of one: a pear is simply a set of values for certain parameters in the program. It can now also classify unknown objects: pear or non-pear.
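
The correction step described above - compare the output with the known label, then adjust the weights a little - can be shown with a minimal classifier. The "pear features" here are invented stand-ins for whatever a real image analysis would extract:

```python
# Error-driven weight correction on labeled examples (supervised learning).
# Features such as "elongation" or "has stem" are illustrative assumptions.
import numpy as np

def train(samples, labels, epochs=100, lr=0.1):
    """samples: (n, d) feature array; labels: 1 = pear, 0 = not pear."""
    w = np.zeros(samples.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 / (1 + np.exp(-(w @ x + b)))  # logistic output in (0, 1)
            error = y - pred                       # deviation from the label
            w += lr * error * x                    # correct the weights ...
            b += lr * error                        # ... and the bias
    return w, b
```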

Deep learning

Deep learning is the term for the branch of machine learning that works with artificial neural networks and is responsible for the most recent successes; deep learning and artificial neural networks are mostly used interchangeably. In recent years it has become clear that deep learning is very effective at teaching computers tasks such as speech recognition, image interpretation, automatic translation, and playing games like Go that seem to require something like intuitive knowledge.

The concept of deep learning - the term is mostly synonymous with the use of neural networks and alludes to their "hidden" layers - is comparatively old: Geoffrey Hinton and colleagues invented the principle of "backpropagation" more than three decades ago. In this most important method for training artificial neural networks, the desired output must be known for all training patterns in order to meaningfully optimize the hidden layers - we speak of supervised learning. A prerequisite for this is labeled (annotated) data.
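
A condensed sketch of a single backpropagation step, with illustrative dimensions: run the forward pass, measure the error against the known target, and pass gradients back through the hidden layer to adjust both weight matrices.

```python
# One backpropagation step on a tiny two-layer network (squared-error loss).
# All sizes and the learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))       # input -> hidden
W2 = rng.normal(size=(1, 4))       # hidden -> output

def train_step(x, target, lr=0.01):
    h = np.tanh(W1 @ x)                             # forward: hidden layer
    y = W2 @ h                                      # forward: output layer
    delta_out = y - target                          # output error
    grad_W2 = np.outer(delta_out, h)
    delta_hidden = (W2.T @ delta_out) * (1 - h**2)  # chain rule through tanh
    grad_W1 = np.outer(delta_hidden, x)
    W2 -= lr * grad_W2                              # gradient descent on
    W1 -= lr * grad_W1                              # both weight matrices
```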

For image recognition, this means that every single image fed to the deep-learning algorithm has to be labeled by hand: the image content has to be described, e.g. "cat with ball of yarn on sofa and TV", each object with an associated bounding box. Labeling the input data, roughly comparable to tagging texts, is a very elaborate process that currently still has to be organized and carried out manually by humans. At the beginning of every supervised machine learning process there is an enormous labeling effort.
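
What such a labeled training example might look like is sketched below; the exact schema is an assumption for illustration, as real datasets differ in their details:

```python
# One hand-labeled training example: image, description, bounding boxes.
# The field names and coordinates are illustrative, not a real dataset format.
label = {
    "image": "living_room_0421.jpg",
    "caption": "cat with ball of yarn on sofa and TV",
    "boxes": [
        {"object": "cat",          "x": 120, "y": 80,  "w": 200, "h": 150},
        {"object": "ball of yarn", "x": 260, "y": 190, "w": 40,  "h": 40},
    ],
}
```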

One subcategory of supervised learning is reinforcement learning. Here the machine may first try things out and gets feedback afterwards: that was good, that was not so good. This method is used, for example, in the machine learning of computer games, in robot soccer, or when a neural network is to learn to determine the language of words. The goals and the output are known, but no guidelines are given; instead, a trial-and-error method is used, and the results are evaluated after the fact.
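
A bare-bones illustration of this trial-and-error loop is the so-called multi-armed bandit: the learner tries actions, receives a reward after the fact, and updates its estimate of how good each action is. The action names and reward function are assumptions:

```python
# Trial and error with feedback after the fact: try an action, observe a
# reward, shift the action's estimated value toward it. A toy bandit sketch.
import random

values = {"a": 0.0, "b": 0.0, "c": 0.0}   # learned value of each action

def step(reward_of, epsilon=0.1, lr=0.1):
    # mostly pick the currently best action, sometimes explore at random
    if random.random() < epsilon:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = reward_of(action)            # feedback: "good" / "not so good"
    values[action] += lr * (reward - values[action])
```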

The AI expert Ethem Alpaydin says that current "deep networks" are not deep enough and remain far from the abilities of our visual cortex when it comes to capturing complex scenes. For abstraction in limited contexts, e.g. recognizing handwritten characters or objects in pictures, they just about suffice. Even Geoffrey Hinton himself no longer thinks much of his principle today and accordingly finds all current AI advances rather stale, as he recently told the news portal Axios. He now considers this form of supervised or guided learning to be outdated, and new forms of unsupervised learning much more interesting.

The latter, unsupervised learning, is relatively new. Here no labeled data are provided; images, for example, come without captions. Nevertheless, neural networks are able to recognize hidden structures, conspicuous patterns, or recurring forms in the data, even breaking new ground in the process. It is fascinating to watch what happens when software is supposed to detect structures in images that we know nothing about or that we do not reveal to it in advance. Unsupervised methods are obviously also economically interesting, since the labeling effort disappears.
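
A classic example of finding structure without labels is k-means clustering, sketched below: the groups emerge purely from the geometry of the data, with no captions or labels provided.

```python
# k-means: group unlabeled points around k centers found from the data alone.
import numpy as np

def kmeans(points, k, iterations=20, seed=0):
    """points: (n, d) array of unlabeled data; returns centers and assignments."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)].astype(float)
    for _ in range(iterations):
        # assign each point to its nearest center ...
        dist = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        cluster = dist.argmin(axis=1)
        # ... then move each center to the mean of its assigned points
        for j in range(k):
            if (cluster == j).any():
                centers[j] = points[cluster == j].mean(axis=0)
    return centers, cluster
```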

Then there is the exciting field of imitation learning. Here a certain behavior or set of actions is given, such as grasping objects or driving a car, and the software tries to imitate what it has seen. Algorithms are, for example, set to observe human test drivers, whom the software is then supposed to imitate. The test kilometers driven, including in the simulator, therefore become the main criterion for the success of this technology.
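
In its simplest form, imitation learning is "behavioral cloning": fit a mapping from observed states to the demonstrator's actions and reuse it as a policy. The linear least-squares model below is a deliberately crude stand-in for whatever model a real system would use:

```python
# Behavioral cloning in miniature: learn state -> action from demonstrations.
import numpy as np

def fit_policy(states, actions):
    """states: (n, d) observations; actions: (n,) demonstrated actions."""
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return lambda state: state @ W   # imitate: predict the demonstrated action
```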

In his book - well worth reading - about his (lost) battle against the computer at the chessboard, Garry Kasparov reports on early attempts in the 1980s to succeed with machine learning in the royal game. Researchers fed their program hundreds of thousands of grandmaster games in the hope that the software would develop models and recognize patterns in the winning games.

At first it seemed to work: the evaluation of positions was more accurate than with conventional programs. But when the program played against itself, it behaved peculiarly. After a quite passable opening, the program sacrificed its queen, its most valuable piece, only to go down in flames shortly thereafter. What had happened? When a grandmaster sacrifices his queen, it is almost always a brilliant and decisive move that leads to victory. The program had clearly recognized this pattern. What it could not know is that the converse does not hold: sacrificing the queen by itself is no guarantee of victory.

Montagsmaler

In Mark Twain's famous 1876 novel The Adventures of Tom Sawyer, Aunt Polly sends Tom out one morning to whitewash the garden fence. A little later, Ben Rogers comes by, another boy Tom's age. Tom convinces Ben that painting a fence is a playful pleasure, and after some negotiation Ben agrees to trade his apple for permission to pick up the brush - an early example of successful gamification.

Google, too, always has fences to whitewash. Most recently it asked its "friends" - all of us - to take part in a great thing: training one of its learning machines to recognize drawn objects. And so half the world set about scribbling on the quickdraw.withgoogle.com website. Crucial for the quality - that is, the speed and reliability of object recognition - is having as many different training examples as possible. The software is already quite good; sometimes it recognizes after two strokes what is meant, say a hockey stick, a snowman, or a snowwoman. Through the drawing game, Google learns much more than just drawing. The experiment reveals cultural differences, for example: from a circle - do I draw it clockwise or counterclockwise? - or a snowman - do I draw it with two or three circles? - the software can determine with high probability whether I come from Asia, am left- or right-handed, young or old, female or male. All it needs in addition is to evaluate browser requests, which, for example, give away the place where the person is located. Much like Tom Sawyer's friends, many people are ready and willing to draw for Google, even though there is no pay for it; on the contrary, with our creative power we also hand over a wealth of usable cultural data. Google's drawing playground impressively demonstrates the relationship between AI algorithms and their training through user data: we feed the machine that we will later encounter in the form of applications. Google says thank you.
