When talking about “learning” technologies, we sometimes conflate concepts that overlap but are in fact well differentiated, and it is useful to tell them apart. In this article we will try to clarify the related intelligent technologies that form part of companies’ technological transformation and which are at times used interchangeably when they are not in fact the same.
Artificial intelligence is the “oldest” of these disciplines, although its very name evokes science fiction and worlds that do not yet exist. In the 1950s it was discussed and flirted with as “machine intelligence”. As we saw in a previous article on the evolution of these technologies, its history had its ups and downs until it resurfaced in recent years, thanks in particular to the explosion of big data, which supplied the key ingredient: data.
Artificial intelligence, or AI, seeks to simulate human intelligence in information systems (or machines, as they are termed more informally). This simulation includes the capture of information by the system, the application of reasoning or logic based on given rules, and self-correction where necessary. The end goal of artificial intelligence is thus to detect patterns and reach solutions more efficiently than human beings themselves can.
On a business level, thanks to the fact that current technologies can process a greater volume of data at greater speeds, AI has made a forceful comeback and become an effective technological solution. However, the more widespread type these days is what is known as weak or reactive AI: personal assistant systems such as Siri, designed for very specific and limited tasks. If a machine were able to “think” for itself in the way a human would, this would be known as strong AI, the type more associated with science fiction.
In business, AI is used for repetitive data analysis tasks or to automate decision-making based on rules introduced into the system, with applications across many sectors.
One of the keys to AI development is learning without needing to program every rule and possible variable in advance, so that the system can learn and make decisions for itself on the basis of that learning. This is the key difference of intelligent technologies: if every variable had to be programmed, they would be no different from computing as we understand it in its more traditional form.
This type of artificial intelligence, with a considerable degree of reliability in its decisions, is known as machine learning.
The most common model currently used is supervised machine learning, which requires human intervention to provide the background data needed for “learning”, or to indicate at the outset what is correct and what is not. There is also unsupervised learning, whereby data sets are not labeled in advance and the machine itself studies the data and generates patterns from it.
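To make the supervised idea concrete, here is a minimal sketch of learning from labeled examples, using a one-nearest-neighbour classifier written from scratch. The points and labels are invented for illustration; the human-provided `(features, label)` pairs play the role of the “background data” described above.

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbour classifier.
# A human supplies labeled examples; the system classifies a new point
# by finding the most similar example it has already seen.

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, point):
    # Return the label of the closest labeled example.
    nearest = min(training_data, key=lambda item: distance(item[0], point))
    return nearest[1]

# Labeled examples (features, label) provided by a human: the "supervision".
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.3, 4.7), "dog"),
]

print(predict(training, (1.1, 0.9)))  # near the "cat" examples
print(predict(training, (5.1, 5.2)))  # near the "dog" examples
```

No rule saying “what a cat is” was ever programmed: the decision emerges from the labeled data, which is exactly the difference from traditional programming described above.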
Current applications are closely related to prediction, through the study of background data (whether in the supervised model or otherwise). This opens a huge realm of possibilities for all activities requiring the prediction of behaviour, whether for consumer trends, error detection or process optimisation, for example.
Machine learning is a discipline that will help many companies progress, as they will unarguably always seek to reduce time, cost and errors. From learning to understanding is the next step for intelligent technologies.
Within the field of machine learning, reinforcement learning is important in areas where optimisation is relevant. Unlike other modes of learning, the system does not require background data or information as a starting point for training: it repeats a similar operation many times, with minor adjustments each time, reinforcing the “decisions” it identifies as correct in order to best reach a given objective.
A good example of this is the genetic optimisation simulation developed by David Bau. The program uses a simple genetic algorithm to evolve random two-wheeled shapes into cars over successive generations, with the objective of travelling as far as possible.
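The mechanics of that simulation can be sketched with a toy genetic algorithm (this is not David Bau's actual code). Each “individual” is a list of numbers, fitness rewards individuals close to an arbitrary target, and each generation keeps the fittest and refills the population with slightly mutated copies, mirroring the small adjustments described above.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Toy genetic algorithm: the target and all parameters are invented
# for illustration, not taken from the simulation mentioned in the text.
TARGET = [3.0, 1.0, 4.0, 1.0, 5.0]

def fitness(individual):
    # Higher is better: negative squared distance to the target.
    return -sum((g - t) ** 2 for g, t in zip(individual, TARGET))

def mutate(individual):
    # Small random adjustment to one "gene", the minor tweak per generation.
    i = random.randrange(len(individual))
    child = list(individual)
    child[i] += random.uniform(-0.5, 0.5)
    return child

# Start from random individuals, like the random shapes in the simulation.
population = [[random.uniform(0, 6) for _ in range(5)] for _ in range(20)]

for generation in range(200):
    # Selection: keep the fittest half, refill with mutated survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print([round(g, 1) for g in best])  # drifts toward the target over generations
```

No example solutions were ever supplied: the only feedback is the fitness score, which is what distinguishes this style of learning from the supervised model.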
This method is associated with the way children learn, acquiring a great deal of their knowledge through repetition, for example. Although experiments began at the end of the 1960s, the technology is currently largely limited to simulations and games. However, it is envisaged that it will make headway in fields such as autonomous driving or choosing optimal resource configurations, and that it will be given impetus by other technologies requiring it. If you are interested in this topic, this link to an interesting article in the MIT Technology Review will provide a more detailed understanding of reinforcement learning.
The most recent field of intelligent technologies is deep learning, which is also essentially the one most inspired by the human brain, modelled as a neural network. Deep learning is based on hierarchical or representational learning, reading information layer by layer: each layer enriches the information before passing it on to the next, progressively improving it. Unlike machine learning, which requires information to be introduced with a minimum of human processing, deep learning can start from purer, rawer data, such as the pixel values of an image.
Image recognition is one of the main areas where this is being put into practice. Much like a child’s learning process, a deep learning system can study pictures and their variations and extract common patterns. For example, by studying photos of different types of dogs in different contexts, it learns that whenever it sees a photograph sharing those patterns, it should identify and classify it as a dog.
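The layer-by-layer idea can be sketched with a tiny hand-written network. The weights and “pixel” values below are arbitrary illustrative numbers, not a trained model: the point is only that each layer transforms its input and passes an enriched representation to the next layer.

```python
import math

def layer(inputs, weights, biases):
    # One layer: weighted sums of the inputs followed by a sigmoid
    # activation, producing the values passed on to the next layer.
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-total)))  # squashed into (0, 1)
    return outputs

# Raw input: a handful of pixel values, the "pure and raw data".
pixels = [0.0, 0.5, 1.0]

# First layer turns 3 pixels into 2 intermediate features;
# second layer turns those features into a single score.
hidden = layer(pixels, weights=[[0.2, -0.4, 0.6], [0.9, 0.1, -0.3]], biases=[0.1, -0.2])
output = layer(hidden, weights=[[0.7, -0.5]], biases=[0.0])
print(output)  # a single value between 0 and 1
```

In a real network the weights would be learned from thousands of labeled images rather than written by hand, and there would be many more layers, which is where the “deep” comes from.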
The important point is not that all dogs of the same type or size are treated identically, but that through common elements the system can classify them as dogs. Its application requires ever greater levels of computation and more complex processing, but its potential is enormous and decisive if information system intelligence is to become genuinely equivalent to human intelligence. If you wish to better understand layer learning, here is an article about how deep learning works, somewhat technical but interesting.
Natural Language Processing
NLP or Natural Language Processing is the technique applied to extract information, whether text or voice, to improve communication between people (human language) and machines (computational language).
Its origin is closely linked to the early experiments in artificial intelligence, although it was not until the 1980s that the first automatic translation systems were developed. How does it work? It requires a mathematical modelling process that allows the computer, which has its own language, to understand human language. There are two important roles in this process: programmers write code using programming languages (such as Python), and computational linguists adjust the model so that it can be implemented by engineers.
Highly applicable, the field has seen a resurgence in value and importance in recent years with the appearance of chatbots, whether for language analysis, voice recognition, automatic translation, or data extraction and retrieval. It has become a technique combining computation, artificial intelligence and linguistics, essential to the development of nearly any technological solution that requires human interaction or the analysis of human-generated information.
Nevertheless, there are still obstacles to overcome: word ambiguity, colloquial language or sentiment, for example.
ALGORITHMS: THE KEY TO EVERYTHING
In the previous technologies, we have spoken of a key element: rules or orders. These rules are known as algorithms: abstract rules or instructions, which can be introduced by humans, that allow the machine to configure or develop itself to solve a problem.
Algorithms form the basis of all intelligent technology and set out the steps the machine follows in its learning. They consist of an information input or configuration (the problem), a period of learning, and an output (the solution). The work of the programmer involves translating the problem into a language the machine can understand; breaking the complex process down into simple steps; giving specific orders the machine can study and learn; and arriving at the solution or answer with the greatest precision possible. Success is often measured by whether the problem is solved, the time and number of operations taken, and the memory and resources used: in other words, efficiency.
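Counting operations, as described above, is easy to illustrate. The sketch below compares two hypothetical search algorithms solving the same problem (find a value in a sorted list): linear search checks items one by one, while binary search halves the remaining range at each step, so it solves the same problem in far fewer operations.

```python
def linear_search(items, target):
    # Check every item in turn, counting each comparison.
    steps = 0
    for i, value in enumerate(items):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

def binary_search(items, target):
    # Requires sorted input; halve the search range at each step.
    steps = 0
    low, high = 0, len(items) - 1
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

numbers = list(range(1000))  # a sorted list of 1000 values
print(linear_search(numbers, 900)[1])  # 901 comparisons
print(binary_search(numbers, 900)[1])  # 10 comparisons
```

Both algorithms produce the same answer; they differ only in efficiency, which is exactly the success measure described above.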
Famous algorithms include Google’s, which places each web page in one position or another on the search engine results pages, Facebook’s news feed, and Twitter’s trending topics. In all cases they decide to show one piece of content rather than another, based on rules that are not known to the user. To give you an idea, the Google search engine uses more than 200 algorithms whose variables are modified more than 500 times a year, making them practically impossible for a human to work out. These levels of complexity allow human-machine collaboration to achieve better results.
Image: pexels.com | Pixabay