
What Are Neural Networks And How Are They Changing Your Life?


Does your smartphone answer questions?

Does Google Images recognize what’s in any photo?

If the answer is yes, it’s because these applications use Neural networks. No miracles, no magic. Miracles and magic would be a lot faster for me to explain.

In our new blog article, we will focus exactly on the fascinating world of neural networks.

It’s not exactly easy stuff. I mean, eating sushi is definitely easier. But we will try to be as newbie-friendly as possible, don’t worry!

 

Why are Neural networks so amazing?

Artificial neural networks are computing systems that learn to perform tasks by considering examples, generally without being programmed with task-specific rules.

They represent an important family of Machine learning algorithms, used in both supervised and unsupervised settings (especially the first), and they were originally conceived to emulate the functioning of the human brain.

Obviously, it has not been possible to recreate such a complex structure, but the results achieved to date are still amazing.

 

The strength of Neural networks

 

While simple models like Linear regression can make predictions based on a small number of data features, neural networks can handle huge datasets with many features!

For example, in image recognition, they can learn to identify images of cats by analyzing sample pictures that have been manually labeled as “cat” or “no cat” and using the results to identify cats in other images.

They do this without any prior knowledge of cats. Instead, they automatically generate identifying characteristics from the samples they have already processed.
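
To make the idea concrete, here is a tiny, made-up sketch of supervised learning with scikit-learn’s small neural network classifier. The feature vectors, labels, and network size below are pure inventions for illustration; a real image recognizer would work on pixels or on features extracted from them.

```python
# A minimal, made-up sketch of "learning from labeled examples".
# The feature values and labels are invented; 1 = "cat", 0 = "no cat".
from sklearn.neural_network import MLPClassifier

X_train = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]  # hypothetical features
y_train = [1, 1, 0, 0]                                      # manual labels

model = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)           # the network infers its own decision rule
print(model.predict([[0.85, 0.75]]))  # likely [1], i.e. "cat"
```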

 

A little bit of history about Neural networks

The first ideas for artificial neural networks date back to the 1940s. The main concept was that a network of interconnected artificial neurons could learn to recognize patterns in the same way as a human brain.

The basic building block on which the numerous neural network models rest is the artificial neuron proposed by Warren McCulloch and Walter Pitts in 1943.

This neuron was schematized as a combiner with multiple binary inputs and a single binary output.

A sufficient number of such elements, connected to form a network, could compute simple logic functions.
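
Here is a minimal Python sketch of such a unit, assuming a simple threshold formulation; the function name and the AND/OR examples are just illustrative choices, not part of the original 1943 paper.

```python
# A McCulloch-Pitts style neuron: binary inputs, fixed weights, and a
# threshold that decides the single binary output.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, the unit computes simple logic functions.
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # AND(1, 1) -> 1
print(mcculloch_pitts_neuron([0, 1], [1, 1], threshold=1))  # OR(0, 1)  -> 1
```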

 

The introduction of Perceptrons

In 1958, Frank Rosenblatt introduced the first neural network scheme, called the Perceptron.

This was the ancestor of current neural networks; it could recognize and classify shapes, with the aim of interpreting the general workings of biological systems.

Rosenblatt’s probabilistic model was therefore aimed at analyzing, in mathematical form, functions such as the storage of information and their influence on pattern recognition.

The perceptron constituted decisive progress over the binary model of McCulloch and Pitts, because its synaptic weights were variable, and therefore the perceptron was able to learn!
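
As a rough sketch of what that “learning” means, here is the classic perceptron update rule in Python, trained on the OR function as a toy example; the dataset, learning rate, and number of epochs are arbitrary choices for illustration.

```python
import numpy as np

# Toy training set: the OR function on two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w = np.zeros(2)   # variable synaptic weights -- this is what learning adjusts
b = 0.0           # bias (threshold)
lr = 0.1          # learning rate

for epoch in range(10):
    for xi, target in zip(X, y):
        prediction = 1 if xi @ w + b >= 0 else 0
        error = target - prediction
        w += lr * error * xi   # nudge the weights toward the correct answer
        b += lr * error

print(w, b)  # weights and bias that separate the OR examples
```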

 


 

The structure of Neural networks

The neural networks’ structure is clearly inspired by that of the brain. It consists of interconnected layers of units, called artificial neurons, which send data to each other through connections called edges.

The output of the preceding layer will be the input of the subsequent layer.

The first and the last layer of the network are called respectively input layer and output layer. Each layer between these is known as a “hidden layer”.

Signals travel from the input layer to the output layer, possibly traversing the hidden layers multiple times. Each layer takes care of recognizing different features of the overall data.
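
A minimal sketch of this data flow, using random weights and a ReLU activation purely for illustration (the layer sizes are arbitrary), could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: multiply by edge weights, add a bias, apply an activation."""
    W = rng.normal(size=(x.shape[0], n_out))  # edge weights into this layer
    b = np.zeros(n_out)
    return np.maximum(0, x @ W + b)           # ReLU activation

x = rng.normal(size=4)   # input layer: 4 features
h1 = layer(x, 8)         # first hidden layer
h2 = layer(h1, 8)        # second hidden layer
out = layer(h2, 2)       # output layer: 2 scores (e.g. "cat" / "no cat")
print(out)
```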

I wrote “layer” a thousand times. I will see the layers in my dreams. Or nightmares.

 

Neural networks and kitties

Think about the apps that can recognize different cat breeds in a picture taken with your smartphone camera.

The first hidden layer in the neural network could identify the size of the animal in the image. The second layer could then recognize the shape of the body.

The third one could take care of the fur’s color, etcetera…

This carries on through to the final layer, which will output the probability that the cat is a proud member of a specific breed.

Personally, I hope for Thai.

I LOVE Thai cats.
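
Just to make that last step concrete, here is a tiny, made-up sketch of how the final layer’s raw scores could be turned into breed probabilities with a softmax; the breed names and numbers are invented for illustration.

```python
import numpy as np

breeds = ["Thai", "Siamese", "Persian"]
scores = np.array([2.1, 0.4, -1.0])            # hypothetical output-layer scores

probs = np.exp(scores) / np.exp(scores).sum()  # softmax: scores -> probabilities
for breed, p in zip(breeds, probs):
    print(f"{breed}: {p:.2f}")
```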

Something similar happens with Google Lens, which can recognize practically anything.

You know? I’m an Italian boy working in Minsk, the capital of Belarus and an important Eastern European IT centre.

The main language here is Russian.

Well, I always ask for the Russian menu, even if my Russian is terrible, just to play with Google Lens.

In this way, I have fun translating the text in real-time, and I pretend to know the language, just to make a good impression! >:D

 
