Artificial intelligence (AI) is not a new concept; it has been around for decades, and still, it remains largely unexplored. Did you know that the first ideas about AI can be traced back to the 1940s? Nevertheless, it is still a field in its infancy and a concept we don't know a lot about. There is a lot of controversy around this topic, too. While part of the community is ready to charge full force into the huge uncharted territory that is AI, another part raises serious questions and ethical dilemmas. For example, Stephen Hawking and Elon Musk have publicly declared their fears and worries, while Mark Zuckerberg and Ray Kurzweil, on the other hand, are ready to become cyborgs. Still, the world doesn't wait for anyone, and the world of AI waits even less, so here we are.

What actually is AI? As with any emerging field, this simple question still has no clear answer. However, we can describe the main goals of AI (apart from destroying humanity, of course :)). There is a category of problems with many subtle factors that cannot be described with an algorithm; problems that we, as humans, learn how to solve, but that computers, lacking this ability to learn, cannot.

Computers can perform complex numerical calculations in the blink of an eye. In theory, they even have more raw processing power than the human brain: around 10⁹ transistors with a switching time of 10⁻⁹ seconds, against roughly 10¹¹ neurons with a switching time of about 10⁻³ seconds. Still, brains outperform computers in almost every other aspect. What AI wants to do is make computers learn and make decisions based on that knowledge. One of the tools AI uses to achieve that is Neural Networks.

Neural networks try to imitate a few abilities that the brain has and computers don't. The first of those abilities is adaptability. As I previously mentioned, theoretically a computer should outperform the brain in every aspect, but a computer is static and cannot use all its potential. A brain, on the other hand, is reconfigurable, so to say. Just last month my dentist accidentally damaged one of my nerves and, as a result, part of my lip was numb. Within a matter of days, other nerves took over the function of the damaged nerve and my lip was fine again. My dentist told me a bunch of stories where people had nerves cut in half and were still able to make a full recovery.

Computers cannot do that. I still haven't heard of a computer reconfiguring itself after an error, with a hard disk taking over the processor's functions, for example. The other problem is that the largest part of a computer is just data storage, meaning those parts don't do any processing whatsoever. Neurons in the brain, on the other hand, process data all the time, and they do it in parallel. That is why the brain outperforms computers: there are countless little operations happening in your brain, in parallel, as you read this.

Basically, that is what artificial Neural Networks are trying to accomplish: to introduce brain functionalities to a computer by copying the behavior of the nervous system. In essence, Artificial Neural Networks are copying nature.

Biology behind the Idea

So, which of nature's concepts are Neural Networks trying to imitate? As you are probably aware, the smallest unit of the nervous system is the neuron. These are cells with a similar and simple structure. Yet, by continuous communication, these cells achieve enormous processing power. To put it in simple terms, neurons are just switches: they generate an output signal if they receive a certain amount of input stimuli. That output signal then becomes input for other neurons.

Each neuron has these components:

  • Body, also known as soma
  • Dendrites
  • Axon

The body (soma) of a neuron carries out its basic life processes. Every neuron has a single axon. This is a long part of the cell; in fact, some axons run through the entire length of the spine. The axon acts like a wire and is the output of the neuron. Dendrites, on the other hand, are the inputs of the neuron, and each neuron has multiple dendrites. These inputs and outputs, the dendrites and axons of different neurons, never touch each other, even though they come close.

These gaps between axons and dendrites are called synapses. Through these synapses, signals are carried by neurotransmitter molecules. There are various neurotransmitter chemicals, and different types of neurons use different ones; among them are the famous serotonin and dopamine. The amount and type of these chemicals dictate how "strong" the input to the neuron is. And if there is enough combined input on the dendrites, the soma will "fire" a signal down the axon and transmit it to the next neuron.

Main Components and Concepts of Neural Networks

Before we dive into the concepts and components of neural networks, let's just reflect on their goal. What we want them to do is learn certain processes the way we do. For example, after we show an image of a dog to a neural network a few times, we expect that the next time we show it one, the network will be able to say, "OK, that is a dog." So, how do neural networks do that?

Based on the elements of the nervous system, Artificial Neural Networks are composed of small processing units, neurons, and weighted connections between them. The weight of a connection simulates the amount of neurotransmitter transferred between neurons, as described in the previous chapter. Mathematically, we can define a Neural Network as a sorted triple (N, C, w), where N is the set of neurons, C is the set {(i, j) | i, j ∈ N} whose elements are the connections between neurons i and j, and w(i, j) is the weight of the connection between neurons i and j.
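To make this definition a bit more tangible, here is a minimal Python sketch of the (N, C, w) triple; the neuron names and weight values are invented purely for illustration:

```python
# A minimal sketch of the sorted triple (N, C, w) described above.
# Neuron names and weight values are made up for illustration.

N = {"i", "j", "k"}                # N: the set of neurons

C = {("i", "k"), ("j", "k")}       # C: the set of connections (i, j)

w = {                              # w(i, j): the weight of each connection
    ("i", "k"): 0.8,
    ("j", "k"): -0.4,
}
```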

Usually, a neuron receives the outputs of many other neurons as its input. The propagation function transforms these, taking the connection weights into account, into the so-called network input of that neuron. Often, this propagation function is just the sum of the weighted inputs, the weighted sum. Mathematically, it is defined like this: net = Σ (i · w), where net is the network input, i is the value of each individual input, and w is the weight of the connection through which that input value came. After that, the network input is processed by something called the activation function. This function decides whether the output of the neuron will be active. It simulates the functionality of the biological soma, which fires an output only if there are strong enough stimuli on the input.
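To make the formula concrete, here is a small sketch of the weighted-sum propagation function; the input and weight values are made-up examples:

```python
def weighted_sum(inputs, weights):
    """Propagation function: net = sum of (input value * connection weight)."""
    return sum(i * w for i, w in zip(inputs, weights))

# Two inputs arriving over connections with weights 0.8 and -0.4:
net = weighted_sum([1.0, 0.5], [0.8, -0.4])
print(net)  # 0.8 * 1.0 + (-0.4) * 0.5 = 0.6
```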

Every biological neuron has a certain threshold which has to be reached for its output to fire, so the activation function has to take this into consideration. Another aspect this function has to be aware of is the previous state of the neuron. Biologically, this has no justification, but it simplifies the implementation of these networks. So, to sum it up, the activation function transforms the network input of the neuron and the previous state of the neuron into the neuron's output, with the threshold playing an important role too.
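Putting the last two paragraphs together, here is a sketch of the simplest activation function, a binary threshold; the threshold value of 0.5 is an arbitrary assumption, and for simplicity the previous state of the neuron is ignored:

```python
def step_activation(net, threshold=0.5):
    """Fire (output 1) only if the network input reaches the threshold."""
    return 1 if net >= threshold else 0

print(step_activation(0.6))  # 1: the network input from the previous snippet fires the neuron
print(step_activation(0.3))  # 0: the stimulus is too weak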

Now, as we can see, there are a few changeable components in each neuron: the connection weights, the propagation function, and the activation function. By modifying the values of these components, we create a learning mechanism, or learning strategy. This mechanism is basically an algorithm which we use to train our network.
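As one concrete (if very simplified) example of such a mechanism, here is a sketch of the classic perceptron learning rule, which adjusts the connection weights based on the error the neuron makes; the learning rate and all values are illustrative assumptions, and this is just one of many possible strategies:

```python
def perceptron_update(weights, inputs, target, output, rate=0.1):
    """One learning step: nudge each weight in proportion to
    its input and the error the neuron made."""
    error = target - output
    return [w + rate * error * i for w, i in zip(weights, inputs)]

# The neuron output 0, but the target was 1, so the weights on active
# inputs are increased and the neuron is more likely to fire next time.
print(perceptron_update([0.8, -0.4], [1.0, 0.5], target=1, output=0))
# [0.9, -0.35]
```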

By now, we have learned that neural networks are composed of neurons and connections, and that there are some adjustable parameters that we can use to create learning strategies, but there is another aspect that I haven't mentioned yet. The way the neurons are organized is also very important for a neural network. This is called the topology of a neural network. Usually, neurons are organized in several layers. Some neurons are used just to receive input from the outside world (the input neuron layer), while others are used just to provide output to the outside world (the output neuron layer).

Between these two layers, there can be any number of processing layers, called hidden neuron layers. Of course, neural network topology is a separate topic and more will be explained in blog posts to come, but it is crucial to understand its importance. Also, it is by topology that we distinguish multiple types of neural networks. You can find more information on neural network types here.
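To tie topology back to the neuron model above, here is a sketch of a tiny feedforward network with one hidden layer; the layer sizes, weights, and threshold are all made up for illustration:

```python
def layer_output(inputs, weight_matrix, threshold=0.5):
    """Outputs of one layer of threshold neurons: each row of the
    weight matrix holds the incoming weights of one neuron."""
    return [1 if sum(i * w for i, w in zip(inputs, row)) >= threshold else 0
            for row in weight_matrix]

# A 2-3-1 network: 2 input values, a hidden layer of 3 neurons,
# and 1 output neuron. All weights are arbitrary illustration values.
hidden_weights = [[0.2, 0.9], [0.7, 0.1], [0.5, 0.5]]
output_weights = [[0.4, 0.4, 0.4]]

hidden = layer_output([1.0, 0.0], hidden_weights)
print(layer_output(hidden, output_weights))  # [1]
```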

Conclusion

Neural networks are an interesting and growing topic. These networks also have a wide range of applications. Currently, they are used mostly for prediction, classification, and automation, but my guess is that we will find more and more applications. As for the destruction of humanity and the devaluation of ethical values, my opinion is that AI will be more an extension of humans and the human experience than a real danger. What do you think?


This article is a part of the Artificial Neural Networks Series, which you can check out here.


Read more posts from the author at Rubik’s Code.

