Note: This is an ongoing series, more articles will be added soon.
Deep learning and artificial intelligence are big buzzwords today, aren’t they? However, the field is not quite as new as most people think. We humans have always been interested in the way we think and in the structure of our brain. Of course, I am not saying that our great-great-great ancestors were trying to build artificial neural networks, but there has always been a certain curiosity about the processes of thinking and learning. With the advance of modern electronics, this curiosity was harnessed and we started exploring ways to build a thinking machine. The roots of the field can be traced back to 1943, when a young mathematician, Walter Pitts, and a neurophysiologist, Warren McCulloch, wrote a paper that introduced the first model of neurological networks. They explained how neurons might work and how we could replicate this behavior using simple circuits.
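The neuron model from that 1943 paper can be sketched in a few lines: binary inputs are summed and compared to a threshold, mimicking the all-or-nothing firing of a biological neuron. The threshold values below are illustrative assumptions, not taken from the original paper.

```python
# Minimal sketch of a McCulloch-Pitts threshold unit: the neuron "fires"
# (outputs 1) when enough of its binary inputs are active at once.
def mcculloch_pitts(inputs, threshold):
    """Return 1 if the number of active inputs reaches the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs and threshold 2, the unit behaves like a logical AND gate;
# with threshold 1, it behaves like a logical OR gate.
and_out = mcculloch_pitts([1, 1], threshold=2)
or_out = mcculloch_pitts([1, 0], threshold=1)
```

Such units are the "simple circuits" the paper refers to: wired together, they can compute any logical function.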
The Organization of Behavior, a 1949 book by Donald O. Hebb, reinforced this concept and introduced the Hebbian rule. This rule states that the connection between two neurons is strengthened when both neurons are active at the same time. However, testing all these theories was limited until computers gained enough processing power in the 1950s. It then became possible to test some of these ideas, and Nathaniel Rochester of IBM was one of the first to try to simulate a neural network in the lab. Even though this first experiment failed, it blazed the path for further experiments. In 1956, well-known scientists and ambitious students met at the Dartmouth Summer Research Project and discussed how to simulate a brain.
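In its simplest modern reading, the Hebbian rule says the weight between two neurons grows in proportion to their joint activity. A minimal sketch, assuming a learning rate and toy activations that are purely illustrative (not part of Hebb's original, qualitative formulation):

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Hebbian rule: delta_w = eta * x * y, i.e. the weight grows
    when pre-synaptic activity x and post-synaptic activity y coincide."""
    return w + eta * x * y

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])  # pre-synaptic activations
y = 1.0                        # post-synaptic activation
w = hebbian_update(w, x, y)
# only the weights whose pre-synaptic neuron was active get strengthened
```

Neurons that "fire together, wire together", as the rule is often paraphrased.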
After that, John von Neumann tried to simulate simple neuron functions using vacuum tubes and telegraph relays. Somewhere around that time, around 1956-1957, Frank Rosenblatt and neurobiologist Charles Wightman developed the first successful neurocomputer, the Mark I Perceptron. Intrigued by the workings of the eye of a fly, they built a computer that was able to recognize simple numerals. It used a simple 20×20 pixel sensor and 512 motor-driven potentiometers, where the potentiometers were used to adjust the weights of the connections. The first neural network used in the real world was MADALINE, developed in 1959 by Bernard Widrow and Marcian Hoff of Stanford. MADALINE is an adaptive filter that eliminates echoes on phone lines, and it is still in commercial use. Cool fact, right?
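The weight adjustment that the Mark I performed mechanically, with motor-driven potentiometers, is the classic perceptron learning rule: nudge each weight toward the correct answer whenever the prediction is wrong. Here is a hedged software sketch of that rule; the AND-gate dataset, learning rate, and epoch count are illustrative assumptions, a toy stand-in for the Mark I's image-recognition task:

```python
import numpy as np

def predict(w, b, x):
    """Fire (1) if the weighted sum of inputs crosses the threshold, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

def train(samples, labels, epochs=10, eta=1.0):
    """Perceptron learning rule: on each mistake, move the weights
    (the 'potentiometers') in the direction of the correct label."""
    w = np.zeros(len(samples[0]))
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(w, b, x)
            w += eta * error * np.array(x)
            b += eta * error
    return w, b

# toy task: learn the logical AND function
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
```

For linearly separable data like this, the rule is guaranteed to converge to a correct set of weights.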
Still, after this initial success, the field stagnated for a while. Research funds were short, and conventional computing gained momentum. Isolated research was still happening, but the lack of communication among researchers paused the field until the 1980s. Then, in 1982, several events brought about a renaissance of the field: John Hopfield invented the so-called Hopfield networks, and Teuvo Kohonen described self-organizing feature maps. The field regained its importance, and by 1987 the first International Conference on Neural Networks of the Institute of Electrical and Electronics Engineers (IEEE) drew more than 1,800 attendees.
AI, deep learning, and neural networks are the buzzwords of every conference today. Kick off your journey into this exciting field by checking out these articles:
- Introduction to Artificial Neural Networks
- Common Neural Network Activation Functions
- How do Artificial Neural Networks learn?
- Backpropagation Algorithm in Artificial Neural Networks
- Implementing Simple Neural Network in C#
- Introduction to TensorFlow – With Python Example
- Implementing Simple Neural Network using Keras – With Python Example
- Introduction to Convolutional Neural Networks
- Implementation of Convolutional Neural Network using Python and Keras
- Introduction to Recurrent Neural Networks
- Understanding Long Short-Term Memory Networks (LSTMs)
- Two Ways to Implement LSTM Network using Python – with TensorFlow and Keras
Read more posts from the author at Rubik’s Code.