Note: This is an ongoing series; more articles will be added soon.


Deep learning and artificial intelligence are big buzzwords today, aren’t they? However, this field is not as new as most people think. We humans have always been interested in the way we think and in the structure of our brain. Of course, I am not saying that our great-great-great ancestors were trying to build Artificial Neural Networks, but there has always been a certain curiosity about thinking and learning processes. With the advance of modern electronics, this curiosity was harnessed, and we started exploring ways to build a thinking machine. The roots of the field can be traced back to 1943, when a young mathematician, Walter Pitts, and a neurophysiologist, Warren McCulloch, wrote a paper that introduced the first mathematical model of a neural network. They explained how neurons might work and how we could replicate this behavior using simple electrical circuits.

The Organization of Behavior, a book written by Donald O. Hebb in 1949, reinforced this concept and introduced the Hebbian rule. This rule states that the connection between two neurons is strengthened when both neurons are active at the same time. However, testing these theories was limited until computers gained enough processing power in the 1950s. It then became possible to test some of them, and Nathaniel Rochester of IBM was one of the first to try to simulate a neural network in the lab. Even though this first experiment failed, it blazed the path for further experiments. In 1956, well-known scientists and ambitious students met at the Dartmouth Summer Research Project and discussed how to simulate a brain.
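
To make the rule a bit more concrete, here is a minimal sketch of a Hebbian weight update in Python. The learning rate and the toy activations are made-up values for illustration only; this is not code from Hebb's book.

```python
# A minimal, illustrative sketch of the Hebbian rule described above:
# the weight of a connection grows when both neurons are active together.
# The learning rate and activations below are made-up toy values.

learning_rate = 0.1
pre_activation = 1.0   # activity of the "sending" neuron
post_activation = 1.0  # activity of the "receiving" neuron

weight = 0.0
for _ in range(5):
    # Hebbian update: delta_w = learning_rate * pre * post
    weight += learning_rate * pre_activation * post_activation

print(weight)  # roughly 0.5 -- the connection strengthens with every co-activation
```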

After that, John von Neumann tried to simulate simple neuron functions using vacuum tubes and telegraph relays. Around that time, in 1956-1957, Frank Rosenblatt and the neurobiologist Charles Wightman developed the first successful neurocomputer, the Mark I Perceptron. Intrigued by the workings of the eye of a fly, they built a computer that was able to recognize simple numerals. It used a simple 20×20 pixel sensor and worked with 512 motor-driven potentiometers, which were used to adjust the weights of the connections. The first neural network used in the real world was MADALINE, developed in 1959 by Bernard Widrow and Marcian Hoff of Stanford. MADALINE is an adaptive filter that eliminates echoes on phone lines, and it is still in commercial use. Cool fact, right?
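
The Mark I adjusted its weights with motors and potentiometers, but the underlying perceptron learning rule is easy to sketch in a few lines of code. The toy example below (learning the logical AND function, with a made-up learning rate and dataset) is purely illustrative and not a reconstruction of Rosenblatt's machine:

```python
import numpy as np

# Illustrative perceptron: learn the logical AND function.
# The data, learning rate, and epoch count are made-up values for this sketch.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for _ in range(10):                      # a few passes over the data
    for inputs, target in zip(X, y):
        prediction = 1 if np.dot(weights, inputs) + bias > 0 else 0
        error = target - prediction
        # Perceptron rule: nudge the weights to reduce the error,
        # much like the Mark I's motors nudged its potentiometers.
        weights += learning_rate * error * inputs
        bias += learning_rate * error

print(weights, bias)  # e.g. weights ~ [0.2, 0.1], bias ~ -0.2
```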

Still, after that initial success the field stagnated for a while. Research funds were scarce, and conventional computing gained momentum. Isolated research was still happening, but the lack of communication between researchers kept the field on pause until the 1980s. Then, in 1982, several events sparked a new renaissance in the field. John Hopfield invented the so-called Hopfield networks, and Teuvo Kohonen described self-organizing feature maps. The field regained its importance, and by 1987 the Institute of Electrical and Electronics Engineers' (IEEE) first International Conference on Neural Networks drew more than 1,800 attendees.

AI, Deep Learning, and Neural Networks are buzzwords of every conference today. Kick off your journey into this exciting field by checking out these articles:


Read more posts from the author at Rubik’s Code.

