Machine Learning, Neural Networks, and Artificial Intelligence are the big buzzwords of the decade. It is not surprising that these fields are expanding quickly and are being used to solve a vast range of problems. We are witnessing a new golden period for these technologies. However, much of today's work is iteration rather than invention: the majority of the concepts used in these fields were developed 50 or more years ago, and all of them rest heavily on math.
This is the most troubling part for people trying to get into the field. The most common question I get at meetups and conferences is “How much math do I need to know?”. It usually comes from people with a software development background trying to move into the data science world. In fact, it is the same question I asked myself long ago when I started my own journey through this universe. One of the big challenges was blowing the dust off old college books and trying to remember things I had forgotten during my years in the software development industry.
The concepts that had served me so well in building high-quality software couldn't help me here, which was both exciting and scary. So I had to go back to the basics and re-figure some things out. In this article, I will try to cover as much ground as possible. Some things will be left out, simply because I could easily write an entire book on this topic (well, not easily, per se, but you get my point). Please feel free to explore these topics further and gather as much knowledge as possible.
While some people will argue that even this much math is too much, in my humble opinion, knowing this bare minimum will help you understand the concepts of machine learning and AI in more depth, which in turn will give you the ability to switch programming languages, technology stacks, and frameworks with ease. The three big areas that we will explore are: