This is a guest post by Gilad David Maayan.

When working on an AI project, if you have to handle a large collection of rich media, such as images, video, or audio, traditional machine learning algorithms are not going to be enough. In that case, you need a deep learning framework.

Deep learning (DL) frameworks are libraries, interfaces, and tools that help you build deep learning models more easily. They provide pre-built, optimized components so you don't have to implement the underlying algorithms yourself.

However, choosing a deep learning framework is not an easy task. In this article, I’ll provide an overview of the top solutions to help you select a framework for your project. 

What Is a Deep Learning Framework, and How Do You Choose One?

To understand this concept, let's use an example. Consider the images in the picture below. While these are all animals, they fall into several categories: owl, horse, lion, giraffe, and so on. If we need to classify these images into their corresponding categories, a Convolutional Neural Network (CNN) is the right tool. However, if you code a CNN from scratch, it could take weeks to get a working model.

Deep learning frameworks provide the interface and library to build deep learning models without needing to code underlying algorithms. Some of the key features of an effective deep learning framework are:

  • Optimized for performance
  • Easy to code
  • Backed by community support
  • Reduces computation through parallel processing
  • Computes gradients automatically
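To make the last point concrete, here is a minimal sketch of what "computes gradients automatically" means: a toy reverse-mode autodiff for scalar values in plain Python. Real frameworks such as TensorFlow and PyTorch do the same thing for entire tensors, which is why you never have to derive gradients by hand.

```python
class Value:
    """A scalar that records how it was computed, so gradients
    can flow backward through the computation graph."""
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents      # nodes this value depends on
        self._grad_fns = grad_fns    # local derivative w.r.t. each parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data,
                     (self, other), (lambda g: g, lambda g: g))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data,
                     (self, other),
                     (lambda g: g * other.data, lambda g: g * self.data))

    def backward(self):
        # Simple recursive backprop; fine for small toy graphs like this
        # (real frameworks traverse the graph in topological order).
        self.grad = 1.0
        def visit(node):
            for parent, fn in zip(node._parents, node._grad_fns):
                parent.grad += fn(node.grad)
                visit(parent)
        visit(self)

# d(x*y + x)/dx = y + 1 = 4,  d(x*y + x)/dy = x = 2
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```

This is a sketch, not an implementation any framework actually uses, but the idea — record operations forward, replay derivatives backward — is the core of every framework discussed below.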

Top Deep Learning Frameworks of 2019

The features mentioned above are the criteria we used to build this shortlist of the top four deep learning frameworks:


TensorFlow

TensorFlow is developed and maintained by Google. Since its release, it has become increasingly popular, leading the market thanks to its flexibility to work on images as well as sequence-based data. The latest version, TensorFlow 2.0, is now in beta and has been well received by data scientists. You can learn more about what's new in TensorFlow 2.0 and how to use it in this article. However, it is not for beginners, as it requires a solid understanding of linear algebra and calculus. TensorFlow has a flexible architecture that enables you to deploy deep learning models on one or more CPUs, GPUs, and TPUs. Some use cases of TensorFlow are:

  • Text-based applications
  • Sound recognition
  • Image recognition
  • Video analysis
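As a small illustration of the image recognition use case, here is a minimal sketch (assuming TensorFlow 2.x is installed) of defining a tiny image classifier. In TensorFlow 2.0, operations run eagerly, so you can call the model on a batch immediately, and TensorFlow places the computation on whatever CPUs, GPUs, or TPUs are available; the 28×28 input shape and 10 classes here are just illustrative choices.

```python
import tensorflow as tf

# A tiny classifier for 28x28 grayscale images in 10 categories.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),  # one score per class
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Eager execution: the forward pass runs immediately, no session required.
preds = model(tf.random.uniform((4, 28, 28)))  # batch of 4 fake images
print(preds.shape)  # (4, 10): class probabilities for each image
```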


Pros:

  • Community support—since it is maintained by Google and powers many features of Google applications, such as speech recognition and Google Translate, it benefits from frequent releases and quick updates. 
  • Computational graph visualizations—compared with other solutions such as PyTorch, its graph visualizations are superior. 
  • Debugging potential—you can introduce and retrieve the results of discrete data, and combine this with TensorBoard's graph visualization to make debugging much simpler. 
  • Easy session spin-up—since it can run with multiple GPUs, TensorFlow makes it easy to run code on different machines without stopping the program. 
  • Written in Python—all nodes and tensors in TensorFlow are Python objects, and Python is an easy language to read and code in. 


Cons:

  • Partially open source—some algorithms are open source, but the advanced hardware infrastructure is not. 
  • Limited Windows support—Windows users need to install TensorFlow through the Python package manager. 


PyTorch

PyTorch is a Python port of the Torch deep learning framework, which can be used for building deep neural networks and executing tensor computations. It provides a framework that lets you build computational graphs and change them as you go, making it more intuitive. It is easier to use than TensorFlow, as it does not require an advanced mathematics background. 

However, PyTorch does not have a visualization tool like TensorFlow. One interesting feature in PyTorch is called declarative data parallelism. You can use the torch.nn.DataParallel library to run modules in batches, in parallel on a multi-GPU setup. Deep learning platforms like MissingLink can help schedule and automate PyTorch tasks on multiple machines. 
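The declarative data parallelism mentioned above takes very little code. Here is a minimal sketch (assuming PyTorch is installed): torch.nn.DataParallel wraps any module, splits each input batch across the available GPUs, and gathers the outputs; on a machine with no GPUs it simply falls back to running the wrapped module on the CPU. The layer sizes here are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

model = nn.Linear(100, 10)            # any nn.Module works here
parallel_model = nn.DataParallel(model)

batch = torch.randn(32, 100)          # the batch is split across GPUs, if any
out = parallel_model(batch)
print(out.shape)  # torch.Size([32, 10])
```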


Pros:

  • Runs on Python—thanks to its deep Python integration, a basic understanding of Python is enough to build a simple deep learning model with ease. It also provides automatic differentiation and strong GPU support. 
  • User-friendly—its architectural style makes it more approachable than other frameworks, and it offers fast deep learning training. 
  • Easy to debug—its define-by-run mode allows easy debugging at runtime, using standard Python tools such as PyCharm's debugger. 


Cons:

  • Smaller community—since it is relatively new in the deep learning framework market, its user community is smaller, and resources outside the official documentation are limited. 
  • No visualization interface—unlike TensorFlow, it has no built-in visualization tool.


Keras

Keras is a solid framework for basic research. Written in Python, it was developed with a focus on enabling fast experimentation. It is a good fit for projects involving image classification or sequence models. Keras fully integrates with TensorFlow and has been part of TensorFlow since version 1.10.0.

Keras builds on Theano and TensorFlow providing a high-level API to the underlying tensor libraries. Put simply, when you use Keras as part of your Python program, you can create your model like this:

import keras.layers as L
import keras.models as M

# Build a small feed-forward network with the functional API.
my_input = L.Input(shape=(100,))
intermediate = L.Dense(10, activation='relu')(my_input)
# A single-unit softmax always outputs 1, so use sigmoid for a single output.
my_output = L.Dense(1, activation='sigmoid')(intermediate)
model = M.Model(inputs=my_input, outputs=my_output)

This is useful when you want to embed deep learning networks into a Python program. Keras boasts a great community supporting it.  In terms of creating and using models exclusively within Python, Keras offers a solid API. It also allows users to connect externally to TensorBoard.
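Training such a model is equally compact: you compile it with a loss and optimizer and call fit. Here is a minimal sketch using the tf.keras API bundled with TensorFlow (assuming TensorFlow is installed); the random data stands in for a real dataset.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers as L, models as M

# The same shape of model as above, built with tf.keras.
my_input = L.Input(shape=(100,))
intermediate = L.Dense(10, activation='relu')(my_input)
my_output = L.Dense(1, activation='sigmoid')(intermediate)
model = M.Model(inputs=my_input, outputs=my_output)

# Random stand-in data: 64 samples with 100 features and a binary label.
X = np.random.rand(64, 100)
y = np.random.randint(0, 2, size=(64, 1))

model.compile(optimizer='adam', loss='binary_crossentropy')
history = model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(len(history.history['loss']))  # 2: one loss value per epoch
```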


Pros:

  • Simple to use—being user-friendly, it requires minimal code, making it easy to build a small neural network in a few short lines. 
  • Large community—it offers extensive resources and documentation, and boasts a large, active user community. 


Cons:

  • Limited customization—Keras caps how much you can customize; beyond that point, you need to drop down to TensorFlow. 

Microsoft Cognitive Toolkit

Previously known as CNTK, Microsoft Cognitive Toolkit is a unified deep learning toolkit that describes neural networks as a series of computational steps via a directed graph. It allows you to combine popular model types, such as feed-forward deep neural networks, CNNs, and long short-term memory (LSTM) networks. Microsoft's focus on machine learning development is also evident in its launch of ML.NET for pure machine learning. You can learn more about machine learning with ML.NET in this guide.


Pros:

  • Highly flexible—provides a plug-in architecture, so you can define your own computation nodes. It is therefore especially suitable for work requiring customization. 
  • Enables distributed training—delivers strong performance on CPUs, single and multiple GPUs, and multi-machine, multi-GPU scenarios, allowing parallel training on multiple GPUs spanning multiple machines. 


Cons:

  • No visualization interface—this makes it difficult for users to get insights into complex neural networks, as well as to debug performance bottlenecks. 


Conclusion

Advances in deep learning are creating more use cases for artificial intelligence, in projects ranging from image classification to sequence modeling. That said, choosing the right framework for your project depends on a number of factors. For example, a beginner may benefit from a Python-based deep learning framework. Factors like speed, resource requirements, and the coherence of the trained model are considerations to keep in mind when choosing a deep learning framework for your organization's needs. 

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Imperva, Samsung NEXT, NetApp and Ixia, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership. Today he heads Agile SEO, the leading marketing agency in the technology industry. LinkedIn: