In the previous article, we explored how to use BERT with ML.NET for the question answering NLP task. In this article, we explore another kind of NLP task – sentiment analysis. This type of analysis is used to determine whether textual data is positive, negative, or neutral. It is a useful technique that helps businesses monitor feedback and better understand customer needs.
The topics covered in this article are:
1. Dataset and Prerequisites
2. What is Sentiment Analysis?
3. Types of Sentiment Analysis
4. Sentiment Analysis and ML.NET
5. Sentiment Analysis Implementation with ML.NET
1. Dataset and Prerequisites
The dataset for this article is from ‘From Group to Individual Labels using Deep Features’, Kotzias et al., KDD 2015, and is hosted at the UCI Machine Learning Repository – Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository. Concretely, the complete dataset for sentiment analysis can be downloaded here.
This dataset contains sentences labeled with positive or negative sentiment, in the format: sentence | score. The score is either 1 (for positive) or 0 (for negative). The sentences come from three websites: Yelp, IMDB, and Amazon. In this article, we use the reviews from IMDB. Here is what they look like:
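(The original post shows a screenshot of the reviews at this point. For illustration, two hypothetical lines in the file's format – a sentence, a tab character, then the score – would look like this; these are not verbatim rows from the dataset.)

A slow-moving, aimless movie about a distressed, drifting young man.	0
The soundtrack alone is worth the price of admission.	1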
The implementations provided here are done in C#, and we use the latest .NET 5, so make sure that you have installed this SDK. If you are using Visual Studio, this comes with version 16.8.3 or later. Also, make sure that you have installed the following packages:
$ dotnet add package Microsoft.ML
$ dotnet add package Microsoft.ML.FastTree
You can do the same from the Package Manager Console:
Install-Package Microsoft.ML
Install-Package Microsoft.ML.FastTree
Note that this will install the default Microsoft.ML package as well. You can do a similar thing using Visual Studio’s Manage NuGet Packages option:
If you need to catch up with the basics of machine learning with ML.NET check out this article.
2. What is Sentiment Analysis?
In the past couple of years, sentiment analysis has become one of the essential tools for monitoring and understanding customer feedback. This way, detection of the underlying emotional tone of messages and responses is fully automated, which means that businesses can understand customer needs faster and provide better products and services.
Sentiment analysis is, in a nutshell, the most common text classification tool. It’s the process of analyzing pieces of text to determine their sentiment: positive, negative, or neutral. Understanding the social sentiment of your brand, product, or service while monitoring online conversations is one of the essential tools of modern business, and sentiment analysis is the first step towards that.
The applications of sentiment analysis are endless. For example, you can use this technique to automatically analyze a large number of reviews about your product, which could help you discover whether customers are happy with it. Or you can monitor the response from social media in real time and automatically detect and contact unhappy customers.
Another cool thing is that sentiment analysis is the first step in feedback analysis. Basically, you can start with sentiment analysis and then extend your applications with more advanced techniques such as intent analysis and contextual semantic search.
3. Types of Sentiment Analysis
In its most basic form, sentiment analysis detects two levels of emotional feedback – positive and negative. This is the type used within this tutorial. However, it is possible to go further and detect more specific emotions and intentions. For example, you can detect if the customer is frustrated, happy, sad, interested, not interested, etc. In general, it all depends on what you want to detect and how you structure your training data.
Here are some of the most popular approaches to sentiment analysis:
- Emotions – If you have noticed the smilies that social media now automatically suggests while you type in your post, this is exactly it. The sentiment analysis component of the system detects, in real time, the underlying emotion of the text you are typing and can predict whether you are angry or happy.
- Fine-Grained Feedback – Instead of just detecting whether the feedback is positive or negative, you can extend this from a very negative to a very positive scale, with everything in between.
- Intent Analysis – This is a deeper understanding of the intention of the customer. You can predict whether a customer intends to buy some product or not. Eventually, your system can track the intention of a particular customer, form a pattern, and use it for marketing and advertising.
- Aspect-Based – This type of analysis is used to understand how customers feel about specific attributes of the product. For example, how users feel about certain sections of your e-book.
4. Sentiment Analysis and ML.NET
The dataset that is used in this tutorial has examples of feedback that are either positive or negative. This means that we just need to perform binary classification, which is very cool because we can utilize the knowledge from the previous blog posts. We can even use more advanced techniques like SVM and decision trees. However, the bigger challenge that we face is how to prepare the data for this.
Computers don’t understand words; they understand numbers. So we need a mechanism to map words into numbers. In the previous article, we used word embeddings to do so, and we use the same technique here. Essentially, we convert words into some vector space, meaning we assign certain vectors or scalars (map them to some latent vector space) to each word in the language. These are word embeddings. There are many available word embeddings, like Word2Vec.
In this article, we use ML.NET’s default word embeddings, or word features. We use the FeaturizeText method to do so. This method transforms a text column into a float array of normalized n-gram and char-gram counts. Here is a quick example of how it can be used; the TextData and TransformedTextData classes and the options object are assumptions taken from the ML.NET documentation sample:
// Requires: using Microsoft.ML; using Microsoft.ML.Transforms.Text;
//           using System; using System.Collections.Generic;
private class TextData { public string Text { get; set; } }
private class TransformedTextData : TextData
{
    public float[] Features { get; set; }
    public string[] OutputTokens { get; set; }
}

var mlContext = new MLContext();
var samples = new List<TextData> { new TextData { Text = "This is some example text." } };
var dataview = mlContext.Data.LoadFromEnumerable(samples);

// Ask the featurizer to also emit the tokens it extracted.
var options = new TextFeaturizingEstimator.Options { OutputTokensColumnName = "OutputTokens" };
var textPipeline = mlContext.Transforms.Text.FeaturizeText("Features", options, "Text");
var textTransformer = textPipeline.Fit(dataview);
var predictionEngine = mlContext.Model.CreatePredictionEngine<TextData,
    TransformedTextData>(textTransformer);
var prediction = predictionEngine.Predict(samples[0]);

// Print feature values and tokens.
Console.Write("Features: ");
for (int i = 0; i < 10; i++)
    Console.Write($"{prediction.Features[i]:F4} ");
Console.WriteLine("\nTokens: " + string.Join(",", prediction.OutputTokens));
Features: 0.0941 0.0941 0.0941 0.0941 0.0941 0.0941 0.0941 0.0941 0.0941 0.1881 ...
Tokens: this, is, some, example, text.
5. Sentiment Analysis Implementation with ML.NET
5.1 High-Level Architecture
Before we dive deeper, let’s consider the high-level architecture of this implementation. In general, we want to build a solution that we can easily extend with new binary classification algorithms that ML.NET will include in the future. We certainly hope that multiclass options will be available in the future as well. That is why the folder structure of our solution looks like this:
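(The original post shows the folder structure as an image. The sketch below is reconstructed from the namespaces and paths used throughout this article; the exact file names are assumptions.)

SentimentAnalysisMlNet
├── Assets
│   └── Data
│       └── imdb_labelled.txt
└── MachineLearning
    ├── Common
    │   ├── ITrainerBase.cs
    │   └── TrainerBase.cs
    ├── DataModels
    │   ├── SentimentData.cs
    │   └── SentimentPrediction.cs
    ├── Predictors
    │   └── Predictor.cs
    └── Trainers
        ├── DecisionTreeTrainer.cs
        └── … (one class per ML.NET binary classification algorithm)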
The Data folder contains the .txt file with input data, and the MachineLearning folder contains everything that is necessary for our algorithm to work. The architectural overview can be represented like this:
At the core of this solution, we have an abstract TrainerBase class. This class is in the Common folder, and its main goal is to standardize the way the whole process is done. It is in this class that we process data and perform feature engineering. This class is also in charge of training the machine learning algorithm. The classes that implement this abstract class are located in the Trainers folder. Here we can find multiple classes that utilize ML.NET algorithms; these classes define which algorithm should be used. In this particular case, we have only one Predictor, located in the Predictors folder.
5.2 Data Models
In order to load data from the dataset and use it with ML.NET algorithms, we need to implement classes that model this data. Two files can be found in the DataModels folder: SentimentData and SentimentPrediction. The SentimentData class models input data, and it looks like this:
using Microsoft.ML.Data;

namespace SentimentAnalysisMlNet.MachineLearning.DataModels
{
    public class SentimentData
    {
        [LoadColumn(0)]
        public string SentimentText;

        [LoadColumn(1), ColumnName("Label")]
        public bool Sentiment;
    }
}
The SentimentPrediction class models output data:
using Microsoft.ML.Data;

namespace SentimentAnalysisMlNet.MachineLearning.DataModels
{
    public class SentimentPrediction : SentimentData
    {
        [ColumnName("PredictedLabel")]
        public bool Prediction { get; set; }

        public float Probability { get; set; }

        public float Score { get; set; }
    }
}
5.3 TrainerBase and ITrainerBase
As we mentioned, this class is the core of this implementation. In essence, there are two parts to it. The first is the interface that describes the class, and the second is the abstract class that needs to be extended with concrete implementations; however, it implements the interface methods. Here is the ITrainerBase interface:
using Microsoft.ML.Data;

namespace SentimentAnalysisMlNet.MachineLearning.Common
{
    public interface ITrainerBase
    {
        string Name { get; }

        void Fit(string trainingFileName);

        BinaryClassificationMetrics Evaluate();

        void Save();
    }
}
The TrainerBase class implements this interface. However, it is abstract since we want to inject specific algorithms:
using SentimentAnalysisMlNet.MachineLearning.DataModels;
using Microsoft.ML;
using Microsoft.ML.Calibrators;
using Microsoft.ML.Data;
using Microsoft.ML.Trainers;
using Microsoft.ML.Transforms;
using System;
using System.IO;

namespace SentimentAnalysisMlNet.MachineLearning.Common
{
    /// <summary>
    /// Base class for Trainers.
    /// This class exposes methods for training, evaluating and saving ML Models.
    /// Classes that inherit this class need to assign a concrete model and name, and to implement data pre-processing.
    /// </summary>
    public abstract class TrainerBase<TParameters> : ITrainerBase
        where TParameters : class
    {
        public string Name { get; protected set; }

        protected static string ModelPath => Path.Combine(AppContext.BaseDirectory, "classification.mdl");

        protected readonly MLContext MlContext;

        protected DataOperationsCatalog.TrainTestData _dataSplit;
        protected ITrainerEstimator<BinaryPredictionTransformer<TParameters>, TParameters> _model;
        protected ITransformer _trainedModel;

        protected TrainerBase()
        {
            MlContext = new MLContext(111);
        }

        /// <summary>
        /// Train model on defined data.
        /// </summary>
        /// <param name="trainingFileName"></param>
        public void Fit(string trainingFileName)
        {
            if (!File.Exists(trainingFileName))
            {
                throw new FileNotFoundException($"File {trainingFileName} doesn't exist.");
            }

            _dataSplit = LoadAndPrepareData(trainingFileName);
            var dataProcessPipeline = BuildDataProcessingPipeline();
            var trainingPipeline = dataProcessPipeline.Append(_model);

            _trainedModel = trainingPipeline.Fit(_dataSplit.TrainSet);
        }

        /// <summary>
        /// Evaluate trained model.
        /// </summary>
        /// <returns>BinaryClassificationMetrics object which contains information about model performance.</returns>
        public BinaryClassificationMetrics Evaluate()
        {
            var testSetTransform = _trainedModel.Transform(_dataSplit.TestSet);

            return MlContext.BinaryClassification.EvaluateNonCalibrated(testSetTransform);
        }

        /// <summary>
        /// Save Model in the file.
        /// </summary>
        public void Save()
        {
            MlContext.Model.Save(_trainedModel, _dataSplit.TrainSet.Schema, ModelPath);
        }

        /// <summary>
        /// Feature engineering and data pre-processing.
        /// </summary>
        /// <returns>Data Processing Pipeline.</returns>
        private EstimatorChain<ITransformer> BuildDataProcessingPipeline()
        {
            // Turn the raw review text into a "Features" vector of n-gram and char-gram counts.
            var dataProcessPipeline = MlContext.Transforms.Text.FeaturizeText(
                    outputColumnName: "Features",
                    inputColumnName: nameof(SentimentData.SentimentText))
                .AppendCacheCheckpoint(MlContext);

            return dataProcessPipeline;
        }

        private DataOperationsCatalog.TrainTestData LoadAndPrepareData(string trainingFileName)
        {
            // The dataset is tab-separated (sentence<TAB>score), which is the default
            // separator of LoadFromTextFile, so no extra configuration is needed.
            var trainingDataView = MlContext.Data.LoadFromTextFile<SentimentData>(trainingFileName, hasHeader: false);

            return MlContext.Data.TrainTestSplit(trainingDataView, testFraction: 0.3);
        }
    }
}
That is one large class. It controls the whole process. Let’s split it up and see what it is all about. First, let’s observe the fields and properties of this class:
public string Name { get; protected set; }
protected static string ModelPath => Path.Combine(AppContext.BaseDirectory, "classification.mdl");
protected readonly MLContext MlContext;
protected DataOperationsCatalog.TrainTestData _dataSplit;
protected ITrainerEstimator<BinaryPredictionTransformer<TParameters>, TParameters> _model;
protected ITransformer _trainedModel;
The Name property is used by the class that inherits this one to add the name of the algorithm. The ModelPath property defines where we will store our model once it is trained. Note that the file name has the .mdl extension. Then we have our MlContext so we can use ML.NET functionalities. We instantiate it with a fixed seed (111), so the results are reproducible. The _dataSplit field contains the loaded data; within this structure, data is split into train and test datasets.
The field _model is used by the child classes. These classes define which machine learning algorithm is used in this field. The _trainedModel field is the resulting model that should be evaluated and saved. In essence, the only job of the class that inherits and implements this one is to define the algorithm that should be used, by instantiating an object of the desired algorithm as _model.
Cool, let’s now explore the Fit() method:
public void Fit(string trainingFileName)
{
    if (!File.Exists(trainingFileName))
    {
        throw new FileNotFoundException($"File {trainingFileName} doesn't exist.");
    }

    _dataSplit = LoadAndPrepareData(trainingFileName);
    var dataProcessPipeline = BuildDataProcessingPipeline();
    var trainingPipeline = dataProcessPipeline.Append(_model);

    _trainedModel = trainingPipeline.Fit(_dataSplit.TrainSet);
}
This method is the blueprint for training the algorithms. As an input parameter, it receives the path to the file with training data. After we confirm that the file exists, we use the private method LoadAndPrepareData. This method loads the data into memory and splits it into two datasets, the train and test datasets. We store the returned value in _dataSplit because we need the test dataset for the evaluation phase. Then we call BuildDataProcessingPipeline().
This is the method that performs data pre-processing and feature engineering. For this data there is no need for heavy lifting; we just create word embeddings. Here is the method:
private EstimatorChain<ITransformer> BuildDataProcessingPipeline()
{
    var dataProcessPipeline = MlContext.Transforms.Text.FeaturizeText(
            outputColumnName: "Features",
            inputColumnName: nameof(SentimentData.SentimentText))
        .AppendCacheCheckpoint(MlContext);

    return dataProcessPipeline;
}
Next is the Evaluate() method:
public BinaryClassificationMetrics Evaluate()
{
    var testSetTransform = _trainedModel.Transform(_dataSplit.TestSet);

    return MlContext.BinaryClassification.EvaluateNonCalibrated(testSetTransform);
}
It is a pretty simple method that creates a Transformer object by using _trainedModel and the test dataset. Then we utilize MlContext to retrieve binary classification metrics. Finally, let’s check out the Save() method:
public void Save()
{
    MlContext.Model.Save(_trainedModel, _dataSplit.TrainSet.Schema, ModelPath);
}
This is another simple method that just uses MLContext to save the model into the defined path.
5.4 Trainers
Thanks to all the heavy lifting that we have done in the TrainerBase class, the other trainer classes are pretty simple and focused only on instantiating the ML.NET algorithm. We have ten classes that utilize ML.NET‘s binary classifiers. Let’s take a look at one of them – the DecisionTreeTrainer class:
using SentimentAnalysisMlNet.MachineLearning.Common;
using Microsoft.ML;
using Microsoft.ML.Calibrators;
using Microsoft.ML.Trainers.FastTree;

namespace SentimentAnalysisMlNet.MachineLearning.Trainers
{
    public class DecisionTreeTrainer : TrainerBase<
        CalibratedModelParametersBase<FastTreeBinaryModelParameters, PlattCalibrator>>
    {
        public DecisionTreeTrainer(int numberOfLeaves, int numberOfTrees, double learningRate = 0.2)
            : base()
        {
            Name = $"Decision Tree-{numberOfLeaves}-{numberOfTrees}-{learningRate}";

            _model = MlContext.BinaryClassification.Trainers.FastTree(
                numberOfLeaves: numberOfLeaves,
                numberOfTrees: numberOfTrees,
                learningRate: learningRate);
        }
    }
}
As you can see, this class is pretty simple. We assign the Name and the _model. We use the FastTree class from the BinaryClassification namespace. Notice how we use some of the hyperparameters that this algorithm provides; with these, we can create more experiments. The numberOfLeaves hyperparameter represents the maximum number of leaves created in each decision tree, while numberOfTrees represents the number of trees that are going to be trained. Remember, this implementation uses the MART algorithm, which builds an ensemble of trees, each new tree correcting the errors of the previous ones. The learningRate hyperparameter defines how fast this algorithm learns. The other classes are similar; some have hyperparameters, some don’t – see the sketch below.
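For illustration, here is a minimal sketch of what the RandomForestTrainer used later in Main() might look like. It is not shown in the original post, so the constructor parameters and the model parameters type are assumptions, following the same pattern as DecisionTreeTrainer and ML.NET’s FastForest trainer:

using SentimentAnalysisMlNet.MachineLearning.Common;
using Microsoft.ML;
using Microsoft.ML.Trainers.FastTree;

namespace SentimentAnalysisMlNet.MachineLearning.Trainers
{
    // A sketch, not the author's exact implementation.
    public class RandomForestTrainer : TrainerBase<FastForestBinaryModelParameters>
    {
        public RandomForestTrainer(int numberOfLeaves, int numberOfTrees) : base()
        {
            Name = $"Random Forest-{numberOfLeaves}-{numberOfTrees}";

            // FastForest trains an ensemble of decision trees and averages their outputs.
            _model = MlContext.BinaryClassification.Trainers.FastForest(
                numberOfLeaves: numberOfLeaves,
                numberOfTrees: numberOfTrees);
        }
    }
}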
5.5 Predictor
The Predictor class is here to load the saved model and run some predictions. Usually, this class is not part of the same microservice as the trainers. We usually have one microservice that performs the training of the model. The model is saved into a file, from which another microservice loads it and runs predictions based on user input. Here is what this class looks like:
public class Predictor
{
    protected static string ModelPath => Path.Combine(AppContext.BaseDirectory, "classification.mdl");

    private readonly MLContext _mlContext;
    private ITransformer _model;

    public Predictor()
    {
        _mlContext = new MLContext(111);
    }

    /// <summary>
    /// Runs prediction on new data.
    /// </summary>
    /// <param name="newSample">New data sample.</param>
    /// <returns>An object which contains predictions made by model.</returns>
    public SentimentPrediction Predict(SentimentData newSample)
    {
        LoadModel();
        var predictionEngine = _mlContext.Model
            .CreatePredictionEngine<SentimentData, SentimentPrediction>(_model);

        return predictionEngine.Predict(newSample);
    }

    private void LoadModel()
    {
        if (!File.Exists(ModelPath))
        {
            throw new FileNotFoundException($"File {ModelPath} doesn't exist.");
        }

        using (var stream = new FileStream(ModelPath,
                                           FileMode.Open,
                                           FileAccess.Read,
                                           FileShare.Read))
        {
            _model = _mlContext.Model.Load(stream, out _);
        }

        if (_model == null)
        {
            throw new Exception($"Failed to load Model");
        }
    }
}
In a nutshell, the model is loaded from the defined file, and predictions are made on the new sample. Note that we need to create a PredictionEngine to do so.
5.6 Usage and Results
Ok, let’s put all of this together.
using SentimentAnalysisMlNet.MachineLearning.Common;
using SentimentAnalysisMlNet.MachineLearning.DataModels;
using SentimentAnalysisMlNet.MachineLearning.Predictors;
using SentimentAnalysisMlNet.MachineLearning.Trainers;
using System;
using System.Collections.Generic;

namespace SentimentAnalysisMlNet
{
    class Program
    {
        static void Main(string[] args)
        {
            var newSample = new SentimentData
            {
                SentimentText = "This is awesome!"
            };

            var trainers = new List<ITrainerBase>
            {
                new LbfgsLogisticRegressionTrainer(),
                new AveragedPerceptronTrainer(),
                new PriorTrainer(),
                new SdcaLogisticRegressionTrainer(),
                new SdcaNonCalibratedTrainer(),
                new SgdCalibratedTrainer(),
                new SgdNonCalibratedTrainer(),
                new DecisionTreeTrainer(5, 10),
                new DecisionTreeTrainer(5, 10, 0.1),
                new DecisionTreeTrainer(10, 20),
                new DecisionTreeTrainer(10, 20, 0.1),
                new GamTrainer(),
                new RandomForestTrainer(2, 5),
                new RandomForestTrainer(5, 10),
                new RandomForestTrainer(10, 20)
            };

            trainers.ForEach(t => TrainEvaluatePredict(t, newSample));
        }

        static void TrainEvaluatePredict(ITrainerBase trainer, SentimentData newSample)
        {
            Console.WriteLine("*******************************");
            Console.WriteLine($"{ trainer.Name }");
            Console.WriteLine("*******************************");

            trainer.Fit("C:\\Users\\n.zivkovic\\source\\repos\\YoloV4MlNet\\SentimentAnalysisMlNet\\Assets\\Data\\imdb_labelled.txt");

            var modelMetrics = trainer.Evaluate();

            Console.WriteLine(modelMetrics.ConfusionMatrix.GetFormattedConfusionTable());
            Console.WriteLine(modelMetrics.AreaUnderRocCurve);
            Console.WriteLine($"Accuracy: {modelMetrics.Accuracy:0.##}{Environment.NewLine}" +
                              $"F1 Score: {modelMetrics.F1Score:#.##}{Environment.NewLine}" +
                              $"Positive Precision: {modelMetrics.PositivePrecision:#.##}{Environment.NewLine}" +
                              $"Negative Precision: {modelMetrics.NegativePrecision:0.##}{Environment.NewLine}" +
                              $"Positive Recall: {modelMetrics.PositiveRecall:#.##}{Environment.NewLine}" +
                              $"Negative Recall: {modelMetrics.NegativeRecall:#.##}{Environment.NewLine}" +
                              $"Area Under Precision Recall Curve: {modelMetrics.AreaUnderPrecisionRecallCurve:#.##}{Environment.NewLine}");

            trainer.Save();

            var predictor = new Predictor();
            var prediction = predictor.Predict(newSample);
            Console.WriteLine("------------------------------");
            Console.WriteLine($"Prediction: {prediction.Prediction:#.##}");
            Console.WriteLine($"Probability: {prediction.Probability:#.##}");
            Console.WriteLine("------------------------------");
        }
    }
}
Note the TrainEvaluatePredict() method. This method does the heavy lifting here. In this method, we can inject an instance of a class that inherits TrainerBase, along with a new sample that we want predicted. Then we call the Fit() method to train the algorithm, call the Evaluate() method and print out the metrics, and finally save the model. Once that is done, we create an instance of Predictor, call its Predict() method with the new sample, and print out the prediction. In Main, we create a list of trainer objects, and then we call TrainEvaluatePredict on each of them.
In the list of algorithms, we relied on the hyperparameters to create several variations of Decision Trees. Here are the results:
*******************************
LBFGS Logistic Regression
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 102 | 36 | 0.7391
negative || 65 | 82 | 0.5578
||======================
Precision || 0.6108 | 0.6949 |
0.7204968944099379
Accuracy: 0.65
F1 Score: .67
Positive Precision: .61
Negative Precision: 0.69
Positive Recall: .74
Negative Recall: .56
Area Under Precision Recall Curve: .69
------------------------------
Prediction: True
Probability: .51
------------------------------
*******************************
Averaged Perceptron
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 104 | 34 | 0.7536
negative || 38 | 109 | 0.7415
||======================
Precision || 0.7324 | 0.7622 |
0.8409247757073844
Accuracy: 0.75
F1 Score: .74
Positive Precision: .73
Negative Precision: 0.76
Positive Recall: .75
Negative Recall: .74
Area Under Precision Recall Curve: .85
------------------------------
Prediction: True
Probability:
------------------------------
*******************************
Prior
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 138 | 0 | 1.0000
negative || 147 | 0 | 0.0000
||======================
Precision || 0.4842 | 0.0000 |
0.5
Accuracy: 0.48
F1 Score: .65
Positive Precision: .48
Negative Precision: 0
Positive Recall: 1
Negative Recall:
Area Under Precision Recall Curve: .48
------------------------------
Prediction: True
Probability: .51
------------------------------
*******************************
Sdca Logistic Regression
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 101 | 37 | 0.7319
negative || 34 | 113 | 0.7687
||======================
Precision || 0.7481 | 0.7533 |
0.82362220250419
Accuracy: 0.75
F1 Score: .74
Positive Precision: .75
Negative Precision: 0.75
Positive Recall: .73
Negative Recall: .77
Area Under Precision Recall Curve: .83
------------------------------
Prediction: True
Probability: .76
------------------------------
*******************************
Sdca NonCalibrated
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 102 | 36 | 0.7391
negative || 37 | 110 | 0.7483
||======================
Precision || 0.7338 | 0.7534 |
0.8232771369417332
Accuracy: 0.74
F1 Score: .74
Positive Precision: .73
Negative Precision: 0.75
Positive Recall: .74
Negative Recall: .75
Area Under Precision Recall Curve: .83
------------------------------
Prediction: True
Probability:
------------------------------
*******************************
Sgd Calibrated
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 102 | 36 | 0.7391
negative || 46 | 101 | 0.6871
||======================
Precision || 0.6892 | 0.7372 |
0.8033126293995859
Accuracy: 0.71
F1 Score: .71
Positive Precision: .69
Negative Precision: 0.74
Positive Recall: .74
Negative Recall: .69
Area Under Precision Recall Curve: .79
------------------------------
Prediction: True
Probability: .52
------------------------------
*******************************
Sgd NonCalibrated
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 104 | 34 | 0.7536
negative || 46 | 101 | 0.6871
||======================
Precision || 0.6933 | 0.7481 |
0.8020802523908114
Accuracy: 0.72
F1 Score: .72
Positive Precision: .69
Negative Precision: 0.75
Positive Recall: .75
Negative Recall: .69
Area Under Precision Recall Curve: .79
------------------------------
Prediction: True
Probability:
------------------------------
*******************************
Decision Tree-5-10-0.2
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 99 | 39 | 0.7174
negative || 54 | 93 | 0.6327
||======================
Precision || 0.6471 | 0.7045 |
0.7495316967366656
Accuracy: 0.67
F1 Score: .68
Positive Precision: .65
Negative Precision: 0.7
Positive Recall: .72
Negative Recall: .63
Area Under Precision Recall Curve: .75
------------------------------
Prediction: True
Probability: .53
------------------------------
*******************************
Decision Tree-5-10-0.1
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 106 | 32 | 0.7681
negative || 69 | 78 | 0.5306
||======================
Precision || 0.6057 | 0.7091 |
0.7358523119392685
Accuracy: 0.65
F1 Score: .68
Positive Precision: .61
Negative Precision: 0.71
Positive Recall: .77
Negative Recall: .53
Area Under Precision Recall Curve: .74
------------------------------
Prediction: True
Probability: .52
------------------------------
*******************************
Decision Tree-10-20-0.2
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 97 | 41 | 0.7029
negative || 40 | 107 | 0.7279
||======================
Precision || 0.7080 | 0.7230 |
0.7923198264813172
Accuracy: 0.72
F1 Score: .71
Positive Precision: .71
Negative Precision: 0.72
Positive Recall: .7
Negative Recall: .73
Area Under Precision Recall Curve: .79
------------------------------
Prediction: True
Probability: .61
------------------------------
*******************************
Decision Tree-10-20-0.1
*******************************
TEST POSITIVE RATIO: 0.4842 (138.0/(138.0+147.0))
Confusion table
||======================
PREDICTED || positive | negative | Recall
TRUTH ||======================
positive || 100 | 38 | 0.7246
negative || 46 | 101 | 0.6871
||======================
Precision || 0.6849 | 0.7266 |
0.7857142857142857
Accuracy: 0.71
F1 Score: .7
Positive Precision: .68
Negative Precision: 0.73
Positive Recall: .72
Negative Recall: .69
Area Under Precision Recall Curve: .78
------------------------------
Prediction: True
Probability: .51
------------------------------
Awesome, so we got different predictions from different algorithms, along with different metrics. Note that a lot of algorithms actually perform badly and mark “This is awesome!” as a negative review. Apart from that, a lot of algorithms have low confidence (probability) even though they marked the statement as positive. The best result came from SDCA Logistic Regression, with 76% confidence that the sentence is positive. This is an indication that we need to do additional data preparation.
Conclusion
In this article, we covered a lot of ground. We learned how Sentiment Analysis works and which types of it are out there. As always, we implemented it all using ML.NET.
Thank you for reading!
Nikola M. Zivkovic
CAIO at Rubik's Code
Nikola M. Zivkovic is a CAIO at Rubik’s Code and the author of the book “Deep Learning for Programmers“. He loves knowledge sharing and is an experienced speaker. You can find him speaking at meetups and conferences, and as a guest lecturer at the University of Novi Sad.
Rubik’s Code is a boutique data science and software service company with more than 10 years of experience in Machine Learning, Artificial Intelligence & Software development. Check out the services we provide.