Use Machine Learning to predict customers you might lose – Part 4

So far we have seen how a Dynamics CRM integration can be connected to Azure ML to receive predictions. Once the integration is in place, there is no dearth of possibilities. You might build an alert/flagging capability that prompts a customer service rep to contact a customer whose predictors indicate they might churn, or incorporate predictions into executive reporting so that execs can see churn trends and make decisions to minimise churn.

Insights

One of the things I discussed at the start of this series was being able to get some insight into the key drivers of customer churn, e.g. how do you know which features are most likely to cause churn? Answering such questions begins with analysing your data; a few starting points are:

1. From your data, find out which fields vary with the Churn variable, e.g. does the churn rate increase as a customer's income goes up, or does it depend on their usage?

Measures like correlation, covariance and entropy can help you answer such questions.

2. Find the distribution of your data and identify any outliers, e.g. check whether the data is skewed or the classes are unbalanced. You may need to apply statistical techniques like variance and standard deviation to build a better platform for delving into these insights.
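As an illustrative sketch of the first kind of check in pure Python (the sample records and field values below are invented, not taken from the TelcoOrg dataset):

```python
import statistics

# Hypothetical sample: (overage_minutes, churned) pairs from exported CRM data
records = [(10, 0), (120, 1), (5, 0), (150, 1), (30, 0), (110, 1), (20, 0), (90, 1)]
overage = [r[0] for r in records]
churn = [r[1] for r in records]

def pearson(xs, ys):
    """Pearson correlation between a candidate feature and the churn label."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(overage, churn), 2))  # strongly positive => overage rises with churn

# Class balance check: badly unbalanced classes can mislead the model
print(sum(churn) / len(churn))  # fraction of churners in the sample
```

A correlation near +1 or -1 flags a field worth a closer look; a class fraction far from 0.5 flags an unbalanced dataset.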

Azure Machine Learning provides some modules right off the bat that make this job easier, e.g. the following:

Compute Elementary Statistics

Compute Linear Correlation

Getting advanced insights can be tricky depending on your algorithm or the setup of the experiment (project). But there are ways, e.g. with a bit of Python code you can produce the decision rule tree below. The last label in each box, class = {LEAVE, STAY}, tells us whether the customer will churn based on the path they fall under.

[Figure: decision rule tree, with class = {LEAVE, STAY} in each leaf]

Above is an automatically generated insight telling us that overage is the most important variable in deciding customer churn. If overage exceeds 97.5, a customer is more likely to churn. This does not mean that every customer whose overage is more than 97.5 will churn, nor that everyone whose overage is less than that will stay; Overage is simply the strongest indicator of churn based on our data.

We can even derive decision rules from insights like these, e.g. customers with overage less than 97.5 and leftover minutes less than 24.5 are most likely to stay. Conversely, customers with overage more than 97.5 and average income more than $100059.5 are most likely to leave.
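A sketch of how those two quoted rules could be captured in code (the function name and the UNDETERMINED fall-through are my own; the thresholds are the ones read off the tree above, and the real tree has more branches than these two):

```python
def churn_rule(overage, leftover_minutes, avg_income):
    """Apply the two decision rules quoted from the tree to one customer."""
    if overage < 97.5 and leftover_minutes < 24.5:
        return "STAY"   # low overage, low leftover minutes => most likely to stay
    if overage > 97.5 and avg_income > 100059.5:
        return "LEAVE"  # high overage, high income => most likely to leave
    return "UNDETERMINED"  # other branches of the tree are not captured here

print(churn_rule(50, 10, 80000))    # STAY
print(churn_rule(120, 30, 120000))  # LEAVE
```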

Here is another one that shows the impact of House Value, Handset Value and other features on churn

[Figure: decision tree showing the impact of House Value, Handset Value and other features]

Once decision rules have been identified from insights like these, policies can be made to retain customers who are at risk of churning, e.g. give them discounts, offer them a change of plan, reward them with loyalty offerings, etc.

Where to from here?

Hopefully by now you appreciate the potential of machine learning and recognise the opportunity it provides when complemented with traditional information systems like CRMs, ERPs and document management systems. The field of machine learning is enormous and sometimes quite complex too, as it is based on scientific techniques and mathematics. You need to understand a lot of theory if you want to get into the black box, i.e. how machine learning does what it does.

But the great thing about the Azure Machine Learning suite is that it makes entry into machine learning easier by taking care of the complexities and giving you an easy-to-understand, easy-to-use environment. You have full control over the data structure and algorithms used in your project, and it can be tuned to the needs of your organisation to get the best possible results.

For example, you can tune the example I provided in the following ways:

1. Rather than going with a random forest, you can choose Support Vector Machines or neural networks and compare the results.

2. You are not restricted to JavaScript; you can call the web service from a plugin. That way, in a data migration scenario, you can set the prediction scores as the data is being imported.

3. You can also change the confidence percentage threshold to ignore prediction scores whose confidence is below a certain amount.
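Point 3 amounts to a small thresholding helper along these lines (a sketch; the function name and the 70% default are illustrative, not part of the integration shown earlier):

```python
def accept_prediction(label, confidence, threshold=0.7):
    """Return the predicted label only when the model's confidence clears the threshold."""
    if confidence >= threshold:
        return label
    return None  # below threshold: leave the CRM fields untouched

print(accept_prediction("LEAVE", 0.85))  # LEAVE
print(accept_prediction("LEAVE", 0.55))  # None
```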

So there are a lot of possibilities. I hope you enjoyed the series.

Happy CRM + ML!!

Use Machine Learning to predict customers you might lose – Part 3

Cruising through our machine learning journey and picking up where we left off in the previous instalment, the next step is to expose our machine learning model as a web service so that it can be invoked from within Dynamics CRM.

Azure Machine Learning has a fantastic concept of converting an experiment into a trained model. A trained model is like a compiled version of your experiment that can be exposed via a web service, all at the click of a single button, i.e. Set Up Web Service.

[Screenshot: the Set Up Web Service button]

Azure ML takes care of the rest by deploying the model. Once deployed, you can inspect its configuration under the Web Services section, as shown here

[Screenshot: the deployed model in the Web Services section]

To connect to this web service from within Dynamics CRM, we can use the JavaScript below. We pass CRM data to the service in JSON format and get the prediction results back.

sendRequest: function (avgIncome, overAge, leftOver, houseVal, handsetVal, longCalls) {
    var service = AzureScript.getRequestObject();
    var url = "https://xxxxxxxxx.services.azureml.net/workspaces/xxxxxxxxxxxxxxx/services/xxxxxx/execute?api-version=2.0&details=true";
    var jsonObject =
    {
        "Inputs": {
            "input1": {
                "ColumnNames": [
                    "College",
                    "Avg_Income",
                    "Overage",
                    "Leftover",
                    "House_Value",
                    "Handset_Value",
                    "Over_15min_Calls_Per_Month",
                    "Average_Call_Duration",
                    "Reported_Satisfaction",
                    "Usage",
                    "Considering_Change_Of_Plan",
                    "Outcome"
                ],
                "Values": [
                    [
                        null,
                        avgIncome,
                        overAge,
                        leftOver,
                        houseVal,
                        handsetVal,
                        longCalls,
                        null,
                        null,
                        null,
                        null,
                        null
                    ]
                ]
            }
        },
        "GlobalParameters": {}
    };
    var dataString = JSON.stringify(jsonObject);
    if (service != null) {
        service.open("POST", url, false);
        service.setRequestHeader("X-Requested-With", "XMLHttpRequest");
        service.setRequestHeader("Authorization", "Bearer xxxxxxxxx==");
        service.setRequestHeader("Accept", "application/json");
        service.setRequestHeader("Content-Type", "application/json; charset=utf-8");
        service.send(dataString);
        // Receive and parse the result
        try {
            var requestResults = JSON.parse(service.responseText);
            return requestResults.Results.output1.value.Values[0];
        }
        catch (err) {
            console.log('Unable to interpret result');
        }
    }
}

Let us prepare Dynamics CRM to start consuming this web service. I have created an onSave() event on the Telecom Customer form which passes the relevant data to the Azure service and gets the score back. The JavaScript for that is below.

function onFormSave() {
    // Prepare data - only fields with high information gain
    var houseValue = Xrm.Page.getAttribute('manny_housevalue').getValue();
    var income = Xrm.Page.getAttribute('manny_income').getValue();
    var longcalls = Xrm.Page.getAttribute('manny_longcalls').getValue();
    var overage = Xrm.Page.getAttribute('manny_overage').getValue();
    var phonecost = Xrm.Page.getAttribute('manny_phonecost').getValue();
    var leftOver = Xrm.Page.getAttribute('manny_leftover').getValue();

    var valOutput = AzureScript.sendRequest(income, overage, leftOver, houseValue, phonecost, longcalls);
    if (valOutput != null && valOutput[0] != null && valOutput[1] != null) {
        if (valOutput[0] == "STAY")
            Xrm.Page.getAttribute('manny_predictedchurnstatus').setValue(true);
        else
            Xrm.Page.getAttribute('manny_predictedchurnstatus').setValue(false);
        var prob = parseFloat(valOutput[1]);
        if (prob >= 0 && prob <= 1.0)
            Xrm.Page.getAttribute('manny_predictionconfidencepercentage').setValue(prob * 100);
    }
}

getRequestObject: function () {
 ///<summary>
 /// Get an instance of XMLHttpRequest for all browsers
 ///</summary>
 if (window.XMLHttpRequest) {
 // Chrome, Firefox, IE7+, Opera, Safari
 // ReSharper disable InconsistentNaming
 return new XMLHttpRequest();
 // ReSharper restore InconsistentNaming
 }
 // IE6
 try {
 // The latest stable version. It has the best security, performance,
 // reliability, and W3C conformance. Ships with Vista, and available
 // with other OS's via downloads and updates.
 return new ActiveXObject('MSXML2.XMLHTTP.6.0');
 } catch (e) {
 try {
 // The fallback.
 return new ActiveXObject('MSXML2.XMLHTTP.3.0');
 } catch (e) {
 alertMessage('This browser is not AJAX enabled.');
 return null;
 }
 }
 },

 

These scripts are straightforward and should be self-explanatory. Basically we pass the highly correlated features to the prediction service and get two outputs:

Prediction score -> assigned to -> manny_predictedchurnstatus

Prediction confidence -> assigned to -> manny_predictionconfidencepercentage

They are displayed on the form like this. The integration is live, i.e. the moment you change the data, the score is updated.

[Screenshot: prediction fields on the Telecom Customer form]

In the next blog post, we will touch upon the insights that can be gained from a machine learning integration.

Use Machine Learning to predict customers you might lose – Part 2

Continuing our journey from the previous post, where we defined the churn prediction problem, in this instalment let us create the model in Azure Machine Learning. We are trying to predict the likelihood of a customer's churn based on certain features of their profile, which are stored in the Telecom Customer entity. We will use a technique called supervised learning, where we first train the model on our data so that it learns the trends before it starts giving us insights.

Obviously you need access to Azure Machine Learning. Once you log in, you can create a new experiment. That gives you a workspace designer and a toolbox (somewhat like SSIS/BizTalk) where you can drag controls and feed them into each other. It is a flexible model, and for most tasks you do not need to write code.

Below is a screenshot of my experiment with the toolbox on the left

[Screenshot: the experiment, with the toolbox on the left]

Machine learning is slightly atypical territory for a CRM audience. I cannot fit the full details of each of these tools into this blog, but I will touch on each step so that you understand at a high level what is going on inside these boxes. Let us address them one by one.

Dynamics CRM 2016 Telecom

This module is the input data module, where we read the CRM customer information as a dataset. At the time of writing, there is no direct connection available from Azure Machine Learning to CRM Online. But where there is a will, there is a way, i.e. I discovered that you can connect to CRM in the following ways:

1. Schedule a daily export of CRM data to a location that Azure Machine Learning can read, e.g. Azure Blob storage or a web URL over HTTP

2. Write a small Python-based module that connects to Dynamics using Azure Active Directory; the module can then pass the data to Azure using a DataFrame
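To illustrate option 1, here is a minimal sketch of what consuming such an export looks like once the file has landed somewhere readable; the CSV content and column names below are invented stand-ins for the real export:

```python
import csv
import io

# A tiny stand-in for a daily CRM export dropped into blob storage (hypothetical columns)
export_csv = """Avg_Income,Overage,Leftover,Outcome
52000,110,5,LEAVE
87000,20,40,STAY
"""

# Parse the export into a list of dicts, one per Telecom Customer record
rows = list(csv.DictReader(io.StringIO(export_csv)))
print(len(rows), rows[0]["Outcome"])
```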

In my experience, an automatic sync from Dynamics to Azure ML is not that important, but it is important the other way round, i.e. from Azure ML to Dynamics.

Split Data

This module splits your data into two sets:

1. Training dataset – the data from which the machine learning model will learn

2. Testing dataset – the data against which the accuracy of the model will be determined

I have chosen a stratified split, which ensures that the testing dataset is balanced with respect to the classes being predicted. The split ratio is 80/20, i.e. 80% of the records will be used for training and 20% for testing.
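A stratified 80/20 split can be sketched in plain Python as below; this is an illustration of the idea, not what the Split Data module actually runs, and the sample rows are invented:

```python
import random
from collections import defaultdict

def stratified_split(rows, label_index, train_fraction=0.8, seed=42):
    """80/20 split that keeps the class ratio the same in both sets."""
    by_class = defaultdict(list)
    for row in rows:
        by_class[row[label_index]].append(row)
    rng = random.Random(seed)
    train, test = [], []
    for label, members in by_class.items():
        rng.shuffle(members)                      # shuffle within each class
        cut = int(len(members) * train_fraction)  # 80% of each class to training
        train.extend(members[:cut])
        test.extend(members[cut:])
    return train, test

# 100 hypothetical customers, half LEAVE and half STAY
rows = [("c%d" % i, "LEAVE" if i % 2 else "STAY") for i in range(100)]
train, test = stratified_split(rows, label_index=1)
print(len(train), len(test))  # 80 20
```

Because the split is per class, the test set keeps the same LEAVE/STAY ratio as the full dataset.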

Two-class Decision Forest

This is the main classifier, i.e. the module that does the grunt of the work. The classifier of choice here is a random forest with bootstrap aggregation. Two-class makes sense for us because our prediction has two outcomes, i.e. whether or not the customer will churn.

Random forests are fast classifiers and very difficult to overfit; rather than taking one path, they learn your data from different angles (as an ensemble of trees). At the end, the scores of the various trees are combined into an overall prediction score. You can read more about this classifier here.

Train Model

This module connects the classifier to the data. As you can see in the screenshot of the experiment above, there are two arrows coming out of Split Data; the one on the left is the 80% portion, i.e. the training dataset. The output of this module is a trained model that is ready to make predictions.

Score Model

This step uses the trained model from the previous step and tests its accuracy against our test data. Put simply, here we start feeding the model data it has not seen before and count how many times it gives the correct prediction versus the wrong prediction.

Evaluate Model

The scores (hits vs misses) generated by the previous modules are evaluated in this step. In data science there are standard metrics to measure this kind of performance, e.g. confusion matrices, ROC curves and many more. Below is a screenshot of the confusion matrix.

[Screenshots: Evaluate Model output and the confusion matrix]

I know there is a lot of confusing detail here (hence the name confusion matrix), but as a rule of thumb we can focus on the AUC, i.e. the area under the ROC curve. As shown in the results above, we have a decent AUC of 72.9% (roughly, the probability that the model ranks a random churner above a random non-churner). A higher number does not necessarily equate to a better model; a very high figure (e.g. 90%+) can be a sign of overfitting, i.e. a state where your model does very well on the sample data but not so well on real-world data. So our model is good to go.
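To make the "hit vs miss" counting concrete, here is a small sketch of deriving confusion-matrix metrics by hand (the labels are invented examples; AUC itself needs ranked scores, so only accuracy, precision and recall are shown):

```python
def confusion_metrics(actual, predicted, positive="LEAVE"):
    """Count hits and misses and derive accuracy, precision and recall."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    return {
        "accuracy": (tp + tn) / len(actual),
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of predicted churners, how many really churned
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of real churners, how many we caught
    }

actual    = ["LEAVE", "LEAVE", "STAY", "STAY", "LEAVE", "STAY"]
predicted = ["LEAVE", "STAY",  "STAY", "LEAVE", "LEAVE", "STAY"]
print(confusion_metrics(actual, predicted))
```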

You can read more about the metrics and terms above here.

In the next blog we will deploy and integrate the model with Dynamics CRM.

Use Machine Learning to predict customers you might lose – Part 1

“Customer satisfaction is worthless. Customer loyalty is priceless.”

Jeffrey Gitomer

Business is becoming increasingly competitive these days, and acquiring new customers increasingly difficult. The wisest thing to do in this cut-throat scenario is to hold on to your existing customer base while trying to develop new business. Realistically, no matter how hard it tries, every organisation still loses a percentage of its customers every year to the competition. This process of losing customers is called churn.

Progressive organisations take churn seriously; they want to know in advance approximately how many customers they are going to lose this year and what is causing the churn. Having insight into customer churn at least gives an organisation the opportunity to proactively take measures to control it, before it is too late and the customer is gone.

Two pieces of information help the most when it comes to minimising churn:

1. Which customers are we going to lose this year?

2. What are the biggest drivers of customer churn?

The answers to these questions are often hidden in the customer data itself, but revealing them from swathes of data is an art – rather, a science called data science. With recent advances in practical data science techniques like machine learning, getting these answers is becoming feasible even for small organisations that do not have the luxury of a data science team, thanks to services like Azure Machine Learning, which are democratising these advanced techniques to the point where even a small customer can leverage them to solve their business puzzles.

Let me show you how your Dynamics CRM can leverage the power of machine learning to get insights into the key drivers of customer churn. In this blog series, we will build a machine learning model that answers the questions regarding churn. I have divided the series into four parts:

Part 1 – Introduction

Part 2 – Creating a Machine Learning model

Part 3 – Integrate the model with Dynamics CRM

Part 4 – Gaining insights within Dynamics CRM

I will take the example of a telecom organisation, but the model can be extended to any kind of organisation, in any capacity and from any industry.

Scenario

Let us say there is a telecom company called TelcoOrg which uses Microsoft CRM 2016 and has an entity called Telecom Customer that stores each customer's telco profile. Such a profile may include data regarding the customer's mobile plan, phone usage, demographics and reported satisfaction.

Understanding the features

In data science projects, it is crucial to understand the data points (called features). You need to carefully select the features that are relevant to the problem at hand; some of the features also need to be engineered and normalised before they start generating information gain. Below are the features we will use in this churn scenario.

Let me quickly explain the features so that we can understand the information contained in them

Feature – Information

Has a College degree? – Whether the customer has a college degree

Cost price of phone – Price of the customer's phone as per the plan/contract with TelcoOrg

Value of customer's house – Approximate value of the customer's house, based on property information websites like RPData, etc.

Average Income – Yearly income as reported by the customer

Leftover minutes per month – Average number of minutes a customer normally does not use from the monthly quota

Average call duration – Average duration of calls made, based on call history

Usage category – The category the customer's phone usage falls under compared to other customers, e.g. Very High, High, Average, Low or Very Low

Average overcharges – Average number of times a customer is overcharged per month

Average long duration calls – Average number of calls per month that are more than 15 minutes long

Considering change? – How the customer responded to TelcoOrg's survey when asked if they are considering changing to another provider, e.g. Yes considering, Maybe, Not looking, etc.

Reported level of satisfaction – How the customer responded to TelcoOrg's survey when asked if they are satisfied with TelcoOrg's service, e.g. Unsatisfied, Neutral, Satisfied

Account Status – Current status of the customer (i.e. whether they have left or are currently active)

Predicted Churn Status – The predicted status returned by the Azure Machine Learning web service

Prediction Confidence Percentage – How confident the Azure Machine Learning web service is in its prediction. A threshold can be set so that only predictions above it are considered, e.g. only those where the service is at least 70% confident.

The screenshot below shows the information from the Telecom Customer entity. The section highlighted in blue shows predictions based on the Azure Machine Learning web service. Whenever any of the fields on this CRM form changes, the web service updates its prediction scores based on the record's data. I will provide details later in the series on how I built this integration.

[Screenshot: the Telecom Customer form, with the prediction section highlighted in blue]

Below is a screenshot of some of these records

[Screenshot: sample Telecom Customer records]

We will achieve the following business benefits using Azure Machine Learning:

1. Customers who are predicted to be at a higher risk of leaving (churning) can be flagged, so that customer retention teams can get in touch with them to proactively address their concerns in a bid to retain them

2. Find which factors affect churn the most, i.e. out of all these fields, which are more likely to make a customer leave than others

3. We will also get insights into the business rules that dictate churn, i.e. the drivers

I hope you understand the problem now and find it interesting so far. Let us meet in the next part of the blog, where I will show how the machine learning model is created.

How to write a thinking engine for Dynamics CRM

Time to uncover the core functionality that will turn Dynamics CRM into a machine that can learn: its brain! If you haven't been through the previous posts – Part 1 and Part 2 – I recommend you do, to make better sense of the following content. Hoping you understand what we are doing, let's keep cruising.

Analyse Dynamics CRM data

Let us see a sample from our feature set first i.e. a Case record in Dynamics CRM

[Screenshot: a sample Case record in Dynamics CRM]

The engine will train on such data. It will look for measurable attributes like the ETA given by the support rep (estimated), how much time the support rep actually took (actual), and the nature of the work (whether the query came from a bank or a government agency); other variables can be introduced depending on what is important for your organisation.

 

Then the engine will start learning: not just by understanding meaningful words but also by correlating them. In data science, it is very important to focus only on those attributes that provide the most information gain, so we need to extract only the most meaningful attributes that pertain to the problem at hand, i.e. predicting which department the query belongs to – Tax, Investment or Medical. In our scenario, the most meaningful phrases are noun phrases, i.e. proper nouns, combinations of common nouns, industry jargon, etc. So our engine should be able to separate this critical information from a big blurb of text while staying away from the common words that occur in every email/query.

Note: In any other kind of action-based application, verb phrases may be more important than nouns, so you need to adjust your extraction module accordingly – horses for courses.

 

Designing the engine (brain)

We will use three key Data Science concepts to build this engine

Natural Language Processing

This process will involve tokenisation and meaningful keyword extraction

Term Frequency – Inverse Document Frequency

We will use this measure to weight terms and compute similarity between documents for our classification problem

Support Vector Machine

This will be the classification algorithm for the task at hand, i.e. determining the department

Below are various phases involved in the brain training

[Diagram: the phases involved in training the brain]

 

Writing the engine (brain)

I have written the grammar engine below, which uses regular expressions for synthesis. I found that tokenisation is much faster and more accurate when you use regular expressions, as they give the NLTK tagger a jump start.

It then uses a bigram approach for grammatical tagging of the text. This approach is more effective than a unigram approach because it considers the context of the word in the sentence before tagging it (rather than just the word itself).

After the synthesis, tokenisation and tagging, we move on to keyword definition and extraction. I am sharing the source code below; you can tune it to suit your requirements. It uses Python's NLTK library and the Brown corpus for bigram tagging.

Tokeniser.py
# >>>>>>>> Manny Grewal - 2016  <<<<<<<<<<<<
# Fast and simple POS tagging module with emphasis on key phrases
# Based on Brown Corpus - News
# Below are the regular expressions that give a jump start to the tagger
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
import nltk
from nltk.corpus import brown

taggingDatabase = brown.tagged_sents(categories='news')
tokenGrammar = nltk.RegexpTagger(
    [(r'(\W)', 'CD'), #special chars
     (r'(\d+)', 'CD'), #digits only
     (r'\'*$', 'MD'), 
     (r'(The|the|A|a|An|an)$', 'AT'), # match articles
     (r'^-?[0-9]+(.[0-9]+)?$', 'CD'), # match amounts and decimals
     (r'.*able$', 'JJ'),
     (r'(?<![!.?]\s)\b[A-Z]\w+', 'NNP'), # proper noun phrases
     (r'.+ness$', 'NN'),
     (r'.*ly$', 'RB'),
     (r'.*s$', 'NNS'),
     (r'.*ing$', 'VBG'),
     (r'.*ed$', 'VBD'),    
     (r'.*', 'NN')
])
uniGramTagger = nltk.UnigramTagger(taggingDatabase, backoff=tokenGrammar)
biGramTagger = nltk.BigramTagger(taggingDatabase, backoff=uniGramTagger)

# Grammar rules 
#This grammar decides the three-word and two-word phrases and which tokens should be chosen
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
triConfig = {}
triConfig["NNP+NNP+NNP"] = "NNP" # New York City
triConfig["NNP+IN+NNP"] = "NNP" # Ring of Fire
#triConfig["NN+NN+NN"] = "NN" # capital gains tax

biConfig = {}
biConfig["NNP+NNP"] = "NNP"
biConfig["NN+NN"] = "NNI"
biConfig["NNP+NN"] = "NNP"
biConfig["NN+NNP"] = "NNP"
biConfig["AT+NNP"] = "NNP"
biConfig["JJ+NN"] = "NNI"
biConfig["VBG+NN"] = "NNI"
biConfig["RBT+NN"] = "NNI"

uniConfig ={}
uniConfig["NNP"] = "NNP"
uniConfig["NN"] = "NN"
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
       
# Split the sentence into single words/tokens
def tokeniseSentence(textData):
    tokens = nltk.word_tokenize(textData)
    return tokens

# generalise special POS tags
def replaceTagWithGeneral(tagValue):
    if(tagValue=="JJ-TL" or tagValue=="NN-TL" or tagValue=="NNPS"):
        return "NNP"
    elif(tagValue=="NNS"):
        return "NN"
    else:
        return tagValue


# Extract the main topics from the sentence
def ExtractKeyTokens(textData):
    tokens = tokeniseSentence(textData)
    generatedTags = biGramTagger.tag(tokens)

    
    # replace special tags with general tag
    for cnt, (w,t) in enumerate(generatedTags):
        replacedVal = replaceTagWithGeneral(t)
        generatedTags[cnt]=(w,replacedVal)
   
    matchedTokens=[]

    #process trigrams
    remainingTags=len(generatedTags)
    currentTag=0
    while remainingTags >= 3:
        firstTag = generatedTags[currentTag]
        secondTag = generatedTags[currentTag + 1]
        thirdTag = generatedTags[currentTag + 2]
        configKey = "%s+%s+%s" % (firstTag[1], secondTag[1], thirdTag[1])
        value = triConfig.get(configKey)
        if value:
            for l in range(0,3):
                generatedTags.pop(currentTag)
                remainingTags-=1
            matchedTokens.append("%s %s %s" %   (firstTag[0], secondTag[0], thirdTag[0]))
        currentTag+=1
        remainingTags-=1

    #process bigrams
    remainingTags=len(generatedTags)
    currentTag=0
    while remainingTags >= 2:
        firstTag = generatedTags[currentTag]
        secondTag = generatedTags[currentTag + 1]            
        configKey = "%s+%s" % (firstTag[1], secondTag[1])
        value = biConfig.get(configKey)
        if value:
            for l in range(0,2):
                generatedTags.pop(currentTag)
                remainingTags-=1
            matchedTokens.append("%s %s" %   (firstTag[0], secondTag[0]))
        currentTag+=1
        remainingTags-=1

    #process unigrams
    remainingTags=len(generatedTags)
    currentTag=0
    while remainingTags >= 1:
        firstTag = generatedTags[currentTag] 
        value = uniConfig.get(firstTag[1])
        if value:
            generatedTags.pop(currentTag);
            remainingTags-=1
            matchedTokens.append(firstTag[0])
        currentTag+=1
        remainingTags-=1
    
    return set(matchedTokens)

 

 

To keep this post relevant to the Dynamics CRM audience and not flood it with too much mathematical complexity, I will describe in a nutshell the steps I performed to develop the ML engine:

1. Wrote a program that gets the key phrases out of the text

2. Fed the phrases to a linear SVM classifier, using TF-IDF weights as the feature representation

3. Trained the engine on a corpus of around 300 tickets, 100 from each category

4. Tested it using a rolling-window approach with a 10% hold-out

5. Adjusted the coefficients of the kernel to give the best results while avoiding over-fitting of the model

6. Once trained, I pickled my engine and deployed it as a web service that accepts a ticket description as input and predicts the department
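As a minimal, pure-Python illustration of the TF-IDF idea behind step 2 (the real engine uses NLTK phrase extraction and a linear SVM; the department texts and the nearest-department scoring below are invented stand-ins):

```python
import math

# One tiny "document" of representative terms per department (invented examples)
docs = {
    "Tax":        "capital gains tax return deduction",
    "Investment": "portfolio shares dividend investment fund",
    "Medical":    "medical claim hospital cover premium",
}
tokenised = {dept: text.split() for dept, text in docs.items()}

def tfidf(term, doc_tokens, all_docs):
    """Term frequency in one document, discounted by how common the term is overall."""
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for d in all_docs if term in d)
    idf = math.log(len(all_docs) / df) if df else 0.0
    return tf * idf

def predict(ticket):
    """Score each department by summed TF-IDF weight of shared terms (a stand-in for the SVM)."""
    tokens = ticket.lower().split()
    scores = {
        dept: sum(tfidf(t, d, list(tokenised.values())) for t in tokens if t in d)
        for dept, d in tokenised.items()
    }
    return max(scores, key=scores.get)

print(predict("question about my tax deduction"))  # Tax
```

Common words like "my" carry no weight because they match no department document; distinctive terms like "deduction" dominate the score.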

 

Integration of Dynamics CRM with the prediction service

Let us look at how CRM connects to the machine learning web service.

A plugin fires on creation of a Case; it passes the ticket description and receives the predicted department, as shown below by the predictedDepttResult variable.

Below is the source code of the plugin; it uses JSON over HTTP to connect to the web service.

CasePredictTeam.cs

namespace Manny.Xrm.BusinessLogic
{
    public class CasePredictTeam : IPlugin
    {
         public void Execute(IServiceProvider serviceProvider)
        {
            //Extract the tracing service for use in debugging sandboxed plug-ins.
            ITracingService tracingService =  (ITracingService)serviceProvider.GetService(typeof(ITracingService));
           
            // Obtain the execution context from the service provider.
            IPluginExecutionContext context = (IPluginExecutionContext)  serviceProvider.GetService(typeof(IPluginExecutionContext));

            //Extract the crm service for use in debugging sandboxed plug-ins.
            IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService crmService = serviceFactory.CreateOrganizationService(context.UserId);
          
            if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
            {                
                Entity entity = (Entity)context.InputParameters["Target"];             
                if (entity.LogicalName != "incident")
                    return;              
                try
                {
                    if (entity.Attributes.Contains("description"))
                    {
                        var url = "http://<put your host name and WS here>/PredictTicketDeptt/";                     
                       
                        string predictedDepttResult = "(default)";
                        var httpWebRequest = (HttpWebRequest)WebRequest.Create(url);
                        httpWebRequest.ContentType = "application/json";
                        httpWebRequest.Method = "POST";                       
                        using (var streamWriter = new StreamWriter(httpWebRequest.GetRequestStream()))
                        {
                            string rawDesc = (string) entity.Attributes["description"];
                            rawDesc = EncodeJson(rawDesc);
                            string json = "{\"descp\":\"" + rawDesc + "\"}";
                            streamWriter.Write(json);
                            streamWriter.Flush();
                            streamWriter.Close();
                            tracingService.Trace("2");
                            var httpResponse = (HttpWebResponse)httpWebRequest.GetResponse();
                            using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
                            {
                                tracingService.Trace("3");
                                predictedDepttResult = streamReader.ReadToEnd();
                            }
                            tracingService.Trace("4");
                        }

                        Entity caseToBeUpdated = new Entity("incident");
                        tracingService.Trace(predictedDepttResult);
                        caseToBeUpdated.Attributes.Add("incidentid", entity.Id);

                        var optionSetValue = GetOptionSetValue(predictedDepttResult);
                        tracingService.Trace(optionSetValue.ToString());
                        caseToBeUpdated.Attributes.Add("manny_department", new OptionSetValue(optionSetValue));                       
                        crmService.Update(caseToBeUpdated);
                     
                    }
                }
                catch (FaultException<OrganizationServiceFault> ex)
                {
                    throw new InvalidPluginExecutionException("An error occurred in the CasePredictTeam plug-in.", ex);
                }

                catch (Exception ex)
                {
                    tracingService.Trace("Unhandled exception: {0}", ex.ToString());
                    throw;
                }
            }            
        }
        // Maps the predicted department text to the manny_department option set values
        public int GetOptionSetValue(string deptt)
        {
            if (deptt == "Tax")
                return 159690000;
            else if (deptt == "Investment")
                return 159690001;
            else
                return 159690002; // Medical / default
        }
        public string EncodeJson(string rawDesc)
        {
            // Escape backslashes before quotes, otherwise the backslashes added
            // while escaping quotes get doubled and produce invalid JSON
            return rawDesc.Replace("\\", "\\\\").Replace("\"", "\\\"")
                .Replace("\r\n", " ").Replace("\n", " ").Replace("\r", " ")
                .Replace("\t", " ").Replace("\b", " ");
        }
    }
}

 

Once the plugin is registered, the department will be predicted automatically as soon as a ticket is created.
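For reference, the plugin above posts a payload like {"descp": "…"} and reads the predicted department back from the response. Below is a hypothetical Python stub of the service side of that contract; the keyword match and all keywords are made up and merely stand in for the real trained model.

```python
import json

# Hypothetical keyword lists standing in for the trained classifier
KEYWORDS = {
    "Tax": ["tax", "deduction", "return"],
    "Investment": ["invest", "shares", "portfolio"],
    "Medical": ["medicare", "doctor", "claim"],
}

def predict_department(request_body: str) -> str:
    """Receive the plugin's {"descp": "..."} payload, return a department name."""
    descp = json.loads(request_body)["descp"].lower()
    for dept, words in KEYWORDS.items():
        if any(w in descp for w in words):
            return dept
    return "Medical"  # fallback class
```

The returned text maps onto the option set values handled by GetOptionSetValue in the plugin.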

You can build the Support Rep Prediction WS using a similar approach. Rather than predicting based on the description text, it will use ETA, Actual Time Spent and nature of work as the three parameters to choose the best Rep i.e. it will predict the Rep who will take the minimum amount of time to resolve the case based on the nature of work involved. It is also a classification problem, but rather than classifying into 3 classes (Tax, Investment and Medical), you will be classifying into N classes, where N is the number of Support Reps in the team.
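As a sketch of that N-class idea, here is a toy scikit-learn example. All numbers and the Rep names Raj and Mia are fictitious; the features are [ETA, actual time spent, nature-of-work code] and the labels are Support Reps.

```python
from sklearn.linear_model import LogisticRegression

# Fictitious training history: [ETA hours, actual hours, nature-of-work code]
X = [[2, 1.5, 0], [3, 2.0, 0],   # Helen's past tickets
     [4, 6.0, 1], [5, 7.0, 1],   # Raj's past tickets
     [1, 0.5, 2], [2, 1.0, 2]]   # Mia's past tickets
y = ["Helen", "Helen", "Raj", "Raj", "Mia", "Mia"]

# One classifier, N classes -- one class per Support Rep
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict([[4.5, 6.5, 1]]))  # a ticket resembling Raj's history
```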

 

I hope you got a basic idea of the possibilities and potential of machine learning. In the world of Data Science, the sky is the limit and a lot of wonders are waiting to be explored.

 

Happy treasure-hunting !!

Turn Dynamics CRM into a thinking machine

The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom

Isaac Asimov

As the saying goes, wisdom is an asset of unmatchable importance, and wisdom comes with intelligence. In computers, intelligence comes from extracting meaning out of data using Data Science. A little tinge of intelligence can turn an instruction-taking information system into an instruction-giving thinking machine.

In the previous post we discussed the idea of creating an intelligent routing system in Dynamics CRM that can tell which Support Rep is best suited to resolve a customer ticket. If you missed the introductory post and the agenda, I recommend reading it first to understand the following content better.

 

Prepare Dynamics CRM to marry Data Science

Before we start training our machine learning engine, we need to prepare our data to suit the data science algorithms better. We will use the following techniques

1. Classification using Support Vector Machines: to find out which team/department the ticket belongs to

2. Logistic Regression: to predict the most suitable agent

 

We will first train our Machine Learning engine using a supervised approach based on the existing tickets. In a nutshell, it will understand the characteristics of various types of tickets, convert them into mathematical form, and then predict by applying mathematical formulas to those characteristics. Some examples of these characteristics are:

  • Which Support Rep is better at handling certain kinds of customers
  • Which Support Rep generally resolves a ticket earlier than estimated
  • What are the traits of a ticket that belongs to certain category e.g. Investment Category
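To make this concrete, the department-classification part could be sketched as a TF-IDF + Support Vector Machine pipeline. This is a scikit-learn sketch on a made-up mini-corpus, not the actual training set from our ticket database.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Fictitious descriptions standing in for the historical ticket database
descriptions = [
    "How much tax do I owe on my rental income?",
    "Can I deduct work expenses from my tax return?",
    "Should I move my savings into index funds?",
    "Is now a good time to buy shares?",
    "My medicare claim was rejected, what can I do?",
    "Which doctor visits are covered by my plan?",
]
departments = ["Tax", "Tax", "Investment", "Investment", "Medical", "Medical"]

# TF-IDF turns each description into mathematical form; the SVM learns
# which word weights characterise each department
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(descriptions, departments)
print(model.predict(["How do I lodge my tax return?"]))
```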

 

 

Let us see what our data looks like…

If you recall, we are an advisory support organisation that primarily deals with Tax, Investment and Medical queries. Below is how our historical ticket database looks

image

 

It is all fictitious data. Neither the customers nor the Reps are real, but the ticket contents are realistic.

You cannot train a machine learning system with rubbish content; your samples have to be relevant to the domain for which you are building the ML model.

Data that will be used by Data Science

Let me explain the fields

Description – The email or phone transcript from the customer, containing the queries/questions and the problem definition of the ticket

Department – The Department to which the query belonged. In the past it was manually set by a Tier 1 Agent, but now our system will predict it automatically

Type – The industry sector / vertical of the Customer. It will be used in the algorithm as well, which I will explain in the upcoming posts

Support Rep – The Rep who worked on the query

Estimated Time – The ETA given by the Support Rep before starting work

Total Time Spent – The actual time taken by the Support Rep to perform the work before moving to the next query
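The fields above can be pictured as a small table. Here is a pandas sketch with fictitious rows, plus a quick estimate-vs-actual sanity check of the kind we might run before training.

```python
import pandas as pd

# Fictitious rows mirroring the fields described above
tickets = pd.DataFrame([
    {"Description": "Can I deduct home office costs?", "Department": "Tax",
     "Type": "Retail", "Support Rep": "Helen",
     "Estimated Time": 3.0, "Total Time Spent": 2.5},
    {"Description": "Is my portfolio too risky?", "Department": "Investment",
     "Type": "Finance", "Support Rep": "Raj",
     "Estimated Time": 4.0, "Total Time Spent": 5.0},
])

# Negative overrun = the Rep beat their ETA on that ticket
tickets["Overrun"] = tickets["Total Time Spent"] - tickets["Estimated Time"]
print(tickets[["Support Rep", "Overrun"]])
```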

Architecture

Below is a view of our Intelli-routing engine that shows how it will fit inside CRM and integrate with the Machine Learning WS

image

 

It's self-explanatory: basically I will build my engine using Python. It could be deployed in Azure Machine Learning (as it supports Python), but my Azure access has expired so I will use another provider to host my web service.
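Whichever provider hosts it, the deployment boils down to exposing the model over HTTP. Below is a minimal WSGI sketch of that idea; it is hypothetical, and the predict function is only a placeholder for the trained classifier.

```python
import json

def predict(descp):
    # Placeholder for the trained classifier
    return "Tax" if "tax" in descp.lower() else "Investment"

def application(environ, start_response):
    # Read the POSTed {"descp": "..."} payload sent by the CRM plugin
    length = int(environ.get("CONTENT_LENGTH") or 0)
    body = environ["wsgi.input"].read(length)
    descp = json.loads(body)["descp"]
    start_response("200 OK", [("Content-Type", "application/json")])
    return [json.dumps({"department": predict(descp)}).encode("utf-8")]
```

Any WSGI server (e.g. wsgiref.simple_server from the standard library) can serve this application.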

 

In the next post we will start building our Intelli-routing model in Python and train the classifier.

Dynamics CRM – Prediction based routing

Imagine a day at work of a front desk staff who is handling the support mailbox or reception of any mid-size organisation. It is not uncommon for them to receive hundreds, if not thousands, of email and phone enquiries every day.

If this organisation happens to be using Dynamics CRM, then every enquiry is usually handled as below

  1. Read the description of the enquiry
  2. Understand
  3. Determine what team/department the query belongs to
  4. If there are multiple members in that team, then find who is best suited to answer it
  5. Assign it to that person

Move to the next enquiry. Repeat 1 to 5 above…. hundreds of times.

Now imagine the time spent on every enquiry to perform steps 1 to 5. A fair guess – it can easily take 10 minutes to grasp, digest and route the query.

Realistically, for most queries there is often a support rep matching ritual, back and forth, something like

Hey, who do you think this should go to?

Oh sorry! so it was meant for Helen, no worries I can assign to her

Have you worked on this kind of stuff earlier?

When will you get free to look at it, customer needs an answer today

 

Capture

Courtesy: http://www.glasbergen.com

We have already spent 20 minutes and the ticket has not even landed on the support rep’s desk yet !!

Well – time is money. If we can find a solution to save this time, it's a great return on investment.

Supercharge your Tier 1 Support

Through this blog series, I will try to explore a solution to this problem using Machine Learning. We will automate steps 1 to 5, full automation.

A machine algorithm will predict

  • Which team does this query belong to?
  • Which agent will get free first, and which agent is best-skilled to answer this query?

And the machine would not take 20 minutes to decide; it will take 20 seconds

 

Scenario

Let us lay out a scenario

A big advisory firm that uses Dynamics CRM offers many kinds of services to its clients. They have professional advisors on their team who can answer queries across the range, no matter if they are tax enquiries, investment or even medical.

Each team – tax, investment and medical – has a range of support reps available to handle enquiries.

Traditionally, Tier 1 staff created support cases upon receiving customer enquiries and assigned them to the relevant support rep by following steps 1 to 5 described at the start.

Upon assignment, the Support Rep gives an ETA before working on the query, and the system tracks how much time the Support Rep actually spent.

We will also track some other parameters which can be leveraged by the ML engine.

 

 

Machine Learning Algorithm

The Machine Learning approach will tackle this situation as shown below

1. ML engine will train itself by synthesising content and correlating parameters that belong to a category

2. ML engine will be deployed as a web service (compiled model) to be consumed by Dynamics CRM

3. It will start predicting which category the enquiry belongs to after reading, tokenising and tagging the content

4. Once it knows the category, it will then find who is best suited to answer the query using parameters like

  • Which customer support rep will be the earliest to get free to look at this
  • Which customer support rep is generally good at these kinds of queries
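These two parameters can be folded into a simple score. Here is a hypothetical sketch with made-up numbers, where a lower score (time until free plus expected time to resolve) is better.

```python
# Fictitious Rep stats: hours until free, and average past resolution
# time per query category (a stand-in for "generally good at these queries")
reps = {
    "Helen": {"free_in_hours": 1.0,
              "avg_resolution_hours": {"Tax": 2.0, "Investment": 5.0}},
    "Raj":   {"free_in_hours": 4.0,
              "avg_resolution_hours": {"Tax": 3.0, "Investment": 1.5}},
}

def best_rep(category):
    # Lower is better: time until free plus expected time to resolve
    return min(reps, key=lambda r: reps[r]["free_in_hours"]
               + reps[r]["avg_resolution_hours"][category])

print(best_rep("Investment"))
```

Note that availability and skill can trade off: a Rep who frees up later may still win if they resolve that category much faster.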

 

Techniques

We will see and use the following machine learning techniques to build the smarts

  • Tokenisation & Semantic Analysis using Natural Language Processing
  • Support Vector Machines
  • Term Frequency-Inverse Document Frequency (TF-IDF)
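To give a feel for the TF-IDF technique, here is a tiny hand-rolled sketch of the weight it assigns: a term that appears in many documents gets a lower weight than a rare, distinctive one.

```python
import math

# Tiny pre-tokenised corpus (fictitious ticket fragments)
docs = [
    ["tax", "return", "deadline"],
    ["shares", "portfolio", "return"],
    ["medicare", "claim"],
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)                 # term frequency in this doc
    df = sum(1 for d in corpus if term in d)        # documents containing the term
    idf = math.log(len(corpus) / df)                # rarer term -> larger idf
    return tf * idf

print(tf_idf("tax", docs[0], docs))     # rare term -> higher weight
print(tf_idf("return", docs[0], docs))  # appears in 2 of 3 docs -> lower weight
```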

 

See you in the next post