Calling external web services from CRM Online Sandboxed plugin

I have seen this question many times – Can you call external endpoints from within a plugin running inside Sandbox of Dynamics CRM Online?

Recently I faced the same situation, where the sandbox did not allow me to call an external endpoint.

On a positive note, I was able to overcome this issue with a little tweak, and I thought it might be useful to share it with the community.

 

Problem

Say we need to call a JSON-based web service from within a CRM plugin.

 

Code that would not work

var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", wsKey);
client.BaseAddress = new Uri("<your ws url>");

//Say jsonBody is a typed object
HttpResponseMessage response = await client.PostAsJsonAsync("", jsonBody);

if (response.IsSuccessStatusCode)
{
    string result = await response.Content.ReadAsStringAsync();
    var typedResult = JsonConvert.DeserializeObject<Results>(result);
}

 

Modified Code that will work

var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", wsKey);
client.BaseAddress = new Uri("<your ws url>");

//Rather than using a typed object, construct the JSON payload manually as a string
string jsonBody = "{\"Inputs\": {\"input1\": {\"ColumnNames\": [\"AnnualReview\",\"Category\"],........";

//Rather than using PostAsJsonAsync, use PostAsync
HttpResponseMessage response = await client
    .PostAsync("", new StringContent(jsonBody, Encoding.UTF8, "application/json"))
    .ConfigureAwait(false);

if (response.IsSuccessStatusCode)
{
    string result = await response.Content.ReadAsStringAsync();
    //Rather than using DeserializeObject, parse the JSON string manually
    var parsingResp = result.ParseWSResponse();
}

 

Learning

In a nutshell, my experience has been that you can call external services as long as you stick to the base .NET classes that come packaged with the framework out of the box.

Azure IoT Hub Streaming Analytics Simulator

Azure IoT Hub Streaming Analytics Simulator is an application written by Manny Grewal. The purpose of this blog is to explain the What, Why and How of this application.

 

What?

Streaming analytics is a growing trend that deals with analysing data in real-time. Real-time data streams have a short life span, their relevance decreases with time, so they demand quick analysis and rapid action.

Some areas where such applications are highly useful include data streams emitted by

  • Data centres to detect intrusions, outages
  • Factory production line to detect wear and tear of the machinery
  • Transactions and phone calls to detect fraud
  • Time-series analysis
  • Anomaly Detection

 

Data used by streaming analytics applications is temporal in nature, i.e. it is based on short intervals of time. What is happening at interval Tx can be influenced by what happened 2 minutes ago, i.e. at interval Tx-2.

So the relationships between various events are time-based rather than entity based (e.g. as in general Entity Relational Database based systems)

Take the scenario of a Data Centre which has two sensors that emit a couple of data streams – Fan Speed of the server hardware and its temperature.

If the temperature reading of the server hardware is going high, it could be related to a dwindling Fan Speed reading. We need to look at both readings over an interval of time to establish a hypothesis about their correlation.
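To make the temporal idea concrete, here is a small illustrative sketch (not part of the simulator; the readings and the window size are invented) that correlates two such streams over a sliding window:

```python
# Illustrative sketch: correlating two sensor streams over a sliding
# window using numpy. All readings below are made-up example values.
import numpy as np

fan_speed = np.array([1200, 1150, 1100, 1000, 900, 800, 700, 650], dtype=float)
temperature = np.array([35.0, 36.0, 37.5, 39.0, 41.0, 43.5, 46.0, 48.0])

def windowed_correlation(a, b, window):
    """Pearson correlation of the trailing `window` readings at each step."""
    corr = []
    for i in range(window, len(a) + 1):
        corr.append(np.corrcoef(a[i - window:i], b[i - window:i])[0, 1])
    return corr

# A strongly negative correlation suggests the temperature rise tracks
# the dwindling fan speed.
print(windowed_correlation(fan_speed, temperature, window=4))
```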

 

 

Why?

In order to model and work with streaming analytics it is important to have an event generator that can generate the data streams in a time-series fashion.

Some examples of such generators are vehicle sensors, IoT devices, medical devices, transaction systems, etc. that generate data quickly.

The purpose of this application is to simulate the data generated by those devices; it helps you set up quickly and start modelling some data for your IoT experiments.

 

 

Main benefits of this app

1. Integrated with Azure IoT Hub i.e. the messages emitted by this application are sent to the Azure IoT Hub and can be leveraged by the Intelligence and Big Data ecosystem of Azure.

2. This app comes with 4 preset sensors

a. Temperature/Humidity

b. Air Quality

c. Water Pollution

d. Phone call simulator

3. Configure > Ready. The app can easily be pointed to your Azure instance and start sending messages to your Azure IoT Hub

4. Can be extended if you are handy with .NET development. I have designed the app on S.O.L.I.D principles so it can be extended and customised; the link to the source code is below

 

How?

 

The app and its source code can be downloaded from my Github

 

A quick tour of the app is below

IoT Hub

 

 

Configure

The app needs to be configured with details of your Azure IoT Hub account.

The following files need to be configured

1. App.Config

2. If you are registering Devices in the Hub, then keys for the devices need to be stored in the SensorBuilder.cs

3. You may need to restore the Nuget Packages to build the application

 

Once the above three steps have been completed, you can build the application and the EXE of the application will be generated.

 

Sensor Tuning

Sensors can be tuned from the classes inheriting IDataPoint e.g. in the FloatDataPoint.cs

The following properties can be used to tune the sensors

  • MinValue – The minimum value of the sensor reading, e.g. for climatic temperature it can be -40C
  • MaxValue – The maximum value of the sensor reading, e.g. for climatic temperature it can be 55C
  • CommonValue – The average value of the sensor, e.g. for warmer months it can be 30C
  • FluctuationPercentage – How much variance you want in the generated data
  • AlertThresholdPercentage – When an alert should be generated, i.e. once the reading passes a certain threshold, e.g. 80% of the maximum value
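To illustrate how such properties could drive a generated reading, here is a hedged Python sketch; the actual app implements this in .NET, so the class below is an assumption for illustration only, mirroring the property names above:

```python
# Hypothetical Python sketch of a tunable sensor, mirroring the
# properties described above (the real app implements this in .NET).
import random

class SimulatedSensor:
    def __init__(self, min_value, max_value, common_value,
                 fluctuation_percentage, alert_threshold_percentage):
        self.min_value = min_value
        self.max_value = max_value
        self.common_value = common_value
        self.fluctuation_percentage = fluctuation_percentage
        self.alert_threshold_percentage = alert_threshold_percentage

    def next_reading(self):
        # Vary around the common value by up to FluctuationPercentage,
        # clamped to the [MinValue, MaxValue] range.
        span = self.common_value * self.fluctuation_percentage / 100.0
        value = self.common_value + random.uniform(-span, span)
        value = max(self.min_value, min(self.max_value, value))
        # Raise an alert once the reading passes the threshold,
        # expressed as a percentage of the maximum value.
        alert = value >= self.max_value * self.alert_threshold_percentage / 100.0
        return value, alert

# Climatic temperature example from the table above.
temp_sensor = SimulatedSensor(-40, 55, 30, fluctuation_percentage=20,
                              alert_threshold_percentage=80)
print(temp_sensor.next_reading())
```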

 

Azure IoT Hub

The messages sent by the sensor simulator can be accessed in the Azure IoT Hub. Once you have configured your hub and the related streaming jobs, the messages can be seen in the dashboard as below

image

 

The messages are sent in JSON format and below is the structure of one of the messages emitted by a sensor located at Berwick, VIC


{
  "IncludeSensorHeader": 1,
  "MessageId": "949a3618-c4a4-42bc-9c2a-39da86aa9191",
  "EmittedOn": "2017-06-30T11:13:45.3543200",
  "SensorDataHeader": {
    "Sensor": "Berwick",
    "DeviceId": "G543",
    "Lat": -38.0309,
    "Long": 145.3461
  },
  "SpecialMessage": null,
  "Readings": [
    {
      "ReadingValue": 27.9943523,
      "MetaData": {
        "Name": "Temperature",
        "Unit": "C"
      },
      "Level": "Normal"
    },
    {
      "ReadingValue": 49.6043358,
      "MetaData": {
        "Name": "Humidity",
        "Unit": "RH"
      },
      "Level": "Normal"
    }
  ],
  "EventProcessedUtcTime": "2017-07-01T11:26:53.1434112Z",
  "PartitionId": 0,
  "EventEnqueuedUtcTime": "2017-06-30T11:13:48.4340000Z",
  "IoTHub": {
    "MessageId": null,
    "CorrelationId": null,
    "ConnectionDeviceId": "G543",
    "ConnectionDeviceGenerationId": "636297589019910475",
    "EnqueuedTime": "2017-06-30T11:13:47.5760000Z",
    "StreamId": null
  }
}

Can Dynamics CRM understand images? Yes! Using deep learning.

Machine Learning is quite a buzzword these days and we have witnessed how quickly Microsoft and other vendors have made progress in this area. Just a couple of years back Microsoft had no product or tool in this space and today they have close to a dozen. Recently Microsoft has integrated Machine Learning into SQL Server and Dynamics CRM; it is slowly becoming core to its product line.

I would not be surprised if machine learning becomes a mandatory skill for most development jobs in the next decade.

How can Image Recognition help CRM?

Attaching documents is a common feature asked for in many CRM projects, where customers complete an application form and then upload scanned copies to support their application. Think of invoices, receipts, certificates, licenses, etc. As of now there is no way Dynamics CRM can detect whether the scan a customer is uploading is a picture of a license, a beach or a car.

What if Dynamics CRM could detect and recognise the scanned image and tell the user that it is expecting a license, not a Dilbert on the beach?

clip_image001

Source: Ol.v!er [H2vPk] – Flickr

Wouldn’t it be great?

Although there are some image engines that can tell you what an uploaded picture contains, there isn't any engine or tool (to my knowledge) that can tell whether an uploaded document is a license or not. This is because there are only subtle differences between scanned copies of various documents.

In this blog series I will build and demonstrate an approach to add this kind of image recognition capability to our favourite Dynamics CRM, using a branch of machine learning called Deep Learning that is very good at Computer Vision tasks. I will not be delving into the concepts of Deep Learning (there are numerous posts and videos on the internet) but will try to cover the major building blocks in this whirlwind tour.

Australian Identity Documents

I will take a real business case, ubiquitous in many online applications in Australia, where a customer is asked to provide a scan of their Australian ID as proof. For this blog we will use the following Australian IDs

1) Victoria Driver’s License

 

clip_image003

Courtesy: VicRoads

 

2) Australian Visa

 

clip_image005

Courtesy: http://www.thejumpingkoala.com/

3) Medicare card

 

clip_image007

Courtesy: Medicare

Note: Because of their sensitive nature, I will only be using sample documents in this blog.

The expectation is that the system can tell if the user is attaching a scanned copy of their Australian Visa when the record type is Australian Visa. So we will validate the image based on its content.

The good thing about deep learning based systems is that the detection algorithms do not rely on exact colour, resolution or placement, but rather on pattern and feature matching. I got pretty good results when I built this system, which I will share in later posts.

Technical Setup

Deep Learning based systems use neural networks to train themselves and perform their tasks. There are many kinds of neural networks and the one that does the job for us is the Convolutional Neural Network (CNN). CNNs are good at image related tasks.

In order to train a CNN from scratch you need a lot of hardware and computing power, which I do not have. So I will be using a partially trained network and customise it for our specific task, i.e. to identify images of those 3 types of Australian IDs.

Let us cover the building blocks of our solution

TensorFlow TM

TensorFlow is an open source framework for Deep Learning and we will be using it to train our engine.

Python

TensorFlow comes in many platforms but we will use its Python version.

Dynamics 365

Once our model is trained, we will deploy it online as a web service that CRM can query. I will not be posting the integration code here as I have already posted code to integrate Dynamics CRM with Machine Learning web services in my other blog

 

Let us start by training an image recognition model that can classify an image e.g. a scanned copy and tell if it is an Australian ID e.g. driving license or visa scan, etc.

Approach

We will use an approach called Transfer Learning, where you take an existing Convolutional Neural Network and retrain its last few layers. Think of it this way: you already have a network that can tell an aeroplane from a dog, but you need to retrain it to pick up more subtle differences, i.e. the difference between a scanned invoice and a scanned passport.

TensorFlow is based on the concept of a tensor, a multi-dimensional array that in our case holds the features of an image. We will grab the output of the network's penultimate layer and retrain on it with some sample images of a Medicare card, an Australian Visa and a Victorian Driver's License.

Once the model is trained, we will use a simple Support Vector Machine classifier to predict the likelihood of the uploaded image being an Australian ID. The output of the SVC classifier will be a predicted class along with a likelihood probability e.g.

(Visa, 0.83)

The model thinks there is an 83% chance the image is that of an Australian Visa

(Medicare, 0.89)

89% chance it is a Medicare card

(License, 0.45)

45% chance it is a license

If the confidence percentage is low, it means the image is not in a class of interest, e.g. in the last example the uploaded image is most likely not a license. As a rule of thumb, a probability of 0.80 is a good mark for the prediction to be reliable.
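The rule of thumb above can be sketched as follows; the function name and messages are my own invention, with the example classes and probabilities taken from this post:

```python
# Sketch of the 0.80 rule of thumb: accept a prediction only when the
# classifier's best probability clears the threshold.
def interpret(predicted_class, probability, threshold=0.80):
    if probability >= threshold:
        return "Looks like a %s (%.0f%% confident)" % (predicted_class, probability * 100)
    return "Not confident this is one of our ID classes"

print(interpret("Visa", 0.83))      # reliable
print(interpret("Medicare", 0.89))  # reliable
print(interpret("License", 0.45))   # most likely not a license
```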

Training Pool

Below are screenshots of the samples I used to train my image classification model. As you can see, the images differ in terms of angles, positioning, colours, etc.; the system can still learn from the important properties and disregard the irrelevant ones.

Australian Visa

Training Set

clip_image008

Medicare

Training Set

clip_image009

Victoria Driver’s License

Training Set

clip_image010

Training Phase

The training procedure involves categorising all the training images into folders named after their class. As you can see in the screenshots above, the Windows folders are named after the classes, i.e. DriversLicense, Medicare and Visa

We then iterate over all these images and pass each one to the penultimate layer of the TensorFlow network, which gives us a feature tensor (a 2048-dimensional array for that image); we then label the image with its respective class.

Support Vector Machine

Once we have the feature tensor and label of every image, our training dataset is complete and we feed it to a Support Vector Machine and train the model. To save time, I pickled the model so that it can be reused for all predictions.
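The training step described above can be sketched in Python. This is a hedged illustration, not the project's actual code: the 2048-dimensional feature tensors are faked with random numbers (in the real system they come from TensorFlow's penultimate layer), and the folder iteration is reduced to a simple loop:

```python
# Hedged sketch of the training step: one label per class folder, one
# feature vector per image, then an SVC with probability estimates,
# pickled for reuse. Feature extraction is faked with random numbers.
import pickle
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
classes = ["DriversLicense", "Medicare", "Visa"]

features, labels = [], []
for label, class_name in enumerate(classes):
    # In the real pipeline: iterate the images inside the folder named
    # after the class and run each through the network.
    for _ in range(10):
        features.append(rng.rand(2048) + label)  # stand-in feature tensor
        labels.append(label)

model = SVC(kernel="linear", probability=True)
model.fit(np.array(features), np.array(labels))

# Pickle the trained model so all later predictions can reuse it.
with open("trained_svc.pkl", "wb") as f:
    pickle.dump(model, f)
```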

I know some of this terminology may be new to you, but in the next post I will explain the architecture and share some sample code that generates the predictions. Then it will start falling into place. See you then.

Part 3

In the previous two instalments I have been explaining the image recognition system that I built to recognise Australian IDs and discussed how our traditional CRM can benefit from such intelligent capabilities.

In this post I will cover the Architecture and share some sample code

Architecture

clip_image012

As you can see above there are basically two major pillars of the system

A) Python

B) CRM ecosystem

Python is used to build the model using TensorFlow; the compiled version of the trained model is then deployed to an online web service that can accept binary content like image data.

On the CRM ecosystem side, the user can upload the image in a web portal or directly from CRM depending on the scenario; we then pass it to the model and get the score.

Source Code

Below is an excerpt of the source code from one of the unit tests that will give you a glimpse of what happens under the hood on the Python side of the fence. This is just one class for introductory purposes, not the entire source code.

import os
import pickle
import sklearn
import numpy as np
from sklearn.svm import SVC
import tensorflow as tf
import tensorflow.python.platform
from tensorflow.python.platform import gfile

model_dir = 'inception'

def CreateImageGraph():
    #Get the tensorflow graph
    with gfile.FastGFile(os.path.join(
            model_dir, 'classify_image_graph_def.pb'), 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')

def ClassifyAustralianID(image):
    nb_features = 2048
    #Initialise the feature tensor
    features = np.empty((1, nb_features))
    CreateImageGraph()
    with tf.Session() as sess:
        next_to_last_tensor = sess.graph.get_tensor_by_name('pool_3:0')
        print('Processing %s...' % (image))
        if not gfile.Exists(image):
            tf.logging.fatal('File does not exist %s', image)
        image_data = gfile.FastGFile(image, 'rb').read()
        #Get the feature tensor
        predictions = sess.run(next_to_last_tensor, {'DecodeJpeg/contents:0': image_data})
        features[0, :] = np.squeeze(predictions)
    clear = '\n' * 20
    print(clear)
    return features

if __name__ == '__main__':
    #Unpickle the trained model
    trainedSVC = pickle.load(open('Trained SVC', 'rb'))
    #Path to the image to be classified
    unitTestImagePath = 'Test\\L5.jpg'
    #Get feature tensor of the image
    X_test = ClassifyAustralianID(unitTestImagePath)
    print("Trying to match the image at path %s....." % unitTestImagePath)
    #Get predicted probabilities of various classes
    y_predict_prob = trainedSVC.predict_proba(X_test)
    #Get predicted class
    y_predict_class = trainedSVC.predict(X_test)
    #Choose the item with the best probability
    bestProb = y_predict_prob.argsort()[0][-1]
    #Print the predicted class along with its probability
    print("(%s, %s)" % (y_predict_class, y_predict_prob[0][bestProb]))

The purpose of the above stub is to test the prediction code (ClassifyAustralianID) with a sample image, L5.jpg, which is shown below. As we can see, it is a driving license.

clip_image013

Running this image against the model gives us this output

clip_image014

It means the model is 93% sure that the input image matches the Driving License class. In my testing I found anything above 80% to be a correct prediction,

i.e. the confidence percentage for the below images was low because they do not belong to one of our classes (Drivers License, Visa or Medicare), which is the expected output

clip_image015

Closing Notes

Image recognition is a field of budding research and getting a lot of attention these days because of driverless cars, robots, etc. This little proof of concept gave me a lot of insight into how things work behind the scenes and it was a great experience to create such a smart system. The world of machine learning is very interesting!!

Hope you enjoyed the blog.

Power BI for Data Scientists

With my involvement in some data science work recently, I have had the privilege to explore a lot of tools of the trade – Rapid Miner, Python, Tensorflow and Azure Machine Learning to name a few. My experience has been highly enriching, but I felt there was no Swiss Army knife that can handle the initial, and most critical, stage of a data science project: the hypothesis stage.

During this stage, scientists typically need to quickly prep the data, find the correlation patterns and establish hypotheses. It requires them to fail fast by identifying null hypotheses and spurious correlations and stay focussed on the right path. I recently explored Power BI and would like to share my findings through this blog.

Business Problem

Let us take the business case of a juice vendor, say Julie. Julie sells various kinds of juices and collects some data about her business operations on a daily basis. Say we have the following data for the month of July, which looks like below. It is pretty much: when, where, what and for how much?

clip_image001

Now say I am a data scientist trying to help Julie increase her sales and give her some insights into what she should focus on to get the best bang for her buck. I have been tasked to build an estimation model for Julie based on simple linear regression.

Feature Engineering

I will start by analysing various correlations between the features and our target variable, i.e. Revenue. We can commence by importing the data into Power BI and covering the following basics

1) Replace the null values with the mean value of the feature

2) Remove any duplicate rows

3) Engineer some new features as below

Day Type – The purpose of this feature is to distinguish between a week day and a weekend day. I wanted to test a hypothesis that a weekend day might generate more sales than a week day.

Day Type = IF(WEEKDAY(Lemonade2016[Date],3) >= 5,"Weekend","Weekday")

Total Items Sold – Lemon + Orange

Revenue – Total Items Sold * Price
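For readers who prefer code to DAX, the same preparation steps could be sketched in pandas; the column names and sample rows below are assumptions based on the dataset described in this post:

```python
# Hedged pandas equivalent of the preparation steps above (the post
# uses DAX inside Power BI; column names and rows are assumed).
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2016-07-01", "2016-07-02", "2016-07-02", "2016-07-03"]),
    "Temperature": [27.0, 28.9, 28.9, None],
    "Lemon": [100, 120, 120, 90],
    "Orange": [40, 50, 50, 60],
    "Price": [0.5, 0.5, 0.5, 0.5],
})

# 1) Replace null values with the feature's mean
df["Temperature"] = df["Temperature"].fillna(df["Temperature"].mean())
# 2) Dedupe rows
df = df.drop_duplicates()
# 3) Engineered features (weekday 5/6 = Saturday/Sunday, as in the DAX)
df["Day Type"] = df["Date"].dt.dayofweek.map(lambda d: "Weekend" if d >= 5 else "Weekday")
df["Total Items Sold"] = df["Lemon"] + df["Orange"]
df["Revenue"] = df["Total Items Sold"] * df["Price"]
print(df)
```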

Data preparation and feature engineering were a breeze in Power BI, thanks to its extensive support for DAX, calculated columns and measures. The dataset now looks like below.

clip_image001[4]

Hypotheses Development

Once we had our dataset ready in Power BI, the next task was to analyse the patterns between Revenue and other features

Hypothesis 1 – There is a positive correlation between Temperature and Revenue

Result: Passed

Hypothesis 2 – There are more sales on a weekend day

Result: Failed

I derived these results using the below visualizations, built briskly on the Power BI platform

clip_image003

Now, off to some advanced hypothesis development. Shall we?

I needed to understand the relationship between the leaflets given out on a particular day and the Revenue. Time to pull in some heavy plumbing, so I decided to throw R into the mix. Power BI comes with inbuilt (almost!) support for R, and I was able to quickly spawn a coplot using just 6-8 lines of R in the R Script Editor of Power BI

clip_image004

An interesting insight was how the correlation differs based on the day. This was made possible using the Power BI slicer as shown below

clip_image006

 

clip_image008

Wednesday – Less correlation between leaflets and sales

 

Sunday – High correlation between leaflets and sales

Power BI + R = Advanced Insights

If you need to analyse the dynamics between various features and how those dynamics impact your target variable, i.e. Revenue, you can easily model that in Power BI. Below is a dynamic coplot that shows the incremental causal relationship between Leaflets, Revenue and Temperature.

The 6 quadrants at the bottom should be read in conjunction with the 6 steps in the top box. The bottom left is the first step and the top right the last step of leaflets. Basically it shows how the correlation between Temperature and Revenue is affected by the leaflet bin size

clip_image009

I ended my experiment by building a simple regression model that predicts Revenue when you enter Temperature, Price and Leaflets. Below is the code for the model in case you are keen

clip_image010
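Since the model code above appears only as an image, here is a hedged stand-in showing what such a regression could look like; the sample rows are invented, and ordinary least squares via numpy is used here rather than the original R code:

```python
# Hedged stand-in for the regression shown in the image above: fit
# Revenue from Temperature, Price and Leaflets with ordinary least
# squares. The rows below are invented sample data, not Julie's.
import numpy as np

# Columns: Temperature, Price, Leaflets
X = np.array([
    [27.0, 0.5, 90],
    [28.9, 0.5, 130],
    [31.0, 0.5, 100],
    [33.5, 0.5, 120],
    [25.0, 0.5, 70],
])
revenue = np.array([14.0, 19.0, 17.0, 21.0, 10.0])

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, revenue, rcond=None)

def predict_revenue(temperature, price, leaflets):
    return coef @ np.array([1.0, temperature, price, leaflets])

print(predict_revenue(30.0, 0.5, 110))
```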

Power BI is a very simple and powerful tool for the exploratory data scientist in you. Give it a go.

How developers can move to the next level

Bored of writing plugins, workflows, integrations and web pages and want to try something interesting? Try artificial intelligence.

It is so interesting and powerful that once you are into it you will never look back. Drones are in the air and driverless cars are being trialled. All such smart machines have one key requirement: Visual Recognition.

Ability to understand what a frame contains – what is in that image, what is in the video?

It is quite fascinating to think about how a program can interpret an image.

If that is something you like then read on.

 

How a program understands an image

Images are matrices of pixel values. Think of an image as a 3D array where the first dimension runs along the rows (height), the second along the columns (width) and the third is the colour channel, i.e. RGB.

For the below image, an array value of [10][5][0] = 157 means the value of the Red channel of the pixel at the 10th row and 5th column is 157,

and its Green channel value may be 34, i.e. [10][5][1] = 34
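A quick numpy illustration of this layout (the values are made up):

```python
# Illustrating the [row][column][channel] layout described above.
import numpy as np

image = np.zeros((20, 20, 3), dtype=np.uint8)  # a 20x20 RGB image
image[10, 5, 0] = 157  # Red channel of the pixel at row 10, column 5
image[10, 5, 1] = 34   # Green channel of the same pixel

print(image[10, 5])  # the full RGB triple for that pixel
```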

 

image

Source: openframeworks.cc

So at a very basic level, image interpretation is all about applying machine learning to these matrices.

 

How to write a basic Image classifier

In this blog, I will highlight how you can write a very basic image classifier. It will not be state of the art, but it can give you an understanding of the basics. There is a great resource available that can help you train your image classifier: the CIFAR dataset gives you around 50K classified images in their matrix form that your program can train on, and an additional 10K images that you can use to test the accuracy of your program. At the end of this blog I will leave you with the link to the full source code of a working classifier.

 

Training Phase

In the training phase you load all these images into an array and store their categories in an equivalent array. Let me show you some code

unpickledFile = self.LoadFile(fileName)
# Get the raw images.
rawImages = unpickledFile[b'data']
# Get the class-numbers for each image. Convert to numpy-array.
classNames = np.array(unpickledFile[b'labels'])
# Reshape each 32 * 32 * 3 (3D) image into a 3072 (1D) vector
flattenedMatrix = np.reshape(rawImages, (self.NUM_EXAMPLES, self.NUMBER_OF_PIXELS * self.NUMBER_OF_PIXELS * self.TOTAL_CHANNELS))

 

In the above code we are loading the CIFAR dataset and converting it into two arrays: flattenedMatrix contains the image pixels and classNames records what each image actually contains, e.g. a boat, horse, car, etc.

So flattenedMatrix[400] will give us the pixel values of the 400th example and classNames[400] will give us its category, e.g. a car.

That way the program can relate which pixel values correspond to which objects, and build patterns that it can match against during prediction.

Prediction

Being a very simple classifier, it uses a simple prediction algorithm called kNN, i.e. k Nearest Neighbours. Prediction occurs by finding the closest neighbours among the images the program already knows.

For example, if k=5, then for an input image X the program finds the 5 closest images whose pixel values are most similar to X. The class of X is then decided by majority vote, e.g. if 3 of those images are of category horse, then X is also most likely a horse.

Below is some code that shows how this computation occurs

def Predict(self, testData, predictedImages=False):
    # testData is the N x 3072 array where each row is a 3072-D vector of pixel values between 0 and 1
    totalTestRows = testData.shape[0]
    # A zero vector with N rows where each row will be the predicted class i.e. 0 to 9
    Ypred = np.zeros(totalTestRows, dtype=self.trainingLabels.dtype)
    Ipred = np.zeros_like(testData)

    # Iterate over each row in the test set
    for i in range(totalTestRows):
        # Uses numpy broadcasting. Below is what is happening:
        # testData[i,:] is a test row of 3072 values
        # self.trainingExamples - testData[i,:] gives a difference matrix of size 50000 x 3072 where each element is the difference value
        # np.sum() sums across the columns e.g. [2 4 9] sums to 15
        # distances has 50000 rows where each element is the distance (cumulative sum of all 3072 columns) from test record (i)
        distances = np.sum(np.abs(self.trainingExamples - testData[i,:]), axis=1)
        # Partition by the nearest K distances (smallest K)
        nearest_K_distances = np.argpartition(distances, self.K)[:self.K]
        # K matches
        labels_K_matches = self.trainingLabels.take(nearest_K_distances)
        # Top matched label
        best_label = np.bincount(labels_K_matches).argmax()
        Ypred[i] = best_label
        # Do we need to return the predicted image as well?
        if predictedImages == True:
            best_label_arg = np.argwhere(labels_K_matches == best_label)
            # Store the match
            Ipred[i] = self.trainingExamples[nearest_K_distances[best_label_arg[0][0]]]
    return Ypred, Ipred

 

As outlined above, if you want to try this yourself, the full source code is available on my Github page

Part 2 – Bot Framework

The recently released Bot Framework equips us with the basic plumbing required for chat sessions and for making connections with services like LUIS. Some of the key features of the Bot Builder SDK include

· Support for both C# and Node.js

· Open source on Github

· Conversation support – Prompts, Dialog and Rulesets for form flows

· Chat emulator – a client for testing

· Connector to Cognitive services like LUIS

Once you have the prerequisites discussed in the previous part, you can create a new bot project from Visual Studio by going to

File > New > Project > Bot Application

The project setup is based on WebAPI / MVC style routing and you need to implement a message controller. Below is a screenshot of the source code for the bot

clip_image001

Handling messages

The main entry point of the bot framework is the MessagesController as shown below


[BotAuthentication]
public class MessagesController : ApiController
{
    [ResponseType(typeof(void))]
    public virtual async Task<HttpResponseMessage> Post([FromBody] Activity activity)
    {
        // Check if the activity is of type message
        if (activity != null && activity.GetActivityType() == ActivityTypes.Message)
        {
            await Conversation.SendAsync(activity, () => new InsuranceDialog());
        }
        else
        {
            HandleSystemMessage(activity);
        }
        return new HttpResponseMessage(System.Net.HttpStatusCode.Accepted);
    }
}

The controller is decorated with BotAuthentication, which secures the bot's endpoint. We then check the incoming activity to ensure it is of type message and initiate a dialog called InsuranceDialog. The dialog passes the message to LUIS to determine the customer's intent and generates a reply accordingly. We will dig into the details of LUIS in the next blog.

Replies

Replies from the bot are posted back to the chat screen using some of the common methods described below

context.PostAsync("Hi there. Welcome to BestPrice.");

The above line shows how to post a basic message back to the user

PromptDialog.Choice<TypeOfInsuranceOptions>(context, ResumeTypeOptionsAsync,
    options, "Let us know what are you interested in?");

Here we are using a dialog class which not only posts a message with options but also listens for the user's input, i.e. the option they chose.

PromptDialog.Confirm(context, HandleInsuranceOptions,
    "Do you want to know about our insurance?", "Didn't get that!");

This is an example of a confirm message where we expect a Yes or No from the user

Using the Channel Emulator

One of the most useful applications for such projects is the Bot Framework Channel Emulator, a client you use to unit test your bots. It can connect to both online and locally deployed bot apps. You need to ensure that the AppId and Secret you use in this app are the ones your bot app uses, i.e. the ones in its web.config. Below is a screenshot of our bot being tested locally. Let us meet in the next blog post where we explore LUIS.

  clip_image003

Build a Chatbot for Dynamics CRM– Part 1

“Chatbots are about taking the power of human language and applying it more pervasively to our computing.”

Satya Nadella

We have seen the age of mobile phone apps, and guess what is coming next? Chatbots. To acknowledge their soaring growth and to leverage this business opportunity, at this year's Build conference Microsoft released a full framework to build bots. It is called the Bot Framework.

Microsoft is not alone in the game; Facebook and Amazon have released their bot platforms as well, and the developer base is growing at an astonishing pace. Technology is making huge leaps in Natural Language Processing, with Google having just open-sourced their NLP parser and Microsoft having enriched their language platform LUIS (Language Understanding Intelligent Service). These advancements, coupled with the capability to build chatbots, present an incredible opportunity for developers and businesses alike. A proof of their popularity is the statistic that, since last year, bots have outnumbered humans on the internet. So not only are they a raging trend but also a hot market.

But what does all this mean for businesses? Put simply, organisations will be able to leverage Conversation as a Platform, deploying intelligent chatbots to serve their customers. The return on investment equation is quite attractive too, based on a survey finding that the average cost of a customer transaction via phone is around $2.50, while the average cost of a digital transaction (online or on mobile) is only around $0.17. It is not all doom and gloom for humans though; there will still be a lot of human element required to fill in what bots lack, at least for the foreseeable future.

I decided to give Microsoft's bot platform a whirl to check how easy it is to build a basic chatbot. Through this blog series, I will walk you through the process of building a chatbot that can interact with Dynamics CRM and can optionally be deployed using Microsoft portals. We will use two spanking new platforms released recently as part of Microsoft's Cognitive Suite: the Bot Framework and LUIS. Before we start building, let us first understand how bots fit into the ecosystem.

clip_image002

We will use the setup outlined in the above diagram. The bot will primarily be built on the Bot Framework using .NET (Node.js is also supported) and will interact with LUIS to parse the natural language and try to understand what the customer means.

There will be three more parts to this series, and I will link the source code of the working bot in the last part

Part 1 – Introduction

Part 2 – Bot Framework

Part 3- LUIS

Part 4 – Chatbot Integration and Deployment

Let us layout the scenario to understand what we are building.

Scenario

Say we are an insurance company called BestPrice and we are deploying a chatbot that customers can converse with to learn about our products and to register their interest. The bot will pass some of the conversations to LUIS to determine the customer's intent. Three intents will be used for this demo

Greeting – Conversation is just a greeting like hi, hello, etc.

Enquire – The customer wants to enquire about our insurance products

Engage – Customers want us to engage with them

Prerequisites

In order to set up the project, you need the following prerequisites

1. Bot Framework VS template

2. Bot Framework Channel Emulator

3. Bot Framework dlls (via Nuget)

4. A developer account with Bot Framework

5. A developer account with LUIS with subscription key

6. Once the bot is deployed online, it needs to be registered with the Bot Framework

You can read more about the above prerequisites here or search them online

In the next instalment we will start building the bot and go through some of the key building blocks.