
Machine Learning

[Video: Raj Ramesh explains machine learning]

About the video above, another from Raj Ramesh: if you feel your eyes glazing over at the math, imagine that the points were individual donations instead of sales receipts. What if you were the person at the Red Cross who decides where to put bell ringers? Geographic information, giving data, number of hours manned and more would be great layers to help analyze your numbers. Machine learning can do this for you.

 

What is Machine Learning?

 

Machine learning (ML) is one of the three main applications of artificial intelligence. It is the use of algorithms and statistical models to teach machines to perform tasks by learning from data rather than following explicit instructions.

 

Instead of following detailed programs that take time to write and test, machine learning algorithms use pattern analysis and inference to draw conclusions, which makes the development process faster.

 

How does that work? Suppose you need to make a machine understand that 1 + 1 = 2. Instead of programming the entire statement, ML lets data scientists teach the machine a set of rules for understanding the value of 2. The next step is to help the machine draw inferences about 2 by feeding it data on other numbers. The algorithm can quickly figure out that 1 + 1 is the most direct path to the answer. You can also tell the algorithm to list all the ways to get to 2, which may surface combinations you never thought about.
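To make that concrete, here is a minimal sketch in Python (scikit-learn is my choice of tool here, not something the author prescribes) of teaching a model addition from examples rather than programming the rule:

```python
# A minimal sketch: teach a model that 1 + 1 = 2 from examples,
# rather than hardcoding the rule.
from sklearn.linear_model import LinearRegression

# Training data: pairs of numbers and their sums.
X = [[0, 1], [2, 3], [4, 2], [5, 5], [3, 7], [6, 1]]
y = [1, 5, 6, 10, 10, 7]

model = LinearRegression()
model.fit(X, y)  # the model infers the "rule" of addition from the data

print(model.predict([[1, 1]]))  # ~[2.0], learned rather than programmed
```

The model was never told the rule of addition; it inferred the weights that make the examples consistent.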

 

Machine learning also forms the basis for two related applications: predictive analytics and data mining. Both play a big role in AI and have been in use since the 1990s. Predictive analytics uses computers to make predictions from historical data. Data mining uses unsupervised learning methods to draw inferences and find patterns in data.

 

A look at the approaches to machine learning

 

The three approaches to machine learning are:

 

Learning algorithms—these are the types of algorithms used to teach machines. The four most common are supervised, unsupervised, semi-supervised and reinforcement.

Processes and techniques—these are tools that help fine-tune ML applications. The most notable are decision trees and association rules. Others include feature learning, sparse dictionary learning and anomaly detection.

Models—these are tools that focus ML in specific ways.

Neural networks are structured like biological neural networks. They allow individual bits of data to be analyzed from a number of different directions and at distinct levels.

Support vector machines are used for classification and regression analysis.

Bayesian networks enable ML to graphically depict conclusions based on variables within a specialized graph. A Bayesian network is good for illustrating the relationships between factors of cause and effect.

Genetic algorithms follow the process of natural selection (“only the strong survive,” as the ’70s soul song goes) to help ML algorithms improve performance.

 

Learning algorithms

 

This approach involves four types of algorithm, each with differing inputs and a flexible range of outputs.

 

Supervised learning—according to McKinsey Consulting, supervised learning is “based on algorithms that can learn from data without relying on rules-based programming.”

 

This type of learning requires that the algorithm be fed a set of data with the range of inputs and outputs, called the training data. To translate: if you wanted a machine to identify a parrot, the differing pictures of birds you’d show it are the training data. The algorithm continually checks its conclusions against the training data to ensure correctness. The training data “supervises,” or acts as a check against, mistakes.

 

The two most common applications of supervised learning are classification and regression analysis. Classification can be as simple as sorting by color, first letter or feature. Regression analysis on vast amounts of variable data is a time-consuming task for people that machines handle easily.
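Here is a small sketch of supervised classification; the bird features and labels are invented for illustration, not drawn from any real dataset:

```python
# A sketch of supervised classification: labeled examples "supervise"
# the model, which checks its conclusions against known answers.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [wingspan_cm, can_mimic_speech (0/1)]
X_train = [[20, 1], [25, 1], [90, 0], [15, 0], [30, 1], [100, 0]]
y_train = ["parrot", "parrot", "eagle", "sparrow", "parrot", "eagle"]

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)

# Classify a new, unlabeled bird.
print(clf.predict([[22, 1]]))  # likely ['parrot']
```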

 

Unsupervised learning—these algorithms work best when humans have data about a situation they don’t know much about. Unsupervised learning algorithms look for patterns and inferences in the data. Instead of using the training data to ensure correctness, unsupervised algorithms use the training data as examples to refer to.

 

Here’s how you can use unsupervised learning:

 

Clustering—if your training data are the traits and KPIs of good savers, then cluster analysis of bank information should show which customers would be a good fit for a specific offer (a code sketch follows this list).

Anomaly detection—if your training data says that logins should only appear within a certain timeframe and geographic range, you can create a tool that flags banks about variations in observed credit card activity.

Association—if your training data contains behaviors and spending patterns after a life event, it could be used to create a better recommendation engine for new parents, who need all kinds of things.

Autoencoder—if your training data has information on how electronic interference physically appears on X-ray images, your algorithm can remove that interference for an automated X-ray reader.
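Here is the clustering sketch promised above, a minimal example in Python; the customer features are made up for illustration:

```python
# A sketch of unsupervised clustering: no labels, just patterns.
# KMeans groups customers by similarity in their features.
import numpy as np
from sklearn.cluster import KMeans

# Columns: [avg_monthly_deposit, pct_income_saved] (hypothetical)
customers = np.array([
    [200, 0.02], [220, 0.03], [1500, 0.25],
    [1600, 0.30], [210, 0.01], [1550, 0.28],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)
print(labels)  # e.g. [0 0 1 1 0 1] -- "spenders" vs. "savers"
```

Nobody told the algorithm which customers are savers; the grouping falls out of the data.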

 

Raj Ramesh has another use for unsupervised learning: competitive analysis. Here’s one way to do it:

[Video: Raj Ramesh on using unsupervised learning for competitive analysis]

Semi-supervised learning—with this type of algorithm, the training data is a mix of labeled and unlabeled examples. The problem is defined, but the model must learn the structures to organize the data as well as make predictions. Here’s how it works.

 

Generative adversarial networks, or GANs, are the key. A GAN pairs two subnetworks within the main algorithm. The two look at the data from different, complementary directions and work in a step-by-step manner.

 

Network 1, the “generator,” reviews the data and tries to produce matches for the labeled training data.

Network 2, the “discriminator,” examines each match and evaluates it as true or false, real or fake, based on the training data. The discriminator provides feedback on the why of each decision.

Feedback from the discriminator is folded back into the training data, and better training data increases overall performance.
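A heavily simplified sketch of that generator/discriminator loop, in Python with PyTorch (a toy one-dimensional “dataset” and tiny networks, purely for illustration):

```python
# A toy GAN loop: the generator learns to produce realistic samples,
# the discriminator judges real vs. fake, and each improves the other.
import torch
import torch.nn as nn

real_data = torch.randn(256, 1) * 0.5 + 3.0   # "real" samples near 3.0
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Discriminator: learn to call real data 1 and generated data 0.
    fake = G(torch.randn(256, 8)).detach()
    d_loss = loss_fn(D(real_data), torch.ones(256, 1)) + \
             loss_fn(D(fake), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator.
    fake = G(torch.randn(256, 8))
    g_loss = loss_fn(D(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(256, 8)).mean().item())  # drifts toward ~3.0
```

The same push and pull drives the photorealistic results below, just at vastly larger scale.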

 

Image generation is where GANs are really taking off. In the video below, NVIDIA engineers have optimized GANs to create ultra-realistic images of human faces from just two training images.

[Video: NVIDIA’s GAN-generated human faces]

Reinforcement learning—the way GANs interact to improve overall performance captures the spirit of reinforcement learning. A reinforcement algorithm follows a loop: iterate, implement, optimize, improve and iterate again.
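One classic flavor of reinforcement learning is Q-learning. Here is a minimal sketch on a made-up five-cell corridor, where the agent learns by trial and reward that walking right reaches the goal:

```python
# A minimal Q-learning sketch: an agent in a 5-cell corridor learns,
# by trial and reward, that moving right reaches the goal (cell 4).
import random

n_states, actions = 5, [-1, +1]            # move left or right
Q = [[0.0, 0.0] for _ in range(n_states)]  # value of each action per state
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != 4:
        a = random.randrange(2) if random.random() < epsilon \
            else max(range(2), key=lambda i: Q[state][i])
        nxt = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if nxt == 4 else 0.0
        # Iterate -> optimize: nudge the estimate toward reward + future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# Best action per state after training: 1 ("right") everywhere.
print([max(range(2), key=lambda i: Q[s][i]) for s in range(4)])
```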

 

 

Machine learning models

 

ML models were briefly mentioned earlier in this section. Those descriptions were meant as an easy introduction to get you thinking. Now let’s dig a little deeper into each.

Neural networks—designed to mimic how the biological brain works, these models pass data through layers of interconnected “neurons,” letting individual bits of data be analyzed from a number of different directions and at distinct levels.
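For a hands-on feel, here is a tiny network learning XOR, a pattern no single layer of weights can represent (scikit-learn’s MLPClassifier is my choice for brevity):

```python
# A tiny neural network: hidden "neurons" let it learn XOR,
# a pattern a single layer of weights cannot capture.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR

# lbfgs converges well on tiny problems; results can vary with the seed.
net = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                    max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X))  # ideally [0 1 1 0]
```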

 

Support vector machines—this model is best at performing classification and regression when paired with supervised or unsupervised learning algorithms. It is helpful for categorizing text and images.
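A quick sketch of a support vector machine sorting short texts into categories; the snippets and labels are invented for illustration:

```python
# A sketch of an SVM categorizing text: TF-IDF turns each snippet
# into a vector, and the SVM finds a boundary between the classes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great guitar tone", "love this amp", "refund took weeks",
         "support never replied", "amazing pickup sound", "shipping was late"]
labels = ["praise", "praise", "complaint", "complaint", "praise", "complaint"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["the tone is amazing", "my order is late"]))
```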

 

Bayesian networks—this probabilistic model helps determine conditional dependence and causation. The network outputs its conclusions on a directed graph that illustrates the interaction between a conclusion and its variables. Here’s an example, from TowardsDataScience.com:

[Image: example Bayesian network from TowardsDataScience.com]

The image above is a perfect example of conditional dependence. You need clouds for rain, but clouds do not always mean rain. And so forth.
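To put numbers on conditional dependence, here is a tiny sketch with made-up probabilities for the clouds-and-rain example:

```python
# Conditional dependence with made-up numbers: clouds make rain more
# likely, but clouds alone do not guarantee rain.
p_clouds = 0.4               # P(clouds)
p_rain_given_clouds = 0.5    # P(rain | clouds)
p_rain_given_clear = 0.05    # P(rain | no clouds)

# Total probability of rain:
p_rain = p_rain_given_clouds * p_clouds + p_rain_given_clear * (1 - p_clouds)
print(f"P(rain) = {p_rain:.2f}")                        # 0.23

# Bayes' rule: if it is raining, how likely is it cloudy?
p_clouds_given_rain = p_rain_given_clouds * p_clouds / p_rain
print(f"P(clouds | rain) = {p_clouds_given_rain:.2f}")  # ~0.87
```

A Bayesian network chains many of these conditional probabilities together across a whole graph of variables.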

 

Genetic algorithms—like neural nets, these models borrow from biological structures. Genetic algorithms solve difficult optimization and search problems using techniques from evolutionary biology: mutation, crossover and selection.
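Here is a bare-bones genetic algorithm sketch: a population of bit strings evolves toward a simple, arbitrary fitness goal (maximizing the number of 1s):

```python
# A bare-bones genetic algorithm: selection, crossover and mutation
# evolve bit strings toward a fitness goal (here, all 1s).
import random

def fitness(genome):                 # "only the strong survive"
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    # Selection: keep the fittest half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    children = []
    while len(children) < 15:
        mom, dad = random.sample(parents, 2)
        cut = random.randrange(1, 19)          # crossover point
        child = mom[:cut] + dad[cut:]
        if random.random() < 0.1:              # mutation
            i = random.randrange(20)
            child[i] = 1 - child[i]
        children.append(child)
    population = parents + children

print(fitness(max(population, key=fitness)))   # approaches 20
```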

 

Who is using machine learning, and for what?

 

The range of ML applications boggles the mind. Here are a few specific to comms and marketing.

 

If you find yourself thinking too hard about the best image to use in a post or campaign, Stackla has a product for you. Their Co-pilot product uses machine learning to identify and localize popular images on social media and the web, because a great image on Instagram in Albuquerque, New Mexico may not be the best performer in Kennebunkport, Maine. The product identifies images based on your campaign; once you find what you want, Co-pilot manages rights and fees in the background. Here’s a look at the product:

[Video: Stackla Co-pilot demo]

Analysis is another natural extension of machine learning. For communicators and marketers, this look at how Crimson Hexagon’s audience analysis tool helped Fender Guitar solve a number of problems and sync itself to business strategy is worth a watch:

[Video: Crimson Hexagon’s audience analysis for Fender Guitar]

Cision’s new Communications Cloud product combines ML and natural language processing in a way that could completely change how you leverage media intelligence in your campaign plans.

[Video: Cision Communications Cloud]

New Knowledge, of Austin, TX, is a newcomer to communications. They call themselves an “information integrity” company, but it sounds like AI-powered, proactive reputation management. Lately they’ve been in the news for applying the same techniques used by Russian social media propagandists to the 2017 Alabama US Senate special election between Doug Jones and Roy Moore.

 

It’s a very different kind of audience analysis, based on the patterns and behavior of the right wing and alt-right. New Knowledge believes these groups’ ability to game the mechanics of content distribution poses a risk to brands, governments, celebrities and society at large. Click to the five-minute mark of this video by CEO Jonathan Morgan, where he starts to take you through this world of nutbags as seen through the lens of linguistics, social media and machine learning.

[Video: Jonathan Morgan of New Knowledge]

A look at Machine Learning through Porter’s lens

 

For a PR or marketing agency, the approach is differentiation. Adding the ability to monitor the public internet in near-real time, to understand what the web is saying in general and about your clients in particular, separates you from the competition. It also elevates your stature as a leader, especially at this stage of industry adoption. Whether an agency adds the talent or subscribes to a platform, there is a legitimate opportunity for growth.

 

Whichever frame you determine to be your advantage, be sure to have a lot of data around it. In another video, Raj Ramesh of TopSigma.com makes the case that lots of recent data is the key to success with machine learning:

[Video: Raj Ramesh of TopSigma.com on the importance of recent data]

Strategies for success with machine learning? Follow the early adopters

 

With so many tools already available for communicators to use, success will be defined by the companies that come together in a planned way to integrate the technology into their systems.

 

In February 2019, Ben Lorica and Paco Nathan of O’Reilly Media published a whitepaper, “The State of Machine Learning Adoption in the Enterprise.” The paper presents the results of an 11,000-person survey on the current state of machine learning in respondents’ organizations. Respondents were classified as exploring (just beginning to use ML), early adopters (using ML for at least two years) or sophisticated (five years or more with an ML product in the enterprise).

 

The piece has a lot of valuable takeaways, but for our purposes we will look at two graphs: one focuses on methodology, the other on who builds the data model. The lesson to learn? Follow the early adopters. Here’s why.

[Graph: development methodologies in use, from “The State of Machine Learning Adoption in the Enterprise”]

Methodology is the type of process used to develop or onboard ML technology. The authors found that many companies had appropriated principles of agile software development for building and integrating data products. Others had their own processes, and a few used Kanban, a methodology originally developed by Toyota that focuses on maximizing a team’s daily workflow. Westerners unfamiliar with Kanban may recognize “just-in-time,” the name by which Kanban was popularly known outside Japan.

 

Let’s unpack the term “no methodology.” Some managers, in a well-intentioned effort to get into the game ASAP, may see an opportunity to save money or time by avoiding a formal adoption process, hence “no methodology.” To them, “no methodology” may mean letting the vendor lead the change, bringing in a consultant, or asking the team to shoulder the burden. Unfortunately, they miss something obvious.

 

The human side of “no methodology” is also known for stalled implementations, cost increases and missed launch dates. A formal method like Agile provides a structure for reporting, transparency and organizational commitment that managers can’t always recreate via homegrown processes.

 

Which is why following the early adopters makes the most sense: 76% of early adopters used a methodology of some sort to help the organization plan and execute. Compare that to the 42% of explorers who did not.

[Graph: who builds the data model, from the same survey]

Building the model means programming the algorithm that tells the machine what to do. Although it is very easy to lean on a vendor for knowledge and development, too often that sales or support rep becomes the absentee expert. This can limit the company’s flexibility and narrow its vision of what the tool can do. Resist the urge to approach AI as a “fix.” It is a strategic tool that will change your organization.

 

Develop your own data team, even as you work with a vendor. Bring in a CDO or a data scientist early to help the rest of the management team define strategy. At the very least, identify, develop and support a member of your team as the in-house expert on how the technology works. You will appreciate the added expertise and flexibility when the time comes.

 

One big thing before we go: focus on segmentation, emotion and social intelligence

 

We’ve talked about what it is and how to build it. Let’s talk a little about how to use it. To start, let’s hear from a really smart woman, Fiona McArthur. She’s a Global Managing Director at adam&eveDDB, part of Doyle Dane Bernbach, an advertising OG. I interviewed her for this book. Here’s her take on AI:

 

“I think we assume that AI is new news. But really, it’s just brilliantly better informed by the more nuances and rich data we now have access to.  Using data and intelligence to target relevant comms is and has always been a mainstay of the comms industry. Programmatic is a brilliant example of AI – not always done well, but commercially a very powerful contributor to many brands, like the aggregators hotels.com or Orbitz.”

 

McArthur and adam&eveDDB use AI for insight generation. An insight, according to Evangelos Simoudis of O’Reilly, is “a novel, interesting, plausible, and understandable relation, or set of associated relations, that is selected from a larger set of relations derived from a data set.”

 

Translated, it means finding incredibly detailed segmentations in a data set much larger than transaction history, demographics and preferences. Machine learning offers the ability to combine data streams from all sides of a client’s business, letting you find new segments in a database that includes information from online sources, store POS and sensor data, customer service records and more.

 

Once you find an insight, use machine learning and natural language processing to understand emotion and context. There are a handful of tools on the market for that, but for our purposes let’s look at a solution from Crimson Hexagon.

 

Many social media analytics platforms tell you sentiment. Machine learning and natural language processing give you the emotions behind sentiment, as well as the concerns underlying them.
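Crimson Hexagon’s models are proprietary, but the basic idea can be sketched with off-the-shelf tools: classify posts into emotions rather than a flat positive/negative score. The posts and emotion labels here are invented for illustration:

```python
# A sketch of emotion (not just sentiment) classification; the posts
# and emotion labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["I can't wait for the launch!", "Why was my account charged twice?",
         "This update ruined everything", "Thank you for the quick fix",
         "Worried my data isn't safe", "So excited about the new features"]
emotions = ["joy", "anger", "anger", "joy", "fear", "joy"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(posts, emotions)
print(model.predict(["I'm thrilled with the support team",
                     "afraid this breach exposed my info"]))
```

A production system would train on millions of labeled posts, but the shape of the problem is the same.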

 

 

This is what AI can do.  
