Deep Learning for Spell Checking

I use spell checking every day: it is built into word processors, browsers, and smartphone keyboards. It helps us write better documents and communicate clearly. More sophisticated spell checkers can also find grammatical and stylistic errors.

How do you add spell checking to your application? A very simple implementation by Peter Norvig is just 22 lines of Python code.

In his “Deep Spelling” article, Tal Weiss wrote that he tried this code and found it slow. It is slow because it brute-forces all possible combinations of edits on the original text.
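To see why that gets expensive, here is a condensed sketch of the candidate-generation step in the spirit of Norvig's approach (not his exact code): every deletion, transposition, replacement, and insertion is generated for a word, and looking two edits away roughly squares the number of candidates.

import string

def edits1(word):
    # All strings one edit (delete, transpose, replace, insert) away from word
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

print(len(edits1('something')))  # several hundred candidates for a single 9-letter word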

An interesting approach is to use deep learning. Just create an artificial dataset by adding spelling errors to correct English text, and you'd better have lots of text! The author used the one-billion-word dataset released by Google. Then train a character-level sequence-to-sequence model with LSTM layers to convert text with spelling errors into correct text. Tal got very interesting results; read the article for details.
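The article's exact noise model isn't reproduced here, but a minimal sketch of the idea, randomly deleting, substituting, and inserting characters in clean sentences to produce (noisy, clean) training pairs, might look like this:

import random
import string

def add_noise(sentence, error_rate=0.1):
    # Corrupt a clean sentence with random character-level "typos"
    noisy = []
    for ch in sentence:
        r = random.random()
        if r < error_rate / 3:
            continue                                             # deletion
        elif r < 2 * error_rate / 3:
            noisy.append(random.choice(string.ascii_lowercase))  # substitution
        elif r < error_rate:
            noisy.append(random.choice(string.ascii_lowercase))  # insertion
            noisy.append(ch)
        else:
            noisy.append(ch)
    return ''.join(noisy)

# Training pairs for a character-level sequence-to-sequence model: (noisy input, clean target)
pairs = [(add_noise(s), s) for s in ["the quick brown fox", "spelling is hard"]]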

Good quality spell checkers can be very useful for chatbots. Most chatbots rely on simple NLP techniques, and a typical NLP pipeline includes syntax analysis and part-of-speech tagging, which are easily broken if the input message is not grammatically correct or contains spelling errors. Fixing spelling errors early in the NLP pipeline may well improve the accuracy of natural language understanding.

It would also be worth training such a spell checker model on a different, more conversational dataset.

Have you tried to use any models like this in your apps?

Intelligence Platform Stack

The machine intelligence field has been growing at breakneck speed since 2012. That year, Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton achieved the best result in image classification on the LSVRC-2010 ImageNet dataset using convolutional neural networks. It is amazing that end-to-end training of a deep neural network worked better than sophisticated computer vision systems with handcrafted feature engineering pipelines that researchers had been refining for decades.

Since then, deep learning has attracted the attention of machine learning researchers, software engineers, entrepreneurs, venture investors, and even artists and musicians. Deep learning algorithms have surpassed human-level performance in image classification and conversational speech recognition, and won a Go match against an 18-time world champion. Every day new applications of deep learning emerge and tons of research papers are published; it is hard to keep up. We live in a very interesting time: the future is already here.

Continue reading

Natural Language Pipeline for Chatbots

Chatbot developers usually use two technologies to make a bot understand the meaning of user messages: machine learning and hardcoded rules. See my previous article for more details on chatbot architecture.

Machine learning can help you identify the intent of a message and extract named entities. It is quite powerful but requires lots of data to train a model: a rule of thumb is to have around 1000 examples per class for classification problems.
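As an illustration of the machine learning route, here is a minimal sketch assuming scikit-learn; the example messages and intent labels are made up, and a real bot would need roughly the thousand examples per intent mentioned above:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data: (message, intent) pairs
messages = ["I want to pay for my order", "where is my package", "cancel my subscription"]
intents = ["pay_order", "track_order", "cancel_subscription"]

# TF-IDF features over unigrams and bigrams, plus a linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, intents)

print(model.predict(["how do I pay for this order?"]))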

 

If you don’t have enough labeled data, you can handcraft rules that identify the intent of a message. Rules can be as simple as “if a sentence contains the words ‘pay’ and ‘order’, then the user is asking to pay for an order”. The simplest implementation in your favorite programming language could look like this:

def is_pay_order_request(message):
    # The rule above: the message mentions both 'pay' and 'order'
    words = message.lower().split()
    return 'pay' in words and 'order' in words

Continue reading

How to Run Text Summarization with TensorFlow

The text summarization problem has many useful applications. If you run a website, you can create titles and short summaries for user-generated content. If you want to read a lot of articles but don’t have the time, your virtual assistant can summarize the main points of those articles for you.

It is not an easy problem to solve. There are multiple approaches, including various supervised and unsupervised algorithms. Some algorithms rank the importance of sentences within the text and then construct a summary out of the most important ones; others are end-to-end generative models.
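As a toy illustration of the first, extractive family: score each sentence by the average frequency of its words in the whole text and keep the top-scoring ones. This is a simple heuristic for intuition, not the TensorFlow model from the article's title.

from collections import Counter

def extractive_summary(text, num_sentences=2):
    # Split into sentences and count word frequencies across the whole text
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    word_freq = Counter(w.lower() for s in sentences for w in s.split())

    def score(sentence):
        words = sentence.split()
        return sum(word_freq[w.lower()] for w in words) / len(words)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Keep the selected sentences in their original order
    return '. '.join(s for s in sentences if s in top) + '.'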

End-to-end machine learning algorithms are interesting to try. After all, they have demonstrated good results in other areas, such as image recognition, speech recognition, language translation, and even question answering.

Image credit: https://research.googleblog.com/2015/11/computer-respond-to-this-email.html

Continue reading

Google Assistant Bot Platform

Google announced the Google Assistant bot in May 2016 at the Google I/O conference. The bot is integrated into Google Allo, a new messaging application released on September 21, 2016.

Google Assistant can show the weather, news, travel ideas, and restaurants, put events on your calendar, and make restaurant reservations. I guess it can do most of the things Google Now could handle.

Google Assistant is going to be an “uber-bot”: a bot that serves as an entry point for any user request. It can recognize what you are asking and route the request to an appropriate specialized bot. On October 3, 2016, Google announced the “Actions on Google” program, which will allow developers to build “actions” for Google Assistant.
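A hypothetical sketch of that routing idea (the bot names and the keyword-based intent detection below are made up for illustration):

def classify_intent(request):
    # Toy intent detection; a real assistant would use a trained model
    text = request.lower()
    if "weather" in text:
        return "weather"
    if "restaurant" in text:
        return "restaurants"
    return "unknown"

def weather_bot(request):
    return "It looks sunny today."

def restaurant_bot(request):
    return "Here are a few restaurants nearby."

SPECIALIZED_BOTS = {"weather": weather_bot, "restaurants": restaurant_bot}

def uber_bot(request):
    # Recognize what the user is asking, then route to a specialized bot
    handler = SPECIALIZED_BOTS.get(classify_intent(request))
    return handler(request) if handler else "Sorry, I can't help with that yet."

print(uber_bot("book a table at an Italian restaurant"))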

Continue reading

Chatbot Architecture

Chatbots are on the rise. Startups are building chatbots, platforms, APIs, tools, and analytics. Microsoft, Google, and Facebook are introducing tools and frameworks and building smart assistants on top of them. Multiple blogs, magazines, and podcasts report on news in this industry, and chatbot developers gather at meetups and conferences.

I have been working on chatbot software for a while, and I have been keeping an eye on what is going on in the industry. See my previous posts:

 

 

In this article, I will dive into the architecture of chatbots.

Continue reading

Character-level Convolutional Networks for Text Classification

One of the most common natural language understanding problems is text classification. Over the last few decades, machine learning researchers have been moving from the simplest “bag of words” model to more sophisticated models for text classification.

The bag-of-words model uses only information about which words appear in a text. Adding TF-IDF weighting to the bag of words helps track how relevant each word is to the document. A bag of n-grams captures partial information about the structure of the text. Recurrent neural networks, like LSTMs, can capture dependencies between words even when they are far apart. An LSTM learns the structure of sentences from raw data, but we still have to provide it with a list of words. The word2vec algorithm adds knowledge about word similarity, which helps a lot. Convolutional neural networks can also be used to process word-based datasets.

The trend is to learn from raw data and give machine learning models access to more information about text structure. A logical next step is to feed a stream of characters to the model and let it learn everything about the words itself. What could be more raw than a stream of characters? An additional benefit is that the model can learn misspellings and emoticons. The same model can also be used for different languages, even those where segmentation into words is not possible.
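A minimal sketch of what feeding a stream of characters can look like in practice: one-hot encode each character over a fixed alphabet and a fixed sequence length (both chosen arbitrarily here), and let one-dimensional convolutional layers take it from there.

import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,;:!?'\"-"
CHAR_TO_IDX = {ch: i for i, ch in enumerate(ALPHABET)}
MAX_LEN = 140  # fixed input length; longer texts are truncated, shorter ones padded with zeros

def encode(text):
    # One-hot encode a string into a (MAX_LEN, len(ALPHABET)) matrix
    matrix = np.zeros((MAX_LEN, len(ALPHABET)), dtype=np.float32)
    for i, ch in enumerate(text.lower()[:MAX_LEN]):
        if ch in CHAR_TO_IDX:  # characters outside the alphabet stay all-zero
            matrix[i, CHAR_TO_IDX[ch]] = 1.0
    return matrix

x = encode("Misspeled words and emoticons :) are just characters to this model")
# x can now be fed to 1-D convolutional layers over the character dimension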

Continue reading

2016 is the Year of Chatbots

When Apple introduced the App Store in 2008, developers’ attention shifted from web-based apps to native mobile apps.

A few years later the app market stabilized. Facebook, Amazon, and Google apps dominate their verticals. Consumers don’t want to install new apps anymore. According to comScore’s mobile app report, most US smartphone owners download zero apps in a typical month, and a “staggering 42% of all app time spent on smartphones occurs on the individual’s single most used app”.

More than half of the time we spend on our phones goes to talking, texting, or email, according to Experian’s report.

Continue reading

RE-WORK Virtual Assistant Summit Presentation Notes

At the end of January, RE-WORK organized a Virtual Assistant Summit, which took place in San Francisco at the same time as the RE-WORK Deep Learning Summit.

Craig Villamor wrote a nice overview of the key topics discussed at the summit.

I didn’t attend these conferences, but I watched a few of the presentations that RE-WORK kindly uploaded to YouTube. I would like to share the notes I took while watching these videos. I may have misinterpreted something, so please keep that in mind and watch the original videos for more details.

Continue reading

Deep Learning Hardware

Deep learning is computationally intensive, but model training and model querying have very different computational complexities. The query phase is fast: you apply a function to a vector of input parameters (a forward pass) and get the result.

Model training is much more intensive. Deep learning requires large training datasets to produce good results, and datasets with millions of samples are common now; the ImageNet dataset, for example, contains over a million images. Training is an iterative process: you do a forward pass on each sample of the training set, do a backward pass to adjust the model parameters, and repeat the process several times (epochs). Training therefore requires millions or even billions of times more computation than a single forward pass, and a model can have billions of parameters to adjust.
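To make the difference concrete, here is a toy example with a single linear model trained by mini-batch gradient descent. The details don't matter; the point is that serving a prediction is one forward pass, while training repeats a forward and a backward pass over every sample for several epochs.

import numpy as np

X = np.random.randn(1_000_000, 10)  # a large synthetic training set
y = X @ np.random.randn(10)         # synthetic targets

def forward(x, w):
    return x @ w                    # the cheap part: querying the model

def train(X, y, epochs=3, lr=0.01, batch=256):
    w = np.zeros(10)
    for _ in range(epochs):                           # several passes over the whole dataset
        for i in range(0, len(X), batch):
            xb, yb = X[i:i + batch], y[i:i + batch]
            pred = forward(xb, w)                     # forward pass
            grad = xb.T @ (pred - yb) / len(xb)       # backward pass (gradient)
            w -= lr * grad                            # parameter update
    return w

w = train(X, y)  # thousands of mini-batch updates, each with a forward and a backward pass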

Continue reading