

Finding the New Normal with Text Analytics

The current environment is changing rapidly, causing artificial intelligence algorithms to stumble. Identifying new sources of data using text analytics might provide a more forward-looking strategy.

In a recent article in MIT Technology Review, Will Douglas Heaven wrote that AI algorithms built and trained under normal circumstances were struggling to adjust to the pace of change during this pandemic. Consumer behaviors shifted so suddenly and in such a drastic way that businesses were playing catch-up to adjust their models to respond appropriately to the new normal.


One of the challenges of this abrupt change is that the structured, curated data of the past, which businesses used to build their models, is no longer as relevant to current circumstances. This shift forces businesses to look for new sources of data that better represent the decisions they are trying to make.

The difficulty is that these new data sources are often less structured and messier in nature: news stories, Twitter feeds, and email and Slack conversations. This unstructured text has significant value for understanding what is happening right now, but it must be curated before it can become an input to AI models, and that curation needs to happen in near real time to keep pace with the changing environment.

One of the first steps in harnessing unstructured data is identifying what the text is referring to. The applicable technique from the field of text analytics is named entity recognition (NER), which identifies the words that represent people, places, and things in the content and turns them into structured attributes. Each entity becomes an attribute whose value represents how important that topic is in the text. That value can be a raw count of the times the term appears in the text, or it can use term frequency-inverse document frequency (tf-idf), which weights a term by how often it appears in this document relative to how often it appears in the other documents that use the same term or phrase. The sketch below illustrates both steps.
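As a rough illustration (not tied to any of the tools discussed below), the following Python sketch uses the open source spaCy library for entity extraction and scikit-learn for tf-idf weighting. The sample documents and the en_core_web_sm model name are illustrative assumptions, not part of the original article.

```python
# Minimal sketch: extract named entities, then weight terms by raw count and tf-idf.
# Assumes spaCy and scikit-learn are installed and the small English model has been
# downloaded (python -m spacy download en_core_web_sm).
from collections import Counter

import spacy
from sklearn.feature_extraction.text import TfidfVectorizer

nlp = spacy.load("en_core_web_sm")

documents = [
    "Acme Corp is shifting its Austin call center staff to remote work.",
    "Austin restaurants saw a sharp drop in foot traffic this spring.",
    "Acme Corp reported strong quarterly earnings despite supply issues.",
]

# Step 1: named entity recognition -- pull out the people, places, and organizations
for text in documents:
    entities = [(ent.text, ent.label_) for ent in nlp(text).ents]
    print(entities)

# Step 2a: raw counts -- how often each entity appears in a single document
counts = Counter(ent.text for ent in nlp(documents[0]).ents)
print(counts)

# Step 2b: tf-idf -- weight each term by how distinctive it is across the corpus
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(documents)
terms = vectorizer.get_feature_names_out()
weights = tfidf[0].toarray().ravel()
top_terms = sorted(zip(terms, weights), key=lambda pair: -pair[1])[:5]
print(top_terms)  # the five most distinctive terms in the first document
```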

To get started, consider the paths forward. There are both open source and commercial software and services available that help you transform your data into valuable information. Although this list is not exhaustive, it will point you to some options to consider.

Open Source

Open source solutions are a great place to start exploring and get your feet wet. Often these solutions are supported by educational institutions that are researching how to advance the field and sharing their work with the community.

Apache: Because the Apache Software Foundation is a decentralized open source community of developers, there are a couple of projects that support named entity recognition and text analytics. Both OpenNLP and Apache UIMA are supported projects in this space. Another project, Stanbol, was recently moved to the Apache Attic, but it still provides an easy way to stand up a self-contained RESTful web service and try the concept out, as sketched below.
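As a hypothetical sketch, assuming a default Stanbol launcher running locally on port 8080 and exposing its /enhancer endpoint, posting raw text to the enhancer might look like this:

```python
# Hypothetical sketch: post raw text to a locally running Apache Stanbol launcher
# and ask the enhancer for JSON-LD output. The host, port, and endpoint are the
# assumed defaults; adjust them to match your deployment.
import requests

text = "Acme Corp announced a partnership with the City of Austin on Tuesday."

response = requests.post(
    "http://localhost:8080/enhancer",      # assumed default enhancer endpoint
    data=text.encode("utf-8"),
    headers={
        "Content-Type": "text/plain",
        "Accept": "application/ld+json",   # ask for JSON-LD rather than RDF/XML
    },
    timeout=30,
)
response.raise_for_status()

# The response is a graph of enhancement nodes; entity annotations identify the
# recognized entities and where they appear in the submitted text.
print(response.json())
```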

GATE: General Architecture for Text Engineering is a Java-based toolkit for information extraction. Originally built at the University of Sheffield, it has been shared with the open source community and is used by scientists, educators, and businesses for a variety of text processing tasks.

Stanford NLP: The Stanford NLP Group is a team of natural language researchers who make their software available for public use. Their Stanford CoreNLP is an integrated suite of natural language tools written in Java that supports English, Spanish, and Chinese text processing, and it can also be run as a local server and called over HTTP, as in the sketch below.
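As a sketch, assuming a Stanford CoreNLP server has already been started locally on its default port (9000), the NER annotator can be called over HTTP like this:

```python
# Sketch: call a locally running Stanford CoreNLP server for named entity recognition.
# Assumes the server was started separately, e.g. with:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000
import json

import requests

text = "Judy Smith flew from Boston to Madrid on behalf of Acme Corp."

properties = {"annotators": "tokenize,ssplit,ner", "outputFormat": "json"}
response = requests.post(
    "http://localhost:9000",
    params={"properties": json.dumps(properties)},
    data=text.encode("utf-8"),
    timeout=60,
)
response.raise_for_status()
result = response.json()

# Keep every token the NER annotator tagged with something other than "O" (outside)
entities = [
    (token["word"], token["ner"])
    for sentence in result["sentences"]
    for token in sentence["tokens"]
    if token.get("ner", "O") != "O"
]
print(entities)
```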

Commercial Options

As easily accessible and powerful as the open source solutions are, they do not always provide the ease of deployment or the support companies need as they build these features into the models that run their businesses. This is where commercial solutions provide the raw power of text processing along with scalability and sustainability for the long term.

LingPipe: The first option in the commercial space is actually a bridge solution. LingPipe is a software package supported by Alias-i. Its licensing spans from free open source to a paid, perpetual server license. Alias-i provides consulting services for this framework to integrate it into its clients' business solutions and create a sustainable model for its use.

Software-as-a-Service Solutions: Several software-as-a-service platforms allow you to perform named entity recognition or intelligent tagging on your unstructured data; examples include PoolParty and Refinitiv. These services can get you going quickly without having to master the technical details of implementing your own service from the ground up on one of the open source engines.

IBM: There is no question that one of the leaders in the text analytics space is IBM. The company built its Watson platform to help businesses harness the power of advanced analytics. Watson Natural Language Understanding can extract topics from many types of textual content, as in the sketch below. It is just one piece of the larger Watson ecosystem and can support a range of artificial intelligence and deep learning use cases across your business.
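As a sketch using IBM's Watson Python SDK (the ibm-watson package), the call below asks Natural Language Understanding for entities and keywords; the API key, service URL, and version date are placeholders you would replace with your own service credentials.

```python
# Sketch: extract entities and keywords with Watson Natural Language Understanding
# via the ibm-watson Python SDK. Credentials and version date are placeholders.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    EntitiesOptions,
    Features,
    KeywordsOptions,
)

authenticator = IAMAuthenticator("YOUR_API_KEY")   # placeholder credential
nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",                          # placeholder version date
    authenticator=authenticator,
)
nlu.set_service_url(
    "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com"
)  # placeholder region/instance URL

response = nlu.analyze(
    text="Acme Corp is shifting its Austin call center staff to remote work.",
    features=Features(
        entities=EntitiesOptions(limit=10),
        keywords=KeywordsOptions(limit=10),
    ),
).get_result()

print(response["entities"])   # named entities with relevance scores
print(response["keywords"])   # weighted keywords for the text
```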

A Final Word

With a rapidly changing environment, the structured and curated data you have been relying on for your machine learning models may no longer be sufficient. To rectify this, it is time to start looking at unstructured data that can better capture current events and be more responsive to an ever-changing environment. With the power of text analytics, you can adjust to the new normal as it happens.

About the Author

Troy Hiltbrand is the chief information officer at Amare Global where he is responsible for its enterprise systems, data architecture, and IT operations. You can reach the author via email.

