Estimates state that 70%–85% of the world’s data is text (unstructured data) [1]. New deep learning language models (transformers) have caused explosive growth in industry applications [5,6,11].
This blog is not an article introducing you to Natural Language Processing. Instead, it assumes you are familiar with noise reduction and normalization of text. It covers text preprocessing up to producing tokens and lemmas from the text.
We stop at feeding the sequence of tokens into a Natural Language model; feeding that token sequence into a model to accomplish a specific task is not covered here.
What this blog does cover is fast text pre-processing (noise cleaning and normalization), which is critical in production-grade Natural Language Processing (NLP).
- I discuss the packages we use for production-level NLP;
- I detail the production-level NLP text preprocessing tasks with Python code and packages;
- Finally, I report benchmarks for NLP text pre-processing tasks.
Dividing NLP Processing into Two Steps
We segment NLP into two major steps (for the convenience of this article):
- Text pre-processing into tokens. We clean (noise removal) and then normalize the text. The goal is to transform the text into a corpus that any NLP model can use, a goal rarely achieved before the introduction of the transformer [2].
- Feeding the corpus (text preprocessed into a sequence of tokens) into NLP models for training or prediction.
The rest of this article is devoted to noise removal and normalization of text into tokens/lemmas (Step 1: text pre-processing). Noise removal deletes or transforms things in the text that degrade the NLP task model. It is usually NLP task-dependent; for example, e-mail addresses may or may not be removed, depending on whether the task is text classification or text redaction. We'll cover both replacement and removal of noise.
Normalization of the corpus is transforming the text into a common form. The most frequent example is normalization by transforming all characters to lowercase. In follow-on blogs, we will cover different deep learning language models and Transformers (Steps 2-n) fed by the corpus token/lemma stream.
NLP Text Pre-Processing Package Factoids
There are many NLP packages available. We use spaCy [2], textacy [4], Hugging Face transformers [5], and regex [7] in most of our NLP production applications. The following are some of the “factoids” we used in our decision process.
Note: The following “factoids” may be biased. That is why we refer to them as “factoids.”
NLTK [3]
- NLTK is a string processing library. All the tools take strings as input and return strings or lists of strings as output [3] (see the short example after this list).
- NLTK is a good choice if you want to explore different NLP with a corpus whose length is less than a million words.
- NLTK is a bad choice if you want to go into production with your NLP application [3].
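For illustration only (NLTK is not part of our production stack), a minimal sketch of NLTK's strings-in, strings-out style, assuming the nltk package and its punkt data are installed:

import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt', quiet=True)  # tokenizer model used by word_tokenize

# NLTK takes a string and returns a list of strings
print(word_tokenize("NLTK takes strings as input and returns lists of strings."))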
Regex
The use of regex is pervasive throughout our text-preprocessing code. Regex is a fast string processor. Regex, in various forms, has been around for over 50 years. Regex support is part of the standard library of Java and Python, and is built into the syntax of others, including Perl and ECMAScript (JavaScript).
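For example, a minimal sketch (my illustration, not code from the benchmarks below) of the kind of pre-compiled pattern we reuse throughout our pre-processing code:

import re

# pre-compiled pattern for collapsing runs of whitespace into a single space
RE_WHITESPACE = re.compile(r'\s+')

def squeeze_whitespace(text: str) -> str:
    return RE_WHITESPACE.sub(' ', text)

print(squeeze_whitespace('too   much \n\n whitespace'))  # => 'too much whitespace'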
spaCy [2]
- spaCy is a moderate choice if you want to research different NLP models with a corpus whose length is greater than a million words.
- If you use a selection from spaCy [3], Hugging Face [5], fast.ai [13], and GPT-3 [6], then you are performing SOTA (state-of-the-art) research of different NLP models (my opinion at the time of writing this blog).
- spaCy is a good choice if you want to go into production with your NLP application.
- spaCy is an NLP library implemented both in Python and Cython. Because of Cython, parts of spaCy are faster than if implemented in pure Python [3];
- spaCy is the fastest package we know of for NLP operations;
- spaCy is available for MS Windows, macOS, and Ubuntu [3];
- spaCy runs natively on Nvidia GPUs [3];
- explosion/spaCy has 16,900 stars on Github (7/22/2020);
- spaCy has 138 public repository implementations on GitHub;
- spaCy comes with pre-trained statistical models and word vectors;
- spaCy transforms text into document objects, vocabulary objects, word-token objects, and other useful objects resulting from parsing the text;
- The Doc class has several useful attributes and methods. Significantly, you can create new operations on these objects as well as extend a class with new attributes (adding to the spaCy pipeline); see the sketch after this list;
- spaCy features tokenization for 50+ languages;
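A minimal sketch of such an extension, using the spaCy 2.x conventions this article uses elsewhere; the component and attribute names here are my own, hypothetical examples:

import spacy
from spacy.tokens import Doc

# assumes the small English model is installed (python -m spacy download en_core_web_sm)
nlp = spacy.load('en_core_web_sm')

# register a custom attribute on Doc, readable as doc._.n_noise_tokens
Doc.set_extension('n_noise_tokens', default=0, force=True)

def count_noise(doc):
    # hypothetical pipeline component: counts URL-like and e-mail-like tokens
    doc._.n_noise_tokens = sum(t.like_url or t.like_email for t in doc)
    return doc

nlp.add_pipe(count_noise, last=True)  # spaCy 2.x style: pass the function itself
doc = nlp('Reach me at someone@example.com or https://example.com')
print(doc._.n_noise_tokens)  # => 2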
Creating long_s Practice Text String
We create long_s, a long string that has extra whitespace, emoji, email addresses, $ symbols, HTML tags, punctuation, and other text that may or may not be noise for the downstream NLP task and/or model.
MULTIPLIER = int(3.8e3)
text_l = 300
%time long_s = ':( 😻 😈 #google +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 '
long_s += ' 888 eihtg DoD Fee https://medium.com/ #hash ## Document Title</title> '
long_s += ':( cat- \n nip'
long_s += ' immed- \n natedly <html><h2>2nd levelheading</h2></html> . , '
long_s += '# bhc@gmail.com f@z.yx can\'t Be a ckunk. $4 $123,456 won\'t seven '
long_s += ' $Shine $$beighty?$ '
long_s *= MULTIPLIER
print('size: {:g} {}'.format(len(long_s), long_s[:text_l]))
output =>
CPU times: user 3 µs, sys: 1 µs, total: 4 µs Wall time: 8.11 µs size: 1.159e+06 :( 😻 😈 #google +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 888 eihtg DoD Fee https://medium.com/ #hash ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # bhc@gmail.com f@z.yx can't Be a ckunk. $4 $123,456 won't seven $Shine $$beigh
A string, long_s, of 1.159 million characters is created in 8.11 µs.
Python String Corpus Pre-processing Step and Benchmarks
All benchmarks are run within a Docker container on macOS Version 14.0.
Model Name: Mac Pro
Processor Name: 12-Core Intel Xeon E5
Processor Speed: 2.7 GHz
Total Number of Cores: 24
L2 Cache (per Core): 256 KB
L3 Cache: 30 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
Note: Corpus/text pre-processing is dependent on the end-point NLP analysis task. Sentiment Analysis requires different corpus/text pre-processing steps than document redaction. The corpus/text pre-processing steps given here are for a range of NLP analysis tasks. Usually, a subset of the given corpus/text pre-processing steps is needed for each NLP task. Also, some of the required corpus/text pre-processing steps may not be given here.
1. NLP text preprocessing: Replace Twitter Hash Tags
from textacy.preprocessing.replace import replace_hashtags
%time text = replace_hashtags(long_s, replace_with='_HASH_')
print('size: {:g} {}'.format(len(text), text[:text_l]))
output =>
CPU times: user 223 ms, sys: 66 µs, total: 223 ms Wall time: 223 ms size: 1.159e+06 : ( 😻 😈 _HASH_ +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 888 eihtg DoD Fee https://medium.com/ _HASH_ ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # bhc@gmail.com f@z.yx can't Be a ckunk. $4 $123,456 won't seven $Shine $$beigh
Notice that #google and #hash are swapped with _HASH_, and ## and # are untouched. A million characters were processed in 200 ms. Fast enough for a big corpus of a billion characters (example: a web server log).
2. NLP text preprocessing: Remove Twitter Hash Tags
from textacy.preprocessing.replace import replace_hashtags
%time text = replace_hashtags(long_s, replace_with='')
print('size: {:g} {}'.format(len(text), text[:text_l]))
output =>
CPU times: user 219 ms, sys: 0 ns, total: 219 ms Wall time: 220 ms size: 1.1134e+06 :( 😻 😈 +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 888 eihtg DoD Fee https://medium.com/ ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # bhc@gmail.com f@z.yx can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$
Notice that #google and #hash are removed, and ## and # are untouched. A million characters were processed in 200 ms.
3. NLP text preprocessing: Replace Phone Numbers
from textacy.preprocessing.replace import replace_phone_numbers
%time text = replace_phone_numbers(long_s, replace_with='PHONE')
print('size: {:g} {}'.format(len(text), text[:text_l]))
output =>
CPU times: user 384 ms, sys: 1.59 ms, total: 386 ms Wall time: 383 ms size: 1.0792e+06 :( 😻 😈 PHONE 08-PHONE 608-444-00003 ext. 508 888 eihtg
Notice that the phone numbers 08-444-0004 and 608-444-00003 ext. 508 were not transformed.
4. NLP text preprocessing: Replace Phone Numbers – better
import re
from typing import Pattern

RE_PHONE_NUMBER: Pattern = re.compile(
    # core components of a phone number
    r"(?:^|(?<=[^\w)]))(\+?1[ .-]?)?(\(?\d{2,3}\)?[ .-]?)?(\d{2,3}[ .-]?\d{2,5})"
    # extensions, etc.
    r"(\s?(?:ext\.?|[#x-])\s?\d{2,6})?(?:$|(?=\W))",
    flags=re.UNICODE | re.IGNORECASE)

%time text = RE_PHONE_NUMBER.sub('_PHoNE_', long_s)
print('size: {:g} {}'.format(len(text), text[:text_l]))
output =>
CPU times: user 353 ms, sys: 0 ns, total: 353 ms Wall time: 350 ms size: 1.0108e+06 :( 😻 😈 _PHoNE_ _PHoNE_ _PHoNE_ 888 eihtg DoD Fee https://medium.com/ ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # bhc@gmail.com f@z.yx can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$
Notice that the phone numbers 08-444-0004 and 608-444-00003 ext. 508 were transformed. A million characters were processed in 350 ms.
5. NLP text preprocessing: Remove Phone Numbers
Using the improved RE_PHONE_NUMBER pattern, we substitute '' for '_PHoNE_' to remove phone numbers from the corpus.
%time text = RE_PHONE_NUMBER.sub('', long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))
output =>
CPU times: user 353 ms, sys: 459 µs, total: 353 ms Wall time: 351 ms size: 931000 :( 😻 😈 888 eihtg DoD Fee https://medium.com/ ## Document Title</title> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # bhc@gmail.com f@z.yx can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$
A million characters were processed in about 350 ms.
6. NLP text preprocessing: Removing HTML metadata
I admit removing HTML metadata is my favorite. Not because I like the task, but because I screen-scrape frequently. There is a lot of useful data that resides on an IBM mainframe, a VAX-780 (huh?), or whatever terminal emulation ends up producing an HTML-based report.
Web scraping such reports generates text littered with HTML tags. HTML tags are typically considered noise, as they are parts of the text with little or no value for the follow-on NLP task.
Remember, we created a test string (long_s) a little over a million characters long with some HTML tags. We remove the HTML tags using BeautifulSoup.
from bs4 import BeautifulSoup
%time long_s = BeautifulSoup(long_s,'html.parser').get_text()
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))
output =>
CPU times: user 954 ms, sys: 17.7 ms, total: 971 ms Wall time: 971 ms size: 817000 :( 😻 😈 888 eihtg DoD Fee https://medium.com/ ## Document Title :( cat- nip immed- natedly 2nd levelheading
The result is that BeautifulSoup is able to remove over 7,000 HTML tags from a million-character corpus in about one second. Scaling linearly, a billion-character corpus (about 200 million words, or approximately 2,000 books) would require about 1,000 seconds.
The rate for HTML tag removal by BeautifulSoup is therefore about 0.5 seconds per book. An acceptable rate for our production requirements.
I only benchmark BeautifulSoup. If you know of a competitive alternative method, please let me know.
Note: The compute times you get may be several times longer or shorter if you are running in the cloud or on Spark.
7. NLP text preprocessing: Replace currency symbol
The currency symbols [$¢£¤¥ƒ֏؋৲৳૱௹฿៛ℳ元円圆圓﷼\u20A0-\u20C0] are replaced with _CUR_ using the textacy package:
import textacy.preprocessing
%time textr = textacy.preprocessing.replace.replace_currency_symbols(long_s)
print('size: {:g} {}'.format(len(textr),textr[:text_l]))
output =>
CPU times: user 31.2 ms, sys: 1.67 ms, total: 32.9 ms Wall time: 33.7 ms size: 908200 :( 😻 😈 888 eihtg DoD Fee https://medium.com/ ## Document Title :( cat- nip immed- natedly 2nd levelheading . , # bhc@gmail.com f@z.yx can't Be a ckunk. _CUR_4 _CUR_123,456 won't seven _CUR_Shine _CUR__CUR_beighty?_CUR_
Note: The textacy replace_<something> functions enable you to specify the replacement text; _CUR_ is the default substitution text for replace_currency_symbols.
If the only currency symbol in your text is $, you can use a regex instead:
%time text = re.sub(r'\$', '_DOL_', long_s)
print('size: {:g} {}'.format(len(text),text[:250]))
output =>
CPU times: user 8.06 ms, sys: 0 ns, total: 8.06 ms Wall time: 8.25 ms size: 1.3262e+06 :( 😻 😈 #google +1 608-444-0000 08-444-0004 608-444-00003 ext. 508 888 eihtg DoD Fee https://medium.com/ #hash ## <html><title>Document Title</title></html> :( cat- nip immed- natedly <html><h2>2nd levelheading</h2></html> . , # bhc@gmail.com f@z.yx can't Be a ckunk. _DOL_4 _DOL_123,456 won't seven _DOL_Shine _DOL__DOL_beighty?_DOL_ :
Note: Every $ symbol in your text will be replaced. Don't use this if you have LaTeX or any other text where the $ symbol has multiple meanings.
8. NLP text preprocessing: Replace URL String
from textacy.preprocessing.replace import replace_urls
%time text = replace_urls(long_s,replace_with= '_URL_')
print('size: {:g} {}'.format(len(text),text[:text_l]))
output =>
CPU times: user 649 ms, sys: 112 µs, total: 649 ms
Wall time: 646 ms
size: 763800
:( 😻 😈 888 eihtg DoD Fee _URL_ ## Document Title :(
9. NLP text preprocessing: Remove URL String
from textacy.preprocessing.replace import replace_urls
%time text = replace_urls(long_s,replace_with= '')
print('size: {:g} {}'.format(len(text),text[:text_l]))
output =>
CPU times: user 633 ms, sys: 1.35 ms, total: 635 ms
Wall time: 630 ms
size: 744800
:( 😻 😈 888 eihtg DoD Fee ## Document Title :(
The rate for URL replacement or removal is about 4,000 URLs (in a one-million-character corpus) per second. Fast enough for a ten-book corpus.
10. NLP text preprocessing: Replace E-mail string
%time text = textacy.preprocessing.replace.replace_emails(long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))
output =>
CPU times: user 406 ms, sys: 125 µs, total: 406 ms
Wall time: 402 ms
size: 725800
:( 😻 😈 888 eihtg DoD Fee ## Document Title :( cat-
nip immed-
natedly 2nd levelheading . , # _EMAIL_ _EMAIL_ can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$
The rate for e-mail address replacement is about 8,000 e-mails per 1.7 million characters per second. Fast enough for 17 books in a corpus.
11. NLP text pre-processing: Remove E-mail string
from textacy.preprocessing.replace import replace_emails
%time text = replace_emails(long_s, replace_with='')
print('size: {:g} {}'.format(len(text),text[:text_l]))
output =>
CPU times: user 413 ms, sys: 1.68 ms, total: 415 ms
Wall time: 412 ms
size: 672600 :( 😻 😈 888 eihtg DoD Fee ## Document Title :( cat-
nip immed-
natedly 2nd levelheading . , # can't Be a ckunk. $4 $123,456 won't seven $Shine $$beighty?$
The rate for e-mail address removal is about 8,000 e-mails per 1.1 million characters per second. Fast enough for 11 books in a corpus.
12. NLP text preprocessing: normalize_hyphenated_words
from textacy.preprocessing.normalize import normalize_hyphenated_words
%time long_s = normalize_hyphenated_words(long_s)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))
output =>
CPU times: user 186 ms, sys: 4.58 ms, total: 191 ms
Wall time: 190 ms
size: 642200 :( 😻 😈 888 eihtg DoD Fee ## Document Title :( catnip immednatedly
Approximately 8,000 hyphenated words, cat- nip and immed- natedly (misspelled), were joined in a corpus of 640,000 characters in 190 ms, or about 3 million characters per second.
13. NLP text preprocessing: Convert all characters to lower case
%time long_s = long_s.lower()
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))
output =>
CPU times: user 4.82 ms, sys: 953 µs, total: 5.77 ms
Wall time: 5.97 ms
size: 642200
:( 😻 😈 888 eihtg dod fee ## document title :( catnip immednatedly 2nd levelheading . , # can't be a ckunk. $4 $123,456 won't seven $shine $$beighty?$
I only benchmark the .lower Python string method. Lower-casing a Python string of a million characters with .lower() takes about 6 ms, a rate that far exceeds our production requirements.
14. NLP text preprocessing: Whitespace Removal
%time text = re.sub(' +', ' ', long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))
output =>
CPU times: user 44.9 ms, sys: 2.89 ms, total: 47.8 ms
Wall time: 47.8 ms
size: 570000
:( 😻 😈 888 eihtg dod fee ## document title :( catnip immednatedly 2nd levelheading . , # can't be a ckunk. $4 $123,456 won't seven $shine $$beighty?$
The rate is about 0.1 seconds for 1 million characters.
15. NLP text preprocessing: Whitespace Removal (slower)
from textacy.preprocessing.normalize import normalize_whitespace
%time text= normalize_whitespace(long_s)
print('size: {:g} {}'.format(len(text),text[:text_l]))
output =>
CPU times: user 199 ms, sys: 3.06 ms, total: 203 ms
Wall time: 201 ms
size: 569999
:( 😻 😈 888 eihtg dod fee ## document title :( catnip immednatedly 2nd levelheading . , # can't be a ckunk. $4 $123,456 won't seven $shine $$beighty?$
normalize_whitespace is about 5x slower but more general. For safety in production, we use normalize_whitespace, although to date we have not seen any problems with the faster regex.
16. NLP text preprocessing: Remove Punctuation
from textacy.preprocessing.remove import remove_punctuation
%time text = remove_punctuation(long_s, marks=',.#$?')
print('size: {:g} {}'.format(len(text),text[:text_l]))
output =>
CPU times: user 34.5 ms, sys: 4.82 ms, total: 39.3 ms
Wall time: 39.3 ms
size: 558599
:( 😻 😈 888 eihtg dod fee document title :( catnip immednatedly 2nd levelheading can't be a ckunk 4 123 456 won't seven shine beighty
spaCy
Creating the spaCy pipeline and Doc
To pre-process text with spaCy, we transform the text into a corpus Doc object. We can then use the sequence of word-token objects of which a Doc object consists. Each token has attributes (discussed above) that we use later in this article to pre-process the corpus.
Our text pre-processing end goal (usually) is to produce tokens that feed into our NLP models.
- spaCy reverses the usual order of pre-processing text and then transforming it into tokens: spaCy first creates a Doc of tokens, and you then pre-process the tokens by their attributes.
The result is that parsing text into a Doc object is where the majority of computation lies. As we will see, pre-processing the sequence of tokens by their attributes is fast.
Adding emoji cleaning in the spaCy pipeline
import en_core_web_lg
from spacymoji import Emoji  # emoji pipeline component (assumed; provides ._.is_emoji and ._.emoji_desc)

nlp = en_core_web_lg.load()
do = nlp.disable_pipes(["tagger", "parser"])
%time emoji = Emoji(nlp)
nlp.max_length = len(long_s) + 10
%time nlp.add_pipe(emoji, first=True)
%time long_s_doc = nlp(long_s)
print('size: {:g} {}'.format(len(long_s_doc), long_s_doc[:text_l]))
output =>
CPU times: user 303 ms, sys: 22.6 ms, total: 326 ms
Wall time: 326 ms
CPU times: user 23 µs, sys: 0 ns, total: 23 µs
Wall time: 26.7 µs
CPU times: user 7.22 s, sys: 1.89 s, total: 9.11 s
Wall time: 9.12 s
size: 129199
:( 😻 😈 888 eihtg dod fee document title :( catnip immednatedly 2nd levelheading can't be a ckunk 4 123 456 won't seven shine beighty
Creating the token sequence took 9.12 seconds, about 14,000 tokens per second. We will see quite a speedup when we use an NVIDIA GPU.
nlp.pipe_names
output => ['emoji', 'ner']
Note: The tokenizer is a "special" component and isn't part of the regular pipeline. It also doesn't show up in nlp.pipe_names. The reason is that there can only be one tokenizer, and while all other pipeline components take a Doc and return it, the tokenizer takes a string of text and turns it into a Doc. You can still customize the tokenizer: either create your own Tokenizer class from scratch, or even replace it with an entirely custom function.
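For example, a minimal sketch of replacing the tokenizer with a custom whitespace-only function (an illustration only; spaCy's default tokenizer is almost always better). It assumes nlp is the pipeline loaded earlier:

from spacy.tokens import Doc

def whitespace_tokenizer(text):
    # a custom tokenizer takes a string and must return a Doc
    return Doc(nlp.vocab, words=text.split())

nlp.tokenizer = whitespace_tokenizer
print([t.text for t in nlp("don't split on punctuation, only on spaces")])
# => ["don't", 'split', 'on', 'punctuation,', 'only', 'on', 'spaces']

If you try this in the article's notebook, reload the pipeline afterwards so later steps keep spaCy's default tokenizer.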
spaCy Token Attributes for Doc Token Preprocessing
As we saw earlier, spaCy provides convenient token attributes for many other pre-processing tasks. For example, to remove stop words you can reference the .is_stop attribute.
dir(long_s_doc[0])
output => ['ancestors', 'check_flag', 'children', 'cluster', 'conjuncts', 'dep', 'dep_', 'doc', 'ent_id', 'ent_id_', 'ent_iob', 'ent_iob_', 'ent_kb_id', 'ent_kb_id_', 'ent_type', 'ent_type_', 'get_extension', 'has_extension', 'has_vector', 'head', 'i', 'idx', 'is_alpha', 'is_ancestor', 'is_ascii', 'is_bracket', 'is_currency', 'is_digit', 'is_left_punct', 'is_lower', 'is_oov', 'is_punct', 'is_quote', 'is_right_punct', 'is_sent_end', 'is_sent_start', 'is_space', 'is_stop', 'is_title', 'is_upper', 'lang', 'lang_', 'left_edge', 'lefts', 'lemma', 'lemma_', 'lex_id', 'like_email', 'like_num', 'like_url', 'lower', 'lower_', 'morph', 'n_lefts', 'n_rights', 'nbor', 'norm', 'norm_', 'orth', 'orth_', 'pos', 'pos_', 'prefix', 'prefix_', 'prob', 'rank', 'remove_extension', 'right_edge', 'rights', 'sent', 'sent_start', 'sentiment', 'set_extension', 'shape', 'shape_', 'similarity', 'string', 'subtree', 'suffix', 'suffix_', 'tag', 'tag_', 'tensor', 'text', 'text_with_ws', 'vector', 'vector_norm', 'vocab', 'whitespace_']
Attributes added by the emoji pipeline component and other extensions are new:
dir(long_s_doc[0]._)
output => ['emoji_desc', 'get', 'has', 'is_emoji', 'set', 'trf_alignment', 'trf_all_attentions', 'trf_all_hidden_states', 'trf_d_all_attentions', 'trf_d_all_hidden_states', 'trf_d_last_hidden_state', 'trf_d_pooler_output', 'trf_end', 'trf_last_hidden_state', 'trf_pooler_output', 'trf_separator', 'trf_start', 'trf_word_pieces', 'trf_word_pieces_']
I show spaCy performing preprocessing that results in a Python string corpus. The corpus is used to create a new sequence of spaCy tokens (Doc).
There is a faster way to accomplish spaCy preprocessing with spaCy pipeline extensions [2], which I show in an upcoming blog.
17. EMOJI Sentiment Score
EMOJI Sentiment Score is not a text preprocessor in the classic sense.
However, we find that when emoji are present, they almost always dominate the sentiment of the text.
For example, here are two similar phrases from legal-notes e-mail with opposite sentiment:
The client was challenging. :( The client was difficult. :)
We calculate sentiment from the emoji alone when they are present in a note or e-mail.
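EMOJI_TO_SENTIMENT_VALUE is our own lookup table and is not listed in this article; a hypothetical, minimal sketch of its shape (token text mapped to a sentiment score in roughly [-1, 1]):

# hypothetical sketch; the real table covers many more emoji and emoticons
EMOJI_TO_SENTIMENT_VALUE = {
    ':)': 0.5,
    ':(': -0.4,
    '😻': 0.7,   # smiling cat face with heart-eyes
    '😈': -0.2,  # smiling face with horns
}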
%time scl = [EMOJI_TO_SENTIMENT_VALUE[token.text] for token in long_s_doc if (token.text in EMOJI_TO_SENTIMENT_VALUE)]
len(scl), sum(scl), sum(scl)/len(scl)
output =>
CPU times: user 179 ms, sys: 0 ns, total: 179 ms
Wall time: 178 ms
(15200, 1090.7019922523152, 0.07175671001659968)
The sentiment was 0.07 (neutral) for a 0.5-million-character "note" with 15,200 emoji and emoticons, computed in 178 ms. A fast sentiment analysis calculation!
18. NLP text preprocessing: Removing emoji
You can remove emoji using the spaCy pipeline add-on:
%time long_s_doc_no_emojicon = [token for token in long_s_doc if token._.is_emoji == False]
print('size: {:g} {}'.format(len(long_s_doc_no_emojicon),long_s_doc_no_emojicon[:int(text_l/5)]))
output =>
CPU times: user 837 ms, sys: 4.98 ms, total: 842 ms
Wall time: 841 ms
size: 121599
[:(, 888, eihtg, dod, fee, , document, title, :(, catnip, immednatedly, 2nd, levelheading, , ca, n't, be, a, ckunk, , 4, , 123, 456, wo, n't, seven, , shine, , beighty, , :(, 888, eihtg, dod, fee, , document, title, :(, catnip, immednatedly, 2nd, levelheading, , ca, n't, be, a, ckunk, , 4, , 123, 456, wo, n't, seven, , shine, , beighty, , :(, 888, eihtg, dod, fee, ]
The emoji spaCy pipeline addition detected the emoji, 😻 😈, but missed the emoticons :) and :(.
19. NLP text pre-processing: Removing emoji (better)
We developed EMOJI_TO_PHRASE to detect both the emoji, such as 😻 and 😈, and the emoticons, such as :) and :(, and remove them [8,9].
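EMOJI_TO_PHRASE is also our own mapping and is not listed in this article; a hypothetical, minimal sketch of its shape:

# hypothetical sketch; the real mapping covers far more emoji and emoticons
EMOJI_TO_PHRASE = {
    ':)': 'SMILING FACE',
    ':(': 'FROWNING FACE',
    '😻': 'SMILING CAT FACE WITH HEART-SHAPED EYES',
    '😈': 'SMILING FACE WITH HORNS',
}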
%time text = [token.text if (token.text in EMOJI_TO_PHRASE) == False \
else '' for token in long_s_doc]
%time long_s = ' '.join(text)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))
output =>
CPU times: user 242 ms, sys: 3.76 ms, total: 245 ms
Wall time: 245 ms
CPU times: user 3.37 ms, sys: 73 µs, total: 3.45 ms
Wall time: 3.46 ms
size: 569997
888 eihtg dod fee document title catnip immednatedly 2nd levelheading ca n't be a ckunk 4 123 456 wo n't seven shine beighty 888 eihtg dod fee document title catnip immednatedly 2nd levelheading ca n't be a ckunk 4 123 456 wo n't seven shine beighty 888 eihtg dod fee document title catnip imm
20. NLP text pre-processing: Replace emojis with a phrase
We can translate emoji into natural language phrases.
%time text = [token.text if token._.is_emoji == False else token._.emoji_desc for token in long_s_doc]
%time long_s = ' '.join(text)
print('size: {:g} {}'.format(len(long_s),long_s[:250]))
output =>
CPU times: user 1.07 s, sys: 7.54 ms, total: 1.07 s
Wall time: 1.07 s
CPU times: user 3.78 ms, sys: 0 ns, total: 3.78 ms
Wall time: 3.79 ms
size: 794197
:( smiling cat face with heart-eyes smiling face with horns 888 eihtg dod fee document title :( catnip immednatedly 2nd levelheading ca n't be a ckunk 4 123 456 wo n't seven shine beighty
The emoji spaCy pipeline addition detected the emoji, 😻 😈, but missed the emoticons :) and :(.
21. NLP text pre-processing: Replace emojis with a phrase (better)
We can translate both emoji and emoticons into natural language phrases.
%time text = [token.text if (token.text in EMOJI_TO_PHRASE) == False \
else EMOJI_TO_PHRASE[token.text] for token in long_s_doc]
%time long_s = ' '.join(text)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))
output =>
CPU times: user 251 ms, sys: 5.57 ms, total: 256 ms
Wall time: 255 ms
CPU times: user 3.54 ms, sys: 91 µs, total: 3.63 ms
Wall time: 3.64 ms
size: 904397
FROWNING FACE SMILING CAT FACE WITH HEART-SHAPED EYES SMILING FACE WITH HORNS 888 eihtg dod fee document title FROWNING FACE catnip immednatedly 2nd levelheading ca n't be a ckunk 4 123 456 wo n't seven shine beighty FROWNING FAC
Again, EMOJI_TO_PHRASE detected both the emoji, 😻 😈, and the emoticons, such as :) and :(, and substituted a phrase.
22. NLP text preprocessing: Correct Spelling
We will use symspell for spelling correction [14].
SymSpell, based on the Symmetric Delete spelling correction algorithm, takes just 0.000033 seconds per word (edit distance 2) and 0.000180 seconds per word (edit distance 3) on an old MacBook Pro [14].
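sym_spell_setup and check_spelling below are our own wrappers and are not listed in this article; a minimal sketch of what they might look like, assuming the symspellpy package:

import pkg_resources
from symspellpy import SymSpell, Verbosity

sym_spell = None

def sym_spell_setup(max_edit_distance=2):
    # load SymSpell with the English frequency dictionary shipped with symspellpy
    global sym_spell
    sym_spell = SymSpell(max_dictionary_edit_distance=max_edit_distance, prefix_length=7)
    path = pkg_resources.resource_filename('symspellpy', 'frequency_dictionary_en_82_765.txt')
    sym_spell.load_dictionary(path, term_index=0, count_index=1)

def check_spelling(word, max_edit_distance=2):
    # return the closest dictionary term, or the word itself if nothing is found
    suggestions = sym_spell.lookup(word, Verbosity.CLOSEST,
                                   max_edit_distance=max_edit_distance,
                                   include_unknown=True)
    return suggestions[0].term if suggestions else word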
%time sym_spell_setup()
%time tk = [check_spelling(token.text) for token in long_s_doc[0:99999]]
%time long_s = ' '.join(tk)
print('size: {:g} {}'.format(len(long_s),long_s[:250]))
output =>
CPU times: user 5.22 s, sys: 132 ms, total: 5.35 s
Wall time: 5.36 s
CPU times: user 25 s, sys: 12.9 ms, total: 25 s
Wall time: 25.1 s
CPU times: user 3.37 ms, sys: 42 µs, total: 3.41 ms
Wall time: 3.42 ms
size: 528259 FROWNING FACE SMILING CAT FACE WITH HEART a SHAPED EYES SMILING FACE WITH HORNS 888 eight do fee document title FROWNING FACE catnip immediately and levelheading a not be a chunk a of 123 456 to not seven of shine of eighty
Spell correction was accomplished for immednatedly, ckunk, and beighty. Correcting mis-spelled words is our largest computation. It required 30 seconds for 0.8 million characters.
23. NLP text preprocessing: Replacing Currency Symbol (spaCy)
%time token = [token.text if token.is_currency == False else '_CUR_' for token in long_s_doc]
%time long_s = ' '.join(token)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))
Note: spaCy counts the :( and :) emoticons as punctuation, so removing all punctuation also removes them. You can protect the emoticons with:
%time long_s_doc = [token for token in long_s_doc if token.is_punct == False or token._.is_emoji == True]
print('size: {:g} {}'.format(len(long_s_doc),long_s_doc[:50]))
However, replace_currency_symbols and regex ignore context and replace any currency symbol. You may have multiple uses of $ in your text and thus cannot ignore context. In this case, you can use spaCy:
%time tk = [token.text if token.is_currency == False else '_CUR_' for token in long_s_doc]
%time long_s = ' '.join(tk)
print('size: {:g} {}'.format(len(long_s),long_s[:250]))
output =>
CPU times: user 366 ms, sys: 13.9 ms, total: 380 ms
Wall time: 381 ms
CPU times: user 9.7 ms, sys: 0 ns, total: 9.7 ms
Wall time: 9.57 ms
size: 1.692e+06 😻 👍 🏿 < title > Document Title</title > :( < html><h2>2nd levelheading</h2></html > bhc@gmail.com f@z.y a$@ ca n't bc$$ ef$4 5 66 _CUR_ wo nt seven eihtg _CUR_ nine _CUR_ _CUR_ zer$ 😻 👍 🏿 < title > Document Title</title > :( < html><h2>2nd leve
24. NLP text preprocessing: Removing e-mail address (spaCy)
%time tokens = [token for token in long_s_doc if not token.like_email]
print('size: {:g} {}'.format(len(tokens),tokens[:int(text_l/3)]))
output =>
CPU times: user 52.7 ms, sys: 3.09 ms, total: 55.8 ms
Wall time: 54.8 ms
size: 99999
About 0.06 second for 1 million characters.
25. NLP text preprocessing: Remove whitespace and punctuation (spaCy)
%time tokens = [token.text for token in long_s_doc if (token.pos_ not in ['SPACE','PUNCT'])]
%time text = ' '.join(tokens)
print('size: {:g} {}'.format(len(text),text[:text_l]))
26. NLP text preprocessing: Removing stop-words
New NLP models (e.g., logistic regression and transformers) and NLP tasks (e.g., Sentiment Analysis) continue to appear. Some benefit from stop-word removal, and some do not [2].
%time tokens = [token.text for token in long_s_doc if token.is_stop == False]
%time long_s = ' '.join(tokens)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))
27. NLP text pre-processing: Lemmatization
Lemmatization looks beyond word reduction and considers a language’s full vocabulary to apply a morphological analysis to words.
Lemmatization looks at the surrounding text to determine a given word’s part of speech. It does not categorize phrases.
%time tokens = [token.lemma_ for token in long_s_doc]
%time long_s = ' '.join(tokens)
print('size: {:g} {}'.format(len(long_s),long_s[:text_l]))
output =>
CPU times: user 366 ms, sys: 13.9 ms, total: 380 ms
Wall time: 381 ms
CPU times: user 9.7 ms, sys: 0 ns, total: 9.7 ms
Wall time: 9.57 ms
size: 1.692e+06 😻 👍 🏿 < title > Document Title</title > :( < html><h2>2nd levelheading</h2></html > bhc@gmail.com f@z.y a$@ ca n't bc$$ ef$4 5 66 _CUR_ wo nt seven eihtg _CUR_ nine _CUR_ _CUR_ zer$ 😻 👍 🏿 < title > Document Title</title > :( < html><h2>2nd leve
Note: spaCy does not have stemming. You can add it if you want. Stemming does not work as well as lemmatization because stemming does not consider context [2] (which is why some researchers consider spaCy "opinionated").
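If you do want stemming, here is a minimal sketch with NLTK's PorterStemmer (my illustration; not part of our production pipeline):

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ['running', 'studies', 'feet']])
# => ['run', 'studi', 'feet']

Note that 'studi' is not a dictionary word and 'feet' is left unchanged; a lemmatizer would map 'feet' to 'foot'.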
Note: If you do not know what stemming is, you can still be on the Survivor show (my opinion).
Conclusion
Whatever the NLP task, you need to clean (pre-process) the data (text) into a corpus (document or set of documents) before it is input into any NLP model.
I adopt a text pre-processing framework that has three major categories of NLP text pre-processing:
1. Noise Removal
- Transform Unicode characters into text characters.
- convert a document image into segmented image parts and text snippets [10];
- extract data from a database and transform into words;
- remove markup and metadata in HTML, XML, JSON, .md, etc.;
- remove extra whitespaces;
- remove emoji or convert emoji into phrases;
- Remove or convert currency symbol, URLs, email addresses, phone numbers, hashtags, other identifying tokens;
- correct mis-spelled words (tokens) [7];
- Remove remaining unwanted punctuation;
- …
2. Tokenization
- Tokenization splits strings of text into smaller pieces, or "tokens." Paragraphs segment into sentences, and sentences tokenize into words.
3. Normalization
- Change all characters to lower case;
- Remove English stop words, or whatever language the text is in;
- Perform Lemmatization or Stemming.
Note: The tasks listed in Noise Removal and Normalization can move back and forth. The categorical assignment is for explanatory convenience.
Note: We do not remove stop-words anymore. We found that our current NLP models have higher F1 scores when we leave in stop-words.
Note: Stop-word removal is expensive computationally. We found the best way to achieve faster stop-word removal was not to do it.
Note: We saw no significant change in Deep Learning NLP models’ speed with or without stop-word removal.
Note: The Noise Removal and Normalization lists are not exhaustive. These are some of the tasks I have encountered.
Note: The latest NLP Deep Learning models are more accurate than older models. However, Deep Learning models can be impractically slow to train and are still too slow for prediction. We show in a follow-on article how we speed-up such models for production.
Note: Stemming algorithms drop the end or the beginning of a word, using a list of common prefixes and suffixes, to create a base root word.
Note: Lemmatization uses linguistic knowledge bases to get the correct roots of words. Lemmatization performs morphological analysis of each word, which requires the overhead of creating a linguistic knowledge base for each language.
Note: Stemming is faster than lemmatization.
Note: Intuitively and in practice, lemmatization yields better results than stemming in an NLP Deep Learning model. Stemming generally reduces precision accuracy and increases recall accuracy because it injects semi-random noise when wrong.
Read more in How and Why to Implement Stemming and Lemmatization from NLTK.
Our own implementations, together with spaCy and textacy, are our current choice for fast production text pre-processing. Given the big gap in performance, I would recommend spaCy and textacy for production purposes over NLTK's implementation of Stanford's NER.
In the next blogs, we will see how performance changes using multi-processing, multithreading, Nvidia GPUs, and pySpark. I will also write about how and why we built our implementations, such as EMOJI_TO_PHRASE and EMOJI_TO_SENTIMENT_VALUE, and how to add emoji, emoticons, or any Unicode symbol.
References
[1] How Much Data Do We Create Every Day? The Mind-Blowing Stats Everyone Should Read.
[2] Industrial-Strength Natural Language Processing; Turbo-charge your spaCy NLP pipeline.
[4] Textacy: Text (Pre)-processing.
[5] Hugging Face.
[6] Language Models are Few-Shot Learners.
[7] re — Regular expression operations.
[10] Classifying e-commerce products based on images and text.
[11] DART: Open-Domain Structured Data Record to Text Generation.
[12] Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT.
[13] fast.ai .
[14] 1000x faster Spelling Correction.
This article was originally published on Medium and re-published to TOPBOTS with permission from the author. Read more technical guides by Bruce Cottman, Ph.D. on Medium.