Tutorial on Using spaCy to Create Your Own NER Model

Among data mining subtasks, Named Entity Recognition (NER) involves detecting items such as names, medical diagnoses, geographic locations, and percentages in unstructured data sources. If the goal is to train a model to locate compounds by their names, capturing all of the different forms of chemical names present in the training data is crucial. To accomplish this, spaCy’s entity recogniser and other NLP tools can be leveraged. Third-party software such as ANNIE can also be utilised as necessary.

To extract the most valuable and practical insights from raw data, developers often rely on Natural Language Processing (NLP) tools. These tools are particularly useful for processing natural language as the words within it possess distinctive traits that set them apart from other types of textual data.

Roughly two million developers have made use of the open-source spaCy package for processing natural language with computer programs. With its assistance, you can develop a personalised entity recognition model that identifies numerous instances of a specified entity.

In this article, I will outline the measures we took to integrate spaCy into our entity recognition system.

What Is spaCy?

Python and Cython are leveraged by the spaCy package to handle intricate NLP tasks.

Matthew Honnibal and Ines Montani, the founders of Explosion, a software company, created and released the spaCy library under the MIT license. While the Natural Language Toolkit (NLTK) is primarily used in academic settings, SpaCy has been specifically designed for deployment in production environments.

SpaCy is a cutting-edge natural language processing (NLP) package, developed in Python and Cython. It is widely regarded as one of the most comprehensive and user-friendly NLP annotation tools available, and is becoming increasingly popular with businesses who wish to explore the opportunities that natural language processing and analysis software afford.

The processing and utilisation of large amounts of unstructured textual data is frequently disregarded despite its significance. To facilitate the comprehension of this data by machines, it must be appropriately formatted. Natural language processing (NLP) can be employed to accomplish this.

SpaCy provides pre-trained language-specific models that are capable of handling various Natural Language Processing (NLP) tasks, including parsing, tagging, Named Entity Recognition (NER), lemmatization, tok2vec, attribute ruler, and more. A total of 18 languages are supported, and a single multilingual pipeline module is also offered.

What is Named Entity Recognition (NER)?

To extract valuable information from a text, it is necessary to first identify and categorise pertinent items.

Entity identification, entity chunking, and entity extraction are synonymous terms for the same task: Named Entity Recognition, more commonly referred to as NER or NERC. NER is used in the fields of Artificial Intelligence (AI), Natural Language Processing (NLP), and Machine Learning (ML).

Named entities in text data pertain to real-world entities, including individuals, locations, and objects. Any word or phrase within the text that appears as a proper name can be classified as a named entity.

Named Entity Recognition (NER) is the first step in Information Retrieval, a method used to locate pertinent information within unstructured, free-form textual material. NER software accomplishes this by scanning the unstructured text for specific named entities, such as people’s names, company names, numbers, currency amounts, percentages, and codes, and then classifies them according to predetermined categories.

The spaCy system utilises its statistical entity recogniser to label contiguous spans of tokens. This procedure consists of two steps that work in tandem:

  • Identifying and recognising unknown or unfamiliar entities.
  • Classifying the various entities appropriately.

The first step in Named Entity Recognition (NER) is to determine the type of token (or sequence of tokens) that constitutes a particular entity. Inside-Outside-Beginning (IOB) chunking is commonly used to identify the beginning and ending indices of entities. Following that, entity classification schemes must be established. Some of the more commonly used entity classification options are as follows:

  • Person
  • Organisation
  • Location
  • Time
  • Calculations or Numerals
  • String patterns, such as number sequences, email addresses, and IP addresses
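The IOB scheme mentioned above is easy to see on spaCy tokens. In this self-contained sketch, a blank English pipeline with one hand-written pattern stands in for a trained model, so nothing needs to be downloaded:

```python
import spacy

# A blank English pipeline plus one hand-written pattern stands in for a
# trained statistical model, so this example needs no model download.
nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([{"label": "ORG", "pattern": "New York Times"}])

doc = nlp("She reads the New York Times every morning.")

# Each token carries an IOB tag: B begins an entity, I continues it,
# and O marks tokens outside any entity.
for token in doc:
    print(token.text, token.ent_iob_, token.ent_type_)
```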

Machine learning and deep learning are the primary techniques used for entity recognition in many modern systems. However, in some instances, hard rules are still utilised. Textual data presents a unique challenge due to its ambiguous nature, having been composed by humans and therefore open to multiple interpretations. For instance, the term “Boston” could refer to either a city or a person, leading to difficulty in accurately identifying the relevant entity.

Different NER Models

  1. Named Entity Recognition Using a Dictionary

    Named Entity Recognition (NER) is the process of identifying and classifying entities within text. Named entities, such as a person’s name, an organisation, or a location, are detected and categorised based on predetermined criteria. This section presents an NER approach that utilises dictionaries as an informational source.

    This software employs public domain dictionaries available on the internet. A dictionary is a collection of words and phrases that are used to define and clarify various nouns.

    There are two fundamental phases involved in this process:
    • Employing dictionaries to identify potential candidates, and
    • Eliminating irrelevant results utilising a parts-of-speech tagger

      By incorporating additional filters based on word forms, the accuracy and sensitivity of Named Entity Recognition (NER) systems can be increased. To enable these dictionary searches, an efficient prefix-tree data structure is used. Additionally, most existing NER methods rely on some form of machine learning.

      A simple string-matching technique determines whether terms from the dictionary appear in the text. The system’s lexicon must be continuously updated and maintained, which is the principal limitation of this approach.
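    As a minimal sketch of dictionary-based lookup, spaCy’s PhraseMatcher can play the role of the efficient dictionary search described above; the three-term chemical dictionary here is purely illustrative:

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")

# A tiny stand-in dictionary; a real system would load thousands of
# terms from a public-domain lexicon.
dictionary = ["sodium chloride", "acetic acid", "ethanol"]

# attr="LOWER" makes the lookup case-insensitive.
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("CHEMICAL", [nlp.make_doc(term) for term in dictionary])

doc = nlp("The sample was dissolved in Ethanol and sodium chloride.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text, "->", nlp.vocab.strings[match_id])
```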
  2. Rule-Based Frameworks

    Lexicons and grammars are two fundamental components of a Natural Language Understanding (NLU) system. A lexicon is a structured repository of words, phrases, and terms that are categorised according to their semantic meaning. This enables the NLU system to detect and categorise words and terms from a given input. However, in cases where the semantic meaning of a word cannot be determined from the lexicon alone, the grammar component of the NLU system is employed to make a final determination.

    To accurately detect and categorise named entities, Named Entity Recognition and Classification (NERC) systems must validate their vocabulary and grammar against vast collections of text. It is also necessary to evaluate the systems’ effectiveness and reliability.

    However, this process does not guarantee that the performance of the created system will remain stable over time. As a result of newly created Named Entities (NEs) or changes in the meaning of existing ones, it is anticipated that the system’s error rate may increase significantly with time.

    The rule-based matcher engine provided by spaCy allows not only for the identification of specific words and phrases but also for the examination of tokens and their relationships within a text. Furthermore, the context of tokens can be taken into account, spans can be merged into single tokens, and new entries can be added to the named entities via the doc.ents property.

    This approach extracts information in line with established standards. The most widely accepted standards are
    • Pattern-Based Rules:

      When a pattern-based rule is applied, words in a text are matched against a predefined morphological pattern.
    • Contextual Norms:

      Rules are based on the meaning of a term or on the language used in its surrounding context.
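    A minimal sketch of a pattern-based rule using spaCy’s Matcher is shown below; the MEASUREMENT label and the number-plus-unit pattern are illustrative assumptions, not part of spaCy’s defaults:

```python
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Span

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# A pattern-based rule: a number token followed by a unit token.
pattern = [{"LIKE_NUM": True}, {"LOWER": {"IN": ["kg", "km", "mg"]}}]
matcher.add("MEASUREMENT", [pattern])

doc = nlp("The package weighs 12 kg and travelled 300 km.")

# Turn the matches into labelled spans and attach them via doc.ents.
doc.ents = [Span(doc, start, end, label="MEASUREMENT")
            for _, start, end in matcher(doc)]

for ent in doc.ents:
    print(ent.text, ent.label_)
```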
  3. Machine Learning-Based Systems:

    Machine learning techniques employ statistical models to detect entities in a document based on its textual features. In a feature-based model, the features themselves carry the information used to recognise entity names. This approach overcomes the limitations of rule-based and dictionary-based techniques: it can identify an entity even when its spelling varies slightly.

    Concerning Named Entity Recognition (NER), software developers commonly adopt a machine learning-based approach. An annotated text is first used to train a Machine Learning (ML) model. The time taken to train the model is heavily dependent on its complexity. Following training, the model may be used to annotate unprocessed documents.
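    The training loop can be sketched as follows; the two-sentence corpus and the CHEMICAL label are toy assumptions, and a real project would use far more annotated data:

```python
import random
import spacy
from spacy.training import Example

# A toy annotated corpus: (text, {"entities": [(start_char, end_char, label)]}).
TRAIN_DATA = [
    ("Acetic acid is a weak acid.", {"entities": [(0, 11, "CHEMICAL")]}),
    ("The solvent used was ethanol.", {"entities": [(21, 28, "CHEMICAL")]}),
]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
ner.add_label("CHEMICAL")

nlp.initialize()

# A few passes over the data; real training uses many more examples
# and a proper train/dev split.
for epoch in range(20):
    random.shuffle(TRAIN_DATA)
    for text, annotations in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        nlp.update([example])
```

    After training, the updated pipeline can annotate unprocessed documents like any other spaCy model.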

Named Entity Recognition with spaCy

The spaCy Python package enhances and streamlines natural language processing (NLP). Intended for professionals, this software can be used to develop applications capable of processing and understanding large amounts of text.

This technique can be used either as a preprocessing step for deep learning or independently to organise data or recognise natural language. spaCy provides a range of services, including tokenisation, parts-of-speech tagging, text categorisation, and named entity recognition.

Using spaCy for Named Entity Recognition (NER) tasks is simple and straightforward. The pre-trained models deliver reliable results on general-purpose text, and the pipeline can be adjusted when the data must meet a company’s specific requirements.

With its named entity recognition (NER) capability, SpaCy offers an advanced natural language processing (NLP) system for labelling sequences of tokens. This model provides a default method for recognising a broad range of names, numbers, and identifiers associated with people, places, objects, languages, activities, and many other categories.

SpaCy provides numerous options for expanding our entity collection by updating our model’s training data. When required, Named Entity Recognition (NER) can be customised with any existing set of classes.

The following four pre-trained English spaCy models are available under the MIT license:

  1. en_core_web_sm (12 MB)
  2. en_core_web_md (43 MB)
  3. en_core_web_lg (741 MB)
  4. en_core_web_trf (438 MB)

Getting Started with Named Entity Recognition (NER)

As the use of Natural Language Processing (NLP) applications continues to rise, it is essential to become proficient in Named Entity Recognition (NER) to effectively train a model and maximise its potential.

In this blog post, we have furnished comprehensive guidelines for using spaCy to train a custom named entity recognition model. To make it easy for beginners to learn, we have included a considerable amount of information.

Training your own Natural Language Processing (NLP) models can be an exciting and fulfilling process. These models not only have the potential to be integrated into various NLP packages, but they can also provide you with a thorough understanding of how these packages function. We are confident that you will find these tasks to be as exciting and captivating as we do.


  1. Is it possible to add my own entities to spaCy?

    If the predefined named entities in spaCy are not sufficient, you can create your own using the EntityRuler class, which allows you to add custom entities to a spaCy pipeline.

    A new EntityRuler is created and added to the NLP pipeline by calling the add_pipe() method. The ruler’s add_patterns() method then lets you register a list of pattern dictionaries that map text patterns to entity labels.
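    A minimal sketch of these steps, using a blank pipeline and two illustrative patterns (the PRODUCT and ORG examples are assumptions, not spaCy defaults):

```python
import spacy

nlp = spacy.blank("en")

# add_pipe("entity_ruler") creates the EntityRuler and adds it to the pipeline.
ruler = nlp.add_pipe("entity_ruler")

# add_patterns() takes a list of {"label": ..., "pattern": ...} dictionaries;
# a pattern can be a plain string or a list of token attributes.
ruler.add_patterns([
    {"label": "PRODUCT", "pattern": "iPhone 15"},
    {"label": "ORG", "pattern": [{"LOWER": "acme"}, {"LOWER": "corp"}]},
])

doc = nlp("Acme Corp announced a case for the iPhone 15.")
print([(ent.text, ent.label_) for ent in doc.ents])
```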
  2. When deciding which is better, should I learn NLTK or spaCy?

    Here is why spaCy is usually a better choice than NLTK:
    • The Natural Language Toolkit (NLTK) offers a wide range of algorithms that can be highly beneficial for scientists, but can be difficult for programmers to use. On the other hand, spaCy keeps its toolkit up-to-date with the most advanced and dependable algorithms available.
    • Although the Natural Language Toolkit (NLTK) offers support for a broad range of languages, spaCy goes further by providing pre-trained statistical pipelines for languages including English, German, Spanish, French, Portuguese, Italian, and Dutch. Moreover, it has the ability to recognise named entities in multiple languages.
    • spaCy has a clear advantage over NLTK because it supports word vectors, a feature that NLTK lacks. Additionally, spaCy is often more effective than NLTK because it utilises the most advanced and efficient techniques available.
  3. What does Doc.ents mean in spaCy?

    This Doc property stores a record of all entities detected within the document. If the pipeline includes an entity recogniser, this property returns the recognised entities as Span objects.

    Documents are represented as a sequence of tokens. It is possible to export sentences and named entities as NumPy arrays, as well as serialising them in a binary text format without any loss of information. An array of TokenC structures is stored by the Doc object. Python’s Token and Span objects are simply pointers to this array and do not themselves contain the associated data.
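    The lossless binary serialisation mentioned above is handled by spaCy’s DocBin class; a minimal round-trip sketch (the GPE pattern here is illustrative):

```python
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([{"label": "GPE", "pattern": "Boston"}])

doc = nlp("Boston is a city in Massachusetts.")

# Serialise the annotated Doc to a compact binary format...
doc_bin = DocBin()
doc_bin.add(doc)
data = doc_bin.to_bytes()

# ...and restore it without information loss; the entity spans survive.
restored = list(DocBin().from_bytes(data).get_docs(nlp.vocab))[0]
print([(ent.text, ent.label_) for ent in restored.ents])
```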
  4. What type of NER model does spaCy utilise?

    spaCy employs a Multilayer Convolutional Neural Network (CNN) based on Word Embeddings for its Named Entity Recognition (NER) model. It is an incredibly efficient statistical approach for NER in Python, enabling users to attach labels to clusters of consecutive tokens.

    The default model can identify a wide range of entity types, including people, organisations, languages, and events. In addition to the pre-defined entities, spaCy’s Named Entity Recognition (NER) model also allows users to create custom classes by retraining the model with newly annotated data.
  5. How accurate is spaCy?

    With the release of version 3.0, spaCy now provides improved accuracy due to its modern transformer-based pipelines. Multi-task learning enables users to train their own pipelines using any pre-trained transformer and share the same training between multiple components. Furthermore, spaCy’s transformer support enables users to take advantage of the power of PyTorch or HuggingFace, which provide tens of thousands of pre-trained models.

Join the Top 1% of Remote Developers and Designers

Works connects the top 1% of remote developers and designers with the leading brands and startups around the world. We focus on sophisticated, challenging tier-one projects which require highly skilled talent and problem solvers.