
Computer to Human: Understanding and Responding

An overview of Natural Language Processing

By Natural Language Processing, or NLP, we mean the field of Computer Science and Artificial Intelligence concerned with the interaction between computers and human languages.

In brief, NLP is used to teach computers how to read, decipher, and understand human sentences. While many people now rely daily on machine translation services, like Google Translate, or on personal assistants, like Google Assistant or Siri, the road to this point was not an easy one.

We use Natural Language Processing to accomplish several tasks, such as:

  • Language translation applications, such as Google Translate (mentioned above).
  • Word processors, such as Microsoft Word, and grammar and tone-of-voice checkers, such as Grammarly.
  • Interactive Voice Response (IVR) applications, now widely used in call centers to handle at least the first part of a call.
  • Personal assistant applications, such as Cortana or Alexa.

How does NLP work?

By writing and applying algorithms that identify and then extract the rules of human natural language. In this way, unstructured language data is converted into information that computers can understand and, eventually, respond to.

The computer processes the information first from a syntactic and then from a semantic point of view, inferring the meaning of each word from its context and converting the semantic intent back into human language.
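To make the syntactic step concrete, here is a minimal sketch using the open-source spaCy library (our own illustrative choice, not a tool named in this article). Each word receives a part-of-speech tag and a dependency role, the raw material from which meaning is later inferred:

```python
# A minimal sketch of syntactic analysis with spaCy. Assumes spaCy is
# installed and the small English model has been downloaded:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The computer translates the sentence quickly.")

# Syntactic view: each token gets a part-of-speech tag and a dependency
# role, which downstream components use to infer meaning from context.
for token in doc:
    print(token.text, token.pos_, token.dep_)
```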

The huge recent advances in NLP can be attributed to progress in Deep Learning technologies. NLP is now integrated into everyday data analysis: neural networks can extract information from large bodies of text quickly and accurately, classify text into categories, determine its sentiment, and measure the similarity of text data. All of that information can ultimately be summarized in a single feature vector.
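As an illustration of that last point, the sketch below turns whole sentences into single feature vectors and compares them. It assumes the sentence-transformers package and the public all-MiniLM-L6-v2 encoder, both our own illustrative choices:

```python
# A minimal sketch of text-to-feature-vector encoding, assuming the
# sentence-transformers package is installed:
#   pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small pretrained encoder

sentences = [
    "The translation service is fast and accurate.",
    "This translator produces quick, precise output.",
    "I enjoy hiking in the mountains.",
]

# Each sentence becomes a single fixed-length feature vector.
vectors = model.encode(sentences)

# Sentences with similar meaning get similar vectors (cosine near 1),
# which is exactly what classification and similarity tasks exploit.
print(cosine_similarity(vectors[:1], vectors[1:]))
```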


Machine Translation, from rudimentary models to quality outputs

Machine Translation, or MT, is the task of automatically converting source text in one language (which may also be transcribed from speech) into text in another language.

The fluidity and complexity of human language, with its syntax and variety of semantic nuances, make automatic translation one of the most challenging tasks in artificial intelligence.

The main challenge is that accurate translation requires the ability to resolve ambiguity and establish what a sentence conveys in terms of information, tone, and so on.

The first public demonstration of a Russian-English machine translation system took place in New York in January 1954, as the result of a collaboration between IBM and Georgetown University.

It was a small-scale experiment of just 250 words and six ‘grammar’ rules, but it raised expectations that automatic systems would soon be capable of high-quality translation.

It took more time than expected, but the significant development of neural networks finally brought us very fast and accurate translations. These technologies are now widely used and integrated into many digital tools, such as web browsers.
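To show how little code modern neural machine translation requires, the sketch below translates Russian to English, the same language pair as the 1954 demonstration. It assumes the Hugging Face transformers package and the public Helsinki-NLP/opus-mt-ru-en model, neither of which is tied to the tools discussed in this article:

```python
# A minimal sketch of neural machine translation with a pretrained model.
# Assumes: pip install transformers sentencepiece torch
from transformers import pipeline

# Russian-to-English, the same language pair as the Georgetown-IBM demo.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")

result = translator("Мы переводим текст с одного языка на другой.")
print(result[0]["translation_text"])
# Expected output is along the lines of:
# "We translate text from one language into another."
```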


Natural Language Translator, a case study

The latest advancements in NLP are improving the quality of automatic translation services in terms of speed and accuracy, and approaches like NMT (neural machine translation) take advantage of newer and increasingly efficient hardware based on CPU architectures such as Intel Skylake.

RINF TECH was involved in a project aimed at fully optimizing an NMT software to match its hardware platform. Learn more about how RINF TECH’s team worked to deliver a faster and more accurate tool in the Natural Language Translator case study.
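The project’s specific optimizations are not detailed here, but the sketch below shows one widely used technique for adapting an NMT model to CPU inference: dynamic INT8 quantization of the linear layers. The model and library calls are illustrative assumptions, not a description of RINF TECH’s actual work:

```python
# An illustrative sketch of a common CPU-side NMT optimization: dynamic
# INT8 quantization. Assumes: pip install transformers sentencepiece torch
import torch
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-ru-en"  # reusing the public model from above
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

# Replace fp32 linear layers with int8 equivalents: weights are quantized
# ahead of time, activations on the fly, speeding up CPU matrix math.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Привет, мир!", return_tensors="pt")
outputs = quantized.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```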

You can follow us on LinkedIn to find fresh insights on technology, from AI to Robotics.

Looking for a technology partner?

Let’s talk.