Balancing Standards and De Novo Approaches in Algorithms

There are many ways to tackle hate speech and mis-/disinformation with Large Language Models (LLMs). In certain cases, some LLMs outperform the well-tried and tested BERT model(s), for instance when handling rare or low-resource languages. That is because, fortunately, these models offer ways to move beyond fixed indexes of words and phrases, which approaches built on such registers cannot match, and to deliver tailored solutions for memes, videos, and so forth.
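To illustrate the contrast, here is a minimal sketch (not drawn from any particular system discussed here): a fixed keyword register versus a transformer-based classifier that scores the whole utterance in context. The model name, placeholder terms, and threshold are illustrative assumptions only.

```python
# A minimal sketch, not a definitive implementation: a fixed keyword register
# versus a transformer-based classifier. The model name "unitary/toxic-bert"
# and the placeholder terms are illustrative assumptions.
from transformers import pipeline

KEYWORD_REGISTER = {"exampleslur", "examplethreat"}  # placeholder index of flagged terms

def keyword_flag(text: str) -> bool:
    """Flag text only if it contains a listed term; misses paraphrase, irony, and new coinages."""
    return any(token in KEYWORD_REGISTER for token in text.lower().split())

# A fine-tuned transformer scores the whole utterance in context instead of
# matching fixed strings, which is what lets it generalise beyond the register.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def model_score(text: str) -> dict:
    """Return the top label and confidence; the label set depends on the chosen model."""
    return classifier(text)[0]  # e.g. {"label": "toxic", "score": 0.97}
```

The keyword check only ever matches the exact strings in the register, whereas the classifier can flag reworded or implicit hate speech, which is the gap the newer approaches aim to close.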

Giving preference to BERT and similar models would, to some degree, mean prematurely sidelining newer or more creative approaches to NLP. Yet when it comes to the share of ‘traditional’ versus more ‘progressive’ technical solutions, conservative approaches to tackling hate and mis-/disinformation already lead the algorithmic landscape in our fast-paced times. Standards are good and necessary, as we all know. But they come with disadvantages.

Taking the above into account, open-source and university research may be an excellent way to promote the said progressive technical solutions. Some of these approaches already appear to work very well.

Thorsten Koch, MA, PgDip
Policyinstitute.net
21 February 2024
