Is the algorithm capable of writing "Harry Potter"?
Will artificial intelligence replace journalists and perhaps even writers? In June 2020, OpenAI (co-founded with backing from Elon Musk), one of the two leading AI research laboratories (its main rival being DeepMind), presented the next version of its natural-language model, GPT-3, which many consider a breakthrough.
The model uses deep learning to let a computer generate human-like statements. It is based on a statistical analysis of language: estimating the probability that words occur one after another. Put very simply, the algorithm analyses how often the words 'blue' and 'ocean', for example, appear in succession. In the same way, it analyses how whole sentences or paragraphs tend to follow one another.
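The idea of estimating the probability of one word following another can be sketched with a toy bigram counter. This is a deliberately minimal illustration of the statistical principle, not GPT-3 itself, and the tiny corpus below is our own made-up stand-in for the billions of pages the real model is trained on:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the model's training data
# (an assumption for illustration only).
corpus = "the blue ocean meets the blue sky and the blue ocean glows".split()

# Count how often each word follows another (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probability(prev: str, nxt: str) -> float:
    """Estimated probability that `nxt` follows `prev` in the corpus."""
    counts = following[prev]
    return counts[nxt] / sum(counts.values())

# 'ocean' follows 'blue' in 2 of the 3 occurrences of 'blue'.
print(next_word_probability("blue", "ocean"))  # 0.666...
```

GPT-3 works on the same statistical intuition, but instead of counting word pairs it uses a deep neural network over much longer contexts.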
What distinguishes GPT-3 from previous solutions is the sheer scale at which it operates. GPT-3 was trained on hundreds of billions of words of text: web pages, books and Wikipedia. A large part of the training data came from the Common Crawl corpus, which contains billions of crawled web pages.
During the GPT-3 presentation, it was shown how the model writes a detective story featuring Harry Potter, composes poems and tones down aggressive texts. The audience could not distinguish texts written by the algorithm from texts written by humans.
People approved by OpenAI to test GPT-3 shared very interesting experiments on Twitter. GPT-3 was able to correctly select answers on a closed medical test, create simple financial reports and even write code for training machine-learning models.
Nevertheless, the model can make significant mistakes. It does not understand the meaning of language but relies on pure statistics: when one of the scientists asked GPT-3 an open question, the algorithm did not know how to answer and responded: "I have no time to explain why not".
As with other applications of AI algorithms, GPT-3 has learned (and will continue to learn), unfortunately, many discriminatory, racist and sexist word combinations, e.g. associations involving the words 'black' or 'woman'. Much of the text found on the Internet is particularly prone to such biased pairings. According to TechCrunch, 'Islam' (the name of the religion) is over-represented in connection with the word 'terrorism', while the word 'atheism' is often juxtaposed with the word 'cool'.
According to TechCrunch, many media companies already write texts today using tools similar to GPT-3. Until now, however, there has been no solution as powerful as GPT-3. In the view of many commentators, AI applied to natural language has now overtaken AI applied to vision.
After the premiere of GPT-3, many commentators predicted that the solution will become widely available much faster than one might think. However, it is difficult to imagine the content created by the model being distributed without human supervision. Given its capabilities, there is a danger that media content created by GPT-3 would contain far more prejudices and would fuel stereotypes, clichés and thought patterns. It could also significantly increase the circulation of 'fake news'.
There will certainly be a great deal of pressure from media companies to use GPT-3 to produce lightweight content such as gossip, sports or entertainment. GPT-3 poses a real threat to the so-called 'media workers' whom, interestingly enough, we no longer call beginner journalists.
OpenAI claims that it is working intensively on filters for the model that would reduce these unwanted phenomena. Still, the sobering reflection is that any AI-based product is only as good as the data set it learns from. GPT-3 learns from human-generated content, and people generate stereotypes and prejudices. As the philosopher put it: the limits of our language are the limits of our world.
For the time being, this blog is still written by us, not by an algorithm :)