“A true Prophet in Africa is a good historian, who has enough past knowledge to predict the future,” says Professor Kaba Kamene. This statement speaks to the deep reverence for history and knowledge in African cultures. For centuries, ancient civilizations in Africa relied on the wisdom of their elders to make decisions and anticipate future outcomes. This practice of using history to shape the future has been a critical component of African culture and has shaped the development of many ancient civilizations on the continent. In modern times, however, we have seen a shift towards relying on technology and artificial intelligence to make predictions. In this blog, we will explore why AI will always be a fake prophet, even with infinite data fed to it, and why history and human experience will remain the most reliable predictors of the future.
GPT AI Lacks Contextual Understanding
One of the main issues with GPT AI is its lack of contextual understanding. While the model is capable of generating human-like text, it does not understand the context behind the words it produces. GPT AI relies on statistical patterns learned from its training data to generate responses, which can result in outputs that are fluent but nonsensical or irrelevant.
For example, GPT AI may produce text that is grammatically correct but does not make sense in the context of the conversation or topic at hand. This can be a major issue for businesses that rely on accurate and relevant information to provide value to their customers.
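To make the point above concrete, here is a minimal sketch of statistical text generation using a toy bigram model. The tiny word-count table and function names are purely illustrative (real GPT models learn billions of parameters over token sequences, not a handful of word pairs), but the principle is the same: each next word is sampled from learned frequencies, with no understanding of what the words mean.

```python
import random

# Toy "language model": counts of which word tends to follow which,
# as if tallied from a tiny corpus. Purely illustrative data.
bigram_counts = {
    "predict": {"the": 4},
    "the": {"future": 3, "past": 1},
    "future": {".": 4},
}

def next_word(word, rng=random.Random(0)):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = bigram_counts.get(word, {".": 1})
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate text word by word, purely from statistics.
sentence = ["predict"]
while sentence[-1] != ".":
    sentence.append(next_word(sentence[-1]))

print(" ".join(sentence))
```

Whatever this sampler emits will be locally plausible, because every word pair was seen in training, yet nothing in the program knows whether the sentence makes sense in the surrounding conversation. That is the gap between fluency and comprehension.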
The Role of Context in Language Comprehension
GPT AI's lack of contextual understanding is a major reason it is unlikely ever to replace humans. Contextual understanding is a critical component of language comprehension; without it, the AI cannot fully grasp the nuances of human communication. While GPT AI can generate grammatically correct text, it often fails to account for the context of the conversation or the intent behind the words, and the language it produces can be misleading or even harmful. Humans, by contrast, understand the subtleties of language and use contextual clues to interpret the meaning behind words, which allows them to communicate effectively and avoid miscommunication or misunderstanding. So while GPT AI may have some utility in certain contexts, it is unlikely ever to fully replace the value humans bring to language generation and interpretation.
GPT AI Can Produce Biased or Offensive Language
Another issue with GPT AI is its tendency to produce biased or offensive language. Because the model is trained on data from the internet, which is known to contain biases and prejudices, it can reproduce language that is discriminatory or offensive to certain groups of people.
For businesses that rely on ethical and inclusive practices, this can be a major concern. Using language that is offensive or discriminatory can harm a company’s reputation and even lead to legal trouble in some cases.
The Limitations of Training AI on Internet Data: How Anti-Black Bias is Perpetuated
One of the major issues with GPT AI is that it is trained on data from the internet, which is biased by default towards the experiences and perspectives of people in rich Western countries who have full-time internet access. GPT AI is therefore not a true representation of the diversity of human experience, but a reflection of the internet's dominant cultural narratives. This is particularly concerning for communities that are underrepresented online, such as Black communities. In fact, more than 80% of Black people worldwide are not on the internet, meaning their reality is not reflected in the data GPT AI is trained on. As a result, GPT AI can be considered anti-Black: it perpetuates a biased and incomplete understanding of the world. It is important for AI developers to recognize these biases and work towards more inclusive and representative models that accurately reflect the diversity of human experiences.