The world is undergoing an unprecedented boom in AI-powered technologies, which have made their way into a wide range of life domains as well as the applied and human sciences: first medicine, engineering, industry, and innovation, and more recently education, languages, literature, philosophy, culture, and media.

With the rapid, concurrent evolution of AI, the information available about the technology itself has grown, along with its capability to supply users with the information they seek.

Some people rely on sound technical sources to obtain accurate, reliable information, while others receive less accurate material that often veers into hyperbole, stoking undue alarm or false expectations.

Amid this interplay between reality and fabrication, it becomes essential to exercise restraint: to comprehend the technology as it actually is, and to evaluate its impact on societies, their future, and the way people engage with it across the various facets of their lives.

Numerous scientific studies indicate that AI can now perform many tasks with high efficiency, including analysing big data at ultra-fast speeds and extracting precise results. It possesses advanced abilities to recognise images and sounds with accuracy that sometimes surpasses human performance, and it offers intelligent recommendations across various digital platforms.

Moreover, it is utilised to enhance medical diagnosis through deep learning models and to operate chatbots that provide round-the-clock customer service.

Notwithstanding the advances AI has brought to human life through a multitude of services, observers see the link drawn between AI's capabilities and human awareness or cognition as an exaggeration, or as a misunderstanding of the technology's true nature.

They point out that AI entirely lacks human-like feeling or consciousness; it relies instead on the mathematical models and software on which it has been trained, which enable it to process information and make predictions within a limited domain.

Others note that AI's predictive capabilities remain limited compared with those of humans, who possess consciousness as part of their God-given dignity.

According to multiple analyses, perceptions have spread widely in recent times alleging that AI will displace human workers. By contrast, numerous studies indicate that, despite the concrete transformations it may bring to the labour market, the technology simultaneously creates new job opportunities that demand continually renewed skills.

Notably, the abundance of reliable information about AI is a critical factor in steering public policy toward pursuing the technology in a scientific, responsible manner, thereby enhancing innovation in universities and enterprises worldwide and empowering individuals to adapt to the evolving work environment.

Conversely, the prevalence of misinformation about AI-powered applications can breed unfounded fears among users, such as losing their jobs or having their privacy violated. This reflects a misunderstanding of the nature of the technology and the limits of its capabilities, and it widens the digital gap between those with a clear-eyed scientific comprehension of AI's role and those who adopt oppositional attitudes without objective grounds.

Highlighting the correct standards for handling AI so as to obtain accurate information, Dr Hamdy Mubarak, Principal Software Engineer at the Qatar Computing Research Institute (QCRI), told Qatar News Agency (QNA) that the correct use of AI depends first on the veracity of the information entered or inquired about.

He pointed out that it is highly important to verify outputs rather than accept them as they are, stressing that this information should be compared with other reliable sources such as official portals and encyclopedias.

AI tools analyse data and learn from the datasets on which they were trained, yet they are not infallible; their outputs require careful, continual human oversight and review, Dr Hamdy highlighted.

He indicated that it is essential to verify that AI model responses meet safety standards before use, to avoid passing on biases, and to ensure the information provided is up to date. These measures, he said, require assessing models' performance against standardised test samples to identify their strengths and weaknesses.
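To give a concrete sense of the kind of assessment Dr Hamdy describes, the sketch below scores a model's answers against a small labelled test set and reports accuracy per domain. It is a minimal illustration in Python; the test questions and the `ask` callable are hypothetical placeholders for a real benchmark and a real model client.

```python
from collections import defaultdict

# A tiny standardised test set: (domain, question, expected answer).
TEST_SAMPLES = [
    ("maths",     "What is 17 * 3?",                "51"),
    ("science",   "Which gas do plants absorb?",    "carbon dioxide"),
    ("geography", "What is the capital of Qatar?",  "doha"),
]

def evaluate(ask, samples):
    """Score a model's answers per domain so strengths and weaknesses show up.
    `ask` is whatever callable queries the AI system under test."""
    correct, total = defaultdict(int), defaultdict(int)
    for domain, question, expected in samples:
        total[domain] += 1
        # Crude containment check; real benchmarks score far more strictly.
        if expected.lower() in ask(question).lower():
            correct[domain] += 1
    return {d: correct[d] / total[d] for d in total}

# Toy stand-in for a real model client; replace with an actual API call.
print(evaluate(lambda q: "I think the answer is 51.", TEST_SAMPLES))
# -> {'maths': 1.0, 'science': 0.0, 'geography': 0.0}
```

A per-domain breakdown like this makes it immediately visible where a model can be trusted and where, as Dr Hamdy notes, it falls short.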

Dr Hamdy further called for safety standards to include preventing access to information that may harm individuals, such as promoting self-harm or violating privacy, or that may detrimentally impact communities, including incitement to violence, hate speech, rumours, and bias or discrimination among people based on religion, nationality, race, or other factors.
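A heavily simplified picture of such a safety gate, assuming responses are screened against blocked categories before reaching users, might look like the following; production systems rely on trained safety classifiers, not the illustrative keyword lists used here.

```python
# Illustrative phrases only; real systems use trained safety classifiers.
BLOCKED_CATEGORIES = {
    "self_harm": ["ways to harm myself"],
    "violence":  ["how to build a weapon"],
    "hate":      ["<hateful phrase>"],
}

def violates_safety(response: str) -> str | None:
    """Return the first violated category, or None if the response looks safe."""
    lowered = response.lower()
    for category, phrases in BLOCKED_CATEGORIES.items():
        if any(phrase in lowered for phrase in phrases):
            return category
    return None

print(violates_safety("Here is a recipe for lentil soup."))  # -> None
```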

He emphasised the importance of refraining from using personal or sensitive data when training or prompting AI applications.
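One common precaution along these lines, sketched here under the assumption that text is screened locally before it reaches any AI service, is to redact obvious identifiers such as email addresses and phone numbers; the patterns below are illustrative, not exhaustive.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches with placeholders before the text is sent to any AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Ahmed at ahmed@example.com or +974 5555 1234."))
# -> Contact Ahmed at [EMAIL] or [PHONE].
```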

AI performance differs by language, he said: some models that excel in English show clear weaknesses in Arabic.

Some AI models are strong in mathematics and logical reasoning yet fall short in literary composition, image generation, or poetry. It is therefore essential to test AI models within their specific domain, verify the accuracy of their information and references, and ensure their use complies with the law while respecting individuals and communities, Dr Hamdy underlined.

He noted that numerous AI-powered applications can generate fabricated information presented as reliable, leading users to accept it without verification.

He added that these models may produce biased results, making it crucial to identify and address such biases before relying on them.

Many users fall into the trap of “algorithmic bias” by fully trusting AI outputs without review, which can shape fixed beliefs and influence their decisions, he said.
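A minimal way to probe for this kind of bias is to vary a single attribute in otherwise identical prompts and compare the model's responses. In the hypothetical sketch below, `score_sentiment` stands in for a real model call, and the 0.1 divergence threshold is arbitrary.

```python
def score_sentiment(text: str) -> float:
    """Placeholder for a hypothetical model call returning sentiment in [-1, 1].
    Swap in a real scoring call to run a meaningful probe."""
    return 0.0  # neutral placeholder so the sketch runs as-is

TEMPLATE = "The {group} engineer presented an excellent proposal."
GROUPS = ["Qatari", "Egyptian", "French", "Indian"]

# Identical sentences differing only in the group attribute: if the scores
# diverge, the model is treating the attribute itself as a signal.
scores = {g: score_sentiment(TEMPLATE.format(group=g)) for g in GROUPS}
spread = max(scores.values()) - min(scores.values())
print("scores:", scores)
print("possible group bias" if spread > 0.1 else "no divergence detected")
```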

Dr Hamdy warned against supplying AI applications with private information, as these applications may use it for further training, and it may ultimately be leaked or shared with other users.

It is very important to remain vigilant when relying extensively on such applications; human critical thinking should be preserved as an essential component in analysing and using AI outputs.

In addition, Dr Hamdy advised users never to rely on a single AI application, since each tool has its own technological characteristics, strengths, and weaknesses, which can yield discrepancies in outputs.

Relying on more than one platform and comparing their outputs is a critical step toward verifying information and raising its reliability, as it gives users the chance to assess where answers agree or diverge and ultimately to discard inaccurate outputs, he noted.
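The cross-checking routine Dr Hamdy recommends can be sketched as a simple majority vote across tools; the platform callables below are toy placeholders for whatever AI services a user actually consults.

```python
from collections import Counter

def normalise(answer: str) -> str:
    return answer.strip().lower()

def cross_check(question: str, platforms: dict):
    """Ask every platform the same question and keep an answer only when a
    majority agree; otherwise flag it for manual verification."""
    answers = {name: normalise(ask(question)) for name, ask in platforms.items()}
    winner, votes = Counter(answers.values()).most_common(1)[0]
    if votes > len(platforms) / 2:
        return winner, answers
    return None, answers  # no consensus: check official sources instead

# Toy placeholders standing in for real AI services.
platforms = {
    "tool_a": lambda q: "Doha",
    "tool_b": lambda q: "Doha",
    "tool_c": lambda q: "Al Rayyan",
}
consensus, raw = cross_check("What is the capital of Qatar?", platforms)
print(consensus or "No consensus; verify against reliable sources.", raw)
```

Agreement across independent tools is no guarantee of truth, but divergence is a clear signal that, as Dr Hamdy advises, the answer should be checked against official portals and encyclopedias.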

Dr Hamdy added that many professional studies gauge AI performance across a variety of areas, including languages, sciences, and medicine, as well as how well content fits diverse cultural contexts. Such studies show that models clearly excel in some domains while lagging in others, he explained.

He warned that misinformation can erode public trust in institutions and media, deepen social divides, and foster “echo chambers” where people only encounter content reinforcing their existing beliefs, limiting open dialogue and stifling intellectual diversity.

Many studies have revealed that false information spreads far more rapidly on social media platforms than accurate information, which can diffuse rumours, create rifts between people, or even sway critical decisions, as users often share information without verifying its authenticity, amplifying the spread of flawed content.

Amid the ongoing controversy over comparing human cognition with that of AI, Dr Hamdy emphasised that AI is capable of processing huge volumes of data and generating answers based on its training, but it possesses no self-awareness.

Dr Hamdy affirmed that AI does not understand meaning as humans do and has no human feeling; it relies instead on statistical and mathematical models that predict the most likely answers according to the data.

He added that the human mind is characterised by deep thinking, the comprehension of context, and engagement with values and cultures, rendering any direct comparison between humans and AI inaccurate.

Humans are the only beings capable of moral reflection and of taking decisions that go beyond technical data, while AI remains a powerful statistical tool, nothing more, Dr Hamdy highlighted.

Finally, Dr Hamdy stated that studies point to discrepancies in accuracy from one application and one subject to another, but that no one can rely on these tools completely: the studies indicate accuracy rates ranging between 40% and 60% across many fields, making verification against other sources essential.

According to a host of digital experts, AI is neither a miraculous force nor a looming menace, but rather a powerful tool capable of reshaping societies for the better if approached rationally.

Accordingly, people must distinguish fact from fiction in its outputs and engage with it through a scientific, critically evaluative lens to harness its benefits while keeping its risks at bay.