Wherever there is conflict, propaganda is never far behind. Travel back to 515 BCE and read the Behistun Inscription, the autobiography of the Persian king Darius the Great, which recounts his rise to power. More recently, look at how different newspapers report on wars, mindful of the adage that "the first casualty of war is the truth."
While these forms of communication can shape people's beliefs, they are limited in how far they can scale. Messages and propaganda have historically lost their strength after traveling a certain distance. Social media and the internet, by contrast, face few physical restrictions beyond a person's ability to get online. Add AI to the mix, and there is nothing to stop that reach from scaling further still.
This article examines what AI-powered information manipulation and deception mean for society and for organizations.
The rise of the echo chamber
According to the Pew Research Center, roughly one in five Americans get their news from social media. Europe has seen an 11% rise in people using social media platforms to access news. AI algorithms are at the heart of this shift in behavior. However, they are under no obligation to present both sides of a story, as trained journalists are and as media regulators require. With fewer restrictions, social media platforms can focus on serving up the content their users love, want, and respond to.
This emphasis on retaining eyeballs can lead to digital echo chambers and potentially polarized viewpoints. For example, people can block opinions they disagree with, while algorithms automatically tune users' feeds, even monitoring scrolling speed to maximize consumption. When consumers see only content they agree with, they reach a consensus with what the AI shows them, but not with the wider world.
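The engagement-first ranking behind such feeds can be caricatured in a few lines of Python. This is a deliberately simplified, hypothetical sketch (the `Post` fields and the affinity weighting are invented for illustration), not any platform's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    engagement_score: float  # hypothetical blend of likes, shares, dwell time

def rank_feed(posts, user_topic_affinity):
    """Toy engagement-first ranking: boost posts on topics the user
    already interacts with, so the feed narrows over time."""
    def score(post):
        affinity = user_topic_affinity.get(post.topic, 0.0)
        return post.engagement_score * (1.0 + affinity)
    return sorted(posts, key=score, reverse=True)

posts = [Post("politics_a", 0.9), Post("politics_b", 0.9), Post("sports", 0.5)]
# A user who mostly clicks on "politics_a" content:
feed = rank_feed(posts, {"politics_a": 2.0})
print([p.topic for p in feed])  # → ['politics_a', 'politics_b', 'sports']
```

Because high-affinity topics are boosted on every pass, the feed drifts toward what the user already engages with, which is the echo-chamber dynamic in miniature.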
Moreover, more of this content is now generated synthetically using AI tools. This includes the more than 1,150 unreliable AI-generated news sites recently identified by NewsGuard, a company that specializes in rating the reliability of information. With so few checks on what AI can produce, long-standing political processes stand to be influenced.
How AI is deployed for deception
It is fair to say that we humans are unpredictable. Our many biases and countless contradictions play out constantly in each of our brains, where billions of neurons form new connections that shape our perceptions and, in turn, our opinions. When malicious actors add AI to this potent mix, it leads to events such as:
- Deepfake videos spreading during the US election: AI tools allow cybercriminals to create fake footage of people moving and talking from nothing more than text prompts. Their ease of use and speed mean that realistic footage can be produced with little skill or expense. This democratization of capability threatens democratic processes, as seen in the run-up to the recent US election, where Microsoft highlighted activity by China- and Russia-affiliated threat actors that integrated generative AI into their US election influence efforts.
- Voice cloning to misrepresent political figures: Attackers can now use AI to clone someone's voice by processing just a few seconds of their speech. That is what happened to a Slovak politician in 2023: a fake audio recording circulated online in which Michal Šimečka appeared to discuss with a journalist how to rig the upcoming election. While the recording was soon shown to be fake, it spread in the days just before polls opened. Some voters may have cast their ballots believing the AI-generated audio was real.
- LLMs faking public sentiment: Adversaries can now communicate at any scale, in as many languages as their chosen LLM supports. Back in 2020, an early LLM, GPT-3, was used to write thousands of letters to US legislators, advocating a mix of issues from the left and right of the political spectrum. About 35,000 letters were sent, a mixture of human-written and AI-written. Legislators' response rates "were not statistically different" on three of the issues raised.
AI's influence on democratic processes
For now, it is still possible to spot many AI-powered deceptions, whether by a glitchy frame in a video or a misused word in a speech. However, as the technology advances, it will become harder, perhaps even impossible, to separate fact from fiction.
Fact-checkers may be able to mitigate the fallout from fake social media posts. Sites such as Snopes can continue to debunk conspiracy theories. However, there is no way to ensure that the corrections reach everyone who saw the original posts. It is also practically impossible to trace fake material back to its original source, given the number of distribution channels available.
The pace of evolution
Seeing (or hearing) is believing. I'll believe it when I see it. Show me, don't tell me. All these phrases rest on humans' evolutionary understanding of the world: namely, that we have learned to trust our own eyes and ears.
These instincts developed over hundreds of thousands, even millions, of years, while ChatGPT was only released publicly in November 2022. Our brains cannot adapt at the speed of AI, so as people can no longer trust what is in front of them, it is time to train everyone's eyes, ears, and minds.
Otherwise, organizations are left wide open to attack. After all, work is often where people spend the most time at a computer. This means building workforce awareness, knowledge, and skepticism in the face of content designed to provoke action, whether that content carries political messaging during an election or asks an employee to bypass procedures and make a payment to an unverified bank account.
It also means making people aware of the many ways malicious actors play on our natural biases, emotions, and instinct to trust what someone tells us. These weaknesses are exploited in several social engineering attacks, including phishing (the "number one internet crime type," according to the FBI).
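Some of the red flags that security-awareness training teaches people to look for can be sketched as a toy checker. The lookalike domain and the keyword list below are invented examples, and real phishing detection relies on far richer signals (sender reputation, URL analysis, attachment scanning), but the sketch shows the kind of cues worth pausing over:

```python
import re

# Hypothetical, oversimplified indicators for illustration only.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "payment"}

def phishing_indicators(sender_domain: str, expected_domain: str, body: str) -> list:
    """Return a list of red flags found in a message."""
    flags = []
    if sender_domain.lower() != expected_domain.lower():
        flags.append(f"sender domain '{sender_domain}' != expected '{expected_domain}'")
    words = set(re.findall(r"[a-z']+", body.lower()))
    hits = sorted(words & URGENCY_WORDS)
    if hits:
        flags.append(f"urgency/pressure language: {hits}")
    return flags

# Note the "1" in place of "l" in the sender domain, a classic lookalike trick:
flags = phishing_indicators(
    "examp1e-corp.com", "example-corp.com",
    "URGENT: verify the payment details immediately or the account is suspended.",
)
for f in flags:
    print("-", f)
```

A message that trips both checks, a lookalike sender domain plus pressure language, is exactly the kind that awareness training encourages employees to pause on and verify through another channel.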
And it means supporting people in learning when to pause, think, and challenge what they see online. One way is to simulate an AI-powered attack, so they gain first-hand experience of how it feels and what to look out for. People make up society; they just need help protecting themselves, their organizations, and their communities from AI-powered deception.