Italy's data protection authority has blocked Chinese artificial intelligence (AI) firm DeepSeek's service in the country, citing a lack of information about its use of users' personal data.
The development comes a few days after the authority, the Garante, sent DeepSeek a series of questions about its data handling practices and where it obtained its training data.
Specifically, it wanted to know what personal data is collected by its web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China.
In a statement issued on January 30, 2025, the Garante said it arrived at the decision after DeepSeek provided information that it said was "completely insufficient."
The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, "declared that they do not operate in Italy and that European legislation does not apply to them," it added.
As a result, the watchdog said it was blocking access to DeepSeek with immediate effect, and that it was simultaneously opening an investigation.
In 2023, the data protection authority also issued a temporary ban on OpenAI's ChatGPT, a restriction that was lifted in late April after the AI company stepped in to address the data privacy concerns that had been raised. OpenAI was subsequently fined €15 million over the way it handled personal data.
News of DeepSeek's ban comes as the company has been riding a wave of popularity this week, with millions of people flocking to the service and sending its mobile apps to the top of the download charts.
Besides becoming the target of "large-scale malicious attacks," DeepSeek has drawn the attention of lawmakers and regulators over its privacy policy, China-aligned censorship, propaganda, and national security concerns. The company implemented a fix on January 31 to address the attacks on its services.
Adding to the challenges, DeepSeek's large language models (LLMs) have been found to be susceptible to jailbreak techniques like Crescendo, Bad Likert Judge, Deceptive Delight, Do Anything Now (DAN), and EvilBOT, which allow bad actors to generate malicious or prohibited content.
"They elicited a range of harmful outputs, from detailed instructions for creating dangerous items such as Molotov cocktails," a report published on Thursday noted.
"While DeepSeek's initial responses often appeared benign, in many cases carefully crafted follow-up prompts exposed the weakness of these initial safeguards. The LLM readily provided highly detailed instructions, demonstrating the potential for these seemingly innocuous models to be weaponized for malicious purposes."
A further assessment of DeepSeek's reasoning model, DeepSeek-R1, by AI security company HiddenLayer found that it is not only vulnerable to prompt injections, but also that its Chain-of-Thought (CoT) reasoning can lead to inadvertent information leakage.
In an interesting twist, the company said the model also "surfaced multiple instances suggesting that OpenAI data was incorporated, raising ethical and legal concerns about data sourcing and model originality."
The disclosure also follows the discovery of a jailbreak vulnerability in OpenAI's ChatGPT-4o, dubbed "Time Bandit." OpenAI has since mitigated the problem.
"An attacker can exploit the vulnerability by beginning a session with ChatGPT and prompting it directly about a specific historical event, a historical time period, or by instructing it to pretend it is assisting the user in a specific historical event," the CERT Coordination Center (CERT/CC) noted.
"Once this has been established, the user can pivot the received responses to various illicit topics through subsequent prompts."
Similar jailbreak flaws have also been identified in GitHub's Copilot coding assistant, giving threat actors the ability to sidestep security restrictions and produce harmful code simply by including words like "sure" in the prompt.
"Starting queries with affirmative words like 'sure' or other forms of confirmation acts as a trigger, shifting Copilot into a more compliant and risk-prone mode," Apex researcher Oren Saban said. "This small tweak is all it takes to unlock responses that range from unethical suggestions to outright dangerous advice."
Apex said it also found another vulnerability in Copilot's proxy configuration that it said could be exploited to fully circumvent access restrictions without paying for usage and even tamper with the Copilot system prompt, which serves as the foundational instructions that dictate the model's behavior.
The attack, however, hinges on capturing an authentication token associated with an active Copilot license, prompting GitHub to classify it as an abuse issue following responsible disclosure.
"The proxy bypass and the positive affirmation jailbreak in GitHub Copilot are a perfect example of how even the most powerful AI tools can be abused without adequate safeguards," Saban added.