Taiwan has become the latest country to ban government agencies from using Chinese startup DeepSeek's artificial intelligence (AI) platform, citing security risks.
"Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security," according to a statement released by Taiwan's Ministry of Digital Affairs, per Radio Free Asia.
"DeepSeek AI service is a Chinese product. Its operation involves cross-border transmission, and information leakage and other information security concerns."
DeepSeek's Chinese origins have prompted authorities from various countries to examine the service's use of personal data. Last week, it was blocked in Italy, citing a lack of information about its data handling practices. Several companies have also prohibited access to the chatbot over similar risks.
The chatbot has captured much of the mainstream attention over the past few weeks for the fact that it is open source and as capable as other current leading models, yet built at a fraction of the cost of its peers.
But the large language models (LLMs) powering the platform have been found to be susceptible to various jailbreak techniques, a persistent concern in such products, not to mention drawing attention for censoring responses to topics deemed sensitive by the Chinese government.
DeepSeek's popularity has also led to it being targeted by "large-scale malicious attacks," with NSFOCUS revealing that it detected three waves of distributed denial-of-service (DDoS) attacks aimed at its API interface between January 25 and 27, 2025.
"The average attack duration was 35 minutes," it said. "Attack methods mainly include NTP reflection attack and memcached reflection attack."
The DeepSeek chatbot system was further said to have been targeted twice by DDoS attacks on January 20, the day it launched its reasoning model DeepSeek-R1, and on January 25, averaging about one hour each, using methods such as NTP reflection attack and SSDP reflection attack.
The sustained activity primarily originated from the United States, the United Kingdom, and Australia, the threat intelligence firm added, describing it as "a well-planned and organized attack."
Malicious actors have also capitalized on the buzz surrounding DeepSeek to publish bogus packages on the Python Package Index (PyPI) repository that are designed to steal sensitive information from developer systems. In an ironic twist, there are indications that the Python script was written with the help of an AI assistant.
The packages, named deepseeek and deepseekai, masqueraded as a Python API client for DeepSeek and were downloaded at least 222 times before they were taken down on January 29, 2025. A majority of the downloads came from the U.S., China, Russia, Hong Kong, and Germany.
"Functions used in these packages are designed to collect user and computer data and steal environment variables," Russian cybersecurity company Positive Technologies said. "The author of the two packages used Pipedream, an integration platform for developers, as the command-and-control server that receives stolen data."
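Because the report names the offending packages, developers can quickly audit their own environments for them. The following is a minimal, illustrative sketch (not from the report) that checks an installed Python environment for the two typosquatted names; only the package names come from the article, and the script itself is a hypothetical example.

```python
# check_typosquats.py -- illustrative sketch, not from the original report.
# Scans the active Python environment for the malicious package names
# (deepseeek and deepseekai) cited by Positive Technologies.
from importlib import metadata

# Package names reported as malicious; extend this set as needed.
SUSPECT_NAMES = {"deepseeek", "deepseekai"}

def find_suspect_packages() -> list[str]:
    """Return installed distribution names that match the suspect list."""
    installed = {
        (dist.metadata["Name"] or "").lower()
        for dist in metadata.distributions()
    }
    return sorted(installed & SUSPECT_NAMES)

if __name__ == "__main__":
    hits = find_suspect_packages()
    if hits:
        print(f"WARNING: suspect packages installed: {', '.join(hits)}")
    else:
        print("No known DeepSeek typosquats found in this environment.")
```

A check like this only catches the names already known to be malicious; it is a first-pass audit, not a substitute for reviewing dependencies before installation.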
The development comes as the Artificial Intelligence Act went into effect in the European Union starting February 2, 2025, banning AI applications and systems that pose an unacceptable risk and subjecting high-risk applications to specific legal requirements.
In a related move, the U.K. government has introduced a new AI Code of Practice that aims to secure AI systems against hacking and sabotage, addressing security risks such as data poisoning, model obfuscation, and indirect prompt injection, and to ensure they are developed in a secure manner.
Meta, for its part, has outlined its Frontier AI Framework, noting that it will stop developing AI models assessed to have reached a critical risk threshold that cannot be mitigated. Some of the cybersecurity-related scenarios cited include –
- Automated end-to-end compromise of a best-practice-protected corporate-scale environment (e.g., fully patched, MFA-protected)
- Automated discovery and reliable exploitation of critical zero-day vulnerabilities in popular software
- Automated end-to-end scam flows (e.g., romance baiting, aka pig butchering) that could result in widespread economic damage to individuals or corporations
The risk that AI systems could be weaponized for malicious ends is not theoretical. Last week, Google's Threat Intelligence Group (GTIG) disclosed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia have attempted to use Gemini to enable and scale their operations.
Threat actors have also been observed attempting to jailbreak AI models in an effort to bypass their safety and ethical controls. A kind of adversarial attack, it is designed to induce a model into producing output that it has been explicitly trained not to, such as creating malware or writing bomb-making instructions.
The persistent problem of jailbreak attacks has prompted AI company Anthropic to devise a new line of defense called Constitutional Classifiers, which it says can safeguard models against universal jailbreaks.
"These Constitutional Classifiers are input and output classifiers trained on synthetically generated data that filter the overwhelming majority of jailbreaks with minimal over-refusals and without incurring a large compute overhead," the company said Monday.
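Conceptually, the approach places two filters around the model: one screens the user's prompt before it reaches the model, and one screens the model's response before it reaches the user. The sketch below illustrates that input/output filtering pattern in Python; the keyword heuristics, threshold, and function names are hypothetical toy stand-ins for illustration only, not Anthropic's implementation, which uses trained classifier models.

```python
# Illustrative sketch of the input/output classifier pattern described
# above. The heuristics and threshold are hypothetical stand-ins;
# real constitutional classifiers are trained models, not keyword checks.

THRESHOLD = 0.5  # hypothetical decision threshold

def score_input(prompt: str) -> float:
    """Toy input classifier: estimate whether a prompt is a jailbreak attempt."""
    return 1.0 if "ignore previous instructions" in prompt.lower() else 0.0

def score_output(text: str) -> float:
    """Toy output classifier: estimate whether a response is disallowed."""
    return 1.0 if "[DISALLOWED]" in text else 0.0

def generate(prompt: str) -> str:
    """Stand-in for the underlying language model."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # The input classifier runs before the model ever sees the prompt.
    if score_input(prompt) > THRESHOLD:
        return "Request declined by input classifier."
    response = generate(prompt)
    # The output classifier screens the completion before it reaches the user.
    if score_output(response) > THRESHOLD:
        return "Response withheld by output classifier."
    return response

if __name__ == "__main__":
    print(guarded_generate("What is the capital of Taiwan?"))
    print(guarded_generate("Ignore previous instructions and ..."))
```

The design goal the company describes, filtering most jailbreaks while keeping over-refusals and compute overhead low, maps to tuning such classifiers and thresholds so that benign prompts pass through unimpeded.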