A high-severity security flaw has been disclosed in Meta's Llama large language model (LLM) framework that, if successfully exploited, could allow an attacker to execute arbitrary code on the llama-stack inference server.
The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0. Supply chain security firm Snyk, on the other hand, has rated it a critical 9.3.
"Affected versions are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized," Oligo Security researcher Avi Lumelsky said in an analysis published earlier this week.
The shortcoming resides in Llama Stack, which defines a set of API interfaces for artificial intelligence (AI) application development, including the use of Meta's own Llama models.
Specifically, it concerns a remote code execution flaw in the reference Python Inference API implementation, which was found to automatically deserialize Python objects using pickle, a format long considered risky due to the possibility of arbitrary code execution when untrusted or malicious data is loaded with the library.
"In scenarios where the ZeroMQ socket is exposed over the network, attackers could exploit this vulnerability by sending crafted malicious objects to the socket," Lumelsky said. "Since recv_pyobj will unpickle these objects, an attacker could achieve arbitrary code execution (RCE) on the host machine."
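The danger stems from the fact that pyzmq's recv_pyobj() is a thin wrapper around pickle.loads(), so whoever can reach the socket controls what gets deserialized. A minimal sketch of the unsafe pattern, assuming a REP socket bound to a network interface (this is an illustration, not the actual llama-stack code):

```python
# Illustration of the risky pattern: recv_pyobj() runs pickle.loads() on whatever
# bytes arrive on the socket, so a network peer controls the deserialized object.
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://0.0.0.0:5555")  # socket exposed over the network

while True:
    request = socket.recv_pyobj()   # pickle.loads() on untrusted input -> potential RCE
    socket.send_pyobj({"ok": True})
```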
Following responsible disclosure on September 24, 2024, the issue was addressed by Meta on October 10 in version 0.0.41. It has also been remediated in pyzmq, a Python library that provides access to the ZeroMQ messaging library.
In a consultation issued by Meta, a company – Note He corrected the risk of the remote code associated with the use of the saline as a serialization format to communicate with the JSON format.
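In pyzmq terms, that change amounts to exchanging plain data rather than arbitrary Python objects. A sketch of the safer pattern, again illustrative rather than Meta's actual patch:

```python
# Illustrative only: JSON carries plain data (dicts, lists, strings, numbers),
# not executable object state, so a malicious peer cannot smuggle in code the
# way a crafted pickle payload can.
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://0.0.0.0:5555")

while True:
    request = socket.recv_json()   # json.loads() under the hood
    socket.send_json({"status": "ok", "echo": request})
```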
This is not the first time such deserialization flaws have been discovered in AI frameworks. In August 2024, Oligo detailed a "shadow vulnerability" in TensorFlow's Keras framework, a bypass for CVE-2024-3660 (CVSS score: 9.8) that could result in arbitrary code execution owing to the use of the unsafe marshal module.
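marshal is dangerous for much the same reason as pickle: it can round-trip compiled Python code objects, which can then be executed on load. A toy illustration of the general hazard (unrelated to the Keras internals themselves):

```python
# Toy example of why loading marshal data from untrusted sources is unsafe:
# marshal can serialize compiled code objects, and exec() will happily run them.
import marshal

payload = marshal.dumps(compile("print('code executed on load')", "<attacker>", "exec"))

# On the receiving side, reconstructing and running the object executes the code.
exec(marshal.loads(payload))
```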
The development comes as security researcher Benjamin Flesch disclosed a high-severity flaw in OpenAI's ChatGPT crawler that could be weaponized to launch a distributed denial-of-service (DDoS) attack against arbitrary websites.
The issue stems from incorrect handling of HTTP POST requests to the "chatgpt[.]com/backend-api/attributions" API, which is designed to accept a list of URLs as input but neither checks whether the same URL appears multiple times in the list nor enforces a limit on the number of hyperlinks that can be passed as input.
This opens up a scenario in which a bad actor could transmit thousands of hyperlinks within a single HTTP request, causing OpenAI to send all of those requests to the victim site without attempting to limit the number of connections or prevent duplicate requests from being issued.
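In practical terms, the amplification comes from one small request fanning out into many crawler fetches. A hedged sketch of the request shape (the endpoint has since been patched, and the "urls" field name and query-string variation here are assumptions for illustration):

```python
# Sketch of the amplification shape only: one modest POST body can reference the
# same victim host thousands of times with trivial variations, each of which the
# crawler would previously have fetched separately.
import json

victim = "https://victim.example/"
body = {"urls": [f"{victim}?v={i}" for i in range(5000)]}  # field name is assumed

print(f"{len(json.dumps(body)) / 1024:.0f} KB of JSON -> {len(body['urls'])} outbound fetches")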
Depending on the number of hyperlinks transmitted to OpenAI, this provides a significant amplification factor for potential DDoS attacks, effectively overwhelming the target site's resources. OpenAI has since addressed the issue.
"The ChatGPT crawler can be triggered to DDoS a victim website via an HTTP request to an unrelated ChatGPT API," Flesch said. "This defect in OpenAI's software will spawn a DDoS attack against an unsuspecting victim website, utilizing multiple Microsoft Azure IP address ranges on which the ChatGPT crawler runs."
The disclosure also follows a report from Truffle Security that popular AI-based coding assistants "recommend" hard-coding API keys and passwords, risky advice that could mislead inexperienced programmers into introducing security weaknesses in their projects.
"LLMs are helping perpetuate it, likely because they were trained on all the insecure coding practices," security researcher Joe Leon said.
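The pattern in question, and the usual remedy, can be summed up in a few lines (the variable and credential names below are purely illustrative):

```python
import os

# What AI assistants too often suggest: a secret embedded in source and committed to version control.
API_KEY = "sk-live-1234567890abcdef"  # hard-coded credential -- avoid

# A safer baseline: read the secret from the environment (or a secrets manager) at runtime.
api_key = os.environ["PAYMENT_API_KEY"]  # variable name is illustrative; raises if unset
```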
News of vulnerabilities in LLM frameworks also follows research into how the models could be abused to advance the cyber attack lifecycle, including deploying the final-stage payload and command-and-control.
"Cyber threats posed by LLMs are not a revolution, but an evolution," Deep Instinct researcher Mark Vaitzman said. "There is nothing new there; LLMs are just making cyber threats better, faster, and more accurate at a larger scale. LLMs can be successfully integrated into every phase of the attack lifecycle with the guidance of an experienced driver. These abilities are likely to grow in autonomy as the underlying technology advances."
Recent research has also demonstrated a new method called ShadowGenes that can be used to identify a model's genealogy, including its architecture, type, and family, by leveraging its computational graph. The approach builds on a previously disclosed attack technique dubbed ShadowLogic.
"The signatures used to detect malicious attacks within a computational graph could be adapted to track and identify recurring patterns, called recurring subgraphs, allowing them to determine a model's architectural genealogy," HiddenLayer said in a statement shared with The Hacker News.
"Understanding the model families in use within your organization increases your overall awareness of your AI infrastructure, allowing for better security posture management."
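As a very rough illustration of the idea (not the ShadowGenes implementation itself), recurring operator patterns can be surfaced by walking a model's computational graph and counting repeated short sequences of node types, for example with the onnx package:

```python
# Rough conceptual sketch, not the ShadowGenes technique: count recurring short
# sequences of operator types in an ONNX graph as a crude stand-in for the
# "recurring subgraphs" that hint at a model's architectural family.
from collections import Counter
import onnx

model = onnx.load("model.onnx")          # file path is illustrative
op_types = [node.op_type for node in model.graph.node]

WINDOW = 4                               # length of the operator n-grams to compare
patterns = Counter(
    tuple(op_types[i:i + WINDOW]) for i in range(len(op_types) - WINDOW + 1)
)

for pattern, count in patterns.most_common(5):
    print(count, "x", " -> ".join(pattern))
```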