Apple has made its Private Cloud Compute (PCC) Virtual Research Environment (VRE) publicly available, allowing the research community to test and validate the privacy and security guarantees of its offering.
PCC which Apple promulgated earlier this June was marketed as “the most advanced security architecture ever deployed for large-scale cloud computing.” With the new technology, the idea is to offload Apple Intelligence’s complex computing queries to the cloud in a way that doesn’t sacrifice user privacy.
an apple said it invites “all security and privacy researchers—or those with an interest and technical curiosity—to learn more about PCC and conduct their own independent verification of our claims.”
To further encourage research, the iPhone maker said it is expanding the Apple Security Bounty program to include PCC, offering cash payouts ranging from $50,000 to $1,000,000 for security flaws discovered in it.
This includes flaws that could allow malicious code to execute on the server, and exploits capable of extracting sensitive user data or information about user requests.
VRE aims to offer a set of tools to help researchers conduct PCC analysis from the Mac. It comes with a Secure Enclave Processor (SEP) virtual processor and uses macOS’ built-in support for paravirtualized graphics to render.
Apple also said it was creating source code related to some PCC components available via GitHub to facilitate deeper analysis. This includes CloudAtestation, Thimble, splunkloggingd, and srd_tools.
“We developed Private Cloud Compute as part of Apple Intelligence to take an extraordinary step forward in privacy in artificial intelligence,” the Cupertino-based company said. “This includes providing auditable transparency, a unique feature that sets it apart from other server-side AI approaches.”
The development comes as broader research into generative artificial intelligence (AI) continues to uncover new ways to hack large language models (LLMs) and produce unexpected results.
Earlier this week, Palo Alto Networks detailed a technique called Deceptive admiration this involves mixing malicious and benign queries together to force AI chatbots to bypass their fences by taking advantage of their limited “attention span”.
The attack requires at least two interactions and works by first asking the chatbot to logically connect several events – including a limited topic (such as how to make a bomb) – and then asking it to elaborate on the details of each event.
The researchers also demonstrated the so-called ConfusedPilot attack, which targets Retrieval-Augmented Generation (ANUCHA), are based on artificial intelligence systems such as Microsoft 365 Copilot, poisoning the data environment with a seemingly innocuous document containing specially crafted strings.
“This attack allows AI responses to be manipulated simply by adding malicious content to any documents that the AI system can reference, potentially leading to widespread disinformation and disruption of organizational decision-making processes,” Symmetry Systems. said.
Separately, it was discovered that it is possible to spoof a machine learning model computeracial graph to install “codeless, secret” backdoors in pre-trained models such as ResNet, YOLO and Phi-3, a method codenamed ShadowLogic.
“Backdoors created using this technique will persist through fine-tuning, meaning that the underlying patterns can be intercepted to cause an attacker-defined behavior in any downstream application upon receiving trigger input, making this attack technique a strong supply chain risk Artificial Intelligence”, Hidden Layer Researchers Eoin Wickens, Casimir Schultz and Tom Bonner said.
“Unlike standard software backdoors that rely on the execution of malicious code, these backdoors are built into the very fabric of the model, making them more difficult to detect and remediate.”