Empower Users and Protect Against GenAI Data Loss

By Admin | June 6, 2025 | 5 Mins Read


06 June 2025Hacker NewsArtificial Intelligence / Zero Trust


When generative AI tools became widely available in late 2022, it wasn't only technologists who paid attention. Employees across all departments immediately recognized the potential of generative AI to improve productivity, streamline communication, and accelerate work. Like many earlier waves of consumer IT innovation, such as file sharing, cloud storage, and collaboration platforms, AI landed in the enterprise not through official channels, but through the hands of employees eager to work smarter.

Faced with the risk of employees submitting sensitive data to public AI interfaces, many organizations responded with urgency and force: they blocked access. While understandable as an initial defensive measure, blocking public AI apps is not a long-term strategy; it's a stopgap. And in most cases, it isn't even effective.

Shadow AI: The Invisible Risk

The Zscaler ThreatLabz team has been tracking AI and machine learning (ML) traffic across enterprises, and the figures tell a compelling story. In 2024 alone, ThreatLabz analyzed 36 times more AI and ML traffic than in the previous year, identifying more than 800 different AI applications.

Blocking didn't stop employees from using AI. They email files to personal accounts, use their phones or home devices, and capture screenshots of text to feed into AI systems. These workarounds move sensitive interactions into the shadows, beyond the reach of enterprise monitoring and protection. The result? A growing blind spot known as Shadow AI.

Blocking unauthorized AI applications may drive reported usage to zero on your dashboards, but in reality your organization isn't protected; it's just blind to what's actually happening.

Lessons from the SaaS Era

We have been here before. When early software-as-a-service tools appeared, IT teams scrambled to control the unsanctioned use of cloud storage applications. The answer wasn't to ban file sharing; rather, it was to offer a secure, seamless alternative that matched employee expectations for convenience, ease of use, and speed.

This time, however, the stakes are even higher. With SaaS, a data leak often means a misplaced file. With AI, it could mean inadvertently training a public model on your intellectual property, with no way to delete or retrieve that data once it's absorbed. There is no "undo" button on a large language model's memory.

Visibility First, Then Policy

Before an organization can intelligently manage AI usage, it must understand what is actually happening. Blocking traffic without visibility is like building a fence without knowing where the property lines are.

We have solved problems like this before. Zscaler's position in the traffic flow gives us an unmatched vantage point. We can see which applications are being accessed, by whom, and how often. This real-time visibility is essential for assessing risk, shaping policy, and enabling smart, safe AI adoption.
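As a rough illustration of what such visibility involves (this is a toy sketch, not Zscaler's actual implementation; the domain list and log format are assumptions for the example), discovering shadow AI starts with aggregating outbound traffic logs against a catalog of known AI app domains:

```python
from collections import Counter, defaultdict

# Hypothetical catalog of AI app domains; a real deployment would rely on a
# vendor-maintained database covering hundreds of applications.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarize_ai_traffic(log_lines):
    """Aggregate 'user,domain' proxy log lines into per-app access
    counts and the set of users touching each AI app."""
    hits = Counter()
    users = defaultdict(set)
    for line in log_lines:
        user, domain = line.strip().split(",")
        if domain in AI_DOMAINS:
            hits[domain] += 1
            users[domain].add(user)
    return hits, users

logs = [
    "alice,chat.openai.com",
    "bob,claude.ai",
    "alice,chat.openai.com",
    "carol,intranet.example.com",
]
hits, users = summarize_ai_traffic(logs)
print(hits["chat.openai.com"])      # 2
print(sorted(users["claude.ai"]))   # ['bob']
```

Even this crude who/what/how-often summary is enough to turn a blind spot into a prioritized risk list: apps with many distinct users warrant policy attention first.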

Next, we evolved how we handle policy. Many vendors simply offer the black-and-white options of "allow" or "block". The better approach is context-aware, policy-driven control aligned with zero-trust principles, which assume no implicit trust and demand continuous, contextual evaluation. Not every use of AI carries the same risk, and policy should reflect that.

For example, we can allow access to an AI app with a caution shown to the user, or permit the transaction only in browser isolation mode, meaning users are unable to paste potentially sensitive data into the app. Another approach that works well is redirecting users to a corporate-approved alternative application that is managed internally. This lets employees gain the productivity benefits without the risk of data exposure. If your users have a secure, fast, and sanctioned way to use AI, they won't need to go around you.
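The graduated outcomes described above can be sketched as a small decision function. This is an illustrative model only, with assumed risk tiers and request attributes, not the actual policy engine of any product:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    CAUTION = "allow_with_caution"    # warn the user, then permit
    ISOLATE = "browser_isolation"     # render remotely; pasting disabled
    REDIRECT = "redirect_to_approved" # steer to the sanctioned internal app
    BLOCK = "block"

@dataclass
class Request:
    app_risk: str          # assumed tiers: "low" | "medium" | "high"
    has_approved_alt: bool # a corporate-approved alternative exists
    user_trained: bool     # user completed AI-safety training

def decide(req: Request) -> Action:
    """Context-aware decision instead of a binary allow/block."""
    if req.app_risk == "low":
        return Action.ALLOW
    if req.has_approved_alt:
        return Action.REDIRECT
    if req.app_risk == "medium":
        return Action.CAUTION if req.user_trained else Action.ISOLATE
    return Action.BLOCK

print(decide(Request("medium", False, False)).value)  # browser_isolation
```

The point of the sketch is the shape of the policy: outcomes form a spectrum between allow and block, and the decision consumes context (app risk, available alternatives, user attributes) rather than just the destination.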

Finally, Zscaler's data protection tools mean we can allow employees to use certain public AI applications while preventing them from inadvertently sending sensitive information. Our research shows more than 4 million data loss prevention (DLP) violations in the Zscaler cloud, representing instances where sensitive data (such as financial data, personally identifiable information, source code, and medical data) was destined for an AI application, and the transaction was blocked by Zscaler policy. Real data loss would have occurred in these AI apps without Zscaler's DLP enforcement.
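At its simplest, this kind of inline DLP check means scanning an outbound prompt for sensitive-data patterns before it leaves the network. The sketch below is a deliberately minimal stand-in (production DLP engines use exact-data matching, document fingerprinting, and ML classifiers, not just the toy regular expressions assumed here):

```python
import re

# Toy patterns for three sensitive-data categories; illustrative only.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def scan_prompt(text: str):
    """Return the DLP categories matched in an outbound AI prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def allow_transaction(text: str) -> bool:
    """Block the upload if any sensitive pattern is present."""
    return not scan_prompt(text)

print(allow_transaction("Summarize this meeting"))          # True
print(allow_transaction("My SSN is 123-45-6789, help me"))  # False
```

Because the check runs inline on the transaction itself, the employee keeps access to the sanctioned AI app; only the specific risky upload is stopped.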

Balancing Enablement with Protection

This isn't about stopping AI adoption; it's about enabling it responsibly. Security and productivity don't have to be at odds. With the right tools and mindset, organizations can achieve both: empowering users and protecting data.

Learn more at Zscaler.com/security

Found this article interesting? This article is a contributed piece from one of our esteemed partners. Follow us on Twitter and LinkedIn to read more exclusive content we publish.




