After a good year of steady gains, the hangover has finally hit. This is a delicate moment (for now) as the market adjusts the stock prices of major players (such as Nvidia, Microsoft and Google) while other players re-evaluate the market and adjust their priorities. Gartner calls this the Trough of Disillusionment: interest wanes as implementations fail to deliver the promised breakthroughs. Technology providers shake out or fail. Investment continues only if the surviving vendors improve their products to the satisfaction of early adopters.
Let’s be clear: this was always going to happen. The post-human revolution promised by AI proponents was never a realistic goal, and the incredible excitement generated by the early LLMs was never grounded in market success.
AI is here to stay
What’s next for AI? If the Gartner Hype Cycle holds, the deep crash is followed by the Slope of Enlightenment, where the maturing technology regains its footing, gains crystallize, and vendors bring second- and third-generation products to market. And if all goes well, this is followed by the holy grail: the Plateau of Productivity, where mainstream adoption takes off as the technology proves its worth in the market. Gartner is careful to add caveats: not every technology recovers from the trough, and it matters that a product finds its market quickly enough.
It is now almost certain that AI is here to stay. Apple and Google are bringing consumer products to market that repackage the technology into smaller, digestible, easy-to-use features (photo editing, text editing, advanced search). While the quality is still very uneven, it seems that at least some players have found a way to productize generative AI in a way that makes a difference – both to consumers and to their own bottom line.
What has LLM ever done for us?
Well, where does that leave enterprise customers – and cybersecurity applications in particular? The fact is that generative AI still has significant shortcomings that hinder its widespread adoption. One of them is the fundamentally non-deterministic nature of generative AI. Since the technology itself is built on probabilistic models (a feature, not a bug!), the output will vary. This may scare off some industry veterans who expect old-school deterministic software. It also means that generative AI will not be a replacement for existing tools – it is rather an enhancement of and addition to them. However, it has the potential to act as one layer of a multi-layered defense that is difficult for attackers to predict as well.
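To make the non-determinism concrete, here is a minimal Python sketch of temperature-based token sampling, the mechanism behind the varying output. The token names, logit values, and function names are illustrative only, not any particular model’s API:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Lower temperature sharpens the distribution (more deterministic output)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0, rng=random):
    """Pick one token at random, weighted by softmax probabilities.
    Two calls with identical inputs can legitimately return different tokens."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates for a security chatbot:
tokens = ["block", "allow", "quarantine"]
logits = [2.0, 1.5, 0.5]

# Repeated runs over the same prompt can yield different outputs:
outputs = {sample_token(tokens, logits) for _ in range(100)}
```

Raising the temperature flattens the distribution and increases variability; pushing it toward zero approaches a deterministic argmax, which is why "temperature 0" is often recommended when reproducibility matters.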
Another drawback that causes problems with adoption is cost. The models are very expensive to train, and this high cost is now being passed on to consumers. Therefore, much attention is being paid to reducing the cost per query. Improvements in hardware, combined with breakthroughs in model refinement, promise significant reductions in the power consumption of running AI models, and there are reasonable expectations that (at least for textual output) inference will turn into a profitable business.
Cheaper and more accurate models are great, but there is also a growing realization that integrating these models into organizational workflows will be a significant challenge. As a society, we do not yet have the experience to know how to effectively integrate artificial intelligence technologies into everyday work practices. There is also the question of how the existing workforce will accept and work with the new technologies. For example, we’ve seen cases where workers and customers prefer to interact with a model that prioritizes explainability over accuracy. A March 2024 study from Harvard Medical School found that the effect of AI assistance was inconsistent across a test sample of radiologists, with AI improving some radiologists’ performance and worsening others’. The recommendation is that while AI tools should be introduced into clinical practice, a nuanced, personalized and carefully calibrated approach must be used to ensure optimal patient outcomes.
What about the market approach we mentioned earlier? While generative AI will (probably) never replace the programmer (whatever some companies claim), AI code generation has become a useful prototyping tool in a variety of scenarios. This is already useful for cybersecurity professionals: generated code or configuration is a smart starting point for quickly building something before refining it.
A big caveat: the existing technology has the ability to speed up the work of an experienced professional who can quickly debug and correct the generated text (code or configuration). But this can be potentially disastrous for a user who is not a veteran in the field: there is always the possibility of creating an unsafe configuration or dangerous code that, if it goes into production, will worsen the organization’s cybersecurity posture. So, like any other tool, it can be useful if you know what you’re doing, and it can backfire if you don’t.
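One way to hedge against that risk is to run generated output through an automated review gate before it goes anywhere near production. The sketch below flags obviously dangerous entries in an AI-generated firewall configuration; the rule format and the risk patterns checked are hypothetical and purely illustrative:

```python
def audit_generated_rules(rules):
    """Flag obviously risky patterns in AI-generated firewall rules
    before they reach production. The pattern list is illustrative only;
    a real gate would use a proper policy engine."""
    findings = []
    for i, rule in enumerate(rules):
        if rule.get("action") == "allow" and rule.get("source") == "0.0.0.0/0":
            findings.append((i, "allow from any source"))
        if rule.get("port") == "*":
            findings.append((i, "wildcard port"))
    return findings

# Hypothetical output from a code-generation tool:
generated = [
    {"action": "allow", "source": "0.0.0.0/0", "port": "22"},   # risky
    {"action": "allow", "source": "10.0.0.0/8", "port": "443"},  # fine
]
issues = audit_generated_rules(generated)
```

The point is not the specific checks but the workflow: a human (or at least a linter) stands between the generator and production, which is exactly the discipline an experienced professional applies by instinct and a novice may skip.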
Here we must caution about one particular characteristic of the current generation of generative AI tools: they sound deceptively confident when announcing their results. Even if the text is clearly wrong, all modern tools present it in a confident manner that easily misleads novice users. So, keep in mind: the computer lies about how confident it is, and sometimes it’s very wrong.
Another effective use case is customer support, or rather level 1 support – being able to help customers who don’t bother reading the manual or published FAQs. A modern chatbot can intelligently answer simple questions and direct more complex inquiries to a higher level of support. While not exactly ideal from the customer’s perspective, the cost savings (especially for very large organizations with many untrained users) can be significant.
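A minimal sketch of that level-1 triage logic might look like the following; the FAQ entries and escalation keywords are invented for illustration, and a production chatbot would use an LLM classifier rather than substring matching:

```python
# Queries containing these topics go straight to a human agent (hypothetical list):
ESCALATE_KEYWORDS = {"refund", "breach", "legal", "outage"}

# Simple questions the bot can answer itself (hypothetical FAQ):
FAQ = {
    "reset password": "Use the 'Forgot password' link on the login page.",
    "business hours": "Support is available 9:00-17:00 on weekdays.",
}

def handle_query(query):
    """Answer simple questions directly; route everything else to level 2."""
    q = query.lower()
    if any(keyword in q for keyword in ESCALATE_KEYWORDS):
        return ("escalate", "Routing you to a level 2 agent.")
    for topic, answer in FAQ.items():
        if topic in q:
            return ("answered", answer)
    # Unrecognized queries also escalate rather than guess:
    return ("escalate", "Routing you to a level 2 agent.")
```

The design choice worth noting is the final fallback: when the bot is unsure, it escalates instead of answering, which limits the damage a confidently wrong response can do.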
Uncertainty about how artificial intelligence will be integrated into business has been a boon to the management consulting industry. For example, Boston Consulting Group now earns 20% of its revenue from AI-related projects, while McKinsey expects 40% of its revenue to come from AI projects this year. Other consulting companies, such as IBM and Accenture, are also involved. The business projects are quite diverse: translating advertisements from one language to another, advanced procurement search for evaluating suppliers, and hardened customer-service chatbots that avoid hallucinations and include links to sources for increased reliability. Although only 200 out of 5,000 customer inquiries go through the chatbot at ING, this can be expected to increase as the quality of responses improves. As with the evolution of Internet search, one can imagine a tipping point where “asking a bot” becomes the knee-jerk reaction rather than digging through a morass of data.
AI governance must address cybersecurity challenges
Regardless of the specific use cases, new AI tools are creating a whole new set of cybersecurity headaches. Like RPA in the past, customer-facing chatbots need machine identities with appropriate, sometimes privileged, access to enterprise systems. For example, a chatbot may need to be able to identify a customer and retrieve records from a CRM system, which should immediately raise alarm bells for IAM veterans. Setting up accurate access control around this experimental technology will be a key aspect of the implementation process.
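One way to keep such a machine identity on a short leash is an explicit grant check before every backend call. The sketch below assumes a hypothetical permission model (the `CHATBOT_GRANTS` set and the `crm.customer` resource name are invented) in which the chatbot’s identity can read a customer record but never write:

```python
# Hypothetical least-privilege grants for the chatbot's machine identity:
# it may read customer records, and nothing else.
CHATBOT_GRANTS = {("crm.customer", "read")}

def authorize(identity_grants, resource, action):
    """Return True only if the identity holds an explicit grant."""
    return (resource, action) in identity_grants

def lookup_customer(customer_id, grants=CHATBOT_GRANTS):
    """Fetch a customer record on the chatbot's behalf, checking the
    grant first so a compromised bot cannot wander into other systems."""
    if not authorize(grants, "crm.customer", "read"):
        raise PermissionError("chatbot identity lacks crm.customer:read")
    # ...a real implementation would call the CRM API here...
    return {"id": customer_id, "tier": "gold"}
```

Because write actions simply have no grant, the failure mode of a misbehaving or hijacked chatbot is a denied request rather than a modified record, which is the blast-radius limiting the article argues for.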
The same is true for code generation tools used in Dev or DevOps processes: establishing proper access to the code repository will limit the blast radius in case something goes wrong. It also reduces the impact of a potential breach should the AI tool itself become a cybersecurity liability.
And, of course, there’s always third-party risk: by implementing such a powerful but little-understood tool, organizations open themselves up to attackers probing the limits of LLM technology. The relative immaturity here can be problematic: we don’t yet have established best practices for hardening these models, so we need to make sure they don’t have write access to sensitive areas.
AI capabilities in IAM
At the moment, use cases and opportunities for AI in access control and IAM are taking shape and being delivered to customers in products. Traditional areas of classic ML, such as role mining and entitlement recommendations, are being re-examined in light of modern methods, with role creation and evolution more closely woven into off-the-shelf governance workflows and user interfaces. Recent AI-inspired innovations such as peer-group analysis, decision recommendations, and behavior-based governance are becoming par for the course in the world of identity management. Customers now expect access technologies such as SSO and privileged account management (PAM) systems to offer AI-based anomaly and threat detection based on user behavior and sessions.
Natural language interfaces are starting to significantly improve UX across all of these categories, allowing interactive, natural-language exchange with the IAM system. We still need static reports and dashboards, but the ability for people with different responsibilities and needs to query in natural language and refine search results interactively reduces the skills and training organizations need to extract value from these systems.
This is the end of the beginning
One thing is certain: whatever the state of AI technology in mid-2024, it will not be the end of the field. Generative AI and ML are just one subfield of AI, and many other AI-related fields are advancing rapidly thanks to advances in hardware and generous government and private research funding.
Whatever form enterprise-ready mature AI takes, security veterans already need to consider the potential benefits generative AI can bring to their defense posture, what these tools can do to poke holes in existing defenses, and how to contain the blast radius if an experiment goes wrong.
Note: This article was prepared by Robert Byrne, Field Strategist at One Identity. Rob has over 15 years of IT experience in various roles, including development, consulting and technical sales, with a career focused mainly on identity management. Prior to joining Quest, Rob worked with Oracle and Sun Microsystems. He holds a Bachelor of Science degree in Mathematics and Computer Science.