In Conversation With… Federico Charosky, CEO of Quorum Cyber: The Impact of AI on Cyber Security

It’s clear that almost every industry is experiencing a degree of disruption as a result of AI. To kick things off, how do you see AI changing the threat landscape?

AI is a force multiplier, not a paradigm shift. It is just another capability that will be leveraged by both sides of the equation. I don’t think it enables the bad guys or the good guys to do anything they weren’t doing before. It primarily enables us to do more of the things we were doing before, but faster and better. That said, there are some specific risks developing in ways we haven’t seen before. Generative AI is going to lead to a normalisation of voice and video phishing attacks. A video of someone is no longer evidence of truth, and we haven’t yet developed countermeasures that can reliably tell which videos or voice recordings are fake and which are real. The AI models themselves can’t identify or defend against this yet; OpenAI recently confirmed that it is unable to detect whether content has been generated by large language models (LLMs). Whilst these fakes aren’t a brand-new threat, they are a more sophisticated expression of an old one.

We aren’t quite at the stage where you can produce real-time fakes, but we aren’t far away. In a matter of weeks or months, models will be able to produce a real-time emulation of an ‘AI Federico’ that sounds and looks like me and is believable on a Teams call. We are moving towards this at ridiculous speed. The danger of real-time fakes is that people will genuinely believe they are on a phone call with me when they aren’t. This also complicates the dynamics of an attack, because the victim is not only the target but also the person being impersonated. I think people will be very exposed to these scams in the beginning, and sadly information and money will be extracted using the voice of a known contact, perhaps a colleague or family member. The advent of generative AI is the most important technological development we are going to see in our lives; nothing else is going to be this impactful. It is certainly going to take time for people to prepare for and understand the changes.

How can we protect against deep fakes? Am I going to have to do two-factor authentication when I am on the phone with a friend or colleague?

It is a good question and it presents a difficult challenge to solve. Having confidence in our communication channels is going to be key; we need to be able to trust them. There are some really clever identity-validation capabilities coming from Microsoft that we are planning to use. We are implementing this in our Security Operations Centre (SOC) so that when we call a customer to escalate something, we can validate that we are speaking to the person claiming to be on the other end of the line. New technologies will help, but they will also create new workflows. Given how long it has taken people to do simple business email compromise checks, and the fact that they are still failing at this daily, this is going to be a mess. There is going to be a lot of fraud and scams through this channel, especially as the technology becomes easier for criminal gangs to access.
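
To illustrate the principle (not Microsoft’s or Quorum Cyber’s actual mechanism, which the interview doesn’t detail), here is a minimal sketch of out-of-band call verification: both parties derive a short-lived code from a secret enrolled in advance, so a convincing voice alone proves nothing. The TOTP-style scheme and the enrolment step are assumptions for the example.

```python
import hashlib
import hmac
import struct
import time

def call_verification_code(shared_secret: bytes, interval: int = 60) -> str:
    """Derive a short-lived six-digit code from a pre-shared secret.

    Both parties compute the code independently; the caller reads it out,
    and the recipient checks it matches before trusting the voice.
    """
    counter = int(time.time()) // interval
    message = struct.pack(">Q", counter)
    digest = hmac.new(shared_secret, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

# Analyst and customer enrolled the same secret through a separate channel.
secret = b"enrolled-out-of-band"
print("Code to read out on the call:", call_verification_code(secret))
```

In practice the secret would live in an authenticator app or identity platform rather than in code; the point is that trust comes from the enrolled secret, not from how the caller sounds.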

Is generative AI democratising the ability to launch sophisticated attacks? Will criminal gangs that weren’t historically involved in cyber security be able to launch meaningful attacks?

AI definitely lowers the bar for criminals to launch a higher volume of more sophisticated attacks. Distributed denial-of-service (DDoS) attacks are an equivalent example from the past. It used to be that only massive groups with botnets could launch them. The botnets then became commoditised and anybody could buy a DDoS for a fiver a day to attack the whole of Microsoft; it became a widely available and easy, yet sophisticated, attack mechanism. I think we will see a similar pattern with sophisticated phishing fakes. There is a chance that this rise will act as a catalyst for the adoption of technologies we’ve known we’ve needed for a while. We’ve talked about a central system of cryptographically secure identities, where you bring your identity with you to your job, your healthcare provider and your bank, giving everyone a single, trusted method of identity validation. That model could completely disrupt this sort of attack, but it would require a huge transformation.

Are advancements in generative AI changing the profiles of individuals and organisations that are at risk of cyber-attacks?

As we were saying before, AI is a force multiplier. From a risk perspective, everybody who was at risk before will continue to be at risk, just to a greater extent. However, I think if AI is ‘done well’, it can actually generate disproportionate benefits on the defender’s side. One of the problems with AI is that the rate of progress is so fast. If you do not adopt it early, then you might never be able to catch up. I am excited about the opportunity that our industry has to get ahead of a lot of threat actors out there. I think that what you can do with AI from a security standpoint is more powerful than what the threat actors can do with it. If we really do unlock that power, we might have the upper hand for the first time in a long time.

Can you elaborate on why you think there is a disproportionate advantage to the defenders?

I think there are some really useful parallels in the kinetic world. If you look at Israel’s Iron Dome, it has the ability to track almost the entire sky in real-time, and can track the movement of a missile and intercept it. The threat actor still has to get the missile through to be successful, and the level of difficulty has massively increased. The defence can scan the whole sky comprehensively and at incredible speed; they don’t need to defend actively against the threats at the source. The same applies in the digital world.

AI means that more attacks can be launched, but they still show up in the target’s systems as the same kind of spikes; that doesn’t fundamentally shift the balance in favour of attackers. Meanwhile, the way we protect against threats, by identifying and isolating anomalies, becomes far more powerful with the help of AI. Our ability to collect, analyse, interpret and respond to data is going to be unlike anything we have ever seen. Attackers are going to have to become much more sophisticated to pierce through these defences.
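
To make the anomaly-isolation point concrete, here is a minimal sketch, assuming nothing more than hourly telemetry counts and a simple z-score test; production platforms use far richer models, but the principle – an attack must spike above a learned baseline to achieve anything – is the same.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Test the current time bucket against a baseline built from past buckets."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (current - mu) / sigma > threshold

baseline = [52, 48, 55, 50, 47, 53, 49, 51]  # normal hourly sign-in counts
print(is_anomalous(baseline, 420))  # True: a credential-stuffing burst stands out
print(is_anomalous(baseline, 54))   # False: within normal variation
```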

Cancer cells provide another useful metaphor. We can easily detect cancer cells in the body, but what we cannot yet do is monitor for them in real time. Imagine how effectively we could catch and treat cancer if doctors suddenly had access to real-time information about every cell in the body. That is the potential AI holds for cyber security.

How close are we to having real-time constant monitoring across IT estates?

This is coming. Real-time estate monitoring is already being embedded into product ecosystems, like Microsoft’s, by default. It does come at a cost but, commercials aside, we are not talking about technology that is going to take years to deploy. It is already being embedded into the cloud ecosystems that customers use today.

So does this increase the attractiveness of adopting an ‘all Microsoft’ strategy?

Massively. We at Quorum Cyber have aligned with Microsoft as a Microsoft-only provider, but I think AI increases the likelihood of monolithic providers winning more generally. Whether it is Microsoft, AWS or Google, the native integration capabilities that enable you to monitor your entire estate – from identities and emails to applications and endpoints, and everything in between – are profound. Being able to frictionlessly apply intelligence to that data and mine it for insights has changed the game.

In my opinion, this means that the days are numbered for niche security product providers which have been trying to corner small segments of the market, unless they pivot massively. Whilst there are opportunities to partner with the ecosystem players, as Google and CrowdStrike are currently doing, the direction of travel is definitely towards integrated security ecosystems.

If Microsoft and the other ecosystem providers are putting so much effort into embedding real-time monitoring into their ecosystem, where do managed security providers come in and how do they continue to add value on top of the tools Microsoft is developing?

This is a major consideration for companies like us. In my opinion, we are still needed to unlock the technology for our customers who don’t typically have the skills or resources to maximise the benefit of the technology themselves. Importantly, this technology is not just plug-and-play. It still needs fine-tuning, which requires human enrichment, analysis and intervention. I think our approach will always be to have a human in the loop.

When you look at Microsoft, I think one of their strokes of genius when it comes to the adoption of AI is naming it ‘Copilot’. This was not accidental. They deliberately did not call it ‘Pilot’, because Microsoft is presenting AI as an aid to the human. This is very much what Microsoft has been saying and doing for years: not aiming to replace the human, but to augment the human. This is where we come in. We try to be the pilot working with our AI copilot, doing more with less and giving humans more time to focus on what we do best. I think this will hold true for the foreseeable future.

Will the cost of your service come down if AI means that you require fewer people or less proprietary technology layered on top of cyber security tools to deliver a high-quality service?

First of all, I have never seen a technology in this space that makes everything cheaper. The ecosystem always seems to find a way to charge a premium on top of a premium, whether it is service providers or the technology companies themselves. Secondly, I don’t necessarily agree that we are going to need fewer people going forward. That is an assumption a lot of people have jumped on: that service providers will be able to reduce headcount and increase their margins. Instead of needing fewer people, I think we’re going to need the same number of people doing different things. There is an opportunity to think about how else people can add value. Whilst improved product capabilities may reduce the number of people required in the immediate term, we aren’t going to stop innovating and rethinking how people add value, so in the long run things may balance out. I guess we will have to see how it pans out.

Do you think it will take a long time until a plug-and-play cyber solution can give you the same level of protection as being a customer of a leading managed service provider?

One fundamental challenge – a common problem with AI – is the lack of auditability and transparency in how it makes decisions. This means quite a lot of human scrutiny will still be needed when important decisions are being made. We are all aware of the issues with false assumptions and hallucinations in LLMs, which occur by virtue of their design. At the end of the day, LLMs are algorithms that learn which words or data usually appear next to other words or data, and combine them into an answer. That doesn’t necessarily mean those data should have been connected. If you translate that problem to detection in cyber security, where false positives are a constant concern, you are still going to want someone reviewing and checking decisions. Hence, for a long time I think we can expect the best results from a ‘copilot’, not a ‘no pilot’, approach.
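
The next-word intuition is easy to demonstrate. Below is a deliberately crude sketch: a bigram chain which, like an LLM at vastly greater scale, only knows which words tend to follow which, so it produces fluent-looking text that can connect things that were never actually connected – the hallucination problem in miniature. The tiny corpus is invented for the example.

```python
import random
from collections import defaultdict

# Learn which word follows which, then chain choices together.
corpus = ("the attacker moved laterally . the analyst isolated the host . "
          "the attacker escalated privileges .").split()

follows = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word].append(nxt)

random.seed(7)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])  # any word ever seen after this one
    output.append(word)
print(" ".join(output))
# Fluent word by word, yet the chain can splice unrelated sentences together,
# e.g. attributing the attacker's actions to the analyst.
```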

How should organisations’ approaches to cyber security change in light of recent advancements in AI?

I don’t think AI fundamentally changes the approach organisations need to take. If they were at risk before, they’re still at risk today. Should they be monitoring their estate? Absolutely. And they should probably be monitoring it more than before and be more aware of the attack vectors that are increasingly likely to be exploited.

Microsoft’s CISO, Bret Arsenault, the person in charge of protecting Microsoft, made a comment on this topic which has stuck with me. It was something along the lines of ‘AI is one of those races that you want to run towards, even if you don’t know everything you want to know about it.’ This applies to all of AI, not just its applications in cyber security. It is not an opportunity where you want to wait and see what happens, because late adopters are going to be left behind very quickly and catching up is going to be really, really difficult. I think we need to proceed with a bit more risk than we are comfortable with, but be ready to learn quickly from what is and isn’t working. Doing nothing doesn’t strike me as a sound or safe option.

Also, I’m still advising that we keep humans in the picture for now. I see similarities with the period when RPA (Robotic Process Automation) software and companies like Blue Prism started to emerge. There was a mistaken approach at the time of trying to automate and save costs everywhere immediately. The companies that found success instead automated the things they had been doing manually for years and understood to a T; they didn’t try to automate the problems they were still figuring out. I think the same applies to AI: we should remove from our day jobs the things we have done forever and understand reasonably well. That should give cyber professionals more time to focus on solving the issues they are still trying to figure out.

As an aside, how have you already been using generative AI at Quorum Cyber?

Our security analytics team has been using generative AI for about a year to accelerate code creation for new analytics. They ask the generative AI for a query, take the 80% of the answer that is accurate, and then iterate on it.
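
As a sketch of that draft-then-review workflow (the interview doesn’t specify the team’s actual tooling, so the function below is a hypothetical stand-in for the model call, returning a canned KQL-style draft):

```python
def draft_detection_query(description: str) -> str:
    """Hypothetical stand-in for a generative AI call; returns a canned draft here."""
    return ("SecurityEvent"
            " | where EventID == 4625"
            " | summarize failures = count() by Account, bin(TimeGenerated, 5m)"
            " | where failures > 20")

# The model gets you roughly 80% of the way; an analyst then reviews and
# tightens thresholds, field names and scoping before anything is deployed.
draft = draft_detection_query("alert on bursts of failed sign-ins")
print("Draft for analyst review:\n" + draft)
```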

We have also been using it for a range of internal processes. Our People team has agreed to run towards AI, figuring out what does and doesn’t work along the way, and they are making great progress! For example, on our internal website we have some instructional five-minute videos explaining new policies: recorded by a bot, read from a script developed by a script-generation tool, based on a policy that was itself produced by an LLM.

Is Zero Trust becoming an even more important approach to adopt?

The idea that you need to authenticate every single interaction, not just when you come through the door, is really important. The specific term ‘Zero Trust’ has been abused as a marketing term in the cyber security space, but its principles are becoming even more fundamental. If we can solve for identity assurance, that goes a long way towards managing the risk. A number of interesting solutions are being developed in this space, and it is an area where Microsoft is not alone. There are some good identity management ecosystems separate from Microsoft, which means there is more than one flavour on offer; that is good for companies and consumers alike.
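
To make ‘authenticate every single interaction’ concrete, here is a minimal sketch in which every request must prove itself, with a shared signing key standing in for a proper identity provider; real Zero Trust architectures add short-lived tokens, device posture and continuous evaluation.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-only-secret"  # in reality issued and rotated by an identity provider

def sign(payload: bytes) -> str:
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def handle_request(payload: bytes, signature: str) -> str:
    """Verify this interaction, every time, rather than trusting anything
    that has already made it 'inside' the network."""
    if not hmac.compare_digest(sign(payload), signature):
        return "rejected: identity not proven for this request"
    return "accepted"

message = b"transfer=100"
print(handle_request(message, sign(message)))           # accepted
print(handle_request(message, "forged-signature"))      # rejected
```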

What can organisations do in the short term to harden their security? Is more training the answer?

Training certainly has a place, but I have a couple of fears. First, some people are going to throw in the towel. They will say this is impossible, feel they can never do anything about the risk, and may end up doing less because of the perceived asymmetry of the threat. As a community of partners, suppliers and technology providers, we need to do more to show people that they can have a successful journey. Secondly, I fear that too often the burden falls on individuals rather than organisations. If all of their defence rests upon Susan in the corner not clicking on a link, then organisations are failing. Security teams should be providing a seamlessly secure experience for users, rather than relying on employees to provide the security and then blaming them, instead of the organisational security strategy, when things go wrong.

We have discussed leveraging the benefits of AI to monitor an organisation’s IT estate, but presumably this will be harder to deliver to some complicated environments?

Absolutely. Most estates are messy, either because of M&A activity or simply because upgrading or integrating every part of an estate wasn’t deemed cost-effective. The cloud can be more secure when done correctly, and I think the majority of people now recognise this; it is far easier to monitor and maintain. It of course has a price, and it is not necessarily cheaper than on-premises or hybrid environments, but it is more agile. When it comes to security architecture, someone told me there are three E’s to remember: effective, efficient and elegant. The elegant part is key to me because it means no obscurity and no corners to hide in. Bulky hybrid environments have lots of dark corners. This has always been the case and is not new with AI; however, AI will help attackers find those dark corners and will enable breaches to spread more effectively through organisations’ estates.

On a slightly separate note, do you think AI has the potential to alleviate pressures in the cyber labour market?

This is a very interesting topic. For me, the easy answer is yes, but it will also create stresses elsewhere. Organisations will be able to shift their cyber talent away from some of the easier, more junior roles to focus on other things. There is a risk associated with this, however: one of the ways people become senior cyber professionals is by doing those junior roles. Learning to pen test, understanding how threat actors operate, writing reports and analysing tickets manually all become reflexes, and they are the foundations that enable people to develop to the next level as cyber professionals. Technology has always done this to an extent, but AI is such a fundamental shift that it will almost certainly have unintended consequences. I think AI will change the job market rather than alleviate all of its pressures. As the threat level increases, we will need more senior talent to do really complex work, and there is likely to be a shortage of that talent.

It is impossible to know exactly where the demand might shift. Right now, we are seeing a thriving market for prompt engineers, the people who know how to ask questions of LLMs and AI models effectively. I would like to think that the thinking element will remain in the domain of humans. Threat intelligence analysis and the ability to connect the dots will stay with humans in the short term, perhaps with some AI assistance. Will AI be able to ‘win’ long term? It is difficult to say. We said the same about the board game Go, and then AlphaGo beat the world champion by doing something incredibly creative that humans had never done before. I imagine the same could happen for threat intelligence at some point.

What impact do you think AI will have on generalist MSPs with limited cyber security offerings?

It is absolutely possible that companies are going to use AI to put makeup over a capability and say, ‘we have an even better solution now because it is AI-powered’. Furthermore, it might be more difficult for consumers to do due diligence on their suppliers and understand what a market-leading offering looks like. We are already struggling with this issue and AI is likely to only make this messier. However, as waves of innovation happen, smaller and more innovative suppliers and technology providers will differentiate themselves from more established players.

Taking us as an example: we had no right to exist, as there were already enough SOC and Managed Detection and Response (MDR) providers working with established technologies in the market. However, we saw an opportunity to use technology a bit differently and created an opening. Somebody else is going to figure that out with the next wave of technological change and disrupt us and today’s other service providers. That is going to be super cool, because we are going to see a continuous iteration towards better. I think at this particular moment there is an escape-velocity opportunity for firms that are well placed to leverage the next wave of change, and some will certainly be left behind.

I think a key differentiator for the foreseeable future will be relationships, and we might see the value of the relationship between suppliers and customers increase. This has been one of the ways we have been winning to date: we haven’t been selling a commoditised licence but a fully committed, mutually invested relationship. I think this model is going to thrive regardless of technological innovation, and it is only going to become more important that people know someone they trust is available when things go wrong and they really need help.

Are there any comments you’d like to close with?

I would like to finish on a positive note. I think it’s crucial to convey the message that AI can be a huge differentiator and when leveraged effectively, it can have a net positive impact on cyber security. We just need to be smart about it, as well as curious and open to its opportunities.


Our Experience

If you would like to discuss this article or are considering investing in the cyber security market or any other areas in which Fairgrove has experience, please contact Patrick Woodrow or Adam Lee.

Photo: Towfiqu Barbhuiya / Unsplash