Cybersecurity and Its Relationship With AI
- Alex Morris II
- Dec 8, 2025
- 3 min read
Introduction
Within the last three to five years, the world has seen a sharp increase in the power of artificial intelligence. At one point in time, AI-generated pictures and videos looked sloppy and were clearly fake; now we have online content that looks like real life. We have also seen AI integrated into the professional world, where specific jobs are being targeted and replaced by machines in different capacities. The IT world is no exception, with AI being integrated across various industries to handle different IT-related tasks. And while this may seem beneficial or even efficient, it can actually be detrimental to a business's operations.
AI Enhancing Cybersecurity
AI-driven innovations are sweeping across industries thanks to the advanced tools and methods they offer for keeping businesses safe. These advancements have produced algorithms capable of identifying potential threats and responding with high levels of accuracy. Machine learning models can be trained to detect and analyze threats and to deliver faster responses, minimizing the damage a data breach could otherwise cause.
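To make the idea of trained models spotting threats concrete, here is a minimal sketch of the kind of statistical anomaly detection such tools build on. It is a toy illustration, not any vendor's actual method: the host names, counts, and the z-score threshold are all hypothetical assumptions chosen for the example.

```python
# Minimal sketch: flag hosts whose failed-login counts deviate
# sharply from a learned baseline (a stand-in for the statistical
# anomaly detection real security tools perform at scale).
from statistics import mean, stdev

def anomalous_hosts(baseline, today, z_threshold=3.0):
    """Return hosts whose count today exceeds the baseline mean
    by more than z_threshold standard deviations."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [host for host, count in today.items()
            if sigma > 0 and (count - mu) / sigma > z_threshold]

# Hypothetical data: typical daily failed-login counts, then today's
baseline = [12, 9, 14, 11, 10, 13, 12]
today = {"web-01": 11, "web-02": 240, "db-01": 13}

print(anomalous_hosts(baseline, today))  # only web-02 is flagged
```

The point of the sketch is the "learning" step: the detector is only as good as the baseline it was trained on, which is exactly the training concern raised in the next section.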
The Challenges of AI Integration
I was at an event for business owners and met a guy in finance who had dabbled in the AI world. While we were discussing AI and how it fits into the overall IT landscape, he offered this analogy: AI is powerful, but it is just like a college student. When you hire an entry-level employee fresh out of college, is it a good idea to hand them their duties on day one and say good luck? Or should they first be taught certain things about their job so they can eventually go out on their own? The same absolutely goes for AI. Models need to be properly trained on the ins and outs of the organization so that when potential threats do arise, they can be properly identified.
Unfortunately, more often than not, companies take a plug-and-play approach with AI, merging tools into day-to-day processes from the start. While this may seem like a noble effort, especially since threats are constantly evolving and companies are doing what they can to stay ahead of the curve, it can actually be more harmful than helpful. Without properly trained models, the likelihood increases of issues such as false positives, ethics and compliance gaps, rising overall costs, a lack of the necessary human response to threats, and potential disruption to operations.
False Positives
False positives occur when security tools incorrectly flag an otherwise harmless action as a potential threat. Over time, constant false alarms lead to alerts being ignored, wasted resources that cut into overall company spending, and a greater risk of security teams overlooking legitimate attacks. As a result, a data breach becomes more likely.
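The alert-fatigue problem comes down to where the detection threshold sits. This toy sketch (the scores and thresholds are invented for illustration, not drawn from any real product) shows how a hair-trigger threshold buries analysts in false alarms while a calibrated one keeps alerts actionable:

```python
# Hypothetical suspicion scores (0 = clearly benign, 1 = clearly
# malicious) produced by some detector for recent events.
benign_scores = [0.10, 0.22, 0.35, 0.41, 0.55, 0.18, 0.47, 0.30]

def false_positive_rate(threshold):
    """Fraction of benign events that would still trigger an alert."""
    flagged = [s for s in benign_scores if s >= threshold]
    return len(flagged) / len(benign_scores)

print(false_positive_rate(0.25))  # hair-trigger: 0.625 of benign events alert
print(false_positive_rate(0.70))  # calibrated: 0.0 false positives here
```

Tuning that threshold well requires training against the organization's own traffic, which is precisely what the plug-and-play approach skips.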
Ethics and Compliance
In order for AI models to "learn", they must process and store massive amounts of data. And despite the skyrocketing capabilities of AI-related tools, there have been few efforts to define the confines within which these tools are allowed to manage data. Without these guardrails, we could see AI accessing information it does not need and becoming vulnerable to further attacks.
Overall Costs
In some cases, security tools can be costly, especially when protecting a large number of systems. In addition, companies often deploy AI security tools so that they can operate independently. That approach may seem sensible on the surface, but, similar to the ethics issue above, allowing AI agents to work on their own can cause monthly bills to skyrocket due to a lack of oversight. Like anything, tools cost money to operate, and poor management can lead to them running unnecessarily and escalating expenses.
Lack of Human Response
Whenever vulnerabilities are found, they have to be remediated in a timely manner; otherwise, hackers will find and exploit them. AI security tools can find threats more quickly than humans can, but humans still need to patch those vulnerabilities. If companies rely solely on AI tools to find holes but don't have the manpower to patch them, a massive backlog can develop and burn out security teams.
Potential Disruptions
As mentioned earlier, AI models have to be trained when they are incorporated into business operations. Without a gradual integration, they will execute and process data in ways that negatively impact other workflows. Whether it is an unexpected patching run during core hours or threat actors leveraging AI to carry out sophisticated, large-scale attacks, these disruptions can interfere with the performance of adjacent procedures.