AI not the silver bullet – yet

Nothing on earth is as fast as the speed at which Artificial Intelligence (AI) experts have appeared and companies have added those two letters to their products and solutions.

Johan van Wyk believes AI is of great benefit, but it is not a silver bullet – yet.

Speaking at Securex 2024, Van Wyk, the sales and marketing director at FANG Fences and Guards, said that, as things stand, AI, while powerful and capable of handling numerous tasks, lacks the human elements of intuition, judgment and emotional intelligence, often referred to as “Humics”.

Therefore, the security sector must develop systems that integrate AI and human oversight.

“Human-AI collaboration is fast becoming inevitable today and will most definitely be inevitable in the near future. AI can be leveraged for the tasks it is best suited to, normally mundane ones such as data analysis (across many verticals), real-time monitoring and pattern recognition, while complex decision-making and interpersonal interactions are left to humans. This ensures the strengths of both are utilised effectively,” said Van Wyk.

He said AI has fast become a buzzword over the last two years, but it can unfortunately be overused and overemphasised from an electronic security solutions point of view.

“AI must be considered as a solution and only be implemented when it makes sense to do so, and the only way to know if AI will address a client’s needs is by means of a proper, detailed risk assessment. AI cannot, and should not, be used as a blanket, general solution; it must always be logical to implement it as part of an integrated, and most likely turnkey, electronic security solution to address the identified risk.”

Van Wyk said that to ensure AI works effectively in the security sector, a combination of strategies should be considered, such as:

  • Ensure AI systems are transparent and that their decision-making processes are understandable to human operators. This builds trust and enables better oversight.
  • Develop security systems that integrate both AI and human oversight. For instance, AI can flag potential security incidents for human review so that critical decisions are made by experienced professionals. In an offsite video monitoring control room, for example, analytical AI software is a crucial part of raising and identifying an “alarm”, but a human is still required to make the final decision on the action to be taken, such as dispatching armed response or notifying the police (and deciding not to dispatch is equally important). A minimal sketch of this workflow follows the list.
  • Continuously train security personnel to work with AI tools. This includes understanding AI’s capabilities (and potential shortcomings), interpreting its outputs, and making informed decisions based on AI-generated data.
  • Regularly conduct drills and simulations in which AI and human teams work together. This helps identify potential gaps and areas for improvement in both the technology and the human response.
  • Ensure that AI systems comply with industry standards and regulations, including data privacy, security protocols and ethical guidelines, providing a framework within which AI operates safely and effectively. Everyone, especially the developers and users of AI, should be mindful of and work towards its ethical use.
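
The control-room pattern in the second point can be reduced to a simple flow: the analytics layer only flags likely incidents, and a human operator makes the final call. The Python sketch below illustrates that split under stated assumptions; the class names, confidence threshold and sample events are hypothetical and are not drawn from any product mentioned in this article.

    # Minimal sketch of a human-in-the-loop alarm workflow (illustrative only).
    from dataclasses import dataclass
    from enum import Enum, auto

    class Action(Enum):
        DISPATCH_ARMED_RESPONSE = auto()
        NOTIFY_POLICE = auto()
        DISMISS = auto()  # deciding NOT to dispatch is equally important

    @dataclass
    class Alarm:
        camera_id: str
        description: str
        confidence: float  # produced by the video-analytics (AI) layer

    def ai_flag_events(raw_events: list[Alarm], threshold: float = 0.7) -> list[Alarm]:
        """AI layer: real-time monitoring and pattern recognition.
        It only flags likely incidents; it never acts on them itself."""
        return [e for e in raw_events if e.confidence >= threshold]

    def human_review(alarm: Alarm) -> Action:
        """Human layer: an experienced operator makes the final decision.
        In a real control room this would be an operator workstation, not a prompt."""
        print(f"[REVIEW] {alarm.camera_id}: {alarm.description} "
              f"(confidence {alarm.confidence:.0%})")
        choice = input("Action? [d]ispatch / [p]olice / [x] dismiss: ").strip().lower()
        return {"d": Action.DISPATCH_ARMED_RESPONSE,
                "p": Action.NOTIFY_POLICE}.get(choice, Action.DISMISS)

    if __name__ == "__main__":
        events = [
            Alarm("cam-03", "person climbing perimeter fence", 0.91),
            Alarm("cam-07", "animal movement near gate", 0.35),
        ]
        for alarm in ai_flag_events(events):
            decision = human_review(alarm)
            print(f"Operator decision for {alarm.camera_id}: {decision.name}")

In this sketch the low-confidence event never reaches the operator, while the flagged one is queued for review, mirroring the division of labour Van Wyk describes: the software raises and identifies the alarm, the human decides what, if anything, to do about it.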
