How hackers use technology in various phishing and social engineering strategies

Advances in Artificial Intelligence (AI) are increasingly striking. By adopting it, companies can add value to their services, automate complex tasks and improve their day-to-day interaction with customers. However, it must be kept in mind that AI is a tool, and like any tool, the outcome of applying it depends on how it is used.

New threats emerge from the malicious use of AI. Some are relatively harmless pranks; others are outright criminal activity, such as systems that pose as humans to bypass verification mechanisms, or fake chatbots that ask victims to enter sensitive information.

Threats from AI-driven chatbots appear among the information security industry predictions that the WatchGuard research team developed, based on its security analyses and the threat trends observed during 2018–2019.

"Cyber criminals continue to change the threat landscape as they update their tactics and intensify their attacks on companies, governments and even the Internet infrastructure," said Corey Nachreiner, Chief Technology Officer at WatchGuard Technologies.

"In this scenario, SMEs continue to be a target for cybercriminals, so they must start reviewing their current security measures and make the security of their networks a high priority, ideally implementing solutions through managed service providers," the executive added.

Black hat hackers carry out these attacks by planting malicious chat windows on legitimate sites. "The objective is to steer victims into clicking the malicious link, thereby downloading files that contain malware or sharing private information such as passwords, emails, credit card numbers or banking credentials," Nachreiner explains.

Virtual assistants and chatbots give hackers new attack vectors. A hijacked chatbot could redirect victims to malicious links instead of legitimate ones, and attackers could also exploit web application flaws on legitimate sites to inject a malicious chatbot of their own.

"For example, a hacker could force a fake chatbot to pop up while the victim is on a banking website, asking whether they need help finding something. The chatbot could then recommend that the victim click malicious links to fake banking resources instead of the real ones. Those links could let the attacker do anything from installing malware to virtually hijacking the connection to the bank's site," explains Nachreiner.
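One practical mitigation for the link-redirection attack described above is for site operators to validate every URL their chatbot offers against an allowlist of known-good domains before rendering it. A minimal sketch in Python, where the function name and bank domains are hypothetical examples, not from the article:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the bank's chatbot may link to.
ALLOWED_DOMAINS = {"www.examplebank.com", "help.examplebank.com"}

def is_allowed_link(url: str, allowed: set = ALLOWED_DOMAINS) -> bool:
    """Accept only HTTPS links whose host is on the allowlist."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in allowed

# A look-alike link injected by a hijacked chatbot fails the check:
print(is_allowed_link("https://examp1ebank-login.com/reset"))   # False
print(is_allowed_link("https://www.examplebank.com/support"))   # True
print(is_allowed_link("http://www.examplebank.com/support"))    # False (not HTTPS)
```

Checking the parsed hostname, rather than substring-matching the raw URL, avoids being fooled by look-alike domains that merely contain the bank's name.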

To help detect malicious chatbots, the CTO advises making sure that chat communication is encrypted in all cases, and regulating how the data from those chat sessions is managed and stored.
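The advice to verify that chatbot traffic is encrypted can be enforced in code: refuse to open a chat session unless the endpoint uses HTTPS and presents a certificate that passes standard validation. A hedged sketch in Python, where the endpoint name is hypothetical:

```python
import socket
import ssl
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    """Cheap first check: only HTTPS chatbot endpoints are acceptable."""
    return urlparse(url).scheme == "https"

def certificate_is_valid(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Full TLS handshake with certificate-chain and hostname verification.

    ssl.create_default_context() enables CERT_REQUIRED and hostname
    checking, so a self-signed or mismatched certificate makes the
    handshake raise SSLError and this function return False.
    """
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

# Hypothetical chatbot endpoint on a banking site:
print(uses_https("https://chat.examplebank.com/session"))  # True
print(uses_https("http://chat.examplebank.com/session"))   # False
```

Rejecting plain HTTP up front is a small gate, but combined with proper certificate validation it blocks the simplest fake-chatbot setups that serve content over unencrypted or self-signed connections.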

In short, it is vitally important that those responsible for IT access, systems and security in organizations of all sizes go beyond implementing appropriate security measures: training employees on hacker tactics, and on how to stay vigilant against them, must be part of a regular update routine so that any suspicious activity is caught early.