Chinese tech companies need to address security concerns around ChatGPT-like services, experts say
As global tech companies scramble to offer competing products to ChatGPT, the much-discussed chatbot from San Francisco-based startup OpenAI, Chinese artificial intelligence (AI) and security experts are warning that the unchecked growth of such services is raising cybersecurity concerns.
According to a representative of Beijing-based online security firm Huorong, hackers and online scam groups have used ChatGPT, which is capable of providing human-like answers to complex questions, to write malicious code that could be embedded in spam and phishing emails.
“We found that ChatGPT is being used to generate malicious code,” the person said. “It lowers the barriers to launching online attacks.”
The Huorong representative added that while ChatGPT makes it easier to launch online attacks, it doesn’t necessarily increase the effectiveness of such attacks.
“(ChatGPT) is able to quote malicious open-source backdoor or trojan codes that are already available online, but it will not be able to augment the codes’ function (to make them more effective),” the person said.
Still, another tool that can aid and potentially popularize internet fraud does not bode well for Chinese internet users, who are already exposed to a range of online threats, from privacy leaks to malicious adware.
Dr You Chuanman, director of the IIA Center for Regulation and Global Governance at the Chinese University of Hong Kong, Shenzhen campus, warned that the evolving technology may pose further challenges to the online security sector.
“There have been instances where ChatGPT has been used alongside some other encrypted services such as Telegram or WhatsApp, making online criminal activity more covert and difficult to detect or track,” You said.
He added that the AI chatbot could also make life much harder for Chinese internet companies, which until now have largely relied on armies of human censors to review online content. ChatGPT-like services, which can potentially spawn a huge volume of online fraud and sensitive content, will mean a significant increase in content review budgets, You said.
However, the potential proliferation of online fraud is not the only problem. Hackers can also exploit ChatGPT’s language capabilities to compose phishing emails that seem more convincing.
“Personalized and error-free phishing and scam content appears more credible to victims and is likely to be more effective (with AI-powered chat tools),” said Feixiang He, Adversary Intelligence Research Lead at cybersecurity solution provider Group-IB.
“AI makes it faster and cheaper for scammers to generate unique and personalized phishing content and scripts targeted at victims,” he added.
In mid-February, a Hangzhou resident used ChatGPT, which is not officially available in China and requires a virtual private network (VPN) service to access, to write a fake announcement, in the tone of the city government, about the scrapping of the city’s license plate restriction policy.
The announcement quickly spread online and prompted an investigation by local police, according to local media reports. It was the first major example of ChatGPT being used to spread an online rumor in China.
According to Liang Hongjin, a partner at talent agency CGL Consulting, which helps Chinese companies hire AI talent, Chinese tech firms are increasingly aware of the security challenges that AI technologies could bring as they race to launch their own ChatGPT-like services.
Liang said his firm has been tapped by a number of China’s top internet firms to hire top scientists specializing in AI-related security.
But compared with the fierce competition for people who can develop and launch ChatGPT-like services, Chinese companies are lagging behind in hiring for safety, and “overall this is a universal trend (of ignoring the need for better regulation of AI technologies) worldwide,” Liang said.
Source: Crypto News Deutsch