
Brainteaser: The advent of ChatGPT will be a big challenge for cybersecurity apparatus. Here's why.

Atul Rai, co-founder and CEO of Gurugram-based AI research firm Staqu Technologies, explains that artificial intelligence has progressed through three generations: detection (where it could detect objects or scenes), recognition (comprehending those objects or scenes), and generation (producing human-like text, images, and so on). ChatGPT and OpenAI's GPT (generative pre-trained transformer) models fall under the third category.

The first two phases tested human capability, putting the labour market at risk. The third phase, however, comes closer to mimicking the human brain, and so poses a risk to people working in intellectual fields, he says.

Another concern is that the AI-driven chatbot may learn the tone of users' writing, and total reliance on it can cloud information-based comprehension. Further, because its output is not validated, it has the potential to spread fake news. Since it was trained primarily on existing data, ChatGPT may pick up inaccurate information and generate fake content that appears realistic, Rai adds.

It's a valid point: ChatGPT's current training data extends only up to 2021, so it has no knowledge of later events. Here are a few more situations in which things could go badly wrong with ChatGPT.

  • It has the potential to be used for nefarious purposes such as the dissemination of misinformation, spam, or malicious content.
  • It can replicate and amplify biases in the data on which it is trained, leading to discriminatory or unfair outcomes.
  • It has the ability to automate tasks and processes, which may result in job losses in some industries.
  • The model often requires large amounts of data to be effective, raising concerns over collection and use of personal information.
  • Generative AI models can be used to orchestrate convincing phishing attacks or to automate the generation of malicious code, both of which pose a security risk (a sketch of a simple defensive screen appears after this list).
  • It is often difficult to determine who is responsible for the actions of a generative AI system. This lack of accountability can make it challenging to address any negative impacts of the system.
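As a rough illustration of the defensive side of the phishing point above, here is a minimal sketch of automated message screening. It is not from the article, and every pattern and name in it is hypothetical; real enterprise filters combine far more signals, such as sender reputation, link analysis, and trained classifiers.

```python
# Hypothetical illustration, not from the article: a naive rule-based
# screen for inbound message text. All patterns below are invented
# examples of common phishing phrasing.
import re

SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent(ly)? action required",
    r"click (the )?link below",
    r"password (reset|expired)",
]

def phishing_score(message: str) -> float:
    """Return a 0..1 score: the fraction of known phishing cues that match."""
    text = message.lower()
    hits = sum(bool(re.search(p, text)) for p in SUSPICIOUS_PATTERNS)
    return hits / len(SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    sample = "Urgent action required: please verify your account via the link below."
    print(f"score={phishing_score(sample):.2f}")  # higher means more suspicious
```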

When developing and deploying generative AI models, it is critical to carefully consider these potential concerns and take appropriate steps to address them.

Moreover, researchers had already raised concerns about the technology being used maliciously after GPT-3, the predecessor of the model behind ChatGPT, was used in 2020 to develop a chatbot that could convincingly impersonate a human being in a live chat.

Manpreet Singh Ahuja, chief digital officer, PwC India, says,

“Cyberguards of banks and other enterprises will have to be significantly higher to be able to identify threats. Enterprises will revisit validation mechanisms as the current systems become vulnerable to bots like ChatGPT”.
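One direction such revised validation mechanisms could take is statistical detection of machine-generated text. The sketch below is a hypothetical illustration, not anything PwC or the article describes: it uses the open-source transformers library to score text with GPT-2, on the rough heuristic that unusually low perplexity can hint at AI-written prose. It requires the torch and transformers packages, and it is a weak signal, never a verdict on its own.

```python
# Hypothetical illustration: language-model perplexity as one weak signal
# for flagging machine-generated text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text with the model; the returned loss is the mean
    # negative log-likelihood per token, so exp(loss) is the perplexity.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(input_ids=enc["input_ids"], labels=enc["input_ids"])
    return float(torch.exp(out.loss))

# Lower scores mean the text is more "predictable" to the model.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```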

Source: The Economic Times