Cybersecurity Breach: Over 1 Lakh ChatGPT Accounts Compromised, India Most Affected

In a concerning turn of events, new research reveals that more than 1 lakh ChatGPT accounts have been compromised, with India being the most affected country. The research, conducted by Singapore-based cybersecurity firm Group-IB, highlights the use of "info-stealing malware" by hackers to steal users' ChatGPT credentials.



The compromised accounts may not directly expose sensitive banking information, but they do pose a significant risk of fraud and further cyberattacks. Crucial user data such as email addresses, passwords, and phone numbers is at risk, leaving users vulnerable to phishing attempts.


Group-IB's investigation also uncovered the sale of compromised ChatGPT credentials on dark web marketplaces over the past year. This alarming trend emphasizes the importance of addressing cybersecurity threats and protecting personal information.


The research further highlights that the Asia-Pacific region, particularly India, has been the hardest hit by the cyberattack. Of the affected accounts, 12,632 belong to users in India, followed by Pakistan with 9,217 compromised accounts. Brazil, Vietnam, and Egypt have also experienced significant breaches.


The hackers responsible for the breach employed "info-stealing malware," which harvests credentials saved in browsers, along with bank card details, crypto wallet information, cookies, and browsing history. Users may unknowingly install this malware by clicking on suspicious links or downloading infected software.


Although ChatGPT accounts may not directly reveal bank or card information, unauthorized access gives hackers visibility into a user's chat history with the AI chatbot. That history can contain confidential and sensitive information, which could in turn be used for targeted attacks against companies and their employees.


Additionally, compromised accounts may put other online accounts at risk if users have reused their ChatGPT password elsewhere. Password reuse remains common, and a single leaked set of credentials can expose every account that shares it.


Group-IB recommends that ChatGPT users update their account passwords and enable two-factor authentication (2FA) for added security. Users are also advised to be cautious when downloading applications from untrusted developers and to avoid clicking on suspicious web links.
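When rotating passwords, it can also help to check whether a candidate password already appears in known breach dumps. The snippet below is a minimal sketch, assuming Python with the `requests` library and the public Have I Been Pwned "Pwned Passwords" range API; it is an illustration of that general approach, not part of Group-IB's report or any OpenAI tooling.

```python
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Have I Been Pwned
    corpus, using the k-anonymity range API (only the first five hex
    characters of the SHA-1 hash ever leave this machine)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line has the form "HASH_SUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    if pwned_count("example-password-123") > 0:
        print("This password has appeared in breaches; choose another one.")
    else:
        print("Not found in known breach data (still use a unique password).")
```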


The recent surge in compromised ChatGPT accounts on the dark web highlights the need for increased awareness and robust security measures. As ChatGPT gains popularity in both personal and professional settings, organizations and individuals must prioritize cybersecurity and take proactive steps to safeguard their data.


By staying vigilant and combining stringent access controls, encryption, and a broader cybersecurity strategy, users can mitigate the risks associated with compromised ChatGPT accounts. Enabling 2FA adds an extra layer of protection: even if login credentials are stolen, account access still requires a secondary verification code.
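To make the 2FA step concrete, the sketch below shows how a time-based one-time password (TOTP) works in general. It assumes Python with the `pyotp` library and is a generic illustration of the mechanism, not a description of OpenAI's own 2FA implementation.

```python
import pyotp

# A shared secret is generated once during 2FA enrolment and stored by
# both the service and the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives a fresh six-digit code from the secret
# and the current 30-second time window.
code = totp.now()
print("Current one-time code:", code)

# At login, the service verifies the submitted code against the same
# secret, so a stolen password alone is not enough to get in.
print("Code accepted:", totp.verify(code))
```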


As the world continues to navigate the digital landscape, it is crucial to recognize the potential threats posed by cybercriminals and take necessary precautions to safeguard personal and sensitive information.


Conclusion:

In summary, a recent report by cybersecurity firm Group-IB revealed that over 100,000 ChatGPT accounts were compromised and their credentials sold on the dark web between June 2022 and May 2023. The research highlighted the prevalence of info-stealing malware, such as Raccoon, that targeted users' devices and collected sensitive information including email addresses, passwords, and phone numbers. The Asia-Pacific region, particularly India and Pakistan, was most affected by the cyberattack. The compromised accounts pose a risk of unauthorized access to stored chat history, potentially exposing confidential information that could be exploited for targeted attacks. It is crucial for users to update their passwords, enable two-factor authentication, and exercise caution when downloading applications or clicking on suspicious links to mitigate such risks.
