Is ChatGPT a threat to your personal security?

Generative AI (GenAI) is incredibly powerful—it can create text, images, and even deepfake videos with astonishing accuracy. While this technology has many exciting uses, from creative projects to helping automate tasks, it also raises some serious security concerns. For instance, GenAI can be used to produce convincing fake content, such as videos or audio clips that mimic real people. Imagine a scammer using AI to clone your voice and call a family member pretending to be you, asking for money. The ability to generate such believable content means that scams, fraud, and misinformation could become much harder to detect, putting everyone at risk of falling victim to these kinds of attacks.

Moreover, GenAI can also be used to launch sophisticated cyberattacks. By mimicking human behavior, AI-driven bots could trick people into clicking malicious links or sharing sensitive information, making phishing schemes far more convincing than ever before. Hackers could even use AI to write harmful code or break into secure systems by finding vulnerabilities that humans might overlook. All of this means that the average person needs to be more cautious online and understand how AI could be exploited by bad actors. It's not just about protecting your own information, but also about being aware of the bigger picture, where AI-driven attacks could impact companies, governments, and the overall security of the internet.

The two paragraphs above were written by ChatGPT. I simply prompted it with, “write a two paragraph explanation in a conversational tone of why the average person should be concerned about security related to GenAI.” Not surprisingly, the answer it gave is entirely accurate.

AI provides many benefits and has real potential to improve the world, but not everyone using it has genuine intent. In February, a finance worker in Hong Kong was conned into paying out $25 million after joining an interactive conference call populated by AI deepfakes. Another story from the same month detailed how foreign governments use AI in phishing campaigns, social engineering attacks, and gathering data on high-profile individuals. With large hacking organizations and state-sponsored teams building AI-powered hacking tools, how is the regular person expected to protect themselves?

The answer is as old as time: COMMON SENSE. If an offer sounds too good to be true, it probably is. Question any website that asks for personal information it has no need for, put two-factor authentication on your bank account, and use critical thinking in every other decision you make online. ChatGPT states it clearly enough…

People can protect themselves in an AI-driven world by staying informed, using secure technologies, practicing digital literacy, and maintaining control over personal data and privacy.

By Jerry Patterson, Director of the Information Security Office, and ChatGPT


To mark National Cyber Security Awareness Month this year, Information Resources & Technology will be sharing tips and insights throughout October on how to protect your data -- and Rowan's data -- when using genAI. For more information on genAI at Rowan, visit go.rowan.edu/genAI. For other online security tips, visit go.rowan.edu/ncsam.