Why Are Security Fears About ChatGPT So Overblown?
Expert Etay Maor Says Limitations, Biases Make the AI Bot Unreliable - for Now

Technologists were quick to point out that the popular AI-based chatbot ChatGPT could lower the bar for attackers in phishing campaigns and even write malware code, but Cato Networks' Etay Maor advises taking these predictions "with a grain of salt." He explores the pros and cons in a demo of ChatGPT.
Like other software, ChatGPT can fail because of programming biases and a lack of transparency. "One thing we should definitely pay attention to is that ChatGPT is very confident in its answers," says Maor, senior director of security strategy at Cato Networks.
He advises organizations using this technology to pay particular attention to copyright law, data retention and privacy. He says that "some big companies have already been asking employees" not to share "company information with ChatGPT or with these interfaces, because they may retain or use this information."
In this video interview with Information Security Media Group, Maor discusses:
- Ways that both defenders and attackers can benefit from ChatGPT technology;
- The reliability of the code created by ChatGPT, how it falls short and whether these problems are fixable;
- How ChatGPT can speed up development and drive innovation in AI-based cybersecurity capabilities.
Maor previously served as chief security officer for IntSights, where he led strategic cybersecurity research and security services. He also held senior security positions at IBM, where he created and led breach response training and security research, and at RSA Security's Cyber Threats Research Labs, where he managed malware research and intelligence teams. He is an adjunct professor at Boston College and serves on the call-for-papers committees for the RSA and QuBits conferences.