Tag: AI

  • Modern Threat Landscape: Weaponizing Trust

    Traditionally, cybercriminals relied on building their own infrastructure to host and distribute malware. That landscape has shifted: trusted platforms such as Facebook now give attackers a readily available and seemingly legitimate distribution channel. With generative AI capturing widespread interest, cybercriminals exploit that fascination to spread malware through these trusted platforms. Imagine a Facebook page filled with…

  • New Research Demonstrates Automated Jailbreaking of Large Language Model Chatbots

    While LLMs promise helpful conversation, they can harbor hidden vulnerabilities that attackers may exploit. Carefully manipulated prompts can lead them to reveal sensitive information or generate unethical, inappropriate, or harmful content that violates their usage policies. This is known as a jailbreak attack: an attempt to bypass the model’s security measures and gain unauthorized…

  • Researchers Reveal Vulnerabilities in AI System

    The AI threat landscape is rapidly evolving. Natural Language Processing (NLP) enables seamless interaction with AI systems through conversational interfaces. However, as we increasingly rely on AI for productivity, new risks emerge. Recent research from the University of Sheffield has shown that NLP models such as ChatGPT can be misused to produce malicious code, posing…

  • ChatGPT’s Oscillating Accuracy: A Reminder That User Oversight Is Necessary

    I wanted to share a recent study demonstrating that ChatGPT’s accuracy has diminished on some key tasks, contrary to the common assumption that continued training should improve accuracy over time. This is a reminder of the vital need for human oversight in any use of AI technologies. Researchers from Stanford University and the University of California at…