Tag: AI

  • Researchers Reveal Vulnerabilities in AI Systems

    The AI threat landscape is rapidly evolving. Natural Language Processing (NLP) enables seamless interaction with AI systems through conversational interfaces. However, as we increasingly rely on AI for productivity, new risks emerge. Recent research from the University of Sheffield has shown that NLP models like ChatGPT can be misused to produce malicious code, posing…

  • ChatGPT’s Oscillating Accuracy: A Reminder That User Oversight Is Necessary

    I wanted to share a recent study demonstrating that ChatGPT’s accuracy has diminished for some key tasks, contrary to the prevalent assumption that training over time should increase accuracy. This reminds us of the vital need for human oversight in any use of AI technologies. Researchers from Stanford University and the University of California at…

  • Privacy-Invasive Inference Capabilities of Large Language Models Uncovered

    LLMs (Large Language Models) such as ChatGPT are word-association champs, using massive amounts of training data to guess which words come next. Interestingly, according to a recent study, they can also make a decent guess at a wide range of personal attributes, such as race, gender, occupation, and location, from anonymous text [1]. The article gives an example where…

  • Strings of Nonsense Convince AI Chatbots to Abandon Ethical Rules

    Continuing previous coverage of developments in AI systems, I wanted to share a study and demo from Carnegie Mellon University in Pittsburgh, Pennsylvania, and the Center for AI Safety in San Francisco, California, revealing a new spin on how chatbot safeguards are susceptible to attacks. AI chatbots like OpenAI’s ChatGPT, Google’s Bard, and Anthropic’s Claude don’t have…