Category: Blog

  • Privacy-Invasive Inference Capabilities of Large Language Models Uncovered

    Large language models (LLMs) like ChatGPT are word-association champs, using massive amounts of data to guess which words come next. Interestingly, a recent study found that they can also make a decent guess at a wide range of personal attributes, such as race, gender, occupation, and location, from seemingly anonymous text [1]. The article gives an example where…

  • “Streamjacking” is the Newest Evolution of an Old Threat

    You are no doubt aware of the many ways in which physical devices, networks, and platform accounts can be hijacked and exploited for nefarious purposes. The latest addition to the list is “streamjacking,” the takeover of streaming platform accounts, such as those on YouTube [1]. The classic example of hijacked systems many of you…

  • Strings of Nonsense Convince AI Chatbots to Abandon Ethical Rules

    Continuing previous coverage of developments in AI systems, I wanted to share a study and demo from Carnegie Mellon University in Pittsburgh, Pennsylvania, and the Center for AI Safety in San Francisco, California, revealing a new spin on how chatbot safeguards are susceptible to attack. AI chatbots like OpenAI’s ChatGPT, Google’s Bard, and Anthropic’s Claude don’t have…

  • Privacy Bug in Windows 11’s Snipping Tool

    Windows 11’s built-in Snipping Tool allows users to take screenshots and perform basic image-editing tasks, such as cropping, annotating, and highlighting. It also includes a redaction feature that lets users remove sensitive information from an image before saving or sharing it. Despite being a useful tool, it has a vulnerability that could have…