Category: Blog

  • From ‘Catching Bad Words’ to ‘Understanding Bad Intent’: AI Safety’s Next Evolution

    As Large Language Models (LLMs) like Claude and GPT-4 become central to our digital lives, a silent arms race is happening behind the scenes. On one side, “jailbreakers” try to trick AI into bypassing its safety filters; on the other, researchers build shields to keep the AI helpful and harmless. The recent paper “Constitutional Classifiers++:…

  • From Months to Days: AI-Assisted Peer Review with Human Oversight

    The Breaking Point: Your research center issues a call for papers on a pressing global challenge. Within weeks, 500 submissions flood in, each representing months or years of scholarly work, each deserving careful consideration. Then reality hits. You have perhaps a dozen qualified reviewers, most already overcommitted. Traditional peer review would demand thousands of person-hours…

  • LLMBase: One API, Many AI Models

    I built a tool to make working with multiple AI models easier, and it’s now available on GitHub (https://github.com/ngstcf/llmbase). Dealing with API Fragmentation: Like many developers experimenting with AI, I found myself wanting to try different models for different tasks. GPT-4o is great for general-purpose work, Claude Sonnet handles complex reasoning well, Gemini shines with…
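
    The appeal of a tool like this is a single adapter layer: application code talks to one interface, and provider differences stay behind it. Below is a minimal sketch of that adapter pattern in Python; the names used (ChatClient, OpenAIAdapter, AnthropicAdapter, chat) are hypothetical illustrations, not LLMBase’s actual API, and the provider calls are stubbed.

    ```python
    # Illustrative sketch only: a minimal adapter pattern for calling several
    # LLM providers through one shared interface. Class and method names here
    # are hypothetical and are NOT taken from LLMBase's actual API.
    from dataclasses import dataclass
    from typing import Protocol


    class ChatClient(Protocol):
        """The one interface application code depends on."""

        def chat(self, prompt: str) -> str: ...


    @dataclass
    class OpenAIAdapter:
        model: str = "gpt-4o"

        def chat(self, prompt: str) -> str:
            # A real adapter would call the OpenAI SDK here; stubbed for the sketch.
            return f"[{self.model}] {prompt}"


    @dataclass
    class AnthropicAdapter:
        model: str = "claude-sonnet"

        def chat(self, prompt: str) -> str:
            # A real adapter would call the Anthropic SDK here; stubbed for the sketch.
            return f"[{self.model}] {prompt}"


    def ask(client: ChatClient, prompt: str) -> str:
        # Call sites only see ChatClient, so switching providers means
        # constructing a different adapter, not rewriting application code.
        return client.chat(prompt)


    if __name__ == "__main__":
        for client in (OpenAIAdapter(), AnthropicAdapter()):
            print(ask(client, "Summarize API fragmentation in one sentence."))
    ```

    The point of the pattern is that model choice becomes a configuration detail rather than a code change, which is exactly the fragmentation problem the post describes.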

  • Break Your Research Filter Bubble with Multi-AI Synthesis

    The Filter Bubble Problem: Traditional research can sometimes fall short. When you rely on a single search engine or database, you may end up in a “filter bubble,” where algorithmic biases shape what you see. This can quietly limit exposure to important information and diverse perspectives, resulting in a narrower understanding of your topic. That’s…