A research team at the University of Stuttgart shows how easily large AI models can be persuaded to provide dangerous instructions for violent or illegal activities, despite built-in safety mechanisms. In their study, an "attacker" AI used conversational tactics to break through the output guardrails of other models. Also: how GPs are using AI to support documentation and reduce errors. Meanwhile, a team in Augsburg is using AI to advance textile recycling technology for industry.