AI agents must solve a host of tasks that require different speeds and levels of reasoning and planning capabilities. Ideally, an agent should know when to use its direct memory a…
DeepMind's creative lead Lorrain enhances media with AI, working on projects with Marvel and Netflix and teaching AI filmmaking at Columbia University.
Google DeepMind has been using its AI watermarking method on Gemini chatbot responses for months – and now it’s making the tool available to any AI developer.
As AI tech gets smarter, it’s getting harder to spot the difference between content made by a human and content dreamed up by an algorithm. Google, pushing the AI envelope itself, is aware of this and wants to help.
Google DeepMind launches SynthID, a tool that embeds invisible watermarks in AI-generated text, enhancing transparency and combating misinformation.
The company ran a massive experiment on the usefulness of its watermarking tool, SynthID, by letting millions of Gemini users rate its responses.
SynthID can watermark AI-generated content across different modalities, including text, images, audio, and video.
The move gives the entire AI industry an easy, seemingly robust way to silently mark content as artificially generated, which could be useful for detecting deepfakes and other damaging AI content before it spreads in the wild.
I spent a couple of days last week at the University of Oxford in the UK, where I spoke at and attended the Oxford Generative AI Summit. This multi-stakeholder event brought together elected and appointed officials from the UK and other countries, along with academics, executives, and scientists from tech and media companies.
It’s not your typical stop-motion film when characters name pets after Sylvia Plath and read The Diary of Anne Frank — or when the story’s inspired by a quote from existentialist thinker Søren Kierkegaard.
Google has finally open-sourced SynthID Text, its watermarking and detection tool for AI-generated text.
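To give a feel for how statistical text watermarking works in general, here is a minimal toy sketch. It is not SynthID's actual algorithm (which uses a more sophisticated tournament-sampling scheme) — the key, the vocabulary, and all function names are hypothetical illustrations of the underlying idea: a secret keyed hash nudges generation toward a "green list" of tokens, and a detector later measures how often green tokens appear.

```python
import hashlib
import random

KEY = b"demo-key"  # hypothetical secret key; real systems manage keys carefully

def is_green(prev_token: str, token: str) -> bool:
    """A keyed hash decides whether `token` is 'green' after `prev_token`."""
    digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all tokens are green

def generate(vocab, length, watermark=True, seed=0):
    """Sample a token sequence, preferring green tokens when watermarking."""
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(length):
        candidates = rng.sample(vocab, k=4)  # stand-in for a model's top-k choices
        if watermark:
            green = [t for t in candidates if is_green(out[-1], t)]
            out.append(rng.choice(green or candidates))  # fall back if no green token
        else:
            out.append(rng.choice(candidates))
    return out[1:]

def green_fraction(tokens, prefix="<s>"):
    """Detector: the fraction of tokens that land on the green list."""
    hits, prev = 0, prefix
    for t in tokens:
        hits += is_green(prev, t)
        prev = t
    return hits / len(tokens)
```

On unwatermarked text the green fraction hovers near 0.5; on watermarked text it is sharply elevated, which is the statistical signal a detector tests for. Because the bias only shifts choices among plausible candidates, the watermark is invisible to readers — the property the blurbs above describe.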