You thumb the Gemini app, hit “generate,” and your phone coughs up a 30‑second song. I watch that tiny file land in your messages and think: someone just traded the slow work of composition for a button. Now the choice is less about whether a machine can make music and more about what you do with those thirty seconds.
You tap generate and get a ringtone-length song.
Google has folded its Lyria 3 music model into the Gemini app and opened it to users aged 18 and up. The feature is in beta and will roll out to Gemini users in the coming days. Free accounts are limited to 30 seconds per creation; Google’s AI Plus, Pro, and Ultra tiers will raise that cap, but the company hasn’t published exact limits.
Lyria 3 is a microwave for melodies: fast, predictable, and serviceable when you need something immediately. For casual creators that’s perfect—short scores for shorts, quick background loops, or that social post that needs a soundtrack before lunch.
How long are songs generated by Lyria 3?
The baseline answer is 30 seconds for free users; subscribers on Google AI Plus, Pro, and Ultra get longer outputs, though Google left the specific lengths vague. If you were hoping to reproduce an epic, multi‑movement track, don’t hold your breath—this is engineered for short-form use.
You remember early music‑AI outputs that sounded like a glitchy choir.
Lyria 3 is the next public step from DeepMind’s lab work. Earlier Lyria versions were tested in Google’s Music AI Sandbox and used in experimental YouTube features that turned speech into song. Google says the new model writes its own lyrics, offers finer control over style, tempo, and vocal character, and produces more musically complex arrangements than before.
That matters because the tool no longer just stitches loops together; it tries to behave like a composer taking directions. Still, this is designed for quick expression, not symphonies.
You can feed it a prompt or feed it an image and watch it answer.
You give Lyria 3 a line of text, or you upload an image or video, and the model returns a short track plus album art generated by Google’s Nano Banana model. Expect a single, shareable clip—complete with AI‑written lyrics when you ask for them.
It becomes a faucet of 30‑second jingles that flows whenever you ask—handy for demos, risky for anything you want to call a craft object.
Can I use Lyria 3 tracks commercially?
Google embeds outputs with SynthID, its watermark for AI content, and Gemini can check audio uploads for that marker. That handles provenance, but rights and commercial use depend on Google’s terms of service and licensing rules for Gemini and DeepMind models—read those before you monetize anything you generate.
You ask how to tell if a track was machine‑made.
Google’s approach is to make detection part of the workflow: every Lyria output carries SynthID. The watermark is inaudible—you can’t spot it by ear—but you can upload audio into Gemini and get an automated check for the SynthID flag. That gives you a quick binary: AI‑made or not.
How does Lyria 3 generate music?
It ingests prompts—text, image, or video—and produces audio plus AI‑generated artwork. Behind the scenes it’s trained on large musical datasets (the company says it iterated on earlier Lyria models) and tuned for short, expressive outputs that map to user controls for style and tempo.
You should also note the languages supported at launch: English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, with more promised later. That opens global plug‑and‑play use, especially for creators who publish on platforms like YouTube or in short‑form social feeds.
You worry about quality, originality, and ethics as a creator.
Use cases matter. If you’re prototyping an ad soundtrack, Lyria 3 can save hours. If you’re trying to build a lasting body of work, this is a tool that reshapes the early stages of composition—not a substitute for craft. I’d advise testing outputs, tagging what’s AI‑generated, and checking platform rules before you post or sell.
For journalists, producers, and label execs, the new wrinkle is provenance plus convenience: Google bundles creation and detection under one roof—Gemini, DeepMind models, Nano Banana for artwork, and SynthID for tracing.
So you can generate a jingle in seconds, detect where it came from, and move on—but what are you still willing to fight for as a maker?