
Google’s Magenta group aims to create AI artists and musicians

Inspired by projects like Deep Dream, Google team members like Douglas Eck decided to assemble a group called Magenta to make creative AI.

Published on May 23, 2016

You may recall back when Google unleashed their AI engine on image databases and encouraged it to “fill in the gaps” in pictures by trying to pick out familiar images in the content. After many iterations, Deep Dream conjured up trippy interpretations of images crawling with eyeballs and animal heads. Inspired by witnessing computers undergo their first LSD trip, Google team members like Douglas Eck decided to assemble a new team called Magenta that will attempt to make creative artificial intelligences.

Magenta’s early work was showcased at Moogfest in Durham, North Carolina. The project will formally launch at the beginning of June and will take advantage of Google’s machine-learning engine TensorFlow. The idea is to create AI systems that are capable of producing wholly original music, video, or digital art. Also in the cards is the potential for storytelling bots, but Eck said during a panel discussion that the technologies they are working with are “very far from long narrative arcs.”

Google Deep Dream stares into the soul of our own Edgar Cervantes.

We caught a glimpse of AI creativity back when Google forced their AI engine to read thousands of steamy romance novels, after which it began writing some seriously eerie post-modern poetry, but Magenta is tackling this endeavor in a much more deliberate way. The project will start with a focus on music before expanding outward into visual media. At the panel, the team demonstrated an early test system that could be fed a handful of notes and generate a more complex melody.
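
Google hasn’t published the code behind that demo yet, but the input/output is easy to picture: a short “primer” of notes goes in, and a longer melody comes out. The toy sketch below is purely hypothetical; it swaps a simple Markov-style rule in for whatever model Magenta actually uses (the project is built on TensorFlow-trained networks), and is only meant to illustrate the shape of the idea.

```python
# Purely illustrative, hypothetical sketch (not Magenta's actual model):
# extend a short "primer" melody into a longer one. Magenta's real demo
# relies on a model trained on large amounts of music; a trivial
# Markov-style rule stands in for it here.
import random

# Primer: a handful of MIDI note numbers (C4, D4, E4, G4).
primer = [60, 62, 64, 67]

# Record which note followed which in the primer. A real model would
# learn these statistics from a large corpus of music instead.
transitions = {}
for a, b in zip(primer, primer[1:]):
    transitions.setdefault(a, []).append(b)

def continue_melody(notes, length=16):
    """Grow the melody to `length` notes by reusing observed transitions,
    falling back to small random steps so generation never stalls."""
    melody = list(notes)
    while len(melody) < length:
        last = melody[-1]
        if last in transitions:
            melody.append(random.choice(transitions[last]))
        else:
            melody.append(last + random.choice([-2, -1, 1, 2]))
    return melody

print(continue_melody(primer))  # e.g. [60, 62, 64, 67, 69, 71, ...]
```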

You might be wondering what the practical applications of this technology could be, beyond making already-starving artists and musicians even more starving in the future. Eck says there are actually pretty extensive practical applications for AI systems that can think creatively. For instance, Google Play Music can generate radio stations based on your favorite playlists, but it doesn’t understand why you like the songs you’ve chosen. It just algorithmically picks similar artists and tracks, which can make for some scattershot stations at times. If an AI system could understand your more nuanced creative preferences, why you enjoy listening to this particular set of songs, then it could add relevant tracks far more intelligently. Alternatively, wearables might be able to tell the system that you’re stressed out, and it could pick out (or spontaneously generate) some soothing music.

Magenta will be completely open, with the code available to developers via GitHub, so come June 1st you’ll be able to play around with this AI tech yourself. What do you think of the potential for future AI artists, musicians, and poets? A natural step in AI evolution or Pandora’s box? Give us your opinions in the comments below!