Artificial intelligence is a term that once seemed far away, reserved for futuristic movies and books. The expression can still spark controversy, with people concerned that faceless machines and robots will soon replace them and their colleagues. But the reality is that AI is already woven into everyday life: virtual personal assistants, video games, smart cars, purchase prediction, fraud protection, online customer support, news generation, security surveillance, smart home devices, and much more.
AI is far more real than many of us may have imagined, slipping into society without us even noticing. Currently, for example, AI is generating brand-new sounds that have never been heard before, and they could soon be coming right out of your radio.
The new system, called NSynth, was developed by Google's Magenta research team, which hopes to give musicians nearly limitless computer-generated instruments to work with.
“Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer,” said the team.
NSynth works by taking samples from about a thousand different instruments and blending them together in a two-step process. First, the program learns to identify the audible characteristics of each instrument so that they can be reproduced. The machine can then mimic each instrument alone, or blend instruments together, creating what sounds like a single, new instrument.
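At a high level, the blending step can be thought of as mixing the learned numeric representations (embeddings) of two instruments and generating sound from the result. The sketch below is an illustrative simplification, not the actual NSynth code: the embedding values and the `interpolate` helper are invented for the example, and real NSynth embeddings come from a neural autoencoder, not hand-written lists.

```python
# Illustrative sketch only: blending two instruments by interpolating
# their learned embeddings. The numbers here are made up; in NSynth,
# a neural network learns these features directly from audio samples.

def interpolate(emb_a, emb_b, mix):
    """Linearly mix two embedding vectors; mix=0 gives pure A, mix=1 pure B."""
    return [(1 - mix) * a + mix * b for a, b in zip(emb_a, emb_b)]

flute_embedding = [0.25, 1.0, 0.0, 0.5]    # hypothetical learned features
violin_embedding = [0.75, 0.5, 1.0, 0.25]  # hypothetical learned features

# Halfway between the two instruments: a sound that is neither
# quite a flute nor quite a violin.
hybrid = interpolate(flute_embedding, violin_embedding, 0.5)
print(hybrid)  # → [0.5, 0.75, 0.5, 0.375]
```

Because the mix parameter is continuous, a musician could in principle dial in any point between two instruments, which is what makes the space of possible sounds so large.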
The result? An algorithm-driven digital instrument that sounds somewhere between a flute and a violin.
The program relies on what is called “deep learning,” an approach to AI in which vast amounts of data are processed in a way loosely inspired by the human brain. Deep learning systems are intriguing for their ability to learn from their mistakes and improve over time. Much like our own brains, these systems teach themselves how to get better.
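The “learning from mistakes” idea can be shown with a toy example: a model makes a prediction, measures its error, and nudges itself in the direction that shrinks the error. This generic gradient-descent sketch is only meant to illustrate the principle; it is nothing like the scale or architecture of Google's actual model.

```python
# Toy illustration of "learning from mistakes": a one-parameter model
# repeatedly adjusts itself to reduce its prediction error.
# This is a generic gradient-descent sketch, not NSynth's training code.

target = 3.0        # the value the model should learn to predict
weight = 0.0        # the model's single adjustable parameter
learning_rate = 0.1

for step in range(100):
    prediction = weight
    error = prediction - target      # how wrong the model currently is
    weight -= learning_rate * error  # nudge the weight to reduce the error

print(weight)  # very close to 3.0 after 100 corrections
```

Each pass shrinks the error a little, which is the same basic loop, repeated over millions of audio samples and parameters, that lets a deep learning system improve over time.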
“We wanted to develop a creative tool for musicians and also provide a new challenge for the machine learning community to galvanize research in generative models for music. To satisfy both of these objectives, we built the NSynth dataset, a large collection of annotated musical notes sampled from individual instruments across a range of pitches and velocities,” said the team.
Music critic Marc Weidenbaum called Google’s new approach promising. “Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead,” he said.
The NSynth team has also made its database of sounds available for anyone to download and use, and has released a research paper describing the NSynth algorithms. AI researchers and other computer scientists will surely be eager to explore the system for themselves.