AI Archives - Audio Media International

AI-innovators DAACI on their groundbreaking new composer-aiding technology


At the crest of a wave of increasingly astounding AI-led compositional software, DAACI has a potential for generating melody, sound and texture that is vast in scope. Now, the company quests towards a Metaverse-leaning future…

 


 

As a 2020s music-maker, debating the pros and cons of AI-generated tracks has become one of our regular pastimes. From those platforms which reliably construct ready-to-go soundtracks on the fly, to those that nudge composers into certain niches, the question of whether the growing surge in computer-driven creativity is a good thing or a bad thing keeps many of us up at night. One thing that’s undeniable is that the quality of these algorithmically-designed works is getting better.

Enter DAACI: a fully-formed artificial intelligence, capable of composing, arranging, orchestrating and producing completely original music in real-time. An acronym for Definable Aleatoric Artificial Composition Intelligence, the DAACI software doesn’t rely on human-crafted samples or existing frameworks, instead forming its own musical architecture and often going above and beyond what composers are capable of.

We had a conversation with DAACI’s CEO Rachel Lyske to learn more about this intriguing software, and how DAACI might end up benefiting modern composers…

AMI: Firstly, can you give us an overview of DAACI, and how its AI-led tech is able to construct musical elements in real-time?

Rachel Lyske: The best way to answer that is to start thinking like a composer. In the compositional process, composers have options and choices for what they can do to achieve their end result. They have their defined options depending on what it is they’re trying to say. They’re not going to choose certain musical options that don’t meld well together (such as sad music during a car chase for example). So there are always many intelligent constraints over the options they choose.

So what we do at DAACI is encode that series of options based on an input, then the computer can present those options in real time to compose for whatever brief we need to fulfil. Hence we can create this dynamic and limitless music because we’re not static. That’s how DAACI works, it’s a composition brain that acts in the same way as a composer would.
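For the technically curious, the ‘series of options’ idea can be sketched in a few lines of code. The snippet below is our own toy illustration of constraint-based, aleatoric selection – not a representation of DAACI’s actual engine – but it shows the shape of the concept: the brief narrows the option space, and chance operates only inside those intelligent constraints.

```python
import random

# A toy illustration of constraint-based, aleatoric composition (our sketch,
# not DAACI's engine): the brief narrows the option space, then chance picks
# freely - but only inside those intelligent constraints.
BRIEFS = {
    "happy": {"scales": ["major", "lydian"], "tempo_range": (100, 130)},
    "scary": {"scales": ["locrian", "whole-tone"], "tempo_range": (60, 90)},
    "tense": {"scales": ["harmonic minor", "phrygian"], "tempo_range": (120, 150)},
}

def compose_bar(brief: str) -> dict:
    """Make one constrained-random choice per musical dimension."""
    options = BRIEFS[brief]
    return {
        "scale": random.choice(options["scales"]),
        "tempo": random.randint(*options["tempo_range"]),
    }

print(compose_bar("tense"))  # e.g. {'scale': 'phrygian', 'tempo': 137}
```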

AMI: How does DAACI interpret a composer’s input?

RL: If we truly understand what a brief is – especially in music – it doesn’t really communicate a deeper meaning on its own. What music does is give you an emotional connotation. We can annotate the emotional connotations within specific music choices, and what the emotional connotations will be if we use certain cues. Consequently, when we go through a brief and someone tells us they want it to be ‘happy’ or ‘scary’ or ‘tense’, we can look at this and see how combining certain options can lead to the end result that they need.

One half of how it works is an analysis process, and the other half is this meta-compositional process. So we’re helping people determine what they want to say emotionally – and how, musically, they can say it. Then we’re aiding them to combine elements from different places, to create bespoke reactions to that brief.

So the question kind of answers itself. It’s an emotional language. Straight away as a composer, you know what you need to fulfil certain briefs, and you bring certain elements together to make music that hits a certain emotional target.
 

Rachel Lyske, DAACI CEO

 

AMI: Do you see DAACI’s unique approach to AI-based music composition as more of a system that works in tandem with the composer, as opposed to a replacement?

RL: It is very much working in tandem with the composer. DAACI isn’t a replacement for a composer, it’s an enhancement of their process. They might choose it to replace their process but it’s certainly not replacing *the composer*. In reality, most commercial composers already have heaps of options to play with, and we’re just providing a similar mechanic for everyone else. We respect that composers do that, and we’re enhancing that approach.

AMI: You’ve stressed that video game composition is a particular area where DAACI might prove to have a strong impact – why do you think this is?

RL: Well the gaming market is massive and it’s only growing. As we get closer to the Metaverse and Web 3.0 it’s only going to swell. There’s no way that smaller composers can fulfil a lot of the huge demands that writing for interactive mediums entails. With this tool you can express your intent, and DAACI will do the rest.

As an early experiment, even with just three inputs we worked out that it could generate 5×10^11 variations. It exceeded that, actually. To put that in context, Spotify holds around 82 million tracks, and that’s 6,400 times smaller than the number of options available via just the three inputs we entered for our brief. So we’ll invite any gaming company who wants to explore all that with us.
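Those figures are easy to sanity-check with a little arithmetic of our own (the numbers below are the ones quoted above, not independent data):

```python
# Sanity check on the figures quoted above (our own arithmetic, not DAACI's).
variations = 5e11        # ~5 x 10^11 variations from three inputs
spotify_tracks = 82e6    # ~82 million tracks on Spotify

print(variations / spotify_tracks)  # ~6,098
# Close to the "6,400 times smaller" quoted - consistent with the actual
# variation count having exceeded 5 x 10^11, as Lyske notes.
```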

AMI: Is there any other software out there similarly innovating in the field of AI-based music composition, and what marks DAACI out as different from the likes of AIVA and Amper Music?

RL: What we are excited about is that the world is opening up and the attitude towards AI and composition isn’t a terrifying prospect anymore. As individuals we’ve got around thirty years of experience at DAACI so there’s a real maturity to our approach. Getting back to that idea that DAACI is the core of the system, what we’re not doing is feeding it a load of scores and saying ‘alright, make me something that sounds kind of like that’. We’re not trying to extract from some deep neural network some kind of truth from the music. What we’re doing is saying ‘we’ve got the intent’ and we can craft a meta-composition.

It’s not a trivial thing. The majority of us are professional musicians and artists, even amongst the coders, and that’s been one of the unique aspects of us. We really think we are unique in our approach – the other approaches are only going to get you so far. I think, absolutely, we’re the only ones doing this.

AMI: How will DAACI be rolled out then – will it be a web platform, or an app?

RL: Ultimately the system is designed to benefit the composer and there’ll be a composer tool for them to use. But that’s only the start of a productivity tree that the benefits of DAACI will flow through. The commercial applications of this brain can go into many different products, just like the end result of a piece of music written by a composer. It’s the same with DAACI. It’s about creating a new framework.

AMI: The Innovate UK investment was a high-profile advocacy of DAACI – how competitive was that process of winning funding, and can you talk about how the investment will enable you to build on the company’s objectives?

RL: We are extremely thankful to Innovate UK for that. It’s absolutely brilliant. The recognition and the support has been fantastic and it’s great that they are recognising what we’re trying to do. It was a really rigorous process. It was like doing due diligence on an investment, it was a real deep dive into everything. I think there were 1,072 applications and only 71 were funded.

The main aim was to recognise game-changing, innovative and ambitious ideas that they think will significantly impact the UK economy for good. For us, having that investment has been extremely powerful. It’s allowing us to enhance our R&D side and develop more research. We’re massively thankful to them for that and their ongoing support.

AMI: Do you think that AI-led content and art generation is going to be a massive part of our lives across the next few decades, and are we just starting to see a tidal wave of AI-led applications? Particularly as innovative ideas like the Metaverse become more widespread?

RL: I do, and I think it’s an incredibly exciting time right now and an incredibly empowering time. DAACI is riding the crest of that wave, anticipating what the needs of the future will be. It’s about making music intelligent in its environment, and that’s essential for the future.

AMI: What would you say to those fearful of the perceived encroachment of AI into the music composer’s marketplace?

RL: If you were asking my brother that question he might not be as polite as me. My brother is the inventor of DAACI, and it’s been built on a lifetime of research. Essentially he’s a composer, I’m a composer, and we didn’t just wake up one day and decide to do this. It’s something that’s been a lifelong obsession. We genuinely believe that the landscape of how music is created and experienced is changing. If we can enhance and empower, then why wouldn’t you try to do that?

AMI: How do you see DAACI evolving further in the future, and what’s next for the company?

RL: I think it will be an integral part of composers’ workflows. Our CCO Ken often refers to a great quote from Chris Cooke (CMU Insights and Midem, 2008): “The history of the music industry is basically a story about how a sequence of new technologies respectively transformed the way music is made, performed, recorded, distributed and consumed.” Essentially there’s been a series of leaps over the last 100 years, and DAACI is the next phase in that evolution, particularly as the digital world starts opening up. Composers become meta-composers. The users can become composers. It can be democratised. Things don’t stop, they evolve.

For more information on DAACI, visit daaci.com

 

The Horrors’ Tom Furse on how AI will revolutionise music production


Founder member of The Horrors, synth polymath and passionate enthusiast for all things AI, Tom Furse spoke to us recently, predicting how machine learning will re-define art in all its forms – and admitting a growing hunger for innovation when it comes to music technology…

“Why go anywhere when you can go anywhere?” laughs Tom Furse, The Horrors’ inventive co-founder and creative journeyman, when asked about his recent swing away from live performing. Aside from his continuing role as The Horrors’ principal sonic architect, Tom’s individual exploits have brought two deep solo records and a burgeoning passion for AI.

Using his own system to manufacture dense visuals – like those in his recent video for HAAi’s Baby We’re Ascending, as well as his ‘Relics’ series of generative art pieces – Furse explained to us how he foresees similar mind-blowing AI innovations eventually re-drawing the music technology landscape. But first, we asked Tom about his recent departure from performing live with The Horrors…

AMI: Last year you announced you weren’t going to be touring with The Horrors anymore, in your Instagram post you mentioned that you’re more of a ‘creator’ than a performer. Had that been an issue for a while for you, the need to write not marrying with being on the road?

Tom Furse: Yeah, it absolutely had been an issue for quite some time. Just because it’s such a different environment when you’re on the road. It’s great for all kinds of reasons, but if what you really want to do is make stuff, there’s pitfalls. It’s just really hard to find a quiet spot. I’d spend a lot of time with headphones on in noisy environments. I wasn’t always happy on tour. When Covid hit it was a bit of a lightbulb moment.

AMI: But of course, you are still a member of The Horrors, and you’re working on album six?

TF: Yeah, that’s slowly happening, we’re chipping away!

AMI: Your own career beyond The Horrors has been pretty varied, and one of the things that has been interesting recently is your use of AI in your visual art and videos, not least the Baby We’re Ascending video you did with HAAi. AI seems like it’s quite a big area for you right now. How long have you been working with AI and what first attracted you to it?

TF: I’ve been using it for the last year. I just heard about it online, and I’d seen examples of [AI-generated visual art] and thought it was pretty cool. Then I heard an episode of the Interdependence podcast with Holly Herndon and Mat Dryhurst that really got me interested. I heard about an approach that married image synthesis with natural language input control. I really wanted to give it a go. So I did. It required the navigation of a virtual coding environment, it wasn’t like a nice easy user interface. It was a little bit tricky. But, I just started messing around with it and I haven’t stopped since.

After doing more or less the same thing with music for the past 15 years or so, this was like a much-needed breath of fresh air – an entirely new medium that was much less explored than music. I think music is thirsty for new technology and sounds, and new places to go. I don’t think we’ve had that for quite a long time.

I’ve always loved exploring the element of surprise in music, with generative approaches. But, that’s more an illusion of surprise. This is much more strange and psychedelic. It’s quite something.


AMI: Have you used generative and AI-based approaches in music before?

Tom Furse: No, not really because they don’t really exist yet. I went looking for it, and everything I found was pretty wanting to be honest. The processes I’ve used before have been based on logic and maths systems. You can set quite complex patterns, but it’s in reality a very simple computer, ones-and-zeroes approach. It’s not really intelligent but it is fun. It’s what Brian Eno’s been doing for decades, he’s done some of the most interesting work in that area. But we need new stuff – we can’t be Brian Eno all the time!
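For a flavour of the ‘logic and maths systems’ Furse describes, here is one classic example of the genre – a Euclidean rhythm generator, which literally outputs ones and zeroes. This is a generic illustration of that family of tools, not a system Furse has said he uses:

```python
def euclidean_rhythm(pulses: int, steps: int) -> list:
    """Spread `pulses` hits as evenly as possible across `steps` - a
    Bresenham-style take on the classic Euclidean rhythm idea."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)  # hit
        else:
            pattern.append(0)  # rest
    return pattern

print(euclidean_rhythm(3, 8))  # [0, 0, 1, 0, 0, 1, 0, 1] - a rotated tresillo
```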

AMI: Do you think that we’ve only seen the tip of the iceberg really with what AI can do, particularly in music production?

TF: Oh, God, yeah. It’s a hard thing to capture because I think historically what people have been trying to do is train AI on like MIDI files, but they don’t necessarily convey what is interesting or *good* about the songs. Or, the style of a song or a sound. It’s not terrible data but it’s quite raw data. It doesn’t really capture the essence of stuff, which is what the visual art-aimed AI approaches do.

With what Holly Herndon and Mat Dryhurst are up to, where they have captured the essence of Holly’s voice with Holly+ – there you can drag an audio file into it and a very good approximation of Holly’s voice will sing it. With the latest version you can’t tell the difference between her voice and the synthesised version. So, there we’re getting into some really interesting territory. The lines are going to get so blurred.

One of the best-selling vocal albums of 2026 or 2027 will probably be made without using a real vocalist. People will start making things like Chet Baker techno records. Whatever the maddest thing is that you can imagine, it will be possible. We’re seeing this already with image synthesis. People will be mashing up stuff. If you think about how postmodern culture is now, we’re all really primed for this.

I think that’s more how entertainment will go. Giving the type of AI technology that we’re currently seeing in visual art to musicians is going to be wild, beautiful, scary and psychedelic. I can’t wait. But, there’ll undoubtedly be a lot of pushback.

AMI: Then you’ve got the other side of the AI-paradigm; platforms like AIVA and Amper which can manufacture tailor-made soundtracks using AI, what are your thoughts on that side of things?

Tom Furse: I mean, I’ve heard them and I think it’s almost like the equivalent of Dall-E. When I first saw that I suspected that a certain kind of illustrator might not have as much work, and that might also be the case here. I feel that I have a slightly savage opinion on it. My heart says, maybe everyone needs to try a little harder – why do people settle for this kind of mediocrity? I don’t enjoy it when I’m watching a film and there’s just a really vanilla score there, and I didn’t enjoy making that kind of music when I’ve done library projects before. I just feel like we should be rewarding bolder experiments.

This is perhaps a wake-up call for everyone. If you’re worried, then you’re really saying you’re only as much of a skilled craftsman as the AI is – but then the art is lost somewhere in there.

Having done a few films, I do understand that it is a very cutthroat industry, and music is such a tiny consideration when people are budgeting. It’s really undervalued, so it doesn’t surprise me that people are seizing on the chance to undervalue it more. I think a better AI solution would be a system that enabled their samples to sound more realistically like a certain style…

 


 

AMI: Well, there are quite a few string sample libraries out there that are pretty indiscernible from the real thing…

TF: They’re great, and I do use string sample libraries, but the lengths you have to go to to make them sound natural are often quite extreme. I’ve been recording some live strings recently and there really is such a huge difference. It sounds very convincing, but when you get the real articulation and expression from a real player, that hasn’t been beaten yet.

I think you have a lot more scope to do that if you’re synthesising out of nothing – rather than trying to construct recorded samples to play in this natural way. But that’s not there yet. I think that side of the industry will be able to realise a lot more, with a lot less. It’s going to be pretty wild.

AMI: Do you have a similar stance on synths, are you averse to soft synths and prefer the real deal?

Tom Furse: No, I’m synth-neutral. At first you could kind of tell there was a difference, but it’s harder to discern now. I use Arturia’s stuff quite a lot. There’s loads of great Max for Live developers doing interesting synths, there’s so much good stuff happening. I do think we’ve reached a little bit of a plateau in terms of synth methods and sounds.

When I first started buying synths around 2007/2008, there wasn’t really anyone making any good new analogue synths, but now you can get a Behringer TD-3 for under £100. The accessibility is there now, we’ve conquered that particular mountain but where can we go from here?

AMI: How many synths are in your studio right now, or is that a silly question?

TF: Like ten I think, I definitely used to have a lot more, but I’m slimming down a little bit. I’ve got an Arturia Polybrute, and that’s a really amazing workhorse. I realised there’s lots of things I didn’t need anymore. I’ve kind of gone for a more efficient set-up.

AMI: Your second solo album, Ecstatic Meditations, was quite a blissful record. Do you intend to continue making music in that vein?

TF: At the moment, it’s more a question of time. I’m just quite busy with a lot of projects that are more visually based. I am doing a bit of music but it’s really nice to have a break for a little bit. I’ve got a piano downstairs and I’ll sit and play something, realise I’m on to something but then might not chase it – but I know that the ideas are still there. I used to chase it relentlessly. When your identity is wrapped up in this niche of creativity, it’s good to have a rest and explore something else. When I do come back to make music, it feels more focused. Less like a job.

I also feel like I’m preparing myself for what will be an exciting new wave of music technology. I feel like this investment of time is a very positive thing right now.

AMI: What’s next on the agenda for you Tom?

Tom Furse: Well, I’m working on another video for HAAi right now, then I’m finishing up some artwork for Temples. I’ve got a record of mine I’ve mixed with Ghost Culture that’s sounding cool so I’m going to get to that when I have a bit more time. Yeah, my current situation is that I wake up in the morning and I think ‘Right, what am I going to make today?’ Things are pretty open.

Follow Tom’s artistic and musical adventures over at tomfurse.com

Ten Ways The 2010s Changed Music Forever


Building on both the rise of the internet and the boom in home studios throughout the 2000s, the 2010s ushered in numerous leaps and innovations that rippled across the whole spectrum of music. From slick software and smart noise-reducing speakers to the guiding hand of artificial intelligence and the mainstream explosion of streaming platforms, the decade upturned the aural applecart. We explain how the 2010s changed music – and set us on an unchangeable course…

Here at Audio Media International HQ, we like to keep our eyes fixed on the very latest jaw-dropping music technology of today – and what might be lurking around the corner – but it always helps to reflect on the extent to which the music technology landscape has altered. A scant three decades ago, home music production required the skilled connecting of multiple pieces of hardware, live sound was frequently marred by inescapable atmospheric issues, the popular music chart was the central pillar of what was hot, and ‘artificial intelligence’ was a concept confined mainly to the realms of science fiction.

As we near the middle of space year 2022, we find an almost unrecognisable playing field. Though the 2000s saw most of us dive into the World Wide Web, and we watched home computers grow from suspicious intruders lurking in the corners of spare rooms into our indispensable modern companions, we’d argue it was the 2010s which saw the most significant swings take place for musicians.

Across hardware and software, online and offline, live and in-studio and, perhaps most importantly, in the minds of music makers, technological and cultural changes made us re-think both what we could achieve – and why we were doing it.

To illustrate how the 2010s changed music, we’ve cast our mind back across the last decade, and pinpointed ten of the most momentous sea changes that have set us on an unchangeable course.

10: High speed remote collaboration allowed us to make music with anyone, anywhere
As the 2010s gathered pace, e-mail, messenger and rudimentary social media stopped being the only ways to communicate. Faster speeds meant the ability to see our friends and colleagues via platforms like Skype. The type of lag-free video-calling we’d long been promised by Star Trek suddenly became reality. While video-calling presented real-time music collaboration opportunities via screen-sharing, software such as the DAW-syncing Splice, the free online music-making hub BandLab and in-DAW additions such as Cubase’s session-sharing plugin VST Transit were all indicative of a new trend for modern music makers – the encouragement of long-distance music collaboration.


 

Now, with increasingly slick tech such as Audiomovers’ Listento making the process of sending lossless multichannel audio globally super smooth, this decade will undoubtedly see more great work made together by musicians who’ve possibly never even met.

9: The vinyl revival guaranteed that the legacy format will never die
Written off as a long-dead relic of a bygone age, and a symbolic totem of music’s mythological past, vinyl enjoyed a glorious comeback in the early-to-mid 2010s that astounded technologists and those who believed that the web’s de-physicalising of music would sound the death knell of the album. Perhaps in part due to the ease with which anybody could devour an artist’s back catalogue via streaming, the vinyl comeback was enticing for those wanting to underscore their commitment to their favourite artists – and own their cherished records as physical objects.


 

By 2014, vinyl sales had surpassed 1 million for the first time since 1996, and as we move into the 2020s the surge of interest shows little sign of abating, with 2021 marking the 14th consecutive year of growth since 2007. While manufacturing threats continue to dog the format (as do the issues pertaining to the small number of manufacturing plants), the original music listening medium remains a glorious way to enjoy music, and may outlast us all.

8: The refinement of festival and outdoor sound means compromise-free experience – wherever you stand
All too frequently plagued by inconsistent sound, festival-covering loudspeakers have long needed the input of skilled live sound engineers to make them work effectively. Even so, a surge in noise abatement orders and the difficulty of covering the span of the stage area at festivals such as Glastonbury, Coachella and the like have meant that if you’re standing in the wrong place, you’ll get a duff festival experience. Not so since the 2010s. As we’ve recently highlighted, Martin Audio’s MLA multicellular loudspeaker arrays have proved to be one such solution: controlled by some adept software, they cleverly direct consistently impactful audio to the audience’s ears while preventing any spillage from polluting the surrounding area. There’s also the impressive K series from L-Acoustics, which uses 3D modelling to deliver the most finely-tuned audio presentation for the event in question. To put it simply, the days of tinny, quiet PA sound are done.

7: More people became enamoured by compression-free high-res audio
With the adoption of streaming, and the increased storage space on our smartphones, people en masse stopped regarding the humble CD – and its pristine sound quality – as the be-all and end-all of music listening through the 2010s. But digital audio came at a cost, and in the early days of the internet the speeds required to transmit the CD’s supreme 1,411kbps quality lag-free just weren’t there. Today, that’s a very different story, with the likes of Qobuz, TIDAL and Amazon Music HD granting us easy access to full-frequency, full-fat audio.
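That 1,411kbps figure isn’t arbitrary – it falls straight out of the CD’s specification:

```python
# Where the CD's 1,411kbps bitrate comes from: Red Book audio is 44.1kHz,
# 16-bit, stereo.
sample_rate = 44_100  # samples per second
bit_depth = 16        # bits per sample
channels = 2          # stereo

print(sample_rate * bit_depth * channels / 1000)  # 1411.2 kbps
```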

 


In the 2010s, music changed forever. Lossless audio stopped being the domain of the finickiest of audiophiles and started being appreciated by casual music listeners. Whether you can truly hear much of a difference between a smartly compressed track and the same song in high-res is down to the sharpness of your ears. But the quality benchmark has certainly been raised.

6: The quality of consumer grade music listening technology increased
With the flourishing of crystal clear music came the need for even more sophisticated home listening devices. The leap forward in consumer-grade tech has been tremendous over the 2010s, with venerable Bluetooth transmission now giving way to music control over Wi-Fi, and a much greater ease with which you can set up speakers in multiple rooms over a closed network. Smart room speakers – such as those built by Sonos, Bose, Apple or Amazon – pull in audio straight from built-in versions of the streaming platforms you’re subscribed to. Couple that with voice-command and you really can’t help but be reminded of the rapid rate at which we’ve arrived at a near-magical future. Just one of the ways in which the 2010s changed music forever – and we’ve not even mentioned headphones yet…

5: We learned that our grassroots music venues need our support
Though we’ve recently covered the devastating impact of the Covid pandemic on grassroots venues, the effect of 2020’s global public health crisis only accelerated the issues that were already causing much strife in the live sector. Throughout the preceding decade, concerns such as increased business rates, upkeep costs, an increase in property development leading to a flurry of noise complaints and demanding rents all piled the pressure on to a core fault line – that 93% of venue operators do not actually own their venues, resulting in widespread unwillingness from venue owners to work to resolve the ever-sprawling web of issues. The Music Venue Trust was established in 2014 to help the grassroots sector fight back, and is now pushing to bring swathes of the UK’s most cherished venues under collective ownership. “If we can resolve the issue of ownership, it would strengthen every other aspect of their resilience to these challenges,” the MVT’s CEO Mark Davyd told us recently.

4: Artificial Intelligence began to help us write, produce and master our music
Infused throughout our software as the 2010s marched on, artificial intelligence (or machine learning algorithms) began to prove its mettle. Going well beyond what even the most seasoned producers were capable of, smart software such as iZotope’s RX 9, Zynaptiq’s Adaptiverb and Oeksound’s Soothe 2 sped up previously long-winded audio surgery, with millions of on-the-fly calculations determining the best course of action for our unique mixes. The growth of machine learning in the 2010s coincided with the increasing desire that many home producers felt to be entirely self-sufficient.


Handing the task of mastering to well-trained, ever-evolving digital ears, as opposed to outsourcing the process to a human mastering engineer, is one area where AI has bloomed – just scan the number of online mastering platforms ready and willing to tackle the job. It’s likely that artificial intelligence will continue to clear more and more previously inaccessible routes. But while it will undoubtedly empower many, it leaves some on edge, as we investigated recently.

3: Artificial Intelligence began making our music for us
While we’re on the topic of artificial intelligence, there’s another major way in which it has shaken the foundations of the music world across the last ten years – and that’s its growing role as a creator of music in its own right. Now increasingly relied on by creatives working in film, television or online as a way of coming up with instant original cues to fit their projects, platforms such as Aiva, Amper and Loudly AI Studio each provide quick ways to generate professional-sounding, AI-crafted cuts. Taking simple genre or mood-based instructions, and using neural networks to scan huge libraries of tracks, recognise similarities and assemble their own takes, these AI platforms have understandably been controversial. Jobbing soundtrack composers have undoubtedly started to feel like this type of encroachment might put them out of work. While hopefully this won’t be the case, it’s certain that we’re going to be seeing a great deal more computer-brain-built music through the 21st century.

2: The increase in computing power granted slicker digital audio workstations and plugins
One of the biggest ways in which the 2010s changed music forever has been the overall advancement of the quality and capability of music technology. DAWs like Logic, Pro Tools, Cubase and Ableton Live flowered through the last decade, becoming ‘must-haves’ even for those just casually interested in music-making. Alongside our slicker DAWs came mountains of colourful plugins, interfaces, hardware, software (and in-between) synths, a slew of virtual instruments, virtual mixing and mastering assistants and more.


This has made the last decade the easiest in human history in which to experiment with any sounds you want and build your own release-quality music. But while the complete swing to computer-centric music production enabled people to craft songs without studio costs (or needing to get signed), another shift meant that it was harder than ever to get what you’d made heard…

1: The take-up of streaming platforms fundamentally changed how listeners think about music – and how we make our music
When it comes to the intrinsic structure of the music world, the most important development across the last ten years has undoubtedly been the mainstream move to streaming services as the delivery portal of choice for today’s newest songs. While subscription-based access grants the consumer an incredible offering of pretty much anything and everything, available instantly, it also leaves artists, songwriters and many labels struggling to recoup – as well as to stand out in a widening ocean of noise. It’s a complex problem we’ve covered at length in recent features. Yet, aside from the financial instability of the streaming dynamic, it’s also shifted the way artists and producers *approach* music-making.


The idea of singles and even albums as a concept seems to mean less in a world wherein listeners are free to re-sequence at will. It’s also made it much harder to pinpoint where the mainstream currently is – easy enough to determine in previous, chart-oriented, decades. In the eclectic worlds of our playlists, genres old and new sit together. So, is streaming a good thing? For listeners, undoubtedly, though it’s undeniable that our rapid take-up of streaming has left things in a dizzy state of flux. The 2010s changed music, forever, but what happens next, we’ll see…

Artificial Intelligence in Music Production – Friend or Foe?


During the 2010s we witnessed the rise of smart and fast machine learning algorithms, and their gradual integration into music production software. Now the artificial brains of countless plugins, platforms, virtual mastering suites – and even composers – are readily at the wheel as the cognisant drivers of today’s creative tools. But, has the balance between AI and human skill been tipped too far?

“Once the machine thinking method had started, it would not take long to outstrip our feeble powers. They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.” So ran the bleak forecast of the father of artificial intelligence, computer genius Alan Turing. While, thankfully, we’re not quite facing such a nightmarish end-point just yet, Turing’s notion of a machine-driven cyber-brain, continually sharpening its senses via constant dialogue and refinement, is a concept that lies at the heart of the commercial application of artificial intelligence.

In our industry, we can see this perhaps more clearly than in any other. Multitudes of AI-based software tools are now available that excel in tasks (such as frequency editing, mix separation, audio restoration and mastering) that would take even highly skilled human operators much, much longer to undertake. Constant refinement and adaptive learning routines give AI-based software clear pathways to improvement. While the term ‘artificial intelligence’ is typically ascribed to any software that relies on algorithms to fulfil its criteria, there’s actually quite a wide spectrum of definitions. There are those that simply trigger a series of pre-determined actions that their creators have carefully crafted, and then there are those HAL-like virtual geniuses (iZotope RX 9, Zynaptiq Adaptiverb, for example) that can inspect a waveform, precisely diagnose what needs to be done to bring the most clarity to it, and take immediate action.

MECHANICAL MELODIES

While these types of applications are perhaps more palatable, the increase in virtual composers of music has unsettled some. Initial experimentation with human-free music creation began as far back as the late 1950s, when university professors Lejaren Hiller and Leonard Isaacson used an early computer to program the Illiac Suite (String Quartet No.4). This first foray into algorithmically-driven music creation nudged open a door that ensuing pioneers widened, leading eventually to 1997’s ‘Experiments in Musical Intelligence’ program, which was able to rival human composers by generating a piece of music that convincingly replicated the style of Bach.

While these toes in the water built the foundation for research, the last twenty years have witnessed an exponential explosion of compositional AI development, in conjunction with the ever-rising power of computers. Now the options are vast. There’s Aiva – a classical and symphonic virtual music composer, which uses neural networks to scan huge libraries of classical music and replicate the commonalities it encounters. There’s Amper, which is able to conjure a near-infinite number of ready-to-go soundtracks for video games and TV. Then there’s Loudly AI Studio, able to generate a range of tracks based on modern genres instantaneously.


While some of the results can be utterly superb, is the growing range of AI-based composers leading us step-by-step into a world where we’re surrounded by facsimiles of human-produced art? Aiva’s CEO, Pierre Barreau, told The Naked Scientists: “Even if AI does objectively get better at composing music than humans, I think one crucial element that humans bring to the table is meaning in what they do. And an AI could come up with a new style of music, totally crazy style of music, but if there’s no creative intention that can be explained, I think it’s very hard for an audience to really connect.”

AI composition might be an easy – and substance-free – solution for those needing the artifice of a pro-sounding soundtrack without forking out the cash. But it’s understandable why so many jobbing professionals feel like their standing in the market is being devalued. It’s a debate that will undoubtedly continue.

SOLVING PROBLEMS WITH ARTIFICIAL INTELLIGENCE

Whatever their stance on the human vs machine debate within the compositional domain, many music producers have happily integrated AI and machine learning plugins into their workflows, aimed at speeding up and handling previously time-consuming processes. Audio repair is one field in which AI has delivered majorly impressive results, with world-leading companies such as iZotope proudly making machine learning a front-and-centre USP of a product line that includes the audio post-production package RX 9.

Melissa Misicka, Director of Brand Marketing at iZotope, explains to us that using artificial intelligence to this end was always a company ambition: “One of our goals as a company is to find ways of eliminating more time-consuming audio production tasks for our users so they can instead focus on their creative vision. Introducing assistive tech – that can intelligently analyse your audio and provide recommended starting points – felt like a perfect way to do that.”


It’s not just about making time-consuming processes quicker, though. Many see AI as a method of achieving tasks that humans are largely incapable of doing. iZotope explains how this idea has been put to use. “One example is source separation for speech cleanup,” Misicka tells us. “Our modules like Dialogue Isolate or De-rustle rely on it to attenuate unwanted sounds like footsteps, bird chirping, or the rustle of a mic hidden in clothes. Manual repair of these noises would be very laborious, because the noises change in time and overlap with speech.”

“Another example is smart synthesis of replacement sounds,” Melissa continues. “When speech is coming from a telephone call, its frequency spectrum gets limited to 4 kHz, which results in a characteristic muffled sound. RX’s Spectral Recovery module uses machine learning to recreate the missing upper frequency band with realistic synthesised content to enhance the quality of speech. Manual ways for high-frequency synthesis would include tools like an exciter, but the quality and plausibility of the synthesised content would be nowhere near the results of machine learning.”

While the arguments over artificial intelligence in the compositional domain continue to rage, few would object to the harnessing of machine learning to fulfil tasks that are largely outside most of our aural and technical capabilities. Would they?
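As a technical aside, the 4 kHz ceiling Misicka mentions follows from the narrow sample rates used in telephony: a signal sampled at 8kHz can only represent frequencies up to half that, its Nyquist limit. The sketch below – our own illustration, not iZotope’s code – simulates the band-limiting that a tool like Spectral Recovery then has to undo:

```python
import numpy as np

# Simulate telephone-style band-limiting (our illustration, not iZotope code):
# zero out everything above 4 kHz and the result goes characteristically muffled.
sr = 44_100
speech = np.random.randn(sr)                # stand-in for one second of speech

spectrum = np.fft.rfft(speech)
freqs = np.fft.rfftfreq(len(speech), d=1 / sr)
spectrum[freqs > 4000] = 0                  # brick-wall cut at the 4 kHz ceiling
muffled = np.fft.irfft(spectrum, n=len(speech))

# Plausibly re-synthesising the discarded band is the hard, ML-shaped part.
```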

DEEP LISTENING

One of the most beneficial applications of AI for home musicians has been the ready availability of algorithm-driven mastering services. Take LANDR, for example: this subscription service’s shrewd software leans on a mine of intelligence cribbed from 20 million mastered tracks, using that information to calculate how it applies tailored frequency-boosting and aural gloss to your song. “When LANDR first launched in 2014, it was a first-of-its-kind solution for cloud-based AI mastering,” Patrick Bourget, LANDR’s Product Director, tells us. “In 2016 the landscape began to see similar but far less refined alternatives emerging in the marketplace.”
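To make the ‘analyse, then adapt’ idea concrete, here is a deliberately simple sketch of the shape of automated mastering – ours, not LANDR’s proprietary engine – measuring a track’s loudness and computing the gain needed to hit a target level:

```python
import numpy as np

# A deliberately simple sketch of the *shape* of automated mastering - analyse,
# then adapt - not LANDR's engine. We measure a track's RMS level and compute
# the make-up gain needed to reach a target loudness.
def makeup_gain_db(samples: np.ndarray, target_dbfs: float = -14.0) -> float:
    rms = np.sqrt(np.mean(samples ** 2))
    current_dbfs = 20 * np.log10(rms + 1e-12)  # avoid log(0) on silence
    return target_dbfs - current_dbfs

track = 0.1 * np.random.randn(44_100)          # stand-in for one second of audio
gain = makeup_gain_db(track)                   # ~ +6 dB for this quiet signal
mastered = track * 10 ** (gain / 20)           # apply the computed gain
```

A real mastering chain would of course go much further, adapting EQ, compression and limiting to the material rather than just level.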


Bourget observes that many companies point to AI without justification, in contrast to LANDR’s always-evolving processes: “We see many companies leaning on the ‘AI’ buzzword, but they rarely deliver on their promise of truly intelligent production tools. LANDR maintains that our AI remains at the cutting edge of the AI field with one-of-a-kind results, every time – our engine adapts to a track’s unique sonic qualities when mastering. We never use presets to make cookie-cutter masters.”

“We’ve always felt that providing too many presets or options to users diminishes the value and trust we’ve built over the years with creators.” Patrick continues, “Our superior-quality masters stem from AI informed by millions of mastered tracks and tuning provided by the golden ears of industry giants.”

ARTIFICIAL INTELLIGENCE VS HUMANITY

With the everyday prevalence of platforms like LANDR, Patrick Bourget seems like a good person to ask how he sees this human/machine dynamic evolving, specifically in the mastering domain: “Given the accelerating pace of creation and the often tight budgets of music producers around the globe, we feel that there will always be room for both AI-mastering and mastering engineers,” Bourget explains. “We’ve heard from countless professionals that LANDR provides an affordable and elegant solution for quality masters at a fraction of the cost of traditional mastering.”


But what about those that fear their livelihoods could come under threat from AI’s continual advances? Bourget takes the middle ground: “We don’t see AI-mastering as a question of OR, but rather as an AND that assists creators when needed. Our automatic mastering process and simple workflow gives musicians the power to complete a master in minutes. An example being that of a mixing engineer quickly delivering a mix in progress with the polish an artist expects to hear from a mastered track. It’s a great way to get quick feedback.”

iZotope echoes this fundamental point, that the most successful applications of artificial intelligence to date are those that help creatives and professionals meet their objectives, and not those that seek to supplant them. “We often imagine our assistive tech as, quite literally, a studio assistant who can take that first pass at repairs or a mix for you while you go get a coffee.” Melissa explains, “We’d reinforce that the mission of iZotope’s assistive tools is not to replace professional expertise, but to coach those who are still learning by suggesting next steps, and to assist those who are more experienced by getting them to a starting point more quickly.”

While it’s unarguable that artificial intelligence will continue to soak into our daily lives on many levels, it’s plainly apparent that rather than shrinking before its fathomless potential, musicians and producers have more to gain than to lose from its ever-developing abilities.

Edith Bowman to host podcast on the future of the industry, from festivals to production


Broadcaster Edith Bowman is hosting a new 10-part podcast series in which pioneers in music discuss a range of topics facing the industry. Play Next, sponsored by BMW, will also provide a platform for new young artists.

Industry topics under discussion include the fate of festivals in a post COVID‑19 era, the role of technology in music production and where it goes next, and the power of music in driving social change.

The first episode drops August 12. In it, Bowman is joined by Gilles Peterson, DJ and founder of the Worldwide Festival in France, which should have celebrated its 15th edition in 2020. 

Episode two features Bowman in conversation with Hans Zimmer. With artificial intelligence tools gaining traction, the legendary film music composer will share his views on what this means for music production, and whether machines can ever be as creative as humans.

“The music industry is an ever evolving machine and I’m looking forward to speaking to a fascinating collection of people at the top of their game to find out where we go from here,” says Bowman. “But I’m particularly thrilled to be celebrating new music. It’s been exhilarating to explore and discover some wonderful bands and artists who are doing really fantastic things.”

“Music plays such an important role in our lives, from attending live events to simply listening in your car, it’s hard to imagine where we would be without it,” says Michelle Roberts, Marketing Director at BMW UK. “This podcast talks to innovators in this world, and also shines a light on the next generation of artists. We felt this was particularly important in 2020, as many of these artists haven’t had a platform this year. The Play Next partnership is an extension of BMW’s support for music. Like all other music fans, we might have had to cancel our festival plans this year, but we’re proud to bring this podcast to the listener.”

The BMW Play Next podcast will be available on all major streaming platforms.

Dubler Studio Kit to let users transform their voice into ‘the ultimate MIDI controller’


London-based creative music technology start-up Vochlea Music has launched the Dubler Studio Kit, a highly innovative live vocal recognition MIDI controller.

Vochlea Music are graduates of the Abbey Road Studios music technology incubator and winners of the SXSW 2018 Pitch Competition.

The kit offers a method for musicians to translate their musical ideas into reality using just their voice. Capturing musical ideas using traditional instrumentation and MIDI inputs can be challenging and requires know-how, even for the most accomplished musician. Vochlea Music’s vocal recognition AI technology unlocks the power of the voice, allowing musicians to create music and control sounds quickly and intuitively.

The Dubler Studio Kit allows artists to hum a synth pattern, beatbox to trigger a virtual drumkit, or manipulate effects and filters with a “hmmm”, “laaaa” or “oohhh” sound — all in real-time straight into a DAW. Pre-launch beta testers of the new technology include Mercury Prize Nominated MC and producer Novelist, alongside a number of other musicians and producers.

The kit comes as two parts. First is the Dubler software, a virtual MIDI instrument for Mac and PC that’s compatible with any DAW. Second is the Dubler microphone – a custom low-latency USB mic – tuned for the Dubler software.

“The Dubler Studio Kit unlocks musical expression, fuels creativity and is generally a lot of fun. It speeds up the traditional music creation workflow by allowing the user to control and manipulate MIDI outputs through audio inputs,” said George Wright, Vochlea Music CEO and founder. “Essentially meaning you can lay down melodies, drum loops, effects tracks… whatever you want, directly from voice to DAW.”

Using all of the timbral qualities of the voice, Dubler Studio Kit allows musicians to trigger samples, control synths, manipulate filters and effects, track pitch, pitch-bend and control envelopes, velocity and MIDI mapping values simultaneously, based on the way they make their unique sounds.
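Vochlea hasn’t published its pipeline, but any voice-to-MIDI system must, somewhere, map a detected vocal pitch onto a MIDI note number, and that mapping is standard:

```python
import math

# The standard mapping from a detected vocal pitch (in Hz) to a MIDI note
# number - the conversion any voice-to-MIDI system performs somewhere,
# though Vochlea's actual pipeline is not public.
def hz_to_midi(freq: float) -> int:
    return round(69 + 12 * math.log2(freq / 440.0))  # 69 = A4 = 440 Hz

print(hz_to_midi(261.63))  # 60 -> middle C
```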

The Dubler Studio Kit launched as a Kickstarter project this week and is looking to raise £40,000 over 35 days. During the campaign, backers will have the opportunity to pledge to be amongst the first Dubler Studio Kit users. Kickstarter backers will receive fulfilment priority when the kits are delivered in mid-2019.

https://www.kickstarter.com/projects/vochlea/2044359653

Abbey Road welcomes computer ‘hackers’ to create “music for 2030”


Over 100 computer ‘hackers’ took over Abbey Road Studios this weekend in an effort to create a new generation of robot composers.

The brief from partners Microsoft and software company Miquido was: “How will artists create music in 2030?”

The event in Abbey Road’s Studio One saw software developers, designers and music producers working in teams to train artificial intelligence to create a song, using emotions to trigger different sounds and samples.

A number of innovative projects were showcased, including a Vochlea microphone that changes voice sounds into an instrument and a drum machine that can turn any surface into a percussion instrument.

Industry executives judged the best use of AI in creating a “sound art installation”, and winners were announced for each of the event’s tech sponsors: @Microsoft: ‘Rapple’; @Miquido: ‘HRMNI’; @LdnFldsBrewery: ‘XRSynth’; @Hackoustic: ‘SoundSoup’; @Cloudinary: ‘Crater’; @QMUL: ‘Xamplr’.

Studio One, with its 1905 Steinway piano, two Hammond organs and more than 800 microphones, has been used to score soundtracks for films including Star Wars, Raiders Of The Lost Ark, Harry Potter and The Lord Of The Rings.

“In the same room that witnessed the inception of the recording industry, we are embracing the next shift in music creation — exploring the influence of the newest technologies and high-performance computing on our creative tools,” said Dom Dronska, Abbey Road’s head of digital. “For the first time ever, we are bringing together the brightest technologists and music producers and creating a unique inspirational atmosphere where beautiful accidents can happen.”

AI and its impact on the music hardware business


Advancements in AI, AR and VR are making a big impact on the music and audio hardware businesses. The development of new user experiences with AI and related technologies in particular is going to drive the next wave of innovation. Here, Pete Downton (deputy CEO) and Manan Vohra (operations director) from digital music solutions company 7Digital tell AMI about the current – and future – impact of AI.

Manan Vohra: There’s no doubt about it – hardware is changing. Over time, there has been a real convergence of software and devices. Amazon’s Alexa, for example, is the product of years of research in natural language processing, speech recognition, machine learning, and microphone and speaker enhancements. The device is the Amazon Echo you see, but it is really those software and hardware advancements coming together that deliver your contextualised experience.

Pete Downton: They’re great consumer products, but we’re just at the stage of using a first-generation technology. The quality of the voice interface and its ability to understand what we’re asking of it remain limited. You need to know exactly what you want and how to manipulate the voice assistant in order to get the desired response. That’s going to develop over time. In fact, technology already exists that is much better than Alexa, but Amazon has the market dominance.

Vohra: It’s true: smart speakers right now are dealing with simplified tasks and we do see fundamental user experience problems. But considering the nascent stage of these products, there is a lot more to be done with the underlying software (and all the machine learning algorithms and deep learning processes that entails), and the growing availability of user behaviour data will help improve the end-user experience.

Downton: We’re going to see more companies coming into the smart speaker category, trying to take advantage of the mainstream audience that these products have opened up. They want to reach the kind of people that are used to spending their time listening to the radio rather than just targeting the seasoned music streaming aficionados, who are only a small portion of music listeners out there.

Vohra: And the term ‘smart speakers’ is a somewhat misleading label considering they are more input than output devices. In the case of Google Home, Amazon Echo and Apple HomePod, these devices listen to our music needs (“Play me some jazz”), process those commands against vast amounts of music metadata, weigh the context using machine learning algorithms, and then out comes a track or playlist to provide us with a lean-back experience. They create a moment of serendipity for users who don’t know what they want.
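As a rough illustration of that lean-back pipeline, the hypothetical sketch below reduces a spoken request to an intent, matches it against track metadata and returns a playlist; the catalogue and every function name are invented for this example, and a production system would use far richer speech, NLP and recommendation models at each step.

```python
import random
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist: str
    genre: str

# Invented stand-in for the "vast amounts of music metadata".
CATALOGUE = [
    Track("Blue in Green", "Miles Davis", "jazz"),
    Track("Take Five", "The Dave Brubeck Quartet", "jazz"),
    Track("Milestones", "Miles Davis", "jazz"),
]

def parse_intent(utterance: str) -> dict:
    """Crude placeholder for speech recognition and NLP: pull a genre
    out of a request like 'Play me some jazz'."""
    words = utterance.lower().split()
    genre = next((w for w in words if w in {"jazz", "rock", "techno"}), None)
    return {"action": "play", "genre": genre}

def resolve_playlist(intent: dict, n: int = 2) -> list[Track]:
    """Match the intent against catalogue metadata, sampling rather
    than ranking to supply the 'moment of serendipity'."""
    matches = [t for t in CATALOGUE if t.genre == intent["genre"]]
    return random.sample(matches, min(n, len(matches)))

for track in resolve_playlist(parse_intent("Play me some jazz")):
    print(f"{track.artist} - {track.title}")
```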

Downton: Where do you think AI is going to lead the music hardware business? In the immediate future, AI means an increase in the sales of smart speakers (we’ve already seen that happen), and will probably cause a continued shift towards hardware that has voice recognition functionality and less reliance on displays and touch interfaces. That’s only likely to increase as consumers become more familiar with interacting with music in this way, and as things like improved natural language processing and better availability of metadata make music discovery easier.

Vohra: Right, and ultimately, technology is going to enable a future that is less about carry-on devices and more about wearable and shareable devices. Looking ahead 50-100 years, you’ll be able to take your unique experience anywhere with you without actually owning the device. Logging into your own account through fingerprint recognition on any device means there will be no need to carry a phone everywhere with you. In that world, the idea of ownership of any kind of device, or even a car, becomes an old-fashioned idea.

Downton: That vision of the future isn’t as far off as it may seem! Having worked at record labels for decades, my concern is that we’ve seen the music industry be complacent in the face of new technologies before. There’s an opportunity here to recognise the value of AI and immersive experiences in music, but it could slip through our fingers. We need to collaborate and grow through connections with other industries (like consumer technology, automotive and others) before the world moves on to solve other problems.

Vohra: Obviously, I’m really interested in how we can use new technologies, but it must always be rooted in what is useful and makes a real difference in the lives of consumers. Technology should be seamless and frictionless. As much as the industry promotes the idea of ambient interfaces powered by AI, there is still a need for hardware that uses tactile interfaces (not least for reasons of accessibility), and many users will need time to adjust to emerging voice-enabled search and discovery models.

Downton: Absolutely. We’re starting to see that there’s a significant market out there for this new generation of hardware – one that’s been created by music streaming.

Audionamix Releases IDC: Instant Dialogue Cleaner https://audiomediainternational.com/audionamix-releases-idc-instant-dialogue-cleaner/ Wed, 08 Aug 2018 08:14:30 +0000

Audionamix has released the IDC: Instant Dialogue Cleaner plug-in, a real-time, cloudless solution that uses deep neural network (DNN) artificial intelligence to automatically clean up speech. 

IDC automatically detects and separates speech regardless of the surrounding content, whether that is noise from wind, birds or insects, interference from cars and planes, or distant, roomy recordings.

"IDC is unique because unlike traditional denoisers that learn and remove noise, it works by separating and preserving speech, regardless of the interference," said Maciej Zielinski, CEO of Audionamix. 

"This plug-in offers immediate dialogue clean up with the turn of a knob and addresses common audio issues such as complex variable noise interference including weather, traffic noise, music and room ambience."

AI platform LANDR offers royalty-free sample service for music producers https://audiomediainternational.com/ai-platform-landr-offers-royalty-free-sample-service-for-music-producers/ Mon, 14 May 2018 08:55:33 +0000

AI platform LANDR has launched a cross-genre service built to provide royalty-free samples to music producers. 

The platform for music creators partnered with top talent including Dirty Projectors and Blue Hawaii to create Samples, a highly curated set of sample packs exclusively for the LANDR community. 

Having pioneered AI mastering and artist-friendly distribution, the four-year-old company continues to release exclusive creative resources as it works towards becoming a one-stop shop for modern producers and musicians.

To thank its creator community of 1.7 million artists, LANDR is giving users free access to this library, which will continue to expand with upcoming releases. In addition to the regular release of new packs, LANDR is set to accept sample submissions from its user community in the future.

Alongside partnerships with further professional talent, new sample content will be curated from top names in the LANDR community, helping emerging producers establish new channels of visibility.

“As a musician, you often find yourself stuck creatively. With Samples, you can spark new ideas for tracks by playing with sounds from outside your main genre,” said LANDR’s creative director and musician Rory Seydel. 

“We are giving back to the musicians that have helped make LANDR a success, and providing new possibilities for them to create great music in the future.”

LANDR now plans to provide support services for musicians creating samples, including educational resources on best practices for creating sample packs, selling loops, one-shots and other sample-based content.

“Samples can be a great source of inspiration when you’re not sure where to start or what else to add. I hope the sounds I’ve contributed can give other musicians a spark of creativity when they need it,” concluded Berlin-based techno producer Marc Houle.

https://samples.landr.com
