Samuel Floyd Posted March 5 Checking out Adobe Podcast for the first time. What voodoo is this? Has anyone incorporated running every file through this in their post workflow? I'd love to hear about your experience with it. Is there any point in turning off the AC during production anymore, with these AI dark arts?
Jason Nicholas Posted March 5 I've not used Adobe Podcast, but we've applied dxRevive Pro extensively on The Apple and Biscuit Show over the last five episodes or so, and it's…impressive. We now record the interviews via Cleanfeed, which can give quite a high-quality discrete recording for each participant, though of course it's dependent on the mic and environment each person is in. We've had a few interviews with people in bad acoustics on a laptop mic that cleaned up remarkably well with dxRevive. The thing that isn't quite there yet for dialogue editing is the ability to more precisely match another recording's characteristics when repairing the audio. Perhaps that will come through some manner of convolution ingest, or the ability for the software to 'listen' to a sample of clean audio from the person speaking and take that into account when doing the repair. But as it stands, we have some amazing tools at hand, especially for podcasts or any kind of interview format where you don't have to worry about matching other material; you just want it to sound good for a given stretch of tape.
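[Editor's aside: the "match another recording's characteristics" idea Jason describes can be sketched in miniature. Nothing below is dxRevive's actual algorithm; it's a hypothetical match-EQ pass in Python, with synthetic noise standing in for real dialogue: estimate the average spectrum of a clean reference and of the repaired file, derive a gain curve between them, and apply it as an FIR filter.]

```python
# Hypothetical "match EQ" sketch: nudge a repaired recording's average
# spectrum toward a clean reference. Synthetic signals stand in for real
# audio; this illustrates the idea, not any vendor's implementation.
import numpy as np
from scipy import signal

SR = 48000
rng = np.random.default_rng(0)
# Stand-ins: white noise as the clean studio reference, and a darker,
# low-passed version as the "repaired" file whose tone we want to match.
reference = rng.normal(size=SR)
repaired = signal.lfilter([1.0], [1.0, -0.7], rng.normal(size=SR))

# Average magnitude spectra of both recordings (Welch estimate).
f, ref_psd = signal.welch(reference, SR, nperseg=1024)
_, rep_psd = signal.welch(repaired, SR, nperseg=1024)

# Smooth gain curve that moves the repaired spectrum toward the reference.
eps = 1e-12
gain = np.sqrt((ref_psd + eps) / (rep_psd + eps))

# Turn the gain curve into a linear-phase FIR filter and apply it once.
fir = signal.firwin2(513, f, gain, fs=SR)
matched = signal.lfilter(fir, [1.0], repaired)

_, matched_psd = signal.welch(matched, SR, nperseg=1024)

def log_dist(a, b):
    """Mean absolute log-spectral distance between two PSD estimates."""
    return float(np.mean(np.abs(np.log10(a + eps) - np.log10(b + eps))))

dist_before = log_dist(rep_psd, ref_psd)
dist_after = log_dist(matched_psd, ref_psd)
print(f"log-spectral distance to reference: {dist_before:.3f} -> {dist_after:.3f}")
```

A real tool would need far more than a static EQ curve (time-varying reverb and level, for a start), which is presumably why this is still the hard part for dialogue editing.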
Olle Sjostrom Posted March 5 I used it as a beta tester, but really only for one thing: cassette recordings of myself as a kid. Those tapes were made on a regular tape player standing on a table in my room, so totally worthless audio-wise. I thought maybe re-synthesizing my voice with material from other recordings could help, but no; the first barrier, of course, being the language. One recording I ran through Adobe Podcast came back sounding like every single language at once, and it even changed my voice from female to male at points. So totally worthless material still equals worthless material afterward. But that figures, obviously.
Jim Feeley Posted March 5 Olle, were you beta-testing Adobe Enhance Speech v1 or v2? V2 was released about three months ago. Apparently it's built on a different model/process than v1. Perhaps better overall, but not always better, so users can choose which model, v1 or v2, they want through a little dropdown: https://podcast.adobe.com/enhance Seems to me the big wins for Adobe Enhance Speech are accessibility, price (free, or part of an Adobe subscription), and simplicity. So for people who don't have, and might not want to buy, Supertone Clear, Hush, RX, etc., Adobe's thing is worth a try. Here's a quick comparison of v1 and v2 by an Adobe employee, aimed at an audience of podcasters and creators, not JWS denizens (note that I'm at best semi-competent in audio post). 😉 And to Samuel's question/hope: yeah, I already have producers saying, even more than before, "they can fix it in post." Sigh. (PS: sorry to sound so hype-y.)
Olle Sjostrom Posted March 6 It was probably the first version, early on. I bet you none of the pro AI models can do my tapes any good, unfortunately. At least not the ones with really bad noise.
Samuel Floyd Posted March 6 (Author)
22 hours ago, Jason Nicholas said: "I've not used Adobe Podcast but we've applied dxRevive Pro extensively…"
So, you're saying that tools such as Adobe Podcast and dxRevive can be useful, yet they create a "new" recording for each file? So if you layered a lav track and a boom track, they wouldn't line up?
20 hours ago, Jim Feeley said: "Seems to me the big win for Adobe Enhance Speech is accessibility, price (free or part of an Adobe subscription), and simplicity…"
I had not heard of Supertone Clear or Hush. I currently use Waves Clarity Vx (the under-$100 one) and iZotope RX 9 de-noise, and felt Adobe Enhance worked far better. Have I simply been in the dark about far superior options?
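[Editor's aside: Samuel's layering worry is easy to test empirically: cross-correlate the processed file against the original and read the lag at the correlation peak. A minimal Python sketch, with a simulated 5 ms latency standing in for a real processed file:]

```python
# Measure the time offset between an original track and a processed copy.
# Synthetic noise stands in for the boom track; a real check would load
# the two WAV files instead.
import numpy as np

SR = 8000
rng = np.random.default_rng(1)
original = rng.normal(size=SR)              # stand-in for the boom track
delay = 40                                  # simulate 5 ms of processing latency
processed = np.concatenate([np.zeros(delay), original])[:SR]

# The lag at the cross-correlation peak is the offset between the files.
corr = np.correlate(processed, original, mode="full")
lag = int(corr.argmax()) - (len(original) - 1)
print(f"measured offset: {lag} samples ({1000 * lag / SR:.2f} ms)")
# -> measured offset: 40 samples (5.00 ms)
```

If a tool pads or trims its output, the measured lag tells you exactly how far to slide the processed file so lav and boom layer cleanly again; a lag of zero means the tool preserved timing.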
Olle Sjostrom Posted March 6 I reread my post and realized you could read a lot of snark into it; I didn't mean it that way at all, so please keep that in mind! These "far superior" options haven't really been around for that long, so you haven't exactly been in the dark. There's already a plethora of them, and all are about equally usable. If you think of them as different tools, specifically screwdrivers: iZotope is a screwdriver with replaceable bits of a higher standard than your regular Target (or equivalent) screwdriver, whereas dxRevive is a specialized screwdriver that will work on most screws but not all, and the others are variations on that. None of them offers a way to just apply them and get a good result; they still need tweaking and listening before applying.
iZotope is going to see itself outrun in a few years unless they come up with something really mind-blowing soon, like stem-separating speech from a live feed and cleaning it with no audible artifacts, accurately identifying accents and language quirks, or generating editable text that still matches the source, like Jim mentioned (those things will probably happen within two years, or are already in the works). But I still don't feel it's ever going to be more than a tool. Nothing can do EVERYTHING; if that were the case, we'd have ONE audio recorder and ONE microphone. I guess what I'm saying is that the new tools are great and they make things sound good, but they still need post production and curating. I work in radio, and most journalists are not interested in good-sounding material; they want the "text," the audible speech, not good-sounding speech. These AI tools are great for rescuing bad audio, but the downside is that now they have even more reason not to properly learn to capture good-sounding speech! And since bad audio = bad audio however you look at it… well. Maybe that will change someday, if someone comes up with a way to physically move the sound source to the microphone instead of the other way around. Sorry for getting all philosophical.
Samuel Floyd Posted March 7 (Author)
8 hours ago, Olle Sjostrom said: "These 'far superior' options haven't really been around for that long, so you haven't been in the dark exactly…"
The philosophical approach is what I was looking for! Thank you for the perfect analogy. I'm shopping for my next noise-removal purchase for post after stumbling across Adobe Enhance, and it got me thinking about our jobs as a whole for the future as well. I'm just trying to figure out which screwdriver works best for me!
Olle Sjostrom Posted March 7 And sometimes all you need is a wrench and a hammer.