
Denis Goekdag

Members
  • Posts

    24
  • Joined

  • Last visited

Contact Methods

  • Website URL
    http://www.zynaptiq.com/

Profile Information

  • Location
    In Front Of Computer
  • About
    Sound Designer, Composer, Mix Engineer, CEO @zynaptiq
  1. Phase cancellation? We do none of that in UNVEIL... where did you get that notion from? Also, we don't do any transient-designer-type processing; all we do is *bypass* some transients to allow for heavier processing without losing crispness. UNVEIL actually uses pattern recognition and de-mixing to do what it does. At any rate, please do not judge the sound quality based on YouTube videos - the YouTube audio codec may be causing what you're not liking. There's a free trial version available at the Zynaptiq site that you can use to get an idea of what the results are like. Cheers, Denis @zynaptiq
  2. Hey all! Just a short heads-up that UNVEIL version 1.5 is coming November 15th, adding a bunch of stuff:
     • Now supports RTAS, AAX Native and VST on Mac OS X and Windows, in addition to Audio Units
     • New preset management functionality, accessible from within the plug-in GUI, allows user presets to be used across all plug-in formats on all platforms
     • New multi-mono capability in Logic Pro 9.x to allow for individual settings per channel in multi-channel/surround scenarios
     • New authorization app for a more streamlined user experience
     • Completely re-designed automation system for improved automation workflow
     • New factory presets
     • New option to enter values numerically by double-clicking controls
     This is a free update for users of v1.0.x. We'll be showing it at AES in SF, Oct 26-29, booth 639, so if you're in the area it'd be great to meet you! Cheers, Denis
  3. Bernie: no, PITCHMAP has no equalization functionality. Robert: at this point, there's no method of giving UNVEIL an impulse response as example, so yes, you need to set parameters manually. But in practice, it's a pretty fast process.
  4. Well, for any intelligent process, more information will always be better, yes. But I believe that a good tool should not depend on having other material than that which is to be processed to do what it does --- as it would then only help you achieve results if this condition is met. So IMHO relying on additional information is not an ideal strategy, and kind of "the easy way out". It is of course always good if the tool *allows* you to *optionally* supply it with additional information, but the tool needs to *already* know a lot so that it will always be able to deliver at least satisfactory results right out of the box. Cheers, Denis
  5. Working on ProTools and VST for Mac & Win as we speak.
  6. Well, I am obviously not at liberty to talk about what we're currently working on, but let me say that I'm confident you will love what we're up to when it's ready ;-)
  7. Hehe, well our implementation of these will also include a causality inversion function, which causes directors to consider audio the most important aspect of the film and allocate more time & resources to that department. We're still having some issues with the code at this point, though ;-) Funnily, PITCHMAP has a similar kind of effect for composers. When the director comes in to listen to the results of the only orchestral recording session that was in the budget, and goes "Oh, that's AWESOME, it came out just like we wanted....but can we please change that chord sequence to something more like...like...Ryuichi Sakamoto versus Motörhead, you know....yeah, that's what we need, can we hear that please?".....you can now just play some new target chords to implement just that, hopefully demonstrating that it's all fine as-is *grin*
  8. Enervating? If I came across like I found your feedback enervating: that definitely wasn't my intention at all, it's just the nerd in me showing ;-) Actually, we love feedback of any type -- after all, we make these tools for the people using them, and feedback can only help improve them, which is a win-win thing. As to future incarnations, we've got some cool stuff for UNVEIL as well as some more advanced audio processing products in the pipeline....</tease> By the way....is there any other "nah, that can't be done ....but I'd LIKE to be able to do that" type process you guys are missing in your arsenal? We've got some stuff in the making that we think qualifies for that description, but if there's anything in particular that would enhance your workflow, or even change it for the better, it can't hurt to let us know ;-) Cheers, Denis
  9. Hi! I do understand that, I'm an advocate of sticking to naming conventions myself, but it is inherent to the nature of the beast that perceptually meaningful parameters are of a multi-layered, complex nature, and that there *is no* established terminology for naming them. ADAPTATION would really be the only parameter that can be expressed in a standard metric like seconds --- in an approximate way. Looking into that is on our to-do list; I'm thinking we'll display an approximation in seconds on the main display, but that won't make it in until after we finish the VST and PT versions for Mac/Win, which is our #1 priority at the moment. It's a similar thing with our de-mixing based re-composition tool, PITCHMAP, where we use parameter names like "PURIFY", "FEEL" and "ELECTRIFY" to describe complex, multi-layered parameter groups ;-) As to the rough guess --- visually compare the ADAPTATION display "envelope" shape to the shape of the input signal in the display. If they match approximately visually, they'll match sufficiently accurately under the hood to do the job. By taking the parameter to very low values, you can in some cases focus on removing early reflections, by the way. As a starting point, I usually go for around 10:30. Cheers, Denis
  10. Hey all, we've just updated UNVEIL to v1.0.7, which implements the following changes:
     • Main controls are now in relative mode (for the "old" absolute mode: hold ALT before clicking on the controls)
     • There is now an output gain slider to compensate for any level changes
     • The stand-alone app will now use the sample rate of the source file instead of the system sample rate
     • The stand-alone app can now record its output to a new audio file
     • We've increased processing resolution a little further, for fewer artifacts when using extreme settings
     • Added parameter smoothing to prevent rare zipper noise during automation
     • Various small fixes & enhancements
     ....so except for the rubber-band transfer function for the frequency-dependent FOCUS, and for supporting more plug-in formats, we've implemented most user-requested changes, I guess ;-) --d
  11. It's going to be a rubberband curve for FOCUS, and possibly also for ADAPTATION if that turns out to be helpful during testing. You can obviously store the entire plug-in setting as a preset, but would you need additional presets just for the frequency nodes on the transfer curve? I'd think that most of the time, you'd be adjusting that on a case-by-case basis anyway, no? We'll definitely not let the GUI draw curves that would require infinite amounts of time to process unless we find an application where that would make sense. You know, removing reverb when inside a black hole or something like that...*grin*
  12. I'll talk to my partner about what is possible. I do believe that the ideal solution would be a "rubberband" free-form transfer curve....kind of like a parametric EQ, as found in FLUX Epure or DMG eQuality....that way, you'd get the ability to change the "bands" without going into a text editor :-)
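To make the "rubberband" curve idea concrete, here's a speculative sketch of what such a free-form transfer curve might compute under the hood: piecewise-linear interpolation of user-placed (frequency, value) nodes on a log-frequency axis, the way a parametric-EQ-style display maps frequency to screen position. The function name and node handling are purely hypothetical and not based on any actual Zynaptiq code.

```python
import math

def curve_value(nodes, freq):
    """Evaluate a free-form transfer curve at a given frequency.

    nodes: list of (frequency_hz, value) control points, any order.
    Interpolation is piecewise-linear on a log-frequency axis,
    matching how an EQ display spaces frequencies visually.
    Outside the outermost nodes, the curve is held flat.
    """
    nodes = sorted(nodes)  # sort by frequency
    if freq <= nodes[0][0]:
        return nodes[0][1]
    if freq >= nodes[-1][0]:
        return nodes[-1][1]
    for (f0, v0), (f1, v1) in zip(nodes, nodes[1:]):
        if f0 <= freq <= f1:
            # fractional position between the two nodes, log-frequency scale
            t = (math.log(freq) - math.log(f0)) / (math.log(f1) - math.log(f0))
            return v0 + t * (v1 - v0)
```

Replacing fixed sliders with such nodes would give per-frequency control without a fixed band count, which is presumably why the parametric-EQ comparison comes up.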
  13. Hi all! Jay --- nice examples, thank you for sharing these and your findings! I'm on the road at the moment, so not near any decent monitoring setup, but I'll have a close listen & a go at the original files once I'm back in the studio/office on Monday. WRT the generic AU GUI: Peak (like a variety of other hosts) still uses the deprecated 32-bit Carbon framework. Our plug-ins use the newer Cocoa framework for native 64-bit support. Carbon hosts are not able to load the Cocoa GUI, so they default to the generic AU GUI. We do, however, install a Carbon-compatible version in /applications/zynaptiq plug-in support/legacy/, so for use with Peak, simply replace the UnveilAU.component in library/audio/plug-ins/components/ with the "legacy" version and you'll get the proper GUI. You'll lose the 64-bit AU support though, so it's probably best not to overwrite the component but to move it somewhere else instead, for swapping back in for 64-bit hosts as needed. I see the point about wanting more control or a different set of center frequencies for the BIAS sliders. We'd probably need to stick to 10 as the number of sliders for the moment, but we may be able to have a different set of center frequencies selectable for dialog work in a future update. What set of 10 frequencies would you find most useful? Something like 200, 400, 750, 1k, 2k, 3k, 4k, 5k, 6k, 10k? On a longer time-scale, we've been thinking about a transfer curve like you would find in a free-form parametric EQ, but that's neither something I can promise right now nor give an ETA on (but I would like to have that in there for sure!). Have a nice weekend, Denis
  14. Well, we actually do write what they technically do in the manual. The thing with neural-network based technology is that you don't actually explicitly write the function it will later perform. You explicitly write the structure of the network itself, but from there on, it is somewhat of a black box. You train it to recognize a particular feature that you want it to recognize, then breed it through various mutating generations until it does what it is supposed to do. So when we describe a parameter as ...that is pretty much what we implemented, really. Let me write a short overview of how the process works. Basically, we use pattern recognition and a perceptive model to identify signal components that the human auditory system and the human "analysis logic" perceive as "foreground", or "significant", components, and we then consider everything else to be "background" or "insignificant". The pattern detection has been pre-trained to focus on reverb-like background components, so the "background" is pretty close to being the reverb only. But - hence we speak of "FOCUS" and not "(De-)Reverb Amount" - it also grabs other signal components, such as background ambience, some types of noise, or - when using musical signals - "mud" in a mix. This differentiation is then used to de-mix those two elements. By setting the relative amounts of these two "layers" in the output mix, you can attenuate or boost the reverb amount. Let me try to describe the controls in other words than the rather abstract ones used in describing pattern recognition parameters. FOCUS: think of this as a cross-fader between the reverb/background signal components (fully CCW), the unprocessed input signal (12:00) and the direct/foreground signal components (fully CW). This analogy is pretty precise, actually. FOCUS BIAS: these sliders offset the FOCUS value for 10 frequency bands. Kind of like having a RATIO control per band on a multiband compressor.
With FOCUS at 12:00, these cover the entire range. With FOCUS at maximum, raising the BIAS sliders will have no effect, and setting them to their minimum values is like setting FOCUS to 12:00 for that particular band. These are useful if you would like to reduce reverb on one element in a mixed signal but not reduce background info on others (example: reduce reverb on dialog between 1 kHz and 5 kHz, but leave the "mumble/rumble" of a background ambience between 250 and 750 Hz in place). t/f LOCALIZE: an analogy to this would be FFT frame size, where shorter sizes preserve more time detail, and longer sizes resolve frequency more precisely (we do NOT use an FFT, though, and this is not a parameter for the transform but for the...erm...priorities within the pattern detection). So basically, low values for this parameter tend to give fewer artifacts and sound "crisper", but may not remove as much reverb as higher values when reverb and signal overlap. For example, in very small rooms with a low amount of direct signal compared to the reverb amount, this parameter will usually need to be set to higher values to catch the reflections that are "fused" to the direct components. Higher values may however start sounding unnatural when removing a lot of reverb. When you want to isolate the reverb for up-mixing or off-screen-placement purposes, set this high; otherwise, set it as low as possible for starters. t (REFRACT): oookay, now this one is pretty tricky to explain ;-) Essentially, you can think of this as the reaction time of the neural network: how long it thinks about what to do before deciding. This means that low values (= short reaction/decision time) will remove more early reflections, but the probability that short-time signal features get misinterpreted as reflections also rises.
Higher values allow for a "better educated guess" at what parts of the signal are reflections, so you get fewer wanted signal components removed, which results in a more natural sounding signal. The trade-off is that you also retain some more early reflections. Raising the value of this parameter can help counteract adverse effects caused by high LOCALIZE values. ADAPTATION: this sets the length of reverb that the pattern detection is looking for. You're giving it a clue to help it do its job. For most applications, set this to a value that approximates the actual reverb. PRESENCE: this introduces some randomness into the pattern detection, which statistically results in less reverb reduction and fewer artifacts, but also tends to highlight the *presence* of a signal (as found on old broadcast equalizers). In a way it also changes the frequency response of the reverb, as the reduction in reverb removal applied by PRESENCE is a function of frequency. Raising this makes the removed reverb darker, and the remaining signal brighter (at least on average, that's what it does). Also good for counteracting the effects of high LOCALIZE values. TRANSIENT THRESH: sets a detection threshold for transients in the input signal, which are then bypassed. This allows more reverb reduction while keeping transients crisp. The threshold takes dynamics as well as statistical signal properties into account. Hope that helps ;-) --d
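The FOCUS cross-fader analogy above can be sketched in code. This is a hypothetical illustration only: `foreground` and `background` stand in for the de-mixed layers the post describes, and the simple gain law below is an assumption for clarity, not Zynaptiq's actual implementation.

```python
def focus_mix(foreground, background, focus):
    """Cross-fade between de-mixed layers per the FOCUS analogy.

    focus = -1.0  -> background (reverb) components only (fully CCW)
    focus =  0.0  -> unprocessed input, i.e. both layers summed (12:00)
    focus = +1.0  -> foreground (direct) components only (fully CW)

    Hypothetical gain law: turning CW fades the background out,
    turning CCW fades the foreground out.
    """
    if focus >= 0.0:
        fg_gain = 1.0
        bg_gain = 1.0 - focus  # attenuate reverb layer toward zero
    else:
        fg_gain = 1.0 + focus  # attenuate direct layer toward zero
        bg_gain = 1.0
    return [fg_gain * f + bg_gain * b
            for f, b in zip(foreground, background)]
```

A per-band FOCUS BIAS, as described for the 10 sliders, would amount to filtering each layer into bands and calling something like this with a per-band `focus` offset.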
  15. Sorry to hear you're not getting results yet. It's hard to tell from a distance what settings you're trying. You could send me a part of your dialog file and I'll send you back a setting that suits, to help get you started. I sent you an email recently, so you should have my email address. Here are a couple of considerations. The ADAPTATION parameter should be set to approximately the reverb time in your signal. Basically, just tweak the control until the slope of the adaptation display in the main display looks similar to the decay slope of the signal level display. When you have set that, increase FOCUS to de-reverberate. If you're not getting enough de-reverberation at max value, try setting LOCALIZE to its minimum or maximum value, to see at which extreme you're getting the most reverb reduction. If LOCALIZE is set to a high value, you may get some warbling/artificialness. To counteract this, raise REFRACT and/or PRESENCE. A good balance between these parameters for most signal types is when they're at their default values. If you're working with a signal that has a lot of verb from a very short room, disable the transient bypass function (move the TRANSIENT THRESH slider all the way to the right), as some of the reverb may be passed through it. Also note that room *resonances* will not necessarily be removed, as these are usually interpreted as discrete echoes, which we leave in place. To get as much resonance removal as possible, you will typically want to use high LOCALIZE values and low REFRACT values. I say "typically" as this is not a linear process like an expander or FFT-gate. WRT the documentation... well, it may not be as well-written as your books - it is so far only a quick-start guide - but it DOES tell the user what the controls do. What particular control description are you unclear about?
Please note that it is not possible to describe what a pattern recognition system does in classic DSP terms (no thresholds, RMS-values, FFT-amplitude-thresholds etc involved). HTH, Denis
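The "set ADAPTATION to roughly the reverb time" step above can also be approached programmatically. Below is a hedged sketch using Schroeder-style backward integration, a standard textbook technique for estimating decay time from a signal's energy envelope; UNVEIL's internal analysis is proprietary, and the function name and -20 dB read-out point here are my own assumptions.

```python
import math

def decay_time_estimate(energy, sample_rate, drop_db=20.0):
    """Estimate decay time from a per-sample energy envelope.

    energy: squared-magnitude samples of a decay (e.g. an impulse
    response tail). Uses Schroeder backward integration: the energy
    decay curve (EDC) at n is the sum of all energy from n onward.
    Returns the time (seconds) to decay by drop_db decibels.
    """
    # Backward-integrate the energy to get the decay curve.
    edc = []
    total = 0.0
    for e in reversed(energy):
        total += e
        edc.append(total)
    edc.reverse()
    # Convert to dB relative to the curve's start.
    ref = edc[0]
    db = [10.0 * math.log10(max(x / ref, 1e-12)) for x in edc]
    # First sample at which the decay exceeds the requested drop.
    for i, d in enumerate(db):
        if d <= -drop_db:
            return i / sample_rate
    return len(db) / sample_rate  # never decayed that far
```

In practice the visual method from the post (matching the adaptation display's slope to the signal's decay slope) serves the same purpose; a computed estimate like this would only give a starting value for the knob.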