borjam

Members
  • Posts

    301
  • Joined

  • Last visited

  • Days Won

    3

1 Follower

Profile Information

  • Location
    Bilbao/Spain
  • About
    Bilbaina Jazz Club
  • Interested in Sound for Picture
    Yes

  1. Then I doubt my explanation is the right one. Maybe the signal received by the Micplexer was simply too strong and it was overloading? That would be the simple explanation. "My" theory would apply in a case where the first element amplifies a strong signal that is then rejected by the second filter. The impedance mismatch between the input of the second filter and the output of the first amplifier could overload the amplifier, which would cause distortion. But was there a strong signal within that 5 or 10 MHz difference between the two filters? Not intended as an interrogation, of course!
  2. I see. Anyway, unless the active antenna is passing a strong interfering signal to the Micplexer filter there should be no problem. Is that the case? Is the Micplexer filter narrower than the filter in the active antenna?
  3. Where did you place the attenuator? Between the amplifier output and the filter input? An attenuator should definitely help to prevent distortion at the amplifier output, because it improves the impedance match presented to the amplifier (there is a quick return-loss sketch after this list).
  4. Also, bear in mind that most RF filters are reflective, just “bouncing back” the energy at the frequencies they reject. Their impedance at those frequencies is different from the design impedance, typically 50 or 75 ohms. If you connect a filter to the output of an active RF component such as an amplifier, you can cause distortion when the amplifier is feeding a strong, unwanted signal into the filter. So, filters should be installed between the antenna and the amplifier input. Mini-Circuits now sells a special kind of absorptive filter that dissipates that energy as heat instead of reflecting it, but I guess they are only useful in very specific situations.
  5. When we think about the spectrum of a signal we tend to consider the almost steady “sustain” part. I’m talking about the “classic” envelope phases: attack, decay, sustain, release, which describe lots of sounds pretty well. But the attack phase of a sound can contain a lot of relatively high-frequency components. They are only present very briefly, but without them the attack sounds dramatically different. A prominent expert on violin construction told me many years ago (when I was very interested in synthesis) that the attack is the most important part of a musical sound and what really imprints its “character”. Also, I guess an audiologist is only concerned with intelligibility. Capture a sound like a kick drum (for example) and have a close look at a sonogram: the attack will show a brief surge of high frequencies (see the spectrogram sketch after this list). Or try to record a double bass using a microphone with a very poor high-frequency response and the sound will be completely lifeless.
  6. Nice, but. First, I don’t see that the antenna is made of oxygen-free copper. So, inferior materials. Second, you are overlooking the devastating effect of femto-vibrations on FM modulation. A proper granite support with sub-femtometric oscillations would be in order. I said audiophile-grade antenna.
  7. I should start a business selling audiophile-grade antennas for wireless microphones.
  8. Well, aligning using timecode achieves a precision of 1/24th or 1/25th of a second (one frame). That is sufficient to align audio files at the macroscopic level (i.e., aligning audio to video or, for instance, different instruments so that you won't perceive a discrepancy while listening), but at the microscopic level consider the number of samples in a frame. That difference translates into phasing issues if two microphones connected to different converters pick up a correlated signal (i.e., the same instrument). And unless you keep the converters on a common clock (which can be achieved using GPS, a word clock signal or Dante's synchronization over the network), their clocks will surely drift; even outstanding temperature-controlled clocks can drift by 0.5 ppm (parts per million). As long as there is no correlation at all between the signals there is no need for phase accuracy, right: if two microphones pick up entirely different signals, not sharing anything at all, you don't need phase coherency. It's trickier than it seems (the numbers are worked out in a short sketch after this list). And it doesn't help that the audio market is flooded with snake oil sellers.
  9. I don’t think it requires power, as it is passive. The data sheet states that one port can pass DC if required, which is indeed a handy feature.
  10. USB-C is a complex beast. Normal USB-C is 5.0 volts, period. Power Delivery is a separate specification implemented only on some devices (Macs, some USB-C monitors, phones...), but it is not mandatory, and devices have to negotiate to enable PD. Other than that, USB-C is 5.0 V. I wouldn't dare to do an expensive experiment and I would ask Sound Devices. But it is safe to assume that MixPres don't support PD, and in that case the expected voltage on the USB-C port is 5 V. I am not sure whether some form of overvoltage protection or regulation is mandatory on USB-C ports, but I wouldn't assume it beforehand; this is a question you should ask Sound Devices. Your best bet is to use the battery terminals, connecting a regulator to them. There is even a Hirose adapter designed to fit like the battery caddy, and it includes a voltage regulator. I wouldn't worry much about power noise; I am pretty sure Sound Devices filters the USB-C power input thoroughly.
  11. TL;DR: Don't, without a voltage regulator. USB-C is a USB port; I imagine they haven't added a voltage regulator because USB is just 5 V. USB-C can go higher, but some negotiation between the devices is involved. As a desperate measure, use the battery terminals, which take 7.5 V (NP-F battery packs). Again, regulate (see the quick regulator math after this list).
  12. I know it's an old topic, but some time ago I found a supplier in France that seems to have nice and cheap antennas. They have a cheap Yagi and several log-periodics, among lots of other stuff for wireless experimenters. https://www.passion-radio.com/wifi/50
  13. Old topic, but I also have my own answer. As I understand it, LUTs are generally used for the non-linear color encoding schemes of contemporary digital imaging sensors. An old example of an "audio LUT" could be the µ-law (mu-law) curve used for telephone voice encoding. Nowadays, anyway, I guess all the equipment we have uses linear coding. In the analog domain the rough equivalent could be a noise-reduction companding system such as the Dolby family, but with multi-band dynamics processing it's no longer something you can specify with a straightforward calculation or, as its name says, a look-up table. So, not really a valid analogy (a µ-law sketch follows this list).
  14. Pity. Is your controller up to date? Anyway, these switches can be managed the old-fashioned way, although that can be quite annoying for someone without experience with Cisco network equipment.
  15. John Eargle's book was updated in 2011. https://www.routledge.com/Eargles-The-Microphone-Book-From-Mono-to-Stereo-to-Surround---A-Guide/Rayburn/p/book/9780240820750
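
Regarding posts 3 and 4: a minimal numeric sketch of why a pad between an amplifier and a reflective filter helps. The figures (a fully reflective stopband and pad values of 0 to 10 dB) are illustrative assumptions, not measurements of any particular filter or antenna amplifier.

```python
# Rough illustration: how a resistive pad improves the match an amplifier
# "sees" when it drives a reflective filter in its stopband.
# A signal reflected by the filter passes through the pad twice, so the
# return loss at the amplifier output improves by twice the pad value.

def return_loss_through_pad(filter_return_loss_db: float, pad_db: float) -> float:
    """Return loss seen looking into the pad + filter combination (dB)."""
    return filter_return_loss_db + 2 * pad_db

def reflected_power_fraction(return_loss_db: float) -> float:
    """Fraction of the incident power reflected back to the source."""
    return 10 ** (-return_loss_db / 10)

# Hypothetical case: a stopband signal almost fully reflected (~0 dB return loss).
for pad in (0, 3, 6, 10):
    rl = return_loss_through_pad(0.0, pad)
    print(f"{pad:2d} dB pad -> return loss {rl:4.1f} dB, "
          f"{reflected_power_fraction(rl) * 100:5.1f} % of the power bounced back")
```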
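
For post 5, a minimal sketch for inspecting the attack of a percussive sound with a spectrogram. The file name kick.wav is a placeholder for any short percussive recording, and the sketch assumes scipy and matplotlib are available.

```python
# Minimal sketch: spectrogram of the first ~100 ms of a percussive sound,
# to show the brief burst of high-frequency energy in the attack.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

rate, data = wavfile.read("kick.wav")      # placeholder file name
if data.ndim > 1:                          # keep one channel if the file is stereo
    data = data[:, 0]
attack = data[: int(0.1 * rate)]           # first 100 ms

f, t, Sxx = spectrogram(attack, fs=rate, nperseg=256, noverlap=192)
plt.pcolormesh(t * 1000, f / 1000, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time (ms)")
plt.ylabel("Frequency (kHz)")
plt.title("Attack transient: short-lived high-frequency content")
plt.colorbar(label="dB")
plt.show()
```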
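
The arithmetic behind post 8, as a quick sketch. The sample rate, frame rate, drift figure and take length are illustrative assumptions.

```python
# How coarse a frame is in samples, and how far two free-running
# converters can drift apart over a take.

sample_rate = 48_000        # Hz
frame_rate = 25             # fps (use 24 for film-style projects)
drift_ppm = 0.5             # a very good TCXO; cheap clocks are far worse
take_length_s = 3_600       # one hour

samples_per_frame = sample_rate / frame_rate
drift_samples = sample_rate * take_length_s * drift_ppm * 1e-6
drift_ms = drift_samples / sample_rate * 1000

print(f"One frame = {samples_per_frame:.0f} samples")
print(f"After {take_length_s} s, a {drift_ppm} ppm offset = {drift_samples:.0f} samples "
      f"({drift_ms:.2f} ms) between two unlocked recorders")
```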
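
On the "regulate first" advice in posts 10 and 11, a rough sketch of the heat involved when dropping an NP-F style pack to 5 V. The load current and converter efficiency are assumptions for illustration, not Sound Devices figures.

```python
# Dropping a 7.5 V NP-F style supply to 5 V: linear regulator vs. buck converter.
# All figures below are illustrative assumptions, not MixPre specifications.

v_in = 7.5          # nominal NP-F pack voltage (V)
v_out = 5.0         # target USB-style supply (V)
i_load = 1.0        # assumed load current (A)

p_load = v_out * i_load                      # power delivered to the recorder
p_linear_loss = (v_in - v_out) * i_load      # heat dissipated by a linear regulator
buck_efficiency = 0.90                       # typical small buck module
p_buck_loss = p_load / buck_efficiency - p_load

print(f"Load power:            {p_load:.2f} W")
print(f"Linear regulator heat: {p_linear_loss:.2f} W")
print(f"Buck converter heat:   {p_buck_loss:.2f} W (at {buck_efficiency:.0%} efficiency)")
```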
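
To tie post 13 back to an actual look-up table, a minimal sketch of µ-law companding (µ = 255, as used in telephone voice coding) tabulated into a 256-entry table. Purely illustrative, not a complete G.711 codec.

```python
# Minimal µ-law companding sketch: the non-linear curve can literally be
# tabulated, which is why it is a fair "audio LUT" analogy.
import numpy as np

MU = 255.0

def mu_law_compress(x: np.ndarray) -> np.ndarray:
    """Map linear samples in [-1, 1] to companded values in [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_expand(y: np.ndarray) -> np.ndarray:
    """Inverse mapping, back to linear."""
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

# Build a 256-entry table: 8-bit code -> linear value, the actual "LUT".
codes = np.linspace(-1.0, 1.0, 256)
lut = mu_law_expand(codes)

x = np.array([-0.5, -0.01, 0.0, 0.01, 0.5])
print("compressed:", np.round(mu_law_compress(x), 3))
print("round trip:", np.round(mu_law_expand(mu_law_compress(x)), 3))
print("LUT size:", lut.size)
```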