I just shot a wedding with three cameras. One camera recorded AVCHD (.MTS) files. I also recorded the audio with a separate small Olympus DS-30 recorder. I edit in Adobe Premiere CS6.
I want to do a multicam edit, and I just learned that there are two programs that make syncing the audio very easy and fast.
One is PluralEyes by Red Giant, and the other is WooWave. Has anyone had experience with either of these, and how good are they? PluralEyes is very expensive at $199. Is there a cheaper way to purchase it? WooWave, I believe, is $49. Any help or advice would be appreciated.
Thanks in advance, Harry.

I have not used WooWave, but I do use PluralEyes and I'm a big fan. I have FCP X, which has the same feature built in, but PluralEyes is able to sync some audio that FCP X can't. It was worth the cost for me. I've used PluralEyes since FCP 7, then on to Premiere Pro, and then with FCP X.
TBH I've found things that PluralEyes could sync that FCP X couldn't, but equally things FCP X can sync that PluralEyes can't. In fact, it's probably six months since I used PluralEyes, because FCP X has been so reliable.

I had used v1 and v2, but v3 just didn't work for me at all, so we gave up on it.

Select the audio clip in the timeline and go to Clip > Audio Options > Render & Replace. Premiere Pro will generate a new 32-bit float version of the file and replace it in the sequence. Once you have this, PluralEyes should be able to sync it along with all the others. Sometimes you may need to do Render & Replace on all the audio tracks, but it depends on which cameras you are using. For instance, I used to have to do this on Canon XF100 files but not on Canon 5D Mk II/III files.
However, after importing the synced sequence, I'd go back and replace all the Render & Replace footage with the originals again, and then delete any duplicates created or brought in by PluralEyes.
OK, it's time for you guys to help me! :-D I downloaded a trial of PluralEyes to see if it would help my workflow. I usually shoot long events like plays, live bands, or other performances, and record my audio directly from the mixing console with my Marantz PMD-660 recorder. I then have to sync the audio recording with my camera footage, and as anyone who has tried this without genlock knows, the two devices will drift over time, so you have to split your audio recording up and re-sync it every so often. I thought PluralEyes might do this automatically, so I split my audio recording at 10-minute intervals, selected all of my footage and audio, and ran PluralEyes; I even told it the events were in chronological order. This is what my timeline looked like before. Unfortunately, this is what my timeline looked like after. OK, what did I miss? It didn't sync anything! Your help will be graciously appreciated. :) ~jr.
[Theo van Laar] 'What helped me in those cases is not synchronizing all the short clips at the same time with the longer audio events, but only a few each time. Just see if that helps for you.'

I tried selecting just the first camera event and the first audio event, which I had already synced by hand. PluralEyes moved them way out of sync and, interestingly, it moved the camera event, which is NOT what I want. Is there any way to tell PluralEyes that the camera is the master and the audio is the slave I'm trying to sync? I don't want the camera events moved at all.
Perhaps it simply wasn't designed to sync audio with video. :( ~jr.

[Theo van Laar] 'It looks like the amplitude of the upper audio track is much bigger than the amplitude of the lower audio track.'

You know, it is possible that the audio from the camera is too muddy to be synced.
There is a lot of room echo/reverberation because it's in an auditorium, while the audio from the mixer is clean and dry. It may just be too different given the reverb/dry mix. Even though I can hear the words clearly, the human brain can filter the human voice much better than a computer program ever could. I was really hoping this would work. :( ~jr.

[Shoestring Videos] 'Tell it to try really hard (it's an option). Try not splitting the audio file. Try not telling it they are in chronological order. Try splitting the video files into a couple of segments instead of the audio.'

I tried everything, but nothing worked. I even added media markers and turned on the option to use them, and it still gave the same results. I've wasted more time at this point than it would have taken me to sync the audio by hand (usually only a 15-20 minute job), so I'm guessing that PluralEyes cannot help my workflow. :( Thanks, everyone, for all the suggestions.
I guess my audio is too different between the camera mic and the console mix for PluralEyes to figure it out. (On the bright side, I saved $200.) ~jr.

I do this all the time.
I generally use one digital audio recorder as the effective timebase, since it runs through the entire shoot while I may be starting and stopping cameras. There's no obvious reason it shouldn't work the other way.
The audio/video sync shouldn't be any worse than between two camcorders, unless you have a poor audio device (early Zooms were good for less than 20 minutes, typically, but they fixed that in the H4n and other recent units). One hour might get you a slight de-sync, but I'd deal with that in a second pass. Of course, make sure your video segments are split by time on capture; a single clip that contains a time discontinuity will be unsyncable. I haven't used the 'events in order' switch. Not sure what else could go wrong; it has always pretty much just worked for me.

Can't comment on PluralEyes, as I sync by eye and ear, but like you, I usually have continuously recorded events.
I have not had a sync issue with any of the long events I have recorded, even over 80 minutes, and I am mixing tape (FX1) with a data recorder (Z1P + Datavideo DN60), a memory card (PJ760), and an XLR feed going into an NX5. I have had some sound-delay issues with the audio feed from the sound desk, as their compression tends to add a delay, but once the audio is lined up, it stays that way. It does not drift.
There is a suggestion I have read on another forum of setting the Marantz PMD-660 to record at 99.9% speed to match the 29.97 fps of NTSC.

I am the WooWave developer, and I invite everyone to try WooWave. If you can't afford it, you can get a license simply by posting your process on YouTube (a screen recording). All you need to do is compare WooWave to another app doing a similar job and post the video online (commentary would be great but not essential). What is important, though, is to provide a link to the footage used, or a proxy of it, so anyone can recreate the tests.
Enough of biased reviews and PR. WooWave will always post the footage used in tests. Fair & transparent :) Igor, Woowave.com, tools for digital artists.

It all depends on the source audio. In my own case, when I developed WooWave, I had to run hundreds of tests with different audio. The trouble cases are usually unusual frequencies.
Without having access to some of your audio, one can't really say much. :) In my own case, the biggest problem was getting rid of false positives, or at least reducing them. What I do in WooWave, and what is probably done in PluralEyes, is apply a Fourier transform with a window function to get a spectrogram of the audio. This 'image' is then analyzed for patterns across all the audio clips; if they match, sync is established as an offset. In your case, if it works with the standalone version, the XML parser is to blame, but I would guess the problem lies with the nature of the source audio. The above is an oversimplified description. :) There is much more to it to get it working.
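The spectrogram-matching idea Igor describes can be sketched in code. This is a toy illustration only, not WooWave's or PluralEyes' actual implementation: it builds a Hann-windowed magnitude spectrogram with a naive DFT (a real tool would use an FFT) and slides one clip's spectrogram along the other's to find the best-matching frame offset.

```python
import math

def spectrogram(samples, win=64, hop=32):
    """Magnitude spectrogram: Hann-windowed frames, naive DFT per frame.
    (A real implementation would use an FFT; this is illustrative.)"""
    frames = []
    for start in range(0, len(samples) - win + 1, hop):
        frame = [samples[start + n] * 0.5 * (1.0 - math.cos(2 * math.pi * n / (win - 1)))
                 for n in range(win)]
        mags = []
        for k in range(win // 2):
            re = sum(frame[n] * math.cos(2 * math.pi * k * n / win) for n in range(win))
            im = sum(frame[n] * math.sin(2 * math.pi * k * n / win) for n in range(win))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames

def best_offset(spec_ref, spec_clip):
    """Slide the clip's spectrogram along the reference's and return the
    frame offset with the highest total dot-product similarity."""
    best, best_score = 0, float("-inf")
    for off in range(len(spec_ref) - len(spec_clip) + 1):
        score = sum(a * b
                    for fr, fc in zip(spec_ref[off:], spec_clip)
                    for a, b in zip(fr, fc))
        if score > best_score:
            best, best_score = off, score
    return best

# A chirp makes a good demo signal: its spectrum changes over time,
# so the spectrogram match is unambiguous.
ref = [math.sin(2 * math.pi * 1e-4 * t * t) for t in range(1000)]
clip = ref[320:640]                       # the clip starts 10 hops into ref
offset = best_offset(spectrogram(ref), spectrogram(clip))
print(offset * 32)                        # sample offset = frames * hop
```

Reverb changes the spectrogram a lot (energy is smeared across time in each frequency band), which is consistent with jr's dry console feed failing to match the muddy auditorium track.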
Lots of small tricks and innovative algorithms to get precision and speed. Igor, Woowave.com, tools for digital artists.

I plan on trying WooWave when I get home tonight (assuming there is a trial to try). I'll let you know how I make out. My guess is that the reverberation from the auditorium produces a significantly different spectrogram than the dry audio coming from the desk, which is why no sync is established.
I'll load both into Sound Forge and look at the spectrograms myself. Still, you can see that the waveforms look similar in Vegas, because that's how I've been lining them up by eye for years (simply using amplitude). We shall see.

Derivatives of Mel-frequency cepstral coefficients, or just the FFT with cross-correlation? I had to solve a somewhat similar problem a few years back: a software-defined-radio implementation of NTSC television. It naturally had to run in real time and had to reliably find the H/V syncs and the color clock (an FFT made it easy to find the phase offset). I had one decoder running on a single Intel Core 2 core and an MPEG-4 encoder on a second core.
Eight cores per rack. Kind of cool, and yet a stupid way to avoid using a $2.50 chip. :-) Using the SDR wasn't my idea.

[Geoff Candy] 'Just a long shot: PluralEyes comes as a standalone as well as a plugin. Maybe the standalone version would work. I've tried a short piece shot with A1E video and audio, with independent audio picked up using a Zoom. I just followed a tutorial and it worked fine.
Not tried it from the timeline yet.'

The problem is that I'm trying to fix drift. I have one video file and one audio file, and it takes me two seconds to sync them. Then after 15-20 minutes they drift, and I have to split the audio and re-sync it.
I was looking for a program that would let me split the audio every 5 or 10 minutes and sync it automatically. I need it to work on timeline events, not on individual files, so I can't use any standalone versions of anything. I don't want to render individual files, because at that point I can sync them manually faster.

There is a trial. :) Not only that, but if you post a screen recording of your session comparing WooWave and PluralEyes on the same footage and post it online, you get a license for free: Woowave.com/woowave-challenge. There is only one requirement: the footage used must be available so others can recreate the test. If the audio is very long, WooWave may take much longer to process, but it will eventually find the sync (especially if you choose the 'sacrifice precision' option before starting). That option is best for noisy voice recordings.
I would like to see how you're getting on with it. All suggestions are precious. :) Igor, Woowave.com, tools for digital artists.

Assuming they're using a real quartz crystal, and not some cheap ceramic resonator (which is more like 0.1% to 1% accurate), the run-of-the-mill super-cheap ($0.50) PC-class crystals have an accuracy of +/-100 ppm or better, usually with only marginal temperature compensation at best. That's worst-case 0.01% accuracy, or at 48 kHz up to 17,280 samples per hour: a drift of about 1/3 of a second.
Of course, if you have two of these running at the same time, they could actually be 2/3 of a second apart at the end of an hour (16 frames at 24 fps, 40 frames at 60 fps). So in short, with cheap crystals, WooWave is going to have potential trouble with clips over half an hour. Hopefully no one is using crystals that cheap.
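The back-of-the-envelope numbers above can be checked directly. This is just the arithmetic from the post, with the 100 ppm and 48 kHz figures as parameters rather than specs for any particular recorder:

```python
SAMPLE_RATE = 48_000   # audio samples per second
PPM = 100              # worst-case crystal error, parts per million

def drift_samples(seconds, ppm=PPM, rate=SAMPLE_RATE):
    """Worst-case sample drift of one free-running device after `seconds`."""
    return rate * seconds * ppm / 1_000_000

one_device = drift_samples(3600)         # samples gained or lost in an hour
print(one_device)                        # 17280.0
print(one_device / SAMPLE_RATE)          # 0.36 s, the "about 1/3 second" above

# Two devices can drift in opposite directions, doubling the worst case.
# (The post rounds 0.36 s to 1/3 s, hence its "16 frames at 24 fps".)
two_devices_s = 2 * one_device / SAMPLE_RATE
print(round(two_devices_s * 24, 1))      # worst-case frames apart at 24 fps
```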
The super-cheap ($0.50) 32 kHz crystals used in PCs and other consumer devices for the time-of-day clock are usually around 30 ppm. I found a video of the Zoom H4 (the older model) drifting 6-7 frames against a Canon 7D over a ten-minute shot. Even at a frame rate of 30 fps, that's about three times worse than my 100 ppm worst case. But the H4 and earlier Zoom recorders are known to have awful crystal stability, about 0.05%, or 500 ppm, possibly even worse. Not sure about the 7D. Rumor has it the H4n has a clock accuracy of 50 ppm; Zoom/Samson themselves say it's accurate for shorter clips and suggest letting software handle the issues in longer ones. Of course, they're positioned as the go-to field recorder for DSLR videographers, most of whom use Canon, and Canon is still inflicting the EU's camcorder-tax issue on all of its DSLR users.
So you can't record more than 29:59 continuously on any Canon DSLR (many others go longer, but they're a minority in the business). That would mean about 4 frames, worst case, at 24 fps, with an equally reliable timebase in the camera. I looked up the Marantz recorder John is using, and several newer ones, but there's no indication anywhere of the timebase accuracy. So much for 'professional'. The higher-end unit, the PMD-671, does have some kind of add-on for external sync, and it fixes a known problem with the preamps in the PMD-660, though I'm not sure what that problem is.
I'm assuming the audio is clear enough to be a good sync source. For some of the digital radio work I've done over the past few years, I use a 1.5 ppm crystal as the timebase for the frequency synthesizer.
This is a TCXO (Temperature Compensated Xtal Oscillator), too, so it's pretty good over temperature. That's going to mean 260 samples per hour, or a maximum device to device drift of 520 samples per hour.
That's about 10.8 milliseconds (520 samples at 48 kHz), well under a frame, so not a practical problem in the least. Even good musicians can only hear about a millisecond of timing drift. But I had to pay a whole $15 each for those crystals (well, my company did).

32 ms precision is used in the first pass. There are three passes :) going up to 96 ms (losing precision, up to 3 frames) if there is no match in the main pass. In addition to this (PluralEyes engineers, listen carefully :) ), files are segmented, so drift becomes less of a problem.
The only thing you lose is frame precision, which is not a big problem, as all you need to do is trim a few frames later on. In addition, I 'blur' the patterns, so matches are found even if you have terrible drift. You can try this with the sample footage on my site; it's ultra-low quality, with similar spectrograms since it's similar music being played. PluralEyes had big problems syncing it. WooWave managed to sync most of it, with only one or two false positives.
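The segmenting idea Igor describes can be sketched generically. This is my own toy illustration of the principle, not WooWave's code: locate each segment of a long clip independently in the reference, so clock drift only accumulates within one segment rather than across the whole file, at the cost of needing a few frames of trimming at the joins.

```python
def segment_offsets(find_offset, reference, clip, seg_len):
    """Split `clip` into fixed-length segments and locate each one
    independently in `reference`. `find_offset(reference, segment)` can
    be any offset finder, e.g. a spectrogram or waveform correlator."""
    results = []
    for start in range(0, len(clip) - seg_len + 1, seg_len):
        segment = clip[start:start + seg_len]
        results.append((start, find_offset(reference, segment)))
    return results

def exact_match(ref, seg):
    """Brute-force exact search, a stand-in for a real audio matcher."""
    for i in range(len(ref) - len(seg) + 1):
        if ref[i:i + len(seg)] == seg:
            return i
    return -1

# Toy data: the 'clip' is the reference delayed by 30 samples.
ref = list(range(100))
clip = ref[30:70]
print(segment_offsets(exact_match, ref, clip, 20))  # [(0, 30), (20, 50)]
```

With a real matcher, each (start, offset) pair tells you where to cut and re-place the audio; if the offsets creep apart over successive segments, that creep is exactly the clock drift jr is fighting.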
Igor Woowave.com tools for digital artists.