ericseipel wrote: If I lay out all cameras side by side and forgo the 16gu space for a pixel-consistent one, then due to the different heights of the actual real-world screens, objects appear to grow when they are on the outboard screens. Conversely, when staying in the 16gu space, the objects shrink when moved to the outboard screens. I would like to load background graphics that contain all three content areas in one file, and just span it across the four cameras on each server. Maybe I should just split everything into 3 files for each scene and be done with it.
You are correct, the 3560 is to keep the VESA timing happy; technically I would need the card to spit out 3550x1080, and that's not a legal resolution. I have the two Datapaths set up to throw away 10 pixels each: the first 10 from PB output 1 and the last 10 from PB output 2 (just like a center-justified blend in an Encore). So not a typo.
Center screen overlaps are 289 pixels. Not a lot, but doable.
3550? Sounds like you're doing your data doubling on the Datapath boxes. If you add 2 extra cameras, you can compose 2 full 1920x1080 raster feeds side by side in each output, double-sample in PB, and then do your split in the Datapaths down the middle without any weird scanning math to complicate things. That said, if you're happy with it and it works, you probably don't need to fix it.
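To make the geometry of that suggestion concrete, here's a minimal sketch (illustrative only, not Datapath configuration syntax) of what splitting down the middle looks like: two 1920x1080 feeds composed side by side give a 3840-wide raster, and each Datapath takes one clean half, with no odd-width cropping needed.

```python
def middle_split(total_w=3840, total_h=1080):
    """Crop rectangles (x, y, width, height) for splitting a side-by-side
    composite of two 1920x1080 feeds straight down the middle."""
    half = total_w // 2
    left = (0, 0, half, total_h)      # first feed, untouched
    right = (half, 0, half, total_h)  # second feed, untouched
    return left, right

print(middle_split())
# Both halves are exactly 1920 wide, so no pixels are discarded.
```

The point of the sketch is just that 3840 divides evenly, unlike the 3550/3560 case where padding pixels have to be thrown away.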
I think I'm still having a bit of a hard time figuring out exactly what your solution strategy is. Here's what I'm gathering: you want PB to create an INTERNAL relationship between the physical set dimensions and its pixel-oriented-workflow pixel space? If yes, then the issue is that you have a higher pixel density on some of your targets than on others.
There are a few ways you could deal with it: splitting and pre-scaling everything is certainly one option, but I'm getting the impression that you'd rather change the global sampling setting somehow so that media is globally scaled to look uniform and is interpreted properly under PB's 16 GU per 1920 px convention. Have you tried adjusting the Resolution on the side cameras? For example, if you use a multiplier that reflects the pixel density variance between your screens and stay in a pixel-oriented workflow, Pandora should just scale your media for you. Try setting your side *cameras* from 1920x1080 to 2494x1404. This does mean that you are undersampling your side screen media, and that it will be anti-aliased more. You could achieve similar results by adjusting the FOV of the cameras instead, but if you want to be math-y about it, this makes it a bit quicker.
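The multiplier idea above is just arithmetic. As a rough sketch, assuming the pixel-density ratio between the center and side screens is about 1.3 (the exact figure comes from measuring the physical screens, which is presumably where the suggested 2494x1404 originates), the adjusted side-camera resolution falls out like this:

```python
def adjusted_camera_res(base_w=1920, base_h=1080, density_ratio=1.3):
    """Scale a camera resolution by the pixel-density ratio between
    screens so media renders at a uniform physical size.
    density_ratio is a placeholder; derive it from the real screens."""
    return round(base_w * density_ratio), round(base_h * density_ratio)

print(adjusted_camera_res())  # -> (2496, 1404)
```

A ratio of exactly 1.3 gives 2496x1404; the 2494 above implies a slightly smaller measured horizontal ratio, so treat the multiplier here as an assumption, not the measured value.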