Klaus Weber, responsible for worldwide product marketing of imaging products for Grass Valley, a Belden Brand, presented "Beyond HD -- The status of the image acquisition solutions for the next generation broadcasting formats." Weber noted that he was speaking mainly about live broadcasts. He looked at the requirements for optimum pixel size, CMOS vs. CCD imagers, and rolling shutter vs. global shutter. He then went over additional requirements, including HFR, HDR and wide color gamut, and suggested some potential solutions.
What do all today's cameras have in common? "There is one thing: it's the pixel size," he revealed. "All cameras have the same pixel size regardless of the kind of camera. The only exception is CCD cameras in interlace operation, running at 1080i." Comparing CMOS and CCD imagers at 1080i, they are quite similar, but when switched to progressive, the CCD imager has half the sensitivity. "The same CMOS camera will give you the same sensitivity for both," he said. "There is no argument anymore about going to progressive."
What about dynamic range? The dynamic range currently is 800 percent for a CMOS camera versus 600 percent for CCD cameras, both at 1080i. At 1080p, the CMOS again gives the same 800 percent, but the CCD drops to 300 percent. There is one negative side effect: Many CMOS imagers offer a rolling shutter rather than a global shutter. That means pixels are exposed and read out one by one, so the exposure happens at a different moment for each pixel, introducing wobble, skew, smear and partial exposure. "Global shutter is required for demanding live applications," he said.
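The rolling-shutter artifact Weber describes can be illustrated with a small sketch (my illustration, not from the talk; the row counts and readout time are assumed example values): each sensor row's exposure window starts slightly later than the previous row's, so fast motion is captured at different moments down the frame.

```python
# Sketch of rolling-shutter timing (assumed figures, for illustration only).
# With a rolling shutter, frame readout is spread over the readout interval,
# so row r begins its exposure r/rows of that interval late.
# With a global shutter, every row would start at offset 0.0.

def row_exposure_starts(rows: int, frame_readout_ms: float) -> list:
    """Return the exposure-start offset (ms) for each sensor row."""
    return [r * frame_readout_ms / rows for r in range(rows)]

starts = row_exposure_starts(rows=1080, frame_readout_ms=16.0)

# The last row starts almost a full readout interval after the first,
# which is what turns fast horizontal motion into skew and wobble.
skew_ms = starts[-1] - starts[0]
```

The per-row offset is what produces the skew, smear and partial-exposure artifacts noted above; a global shutter avoids them by exposing all rows simultaneously.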
We want progressive imagery but not the rolling shutter. "In every single pixel you need to add two additional transistors," he said. "The TxG allows for separation of the exposure from the readout, and the RST allows for resetting of the storage after the signal readout." But there are disadvantages, he pointed out. "First of all, you need a certain manufacturing technology to build it in the right size without consuming too much space," he said. "Today it's possible, but it still reduces the fill factor. There are different ways to compensate for that."
Why not reduce pixel size? "Many of the quality parameters are directly related to the pixel size," Weber said. "Any reduction in the pixel size would lower the performance of the pixel. For a 4K imager, the pixel size would need to be four times smaller. This leads to at least four times lower sensitivity and a reduced dynamic range." Is it possible to improve pixel performance to compensate? "The quantum efficiency is already above 60 percent," he answered. "The effective fill factor is already close to 100 percent, so there's nothing to gain. And it will be very difficult to raise the QE. It's not possible to expect a four-times improvement any time in the near future."
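The arithmetic behind the four-times figure can be checked in a few lines (my back-of-envelope sketch, not Grass Valley data): fitting 4K resolution onto the same-size sensor halves the pixel pitch in each dimension, so each pixel has one quarter the area and collects roughly one quarter the light.

```python
import math

# Back-of-envelope check: pixel area, and hence light gathered per pixel,
# scales with the square of the linear pixel pitch on a fixed sensor size.

def relative_pixel_area(pixels_across_hd: int, pixels_across_4k: int) -> float:
    """Area of a 4K pixel relative to an HD pixel on the same-size sensor."""
    linear_ratio = pixels_across_hd / pixels_across_4k  # 1920/3840 = 0.5
    return linear_ratio ** 2                            # area scales as the square

area_ratio = relative_pixel_area(1920, 3840)       # 0.25 -> about 4x less light
sensitivity_loss_stops = -math.log2(area_ratio)    # 2 stops of lost sensitivity
```

A four-times sensitivity loss is two F-stops, which is why Weber argues a matching four-times improvement in quantum efficiency would be needed just to break even.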
With regard to HFR operation, this is easy to do with CMOS; the main bottleneck for HFR operation is the A/D conversion. The reduced exposure time of HFR has a direct effect on sensitivity. "HFR operation with smaller pixels would not be usable," he said. Another topic under discussion is HDR. "Again, HDR is easy to realize with CMOS," he said. "In addition, the latest FT-CMOS cameras offer a dynamic range of 13-1/2 stops in regular operation with a linear exposure and readout of the imager." CMOS imagers can offer a further extended dynamic range by using a multiple readout of the pixels during one exposure cycle. With regard to wide color gamut, the RAW RGB-imager data offers a much wider color gamut than that used with Rec. 709.
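The talk quotes dynamic range in two units: broadcast-style "percent" headroom above nominal 100 percent white, and photographic stops. A short sketch (my reading of the conventions, not text from the talk) converts between them and shows what 13-1/2 stops means as a contrast ratio.

```python
import math

# Converting the dynamic-range figures quoted in the talk (assumed
# conventions: each stop doubles the light; "800 percent" means
# highlights up to 8x nominal white level).

def percent_to_stops(percent: float) -> float:
    """Headroom above nominal white: 800% = 8x = log2(8) = 3 stops."""
    return math.log2(percent / 100.0)

def stops_to_contrast(stops: float) -> float:
    """Total scene contrast ratio covered by a given number of stops."""
    return 2.0 ** stops

headroom_cmos_1080p = percent_to_stops(800)   # 3.0 stops above white
headroom_ccd_1080p = percent_to_stops(300)    # about 1.6 stops
contrast_ft_cmos = stops_to_contrast(13.5)    # roughly 11,600:1
```

On this reading, the CCD's drop from 800 to 300 percent at 1080p costs nearly a stop and a half of highlight headroom relative to CMOS.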
He suggested some potential solutions. "We need to accept a compromise." One potential solution is to use large imagers; a single 4K imager would capture 50 percent green, 25 percent blue and 25 percent red. "That could be compensated for by using a larger 8K imager, but all the other limitations would become bigger," he noted, listing physical limitations in the availability of zoom ranges and a very narrow depth of field, which would be an issue in sports and other live applications. A B4-mount to PL-mount converter is not a solution, since it introduces a loss of more than 2.5 F-stops.
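Two numbers in that paragraph are easy to verify (my sketch; the Bayer layout is the standard assumption for a single-chip imager, not something Weber detailed): the 50/25/25 color split falls out of the 2x2 Bayer tile, and a 2.5-stop loss means the converter passes less than a fifth of the light.

```python
# Sketch: the 50/25/25 split of a single-chip imager's Bayer mosaic,
# and the light loss implied by a 2.5 F-stop converter penalty.

BAYER_TILE = ["G", "R", "G", "B"]  # repeating 2x2 pattern: 2 green, 1 red, 1 blue

def bayer_fraction(color: str) -> float:
    """Fraction of photosites devoted to one color in the Bayer tile."""
    return BAYER_TILE.count(color) / len(BAYER_TILE)

green = bayer_fraction("G")   # 0.5  -> 50 percent of photosites
red = bayer_fraction("R")     # 0.25
blue = bayer_fraction("B")    # 0.25

def stop_loss_factor(stops: float) -> float:
    """Each lost stop halves the light: 2.5 stops = 2**2.5, about 5.7x less."""
    return 2.0 ** stops

converter_light_loss = stop_loss_factor(2.5)
```

That 5.7x light penalty is why Weber dismisses the B4-to-PL converter route for live production, where sensitivity is already at a premium.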
His preferred solution is a 2/3-inch imager solution that uses the full potential of a three-imager solution. "You get the full sensitivity and dynamic range of a 2/3-inch HD camera and 2/3-inch HD lenses with B4 mount can be used," he said. "This is a concept optimized for live productions. You get the same range for angle of view, no change in depth of field or sensitivity and the best possible color resolution."
One alternative is a four-imager solution: four 2/3-inch imagers with a special dichroic beam splitter, using two 1080p imagers for green and one each for blue and red. It offers 4K resolution in green with spatial-offset techniques, but there are more compromises, including sensitivity reduced by around one F-stop, increased system complexity and the inability to optimize optical filtering for color.
In conclusion, any next generation broadcast format will be progressive. Higher resolution, HFR and HDR will require CMOS imaging. For live applications, global shutter is required as well as large pixels on small imagers. "The best compromise will most likely be a three-imager solution," he concluded.