MSU Video Quality Measurement Tool: SDK
- MSU Brightness Independent PSNR
This metric allows codecs to be compared correctly even if one of them has changed the average brightness of the frames.
- MSU Drop Frames Metric
Calculates the number of dropped frames.
- MSU Brightness Flicking Metric
Estimates the level of brightness flickering.
- MSU Noise Estimation Metric
Estimates the noise level for each frame of a video sequence.
- MSU Scene Change Detector
Automatically identifies scene boundaries in a video sequence.
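The idea behind a brightness-independent PSNR can be illustrated with a short sketch. This is only an assumption about the approach, not the tool's actual algorithm: subtracting each frame's mean brightness before accumulating the squared error makes a uniform brightness shift introduced by a codec invisible to the score.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical illustration of a brightness-independent PSNR:
// each frame's mean is subtracted before the squared error is
// accumulated, so a constant brightness offset does not lower the score.
double mean(const std::vector<double>& f) {
    double s = 0.0;
    for (double v : f) s += v;
    return s / f.size();
}

double brightness_independent_psnr(const std::vector<double>& a,
                                   const std::vector<double>& b,
                                   double peak = 255.0) {
    const double ma = mean(a), mb = mean(b);
    double mse = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const double d = (a[i] - ma) - (b[i] - mb);
        mse += d * d;
    }
    mse /= a.size();
    if (mse == 0.0) return INFINITY;  // identical up to a brightness shift
    return 10.0 * std::log10(peak * peak / mse);
}
```

With this definition, a frame and its copy shifted by a constant offset yield an infinite (perfect) score, while ordinary PSNR would penalize the shift.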
Since version 1.2 MSU Video Quality Measurement Tool supports plugins.
In MSU VQMT, each metric has the following properties:
- Number of input videos (currently only 1 or 2 input videos are supported)
- Structure of output values (a metric can have any number of output values)
- Configuration abilities (a metric can be configured with a string)
PSNR is the simplest example of a metric: it takes two input frames, returns a single value for the pair, and is not configurable. Some metrics have a much more complicated structure.
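The three properties above could be modeled as a small descriptor; the type and field names here are illustrative only, not the SDK's actual types:

```cpp
#include <cassert>
#include <string>

// Hypothetical descriptor mirroring the three metric properties listed
// above (names are illustrative assumptions, not the SDK's interface).
struct MetricInfo {
    int num_inputs;      // number of input videos (1 or 2)
    int num_outputs;     // number of output values
    bool configurable;   // whether a configuration string is accepted
    std::string config;  // configuration string, if any
};

// PSNR under this model: two inputs, one output value, no configuration.
const MetricInfo kPsnrInfo{2, 1, false, ""};
```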
MSU VQMT handles a metric as follows:
- A metric instance is created. The lifetime of one instance is one measurement of a pair (or a single one) of video sequences. For example, if you compare video O with videos A1 and A2 using PSNR, two metric instances are created: the first compares O and A1, the second compares O and A2.
- The metric is initialized with information about frame sizes and the selected colorspace.
- At this point, the metric can reserve a number of identifiers (IDs). The metric should take one ID for each return value. For example, PSNR has only one ID, while MSU Blurring Metric with visualization enabled has two. It is possible to specify which ID corresponds to which frames.
- The metric is measured: it is called consecutively on the video frames.
- The mean result is obtained from the metric.
- The metric instance is destroyed.
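The lifecycle above can be sketched as a sequence of calls on a metric object. The class and method names are assumptions made for illustration; the real SDK defines its own interface:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical metric lifecycle sketch: create -> Init -> reserve IDs ->
// per-frame Measure -> GetMean -> destroy. Mirrors the steps above;
// this PSNR-like metric reserves one ID for its single return value.
class PsnrMetric {
public:
    void Init(int width, int height) { width_ = width; height_ = height; }
    // One return value, so one ID is reserved.
    int ReserveIds() { return 1; }
    // Called once per frame pair; accumulates the per-frame PSNR.
    void Measure(const std::vector<double>& a, const std::vector<double>& b) {
        double mse = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) {
            const double d = a[i] - b[i];
            mse += d * d;
        }
        mse /= a.size();
        sum_ += 10.0 * std::log10(255.0 * 255.0 / mse);
        ++frames_;
    }
    // Mean result over all measured frames.
    double GetMean() const { return sum_ / frames_; }
private:
    int width_ = 0, height_ = 0;
    double sum_ = 0.0;
    std::size_t frames_ = 0;
};
```

Comparing O against both A1 and A2 would create two such objects, one per pair, each destroyed when its measurement finishes.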
For more details on how this interaction happens, see SDK documentation.
- Codecs Comparison & Optimization
- Video Filters
- Semiautomatic Visual-Attention Modeling