MPEG
Video Compression
Hai Tao
Department of Computer Engineering
University of California at Santa Cruz
Why does video compression work?

JPEG compression exploits the spatial redundancy in image data through the DCT.
For video data, each frame can be compressed using JPEG, and a compression ratio of about 20:1 can be achieved. Can we do more?
Consecutive frames are very similar. If we encode the first frame and then encode where each region moves to in the second frame, we obtain a prediction of the second frame. Only the residual needs to be encoded (sketched below).
This is similar to predictive coding and exploits the temporal redundancy in video data.
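As a toy illustration of this idea, the sketch below (Python with NumPy; the frame data and the global two-pixel shift are made up) forms a motion-compensated prediction of frame 2 from frame 1 and shows that only the residual is left to encode.

```python
import numpy as np

h, w = 240, 320
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, (h, w), dtype=np.uint8)   # hypothetical frame 1
frame2 = np.roll(frame1, 2, axis=1)                     # frame 2: same scene shifted 2 px right

# Prediction of frame 2 obtained by applying the (known) motion to frame 1.
predicted = np.roll(frame1, 2, axis=1)

# Only this residual needs to be encoded; here it is zero everywhere because
# the toy motion model is exact.
residual = frame2.astype(np.int16) - predicted.astype(np.int16)
print("nonzero residual samples:", np.count_nonzero(residual))

# The decoder adds the residual back to its own prediction to recover frame 2.
reconstructed = (predicted.astype(np.int16) + residual).astype(np.uint8)
assert np.array_equal(reconstructed, frame2)
```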
MPEG compression


The Moving Picture Experts Group (MPEG) was established in 1988 to create standards for delivering audio and video data.
MPEG-1: for VHS-quality video (VCD, 320 x 240 pixels/frame, audio and video at about 1.5 Mbit/s). The components of MPEG-1 are
• JPEG-style coding plus motion prediction for video
• MUSICAM-based audio coding
• Stream control
MPEG-2: designed for a range of bit rates, from 352 x 240 consumer video (4 Mbit/s) and 720 x 480 studio TV (15 Mbit/s) up to 1440 x 1152 HDTV (60 Mbit/s).
Motion Prediction

Suppose the first frame has already been encoded using JPEG. What is the best way to encode frame 2?
• JPEG
• Motion prediction

Answer: motion prediction.
[Figure: motion prediction from Frame 1 to Frame 2]
Motion prediction


For motion prediction to work, we would need to record the motion of every pixel. This is done more efficiently by recording one motion per image block, called a "macroblock".
The predicted macroblock and the actual image block are compared, and the difference is encoded.
Motion prediction




The previous frame is called the "reference" frame.
The current frame is called the "target" frame.
The target frame is divided into 16x16 macroblocks.
For each macroblock, its best match in the reference frame is computed; the 2D motion vector and the prediction error are recorded:
• Prediction error: DCT + quantization + RLE + Huffman coding
• Motion vector: quantization + entropy coding
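A rough sketch of the prediction-error path on a single 8x8 block of the residual follows: 2-D DCT, a flat quantizer (step size 16, chosen only for illustration), zig-zag scan, and simple (zero-run, value) pairs. The real MPEG coder uses standardized quantization matrices and Huffman tables; this only shows the shape of the pipeline and assumes SciPy is available for the DCT.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def zigzag_order(n=8):
    """(row, col) pairs of an n x n block in zig-zag scan order."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def encode_residual_block(block, q=16):
    """DCT -> uniform quantization -> zig-zag scan -> (zero-run, value) pairs."""
    coeffs = dct2(block.astype(np.float64))
    quantized = np.round(coeffs / q).astype(int)
    scanned = [quantized[i, j] for i, j in zigzag_order()]
    pairs, zeros = [], 0
    for v in scanned:
        if v == 0:
            zeros += 1
        else:
            pairs.append((zeros, int(v)))   # these pairs would then be Huffman-coded
            zeros = 0
    return pairs

residual_block = np.random.randint(-10, 10, (8, 8))      # hypothetical 8x8 residual
print(encode_residual_block(residual_block))
```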
Matching Macroblocks


Different match measures and search methods can be used to find the best match for each macroblock.
Different measures
• Mean absolute difference
  $\mathrm{MAD}(I, I') = \frac{1}{16 \times 16} \sum_{i=1}^{16} \sum_{j=1}^{16} \left| I(i,j) - I'(i,j) \right|$
• Mean squared difference
  $\mathrm{MSE}(I, I') = \frac{1}{16 \times 16} \sum_{i=1}^{16} \sum_{j=1}^{16} \left( I(i,j) - I'(i,j) \right)^2$
Matching Macroblocks

Different matching methods
• Full search method: evaluate every candidate position in an R x R search region and keep the one with minimum MAD or MSE
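A minimal sketch of the full search, assuming a grayscale frame stored as a NumPy array and the MAD measure defined above; candidate blocks that fall outside the reference frame are simply skipped.

```python
import numpy as np

def mad(a, b):
    return np.mean(np.abs(a.astype(np.int32) - b.astype(np.int32)))

def full_search(target, reference, row, col, R=16, N=16):
    """Best motion vector (dy, dx) for the NxN target block at (row, col)."""
    block = target[row:row + N, col:col + N]
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            r, c = row + dy, col + dx
            if r < 0 or c < 0 or r + N > reference.shape[0] or c + N > reference.shape[1]:
                continue                      # candidate falls outside the reference frame
            cost = mad(block, reference[r:r + N, c:c + N])
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```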
Matching Macroblocks

Two-dimensional logarithmic search
• Search at nine locations at the largest scale
• Find the best match
• Starting from the best match, reduce the scale and repeat the previous steps
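A sketch of the logarithmic search under the same assumptions as the full-search example; the step size starts at roughly half the search range and is halved after each round of nine candidates.

```python
import numpy as np

def mad(a, b):
    return np.mean(np.abs(a.astype(np.int32) - b.astype(np.int32)))

def log_search(target, reference, row, col, R=16, N=16):
    """2-D logarithmic search for the motion vector of the NxN block at (row, col)."""
    block = target[row:row + N, col:col + N]
    cy, cx = 0, 0                             # current motion-vector estimate
    step = max(R // 2, 1)
    while step >= 1:
        best_cost, best = np.inf, (cy, cx)
        for dy in (-step, 0, step):           # nine candidate locations per round
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                r, c = row + y, col + x
                if r < 0 or c < 0 or r + N > reference.shape[0] or c + N > reference.shape[1]:
                    continue
                cost = mad(block, reference[r:r + N, c:c + N])
                if cost < best_cost:
                    best_cost, best = cost, (y, x)
        cy, cx = best                         # move the centre to the best match
        step //= 2                            # reduce the scale and repeat
    return (cy, cx)
```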
Matching Macroblocks

Hierarchical motion estimation
• Build an image pyramid by down-sampling the image
• Estimate the motion at the coarsest level
• Propagate the motion from the coarse level to the next finer level
• Refine the motion at the finer level
• Repeat these steps until the finest level is reached
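A coarse-to-fine sketch under the same assumptions: the pyramid is built by 2x2 averaging, and a small local search plays the role of the refinement step at each level. The helper names and default parameters are illustrative only.

```python
import numpy as np

def mad(a, b):
    return np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64)))

def downsample(img):
    """Halve the resolution by averaging 2x2 pixel blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float64)
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def refine(target, reference, row, col, init_mv, R, N):
    """Search within +/-R pixels of init_mv for the best match of the NxN block."""
    block = target[row:row + N, col:col + N]
    if block.shape != (N, N):
        return init_mv                        # block does not fit at this level; keep the estimate
    best_cost, best = np.inf, init_mv
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            y, x = init_mv[0] + dy, init_mv[1] + dx
            r, c = row + y, col + x
            if r < 0 or c < 0 or r + N > reference.shape[0] or c + N > reference.shape[1]:
                continue
            cost = mad(block, reference[r:r + N, c:c + N])
            if cost < best_cost:
                best_cost, best = cost, (y, x)
    return best

def hierarchical_search(target, reference, row, col, levels=3, N=16, R=4):
    # Build the pyramids (index 0 = full resolution, last = coarsest).
    tgt_pyr, ref_pyr = [target], [reference]
    for _ in range(levels - 1):
        tgt_pyr.append(downsample(tgt_pyr[-1]))
        ref_pyr.append(downsample(ref_pyr[-1]))

    mv = (0, 0)
    for lvl in range(levels - 1, -1, -1):                 # coarse to fine
        mv = (mv[0] * 2, mv[1] * 2)                       # propagate to the finer level
        scale = 2 ** lvl
        mv = refine(tgt_pyr[lvl], ref_pyr[lvl],
                    row // scale, col // scale, mv, R, max(N // scale, 4))
    return mv
```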
MPEG compression

MPEG encodes video frames using a repeating pattern of three frame types, for example I B B P B B P B B I ...
I-frame: intra-coded frame, encoded independently of other frames
P-frame: inter-coded frame, predicted from the previous I- or P-frame
B-frame: bi-directionally predicted frame; macroblocks are matched in both the preceding and the following I- or P-frame
B-frames can therefore be decoded only after the next P-frame (or I-frame) has been decoded.
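The frame reordering this implies can be sketched as follows; the GOP pattern and frame labels in the example are only illustrative.

```python
def decode_order(display_order):
    """Reorder frame labels from display order into transmission/decode order."""
    out, held_b = [], []
    for frame in display_order:
        if frame[0] == 'B':
            held_b.append(frame)          # a B-frame must wait for its future anchor
        else:                             # I- or P-frame: emit it, then the held B-frames
            out.append(frame)
            out.extend(held_b)
            held_b = []
    return out + held_b

gop = ['I1', 'B2', 'B3', 'P4', 'B5', 'B6', 'P7', 'B8', 'B9', 'I10']
print(decode_order(gop))
# -> ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6', 'I10', 'B8', 'B9']
```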



Why B-frames?

Images in a video are often best predicted from both the previous and the following images, especially in occluded areas.
[Figure: Frame 1, Frame 2, and Frame 3 of an example sequence]
In frame 2, the black region cannot be predicted from frame 1, because it is not visible in frame 1.
But it can be predicted from frame 3.
B-frame encoding

Slices

To make decoding more resilient to transmission errors, each frame is divided into slices. If one slice is corrupted, the decoder restarts from the beginning of the next slice.
[Figure: an example frame divided into 7 slices]
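A sketch of the resynchronization idea: MPEG-1/2 video start codes begin with the byte sequence 0x00 0x00 0x01, and slice start codes use the code-byte range 0x01 through 0xAF, so a decoder that hits an error can scan forward for the next such code and resume there. The example bytestream is a placeholder.

```python
def next_slice_start(data, pos):
    """Offset of the next slice start code at or after pos, or -1 if none is found."""
    i = data.find(b'\x00\x00\x01', pos)
    while i != -1 and i + 3 < len(data):
        code = data[i + 3]
        if 0x01 <= code <= 0xAF:          # slice start codes
            return i
        i = data.find(b'\x00\x00\x01', i + 1)
    return -1

# Placeholder bytestream: a sequence header code (0xB3) followed by a slice (0x05).
bitstream = b'\x00\x00\x01\xb3corrupted.....\x00\x00\x01\x05slice data'
print(next_slice_start(bitstream, 0))     # points at the slice start code, not at 0xB3
```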
MPEG video bitstream

Sequence information
Video Params include the width, height, pixel aspect ratio, and picture rate.
Bitstream Params include the bit rate, buffer size, and the constrained-parameters flag (which indicates that the bitstream can be decoded by most hardware).
Two types of quantization tables (QTs): one for intra-coded blocks (I-frames) and one for inter-coded blocks (P-frames).

Group of Pictures (GOP) information
Time code: a bit field with the SMPTE time code (hours, minutes, seconds, frame).
GOP Params are bits describing the structure of the GOP.
MPEG video bitstream

Picture information
Type: I, P, or B-frame?
Buffer Params indicate how full the decoder's buffer should be before decoding starts.
Encode Params indicate whether half-pixel motion vectors are used.

Slice information
Vert Pos: what line does this slice start on?
QScale: How is the quantization table scaled in this slice?

Macroblock information
Addr Incr: number of MBs to skip.
Type: Does this MB use a motion vector? What type?
QScale: How is the quantization table scaled in this MB?
Coded Block Pattern (CBP): bitmap indicating which blocks are
coded.
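A small sketch of interpreting the 6-bit CBP of a 4:2:0 macroblock (four luminance blocks plus Cb and Cr), assuming the most significant of the six bits corresponds to the first luminance block; blocks whose bit is 0 carry no coefficient data.

```python
BLOCK_NAMES = ['Y0', 'Y1', 'Y2', 'Y3', 'Cb', 'Cr']

def coded_blocks(cbp):
    """Names of the blocks that actually carry coefficient data in this macroblock."""
    return [name for i, name in enumerate(BLOCK_NAMES)
            if cbp & (1 << (5 - i))]      # assumed: MSB of the 6 bits = first luma block

print(coded_blocks(0b111100))             # -> ['Y0', 'Y1', 'Y2', 'Y3'] (chroma skipped)
print(coded_blocks(0b000011))             # -> ['Cb', 'Cr'] (only chroma coded)
```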