
MPEG

Acronym for Moving Picture Experts Group, a working group charged with the development of video and audio encoding standards. Since its first meeting in 1988, MPEG has grown to include approximately 350 members from various industries and universities. Formally, MPEG is ISO/IEC JTC1/SC29/WG11.

MPEG has standardized the following compression formats and ancillary standards:

  • MPEG-1: Includes the popular Layer 3 (MP3) audio compression format.
  • MPEG-2: Video and audio standards for broadcast-quality television. Used on most DVD movies.
  • MPEG-3: Originally designed for HDTV, but abandoned in favor of MPEG-2.
  • MPEG-4: Expands MPEG-1 to support video/audio "objects", 3D content, low-bitrate encoding and Digital Rights Management.
  • MPEG-7: A formal system for describing multimedia content.
  • MPEG-21: MPEG describes this future standard as a Multimedia Framework.

How MPEG works

The MPEG codecs use lossy compression based on transform coding. In a lossy transform codec, samples of picture or sound are taken, chopped into small segments, transformed into a frequency space, and quantized. The resulting quantized values are then entropy coded.
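
As a rough illustration of that pipeline (a sketch only, not code from any MPEG standard), the following Python fragment transforms a made-up 8x8 block of samples with a discrete cosine transform, quantizes the coefficients with an assumed uniform step of 16, and counts how many of them become zero; the function and block values are invented for this example.

    import math

    def dct_2d(block):
        """Naive 8x8 DCT-II: turns spatial samples into frequency-space coefficients."""
        n = len(block)
        coeffs = [[0.0] * n for _ in range(n)]
        for u in range(n):
            for v in range(n):
                s = 0.0
                for x in range(n):
                    for y in range(n):
                        s += (block[x][y]
                              * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                              * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
                cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
                cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
                coeffs[u][v] = cu * cv * s
        return coeffs

    # A smooth, made-up 8x8 block of "pixel" samples.
    block = [[128 + 10 * math.sin((x + y) / 3.0) for y in range(8)] for x in range(8)]

    coeffs = dct_2d(block)

    QUANT_STEP = 16  # assumed uniform quantizer step, not a real MPEG quantization matrix
    quantized = [[round(c / QUANT_STEP) for c in row] for row in coeffs]

    zeros = sum(row.count(0) for row in quantized)
    # The long runs of zeros produced here are what the entropy coder exploits.
    print(f"{zeros} of 64 quantized coefficients are zero")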

The moving-picture coding systems such as MPEG-1, MPEG-2 and MPEG-4 add an extra step: the picture content is predicted from previously reconstructed images before coding, and only the differences from those reconstructed pictures, plus any extra information needed to perform the prediction, are coded.
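
A minimal sketch of that extra step, on invented one-dimensional sample data: predict the current picture from the previously reconstructed one and code only the residual, which the decoder adds back onto its own reconstruction.

    # Illustrative only: code the residual against the previous reconstruction
    # instead of the raw samples.
    previous_reconstructed = [100, 102, 104, 106, 108, 110]   # made-up sample row
    current = [101, 103, 104, 107, 109, 112]

    residual = [c - p for c, p in zip(current, previous_reconstructed)]
    print("residual to code:", residual)   # small values compress much better

    # The decoder rebuilds the picture by adding the residual onto its own copy
    # of the previous reconstruction.
    decoded = [p + r for p, r in zip(previous_reconstructed, residual)]
    assert decoded == current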

MPEG standardizes only the bitstream format and the decoder. The encoder is not standardized in any way, but reference implementations that produce valid bitstreams are available to members.

MPEG-1

MPEG-1 is the designation for a group of audio and video coding standards agreed upon by MPEG (Moving Picture Experts Group) and published as ISO/IEC 11172. MPEG-1 video is used by the Video CD format, and MPEG-1 Audio Layer 3 is the full name of the popular audio format MP3.

MPEG-1 consists of several parts, as follows:

  • Synchronization and multiplexing of video and audio.
  • Compression codec for non-interlaced video signals.
  • Compression codec for perceptual coding of audio signals. The standard defines three "layers," or levels of complexity, of MPEG audio coding.
  • Procedures for testing compliance.

MPEG-2

MPEG-2 (1994) is the designation for a group of audio and video coding standards agreed upon by MPEG (Moving Picture Experts Group) and published as ISO/IEC 13818. MPEG-2 is typically used to encode audio and video for broadcast signals, including digital satellite and cable TV. MPEG-2, with some modifications, is also the coding format used by standard commercial DVD movies.

MPEG-2 is similar to MPEG-1, but also provides support for interlaced video (the format used by broadcast TV systems). MPEG-2 video is not optimized for low bit rates (less than 1 Mbit/s), but outperforms MPEG-1 at 3 Mbit/s and above. MPEG-2 also introduces and defines Transport Streams, which are designed to carry digital video and audio over unreliable media and are used in broadcast applications. With some enhancements, MPEG-2 is also the current standard for HDTV transmission. A standards-compliant MPEG-2 decoder should be capable of playing back MPEG-1 streams.

MPEG-2 audio, defined in Part 3 of the standard, enhances MPEG-1 audio by allowing the coding of audio programs with more than two channels. Part 3 allows this to be done in a backwards-compatible way, so that MPEG-1 audio decoders can still decode the two main stereo channels of the presentation, or in a non-backwards-compatible way, which lets encoders make better use of the available bandwidth. MPEG-2 also supports additional audio formats, including MPEG-2 AAC.

MPEG-2, the Standard

This section gives general information about MPEG-2 video and MPEG-2 audio, excluding the modifications applied when they are used on DVD or DVB.

An MPEG-2 system stream typically consists of two elements (a toy multiplexing sketch follows the list):

  • video data + time stamps
  • audio data + time stamps
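
As a toy illustration of the multiplexing idea (the class and field names are invented and do not mirror the real MPEG-2 packet syntax), video and audio units can be tagged with presentation time stamps and interleaved in time-stamp order:

    from dataclasses import dataclass

    @dataclass
    class TimestampedUnit:
        """Toy stand-in for a multiplexed access unit; the fields are illustrative only."""
        stream: str        # "video" or "audio"
        pts: int           # presentation time stamp, here in 90 kHz ticks
        payload: bytes

    # Interleave video and audio so the decoder receives both in time-stamp order.
    units = [
        TimestampedUnit("video", pts=3600, payload=b"..."),
        TimestampedUnit("audio", pts=3600, payload=b"..."),
        TimestampedUnit("video", pts=7200, payload=b"..."),
        TimestampedUnit("audio", pts=7200, payload=b"..."),
    ]
    for unit in sorted(units, key=lambda u: u.pts):
        print(unit.stream, unit.pts)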

MPEG-2 video coding (simplified)

MPEG-2 covers the generic coding of moving pictures and associated audio. It creates a video stream from three types of frame data (intra frames, forward-predicted frames and bidirectionally predicted frames) arranged in a specified order called the GOP structure (GOP = Group Of Pictures; see below).

Typically the source material is a video sequence at a preset pixel resolution, running at 25 (CCIR) or 29.97 (FCC) frames per second, with sound.

MPEG-2 supports both interlaced and progressive scan video streams. In progressive scan streams, the basic unit of encoding is a frame, while in interlaced streams, the basic unit is a field. In the discussion below, the generic terms "picture" and "image" refer to either fields or frames, depending on the type of stream.

The MPEG-2 stream is made up of a series of data frames encoding pictures. The three ways of encoding a picture are: intra-coded (I picture), forward predictive (P picture) and bidirectional predictive (B picture).

The video image is separated into one luminance (Y) and two chrominance channels (also called color difference signals U and V). It is also divided into "macroblocks", which are the basic unit of coding within a picture. Each macroblock is divided into four 8x8 luminance blocks. The number of 8x8 chrominance blocks per macroblock depends on the chrominance format of the source image. For example, in the common 4:2:0 format, there is one chrominance block per macroblock for each of the channels, making a total of six blocks per macroblock.
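
The block counts above can be summarized in a small helper; the mapping for 4:2:2 and 4:4:4 follows the same reasoning (two and four chrominance blocks per channel respectively), and the function itself is only illustrative.

    def blocks_per_macroblock(chroma_format: str) -> int:
        """Count the 8x8 blocks in one 16x16 macroblock: four luminance blocks plus
        the chrominance blocks implied by the chroma subsampling format."""
        chroma_blocks = {"4:2:0": 1, "4:2:2": 2, "4:4:4": 4}  # per chrominance channel
        return 4 + 2 * chroma_blocks[chroma_format]

    for fmt in ("4:2:0", "4:2:2", "4:4:4"):
        print(fmt, "->", blocks_per_macroblock(fmt), "blocks per macroblock")
    # 4:2:0 -> 6, matching the "six blocks per macroblock" figure above.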

In the case of I pictures, the actual image data is then passed through the encoding process described below. P and B pictures are first subjected to a process of "motion compensation", in which they are correlated with the previous (and in the case of B pictures, the next) image. Each macroblock in the P or B picture is then associated with an area in the previous or next image that is well-correlated with it. The "motion vector" that maps the macroblock to its correlated area is encoded, and then the difference between the two areas is passed through the encoding process described below.
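
How the encoder finds those motion vectors is not specified by the standard; a brute-force block-matching search using the sum of absolute differences is a common textbook approach, sketched here on one-dimensional toy data.

    def sad(block_a, block_b):
        """Sum of absolute differences: how poorly two blocks match."""
        return sum(abs(a - b) for a, b in zip(block_a, block_b))

    def find_motion_vector(reference, target_block, target_pos, search_range=3):
        """Brute-force search: slide the block over the reference picture and keep
        the offset (motion vector) with the smallest SAD."""
        best_offset, best_cost = 0, float("inf")
        for offset in range(-search_range, search_range + 1):
            pos = target_pos + offset
            if pos < 0 or pos + len(target_block) > len(reference):
                continue
            cost = sad(reference[pos:pos + len(target_block)], target_block)
            if cost < best_cost:
                best_offset, best_cost = offset, cost
        return best_offset, best_cost

    reference = [10, 12, 50, 52, 54, 12, 10, 11]   # previous picture (toy 1-D data)
    target_block = [50, 52, 54]                    # block taken from the current picture
    mv, cost = find_motion_vector(reference, target_block, target_pos=1)
    # Only the vector and the (small) residual need to be coded, not the block itself.
    print("motion vector:", mv, "residual cost:", cost)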

Each block is transformed with an 8x8 discrete cosine transform (DCT). The resulting DCT coefficients are then quantized according to a pre-defined scheme, re-ordered to maximize the probability of long runs of zeros, and run-length coded. Finally, a fixed-table Huffman coding scheme is applied.
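
The re-ordering and the run/level pairing can be sketched as follows; the scan order here is the common textbook zigzag and the run-length format is simplified, so neither exactly reproduces the tables in the standard.

    def zigzag_order(n=8):
        """(row, col) indices in zigzag order, so low-frequency coefficients come
        first and the zeros bunch up at the end of the scan."""
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],
                                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    def run_length(values):
        """Encode coefficients as (zero_run, value) pairs, a simplified stand-in for
        the run/level coding that feeds the Huffman tables."""
        pairs, run = [], 0
        for v in values:
            if v == 0:
                run += 1
            else:
                pairs.append((run, v))
                run = 0
        pairs.append(("EOB", None))  # end-of-block marker; trailing zeros are implied
        return pairs

    # A quantized 8x8 block: a few significant low-frequency coefficients, rest zero.
    block = [[0] * 8 for _ in range(8)]
    block[0][0], block[0][1], block[1][0], block[2][1] = 45, -3, 2, 1

    scanned = [block[r][c] for r, c in zigzag_order()]
    print(run_length(scanned))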

I pictures exploit spatial redundancy, while P and B pictures exploit temporal redundancy. Because adjacent frames in a video stream are often well correlated, P pictures may be around 10% of the size of I pictures, and B pictures around 2% of their size.

The sequence of different frame types is called the Group of Pictures (GOP) structure. There are many possible structures, but a common one is 15 frames long with the sequence IBBPBBPBBPBBPBB. A similar 12-frame sequence is also common. The ratio of I, P and B pictures in the GOP structure is determined by the nature of the video stream and the bandwidth constraints on the output stream, although encoding time may also be an issue. This is particularly true in live transmission and in real-time environments with limited computing resources, as a stream containing many B pictures can take three times longer to encode than an I-picture-only file.
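
Using the rough relative sizes quoted above (P about 10% of an I picture, B about 2%), a back-of-the-envelope calculation shows why the mixed GOP is attractive; the figures are only the approximations from the text, not measurements.

    # Rough estimate of how the GOP structure trades bitrate for encoder effort.
    GOP = "IBBPBBPBBPBBPBB"              # the common 15-frame structure
    RELATIVE_SIZE = {"I": 1.0, "P": 0.10, "B": 0.02}

    gop_size = sum(RELATIVE_SIZE[frame] for frame in GOP)
    all_intra = len(GOP) * RELATIVE_SIZE["I"]

    print(f"GOP {GOP}: {len(GOP)} frames")
    print(f"size relative to all-I coding: {gop_size / all_intra:.1%}")
    # With these figures the mixed GOP needs only about a tenth of the bits that
    # an all-I sequence would, at the cost of more encoding work.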

The output bit rate of an MPEG-2 encoder can be constant or variable, with the maximum bit rate determined by the playback medium; for example, the DVD movie maximum is 10.4 Mbit/s. To achieve a constant bit rate, the degree of quantization is iteratively altered until the output meets the bit-rate requirement. Increasing quantization leads to visible artifacts when the stream is decoded, generally in the form of "mosaicing", where the discontinuities at the edges of macroblocks become more visible as the bit rate is reduced. Sony's MPEG IMX professional videotape format, for example, uses bit rates of up to 50 Mbit/s at 4:2:2P@ML.
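
A toy constant-bit-rate loop might look like the following; the frame-size model is invented purely so the loop has something to react to, and real encoders use far more sophisticated rate control.

    def simulated_frame_bits(quant_step: int) -> int:
        """Invented stand-in for the encoder: coarser quantization -> fewer bits."""
        return 4_000_000 // quant_step

    def rate_control(target_bits: int, quant_step: int = 2, max_step: int = 62) -> int:
        """Toy constant-bit-rate loop: raise the quantizer step until the frame fits
        the bit budget, at the cost of more visible artifacts."""
        while simulated_frame_bits(quant_step) > target_bits and quant_step < max_step:
            quant_step += 2
        return quant_step

    step = rate_control(target_bits=150_000)
    print("chosen quantizer step:", step,
          "-> simulated frame size:", simulated_frame_bits(step), "bits")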

MPEG-2 audio encoding

MPEG-2 also introduces new audio encoding methods. These are:
  • low bitrate encoding with halved sampling rate (MPEG-1 Layer 1/2/3 LSF)
  • multichannel encoding with up to 5.1 channels

MPEG-2 on DVD

Additional restrictions and modifications apply on DVD (see the compliance-check sketch after this list).
  • Restricted to one of the following resolutions:
    • 720 x 480 pixels, 59.94 fields/sec (FCC)
    • 720 x 480 pixels, 29.97 frames/sec (FCC)
    • 720 x 576 pixels, 50 fields/sec (CCIR)
    • 720 x 576 pixels, 25 frames/sec (CCIR)
  • max. 9.8 Mbit/s
  • YUV 4:2:0
  • additional subtitles possible
  • Audio
    • MPEG-2 Layer 2 audio at 48 kHz possible
    • Dolby Digital up to 448 kbit/s at 48 kHz possible
    • DTS at 754 or 1510 kbit/s possible
    • There must be at least one track in MPEG audio or Dolby Digital audio
  • Restrictions on the GOP structure
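
The resolution and bit-rate limits above can be expressed as a small compliance check; the tuple format and function are invented for illustration and cover only those two restrictions.

    # Checks a proposed video stream against the DVD limits listed above.
    ALLOWED_MODES = {
        (720, 480, 29.97),   # FCC material, frames per second
        (720, 576, 25.0),    # CCIR material, frames per second
    }
    MAX_VIDEO_BITRATE = 9_800_000  # 9.8 Mbit/s

    def dvd_compliant(width, height, frame_rate, video_bitrate) -> bool:
        return ((width, height, frame_rate) in ALLOWED_MODES
                and video_bitrate <= MAX_VIDEO_BITRATE)

    print(dvd_compliant(720, 576, 25.0, 8_000_000))    # True: within the DVD limits
    print(dvd_compliant(1280, 720, 25.0, 8_000_000))   # False: resolution not allowed on DVD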

MPEG-3

MPEG-3 is the designation for a group of audio and video coding standards agreed upon by MPEG (Moving Picture Experts Group). MPEG-3 was designed to handle HDTV signals in the range of 20 to 40 Mbit/s.

It was soon discovered that similar results could be obtained through slight modifications to the MPEG-2 standard. Shortly thereafter, work on MPEG-3 was discontinued.

MPEG-3 should not be confused with MPEG-1 Layer 3, commonly referred to as MP3.

MPEG-4

MPEG-4, introduced in 1998, is the designation for a group of audio and video coding standards agreed upon by MPEG (Moving Picture Experts Group). MPEG-4 is primarily designed to handle low bit-rate content, from 4800 bit/s to approximately 4 Mbit/s. The primary uses for the MPEG-4 standard are web (streaming media) and CD distribution, conversational uses (videophones), and broadcast television.

MPEG-4 absorbs many of the features of MPEG-1 and MPEG-2, adding new features such as (extended) VRML support for 3D rendering, object-oriented composite files (including audio, video and VRML objects), support for Digital Rights Management and various types of interactivity.

Most of the features included in MPEG-4 are left to individual developers to implement. This means that there are very few complete implementations of the MPEG-4 standard. Anticipating this, the developers added the concept of "Profiles," allowing various capabilities to be grouped together.

MPEG-4 consists of several standards, termed "Parts", as follows.

Part 1 describes synchronization and multiplexing of video and audio (Systems).
Part 2 is a compression codec for video signals.
Part 3 is a compression codec for perceptual coding of audio signals.
Part 4 describes procedures for testing compliance.
Part 5 provides reference software for simulating the standard.
Part 6 describes the Delivery Multimedia Integration Framework (DMIF).

See also DivX.

MPEG-7

MPEG-7 is a multimedia content description standard. Unlike MPEG-1, MPEG-2 and MPEG-4, it does not deal with the coding of moving pictures and audio itself.

It was designed to standardize:

  • a set of description schemes and descriptors
  • a language to specify these schemes, called the Description Definition Language (DDL)
  • a scheme for coding the description

The combination of MPEG-4 and MPEG-7 is described by MPEG as "Tools for Killer Applications". Together, the two standards are intended to provide an efficient solution for streaming, manipulating and indexing multimedia content.
