A Software-Based MPEG-4 Video Encoder Using Parallel Processing


IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 8, NO. 7, NOVEMBER 1998, p. 909

A Software-Based MPEG-4 Video Encoder Using Parallel Processing

Yong He, Student Member, IEEE, Ishfaq Ahmad, Member, IEEE, and Ming L. Liou, Fellow, IEEE

Abstract—In this paper, we describe a software-based MPEG-4 video encoder which is implemented using parallel processing on a cluster of workstations collectively working as a virtual machine. The contributions of our work are as follows. First, a hierarchical Petri-nets-based modeling methodology is proposed to capture the spatiotemporal relationships among multiple objects at different levels of an MPEG-4 video sequence. Second, a scheduling algorithm is proposed to assign video objects to workstations for encoding in parallel. The algorithm determines the execution order of video objects, and ensures that the synchronization requirements among them are enforced and that presentation deadlines are met. Third, a dynamic partitioning scheme is proposed which divides an object among multiple workstations to extract additional parallelism. The scheme achieves load balancing among the workstations with a low overhead. The striking feature of our encoder is that it adjusts the allocation and partitioning of objects automatically according to the dynamic variations in the video object behavior. We have made various additional software optimizations to further speed up the computation. The performance of the encoder scales according to the number of workstations used. With 20 workstations, the encoder yields an encoding rate higher than real time, allowing the encoding of multiple sequences simultaneously.

Index Terms—Data partitioning, dynamic scheduling, load balancing, MPEG-4, parallel and distributed processing, Petri nets, video encoder.

I. INTRODUCTION

The current and emerging multimedia services demand many more functionalities than those offered by the traditional standards.
For example, mobile communication requires very low bit-rate video coding and error resilience across various networks; virtual reality and animation require integrated coding of hybrid natural and synthetic objects; and interactive digital video requires a high degree of object-based interactivity. Instead of traditional frame-based interaction such as fast forward and fast backward, new forms of interactivity are needed to realize such applications efficiently. The new standard, MPEG-4, which is currently being developed by MPEG, will enable the integration of the production, distribution, and content access paradigms in a multimedia environment [1]. With a flexible toolbox approach, MPEG-4 is capable of supporting diverse new functionalities, and hence will cover a broad range of present and future multimedia applications.

(Manuscript received October 31, 1997; revised May 1998. This work was supported by the Hong Kong Telecom Institute of Information Technology. This paper was recommended by Associate Editor M.-T. Sun. Y. He and M. L. Liou are with the Department of Electrical and Electronic Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. I. Ahmad is with the Department of Computer Science, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. Publisher Item Identifier S 1051-8215(98)08385-2.)

MPEG-4, due to its content-based representation nature and flexible configuration structure, is considerably more complex than previous standards. Any MPEG-4 hardware implementation is likely to be very much application specific. Therefore, a software-based implementation seems to be a natural and viable option. In addition, a software-based approach allows flexibility, portability, and scalability, and permits the inclusion of new tools, which are extremely desirable features for MPEG-4-based interactive multimedia systems.
The main obstacle in such an approach is that it requires a large amount of computing power to support real-time encoding and decoding operations. However, the latest developments in parallel and distributed systems promise a high degree of performance at an affordable cost (such as a network of workstations or PC's), provided the parallelism in the application at hand is effectively extracted.

A parallel implementation of the MPEG-4 encoder is a nontrivial task, and cannot be accomplished using a straightforward multitasking or data-partitioning strategy. This is because objects in MPEG-4 video are added to or dropped from a video scene, with their sizes varying from time to time. In addition, various objects need to be tightly synchronized. Finally, depending upon the application requirements, MPEG-4 allows us to adopt different encoding efficiencies and levels of scalability for various objects. Orchestrating the various tasks of the encoder and distributing and dividing objects into pieces for concurrent execution pose some research challenges. Thus, parallelization of the MPEG-4 encoder requires a highly efficient scheduling and load-balancing scheme. An effective implementation of the encoder also needs modeling tools that can capture the spatiotemporal relationships between different MPEG-4 objects.

We are currently building an MPEG-4-based interactive multimedia environment for supporting applications in the areas of CAD, teaching, and multimedia authoring. As a part of this system, we have implemented an MPEG-4 encoder with a software-based approach using parallel processing on a cluster of workstations. The main contributions of our work include the following.

- A Petri-net-based modeling scheme for capturing the spatiotemporal relationships between MPEG-4 video components at various levels (video session, video object, or video object plane level).

Fig. 1. MPEG-4 video codec (encoder and decoder) structure.

- Efficient parallel processing of the encoder through an effective scheduling algorithm. The algorithm uses the information generated by the model to allocate objects to workstations for parallel encoding, and ensures that the synchronization requirements among the various objects are observed, that presentation deadlines are met with a guarantee of quality of service, and that maximum speedup is obtained in terms of compression time. Allocating objects to workstations for concurrent processing is equivalent to exploiting control parallelism.
- A dynamic and adaptive data-partitioning scheme that maximizes the parallelism by further dividing an object among multiple workstations. Since the size of a video object may change from time to time, the partitioning strategy ensures load balancing among the divided parts. Division of a video object among multiple workstations is equivalent to exploiting data parallelism.

Our encoder encodes various input video objects by adjusting the allocation and partitioning of objects automatically, regardless of the dynamic variation of the video object behavior. Various levels of software optimization have been used to speed up the computation. The performance of the encoder scales according to the number of workstations used. With 20 workstations, the encoder yields an encoding rate higher than real time on some sequences. This allows us to encode multiple sequences at the same time.

The rest of this paper is arranged in the following manner. Section II gives a brief overview of the MPEG-4 video verification model.
Section III describes the proposed implementation approach in detail, including: 1) a Petri nets modeling methodology introduced to model and represent the timing constraints of the video session; 2) an effective scheduling algorithm which schedules the various subtasks of the encoder; and 3) a dynamic data-partitioning scheme used for further speedup gain. Section IV provides the experimental results, and the last section presents the conclusion.

II. OVERVIEW OF MPEG-4 VIDEO

MPEG-4 is scheduled to become an international standard in November 1998. As one of the major parts of MPEG-4, MPEG-4 video is an object-based hybrid natural and synthetic coding standard which specifies the technologies enabling functionalities such as content-based interactivity, efficient compression, error resilience, and object scalability [2]. Fig. 1 shows the overall structure of the MPEG-4 video codec (encoder and decoder), which is based on the concept of video object planes (VOP's), defined as the instances of video objects.

Fig. 2. Representation of the VOP (person Akiyo). (a) Image of original "Akiyo" VOP. (b) Binary alpha plane of "Akiyo" VOP.

The video encoder is composed of a number of VOP encoders, as is the decoder. The same coding scheme is applied to each video object separately, and the reconstructed video objects are composited together and presented to the user. User interaction with the objects, such as scaling, dragging, and linking, can be handled either in the encoder or in the decoder. In order to describe arbitrarily shaped VOP's, MPEG-4 defines a VOP by means of a bounding rectangle called a "VOP window." The window surrounds the VOP with the minimum number of macroblocks, as depicted in Fig. 2(a). There are three kinds of macroblock (MB) within a VOP window:

the transparent MB, the contour MB, and the standard MB. The contour and standard MB's include the pixels belonging to the object, while the transparent MB lies completely outside the object area.

Each VOP encoder consists of three main parts: shape coding, motion estimation/compensation, and texture coding. The shape information of a VOP is referred to as the alpha plane in MPEG-4. As Fig. 2(b) shows, the alpha plane has the same format as the luminance file, and its data indicate the characteristics of the relevant pixels (inside or outside the object). The shape coder performs the compression on the alpha plane. Because the transparent MB has no object pixels inside, it is not processed for motion and/or texture coding.

Motion estimation and compensation (ME/MC) are used to reduce temporal redundancies. A padding technique is applied on the reference VOP which allows polygon matching instead of the block matching used for rectangular images. SAD (sum of absolute differences) is used as the error measure, and is calculated only on the pixels inside the object. For an N x N block, SAD is given by

\mathrm{SAD}_N(x,y) = \sum_{i=1}^{N}\sum_{j=1}^{N} \left| c_{ij} - p_{ij}(x,y) \right| \alpha_{ij}, \quad (x,y) \neq (0,0)

or

\mathrm{SAD}_N(0,0) = \sum_{i=1}^{N}\sum_{j=1}^{N} \left| c_{ij} - p_{ij}(0,0) \right| \alpha_{ij} - \left( \frac{N^2}{2} + 1 \right)

where c and p denote the pixels of the current and reference VOP's, (x, y) is the candidate motion vector, and alpha is the binary alpha value (1 inside the object, 0 outside), so that the sum runs only over object pixels.

In addition to the basic motion techniques, unrestricted ME/MC, an advanced prediction mode, and bidirectional ME/MC (especially for the B-frame) are supported by MPEG-4 video to obtain a significant quality improvement with a small increase in complexity.

The intra and residual data after motion compensation of VOP's are coded by texture-coding algorithms, including DCT or shape-adaptive DCT (SA-DCT), MPEG or H.263 quantization, intra dc and ac prediction, and VLC, to achieve further compression. For contour MB's, a low-pass extrapolation padding technique is employed before performing the DCT. MPEG-4 also supports the scalable coding of video objects in both the spatial and temporal domains, and provides error resilience across various media.
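As a concrete illustration, the object-masked SAD above can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation; the function name is ours, and the zero-vector bias term follows the common verification-model formulation.

```python
import numpy as np

def masked_sad(cur_block, ref_block, alpha, zero_mv=False):
    """SAD computed only over object pixels (alpha == 1).

    cur_block, ref_block: N x N luminance blocks from the current and
    reference VOP's; alpha: binary mask of the current block.
    The zero motion vector is favored by subtracting N*N//2 + 1,
    an assumption taken from the usual verification-model rule.
    """
    diff = np.abs(cur_block.astype(int) - ref_block.astype(int))
    sad = int((diff * alpha).sum())
    if zero_mv:
        n = cur_block.shape[0]
        sad -= n * n // 2 + 1
    return sad
```

During motion search, one would evaluate `masked_sad` for every candidate displacement inside the search window and keep the minimum; transparent pixels never contribute to the error.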
In addition to the above basic technologies used in the encoder structure, the toolbox approach of MPEG-4 video makes it possible to achieve further improvement in some special cases by using dedicated tools. Further details on the MPEG-4 video encoder can be found in [3].

As mentioned earlier, a hardware-based MPEG-4 encoder is likely to be very much application specific. The flexible and extensible nature of MPEG-4 requires a highly flexible and programmable encoder, which is more feasible with a software-based approach. But the computational requirement of a software-based encoder is simply too enormous to be handled by a single-processor PC or workstation. It is, therefore, natural to exploit the high computational power offered by a high-performance parallel or distributed system. The architecture of the MPEG-4 encoder shown in Fig. 1 also happens to be very suitable for distributed computing. Each input VOP is encoded separately, and efficient performance can be achieved by decomposing the whole encoder into separate tasks with individual VOP encoders and running these tasks simultaneously.

In a simpler approach, one could use a single workstation to encode one VOP, but this scheme does not fully exploit the computational power of the system because it is not scalable and the degree of parallelism is rather limited. A more effective approach is to form groups of workstations, with each group working on a single VOP while additional parallelism is exploited by partitioning a VOP among the workstations within the group. This scheme, however, requires a careful division of video objects as they are interrelated.
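To make the VOP-window and macroblock terminology of Section II concrete, the sketch below derives the bounding VOP window from a binary alpha plane and classifies each 16 x 16 macroblock as transparent, contour, or standard. This is an illustrative sketch; the function and variable names are ours, not the paper's.

```python
import numpy as np

MB = 16  # macroblock size in pixels

def vop_window(alpha):
    """Bounding rectangle of the object, expanded to whole macroblocks."""
    ys, xs = np.nonzero(alpha)
    y0 = (ys.min() // MB) * MB
    x0 = (xs.min() // MB) * MB
    y1 = (ys.max() // MB + 1) * MB
    x1 = (xs.max() // MB + 1) * MB
    return y0, x0, y1, x1

def classify_mbs(alpha):
    """Map each macroblock inside the VOP window to its kind:
    0 = transparent (no object pixels), 1 = contour (partly inside),
    2 = standard (entirely inside the object)."""
    y0, x0, y1, x1 = vop_window(alpha)
    kinds = {}
    for y in range(y0, y1, MB):
        for x in range(x0, x1, MB):
            inside = int(alpha[y:y + MB, x:x + MB].sum())
            if inside == 0:
                kind = 0
            elif inside == MB * MB:
                kind = 2
            else:
                kind = 1
            kinds[(y // MB, x // MB)] = kind
    return kinds
```

Transparent macroblocks found this way are the ones the encoder skips for motion and texture coding, which is exactly what makes rectangular work division unbalanced.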
Furthermore, the sizes of VOP's change with time, implying that the distribution and partitioning of VOP's need to be adjusted accordingly. Since this must be done in real time, the cost of scheduling and distribution must be kept low to ensure that the benefits gained from an efficient parallelization are not outweighed by a lengthy scheduling time.

In our scheme, we divide a given number of workstations into groups, and assign the encoding task of one VOP to one group. However, when the VOP's are distributed to different groups of workstations, the spatiotemporal relationships between the various VOP's must be preserved. Such relationships can be kept to an extremely detailed level by using a Petri nets model, which is described below. A scheduling algorithm is proposed to allocate the workstations proportionally, and to decide their execution sequence in accordance with the priority of each VOP so that the timing constraints can be satisfied. The data of a VOP are divided among the workstations within a group, allowing a further gain in computing speed. For distributing the data of a VOP, various simple partitioning schemes are possible. We propose a shape-adaptive data-partitioning scheme that ensures load balancing among the divided pieces and incurs a low overhead. The details of the modeling methodology, scheduling algorithm, and data-partitioning scheme are described next.

III. PROPOSED IMPLEMENTATION APPROACH

Most multimedia applications have real-time requirements which demand that the codec be highly efficient. In order to deal with arbitrarily shaped objects, more sophisticated techniques are needed to achieve efficient compression. This, however, introduces extra complexity in the encoder, which in turn requires additional computational power. Since the encoder of MPEG-4 video is much more complex and time consuming than the decoder, it is more challenging to speed up the computation in the encoder.

A. Modeling MPEG-4 Video Sequence with Petri Nets

In the MPEG-4 video encoder, one of the most important issues to consider is the synchronization of the various video objects. Each object may have certain presentation timing constraints which, in turn, may depend on the other objects. The playout time requirements and associated synchronization constraints among multiple video objects must be satisfied in real time to guarantee that a smooth flow of the video sequence is presented to the user.

Fig. 3. Graphical representation of a Petri net.
Fig. 4. Playout time chart for a video session.

To identify the timing constraints among multiple objects, a synchronization reference model is required to describe the temporal relationships for determining an appropriate scheduling scheme. Several modeling tools have been proposed for specifying the temporal behavior of various multimedia systems [4]. We choose Petri nets as the modeling tool since they are a simple but effective tool for describing and studying systems with concurrent, distributed, and parallel characteristics [5]. A number of variations of the Petri nets model, such as OCPN [6], XOCPN [7], and TSPN [8], have been widely used in multimedia communication applications due to their intuitive graphical representation and the simplicity of the modeling concept.

Fig. 3 depicts a graphical representation of a Petri net. The circles and bars represent the places and transitions, respectively, and the arcs indicate both input and output flow directions. A Petri net is executed by firing rules that transmit the marks, or tokens, from one place to another; a firing is enabled only when each input place has a token inside. Thus, through firing transitions and the token distribution state, Petri nets can describe information flow or system activities in a straightforward way.

Because a Petri nets modeling structure may become complex when it is used to model a complicated real-world system, a hierarchical Petri net can be used to refine the system behavior in a step-by-step fashion. As for the MPEG-4 video session, due to its object-based nature, the number of objects presented within a scene may vary from time to time since an object can be added to or dropped from the scene randomly.
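The firing rule just described can be captured in a few lines. This is a minimal sketch of an ordinary (untimed) Petri net; the class and method names are ours.

```python
class PetriNet:
    """Minimal Petri net: a transition is enabled when every one of its
    input places holds at least one token; firing removes a token from
    each input place and deposits one in each output place."""

    def __init__(self, tokens):
        self.tokens = dict(tokens)   # place name -> token count
        self.transitions = {}        # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.tokens.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.tokens[p] -= 1
        for p in outputs:
            self.tokens[p] = self.tokens.get(p, 0) + 1
        return True
```

In the hierarchical model of the paper, places would correspond to object intermedia units and transitions to timing constraint points; a transition firing then marks a synchronization instant at which all joined objects have completed their current units.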
Moreover, while some of the objects may be tightly time dependent on each other, and thus must be synchronized accordingly, others may not need to be stringently synchronized. To represent such a complex session, we can utilize hierarchical Petri nets with a structure similar to the syntax definition of MPEG-4 video, which consists of the video session (VS) level, video object (VO) level, video object layer (VOL) level, and video object plane (VOP) level (for the sake of simplicity, our implementation considers only the VS, VO, and VOP levels).

By using a hierarchical model, we can achieve coarse- or fine-grained synchronization by applying scheduling schemes at different levels. Fig. 4 shows the playout time chart of a general MPEG-4 video sequence. The sequence has four video objects (VO's): VO1, VO2, and VO3 start at time unit 0, and VO4 starts at time unit 4. VO1 and VO2 are synchronized with each other and both end at time unit 4, while VO3 and VO4 are also synchronized and end at time unit 12. The duration of a frame is two time units for both VO1 and VO2, and four time units for VO3 and VO4.

Fig. 5. Hierarchical Petri nets model of video presentation.

Fig. 5 represents the hierarchical Petri nets model for the above case. In this model, we define the place as an object intermedia unit (OIU) and the transition as a timing constraint point (TCP). At the VS level, an OIU represents the whole video session with just two TCP's (the session start and end points). At the VO level, each OIU represents one object within the session; here, the TCP's indicate the temporal relationships and timing constraints among the various objects. At the VOP level, each OIU represents one frame of the object, whereas the TCP's indicate the intra- and/or inter-VOP synchronization at the frame level.

Generally, all video objects play out simultaneously in a natural video session, which results in the same frame rate and presentation deadlines. In some synthetic sequences, different video objects may be introduced or halted randomly by operations such as user interaction and content-based retrieval, and therefore may have different timing characteristics. In order to build a Petri nets model for such dynamic behavior, we have to obtain all of the necessary information, such as processing times, frame rates, object dependencies, and synchronizations, beforehand. For nonreal-time applications, it is possible to generate the complete Petri nets model for static scheduling, with the temporal information of all video objects available in advance. For real-time applications, however, such knowledge can only be obtained at run time, and the model is generated partially along the playout sequence, which in turn calls for a dynamic scheduling scheme. For example, in the above case, we can obtain the playout deadlines and frame rates of VO1, VO2, and VO3 after a short time of observation at the beginning of the session. The model can then be constructed, and it remains the same as long as the status of all VO's is stable. When a new object (VO4) is added or some existing objects (VO1 and VO2) are deleted at a certain time (time unit 4), the model construction is changed according to the new knowledge obtained after another short observation time.

B. Scheduling Objects to Workstations

We use a scheduling algorithm to allocate the objects in a video session to workstations. The objective of a scheduling algorithm in a parallel processing environment is to minimize the overall execution time of a concurrent program by properly allocating the tasks to the processors and sequencing their executions [9].
A scheduling algorithm can be characterized as either static or dynamic. A static scheduling algorithm determines the schedule with complete knowledge of all of the tasks before program execution. In contrast, a dynamic scheduling algorithm deals with task assignment at run time because the information about the tasks is not available in advance. Static scheduling incurs little run-time cost, but cannot adapt to the nondeterministic behavior of the system. On the other hand, although dynamic scheduling is more flexible, as it can adjust to system changes, it incurs a high run-time cost. In an MPEG-4 video session, even though a static scheduling scheme is feasible for some nonreal-time applications, it is not suitable for most real-time applications because of the unpredictable characteristics of the VOP's.

In our implementation, we have designed a hybrid static and dynamic scheduling scheme. Using the Petri nets model, the information about the video objects can be acquired by observing the objects for a short time at the beginning of each presentation period. The length of the presentation period depends on the availability of objects. During that period, the temporal characteristics of the video objects, such as frame rates and synchronizations, are assumed to be relatively stable. Therefore, we can perform static scheduling on each period with the knowledge obtained at the beginning of the period, and reschedule the new period with the updated parameters. Such a scheduling scheme combines the advantages of the static and dynamic strategies, and, with a little overhead, adapts to the variations of both deterministic and nondeterministic video objects. Fig. 6 depicts a scheduling scenario at the VO level for the example shown in Fig. 4. The scheduling algorithm is invoked whenever a new presentation period begins.
The scheduling period is bounded by successive object scheduling instants (OSI's), and the complexity of the scheduling depends on the number of OSI's during the whole video session.

A number of scheduling algorithms have been developed for distributed and parallel systems [10]. The proposed algorithm is a variant of the earliest deadline first (EDF) algorithm, which has been widely employed in many applications [11]. The basic principle of this algorithm is that tasks with earlier deadlines are assigned higher priorities and are executed before tasks with lower priorities. In our implementation, the VOP's with the earlier playout deadlines or synchronization constraints are encoded and delivered first.

Fig. 6. Petri nets model for dynamic scheduling.
Fig. 7. Petri nets model of EDF scheduling.
TABLE I. MPEG-4 VIDEO TOOLS DEPLOYMENT

Fig. 7 shows the Petri nets model of the scheduled VOP execution order at the VOP level for the case shown in Fig. 4. The first VOP's of VO1, VO2, and VO3 have a playout time of unit 0, which is earlier than those of the other VOP's; thus, they are processed first. Then the VOP's whose playout deadline is time unit 2 (the second VOP's of VO1 and VO2) are processed next, and so on. For tasks with the same TCP, we allocate the available processors to each VOP proportionally according to the size ratio between the VOP's, since a video object with a larger size usually requires more computing power, and vice versa.

C. Dynamic Shape-Adaptive Data Partitioning

Parallel programming paradigms can be classified into various models, such as the object-oriented model, the control-parallel model, and the data-parallel model. The data-parallel paradigm emphasizes exploiting parallelism in large data sets. The main idea of data partitioning in video encoding is to decompose the whole frame data into a number of data blocks, and map these blocks onto the corresponding processors. Since the processors perform the computation on their own data simultaneously, a high speedup can be achieved.

In a data-parallel program, the issue of load balancing should be carefully addressed. Load balancing means equalizing the processors' workloads to minimize their idle times [12]. In MPEG-4 video, the size and location of each object may vary with time, and such behavior cannot be predicted beforehand. Therefore, no matter how the initial tasks are assigned, the workloads of the processors will become unbalanced later on: some processors will become highly loaded, while others are idle or lightly loaded. Furthermore, some computation-intensive algorithms of the encoder are data dependent, and their execution times differ for various data regions. For example, as depicted in Table I, some operations are performed on all macroblocks, while others act only on contour and/or standard MB's.
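The two scheduling rules above, earliest-deadline-first ordering across VOP's and processor allocation proportional to VOP size at a common TCP, can be sketched as follows. This is illustrative only; the largest-remainder rounding is our assumption, as the paper does not specify a rounding rule.

```python
import heapq

def edf_order(vop_deadlines):
    """Return VOP ids in earliest-deadline-first encoding order.
    vop_deadlines: list of (deadline, vop_id) pairs; ties are broken
    by vop_id via tuple comparison."""
    heap = list(vop_deadlines)
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap)[1])
    return order

def allocate_procs(sizes, nproc):
    """Split nproc workstations among concurrent VOP's in proportion to
    their sizes (e.g., macroblock counts). Each VOP gets at least one
    workstation (assumes nproc >= len(sizes)); leftover workstations go
    to the largest fractional remainders."""
    total = sum(sizes)
    quotas = [nproc * s / total for s in sizes]
    shares = [max(1, int(q)) for q in quotas]
    by_remainder = sorted(range(len(sizes)),
                          key=lambda i: quotas[i] - int(quotas[i]),
                          reverse=True)
    for i in by_remainder:
        if sum(shares) >= nproc:
            break
        shares[i] += 1
    return shares
```

For the Fig. 4 example, the first VOP's of the three initial objects (deadline 0) pop before the deadline-2 VOP's, and a VOP three times larger than its sibling at the same TCP receives roughly three times the workstations.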
Thus, the problem of load balancingbecomes nontrivial.We have developed a dynamic shape-adaptive data partitionmethod to guarantee the workload balancing during the wholevideo session with low run-time overhead and fine granularity.This method can be explained by considering that the entireMPEG-4 video session can be characterized by the numberof time intervals. The time interval boundary depends on thevariation of the VOP window size. A new time interval beginswhenever a VOP window size changes. For example, Fig. 8shows the intervals of the test sequence “Weather” (personwoman) with the number of frames ranging from 150 to 300.Since the knowledge of video objects, including the objectsize and the contour and standard macroblock distribution, canbe obtained at the beginning of the interval. We can performthe partitioning scheme (as described below) within each timeinterval. During that interval, we can assume that the spatialcomputation distribution is relatively stable, and that thereis no need to change the partition. Therefore, the proposedmethod can handle the object variations with a small run-timeoverhead. Since most of the algorithms are macroblock based,

HE et al.: MPEG-4 VIDEO ENCODER915Fig. 8. Time interval example of the “Weather” sequence (person woman).(a)(b)(c)Fig. 9. Some simple partitioning methods. (a) Strip-wise decomposition. (b)Blockwise decomposition. (c) Recursive bisection.we employ a macroblock-based data partitioning to map aninteger number of macroblocks to each processor. This allowseach processor to execute the compression algorithm on itsown data.In its simple form, a data-partitioning method may restrictthe subregion to a rectangular shape to avoid the use ofcomplex data structures. Fig. 9 shows some simple partitioningmethods. A stripwise partition divides the whole VOP windowhorizontally or vertically into subregions for processors. Itis easy to determine the area of subregions for correspondingprocessors, while the number of boundary pixels is high. Ablockwise partition divides the VOP window evenly alongboth the horizontal and vertical dimensions. In this case, thenumber of boundary pixels of the subregion is minimal, butthe number of processors to be used is restricted. The recursive bisection method [13] divides the whole VOP windowrecursively in a binary fashion. Although the computationalload can be optimally distributed, it is relatively expensive toexecute the recursive operations during the decomposition.For MPEG-4, when an object is large enough and almostfills the entire VOP window, rectangular region partitioningmethods may achieve good load balancing because the contourand standard MB’s are likely to be distributed uniformlyamong multiprocessors. In general, some subregions of thewindow are full of transparent MB’s, while others may be fullof contour and/or standard MB’s. Therefore, no partitioningmethod can equally distribute rectangular subregions in astraightforward way. In addition, because the object size may(a)(b)Fig. 10. Rectangular block partition example on “Children.” (a) Strip-wisepartitioning. 
(b) Block-wise partitioning.be too irregular, it may not be possible to employ the stripwiseor blockwise partition. Fig. 10 shows a partitioning example ofboth stripwise and blockwise decomposition on the first frameof the test sequence “Children” (QCIF). With nine processorsavailable, it is apparent that some processors are assignedalmost all transparent macroblocks (which require low com-

916IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 8, NO. 7, NOVEMBER 1998(a)(b)(c)Fig. 11.(d)Arbitrary partitioning example.puting power), while other processors may be overloaded withcomputation-intensive contour and standard macroblocks.In our shape-adaptive partitioning method, the initial separated subregion may have an arbitrary shape to minimizethe imbalances. The extended rectangular subalpha plane isfurther redefined to avoid unnecessary computation for eachprocessor.As depicted in Fig. 11, gray blocks represent the contourand standard MB’s, while white blocks represent the transparent MB’s. By using the alpha plane information, first, weget the statistical distribution of the contour and standardMB’s. Then, we equally distribute them to a given number ofprocessors. As illustrated in Fig. 11(a), there are 20 contourand standard MB’s within the window, and each processor is assigned five contour and standard MB’s. Thus, eachmay get an arbitrarily shaped subregionprocessor ( –[see Fig. 11(b)]. As it stands, this partitioning will causean irregular data structure problem. Moreover, because thebit stream can only indicate the rectangular data formationby the syntax such as vop horizontal/vertical mc spatial refand vop width/height, the decoding and picture compositionwill become mo
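The two mechanisms above, interval detection from VOP window size changes and equal-load assignment of contour/standard macroblocks, can be sketched together. The raster-scan split below is our simplification (all names are ours); the paper's scheme additionally redefines extended rectangular sub-alpha planes around each arbitrarily shaped subregion.

```python
import numpy as np

def interval_starts(window_sizes):
    """A new time interval begins whenever the VOP window size changes;
    the partition is recomputed only at these frames.
    window_sizes: per-frame (width, height) of the VOP window."""
    return [0] + [t for t in range(1, len(window_sizes))
                  if window_sizes[t] != window_sizes[t - 1]]

def shape_adaptive_partition(mb_map, nproc):
    """Assign macroblocks to processors so that each processor receives
    (nearly) the same number of contour/standard MB's.
    mb_map: 2-D array over the VOP window, 0 = transparent,
    1 = contour or standard. Returns an array of processor ids,
    with -1 marking transparent MB's (no work to assign)."""
    coords = [(r, c) for r in range(mb_map.shape[0])
                     for c in range(mb_map.shape[1]) if mb_map[r, c]]
    target = len(coords) / nproc          # object MB's per processor
    assign = np.full(mb_map.shape, -1)
    for k, (r, c) in enumerate(coords):
        assign[r, c] = min(int(k / target), nproc - 1)
    return assign
```

Because the assignment counts only contour and standard macroblocks, a processor whose subregion contains many transparent macroblocks still receives the same amount of actual work as the others, which is the load-balancing property the scheme is after.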

