FPGA IMPLEMENTATION OF MIMO SYSTEMS FOR ENSURING MULTIMEDIA QOS OVER WIRELESS CHANNELS


FPGA IMPLEMENTATION OF MIMO SYSTEMS FOR ENSURING MULTIMEDIA QOS OVER WIRELESS CHANNELS

Saket Gupta and Sparsh Mittal
B.Tech. IV Year
Department of Electronics and Computers, Indian Institute of Technology, Roorkee

For partial fulfillment of Bachelor of Technology in Electronics and Communications Engineering

Under the supervision of
Dr. S. Dasgupta
Assistant Professor, Department of Electronics & Computers
Indian Institute of Technology, Roorkee

Date: May 22, 2008

CERTIFICATE

This is to certify that Mr. Saket Gupta (Enrollment No. 040157) and Mr. Sparsh Mittal (Enrollment No. 041112), students of B.Tech IV year, Electronics and Communications Engineering, Department of Electronics & Computer Engineering, Indian Institute of Technology, Roorkee, have successfully worked on a research project entitled "FPGA Implementation of MIMO Systems for Ensuring Multimedia QoS over Wireless Channels" as a part of the final year B.Tech project.

They have obtained extremely competent results for the problem assigned to them using many novel approaches. Their results are satisfactory, and they have 4 accepted and 2 submitted research papers in reputed conferences of ACM and IEEE.

Date: May 22, 2008

Dr. Sudeb Dasgupta
Assistant Professor,
Department of Electronics & Computer Engineering,
Indian Institute of Technology Roorkee

ACKNOWLEDGEMENT

We would like to thank our project guide, Dr. Sudeb Dasgupta, for his help and guidance in the Final Year B.Tech. project, and for continuously monitoring our work by taking regular reports. Under his mature guidance we felt quite secure in taking up work even in the most challenging areas, in which we had no background whatsoever. Dr. Dasgupta gave us guidance on what our line of action should be, which is very vital. Although the department has limited equipment in this area, he provided access to all that he had, including working in the lab.

We would also like to thank Dr. Ankush Mittal for his help and support. His constant encouragement and background vision for our work, apart from his wonderful suggestions and personal interest, have helped us a lot. Through his personal example we have learnt to be dedicated and responsible for our work, and this made our working on the project a truly learning experience. He has very heartily provided us with answers to our unceasing questions and doubts in the area.

We would also like to thank our seniors: Amit Pande (B.Tech '06 batch), PhD student, Iowa State University, Ames; Praveen Kumar Verma (senior PhD student, IIT Roorkee); and Naveen (M.Tech 2nd year), who helped us come up with the ideas for our project and gave us help from time to time.

Sparsh Mittal
Saket Gupta
B.Tech. (IV Year, Electronics and Communications Engineering)
Department of Electronics & Computer Engineering
Indian Institute of Technology Roorkee

ABSTRACT

Existing multimedia software for e-learning does not provide excellent multimedia data service to the common user; hence e-learning services are still short of intelligence and sophisticated end-user tools for visualization and retrieval. Network QoS (Quality of Service) becomes critical for precision-requiring low-motion video streaming over scarce-resource wireless networks with fluctuating bandwidth, fading channels, multiple paths, and the requirement of minimal and optimal power usage. In this project, QoS for low-motion video streaming with the best perceptual quality is guaranteed with novel techniques. We consider educational videos for our research. Our strategy is to segment the video into different segments, code them with our pooled compression scheme (CEZW), and employ Forward Error Correction coded OFDM signals for transmission over MIMO (Multiple Input Multiple Output) wireless channels.

We guarantee transmission QoS for such compressed video streaming with maximum reliability and perceptual quality; selective transmission of video frames for least data redundancy; high data rates with MIMO systems; optimal power allocation for transmission at different bandwidth levels; and preferential allocation of fluctuating bandwidth according to the relative importance of the segmented video blocks. We exploit both Spatial Multiplexing and Alamouti Space Time Block Coding for transmission. Experimental results demonstrate the effectiveness of our proposed schemes. We exploit low-motion video characteristics to achieve maximum compression and streaming throughput. The MIMO system is implemented on a Xilinx Spartan FPGA (Field Programmable Gate Array). Parallel implementation of the MIMO-OFDM internal configuration on the FPGA, through a specifically designed process using the System Generator tool, guarantees optimal performance of the testbed, measured through parameters like prototype development time, synthesis error elimination, processing time for transmission bit generation and decoding, FPGA resource utilization, and reliability, compared to conventional FPGA implementation flows such as those employing VHDL and Verilog. The results are compared with state-of-the-art transmission and hardware schemes over the network to illustrate the superior performance of our approach.

PUBLICATIONS FROM THE WORK

PUBLISHED AND ACCEPTED:

1. Saket Gupta, Sparsh Mittal, S. Dasgupta and A. Mittal, "MIMO Systems For Ensuring Multimedia QoS Over Scarce Resource Wireless Networks", published in the proceedings of the ACM International Conference On Advance Computing, India, February 21-22, 2008. Proceedings yet to arrive.

2. Sparsh Mittal, Saket Gupta, and S. Dasgupta, "System Generator: The State-Of-Art FPGA Design Tool For DSP Applications", accepted for the proceedings of the Third International Innovative Conference On Embedded Systems, Mobile Communication And Computing (ICEMC2 2008), August 11-14, 2008, Global Education Center, Infosys.

3. Sparsh Mittal, Saket Gupta, and S. Dasgupta, "FPGA: An Efficient And Promising Platform For Real-Time Image Processing Applications", accepted for the proceedings of the National Conference On Research & Development In Hardware & Systems (CSI-RDHS 2008), June 20-21, 2008, Kolkata.

4. Saket Gupta, Sparsh Mittal and Sudeb Dasgupta, "Guaranteed QoS with MIMO Systems for Scalable Low Motion Video Streaming over Scarce Resource Wireless Channels", accepted for the proceedings of the International Conference on Information Processing (ICIP 2008), August 8-10, 2008. Proceedings to be published by Springer.

SUBMITTED:

5. Sparsh Mittal, Saket Gupta, and S. Dasgupta, "MIMO Systems on FPGA using System Generator for Providing Educational Multimedia Services to Masses", submitted to IEEE TENCON 2008, Hyderabad, India, November 18-21, 2008.

6. Saket Gupta, Sparsh Mittal and S. Dasgupta, "Optimal Performance FPGA Testbeds for Low Motion Video Streaming over MIMO Wireless Channels", submitted to the 15th IEEE International Conference on High Performance Computing (HiPC), December 17-20, 2008.

TO BE SUBMITTED:

7. Saket Gupta, Sparsh Mittal and Sudeb Dasgupta, "OTMS: An Optimal Testbed for Low Motion Video Streaming over MIMO Wireless Channels", to be submitted to the Elsevier International Journal of Electronics and Communications.

TABLE OF CONTENTS

CERTIFICATE
ACKNOWLEDGEMENT
ABSTRACT
PUBLICATIONS
LIST OF FIGURES
CHAPTER 1: INTRODUCTION
1.1 Need for Video Compression
1.2 Need for QoS Enhancement for Multimedia Delivery
1.3 Need for MIMO Testbeds
1.4 Problem Statement
1.5 Contribution of this Work
1.6 Organisation of the Report
CHAPTER 2: BACKGROUND STUDY
2.1 Video Compression Scheme
2.1.1 DWT: The Discrete Wavelet Transform
2.1.2 Color Embedded Zerotree Wavelet (CEZW) Scheme
2.1.3 Motion Estimation and Motion Compensation as in MPEG-1
2.1.4 Frame Packaging
2.2 MIMO Issues
2.2.1 Existing MIMO Coding Techniques
2.2.2 MIMO-OFDM
2.3 MIMO FPGA Testbed Issues
2.3.1 Utilization of FPGA in DIP and DSP Applications
2.3.2 Superiority of FPGA over Other Implementation Platforms
2.4 FPGA Design Options
2.4.1 Xilinx System Generator (XSG)

2.4.2 Superior Performance Offered by XSG
CHAPTER 3: SYSTEM OVERVIEW
3.1 Video Classification and Segmentation
3.2 Pooled Video Compression
3.3 MIMO System Architecture
3.3.1 FEC and OFDM Coding for Overcoming ISI
3.3.2 Spatial Multiplexing
3.3.3 STBC and Transmitter System
3.3.4 Channel and CSI
3.4 QoS Guaranteeing
3.4.1 Fluctuating Bandwidth
3.4.2 Optimal Power Allocation (OPA)
3.4.3 Data Rate Increase
3.4.4 Reliability
3.5 XSG Implementation of MIMO System on FPGA
3.6 FPGA Hardware Design
CHAPTER 4: IMPLEMENTATION
4.1 Steps That Led to the Final Design
4.2 Code Development Model
4.2.1 Segmentation Module
4.2.2 Compression Module
4.2.3 Network Streaming Scheme
4.2.4 Bandwidth Estimation
4.2.5 MATLAB to FPGA
4.3 Challenges / Failures Faced
4.4 Code Development Features for Higher Performance
4.5 Limitations & Bottlenecks
CHAPTER 5: RESULTS
5.1 PIB Frame Packaging Compression
5.2 Data Rate Increase
5.3 Reliability
5.4 Bandwidth Allocation
5.6 BER Through FPGA Implementation of Hybrid Alamouti
5.7 Optimal Performance of XSG for FPGAs
CHAPTER 6: CONCLUSIONS AND FUTURE WORK
REFERENCES
APPENDIX: CONTENTS OF THE

List of Figures

CHAPTER 1: INTRODUCTION

Demand for multimedia services over wireless is rapidly increasing, while the expectation of quality for these services is becoming higher and higher. However, the inherently limited channel bandwidth and the unpredictability of channel propagation are significant obstacles for wireless communication providers in offering high quality, reliability and data rates at minimum cost. Transmission and streaming of e-learning videos has been a hot topic and a challenging issue for many years. Real-time delivery or streaming is essential for most educational structures. Many institutes, such as MIT and the IITs, have opened their web servers for free lecture-on-demand on several courses [1-2]. The concept of remote laboratories also demands real-time multimedia content delivery [3-4].

Educational videos possess inherent characteristics which can be exploited for robust and optimal transmission over wireless. These include acceptable coding with a scalable bitstream, provisioning of low bpp for transmission using suitable coding schemes, use of a static camera, predefined components like instructor, blackboard and background, and a low component motion rate in the video allowing minimal transmission of coded bits. General video coding standards and formats like MPEG-1, MPEG-2, H.261, etc. achieve a high rate of video compression, but

these characteristics of educational videos are not dealt with separately by these standards. Thus, an end-to-end transmission structure for such videos needs to be assembled.

Fig. 1. Snapshots of classroom lecture sessions in low motion videos.

1.1 Need for Video Compression

The smallest unit of quantization in an image is called a pixel element, or pixel. A standard video monitor usually displays a frame with a resolution of 800 x 600 pixels. In a color image, a pixel is represented by 3 bytes of data (one each for the Red, Blue and Green components). Thus even one uncompressed image requires 1.373 MB of storage, and one hour of video at 15 frames per second would require about 72 GB of space on a hard disk, which is impractical to transmit. This leads to the need for compression of videos.

One hour of video coded with the MPEG standards still takes 500-600 MB of storage. MIT videos are coded in RealVideo, and the size is further decreased to around 160 MB. But this is also unsatisfactory. Educational lectures are slow-moving videos, and specific applications built to compress them on the basis of their content can achieve very high compression ratios. In this project we have achieved a high compression rate for educational videos, providing a scalable solution to ensure the best multimedia QoS over various network conditions.

1.2 Need for QoS Enhancement for Multimedia Delivery

The distribution of network resources is generally done statically in traditional multimedia frameworks. However, serious blockages towards such educational multimedia delivery exist. Such delivery and QoS issues are summarized as follows:

1. QoS issues must be addressed directly through network and channel techniques, necessitating techniques far superior to traditional protocol development for guaranteeing QoS. Such a need is more acute in video streaming.

2. An efficient coding scheme is required that avoids transmission of redundant data. Generally, in such videos, the frame-to-frame motion of instructor, instructional blackboard or background macroblocks is very slow. Conventional coding schemes do not exploit such redundancy.

3. Fluctuating bandwidth is a bottleneck for viewing lecture videos at good resolution because of their large size.

4. Educational lectures are extensively used in various institutions and firms, thus entailing delivery at minimum cost and requiring the most optimal power usage for transmission. With fluctuations in bandwidth, the power requirement and channel noise also change, requiring dynamic system adaptation.

5. In a bit rate regulation scheme, the educational video source might sometimes be required to decrease its output flow due to high traffic load across the network. This decrease certainly leads to quality degradation (since the quantization distortion becomes more noticeable at lower bit rates). However, real-time educational video streaming requires high data rates to ensure the best uninterrupted perceptual quality of the video. Thus, a tradeoff between the two requirements is needed.

6. High reliability is required, as any loss of instructional content is prohibited.

7. Fading in wireless channels during transmission results in unpredictable loss.

8. Multi-path delays and ISI must be accounted for, as they are unsafe for educational video streaming (which requires high precision in bit transmission).

Although considerable work has been done on content-based classification [5-6], content-based streaming [7], bandwidth adaptation and network issues [8], a complete framework that addresses all these issues to provide an end-to-end solution with QoS for educational videos

does not exist. Many attempts have been made to exploit the above characteristics to overcome these blockages. Liu et al. [9] provide a real-time content analysis method to detect and extract content regions from instructional videos and then adjust the Quality of Service (QoS) of video streams dynamically based on video content. However, real-time network scenarios as mentioned above are not included. [7] uses a content-based retransmission scheme, but retransmission is generally not preferred in streaming over wireless. In videos with a static camera, there is a need to segment an individual frame into objects so that special coding techniques can be applied to each of them. Moreover, the regions with instructional content obtained by the other approaches have arbitrary dimensions which cannot be directly used with the CEZW compression scheme or ISO/ITU standards like MPEG-1, H.261, etc. [10].

In the area of formulating resource allocation schemes, Zhang et al. [11] address such resource allocation problems. The work is novel and addresses many QoS issues from the network and protocol perspective. Real-time QoS guaranteeing, however, requires wireless network behavior feedback and channel state information for effective feedback and system adjustment.

1.3 Need for MIMO Testbeds

WiFi and the existing high-speed cellular networks being deployed today meet some of the above needs, but OFDM-MIMO, used by WiMAX 802.16e and beyond-3G systems, is the technology needed to allow for economic and scalable wireless broadband. MIMO and OFDM are key technologies for enabling the wireless industry to deliver on the vast potential and promise of wireless broadband.

MIMO systems can be used to increase system capacity as well as data reliability in wireless communication systems. Research has gone into developing space-time codes [12] for transmission over MIMO systems. While these codes provide an increase in capacity while improving data reliability, they assume that all data bits are equally important to the receiver. However, in videos coded using most of the current standards, different parts of the bitstream have different importance. This is especially so in educational videos [13]. For high data rates, spatial multiplexing schemes are employed for parallel sub-stream transmission [14]. MIMO

systems are also used to distribute the available power effectively between different video segments in the most optimal way. [15] presents an unequal power allocation (UPA) scheme for transmission of JPEG compressed images over MIMO systems. [16] guarantees QoS on MIMO wireless for enhancement and base layers, with differential power allocation. However, these works operate under a constant transmit power constraint. Power requirements depend heavily upon network conditions, video quality and network bandwidth. A low-cost optimal power allocation can be achieved only by considering all three factors simultaneously. While a large amount of work has been done on UPA for SISO wireless systems, there is very little published work to date on UPA for image and video communication over MIMO systems, and practically no such research on real-time streaming.

MIMO systems generally require a large processing time when working with video lectures, as the blocks of bits generated from compressed educational videos are still quite large in number for real-time processing, thus necessitating a huge processing time in software. FPGAs can be employed for speed enhancement, as they offer parallel implementation of time-consuming blocks, increasing the speed drastically. Where the speed of software is limited by the internal processor clock and other processes running on the system, dedicated hardware for such MIMO systems can be developed using FPGAs.

Recently, Field Programmable Gate Array (FPGA) technology [18] has become a viable target for the implementation of algorithms suited to Digital Signal Processing applications [19]. FPGAs are nonconventional processors built primarily out of logic blocks connected by programmable wires. Each logic block has one or more lookup tables (LUTs) and several bits of memory. As a result, logic blocks can implement arbitrary logic functions (up to a few bits). Therefore FPGAs, as a whole, can implement circuit diagrams by mapping the gates and registers onto logic blocks. With more than 1,000 built-in functions as well as toolbox extensions, MATLAB is an excellent tool for algorithm development and data analysis [20]. An estimated 90% of the algorithms used today in DSP originate as MATLAB models [21]. Simulink is a graphical tool which lets a user graphically design the architecture and simulate the timing and behavior of the whole system. It augments MATLAB, allowing the

user to model the digital, analog and event-driven components together in one simulation. Using Simulink, one can quickly build up models from libraries of pre-built blocks. Xilinx System Generator (XSG) for DSP is a tool which offers block libraries that plug into Simulink, containing bit-true and cycle-accurate models of the FPGA's particular math, logic, and DSP functions.

1.4 Problem Statement

The problem tackled in this project is to design a framework for an end-to-end e-learning solution capable of dynamic video compression and transmission over scarce-resource wireless networks, and to implement the system on dedicated hardware for real-time, robust processing. The system must be competent to guarantee all necessary QoS in wireless networks without manipulating the network protocol systems.

1.5 Contributions of the Work

In this project, we propose a new approach for guaranteeing QoS by bridging all the above-mentioned gaps in transmission, with network-aware optimal resource allocation for educational video streaming. This approach, employing MIMO systems implemented on FPGA, is applicable to all low-motion videos and is exploited and explained in this project for educational videos. The major contributions of our work are:

a) The system rises above conventional network protocol issues to address QoS for videos.

b) A streaming server-client system is enabled to perform real-time processing of videos (using FPGAs).

c) Optimal bitstream generation (without redundancy) by a pooled compression scheme.

d) Network QoS of optimal power, bandwidth adaptability, high reliability, high data rate and no loss due to routing delays is guaranteed by the system.

e) We combine both spatial multiplexing and STBC schemes to achieve high data rates even over long distances by the use of MIMO channels. Thus we use the available spectrum with the utmost efficiency to allow higher data throughput over the wireless link.

f) The huge processing time required by the MIMO-OFDM system is reduced to a fraction of the original by its implementation on an FPGA testbed.

g) The method employed for such implementation is the most optimal as compared to other conventional methods of 'burning' systems onto FPGAs.

h) Our system works with different kinds of lecture videos, with varying illumination, changing lecture environment and noise (unpredictable situations) in the lecture video.

i) The MIMO-OFDM coding scheme for video streaming eliminates many conventional complex processing techniques usually required at the transmitter and receiver side (like CRC retransmissions, ISI removal, etc.).

j) Many research groups, professionals and companies working in the fields of digital electronics, MIMO, wireless communication, image processing, medical science, etc. are shifting towards the use of higher-level tools and superior methodologies, and are ultimately going for hardware prototyping and implementation of their projects. Our work will provide a boost to their research by giving many insights and directions.

The proposed system architecture takes pre-recorded lecture videos and efficiently compresses them so that they can be transmitted by the user in a scalable manner over MIMO channels.

1.6 Organisation of the Report

Chapter 2 gives the introduction and related works in video compression and packaging schemes and MIMO FPGA testbeds.

Chapter 3 explains the overall system architecture at both server and client ends. It explains the theory behind the various modules.

Chapter 4 discusses the implementation of the proposed system architecture. The system was first prototyped and simulated in MATLAB; later, the implementation was performed on FPGA. The chapter also gives the details of the implementation, the challenges faced at each step, the code development model, the steps that led to the final design, and briefly explains the code development features.

Chapter 5 discusses the results obtained over several input videos.

Chapter 6 gives the conclusions and suggested future work.

The references are given at the end.

CHAPTER 2: BACKGROUND STUDY

This chapter discusses the basics of video processing and computer networks and also presents a literature review of recent developments in these fields.

2.1 Video Compression Scheme

Transform coding has been a dominant method of video and still image compression. It takes advantage of the energy compaction properties of various transforms (such as the DCT, DFT, DWT, etc.) and of properties of the Human Visual System to minimize the number of useful coefficients. The DCT has been the popular choice for image and video processing schemes [22, 23]. Here we discuss those techniques which set a foundation for understanding our hybrid compression scheme.

2.1.1 DWT: The Discrete Wavelet Transform

In image processing, the DWT (Discrete Wavelet Transform) is applied to the entire image and results in a set of independent, spatially oriented frequency channels or subbands. The wavelet transform is typically implemented using separable and possibly different filters. It allows localization in both the space and frequency domains.

Fig 2: One level of wavelet decomposition.

Typically the full image is decomposed into a hierarchy of frequency subbands. The decomposition is achieved by filtering along one spatial dimension at a time to effectively obtain four frequency bands. The lowest subband (LL) represents the information at all coarser

scales (as shown in Figure 2), and it is decomposed and subsampled to form another set of four subbands. This process can be continued until the desired number of levels of decomposition is attained. Two analysis filters, namely g and h, carry out the decomposition into independent frequency spectra of different resolutions, producing different levels of detail. Formation of the subbands does not itself cause any compression (the same number of samples is required to represent the subbands as for the original image), but it arranges the data in a more efficiently codable format. Figure 2 shows one level of wavelet decomposition.

Several efficient coding schemes have been used for coding DWT coefficients. The Embedded Zerotree Wavelet (EZW) scheme introduced by Shapiro [24] is one such coding scheme for grayscale images. It exploits the interdependence between the coefficients of the wavelet decomposition of an image by grouping them into spatial orientation trees (SOTs). It outputs an embedded bitstream. An embedded bitstream can be truncated at any point during decoding and can be used to obtain a coarse version of the image. Decoding additional data from the compressed bitstream can then refine this version.

2.1.2 Color Embedded Zerotree Wavelet (CEZW) Scheme

For color images, the same coding scheme can be used on each color component. However, this approach fails to exploit the interdependence between color components. It has been noted that strong chrominance edges are accompanied by strong luminance edges. However, the reverse is not true; that is, many luminance transitions are not accompanied by transitions in the chrominance components. This spatial correlation, in the form of a unique spatial orientation tree (SOT) in the YUV color space, is used in a technique for still image compression known as Color Embedded Zerotree Wavelet (CEZW) [25-26]. CEZW exploits the interdependence of the color components to achieve a higher degree of compression. The parent-child dependency in CEZW is illustrated in Figure 3.
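The coding strategy presented next operates on the wavelet coefficients produced by the decomposition of Section 2.1.1. As an illustration only (the report's actual prototype was built in MATLAB/Simulink), the following Python/NumPy sketch computes one level of that separable 2-D decomposition; the choice of Haar filters as the g/h analysis pair and the function name haar_dwt2 are our assumptions, not taken from the report.

```python
import numpy as np

def haar_dwt2(image):
    """One level of separable 2-D Haar DWT.

    Filters and subsamples along one spatial dimension at a time,
    yielding the four subbands LL, LH, HL, HH described in the text.
    """
    x = image.astype(np.float64)

    def analyze(signal):
        # Low-pass (average) and high-pass (difference) along the last axis.
        lo = (signal[..., 0::2] + signal[..., 1::2]) / np.sqrt(2)
        hi = (signal[..., 0::2] - signal[..., 1::2]) / np.sqrt(2)
        return lo, hi

    lo, hi = analyze(x)        # horizontal filtering and subsampling
    ll, lh = analyze(lo.T)     # vertical filtering of the low band
    hl, hh = analyze(hi.T)     # vertical filtering of the high band
    return ll.T, lh.T, hl.T, hh.T

if __name__ == "__main__":
    img = np.random.rand(8, 8)        # stand-in for a luminance frame
    ll, lh, hl, hh = haar_dwt2(img)
    print(ll.shape)                   # (4, 4): the coarser LL subband
```

Repeating the same step on the LL output yields the multi-level subband hierarchy described above; as noted, this step rearranges rather than compresses the data.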

The coding strategy in CEZW is similar to Shapiro's EZW [24], and can be summarized as follows. Let f_i(m, n) be a YUV image, where i ∈ {Y, U, V}, and let W[f_i(m, n)] be the coefficients of the wavelet decomposition of component i.

Fig 3: Parent-child dependency in the CEZW scheme.

1. Set the threshold T = max_i |W[f_i(m, n)]| / 2, i ∈ {Y, U, V}.

2. Dominant pass:

The luminance component is scanned first. Compare the magnitude of each wavelet coefficient in a tree, starting with the root, to the threshold T.

If the magnitudes of all the wavelet coefficients in the tree (including the coefficients in the luminance and chrominance components) are smaller than T, then the entire tree structure

(that is, the root and all its descendants) is represented by one symbol, the zerotree (ZTR) symbol.

Otherwise, the root is said to be significant (when its magnitude is greater than T) or insignificant (when its magnitude is less than T). A significant coefficient is represented by one of two symbols, POS or NEG, depending on whether its value is positive or negative.

The magnitude of a significant coefficient is set to zero to facilitate the formation of zerotree structures. An insignificant coefficient is represented by the symbol IZ, isolated zero. The two chrominance components are scanned after the luminance component. Coefficients in the chrominance components that have already been encoded as part of a zerotree are not examined. This process is carried out such that all the coefficients in the tree are examined for possible sub-zerotree structures.

3. Subordinate pass (essentially the same as in EZW): The significant wavelet coefficients in the image are refined by determining whether their magnitudes lie within the interval [T, 3T/2), represented by the symbol LOW, or the interval [3T/2, 2T), represented by the symbol HIGH.

4. Set T = T/2, and go to Step 2. Only the coefficients that have not yet been found to be significant are examined.

The compressed bitstream consists of the initial threshold T, followed by the resulting symbols from the dominant and subordinate passes, which are entropy coded using an arithmetic coder [23].
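For intuition, the following much-simplified Python sketch walks through the threshold decisions of one dominant and subordinate pass on a flat array of coefficients. It deliberately ignores the spatial orientation trees (so no ZTR symbols are formed and the Y/U/V scanning order is not modelled) and the arithmetic coder; the function name and sample values are illustrative assumptions, not the report's code.

```python
import numpy as np

def ezw_pass(coeffs, threshold, significant):
    """One simplified dominant + subordinate pass over wavelet coefficients.

    coeffs      : 1-D array of wavelet coefficients (tree structure ignored)
    threshold   : current threshold T
    significant : boolean mask of coefficients already found significant
    Returns the dominant-pass symbols and the subordinate refinement symbols.
    """
    symbols, refinements = [], []
    for i, c in enumerate(coeffs):
        if significant[i]:
            continue                                   # handled in earlier passes
        if abs(c) >= threshold:
            symbols.append('POS' if c > 0 else 'NEG')  # newly significant
            significant[i] = True
            # Subordinate pass: LOW for [T, 3T/2), HIGH for [3T/2, 2T).
            refinements.append('HIGH' if abs(c) >= 1.5 * threshold else 'LOW')
        else:
            # A real coder emits ZTR when a whole tree is insignificant;
            # without the tree structure we emit IZ (isolated zero) only.
            symbols.append('IZ')
    return symbols, refinements

if __name__ == "__main__":
    w = np.array([63.0, -34.0, 49.0, 10.0, 7.0, 13.0, -12.0, 7.0])
    T = np.max(np.abs(w)) / 2                          # initial threshold (step 1)
    sig = np.zeros(w.size, dtype=bool)
    while T >= 1:                                      # successive passes (step 4)
        syms, refs = ezw_pass(w, T, sig)
        print(f"T={T:5.2f}  {syms}  {refs}")
        T /= 2
```

Each halving of T adds roughly one bit plane of precision, which is what makes the resulting bitstream embedded and truncatable at any point during decoding.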

Fig 4: Flow diagram of the CEZW coding algorithm.

2.1.3 Motion Estimation and Motion Compensation as in MPEG-1

In image and video coding schemes, the image is divided into small blocks for operation by prediction techniques. Motion estimation is used to determine the movement of a macroblock from the reference frame to the current frame. Motion is estimated by searching for the macroblock in the reference picture that provides the closest match, as shown in Figure 5. The difference between the values of the two macroblocks is coded for reconstruction at the decoder. To reduce the distortion between the decoded and the original picture, the encoder uses a reconstructed reference frame to perform motion estimation. This reconstructed reference frame is the same as the one used at the decoder side.

Fig 5: Motion estimation of a block.

Motion estimation computes one motion vector per macroblock. Usually, the search is conducted for the luminance component only. A predictive frame is constructed from the motion vectors obtained for all macroblocks in the frame, by replicating the macroblocks from the reference frame at the new locations indicated by the motion vectors. The difference between the values of the predicted and the current frames, known as the predictive error frame (PEF), is then encoded using the same procedure as for an intra-coded frame.
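As a point of reference for the block-matching search just described, the sketch below (again an illustrative Python example, not the report's implementation) performs an exhaustive full search over a small window and returns the motion vector minimizing the sum of absolute differences (SAD) for one macroblock; the 16x16 block size, +/-8 search range and SAD criterion are common MPEG-1 practice but are assumptions here.

```python
import numpy as np

def best_motion_vector(ref, cur, top, left, block=16, search=8):
    """Full-search block matching for one macroblock.

    ref, cur  : 2-D luminance frames (reference and current)
    top, left : position of the macroblock in the current frame
    Returns (dy, dx) locating the best-matching block in the reference frame.
    """
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                              # candidate falls outside the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()         # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    current = np.roll(reference, shift=(2, -3), axis=(0, 1))   # simulate global motion
    print(best_motion_vector(reference, current, 24, 24))      # expect (-2, 3)
```

In an encoder, the chosen vector replicates the reference block at its new location; only the vector and the (typically small) prediction error then need to be coded, which is why low-motion lecture video yields so few coded bits per frame.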
