
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TCC.2016.2594172, IEEE Transactions on Cloud Computing

IEEE TRANSACTIONS ON CLOUD COMPUTING, VOL. XX, NO. XX, MONTH YEAR

Cost-Aware Multimedia Data Allocation for Heterogeneous Memory Using Genetic Algorithm in Cloud Computing

Keke Gai, Student Member, IEEE, Meikang Qiu, Member, IEEE, Hui Zhao, Student Member, IEEE

Abstract—Recent expansions of Internet-of-Things (IoT) applications in cloud computing have been growing at a phenomenal rate. As one of these developments, heterogeneous cloud computing has enabled a variety of cloud-based infrastructure solutions, such as multimedia big data. Numerous prior studies have explored optimizations of on-premise heterogeneous memories. However, heterogeneous cloud memories face constraints due to the performance limitations and cost concerns caused by hardware distribution and manipulative mechanisms. Assigning data tasks to distributed memories with various capacities is a combinatorial NP-hard problem. This paper focuses on this issue and proposes a novel approach, the Cost-Aware Heterogeneous Cloud Memory Model (CAHCM), aiming to provision high-performance cloud-based heterogeneous memory service offerings. The main algorithm supporting CAHCM is the Dynamic Data Allocation Advance (2DA) Algorithm, which uses genetic programming to determine the data allocations on the cloud-based memories. In our proposed approach, we consider a set of crucial factors impacting the performance of cloud memories, such as communication costs, data move operating costs, energy performance, and time constraints. Finally, we implement experimental evaluations to examine our proposed model.
The experimental results have shown that our approach is adoptable and feasible as a cost-aware cloud-based solution.

Index Terms—Cloud computing, genetic algorithm, heterogeneous memory, data allocation, multimedia big data

1 INTRODUCTION

The advance of cloud computing has motivated a variety of explorations in information retrieval for big data informatics in recent years. Heterogeneous clouds are considered a fundamental solution for performance optimization within different operating environments when the data processing task becomes a challenge in multimedia big data [1], [2]. The growing demands of multimedia big data have driven an increasing number of cloud-based applications in Internet-of-Things (IoT). Combining heterogeneous embedded systems with cloud-oriented services can enable various advantages in multimedia big data. Currently, cloud-based memories are mostly deployed in a non-distributive manner on the cloud side [3]. This deployment causes a number of limitations, such as overloaded energy consumption, additional communications, and lower-performance resource allocation mechanisms, which restrict the implementation of cloud-based heterogeneous memories [4]–[6]. This paper concentrates on this issue and proposes an innovative data allocation approach for minimizing the total costs of distributed heterogeneous memories in cloud systems.

K. Gai is with the Department of Computer Science, Pace University, New York, NY 10038, USA, kg71231w@pace.edu. M. Qiu (Corresponding author) is with the Department of Computer Science, Pace University, New York, NY 10038, USA, mqiu@pace.edu. H. Zhao is with the Software School, Henan University, Kaifeng, Henan, 475000, China, zhh@henu.edu.cn. This work is supported by NSF CNS-1457506 and NSF CNS-1359557. Manuscript received October XX, 2015.

Contemporary cloud infrastructure deployments mainly migrate data processing and storage to the clouds [7].
Central Processing Units (CPUs) and memories offering processing services are hosted by individual cloud vendors. This type of deployment can meet the processing and analysis demands of smaller-sized data [8]–[10]. However, the discontinuous or periodically changing implementations of big data-oriented usage have caused bottlenecks in sustaining firm performance [11]. For example, some data processing tasks are tightly associated with industrial trends or operations, such as annual accounting and auditing [12], [13]. Therefore, a flexible approach that accommodates dynamic usage switches has become an urgent requirement in high-performance multimedia big data.

Moreover, another challenge is that deploying distributed memories in clouds still faces a few restrictions [14], [15]. Allocating data to multiple cloud-based memories encounters obstacles due to the diverse impact factors and parameters [16]. Divergent configurations and varied capabilities can limit the entire system's performance, since a naive task separation cannot fully exploit heterogeneous memories. This implies that minimizing the entire cost of using cloud memories is restricted by multi-dimensional constraint conditions.

2168-7161 (c) 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

To address the main challenge, we propose a novel approach entitled the Cost-Aware Heterogeneous Cloud Memory (CAHCM) Model, which aims to achieve a reduced data processing time through heterogeneous cloud memories for efficient Memory-as-a-Service (MaaS). The costs can be any expenditures incurred during the operations, such as energy, time, and communication resources [17], [18]. Fig. 1 represents the architecture of cloud-based heterogeneous memory in which our proposed model will be applied. The operating principle of the proposed model is to use a genetic algorithm to minimize the processing costs by mapping data processing capabilities to different dataset inputs. The operating flow is based on distributed parallel computations in heterogeneous memories located at various cloud vendors. The task assignments are completed by a data allocation operation in which tasks are assigned to cloud memories based on the mapping of the memory capabilities to the input data.

Fig. 1. The architecture of cloud-based heterogeneous memories using data allocation techniques in big data

To support the proposed model, we propose an algorithm, the Dynamic Data Allocation Advance (2DA) Algorithm, which is designed to obtain the minimum total costs using a genetic algorithm. Implementing 2DA enables dynamic data allocation to heterogeneous cloud memories with varied capacities. We consider a group of crucial factors that can impact the costs when using cloud-based distributed heterogeneous memories, such as communication costs, data move operating costs, energy performance, and time constraints.
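The genetic search that 2DA builds on can be illustrated with a minimal sketch. Everything below is a textbook-style illustration, not the paper's actual 2DA implementation: the cost table is hypothetical (only data A's row reuses the 77/48/34/700 allocation costs from the motivational example, and all other figures are invented), each chromosome assigns one of four memories to each data item, and the fitness to be minimized is the total allocation cost.

```python
import random

# Hypothetical per-datum allocation costs: COSTS[d][m] is the cost of
# placing data item d on memory m (read/write/communication/move costs
# folded together).  Only data A's row (77, 48, 34, 700) comes from the
# motivational example; the other rows are invented for illustration.
COSTS = [
    [77, 48, 34, 700],    # data A (from the example)
    [110, 86, 90, 500],   # data B (hypothetical)
    [118, 95, 80, 710],   # data C (hypothetical)
    [60, 40, 50, 150],    # data D (hypothetical)
    [55, 45, 35, 250],    # data E (hypothetical)
    [30, 23, 40, 610],    # data F (hypothetical)
    [38, 25, 33, 300],    # data G (hypothetical)
]
N_MEM = 4  # memories M1..M4

def fitness(chrom):
    """Total allocation cost of an assignment; lower is better."""
    return sum(COSTS[d][m] for d, m in enumerate(chrom))

def tournament(pop, k=3):
    """Pick the cheapest of k randomly sampled chromosomes."""
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """One-point crossover of two parent assignments."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    """Reassign each datum to a random memory with probability `rate`."""
    return [random.randrange(N_MEM) if random.random() < rate else g
            for g in chrom]

def evolve(generations=200, pop_size=40):
    """Evolve data-to-memory assignments, keeping the best one (elitism)."""
    pop = [[random.randrange(N_MEM) for _ in COSTS] for _ in range(pop_size)]
    for _ in range(generations):
        elite = min(pop, key=fitness)
        pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                         for _ in range(pop_size - 1)]
    return min(pop, key=fitness)

best = evolve()
```

With no capacity constraints, the optimum for this toy table is simply the sum of each row's minimum (34 + 86 + 80 + 40 + 35 + 23 + 25 = 323), so the search is easy; what makes the real problem NP-hard is that 2DA must additionally respect the β capacity limits and the limited number of memories of each type.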
The target of the proposed algorithm is to provide a data allocation mechanism that attains the minimum computing costs.

The significance of this research is that the findings introduce a new approach for solving big data problems that require scaled-up memory capabilities deployed in clouds. Our proposed scheme can facilitate highly efficient generation of solutions to data allocation, which is an NP-hard problem. The main contributions of our research include three aspects:

1) We venture to solve the cost optimization problem on heterogeneous memories, which is an NP-hard problem, in polynomial time. The outcomes are optimal solutions under certain constraints and suboptimal solutions in the unconstrained case.

2) The proposed model is an attempt at using heterogeneous cloud memories to solve multimedia big data problems by dynamically allocating data to various cloud resources.

3) We produce an approach that can increase the overall usage rate of the cloud infrastructure along with enhancing the computation capability, which is also an approach for generating an optimized data processing mechanism in cloud-based multimedia data processing.

The remainder of the paper is organized as follows. Section 2 reviews recent academic work in the related fields. Next, we present a motivational example concerning the application of the proposed model in a simple implementation scenario in Section 3. The main concepts and the proposed model statements are given in Section 4. In Section 5, we provide detailed descriptions of the proposed algorithm. Experimental evaluations are stated in Section 6, which configures the experimental settings and exhibits the experimental findings. Finally, we conclude the research in Section 7.

2 RELATED WORK

This section reviews two crucial aspects related to our research: multimedia big data and cloud resource management for multimedia in cloud computing.
Investigations in these two areas provide the theoretical support for our research background.

2.1 Multimedia Big Data

Multimedia big data is a new technical term describing big data mechanisms applied in the multimedia field. We focus on the computation workload dimension even though a few other concentrations have been addressed by prior studies.

First, high performance-oriented data mining is one research direction in multimedia big data. One study proved that social images shared via cloud-based multimedia have a high level of similarity when users are socially connected [19], [20]. This research was an attempt to use multimedia to further extend the outcomes of big-sized data mining. Next, data-oriented scheduling was also an alternative for enhancing the computation capability by applying resource scheduling algorithms [21], [22]. However, computation workloads were not considered in this research, which was the main bottleneck for increasing the performance of multimedia big data. Most data mining techniques can be directly used in the multimedia big data field [23], but processing efficiency remains a challenging issue due to the data size characteristic of multimedia.

To address this challenge, a variety of preprocessing techniques have been examined by recent studies as well. One data preprocessing operation was configured in wireless multimedia sensor networks by deleting the data that have less impact on the results [24]. Computing the data having higher-level weights can yield approximate solutions [25]. Some other data mining work uses time series to determine the weights of the data by prediction [26]. However, this approach highly depends on the fault tolerance rate; it cannot ensure accuracy if the rate is not well configured, even though the computation workload is reduced. Our work focuses on producing an innovative data allocation method that improves efficiency without reducing workloads.

Moreover, feature selection was a research direction that could be used to eliminate the data with less data mining value. For example, an approach was proposed to use different constraints for re-ranking image features [27]. This technique was also combined with Web services [23]. Despite the many advantages of feature selection, this type of solution is still attached to the software side.
The computation speed was not increased, such that the balance between computation workload and outcome quality is still the critical issue in this research direction. Our work targets increasing the computation speed by optimizing the method of data allocation.

In summary, most previous research has addressed software improvements for multimedia big data rather than the hardware side. Our research focuses on using cloud-based memories by deploying distributed heterogeneous computing resources.

2.2 Resource Management for Multimedia in Cloud Computing

Computing resources in cloud computing are deployed in a distributed manner. The approach of maximizing cloud resources has been explored by recent studies from different perspectives.

First, interconnecting various clouds has become a popular research topic in cloud computing. The relations between nodes in cloud computing are considered one of the crucial aspects of increasing the entire system's performance [28], such as in Internet of Things (IoT). Prior studies have addressed a few dimensions for increasing the system's performance. One of the crucial aspects of cost-aware approaches was saving energy by applying scheduling algorithms on Virtual Machines (VMs) [29], [30]. This type of optimization mainly depends on the availability of the parameters for mapping cost requirements. Our proposed approach also requires a few parameters for the purpose of resource management, such as the number of data reads and writes.

Moreover, from the perspective of security and forensics, resource management in cloud computing is considered an option for maximizing the security level by applying multiple constraints via VMs [31], [32]. Combining various inspection approaches on multiple VMs can increase the threat detection capability [33]. Correspondingly, the cost of the implementation grows as the number of VMs increases [34]. The optimizations usually address the provisioning of heterogeneous computing from distributed resources [35]. However, solving the resource management problem is still challenging when the number of parameters or variables grows, which makes it difficult to obtain an adaptable solution in polynomial time. In this respect, our research proposes an adaptive method that ensures the high-efficiency generation of resource management methods.

Furthermore, cloud task scheduling was also a research direction in resource management for big data. One recent optimization dimension used iterative ordinal optimization to adapt to dynamic workloads in cloud computing [36]. This research aimed to obtain sub-optimal scheduling solutions by optimizing each iteration. Meanwhile, some other scheduling optimizations focused on reducing the latency and constraints caused by bandwidths and data diversity [37]. A similar study focusing on reducing data transfer workloads used data partition techniques for optimization. However, most prior studies did not consider implementing heterogeneous cloud computing, such that potential optimizations were ignored.

Therefore, in our research, we concentrate on a crucial research dimension that has rarely been addressed by prior studies. Applying heterogeneous memory in cloud computing is an approach to increase cloud service efficiency, and its challenge is allocating data to diverse memories. The target addressed by our research is a method that can efficiently produce data allocation plans with minimum costs.

3 MOTIVATIONAL EXAMPLE

In this section, we give a motivational example to clarify the operational processes of CAHCM. Assume that there are four cloud vendors offering MaaS with different performances, namely M1, M2, M3, and M4. Table 1 displays the costs for different cloud memory operations.

As shown in Table 1, we consider four main costs: Read (R), Write (W), Communications (C), and Move (MV).
R and W refer to the operation costs of reading and writing data. C denotes the costs incurred by communication processes through the Internet. MV represents the costs incurred when switching from one memory vendor to another. β is a criterion for memory limits; there are two working modes based on β, a normal working status and an over-limit status. Table 2 represents the performance critical points for different cloud memory limits.

TABLE 1
The costs for different cloud memory operations. The four types of memories are M1, M2, M3, and M4; parameter (PAR) β represents the performance critical points; the main costs derive from four aspects: Read (R), Write (W), Communications (Com.), and Move. (The numeric entries of this table are not legible in this copy.)

TABLE 2
Performance critical points for different cloud memory limits and the number of memories.

      β1 (GB)   β2 (GB)   Number of Memories
M1    4         4         2
M2    3.2       3.2       2
M3    3.5       3.5       3
M4    4         4         3

According to Table 2, cloud memories have different memory service offerings. For example, M1 has a critical capability point at 4 GB, which means the performance will dramatically diminish when the input dataset is larger than 4 GB. The example configuration is 2 M1, 2 M2, 3 M3, and 3 M4 memories. Moreover, Table 3 represents the number of memory accesses and data sizes. There are seven input data, A, B, C, D, E, F, and G, along with their corresponding sizes. For instance, data A is required to be read 7 times and written 6 times, and its data size is 2.35 GB.

TABLE 3
The number of memory accesses and data sizes.

            A      B      C      D      E      F      G
Sizes (GB)  2.35   2.70   3.30   3.63   2.76   2.06   3.36

(The read and write counts are only partially legible in this copy; data A has 7 reads and 6 writes.)

Assume that the initialized data allocations are A → M2, B → M2, C → M4, D → M2, E → M3, F → M4, and G → M3. The costs of allocating each data item to the different memory units are given in Table 4. This table is called the B Table, which maps the costs of all potential activities of the data.

TABLE 4
B Table: the costs of allocating each data item to different memory units considering β. (Only data A's column and the M4 row are legible in this copy.)

      A     B     C     D     E     F     G
M1    77    —     —     —     —     —     —
M2    48    —     —     —     —     —     —
M3    34    —     —     —     —     —     —
M4    700   500   710   150   250   610   300

The mapped costs and the numbers of accesses match the conditions of our proposed model. Thus, we remap the data, as shown in Table 5. The remapping table is named the D Table, which is used to generate optimized memory selections based on the data from the B Table. There are two steps in the remapping process. First, we sort the data in ascending order by summing up the numbers of reads and writes; for example, A is 13, deriving from (7 + 6). The second step is sorting the costs by the memory index number for each data item. We assign an index number to each memory: 1 → M1, 2 → M2, 3 → M3, and 4 → M4. We sort the indices for each data item according to the costs of the data allocations given in Table 4. According to the memory availability, we always select the lowest costs first, from the left side to the right side in the table. For example, in Table 4, the data allocation costs for data A are respectively 77 (M1), 48 (M2), 34 (M3), and 700 (M4). The sorted order for A is therefore M3, M2, M1, and M4 in Table 5.

TABLE 5
D Table: sorted memory indices deriving from Table 4. (The entries of this table are not legible in this copy.)

Based on the remapping, we select memories for each data item, starting from the top of the table and moving to the bottom. According to the memory availability, we always select the lowest cost from the ava
