
Dhaka University of Engineering & Technology, Gazipur

AN IMPROVED CLUSTERING ALGORITHM FOR FINDING SPHERICAL AND NON SPHERICAL CLUSTERS ON DATA MINING

by
Md. Zakir Hossain

Supervised by
Dr. Md. Nasim Akhter
Professor
Department of Computer Science and Engineering,
Dhaka University of Engineering & Technology, Gazipur

Gazipur, Bangladesh
21 May, 2019

The thesis entitled 'AN IMPROVED CLUSTERING ALGORITHM FOR FINDING SPHERICAL AND NON SPHERICAL CLUSTERS ON DATA MINING' submitted by Md. Zakir Hossain, Student No.: 142428-P, has been accepted as satisfactory in partial fulfillment of the requirement for the Degree of Master of Science in Computer Science and Engineering on 21 May, 2019.

BOARD OF EXAMINERS

1. (Dr. Md. Nasim Akhtar)
   Professor, Department of Computer Science and Engineering
   Dhaka University of Engineering & Technology, Gazipur
   Gazipur-1700, Bangladesh.
   Supervisor (Chairman)

2. (Dr. Mohammad Abdur Rouf)
   Professor and Head, Department of Computer Science and Engineering
   Dhaka University of Engineering & Technology, Gazipur
   Gazipur-1700, Bangladesh.
   Ex-officio (Member)

3. (Dr. Mohammod Abul Kashem)
   Professor, Department of Computer Science and Engineering
   Dhaka University of Engineering & Technology, Gazipur
   Gazipur-1700, Bangladesh.
   Member

4. (Dr. Md. Waliur Rahman Miah)
   Associate Professor, Department of Computer Science and Engineering
   Dhaka University of Engineering & Technology, Gazipur
   Gazipur-1700, Bangladesh.
   Member

5. (Dr. Kaushik Deb)
   Professor, Department of Computer Science & Engineering
   Chittagong University of Engineering and Technology
   Chittagong, Bangladesh.
   Member (External)

DECLARATION

I declare that this thesis is my own work and has not been submitted in any form for another degree at any university or other institute of tertiary education. Information derived from the published and unpublished work of others has been acknowledged in the text and a list of references is given.

Signature of the candidate
(Md. Zakir Hossain)

DEDICATION

TO MY BELOVED PARENTS

ABSTRACT

In recent years, the amount of data has increased exponentially in every sector, such as banking and securities, communications and media, sports, healthcare, education, manufacturing, insurance, consumer trade, transportation, and energy. Most of these data are unstructured and contain noise and outliers. In addition, these data differ in shape, size, and density. In such cases, it is challenging to retrieve the desired pattern from these large data sets with traditional clustering methods. In the literature, several data clustering techniques have been proposed for finding a specific pattern in large datasets. The K-Means method is one of them and is widely used for data clustering. Despite this, the clustering performance of the K-Means method depends heavily on the user-defined parameter K. For real-world clustering problems, it is challenging to set the exact value of K in order to cluster an unknown dataset, and this degrades the clustering performance of the K-Means method. To address this issue, a dynamic K-Means method is proposed, which is the first contribution of this research. The proposed dynamic K-Means method has the ability to select the K value dynamically for clustering. The proposed method is compared with well-known clustering methods over real datasets obtained from the UCI machine learning repository and some artificial data sets. The results confirm the superiority of the proposed method over the well-known clustering methods.

Another problem of the K-Means method is that it does not perform well for non-spherical clusters. DBSCAN is one of the popular methods for non-spherical clustering. It has the ability to find spherical as well as non-spherical clusters in large datasets. However, the DBSCAN method depends heavily on two user-defined parameters, namely Eps (radius of the cluster) and MinPts (minimum number of points required in the cluster), which degrade its clustering performance: if a large value is chosen for Eps, then DBSCAN forms clusters containing dissimilar data, whereas if a small value is chosen for Eps, then this technique forms clusters containing only a small amount of similar data. In this thesis, we propose an improved DBSCAN technique for spherical as well as non-spherical clustering of large data sets, which is the second contribution of this research. The proposed technique calculates the best Eps and MinPts dynamically, which improves the cluster quality for large data sets. We examine the performance of the proposed technique against the original DBSCAN technique using popular large data sets. The results show that the proposed technique outperforms the original DBSCAN technique in terms of clustering accuracy.

ACKNOWLEDGEMENT

At first, I would like to show my gratitude to Almighty Allah for giving me the opportunity to study in M.Sc. Engineering. I am very thankful and pleased to be able to acknowledge the assistance of all those who have supported me in my research.

I would like to thank my supervisor, Dr. Md. Nasim Akhter, Professor, Department of Computer Science and Engineering, Dhaka University of Engineering & Technology, Gazipur, for his invaluable guidance and support over the past few years. He led me into this wonderful research area, taught me how to do research, and showed me how important critical thinking is.

The thesis work was carried out at the Department of Computer Science and Engineering, Dhaka University of Engineering & Technology, Gazipur.

I want to thank my parents for their love and support throughout my life. Finally, I want to thank my wife for her continuous support, love, encouragement, and for understanding and believing in me.

LIST OF FIGURES

2.1  The process of knowledge discovery in database (KDD)
2.2  The relationship of data mining with other areas
2.3  Data mining task
2.4  Clustering methods with their well-known algorithms
2.5  Hierarchical clustering method
2.6  Grid-based clustering method
2.7  Graph-based clustering method
2.8  Partitioning clustering method
2.9  DBSCAN algorithm identifying a neighborhood of a point based on MinPts
2.10 Density-based clustering method
3.1  Flow chart of the K-Means algorithm
3.2  DK-Means clustering
3.3  K-Means clustering (K = 2, initial centroids (2, 2) and (1, 14))
3.4  K-Means clustering (K = 2, initial centroids (2, 2) and (1, 14))
3.5  Flow chart of the DK-Means algorithm
3.6  Performance metrics
3.7  Comparison based on sum of the inter-cluster distance (Iris dataset)
3.8  Comparison based on sum of square error (Iris dataset)
3.9  Comparison based on sum of inter-cluster distance (artificial data sets)
3.10 Comparison based on sum of square error (artificial data sets)
4.1  The flow chart of the first step of the proposed method
4.2  The flow chart of the second step of the proposed method
4.3  Dynamic DBSCAN clustering result
4.4  Half ring data set without clustering
4.5  Half ring data set with clustering
4.6  Three spirals data set without clustering
4.7  Three spirals data set with clustering
4.8  Corners data set without clustering
4.9  Corners data set with clustering
4.10 Semi-circular data set without clustering
4.11 Semi-circular data set with clustering
4.12 Half moon data set without clustering
4.13 Half moon data set with clustering
4.14 Aggregation data set without clustering
4.15 Aggregation data set with clustering
4.16 Performance of the DDBSCAN algorithm

LIST OF TABLES

2.1  Evaluation of the density-based clustering algorithms based on various criteria
3.1  Sample data set
3.2  Distance matrix for sample data set
3.3  Distance matrix with sum and mean of each row
3.4  Distance matrix with threshold value
3.5  Data group and centroid
3.6  Comparison among three well-known algorithms and DK-Means for Iris data set
3.7  Comparison among three well-known algorithms and DK-Means for artificial data set
5.1  Sample data set
5.2  Euclidean distance matrix for sample data set
5.3  Performance measure among DDBSCAN and two well-known algorithms

TABLE OF CONTENTS

Chapter 1  Introduction
  1.1 General Importance
  1.2 Background
  1.3 Motivation
  1.4 Objectives
  1.5 Contributions
  1.6 Outline of the Thesis

Chapter 2  Literature Review
  2.1 Introduction
  2.2 Data Mining
  2.3 Positions of Data Mining
  2.4 Data Mining Tasks
    2.4.1 Predictive Task
    2.4.2 Descriptive Task
    2.4.3 Classification and Regression
    2.4.4 Association Rule
    2.4.5 Anomaly Detection
    2.4.6 Clustering
    2.4.7 Summarization
  2.5 Clustering
  2.6 Clustering Methods
    2.6.1 Hierarchical Clustering Methods
    2.6.2 Grid-based Clustering Methods
    2.6.3 Graph-based Clustering Methods
    2.6.4 Partitioning Clustering Methods
    2.6.5 Error Minimization Algorithm
    2.6.6 K-Means Clustering
    2.6.7 Graph-theoretic Clustering
    2.6.8 Density-based Clustering
    2.6.9 Density Function
    2.6.10 DENCLUE
  2.7 Evaluation of the Density-based Clustering Algorithms
  2.8 Chapter Summary

Chapter 3  Dynamic K-Means Clustering Algorithm
  3.1 Introduction
  3.2 K-Means Clustering Algorithm
  3.3 The Proposed Dynamic K-Means Clustering Algorithm
  3.4 Performance Metrics
  3.5 Result Analysis
  3.6 Chapter Summary

Chapter 4  Dynamic DBSCAN Clustering Algorithm
  4.1 Introduction
  4.2 Related Work
  4.3 DBSCAN Algorithm
  4.4 Proposed Method
  4.5 Example of Finding Clustering Using the Proposed Method
  4.6 Experimental Setup
  4.7 Result Analysis
  4.8 Chapter Summary

Chapter 5  Conclusion and Future Works
  5.1 Conclusion
  5.2 Future Work
  5.3 Research Publication

References

Chapter 1

Introduction

Chapter 1
Introduction

1.1 General Importance

With the rapid development of computer and information technology in the last several decades, an enormous amount of data in science and engineering has been produced and will continue to be generated on a massive scale. For this reason, data sets are growing day by day, and sharing, transfer, capture, and storage are the main challenges for large data sets. Large data sets, in the order of tera- to peta-bytes, have fundamentally changed science and engineering, transforming many disciplines from data-poor to increasingly data-rich and calling for new, data-intensive methods to conduct research in data mining.

Data mining is a popular technique to extract meaningful patterns from such valuable information. Basically, data mining techniques discover important patterns in large datasets, which helps to predict the outcome of a future observation. Data mining consists of six different techniques: anomaly detection, association rule learning, classification, clustering, regression, and summarization. Among them, clustering is the most important part of data mining and is the key to finding accurate data patterns in large datasets. Clustering is widely used in many applications including marketing research, network analysis, image processing, biology, business, geographic observation, web analysis and medical-based applications [1].

1.2 Background

In the past few years, many studies have focused on the development of different clustering algorithms because of their significant role in allowing automatic identification of unlabeled records by grouping them into clusters based on similarity measurements [1]. K-Means clustering is one of the well-known partitioning clustering algorithms. This algorithm starts with an initial estimate of the K centroids, which can either be randomly generated or randomly selected from the data set. In real-world settings, if the centroids are selected or generated randomly, the clustering performance degrades. To remove this difficulty, some research works have taken the initiative to improve the cluster quality [11] [30] [35]. Another challenging task in the K-Means clustering algorithm is to find the optimal value of K in order to perform the clustering of an unknown dataset. To remove this difficulty, some research works have taken the initiative to find the optimal value of K [13] [31] [34]. Some research works have also tried to reduce the running time of the K-Means clustering algorithm [29]. Another problem of the K-Means algorithm is that it only forms spherical clusters, which degrades the accuracy of finding meaningful patterns in large data.

Density-based spatial clustering of applications with noise (DBSCAN) is one of the prominent density-based clustering techniques for finding data patterns by means of non-spherical clusters. DBSCAN is a density-based clustering algorithm because it finds a number of clusters starting from the estimated density distribution of the corresponding nodes. It is a challenging task to estimate density in advance, and some research works try to solve this difficulty [21]. Another problem of the DBSCAN technique is that it depends heavily on two user-defined parameters, namely the radius of the cluster (Eps) and the minimum number of points required in the cluster (MinPts), which degrade its clustering performance. In the real world, it is a very challenging task to find the optimal values of Eps and MinPts in advance. Some research works have taken the initiative to solve this problem [23] [36] [37] [38] [39] [40]. However, these research works may not overcome the difficulty of setting the values of K, Eps and MinPts, which is the main concern of this thesis.

1.3 Motivation

In the last few decades, many studies have focused on the development of different clustering algorithms. The K-Means algorithm is one of the familiar clustering techniques that have been widely used for data mining. The K-Means algorithm works in three stages: in the first stage, the algorithm takes a specific number of clusters K as an input; in the second stage, it chooses K centroids for the given dataset; and in the final stage, it forms K clusters, where each cluster represents a local minimum (a minimal sketch of this procedure is given below). In real-world problems, it is challenging to find the optimal value of K in order to perform the clustering of an unknown dataset [1]. To remove this difficulty, some research works have taken the initiative to improve the cluster quality of large datasets. However, these research works may not overcome the difficulty of setting the value of K, which is the main concern of this thesis.

Another problem is that the K-Means algorithm only forms spherical clusters, whereas many real-world data sets require non-spherical clusters to identify the accurate data pattern.
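To make the three K-Means stages above concrete, the following is a minimal Python sketch of the standard K-Means procedure (not the DK-Means method proposed in this thesis). The data matrix X, the value of K, and the function name are assumptions chosen purely for illustration.

```python
import numpy as np

def k_means(X, K, max_iter=100, seed=0):
    """Plain K-Means: clustering quality depends on the user-supplied K
    and on the randomly chosen initial centroids."""
    rng = np.random.default_rng(seed)
    # Stages 1-2: take K as input and pick K initial centroids at random
    # from the data set itself.
    centroids = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(max_iter):
        # Assign every point to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Stage 3: recompute each centroid as the mean of its cluster and
        # stop once the centroids no longer move.
        new_centroids = np.array([
            X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
            for k in range(K)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    # Sum of square error (SSE), one of the metrics used later in this thesis.
    sse = float(((X - centroids[labels]) ** 2).sum())
    return labels, centroids, sse

# Illustrative data: two obvious groups, clustered with K = 2.
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])
labels, centroids, sse = k_means(X, K=2)
print(labels, sse)
```

Running this sketch with different values of K on the same data yields very different SSE values, which is precisely the parameter-selection problem addressed by the dynamic K-Means method in Chapter 3.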

Density-based spatial clustering of applications with noise (DBSCAN) is one of the prominent density-based clustering techniques for finding data patterns by means of non-spherical clusters. DBSCAN mainly focuses on minimizing the number of input parameters, discovering clusters with arbitrary shapes, clustering large data efficiently, requiring no prior knowledge of the number of clusters, and handling noise [2]. DBSCAN has several applications reported in the literature, such as medical images, satellite images, anomaly detection in temperature data, and GPS data [2]. It has the ability to find spherical as well as non-spherical clusters in large datasets. To do this, the DBSCAN technique depends heavily on two user-defined parameters, namely the radius of the cluster (Eps) and the minimum number of points required in the cluster (MinPts), which degrade its clustering performance: if a large value is chosen for Eps, then DBSCAN forms clusters containing dissimilar data, whereas if a small value is chosen for Eps, then this technique forms clusters containing only a small amount of similar data. In such cases, the exact values of Eps and MinPts are challenging to find, which is one of the concerns of this thesis. A small sketch illustrating this sensitivity is given after the objectives below.

By considering the above research gaps, this research aims to develop a dynamic K-Means algorithm and a dynamic DBSCAN algorithm so that good quality clusters can be found with better accuracy.

1.4 Objectives

The specific research objectives of this thesis are as follows:

1. To identify the limitations of existing K-Means and DBSCAN algorithms, specifically the optimal parameter selection problem.
2. To design new K-Means and DBSCAN algorithms that can provide good quality clusters by overcoming the identified limitations.
3. To demonstrate the accuracy of the proposed K-Means and DBSCAN algorithms compared to other well-known clustering algorithms based on low sum of square error (SSE), low intra-cluster distance, and high inter-cluster distance.
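As an illustration of the Eps sensitivity described in Section 1.3, the sketch below clusters a synthetic half-moon data set (similar in spirit to the artificial data sets used in Chapter 4) with the standard DBSCAN implementation from scikit-learn. The library, the data generator, and the parameter values are assumptions for demonstration only, not the method proposed in this thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Synthetic non-spherical (half-moon) data.
X, _ = make_moons(n_samples=500, noise=0.06, random_state=0)

# Cluster the same data with three different Eps values while keeping
# MinPts (min_samples) fixed. Too small an Eps fragments the data and
# marks many points as noise (label -1); too large an Eps merges the two
# half-moons into one cluster of dissimilar points.
for eps in (0.05, 0.2, 0.6):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    n_noise = int(np.sum(labels == -1))
    print(f"eps={eps:.2f}: {n_clusters} clusters, {n_noise} noise points")
```

With a very small Eps most points are labelled as noise, while a very large Eps merges both half-moons into a single cluster; the dynamic DBSCAN method of Chapter 4 estimates Eps and MinPts from the data instead of requiring the user to guess them.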

1.5 Contributions

This study makes the following original research contributions to data mining techniques and clustering algorithms:

1. A new K-Means algorithm is proposed that can find good quality spherical clusters and solves the initial parameter selection problem of the traditional K-Means algorithm.
2. A new DBSCAN algorithm is proposed that can find good quality spherical as well as non-spherical clusters and solves the initial parameter selection problem of the traditional DBSCAN algorithm.

1.6 Outline of the Thesis

The remainder of this thesis is organized as follows. The background materials of this thesis are provided in Chapter 2. Chapter 3 contains an overview of the proposed methodology and working procedure, along with the performance evaluation of the proposed DK-Means clustering algorithm. In Chapter 4, a dynamic DBSCAN clustering algorithm is proposed; this chapter also contains an overview of the proposed methodology, the working procedure, and the performance evaluation of the proposed DDBSCAN clustering algorithm. Finally, Chapter 5 concludes this thesis.

Chapter 2

Literature Review

Chapter 2
Literature Review

2.1 Introduction

This chapter contains the background materials that are used throughout this thesis, particularly the idea of data mining, which can be applied to large datasets for discovering patterns that are used to make decisions for further development in real-world problems. An overview of data mining and its key techniques is provided, and the popular clustering algorithms related to this study are introduced. The basic idea of the K-Means method is presented, and an overview of existing K-Means methods is discussed. Finally, the density-based clustering method and the DBSCAN clustering method are explained.

2.2 Data Mining

Data mining is the process of uncovering patterns in a large dataset. The field of finding useful patterns in data goes by many names, including but not limited to data mining, knowledge extraction, information discovery, data archaeology and data pattern processing. The term "data mining" is used by statisticians and is very popular in the field of databases. The term "knowledge discovery in databases" was introduced at the first KDD workshop in 1989 and has since been used in the AI and machine-learning fields [3]. The term "data mining" appeared in academic journals as early as 1970, but it came into widespread use in the 1990s, after the rise of the internet.

Data mining is a new interdisciplinary field of computer science. It is the process of finding data patterns automatically in large databases [4]. The need for data mining has been increasing over the last ten to fifteen years: competition in the marketplace depends heavily on the efficient use of information, and information plays an important role in planning decisions and provides great value in industry and society as a whole. In the real world, a large amount of data is available from which it is difficult to retrieve useful information. Because of this practical importance, it is necessary to retrieve the structure of the data within a given time budget. Data mining provides a way of eliminating unnecessary noise from data. It helps to extract the necessary information from a large dataset and present it in the proper form when it is needed for a specific task. It is very helpful for analyzing market trends, searching for new technology, controlling production based on customer demand, and so on. In a word, data mining is the harvesting of knowledge from a large amount of data; using it, we can predict the type or behavior of a pattern.

In the field of computer technology, the increasing power of computers has made it possible to develop essential tools for working with data, tools that can cope with the large size and complexity of modern datasets. The need to refine automatic data processing, helped by further discoveries in computer science, means that our ability to collect, store and manipulate data has increased. Among these important discoveries are neural networks, cluster analysis, genetic algorithms [5], decision trees [6] and support vector machines [7].

Data mining can help governments and companies to understand their increasing or decreasing costs or revenue. For example, an early form of data mining was used by companies to analyze huge amounts of business data from supermarkets. This analysis revealed when people were more likely to shop and when they were most likely to buy certain products, such as women's or baby products. This enabled the retailer to maximize revenue by ensuring they always had enough products at the right time in the right place. One of the first best-selling systems was A.C. Nielsen's Spotlight, which broke down supermarket sales data into multiple dimensions, including volume by region and product type.

Data mining, or knowledge discovery in databases, is used to discover important information in large data sets. It is a powerful novel technology for knowledge discovery, and computational technology is an essential tool for handling the challenges posed by these new types of data sets. The area of data mining grows day by day in order to extract useful information from the fast-growing volumes of data; it seeks information within the dataset that queries and reports cannot adequately express.

Data mining is an essential part of the field of knowledge discovery in databases (KDD). Knowledge discovery in databases refers to the whole process of discovering useful knowledge from a large data set, and data mining is a particular step in that process. The KDD task is to convert raw data into useful knowledge or useful information from a large data set, as shown in Figure 2.1:

Figure 2.1: The process of knowledge discovery in database (KDD).

The process of knowledge discovery in databases contains a series of transformation steps, from data preprocessing to data mining results [8]. The input data may involve flat files, spreadsheets or relational tables, and may reside in a centralized database [9] or be distributed across multiple sites.

To transform raw data into the proper format, a pre-processing phase is carried out before the subsequent analysis. This step includes merging data from multiple sources, removing noise and duplicate observations for cleaning purposes, and selecting the features and records relevant to the data mining task. This step is the most time-consuming because of the many different types of data.

The valid and useful results are integrated and incorporated into the decision support system, which constitutes the post-processing phase. This step uses visualization, which allows analysts to explore the data and the data mining results from a variety of viewpoints. Statistical and probabilistic methods can also be applied during this step to eliminate unreliable data mining results.

If the discovered patterns do not meet the desired standard, then it is necessary to re-evaluate and change the pre-processing and data mining steps. If the discovered patterns achieve the desired standard, then the final step is to interpret the discovered patterns and turn them into knowledge.

2.3 Positions of Data Mining

Traditional data analysis techniques face challenges because new types of datasets keep appearing. To handle these new challenges, researchers have developed more efficient and scalable tools that can more easily handle different types of data.

Data mining has adopted techniques from other areas, such as optimization, signal processing, evolutionary computing, information theory, visualization and information retrieval [10]. Another area that supports this field of research is the use of databases, indexing and query processing. Figure 2.2 represents the relationship of data mining with other areas.

Figure 2.2: The relationship of data mining with other areas.

2.4 Data Mining Tasks

Data mining tasks are usually divided into two primary categories, as shown in Figure 2.3:

2.4.1 Predictive Task

The objective of a predictive task is to predict the value of one specific attribute based on the values of other attributes. The attributes used for making the prediction are called the independent variables, and the value to be predicted is commonly known as the dependent variable.

2.4.2 Descriptive Task

The objective of a descriptive task is to infer underlying relations in the data. In this type of data mining task, the values are independent, and post-processing is frequently required to validate the results.

Figure 2.3: Data mining task.

Six essential tasks are included in data mining. These are described in the following sections.

2.4.3 Classification and Regression

The predictive modeling task is divided into two types. The first is classification, which is used for discrete target variables, and the second is regression, which handles continuous target variables. The primary goal of classification and regression is to build a model that produces the minimum error between the predicted and real values. An example of classification is labelling an email as primary or social. On the other hand, predicting the future price of a product is a regression task because the price is a continuous-valued attribute (a brief illustration of both tasks is sketched after Section 2.4.4).

2.4.4 Association Rule

Association rule mining is a method that describes associated features in data and finds relationships between variables. As an example, web pages that are accessed together can be identified by association analysis.
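To make the distinction between the two predictive modeling tasks concrete, here is a brief Python sketch using scikit-learn; the library, the built-in Iris data, and the synthetic regression data are assumptions chosen purely for illustration and are not part of this thesis.

```python
from sklearn.datasets import load_iris, make_regression
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

# Classification: the target is a discrete class label (an iris species),
# analogous to labelling an email as primary or social.
X_cls, y_cls = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0).fit(X_cls, y_cls)
print("predicted class:", clf.predict(X_cls[:1]))

# Regression: the target is a continuous value, analogous to predicting
# the future price of a product.
X_reg, y_reg = make_regression(n_samples=200, n_features=3, noise=5.0,
                               random_state=0)
reg = LinearRegression().fit(X_reg, y_reg)
print("predicted value:", reg.predict(X_reg[:1]))
```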

2.4.5 Anomaly Detection

This task identifies anomalous data records; such anomalies may indicate errors in the data files, or they may be of interest in themselves and require further investigation.

2.4.6 Clustering

This task discovers groups and structures in a large data set that are in some respects similar or dissimilar, without using known structures in the data set.

2.4.7 Summarization

This task attempts to provide a more compact representation of the data set, including visualization and report generation.

2.5 Clustering

Cluster evaluation of data is an important task in knowledge discovery and data mining. Cluster formation is the process of creating data groups from a large dataset based on data similarities. The clustering process can be carried out in a supervised, semi-supervised or unsupervised manner [11].

Clustering is the method used to divide data into groups, where the objects within a group are similar to one another and dissimilar from objects in other groups according to specific similarity measurements. Clustering techniques are categorized into a number of methods, which differ from one another in the way clusters are formed. The performance of these clustering methods is evaluated against several requirements, such as minimizing the number of input parameters, discovering clusters with arbitrary shapes, clustering large data efficiently, handling noise, and requiring no prior knowledge of the number of clusters.

Clustering algorithms are powerful meta-learning tools for analyzing the data produced by modern applications. The purpose of clustering is to classify the data into groups according to the similarities, traits, and behavior of the data [12].

Many clustering algorithms have been proposed for grouping data. Most of these algorithms are based on the assumption that the number of clusters in a large data set is fixed. The problem with this assumption is that if the assumed number of clusters is too small, there is a higher chance of placing dissimilar items into the same group; on the other hand, if the number of clusters is too large, there is a higher chance of placing similar data into different groups [13].

Figure 2.4 shows the different types of clustering methods with their well-known algorithms. The following subsections briefly describe how these methods work, along with their main algorithms, with more detail on the partitioning and density-based clustering methods since they are the focus of this study.

Figure 2.4: Clustering methods with their well-known algorithms.

2.6 Clustering Methods

This section provides the necessary background on existing clustering methods.

2.6.1 Hierarchical Clustering Methods

The hierarchical clustering method produces a multi-level clustering in which data is grouped into a sequence of partitions where each sub-cluster belongs to a super-cluster. Hierarchical clustering consists of two kinds of algorithms: agglomerative ("bottom-up") and divisive ("top-down") [2]. The similar

