
Integrating SAP BusinessObjects with Hadoop
Using a multi-node Hadoop Cluster
May 17, 2013
Visual BI Solutions Inc. (http://www.visualbis.com)

Contents

1. Installing a Single Node Hadoop Server
2. Configuring a Multi-Node Hadoop Cluster
3. Configuring Hive Data Warehouse
4. Integrating SAP BusinessObjects with Hadoop

1. Installing a Single Node Hadoop Server

Installing a single node Hadoop server involves the following steps:

1. Install a stable Linux OS (preferably CentOS) with ssh, rsync and a recent JDK from Oracle.
2. Download the Hadoop .rpm (equivalent to a Windows .exe) from the Apache website.
3. Install the downloaded file with the rpm or yum package manager.
4. Apache provides generic configuration options (mentioned below) that can be deployed by executing the scripts packaged with the .rpm file.
5. Execute the configuration process by running the hadoop-setup-conf.sh script with root privilege. Select the "default" option for the config, log, pid, NameNode, DataNode, JobTracker and TaskTracker directories, and provide the system name for the NameNode and DataNode hosts.
6. To install the single node server .conf files, run the hadoop-setup-single-node.sh script with root privilege and select the default option for all categories.
7. Set up the single node and start the Hadoop services by running the hadoop-setup-hdfs.sh script with root privilege.
8. The .rpm file comes with some basic examples such as wordcount, pi and teragen. These can be used to test that all the services are working (see the sketch after this list).
9. Hadoop requires six different services to be running for correct functioning:
   (a) Hadoop NameNode
   (b) Hadoop DataNode
   (c) Hadoop JobTracker
   (d) Hadoop TaskTracker
   (e) Hadoop Secondary NameNode
   (f) Hadoop History Server
10. If all services are running, the single node cluster is ready for operation.
11. Hadoop service status can be checked with the following Linux command (these services are located in the /etc/init.d directory):

    root: service hadoop-namenode status

    Similarly, the service command can be used to start or stop services:

    root: service hadoop-datanode start
    root: service hadoop-jobtracker stop

For more detailed info on Hadoop services: http://www.cloudera.com, http://www.wikipedia.org
For more installation options: http://hadoop.apache.org
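As a quick smoke test of step 8, the bundled example jobs can be run from the shell once all six services are up. A minimal sketch, assuming the examples jar was installed under /usr/share/hadoop (the exact path and jar name vary by Hadoop version):

    # Estimate pi with 4 map tasks and 1000 samples each
    hadoop jar /usr/share/hadoop/hadoop-examples-*.jar pi 4 1000

    # Run wordcount: copy a local file into HDFS, run the job, read the result
    hadoop fs -mkdir /user/test
    hadoop fs -put /etc/hosts /user/test/hosts
    hadoop jar /usr/share/hadoop/hadoop-examples-*.jar wordcount /user/test /user/test-out
    hadoop fs -cat /user/test-out/part-r-00000

If both jobs complete and the output file is readable, HDFS and MapReduce are working end to end.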

The running Hadoop services can be monitored through their web interfaces.

[Screenshot: NameNode web interface]

[Screenshot: DataNode web interface]

[Screenshot: JobTracker web interface]

[Screenshot: TaskTracker web interface]

[Screenshot: Hadoop basic commands]
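For reference, a few basic HDFS shell commands of the kind shown in the screenshot above (the paths are illustrative):

    hadoop fs -ls /                    # list the HDFS root directory
    hadoop fs -mkdir /data             # create a directory in HDFS
    hadoop fs -put local.txt /data     # copy a local file into HDFS
    hadoop fs -cat /data/local.txt     # print a file stored in HDFS
    hadoop fs -rm /data/local.txt      # delete a file from HDFS
    hadoop dfsadmin -report            # show capacity and live DataNodes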

2. Configuring a Multi-Node Hadoop Cluster

A single node Hadoop server can be expanded into a Hadoop cluster. In cluster mode the Hadoop NameNode will have many live DataNodes and many TaskTrackers.

Steps involved in the installation of a multi-node Hadoop cluster:

1. Install a stable Linux (preferably CentOS) on all machines (master and slaves).
2. Install Hadoop on all machines using the Hadoop RPM from Apache.
3. Update the /etc/hosts file on each machine, so that every single node in the cluster knows the IP address of all other nodes.
4. In the master node's /etc/hadoop directory, update the masters and slaves files with the domain names of the master node and slave nodes respectively.
5. Generate an SSH key pair for the master node and place the public key on all the slave nodes. This will enable password-less SSH login from the master to all slaves (see the sketch after this list).
6. Run the hadoop-setup-conf.sh script on all nodes. On the master, let all URLs point to the master. On the slaves, update the NameNode and JobTracker URLs to point to the master node; the other URLs point to localhost.
7. Open firewall ports for communication on both the master and the slave nodes.
8. On the master, run the command start-dfs.sh; this will start the NameNode (on the master) and the DataNodes (on both master and slaves).
9. On the master, run the command start-mapred.sh; this will start the JobTracker (on the master) and the TaskTrackers (on both master and slaves).
10. Now the NameNode and JobTracker will have more active nodes compared to the single node server.

For more configuration options: http://hadoop.apache.org/docs/stable/cluster_setup.html
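A minimal sketch of steps 5, 8 and 9, assuming a hadoop user on hypothetical hosts slave1 and slave2:

    # On the master: generate a key pair (empty passphrase) and
    # copy the public key to each slave for password-less login
    ssh-keygen -t rsa -P ""
    ssh-copy-id hadoop@slave1
    ssh-copy-id hadoop@slave2

    # Still on the master: bring up HDFS first, then MapReduce
    start-dfs.sh      # NameNode on master, DataNodes on master and slaves
    start-mapred.sh   # JobTracker on master, TaskTrackers on master and slaves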

Some screenshots of the multi-node Hadoop cluster at work:

[Screenshot: NameNode]

[Screenshot: DataNode]

[Screenshot: List of DataNodes]

[Screenshot: List of TaskTrackers]

[Screenshot: JobTracker job status]

[Screenshot: TaskTracker task status]

3. Configuring Hive Data Warehouse

The Hive data warehousing environment runs on top of Hadoop. It performs ETL at run time and makes data available for reporting. Hive has to be installed first and then hosted as a service using the Hive server option.

Steps involved in configuring Hive:

1. Install and configure Hadoop on all machines and make sure all the services are running.
2. Download Hive from the Apache website.
3. Install MySQL for Hive metadata storage, or just configure the default Derby database. Any RDBMS can be used for the Hive metastore; this is done by placing the correct JDBC connector in the Hive lib directory. For detailed info on connectivity, follow this link: https://ccp.cloudera.com/display/CDHDOC/Hive+Installation#HiveInstallation-HiveConfiguration
4. Copy the needed .jar files to the required directories as per the instructions in the above link.
5. Go to the /bin directory in the Hive package folder and execute the hive command.
6. Queries can now be executed in the shell (see the sketch after this list).
7. The Hive Web Interface can be started by executing the hive command as: hive --service hwi
8. The Hive Thrift server can be started by executing the hive command as: hive --service hiveserver
9. Open the Hive server port (default 10000) in the firewall for connections through JDBC.
10. If security is needed for the Hive server, configure Kerberos network authentication and bind it to the Hive server. For more information, refer to http://www.cloudera.com.

For more config options: http://hive.apache.org
For Hive JDBC connectivity: https://cwiki.apache.org/Hive/hiveclient.html#HiveClient-JDBC
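A minimal sketch of steps 5 to 8, using a hypothetical table purely as a smoke test:

    # Start the Hive Web Interface (default port 9999) and the Thrift server
    hive --service hwi &
    hive --service hiveserver &   # listens on port 10000 by default

    # Verify that the shell and metastore work with a simple DDL and query
    hive -e "CREATE TABLE IF NOT EXISTS words (word STRING, freq INT);"
    hive -e "SHOW TABLES;"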

Screenshots of the Hive server:

[Screenshot: Hive Web Interface]

[Screenshot: Hive command line]

4. Integrating SAP BusinessObjects with Hadoop

Universe Design Using IDT

Steps involved in configuring SAP BusinessObjects for use with Hadoop:

1. Configure SAP BusinessObjects with Hive JDBC drivers if the server is of a version lower than BO 4.0 SP5; from BO Server 4.0 SP5 onwards, SAP provides Hive connectivity by default. To configure JDBC drivers in earlier versions, refer to page 77 of this document: http://help.sap.com/businessobject/product_guides/boexir4/en/xi4sp4_data_acs_en.pdf
2. Create a BO universe:
   1. Open SAP IDT and create a user session with login credentials.
   2. Under the session, open the Connections folder and create a new relational connection.

   3. Under the driver selection menu, select Apache - Hadoop Hive - JDBC Drivers.
   4. In the next tab, enter the database URL:port, username and password, and click Test Connectivity (see the connectivity sketch after this list). If it is successful, save the connection by clicking Finish.
   5. Now create a new project in IDT and create a shortcut for the above connection in the project.

   6. Create a new Data Foundation layer and bind the connection to the Data Foundation layer.
   7. This connection will be used by the Data Foundation layer to import data from the Hive server.
   8. From the Data Foundation layer, drag and drop the tables which are needed by the universe. Create views in the Data Foundation if required.
   9. Create a new Business layer and bind the Data Foundation layer to the Business layer.
   10. Attributes can be set as measures with suitable aggregators in the Business layer.
   11. Right-click the Business layer and select Publish - Publish to Repository. Run an integrity check before publishing to verify dependencies.
   12. Now log on to the CMC and set the universe access policy for users.
   13. Now open the WebI Launchpad or Rich Client and select Universe as the source. The published universe must be listed.

For detailed info refer to http://scn.sap.com and http://help.sap.com
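Before entering the URL in step 4 above, it can help to confirm that the Hive Thrift port is reachable from the BO server. A sketch, with a hypothetical hostname:

    # Check that the HiveServer (Thrift) port is open from the BO server
    telnet hive-master.example.com 10000

    # The JDBC URL entered in IDT for the HiveServer1 driver has the form:
    #   jdbc:hive://hive-master.example.com:10000/default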

Some screenshots of universe design:

[Screenshot: Data Foundation layer]

[Screenshot: Business layer]

[Screenshot: Convert to Measure]

[Screenshot: Publish Universe]

3. Create reports.

The published universe can be accessed through WebI, Dashboards or Crystal Reports. Select the Hive universe as the data source and build queries using the Query Panel. The universe converts the user's queries into HiveQL statements and returns the results to the report.

Some screenshots of text processing reports:

[Screenshot: WebI mobile report on word count]
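For illustration, HiveQL of the kind the universe might generate for the word count report above; the table and column names are hypothetical:

    # A word-count style query, as it could be issued against the Hive server
    hive -e "SELECT word, SUM(freq) AS total_count
             FROM words
             GROUP BY word
             ORDER BY total_count DESC
             LIMIT 20;"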
