Cask Data Application Platform (CDAP)


CDAP is an integrated application development framework for Hadoop. It integrates and abstracts the underlying Hadoop technologies to provide simple, easy-to-use APIs to build, deploy and manage complex data analytics applications in the cloud or on-premises.

1. Accelerated ROI - faster time to market, faster time to value
2. Maximize developer productivity and minimize TCO
3. Simple, easy and standard APIs for developers and operations
4. Enables reusability and self-service
5. Future proof - distribution and deployment agnostic
6. Supports different workloads - transactional and non-transactional

You can build:
- Data Ingestion Applications -- batch or realtime
- Data Processing Workflows
- Real-time Applications
- Data Services
- Predictive Analytics Applications
- Business Analytics Applications
- Social Applications, and many more

It's built for:
- Developers
- Operations
- Data Engineers
- Data Scientists

Try it:
- Download CDAP Standalone to build your application
- CDAP Standalone Docker through Kitematic
- Use the Cloudera Manager CSD to install on a cluster

License: Apache 2.0

Cask Data Application Platform - CDAP 101
Copyright 2016 Cask Data Inc. Proprietary and Confidential

Architecture

This section describes the functional and physical architecture of CDAP.

Functional Architecture

API

Application
An Application is a standardized container framework for defining all services. It simplifies the painful integration process across heterogeneous infrastructure technologies running on Hadoop. It is responsible for managing the lifecycle of Programs and Datasets within an application. E.g. Wikipedia Analysis, Twitter Sentiment Analysis, Fraud Detection, etc.

Application Template
An Application Template is a user-defined, reusable, reconfigurable pattern of an Application. It is parameterized by a configuration that allows reconfiguration upon deployment. It simplifies development by providing one generic version of an application that can be repurposed, instead of the ongoing creation of specialized applications. It exposes the re-configurability and modularization of an Application through Plugins. E.g. user-defined templates, or Cask-provided templates like CDAP ETL Batch, CDAP ETL Real-time, CDAP Data Pipeline Batch, CDAP Data Pipeline Real-time, CDAP Spark Streaming Pipelines, Data Quality, etc.

Dataset
A Dataset is a standardized container framework for organizing, storing and accessing data from various storage engines. It simplifies integration with different storage engines, allowing one to build complex data patterns across multiple storage types on Hadoop. It is responsible for exposing transactionally consistent data patterns, integration with query engines, schema evolution and data lifecycle management. E.g. Indexed Dataset, Time Partitioned Fileset, Partitioned Fileset, OLAP Cube Dataset, Indexed Object Store, Object Store, Timeseries Dataset, etc. are different types of datasets that can be defined in CDAP.

Extension [1]
An Application Template with a domain-specific UI integrated into the CDAP UI. E.g. Cask Hydrator and Cask Tracker.

Program
A Program is a container of well-defined tasks for processing or servicing Datasets to generate zero or more Datasets. It is responsible for managing the lifecycle, and for integration with transactions, metrics, logging and the metadata system. E.g. Spark Program, MapReduce Program, Workflow Program, Worker Program, Service Program, Flow Program, etc.

Plugin
A Plugin is a customizable module exposed and used by an Application or an Application Template. It simplifies adding new features or extending the capability of an Application. Plugin implementations are based on interfaces exposed by the Application. E.g. the CDAP ETL Batch Application template exposes three plugins, namely Source, Transform and Sink; the CDAP Data Quality Application template exposes an Aggregation plugin.

Artifact
An Artifact is a versioned packaging format used to aggregate one or more Applications, Datasets, Plugins, Resources and the associated metadata. It is a JAR (Java Archive) containing the Java classes and resources required to create and run the Application.

[1] Extensions are a new concept within CDAP and are not ready for general use.
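The plugin contract described above — an Application exposing interfaces such as Source, Transform and Sink that plugin authors implement — can be modeled in plain Java. The interfaces below are purely illustrative, not the actual CDAP plugin APIs:

```java
// Illustrative model of the Source -> Transform -> Sink plugin pattern that
// the ETL Batch template is described as exposing. NOT real CDAP interfaces.
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class EtlPluginSketch {

    /** Produces records; in CDAP this role is played by a Source plugin. */
    public interface Source { List<String> read(); }

    /** Reshapes each record; the Transform plugin role. */
    public interface Transform { String apply(String record); }

    /** Consumes the final records; the Sink plugin role. */
    public interface Sink { void write(List<String> records); }

    /** The "application" wires together whatever plugin implementations it is given. */
    public static List<String> runPipeline(Source source, Transform transform) {
        return source.read().stream().map(transform::apply).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Source source = () -> Arrays.asList("alice", "bob");
        Transform upper = String::toUpperCase;
        Sink sink = records -> System.out.println(records);
        sink.write(runPipeline(source, upper)); // prints [ALICE, BOB]
    }
}
```

The key point the sketch shows is the inversion of control: the Application owns the interfaces, and any conforming implementation can be swapped in at deployment time.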

Tools

Command Line Interface (CLI)
The CDAP CLI allows developers and operations teams to script and automate interactions with local or remote CDAP entities from the shell. The CLI uses the CDAP REST APIs to provide this functionality. Using the CLI, one can manage the lifecycle of Applications, Artifacts, Programs and Datasets. More information can be found in the CDAP documentation.

User Interface (Console)
The Console provides a user-friendly graphical user interface with well-designed user workflows for deploying and managing the lifecycle of Applications, Programs, Datasets and Artifacts. Operations management capabilities allow deeper and faster insights when diagnosing issues with different entities. It also exposes administrative capabilities for managing CDAP.

Testing Framework [2][3]
An end-to-end JUnit scaffolding over CDAP that allows developers to test their Applications, Plugins and Programs during development. It is built as a modular framework, allowing developers to also test individual components. The tests can be integrated with Continuous Integration (CI) tools like Bamboo, Jenkins and TeamCity.

Performance Framework [4]
The CDAP performance framework provides the ability to load test and capture performance metrics to diagnose bottlenecks within your Application.

JDBC / ODBC Driver
The CDAP JDBC and ODBC drivers enable users to access Datasets (HDFS, HBase or a Composite Dataset) on Hadoop through Business Intelligence (BI) applications with JDBC or ODBC support. The driver achieves this integration by translating Open Database Connectivity (JDBC/ODBC) calls from the application into SQL and passing the SQL queries to the underlying Dataset management and query engine (Hive is the default).

Monitoring Integrations
These integrations allow external systems to monitor CDAP and the Applications running within it. Integrations with Nagios, Sensu, Cacti and Splunk are supported; they are achieved by plugins that access status, logs and metrics through the REST API.

[2] Framework only available in Java.
[3] In-memory CDAP - abstracted to in-memory structures for easy debugging (shorter stack traces).
[4] Isn't publicly available yet, but access will be provided on-demand. We are still evolving it.

Router

Service Discovery
Service discovery allows users to register data Services running in containers on a cluster. It achieves this by registering one or more service endpoints announced with ZooKeeper and actively maintaining the live state of the running services.

Service Dispatch
A request for accessing a service method is appropriately routed to the right container running a service on the cluster. In the case of multiple instances of a service, a routing strategy is engaged automatically to distribute the load across the instances.

REST APIs
REST APIs are HTTP interfaces exposed by CDAP for a multitude of purposes: everything from deploying and managing Applications, Artifacts, Plugins and Datasets, to ingesting data events, to querying data from datasets, to checking the status of various system and user services. More information about the different REST APIs can be found in the CDAP documentation.

CDAP System Services
In order to simplify the deployment model of CDAP on a Hadoop cluster, CDAP uses a small portion of the cluster to run mission-critical CDAP system services in YARN containers. It also does this to support elastic scaling of the services without having to stop them. So, CDAP runs partly on edge nodes [5] and partly within the cluster. The system ensures that the system services are distributed evenly across the nodes of the cluster and don't interfere with normal operations and jobs running on the cluster. More information about deployment and services can be found in the CDAP documentation.

Additional Information

Dataset and Service
Data within Datasets can be exposed to external clients through a Service Program. Developers can implement a custom Service that exposes data from a Dataset; Services can also be used to write to a Dataset. A Service exposes user-defined REST APIs over a Dataset. Services execute as YARN containers on the cluster, eliminating the additional step of migrating data into a traditional database in order to expose it to applications. The future plan is for Datasets to expose REST APIs automatically: developers will be able to use annotations to map REST endpoints to methods within Datasets, simplifying the Data-as-a-Service concept.

[5] Edge nodes are the interfaces between the Hadoop cluster and the outside network. They are also referred to as gateway nodes. They run client applications and cluster administration tools.
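The document does not specify which routing strategy the dispatcher uses; a generic round-robin dispatcher of the kind described — distributing calls across multiple live instances of a service — might look like this (an illustrative sketch, not CDAP source code; the endpoint strings are made up):

```java
// Hedged sketch of a round-robin routing strategy across service instances.
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class RoundRobinRouter {
    private final List<String> instances;          // e.g. container endpoints
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinRouter(List<String> instances) {
        this.instances = instances;
    }

    /** Pick the next service instance, cycling through all of them. */
    public String route() {
        int idx = (int) (counter.getAndIncrement() % instances.size());
        return instances.get(idx);
    }

    public static void main(String[] args) {
        RoundRobinRouter router = new RoundRobinRouter(
                List.of("host-a:10010", "host-b:10010"));
        System.out.println(router.route()); // host-a:10010
        System.out.println(router.route()); // host-b:10010
        System.out.println(router.route()); // host-a:10010 again
    }
}
```

In a real deployment the instance list would come from the ZooKeeper-backed discovery described above, and would be refreshed as instances come and go.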

Namespaces
CDAP provides isolation of applications and data through Namespaces. A Namespace can conceptually be thought of as a partition of a CDAP instance. Applications and Datasets in one namespace are not accessible from another namespace. It is the first step towards introducing multi-tenancy in CDAP. This feature can be used for partitioning a single Hadoop cluster into multiple namespaces:
- To support different environments, such as development, QA and staging;
- To support multiple customers; and
- To support multiple sub-organizations within an organization.
More information on Namespaces can be found in the CDAP documentation.

Security
CDAP supports Kerberos-enabled clusters and also supports perimeter-level security for authentication through LDAP, JASPI or Basic mechanisms. Authorization is currently being worked on in the 3.4 release. More information on CDAP security can be found in the CDAP documentation.

Supported Hadoop Distributions
CDAP and all its Applications are agnostic to the distribution they run on. On a nightly basis, the platform as well as test Applications are tested on various flavors of Hadoop distributions.

Supported Programming Languages
Java is the only programming language currently supported by CDAP. CDAP has plans to support other languages like Python, R and JavaScript. The near-term plan is to support Python through Py4J integration.
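The isolation property described above — an entity registered in one namespace is invisible from another — can be sketched in plain Java. This is an illustrative model of the behavior, not the CDAP Namespace API:

```java
// Sketch: every dataset lookup is scoped to a namespace, so "dev" cannot
// see what "qa" owns. Purely illustrative, not CDAP code.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class NamespaceSketch {
    private final Map<String, Set<String>> datasetsByNamespace = new HashMap<>();

    /** Register a dataset inside one namespace. */
    public void createDataset(String namespace, String dataset) {
        datasetsByNamespace.computeIfAbsent(namespace, ns -> new HashSet<>()).add(dataset);
    }

    /** Lookup is always namespace-scoped; there is no cross-namespace view. */
    public boolean exists(String namespace, String dataset) {
        return datasetsByNamespace.getOrDefault(namespace, Set.of()).contains(dataset);
    }

    public static void main(String[] args) {
        NamespaceSketch cdap = new NamespaceSketch();
        cdap.createDataset("dev", "purchases");
        System.out.println(cdap.exists("dev", "purchases")); // true
        System.out.println(cdap.exists("qa", "purchases"));  // false
    }
}
```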

Deployment Architecture

This section describes the different components of the CDAP runtime system and how they are deployed on a Hadoop cluster. The CDAP runtime system is made up of two major components:

CDAP Server
The CDAP Server is a collection of services (Figure 2) essential for successfully running CDAP. It can be installed on one or more edge nodes of a cluster and is responsible for managing only the cluster to which it is configured. The services are installed for a few reasons:
- A fixed IP/hostname for accessing the REST APIs
- Impersonating a user in secure mode to run in a cluster
- Managing (start/stop/monitor) the system services running within the cluster

CDAP Services
These are mission-critical CDAP system services running in YARN containers on a Hadoop cluster. The lifecycle of these services is managed by the CDAP Server. Below are the services running on the cluster:
- Dataset Executor: responsible for managing dataset lifecycle
- Metadata: responsible for managing metadata for applications and datasets
- Log and Metrics Aggregator: responsible for aggregating and indexing logs and metrics across all applications and datasets
- Transactions: responsible for providing consistency guarantees across applications and datasets
- Explore: responsible for exposing the querying (SQL) interface for datasets
- Stream: responsible for data ingestion, either in realtime or batch

Use-case(s)

Following are a few use-cases that CDAP is well-suited for and is being used for within customer environments:
- Data Lake
- High Volume Streaming Analytics
- Information Security Reporting
- Real-time brand and marketing campaign monitoring

Data Lake

Building an enterprise data lake requires building a reliable, repeatable and fully operational data management system, which includes ingestion, transformation and distribution of data. It must support varied data types and formats, and must be able to capture data flow in various ways. The system must support the following:
- Transform, normalize, harmonize, partition, filter and join data
- Interface with anonymization and encryption services external to the cluster
- Generate metadata for all data feeds, snapshots and datasets ingested, and make it accessible through APIs and web services
- Perform policy enforcement for all ingested and processed data feeds
- Track and isolate errors during processing
- Perform incremental processing of data being ingested
- Reprocess data in case of failures and errors
- Apply retention policies on ingested and processed datasets
- Set up a common location format (CLF) for storing staging, compressed, encrypted and processed data
- Provide filtered views over processed datasets
- Monitor, report, and alert based on thresholds for transport and data quality issues experienced during ingestion, helping provide the highest quality of data for analytics needs
- Annotate Datasets with business/user metadata
- Search Datasets using metadata
- Search Datasets based on schema field names and types
- Manage data provenance (lineage) as data is processed/transformed in the data lake

Outcome
- A team of 10 Java (non-Hadoop) developers was able to build an end-to-end ingestion system with the capabilities described above using CDAP. Lower barrier to entry.
- These developers provided a self-service platform to the rest of the organization(s) to ingest, process and catalog data. Abstractions helped them build at a much faster pace and get it to their customers faster. Time to market.
- The ingestion platform standardized and created conventions for how data is ingested, transformed and stored on the cluster, allowing platform users to on-board at a much faster rate. Time to value.
- CDAP's native support for incremental processing, reprocessing, metadata tracking, workflows, retention, snapshotting, monitoring and reporting expedited the effort to get the system to their customers. Time to market.
- CDAP is installed on 8 clusters with hundreds of nodes.
- Data Lake users were able to locate Datasets faster and had faster access to metadata, data lineage and data provenance. This allowed them to utilize their clusters efficiently and also aided them in data governance, auditability, and improving the data quality of Datasets. CDAP Tracker provided this set of capabilities.

High Volume Streaming Analytics

Building a high-speed, high-volume streaming analytics solution with exactly-once semantics is complex, resource intensive, and hard to maintain and enhance. This use-case required data collection from web logs, mobile activity logs and CRM data, in real-time and batch. The data collected was then organized into customer hierarchies and modeled to deliver targeted ad campaigns and marketing promotion campaigns. It also had to provide advanced analytics for tracking the campaigns in real-time. This application had to support the following:
- Processing 38 billion transactions per day in real-time
- Categorizing customer activity into buckets and hierarchies
- Generating unique counts in real-time, to understand audience reach, track behavior trends, and the like
- Generating hourly, daily, monthly and yearly reports on multiple dimensions
- Providing unique stat counts on an hourly basis rather than weekly
- Reprocessing data without side effects from bug fixes and new features
- Exactly-once processing semantics for reliable processing
- Processing data both in real-time and batch

Outcome
- CDAP's abstractions and its real-time programs simplified building this application and getting it to market faster. Time to market.
- The team replaced a MapReduce-based batch system with a realtime system, delivering insights every minute instead of every few days.
- CDAP's exactly-once and transactional semantics provided a high degree of data consistency during failure scenarios, making it easy to debug and reason about the state of the data.
- CDAP's Standalone and Testing frameworks allowed the developers to build this application efficiently. No distributed components were required to run functional tests.
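The "unique counts on an hourly basis" requirement above can be sketched as follows. Exact counting with a HashSet is shown for clarity only; at 38 billion events per day a probabilistic sketch such as HyperLogLog would be used instead. This is an illustration, not the customer's code:

```java
// Sketch: bucket events by hour and count distinct user ids per bucket.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class HourlyUniques {
    private final Map<Long, Set<String>> uniquesByHour = new HashMap<>();

    /** Record one event; epochMillis is bucketed to the hour it falls in. */
    public void record(long epochMillis, String userId) {
        long hour = epochMillis / 3_600_000L;
        uniquesByHour.computeIfAbsent(hour, h -> new HashSet<>()).add(userId);
    }

    /** Unique users seen during the hour containing epochMillis. */
    public int uniqueCount(long epochMillis) {
        return uniquesByHour.getOrDefault(epochMillis / 3_600_000L, Set.of()).size();
    }

    public static void main(String[] args) {
        HourlyUniques stats = new HourlyUniques();
        stats.record(0L, "u1");
        stats.record(1_000L, "u1");       // duplicate within the same hour
        stats.record(2_000L, "u2");
        stats.record(3_600_000L, "u1");   // next hour, counted separately
        System.out.println(stats.uniqueCount(0L));         // 2
        System.out.println(stats.uniqueCount(3_600_000L)); // 1
    }
}
```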

Information Security Reporting

In a large enterprise environment there are traditional sources that house a great deal of data. There is a constant need to load data into Hadoop clusters to perform complex joins, filtering, transformations and report generation. Moving data to Hadoop is cost-effective when there is a need to run many complex, ad-hoc queries that would otherwise require expensive execution on traditional data storage and querying technologies.

The customer had been attempting to build a reliable, repeatable data pipeline for generating reports across all network devices that access resources. Data is currently aggregated into five different Microsoft SQL Servers. The aggregated data is then periodically (once a day) staged into a secured (Kerberos) Hadoop cluster. Upon loading the data into the staging area, transformations (rename fields, change the type of a field, project fields) are performed to create new datasets. The data is registered with Hive so that Hive SQL queries can be run for any ad-hoc investigation. Once all the data is in final, independent datasets, the next job is kicked off -- it joins the data from across all five tables to create a new uber table that provides a 360-degree view of all network devices. This table is then used to generate a report as part of another job.

Following are the challenges the customer faced:
- Ensuring that the reports aligned to day boundaries
- Restarting failed jobs from the last point where they had failed (they had to reconfigure pipelines to restart failed jobs)
- Adding new sources required a lot of setup and development time
- Inability to test the pipeline before it was deployed -- this led to inefficient utilization of the cluster, as all the testing was performed on the cluster
- They had to cobble together a set of loosely federated technologies -- Sqoop, Oozie, MapReduce, Spark, Hive and Bash scripts

Outcome
- The in-house Java developers, with limited knowledge of Hadoop, built and ran the complex pipelines at scale within two weeks after four (4) hours of training
- The visual interface enabled the team to build, test, debug, deploy, run and view pipelines during operations
- The new process reduced system complexity dramatically, which simplified pipeline management
- The development experience was improved by reducing inappropriate cluster utilization
- Transforms were performed in-flight with error record handling
- Tracking tools made it easy to rerun the process from any point of failure
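The staging transformations described above (rename a field, change a field's type, project fields) can be sketched over simple map-based records. The field names below are invented for illustration; the customer's actual schema is not given in the source:

```java
// Sketch of in-flight record transforms over map-based records.
import java.util.LinkedHashMap;
import java.util.Map;

public class StagingTransforms {

    /** Rename a field, keeping its value. */
    public static Map<String, Object> rename(Map<String, Object> rec, String from, String to) {
        Map<String, Object> out = new LinkedHashMap<>(rec);
        out.put(to, out.remove(from));
        return out;
    }

    /** Change a field's type, e.g. a numeric string to an Integer. */
    public static Map<String, Object> toInt(Map<String, Object> rec, String field) {
        Map<String, Object> out = new LinkedHashMap<>(rec);
        out.put(field, Integer.parseInt(String.valueOf(out.get(field))));
        return out;
    }

    /** Project: keep only the named fields, in the order given. */
    public static Map<String, Object> project(Map<String, Object> rec, String... fields) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String f : fields) out.put(f, rec.get(f));
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> rec = new LinkedHashMap<>();
        rec.put("dev_name", "fw-01");   // hypothetical device-log fields
        rec.put("port", "443");
        rec.put("raw", "...");
        Map<String, Object> staged =
                project(toInt(rename(rec, "dev_name", "device"), "port"), "device", "port");
        System.out.println(staged); // {device=fw-01, port=443}
    }
}
```

In the customer's pipeline these steps ran per-record in-flight, with records that fail (e.g. a non-numeric `port`) diverted to error handling rather than aborting the job.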

Real-time brand and marketing campaign monitoring

Enterprises use Twitter to know when people are talking about their brand and to understand sentiment toward their new marketing campaigns. Real-time monitoring capabilities on Twitter allow them to keep a close eye on the results of marketing efforts. Developing a real-time pipeline that ingests the full Twitter stream, then cleanses, transforms, and performs sentiment and multi-dimensional analysis of the Tweets related to a campaign delivers a valuable real-time decision-making platform. The aggregated data is exposed through REST APIs to an internal tool for visualization, making consumption of the output easier.

The pipeline was built using Storm, HBase, MySQL and JBoss. Storm is used to ingest and process the stream of Tweets. The Tweets are analyzed using NLP algorithms to determine sentiment. They are aggregated on multiple dimensions, like number of re-tweets and attitude (positive, negative or neutral). The aggregations are stored in HBase. Periodically (twice a day) the data from HBase is moved into MySQL. JBoss exposes REST APIs for accessing the data in MySQL.

The goal of this use-case was to reduce the overall complexity of the pipeline: moving away from maintaining a separate cluster for processing the real-time Twitter stream, integrating NLP scoring algorithms for sentiment analysis, and exposing the aggregated data from HBase with lower latency, thereby reducing the delay between the data being available in HBase and its delivery via REST API. The result: an easy to build, deploy and manage real-time pipeline with better operational insights.

Outcome
- A Cask Hydrator pipeline for processing the full Twitter stream was built in 2 weeks.
- Cleansing, transforming, analyzing and aggregating tweets at about 6K/sec in-flight.
- Consolidated the infrastructure into a single Hadoop cluster.
- Java developers were able to build the pipeline and plugins with less of a learning curve.
- A CDAP Service over the OLAP Cube Dataset eliminated the expensive data movement and reduced the latency between an aggregation being generated and the results being exposed through REST APIs, allowing them to make better decisions faster.
- CDAP and Cask Hydrator's transparency provided easy operational insights through custom dashboards and aggregated logs for debugging.
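The multi-dimensional aggregation described above can be sketched as a counter keyed by dimension values. The attitude dimension is from the text; the retweet bucket boundaries below are invented for illustration, and the real pipeline stored such aggregates in an OLAP Cube Dataset rather than an in-memory map:

```java
// Sketch: count tweets per (attitude, retweet-bucket) cell.
import java.util.HashMap;
import java.util.Map;

public class TweetCube {
    private final Map<String, Long> counts = new HashMap<>();

    /** Aggregate one tweet into its (attitude, retweet-bucket) cell. */
    public void add(String attitude, int retweets) {
        // Bucket boundaries are illustrative, not from the source.
        String bucket = retweets >= 100 ? "viral" : retweets >= 10 ? "popular" : "quiet";
        counts.merge(attitude + "/" + bucket, 1L, Long::sum);
    }

    public long count(String attitude, String bucket) {
        return counts.getOrDefault(attitude + "/" + bucket, 0L);
    }

    public static void main(String[] args) {
        TweetCube cube = new TweetCube();
        cube.add("positive", 3);
        cube.add("positive", 250);
        cube.add("negative", 3);
        System.out.println(cube.count("positive", "quiet")); // 1
        System.out.println(cube.count("positive", "viral")); // 1
    }
}
```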

A day in the life with CDAP

This section of the document describes how a day would go for a developer and an operations team member building and deploying an application or a solution on Hadoop in production using CDAP. To demonstrate this, we will take a common use-case within an organization using Hadoop.

Use-case
Joltie, a Java developer, and Root, an operations team member, work in an enterprise. They have been tasked with building and operationalizing a data pipeline for processing data that is ingested and available on HDFS. The data is delivered to a standard directory on a daily basis, with an approximate size of 600 GB per day. The processing pipeline has to be operationalized to process the daily data within an SLA of 3 hours. The developer and operations only have access to CDAP, not Cask Hydrator.

The data pipeline must include the following:
1. A dataset integrity job that takes a pass over all the data from a day to check whether the data on HDFS is reliable enough to be processed.
2. A transformation job that processes a day's worth of data -- applying transformations, filtering and field-level validation. The output of the job is a new transient dataset.
3. The output from step 2 is then picked up by another job that interacts with the encryption system to encrypt certain fields in the feed. This job generates two outputs -- one that is encrypted and one that is not. The job writes both output sets to datasets partitioned by day. These datasets are explorable using Hive.
4. The unencrypted partition output is further processed to build and update a data model in HBase. The data in HBase is also explorable through Hive.

Conditions
1. The jobs in step 1 and step 2 can run in parallel.
2. The job in step 3 is executed only if the job in step 1 confirms that the data is reliable and the step 2 job is successful.
3. The job in step 4 is dependent on successful completion of step 3.

Environment
The users are provided a very constrained environment for developing and testing the pipeline. In total there are three (3) clusters made available to them:
- A Dev Cluster - 2% capacity of the production cluster
- A QA Cluster - 5% capacity of the production cluster
- A Production Cluster
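The conditions above amount to simple gating logic between jobs. In CDAP the dependencies would be expressed through the Workflow API, but the gating itself can be sketched in plain Java (an illustration of the conditions, not CDAP code):

```java
// Sketch of the gating rules: steps 1 and 2 run in parallel, step 3 runs only
// if both succeed, and step 4 runs only after step 3 succeeds.
public class PipelineGating {

    /** Step 3 (encryption) requires the integrity check AND the transform job. */
    public static boolean canRunStep3(boolean dataReliable, boolean transformOk) {
        return dataReliable && transformOk;
    }

    /** Step 4 (HBase model update) depends solely on step 3 succeeding. */
    public static boolean canRunStep4(boolean step3Ok) {
        return step3Ok;
    }

    public static void main(String[] args) {
        // Integrity check passed but the transform job failed:
        System.out.println(canRunStep3(true, false)); // false -> step 3 skipped
        // Both upstream jobs succeeded:
        System.out.println(canRunStep3(true, true));  // true
        System.out.println(canRunStep4(true));        // true -> step 4 runs
    }
}
```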

For Joltie - The Developer

Joltie, a mid-level Java developer with experience building web applications, is very excited about this opportunity to build a Big Data application. She has taken the initiative to learn the basics of Hadoop. She is intimidated by the complexity of Hadoop and has been looking at different examples of how to process data on it. Following are the additional requirements she has been tasked with:
- Make the application code modular and testable
- Commit changes on a regular basis
- Write unit test cases for each piece of functionality added to the application
- Set up a job in Jenkins that runs unit tests on every check-in
- Set up an end-to-end test for the application on the QA cluster
- Instrument the application correctly to provide more insight into business processing
- The application should handle some critical edge cases
- In case of an error in the pipeline, the user should be able to fix the issue and easily reprocess
- The pipeline must support incremental processing
- The pipeline must handle the case where data arrives late
- Joltie has to work with Root (the operations team member) to successfully deploy the application to QA and production
- Joltie has to use CDAP to build this application

Material Available
- CDAP Documentation
- CDAP Apps & Examples

Tools Needed
- CDAP SDK
- Maven
- IDE
- Laptop
- Node.js
- Java

Design & Research
Joltie starts by looking at the pipeline requirements and, in parallel, spends time reading the CDAP documentation and running examples. She is delighted that she doesn't need to install Hadoop.

Development

Day 1: First Cut of the Application
- Joltie starts by creating a boilerplate CDAP Application using the CDAP Maven archetype. All the necessary dependencies are included, so she is ready to get started.
- She then modifies the Application to build a Directed Acyclic Graph (DAG) for processing the input data, as described in the use-case, using the Workflow API provided by CDAP.
- She also uses the CDAP JUnit scaffolding to build a unit test for the Workflow. She finds it very useful to be able to test the workflow before it is deployed on a cluster or anywhere else.
- Joltie builds the project using Maven to generate the application artifact.
- She then starts a CDAP Standalone instance on her laptop.
- She deploys the artifact into CDAP Standalone and plays around with it. She cannot test it at scale, but she is able to experience working with the Application as it would behave on the cluster.
- She iterates through the development cycle without having to touch a cluster.
- She goes home feeling like she has accomplished a great deal and has developed the first version of the Application by herself.

Day 2: Application Enhancement and CI Setup
- It's a beautiful day. Joltie is ready to include a few of the edge cases that were provided as requirements and take the application to completion.
- She reads about Partitioned Fileset Datasets and configures her Application to include one in order to solve the edge cases. Partitioned Fileset Datasets are transactional and provide the consistency required to handle new partitions and to handle errors efficiently.
- She iterates a few times to make sure she includes test scenarios for the edge cases, and also tries them within CDAP Standalone. She is able to simulate a few of the scenarios, but she will have to try them on a real cluster to make sure everything works as expected under error conditions.
- Next, she sets up a Jenkins job to trigger tests on every check-in. She specifies the configurations and commands to run the tests the same way she would in a shell.
- She now wants to set up an end-to-end test of her application. She is not sure how to accomplish this, so she opens an issue within Cask's support portal. She gets a response back within a few hours with information on how to set up the end-to-end test.
- She follows the instructions specified by a Cask representative to set up an end-to-end job on Jenkins.
- She has accomplished a lot on the second day!

Day 3: Operations Handoff
- Joltie reaches the office early, with a spring in her step, to get her Application into the QA environment.
- Before starting to work with Root, Joltie adds metrics to track the many business metrics that would be useful for debugging issues and providing insights into the Application itself. She uses the CDAP-provided Metrics APIs.
- She meets Root and explains the Application and what it does. Root, having worked on Hadoop previously, is not very excited about a new Application being dropped in his lap to operate.
- Joltie starts off by deploying the Application into a CDAP Standalone instance and starting it. She proceeds to explain to Root the different aspects of the Application and Workflow and how the data will be processed. Root has already started feeling comfortable and is asking a lot of questions about how he can monitor this Application.
- Joltie then easily builds a dashboard using the CDAP Console and highlights the metrics that Root can use to monitor the application.

For Root - The Operations Guy

Root, the operations guy, is charged with deploying and managing the data pipeline in the QA and production environments. In order to be successful with this project, Root needs to accomplish the following:
- Install and configure CDAP on a cluster using the package manager provided by the distribution he is using
- Secure the CDAP instance with authentication, so that only authenticated users are allowed to gain access to the instance
- Ensure that the authentication can be integrated …
