CustoVal: Estimating Customer Lifetime Value Using Machine Learning Techniques

Dept. of CIS - Senior Design 2013-2014

Charu Jangid (jangidc@seas.upenn.edu), Trisha Kothari (kotharit@seas.upenn.edu), Jarred Spear (jarreds@seas.upenn.edu), Edward Wadsworth (wadse@seas.upenn.edu)
Univ. of Pennsylvania, Philadelphia, PA

Advisor: Dr. Eric Eaton (eeaton@seas.upenn.edu)

ABSTRACT

The estimation of Customer Lifetime Value (CLV) is one of the core pillars of strategy development and marketing. CLV is a measurement, in dollars, of the long-term relationship between a customer and a company, revealing how much that customer is worth over a period of time. CLV is a powerful predictor when considering customer acquisition processes, as well as when selecting optimal service levels to provide to different customer groups.

Current methods for estimating CLV build a single model using the entire population of customers as input. This not only loses the granularity in the data, but also gives rise to poor targeting and ineffective strategic advertising for consumers. This paper seeks to show the advantages of intelligently combining smaller, targeted models in order to build separate models for customers buying different product categories. This enables an effective use of the data that firms have about customers to make intelligent strategic decisions.

Using a sample dataset, the proposed implementation of Multitask learning [2], in which learned knowledge is shared between related categories, yields a strong improvement in CLV forecast accuracy over a single, large model on product categories with fewer transactions (less than 150). Additionally, this same method reduces the standard deviation of error compared to the single large model. Most significantly, the Multitask learning models tend to perform better than single models when categories have sparse data to train on, traditionally considered a harder task. These results indicate that Multitask learning techniques can lead to a better outcome than current industry standards, and may be a better alternative to the existing methodology.

1. INTRODUCTION

In marketing, Customer Lifetime Value (CLV) is a prediction of the net profit attributed to the entire future relationship a company has with a customer. Accurate and timely calculation of CLV is important because knowing which customers are likely to be very valuable to any given company, as well as which will cause the firm to lose money, can greatly help with tailoring the marketing, product offerings, and purchasing experience to specific customer segments in order to maximize revenues and profits [15]. CLV calculations allow firms to build a picture of which types of customers are valuable, or under which circumstances they become valuable. This can drastically improve the value of money spent on customer acquisition and marketing. This paper provides a method for using CLV to predict which product categories are valuable, which is a particularly difficult estimation in new categories with sparse data.

This paper proposes a model for calculating CLV that, unlike other models, does not lose the granularity in the data.
Whereas many methods used today take all of a given customer's transactions into account when calculating their CLV, the proposed model subdivides each customer's transactions by product category and then calculates that customer's CLV separately for each product category (a minimal sketch of this subdivision appears after the problem statement below). For example, rather than calculating the CLV of a customer who has bought both televisions and cameras, the proposed model would calculate a separate CLV for that customer for each product category (the total CLV would then be the sum of those two CLVs). This, however, introduces a problem: since each category will have less data to build a model upon than the original model, some categories' models will be weak and largely overfitted if their data is used in isolation.

To solve this problem without compromising the advantages of tuning models to specific categories, the model incorporates Multitask learning. Multitask learning is a branch of machine learning consisting of several algorithms for sharing knowledge between different tasks. Product categories which have fewer transactions (and therefore inherently weaker models) will be bolstered by information gained from more popular and related product categories. This method leaves popular categories' models relatively unchanged, while greatly strengthening less popular categories' models. In short, the proposed model is designed to address the following problem statement:

Given sample transactions, predict Customer Lifetime Value at the product category level, even if data per category is sparse.
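The subdivision step just described can be illustrated with a minimal pandas sketch; the frame and its column names are hypothetical stand-ins rather than the dataset used later in this paper.

```python
import pandas as pd

# Hypothetical transaction log; the column names are illustrative only.
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "category": ["tv", "camera", "tv", "tv", "camera"],
    "date": pd.to_datetime(["2012-01-05", "2012-02-11", "2012-01-20",
                            "2012-03-02", "2012-04-17"]),
    "revenue": [899.0, 450.0, 1199.0, 249.0, 520.0],
})

# One sub-log per product category; each sub-log feeds its own model.
per_category = {cat: sub for cat, sub in transactions.groupby("category")}

# A customer's total CLV is then the sum of their per-category estimates.
```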

The model will take as input a predefined transaction log, and it will output a CLV for each customer in each product category they have made purchases in. Additionally, useful graphics and other visualizations are produced from the output to make the data more accessible and useful in deriving key insights.

This document examines related efforts at solving this problem, then outlines the schema and software implementation of the proposed system, and finally discusses the results of the system on sample data and looks towards potential improvements. First, it reviews the current research in calculating CLV so as to show how this approach is novel and useful. Second, it describes the proposed model and its corresponding step-by-step implementation of an algorithm to predict CLV using Multitask learning. Then, it expounds on the evaluation criteria used to determine the success of this project and how well the model measures up against them. Finally, suggestions for future improvements or expansions are put forward and explained.

2. RELATED WORK

Several models have been used over the years to quantify CLV. They have ranged from simplistic to complex, and they have incorporated ideas from diverse fields such as mathematics, econometrics, and computer science. The Multitask learning model makes use of some of these models, and aims to improve on all of them when data is sparse in some areas but rich in others.

2.1 RFM Model

One of the first attempts to gauge customer value was a system called the RFM model, standing for Recency, Frequency, and Monetary Value. These models were originally intended to determine how successful a direct marketing campaign (e.g. postcards) would be for each customer, individually. They incorporated only three variables: the time between now and the customer's last purchase, the average time between a customer's purchases, and how much money the customer spends on any given purchase (Wei et al. [19]). The most typical application of this model would break customers up by quintiles on each of the three factors, yielding 125 groups of customers. Based on some formula, the cost of marketing to each cell and the predicted value from each cell would be calculated, and a breakeven line would be formed. The company would then deploy its marketing campaign to target only those cells it considers could be profitable.

This technique can be fairly readily applied to determine CLV. It is simple, intuitive, and does not require a large amount of complicated data. Essentially, a company wishing to determine the CLV of its customers would assume a constant direct marketing campaign for the remainder of each customer's lifetime, and then determine the profitability of each customer (Cheng et al. [3]). However, this technique has a number of drawbacks. The concept of a constant marketing campaign is impractical: the campaign itself would be incredibly expensive, and customers would eventually grow immune to it. Looking only at transaction histories, without accounting for demographic factors, could lead to major oversights. And unless the formula is manually tweaked to backtest better, there is no impetus for the model to learn from past successes and failures.

The proposed approach improves on the basic RFM model in a variety of ways. Most prominently, the technique incorporates learning, thereby increasing the strength of the model over time. It also accounts for sequential data, rather than selecting a fixed time period and looking only at that bucket of data. Finally, as has been shown elsewhere (Gupta et al. [8]), models which incorporate more than these three factors do a better job of predicting CLV than RFM models. Although RFM was not developed explicitly to calculate CLV, it is an important benchmark to compare against.
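For reference, a minimal sketch of the classic RFM quintile scoring described above, assuming a simple transaction frame with customer_id, date, and revenue columns (a textbook rendering, not this paper's implementation):

```python
import pandas as pd

def rfm_cells(transactions: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
    """Score customers 1-5 on recency, frequency, and monetary value,
    yielding the 5 x 5 x 5 = 125 cells described above."""
    per_customer = transactions.groupby("customer_id").agg(
        recency=("date", lambda d: (now - d.max()).days),
        frequency=("date", "count"),
        monetary=("revenue", "mean"),
    )
    # Rank first so qcut always finds five non-empty bins; recency is
    # reversed because a more recent purchase should score higher.
    quintile = lambda s, labels: pd.qcut(s.rank(method="first"), 5, labels=labels)
    per_customer["R"] = quintile(per_customer["recency"], [5, 4, 3, 2, 1])
    per_customer["F"] = quintile(per_customer["frequency"], [1, 2, 3, 4, 5])
    per_customer["M"] = quintile(per_customer["monetary"], [1, 2, 3, 4, 5])
    per_customer["cell"] = (per_customer["R"].astype(str)
                            + per_customer["F"].astype(str)
                            + per_customer["M"].astype(str))
    return per_customer
```

A breakeven rule over these cells then determines which of them receive the campaign.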
2.2 Econometric Models

Some models seek to take more covariates into account than probability models, which typically use only the recency, frequency, and monetary value of transactions to estimate the different elements of CLV. An example is the use of proportional hazard models to estimate how long customers will remain with the firm. The general form of these equations is

λ(t, X) = λ₀(t) exp(βX)

where λ is the hazard function, the probability that the customer will leave the firm at time t given that they have remained with the firm until time t, and X is a set of covariates which may depend on time. λ₀ is the base hazard rate; typical examples include the exponential model (in which case it is memoryless) or the Weibull model (to capture time dependence). See Cox [4] for the original paper on proportional hazard models, and Knott et al. [11] for an example of use.

Once this hazard rate has been established, the survivor function is calculated as

S(t) = P(T > t) = exp(−∫₀ᵗ λ(u) du)

where T is the actual time at which the customer leaves the firm. For the exponential distribution, the hazard function is constant, which makes estimating the survival function relatively simple. There is also the following:

P(T ≤ t) = 1 − S(t)

If the times at which customers leave the firm are known, the likelihood of customers leaving can be estimated. Hence, to predict the function parameters β, the total likelihood can be maximized (equivalently, the negative log likelihood can be minimized).

While this use of covariates takes more into consideration than probability models do, it treats aspects of CLV independently (for example, modeling customer retention and the amount that customers spend separately). The proposed approach treats CLV as the final goal, allowing the shared basis, a set of basis vectors that defines each of the tasks, to act across these components. Furthermore, these models treat all customers equally. The proposed approach extracts more from the given data by dividing the customers into segments, while still allowing each segment to learn from the others via the shared basis.
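Models of this form are implemented in open-source survival libraries; the following is a minimal sketch using the lifelines package on an entirely hypothetical churn dataset (all values invented for illustration):

```python
import pandas as pd
from lifelines import CoxPHFitter  # open-source survival-analysis package

# Hypothetical churn data: tenure in months, whether churn was observed
# (i.e., uncensored), and a single covariate.
df = pd.DataFrame({
    "tenure":    [3, 14, 7, 22, 9, 30, 5, 18, 12, 26],
    "churned":   [1, 0, 1, 0, 1, 0, 1, 1, 0, 1],
    "avg_order": [20.0, 85.0, 35.0, 40.0, 15.0, 95.0, 25.0, 60.0, 55.0, 70.0],
})

# Fits lambda(t, X) = lambda_0(t) exp(beta * X) by maximizing the
# partial likelihood, as in the proportional hazard model above.
cph = CoxPHFitter()
cph.fit(df, duration_col="tenure", event_col="churned")
print(cph.params_)  # estimated beta for avg_order

# Survivor curve S(t) = exp(-integral of lambda) for the first customer:
print(cph.predict_survival_function(df.iloc[[0]]))
```

Note that the proposed approach does not model retention separately like this; dropout is instead folded into the Pareto/NBD model of Section 2.5.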
2.3 Persistence Model

Persistence models focus on modeling the behavior of components such as acquisition, retention, and cross-selling.
These models can be used to study how a change in one variable (such as a customer acquisition campaign) impacts other system variables over time. This approach can be used to study the impact of advertising, discounting, and product quality on customer equity. It projects the long-run or equilibrium behavior of a variable or group of variables of interest.

For example, a firm's acquisition campaign may be successful and bring in new customers (consumer response). That success may prompt the firm to invest in additional campaigns (performance feedback) and possibly finance these campaigns by diverting funds from other parts of its marketing mix (decision rules). At the same time, the firm's competitors, fearful of a decline in market share, may counter with their own acquisition campaigns (competitive reaction). Depending on the relative strength of these influence mechanisms, a long-run outcome will emerge that may or may not be favorable to the initiating firm. Dynamic systems can be developed to study the long-run impact of a variable and its relationship with other variables.

Persistence models are well suited to CLV because it is a long-term performance metric. Such models help quantify the relative importance of the various influence mechanisms in long-term customer value development, including customer selection, method of acquisition, word-of-mouth generation, and competitive reaction. However, this approach cannot perform well when there is little data to work with, as it relies on consumer behaviors that only reveal themselves over longer periods of time.

2.4 Diffusion Model

Unlike the other approaches, the prime focus of the diffusion model is on Customer Equity (CE). The motivation for this more aggregated approach is that CLV often restricts its focus to customer selection, customer segmentation, campaign management, and customer targeting. A broader approach that integrates the CLV of both current and future customers can produce a strategic metric useful for higher-level executives.

In essence, CE is the sum of the CLV of future and current customers. There are two key approaches to measuring CE. The first is to build probability models of acquiring a given consumer using disaggregate data (Thomas 2001 [17]; Thomas, Blattberg, and Fox 2004 [18]); this is primarily the methodology used by the approaches described so far. The alternative is to use aggregate data and diffusion/growth models to predict the number of customers likely to be acquired in the future. Gupta, Lehmann, and Stuart [9] showed that CE provided a good estimate for 60% of the five companies investigated. The exceptions included eBay and Amazon, an indication of the model's weaknesses when applied to larger online retail firms. A particularly interesting insight of this approach is the relative importance of marketing and financial instruments: a 1% change in retention was found to change CE by 5%, in stark contrast with a similar change in discount rate producing only a 0.9% change in CE. For example, a $45 million expenditure by Puffs facial tissues to increase its ad awareness rating by 0.3 points would compound to eventually produce $58.1 million in CE.

Diffusion models can also be used to assess the value of a lost customer (Hogan, Lemon, and Libai [10]). The key premise of this proposition is that a firm that loses a consumer loses not only that customer's CLV, but also the word-of-mouth effect generated by that consumer.
This indirect effect can be compounded up to four-fold, as investigated in the online banking industry. Incorporating lost consumers in the model is thus of key importance in predicting lifelong customer value, due to the butterfly effect that churn triggers on the strength of the customer base.

2.5 Probability Models

Researching existing techniques for calculating CLV shows that the Pareto/NBD (Schmittlein, Morrison, and Colombo [16]) is one of the most widely used methods. The proposed model applies the Pareto/NBD to each separate product category. In Fader et al. [7], a paper that implements the Pareto/NBD model, CLV is defined as follows:

CLV = margin × transaction revenue × DET

Margin: how much profit the firm makes per sale to this customer, a value between 0 and 1. This data is typically unavailable to researchers (for big companies, financial statements may be useful), but it does not really affect trends in the data.

Transaction revenue: how much the customer is expected to spend during each transaction. Multiplied together with margin, this gives the monetary value of each transaction to the firm.

DET (Discounted Expected Transactions): the total number of transactions a customer is expected to make in her lifetime. Transactions that are further in the future are discounted in order to take the time value of money into account.

For model-fitting purposes, it may not be realistic to calculate lifetime transactions; this is a difficult number to test against, since customers may stay with a firm for many years. A cleaner approach for model fitting is to choose a certain time horizon and estimate the number of transactions a customer will make in that period; this can then be compared to actual data for training. Similarly, if margin information is unavailable, it is better to leave it out of calculations while training models. Therefore, a new metric, "Future Customer Spend" (FCS), is defined to be tested against. Note that once the models have been estimated, their parameters can be used to build full CLV estimations again.

FCS(t) = trans(t) × transaction revenue

where trans(t) is the expected number of transactions in the next t periods.

Two factors need to be estimated to calculate FCS: the number of future transactions a customer is likely to make, and the expected revenue per transaction. Following Fader et al. [7], these factors are assumed to be independent and are modeled separately, using the Pareto/NBD and Gamma/Gamma models.
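Both component models have open-source implementations in the lifetimes Python package, so a sketch of computing FCS(t) might look as follows. The raw transactions frame is assumed to resemble the illustrative log from Section 1, and the penalizer values are arbitrary:

```python
from lifetimes import ParetoNBDFitter, GammaGammaFitter
from lifetimes.utils import summary_data_from_transaction_data

# `transactions` is assumed to be a raw log with customer_id, date, and
# revenue columns, as in the earlier illustrative sketch.
summary = summary_data_from_transaction_data(
    transactions, "customer_id", "date",
    monetary_value_col="revenue", observation_period_end="2012-10-31",
)

# Pareto/NBD: expected number of future transactions per customer.
pnbd = ParetoNBDFitter(penalizer_coef=0.01)
pnbd.fit(summary["frequency"], summary["recency"], summary["T"])
t = 60  # forecast horizon in days (the paper uses two-month windows)
exp_trans = pnbd.conditional_expected_number_of_purchases_up_to_time(
    t, summary["frequency"], summary["recency"], summary["T"])

# Gamma/Gamma: expected revenue per transaction (fit on repeat buyers).
repeat = summary[summary["frequency"] > 0]
gg = GammaGammaFitter(penalizer_coef=0.01)
gg.fit(repeat["frequency"], repeat["monetary_value"])
exp_rev = gg.conditional_expected_average_profit(
    summary["frequency"], summary["monetary_value"])

# FCS(t) = trans(t) * transaction revenue, per customer.
fcs = exp_trans * exp_rev
```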
2.5.1 Pareto/NBD

The Pareto/NBD model is used to predict the future transactions a customer will make, and relies on the following assumptions:

- Customers are "alive" for a certain amount of time, after which they are permanently inactive.
- While alive, customer i's purchasing behavior follows a Poisson process with transaction rate λi.
- Heterogeneity in transaction rates across customers follows a gamma distribution with parameters r and α:

  g(λi | r, α) = α^r λi^(r−1) e^(−αλi) / Γ(r)

- The time that a customer i remains active follows an exponential distribution with dropout rate µi.
- Heterogeneity in dropout rates across customers follows a gamma distribution with parameters s and β:

  g(µi | s, β) = β^s µi^(s−1) e^(−βµi) / Γ(s)

- λi and µi vary independently across customers.

This technique uses only the length of time the customer was observed for, the time of the customer's last transaction, and how many transactions the customer made. All other information, such as demographic information or the times of earlier transactions, is unused. Note that a model for any customer's number of transactions is obtained with four parameters which are the same for the entire population: r, α, s, and β. For a detailed note on the implementation and derivation of the model, see Fader and Hardie [6].
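To make these assumptions concrete, the following simulates the Pareto/NBD generative story with invented (not fitted) population parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population-level parameters shared by all customers; values invented.
r, alpha = 0.55, 10.0   # gamma mixing distribution for rates lambda_i
s, beta = 0.60, 12.0    # gamma mixing distribution for dropout rates mu_i
T = 26.0                # observation window (e.g., weeks)

for i in range(5):
    lam = rng.gamma(shape=r, scale=1.0 / alpha)   # transaction rate lambda_i
    mu = rng.gamma(shape=s, scale=1.0 / beta)     # dropout rate mu_i
    lifetime = min(rng.exponential(1.0 / mu), T)  # active period in window
    # While alive, purchases arrive as a Poisson process with rate lambda_i.
    n_purchases = rng.poisson(lam * lifetime)
    print(f"customer {i}: lambda={lam:.3f}, mu={mu:.3f}, "
          f"alive for {lifetime:.1f}, purchases={n_purchases}")
```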
2.5.2 Gamma/Gamma

The Gamma/Gamma model is used to predict how much a customer will spend on each transaction. Its assumptions are:

- The amounts customer i spends on transactions are i.i.d. gamma variables with shape parameter p and scale parameter vi.
- p is constant across customers, and vi is distributed across customers according to a gamma distribution with shape parameter q and scale parameter γ.

Note that a model for any customer's spending per transaction is obtained with three parameters which are the same for the entire population: p, q, and γ. For more details on implementation and derivation, see Fader et al. [7].

2.6 Multitask Learning

Multitask learning includes a broad set of tools and techniques for learning several related tasks through a shared representation. One of the strengths of this approach is the ability of the generated model to leverage commonalities in the data: by learning several related tasks in parallel, each task's model can be improved.

Multitask learning has been shown to work particularly well in situations of sparse data [12]. In such situations, each task can be represented as a linear combination of underlying basis tasks with sparse coefficients. The degree of sharing can be correlated with the amount of inherent overlap in the tasks' sparsity patterns. This makes it possible to capture granularity in the data by leveraging the similarity in basis representation patterns.

3. SYSTEM MODEL

Current industry standards follow a relatively simple approach to CLV estimation: most firms take the product of past transaction values and the length of the future forecast period to produce a numerical value of CLV. More refined models use the Pareto/NBD and the Gamma/Gamma to predict future CLV, as shown in the Related Work section.

This section first provides a brief overview of the proposed model. Second, the existing standard probability models are examined and their shortcomings noted. Third, the proposed model is expounded upon in more detail. Finally, some key applications of the model are listed.

The proposed system extends current naive and advanced industry standards by incorporating multitask learning. When the solutions to certain tasks are related through some underlying structure, an agent may use multitask learning to share knowledge between tasks. In the proposed model, first, the transactions are segmented by product category.
Second, an individual model is produced for each of the product categories, outputting parameters for the Pareto/NBD and the Gamma/Gamma distributions. Third, sparse coding is performed on the shared basis, which includes all parameters for each of the product categories, further refining the parameters of the individual categories. Finally, the refined parameters obtained from the multitask approach are used to calculate the expected value of CLV over a certain time span.

Figure 1 shows how the model splits products by product category and applies a Pareto/NBD and Gamma/Gamma model to each category individually. In Figure 1, samples drawn from product categories 1, 2, and 3 are sufficient to fit the statistical model individually to each of those categories. For simplicity of the figure, α and β are used to represent model parameters, rather than the full seven parameters used in reality. For each fitted model, parameter vectors α and β describe the product category: α1 and β1 are the model parameters for category 1, α2 and β2 are the parameters for category 2, and α3 and β3 are the parameters for category 3.

Now consider a new product category, category 4, with sparse data. Data is considered sparse if there are fewer than 150 training data points from which to produce a well-fitted model. Here the model encounters the statistical problem that a small sample may not capture the full range of the population. Any model estimated for such a category would be subject to overfitting, primarily due to the lack of data points for the generation of an accurate model. This can have drastically negative effects, including misallocation of resources and ineffective strategy and planning. The problem may arise either because product category 4 is new, or because inherent changes in the market structure are yielding less data.

At this point, the single-model approach would be to simply fit one model across all the product categories and then apply that model's parameters to describe category 4 (along with all other categories). The separate-model approach aims to avoid discarding granularity; the implementation of Multitask learning is used to solve this problem instead (as illustrated in Figures 2 and 3).
Figure 1: Fitting per-category models. Category 4 is a new product category in which a limited number of transactions have been observed.

Figure 2: The proposed model lies somewhere between single-category models and a large shared model.

Figure 3: Shared knowledge is obtained by fitting a model to the parameters of the individual models. The new shared model is then used to supplement the parameters of the sparse models.
Multitask learning enables knowledge acquisition for related tasks through a shared representation. This form of learning provides an opportunity for categories with insufficient data to learn from knowledge previously obtained by models that have sufficient data to produce a good fit. It is crucial to note that the degree of sharing is not absolute, but rather catered to each dataset. This produces a generic model for CLV estimation, rather than a representation carefully catered to the particular data being used. After the individual models are estimated using the probability models, a shared basis is produced. The shared basis is a set of linearly independent vectors whose linear combinations can represent the entire vector set. This shared basis includes, for each product category, a row of the seven estimated parameters: four from the Pareto/NBD and three from the Gamma/Gamma.

In the paradigm of multitask learning, several related CLV estimation tasks are learned together by sharing knowledge between the tasks. Importantly, tasks in the shared basis can selectively share information with other tasks, resulting in a more independent and dynamic system. Consider the representation of each task's parameter vector as a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are generally sparse for the tasks being estimated. Moreover, there is an overlap in the sparsity patterns between different tasks, which can determine the degree of sharing in the model. Since each task is represented by a vector of only seven parameters, the model exists in a low-dimensional subspace, allowing the tasks to overlap with each other in multiple bases. This form of machine learning is known as sparse coding, a technique used to represent an input signal by a small number of basis functions [13]. Sparse coding outputs a basis set and a weight vector which, when combined, reconstruct the signal while integrating shared knowledge from other tasks. The product of the basis matrix and the weight vector hence produces a new matrix of refined parameters for each of the product categories.

Consider a finite but large set of transaction data in the training set X = [x1, x2, x3, ..., xn] ∈ R^(m×n). Here, n is the number of product categories being modelled, and m is the number of parameters described above (seven). Let D ∈ R^(m×k) be the dictionary representation, wherein each column represents a basis vector. Hence, X admits a sparse approximation over the dictionary's k atoms. Let L(x, D) be a loss function which must be minimized for a well-fitted D. The number of product categories, n, is larger than the seven model parameters, as is typical for applications of this type of learning. Moreover, the degree of sharing, k, is at most equal to n, since sparse coding is implemented rather than an overcomplete dictionary. This degree of sharing is a control knob that can be moved along the spectrum to find the optimal solution. The loss function L(x, D) is the optimal value of the Lasso problem:

L(x, D) = min_{α ∈ R^k} (1/2) ‖x − Dα‖₂² + λ ‖α‖₁

where λ is the regularization parameter.
A fundamental characteristic of this technique is that the ℓ1 penalty results in a sparse solution for α in the basis pursuit. Sparse coding hence yields a representation over shared basis vectors, which further refines the individual parameters.

One of the challenges of sparse coding is determining the optimal degree of sharing, which may lie anywhere on the spectrum between completely individualized models and a single model with all data shared. To determine the best degree of sharing, the model uses an out-of-sample testing procedure that compares the CLV estimates produced by the shared representation with the actual CLV; the k producing the model with the least percentage error is considered optimal. The development set is thus leveraged to ensure that an overfitted model is not chosen. The refined parameters are then used to estimate the conditional expected value of the amount a customer will spend, in order to determine the CLV per category for each customer. For any customer, the aggregate of the CLV over all product categories is the projected future CLV.

A robust implementation of the system is provided, and experiments are run to compare the system with existing models. This enables testing of the validity and reliability of the proposed model. Further details are provided in Section 4.
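Scikit-learn's dictionary learning minimizes this same Lasso-penalized objective, so the sparse-coding step and the search over the degree of sharing k can be sketched as follows. The parameter matrix here is random placeholder data, and the selection criterion is a stand-in for the paper's out-of-sample CLV percentage error on the development set:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# One row per product category, one column per model parameter
# (4 Pareto/NBD + 3 Gamma/Gamma); placeholder values stand in for the
# per-category estimates.
n_categories, n_params = 20, 7
X = np.abs(np.random.default_rng(1).normal(size=(n_categories, n_params)))

def dev_set_percentage_error(refined: np.ndarray) -> float:
    """Stand-in criterion. In the paper, the refined parameters are used
    to forecast CLV on the development window and compared with actuals."""
    return float(np.mean(np.abs(refined - X)))

best_k, best_err = None, np.inf
for k in range(1, n_categories + 1):   # degree of sharing, k <= n
    dl = DictionaryLearning(n_components=k, alpha=0.1,
                            transform_algorithm="lasso_lars",
                            random_state=0, max_iter=100)
    codes = dl.fit_transform(X)        # sparse weights (the alphas)
    refined = codes @ dl.components_   # D * alpha: refined parameter matrix
    err = dev_set_percentage_error(refined)
    if err < best_err:
        best_k, best_err = k, err
```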

Figure 4: System implementation pipeline.

4. SYSTEM IMPLEMENTATION

In order to compare the proposed model to existing models, two different models are generated from the available data: one uses the existing single-model industry approach, and the second applies the proposed solution (the shared-knowledge model). To evaluate the strength of the proposed model, it was tested on 82,000 anonymized rows of transactions of a large online retailer, Amazon, from January 2012 to December 2012. The data (see Appendix A) contains rows of transaction data with several variables, such as time of transaction, the relevant product category, item purchased, customer ID, zip code, average income level, etc. Since the data is anonymized, each customer is identified for CLV estimation by their unique customer ID. The data was obtained from ComScore, through the Wharton Research Data Services (WRDS).

Before starting, the transactions are divided into training, development, and testing sets, with the training set preceding the development and testing sets chronologically. A two-month span is used for each of the sets. As described in Section 3, the metric used to test results is Future Customer Spend (FCS). The training period is used to forecast how much customers will spend in the testing period. The development set is used to determine the degree of sharing that is optimal for the given dataset; this is then verified by observing actual spending behavior in the test period.

To measure the effectiveness of building separate models for each category, the following steps are conducted:

i. The Pareto/NBD and Gamma/Gamma distributions are used to estimate CLV per customer, using all transactions observed in the training period for each category;

ii. These parameters are then fed into a matrix containing the 7 parameters defining all the product categories;

iii. Sparse coding with 100 iterations is performed on the matrix, with an increasing degree of sharing;

iv. The matrix containing the refined parameters obtained for each degree of sharing is then used to estimate the conditional expected value of the CLV on the development set. This process
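The chronological split described above might be prepared as follows; the exact window boundaries are assumptions, since the text specifies only that each set spans two months of 2012:

```python
import pandas as pd

def chronological_splits(transactions: pd.DataFrame):
    """Two-month training, development, and testing windows, in time order."""
    t = transactions.sort_values("date")
    train = t[(t["date"] >= "2012-01-01") & (t["date"] < "2012-03-01")]
    dev   = t[(t["date"] >= "2012-03-01") & (t["date"] < "2012-05-01")]
    test  = t[(t["date"] >= "2012-05-01") & (t["date"] < "2012-07-01")]
    return train, dev, test

# FCS forecasts fit on `train` are scored against actual spend, e.g.:
#   actual = test.groupby("customer_id")["revenue"].sum()
#   pct_error = (forecast - actual).abs() / actual
```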
