Intention-based Long-Term Human Motion Anticipation


Julian Tanke*, Chintan Zaveri*, Juergen Gall
University of Bonn
{tanke, gall}@iai.uni-bonn.de
* equal contribution

Abstract

Recently, a few works have been proposed to model the uncertainty of future human motion. These works do not forecast a single sequence but multiple sequences for the same observation. While these works focused on increasing diversity, this work focuses on keeping the quality of the forecast sequences high even for very long time horizons of up to 30 seconds. To achieve this goal, we propose to forecast the intention of the person ahead of time. This has the advantage that the generated human motion remains goal-oriented and that the motion transitions between two actions are smooth and highly realistic. We furthermore propose a new quality score for evaluation that correlates better with human perception than other metrics. The results and a user study show that our approach forecasts multiple sequences that are more plausible compared to the state-of-the-art.

1. Introduction

Anticipating human motion is highly relevant for many interactive activities such as sports, manufacturing, or navigation [25], and significant progress has been made in forecasting human motion [8, 9, 10, 11, 15, 17, 23, 26, 35]. Most progress has been made in anticipating motion over a short time horizon of around half a second. However, these methods fail when anticipating longer time horizons as they either produce unrealistic poses or the motion freezes. Another issue that arises as the time horizon grows is that more than one future sequence is plausible for a single observed sequence of human motion, as shown in Figure 2. Going from a short time horizon of less than one second to a larger time horizon of a few seconds therefore imposes the following challenges: (a) How can we model the uncertainty of the future? (b) How can we ensure that the motion remains plausible? (c) How can we measure the quality of methods that generate more than one sequence?

Handling the uncertainty of the future has so far been addressed only in very few recent works [4, 28, 37] for human motion anticipation. These approaches are able to forecast diverse sequences from the same observation, but the quality of the sequences decreases for time horizons beyond 1 second. In this work, we also propose a network that generates multiple sequences as shown in Figure 2, but our goal is to generate more plausible sequences for time horizons of multiple seconds. To achieve this goal, we not only model the human motion but also the intention of the person, as illustrated in Figure 1. In fact, human motion anticipation depends on two factors, namely the past motion and the intention. The latter, which is ignored by existing works, is very important for longer sequences since a motion without a goal is perceived as random and unrealistic. We therefore model the intention as discrete actions and propose to forecast the intention as well as the human motion. The key aspect is that our model forecasts the intention ahead of time and that the forecast human motion is conditioned on the past motion and on the forecast intention, as shown in Figure 1.

It remains an open issue, however, how methods that generate multiple sequences are best compared. Recent works suggest evaluating both the quality of the generated motion and the sample diversity. While diversity is commonly measured by the average pairwise distance between multiple generated predictions [4, 37], measuring quality is still an open problem. In [37], for instance, multiple sequences are forecast but only the error of the sequence with the lowest error is reported. Such measures are misleading since they evaluate only one forecast sequence while the other sequences can be implausible.
In fact, we show in the supplementary material that this measure can be easily fooled by a simple but unrealistic baseline approach, yielding competitive results on clearly unrealistic motion. In [4], pre-trained skeleton-based action classifiers are used to compute the inception score and a quality score over all generated sequences. While the inception score is an indicator of plausibility, it is highly dependent on the model, and the authors did not make the models publicly available, making an evaluation very difficult. Normalized Power Spectrum Similarity (NPSS) [10] evaluates sequences in the power spectrum to account for frequency shifts that cannot be captured by MSE. However, NPSS is uni-modal as it compares the motion to a single ground-truth sequence. We therefore propose a new complementary similarity score that measures the normalized directional motion similarity between motion snippets of forecast and real motions that have the same semantic meaning. The measure has the advantage that it takes the multi-modality of human motion into account and that it correlates much better with human perception than NPSS.

Figure 1: Intention-based human motion anticipation. Given a human motion input sequence (red-blue skeletons), our method forecasts the intention of the person ahead of time (top row) and the human motion (green-yellow skeletons) conditioned on the previous motion and the future intention. This allows not only long-term forecasting but also realistic transitions between different actions. For example, the blue and orange boxes show how the motion already prepares for the next action, leaning down or standing, respectively.

Figure 2: Our approach forecasts multiple sequences of plausible future human motion for long time horizons. Each row shows a different prediction of three seconds made by our model, given the same input sequence (Discussion from Human3.6M [14]). The red-blue skeletons represent the ground-truth input while the green-yellow skeletons are model predictions. During the first second, the model generates fairly consistent human poses, but it starts to generate diverse yet realistic human motion after 1 second. The qualitative results are best viewed in the supplementary video.

Our contribution is therefore two-fold:

- We propose a novel quality score for long-term human motion anticipation that measures the plausibility of multiple generated sequences and that correlates better with human perception than other metrics.
- We propose a novel approach for human motion forecasting that forecasts the intention of a person ahead of time and that is capable of generating multiple plausible future sequences for long time horizons.

2. Related Work

Human Motion Anticipation: In recent years, deep neural networks [12, 6, 9] have been used to synthesize and anticipate human motion. Auto-regressive methods [9, 23, 38] model first-order motion derivatives using the sequence-to-sequence model [32] popularized in machine translation. QuaterNet [27] replaces the exponential map representation with quaternions, which do not suffer from common 3D rotation problems such as gimbal lock. Furthermore, the authors show that the model can generate cyclic motion for very long time horizons when frame-wise user control is provided, similar to [16, 18, 19, 31]. A similar approach is utilized in Hierarchical Motion Recurrent networks [20] and Structured Prediction [2], where novel RNN structures are proposed that better represent skeletal structures. Graph-convolutional neural networks [22] can be utilized to learn human motion in trajectory space, using the Discrete Cosine Transform, rather than in pose space. Highly competitive results are achieved by recent attention-based models [21]. The idea of utilizing discrete representations for human poses was first proposed in [33], where a conditional restricted Boltzmann machine (RBM) is used as a generative model for synthesizing or filling in missing pose data. While RBMs and Deep Belief Networks learn a binary representation of the data, they are nowadays outperformed by approaches that learn continuous hidden states, such as RNNs. Recently, an adversarial generative grammar model was proposed in [28] for future prediction, where stochastic production rules are learned jointly with their latent non-terminal representations. By selecting various production rules during inference, many different forecast outputs can be generated.
However, our experiments show that the model does not forecast long-term natural human motion. Recently, models based on adversarial training have gained attention: convolutional sequence-to-sequence models [17] utilize a convolutional encoder-decoder structure with an adversarial loss to prevent overfitting. The adversarial geometry-aware encoder-decoder [11] utilizes two adversarial losses: one to tackle motion discontinuity, a common problem in previous models, and one to ensure that realistic motion is generated. On top of that, the geodesic distance instead of the Euclidean distance is used as reconstruction loss. MotionGAN [29] frames human motion anticipation as an inpainting problem. Wang et al. [35] combine an adversarial loss with reinforcement learning to forecast realistic poses. Early works on multi-modal human motion anticipation utilize stochastic conditional variational autoencoders [30, 7]. Recently, novel sampling methods [4, 37] for conditional variational autoencoders were proposed for multi-modal human motion anticipation. While Mix-and-Match [4] randomly perturbs the hidden state to increase stochasticity, DLow [37] maps a random variable to a latent code. It employs a two-stage approach by first learning a conditional variational autoencoder and then the mapping.

Human Motion Evaluation: Evaluating complex multivariate time series with a high degree of stochasticity, such as human motion, remains a challenging research problem. The simplest approaches calculate the Euclidean distance [12, 15, 23] to a target sequence independently for each time step, which works well for very short time horizons (~0.5s). However, frame-wise distances completely ignore motion dynamics, and forecasting only the last pose already yields competitive results [23]. To address these challenges, frequency-based metrics have been proposed. Frequency-based methods such as NPSS [10] incorporate motion information, but they accumulate it over the entire sequence. On top of that, distances in the frequency domain are difficult to interpret and make it hard to pinpoint when a motion can still be considered realistic. In [4], the inception score [13] is adapted by training a model on skeleton data to evaluate the quality of the generated sequences. Complementarily, a binary classifier is trained for quality assessment. However, both models are not publicly available, making comparisons difficult. While [37] uses the average pairwise distance to measure diversity as in [4], it only evaluates the best generated sequence using the quality metrics from [12, 15, 23].

3. Stochastic Human Motion Anticipation from Intention

In this work, we address the task of forecasting human motion.
This means that we observe 3D human skeletons for $t$ frames, denoted by $x_1^t = (x_1, \dots, x_t) \in \mathbb{R}^{t \times d}$, where $d$ is the feature dimension that represents the human pose. Our goal is to forecast plausible future pose sequences $\hat{x}_{t+1}^T \sim p(x_{t+1}^T \mid x_1^t)$, where $\hat{x}_{t+1}^T = (\hat{x}_{t+1}, \dots, \hat{x}_T) \in \mathbb{R}^{(T-t) \times d}$ and $p(x_{t+1}^T \mid x_1^t)$ is the distribution of all plausible future sequences given the observed human motion.

As illustrated in Figure 2, our approach does not predict a single sequence but aims to learn the distribution $p(x_{t+1}^T \mid x_1^t)$ such that we can generate multiple plausible future sequences

$\hat{X}_{t+1}^T = \{\hat{x}_{t+1}^T : \hat{x}_{t+1}^T \sim p(x_{t+1}^T \mid x_1^t)\}.$   (1)

While we introduce in Section 4 a new quality score that evaluates the plausibility of the set $\hat{X}_{t+1}^T$ and that correlates very well with human perception, we first discuss the novel approach that forecasts (1).

Figure 3: Overview of our method. The blue-red skeletons are the observed human poses while the yellow-green skeletons are forecast future human poses. The network forecasts the human motion at two levels: at the pose level (yellow) and at the intention level (green). During inference, the network forecasts the intention labels ahead in time, which then guide the generation of the future poses. By conditioning the pose decoder $d_p$ in addition on $z$, multiple plausible sequences can be generated for a single sequence of observed human poses.

Although the recent works [4, 28, 37] are able to forecast diverse sequences, the quality of the sequences decreases for time horizons beyond 1 second, as we show in the user study reported in Table 4. This is expected since these methods model human motion but not the intention of the person. The latter, however, is very important for longer sequences since a motion without a goal is perceived as random and unrealistic. We therefore propose an approach that generates multiple future sequences that remain plausible even for longer time horizons of 4 seconds.
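The two-level generation behind this idea can be sketched as follows. This is a minimal, hypothetical sketch of the sampling interface in (1): `forecast_intention` and `decode_poses` are stand-ins for the learned networks (their names and the toy pose update are our assumptions, not the paper's API); diversity comes from a fresh noise sample per draw, and the intention is sampled first so that each pose sequence is conditioned on it.

```python
import random

def forecast_intention(observed, horizon):
    # Stand-in for the intention decoder: one discrete intention label
    # per future frame (8 classes, as used in the paper).
    return [random.randrange(8) for _ in range(horizon)]

def decode_poses(observed, intentions, z):
    # Stand-in for the pose decoder: auto-regressively produces one pose
    # vector per future frame, conditioned on noise z and the intentions.
    pose = list(observed[-1])
    out = []
    for c in intentions:
        pose = [p + 0.01 * z + 0.001 * c for p in pose]
        out.append(list(pose))
    return out

def sample_futures(observed, horizon, num_samples):
    """Draw a set of plausible future sequences (the set in Eq. (1))."""
    futures = []
    for _ in range(num_samples):
        z = random.gauss(0.0, 1.0)                    # noise -> diversity
        c_hat = forecast_intention(observed, horizon) # intention first ...
        futures.append(decode_poses(observed, c_hat, z))  # ... then poses
    return futures

observed = [[0.0] * 4 for _ in range(25)]  # 1s of toy 4-D "poses" at 25Hz
futures = sample_futures(observed, horizon=75, num_samples=5)
```

Sampling the intention before the poses is what keeps each generated sequence goal-oriented over long horizons.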
In order to achieve this goal, our network not only forecasts human poses but also the intention, as shown in Figures 1 and 3. An important aspect of our network is that it forecasts the intention $\hat{c}_{t+1}^T$ ahead in time, which then guides the generated poses

$\hat{x}_{t+1}^T \sim p(x_{t+1}^T \mid x_1^t, \hat{c}_{t+1}^T)$   (2)

and ensures plausible motion transitions when the intention changes. We describe the module of the network that forecasts the intention in Section 3.1 and the module that forecasts the human motion conditioned on the intention in Section 3.2.

3.1. Intention Anticipation

We model the intention by a categorical representation $c_t \in C$, where $C$ is the set of possible intention classes. While we forecast the intention ahead in time as shown in Figure 1, we estimate it for each future frame $\hat{c}_{t+1}^T$. In Section 3.3, we describe how $C$ can be obtained in an unsupervised way.

To anticipate future intent, we use a recurrent encoder-decoder, where the recurrent encoder $e_l$ takes as input a sequence of observed human motion $x_1^t$ and the recurrent decoder $d_l$ forecasts the future intentions $\hat{c}_{t+1}^T$:

$\hat{c}_{t+1}^T = d_l(e_l(x_1^t)).$   (3)

Since the decoder $d_l$ is auto-regressive, we are not constrained to a fixed time horizon, and $T$ can be as large as needed. We represent both $e_l$ and $d_l$ by single-layer GRUs.

For training, we utilize the categorical cross-entropy as loss function:

$L_{sym} = -\frac{1}{T-t} \sum_{\tau=t+1}^{T} \sum_{j=1}^{|C|} c_{\tau j} \log(\hat{c}_{\tau j})$   (4)

where $|C|$ is the total number of discrete intention labels, $c_\tau$ denotes the reference label at time step $\tau$, and $\hat{c}_{\tau j}$ denotes the predicted probability of the $j$-th class at time step $\tau$. We will discuss in Section 3.3 how the reference labels $c_\tau$ are computed for the training set.

In order to generate plausible sequences of future intentions, we furthermore add an adversarial loss:

$L_{sym}^{adv} = \min_{d_l} \max_{D_{label}} \; \mathbb{E}_c\left[\log D_{label}(c)\right] + \mathbb{E}_x\left[\log\left(1 - D_{label}(d_l(x))\right)\right]$   (5)

where $D_{label}$ is a one-hidden-layer feed-forward network.

3.2. Human Motion Anticipation

In order to sample sequences of future human poses from $p(x_{t+1}^T \mid x_1^t, \hat{c}_{t+1}^T)$, we utilize a conditional GAN [24] with a normally distributed noise vector $z \sim N(0, 1)$ as shown in Figure 3. It is conditioned on the past human motion sequence $x_1^t$ and the forecast intent $\hat{c}_{t+1}^T$.

Specifically, we first encode $x_1^t$ into the vector $h_p$ using the recurrent pose encoder $e_p$, i.e., $h_p = e_p(x_1^t)$. We then concatenate $h_p$ and $z$ and auto-regressively generate future poses for $t < \tau \leq T$ using the pose decoder $d_p$:

$(\hat{x}_\tau, h_\tau) = d_p\left(\hat{x}_{\tau-1} \oplus f(\hat{c}_\tau^{\tau-1+\gamma}),\, h_{\tau-1}\right)$   (6)

where $h_t = h_p \oplus z$, $\hat{x}_t = x_t$, and $\oplus$ denotes the concatenation of two vectors. The pose encoder $e_p$ consists of a single-layer GRU while the pose decoder $d_p$ consists of a three-layer GRU.

The pose decoder $d_p$, however, not only depends for each frame $\tau$ on the previously generated pose $\hat{x}_{\tau-1}$ and the previous hidden state $h_{\tau-1}$, but also on $f(\hat{c}_\tau^{\tau-1+\gamma})$, i.e., on the intention which is forecast already $\gamma$ frames ahead.
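The look-ahead term in (6) can be illustrated with a small sketch. The paper specifies $f$ as a temporal convolutional layer with kernel size $\gamma$ whose weights are learned; the uniform averaging below is our simplifying assumption, used only to show how the decoder input at frame $\tau$ mixes the next $\gamma$ forecast intention probability vectors.

```python
def aggregate_intentions(c_hat, tau, gamma):
    """Mix the intention probabilities for frames tau .. tau+gamma-1.

    c_hat: list of per-frame class-probability vectors (forecast intentions).
    The real model uses a learned temporal convolution with kernel size
    gamma; uniform weights here are a stand-in for illustration.
    """
    window = c_hat[tau:tau + gamma]
    num_classes = len(c_hat[0])
    return [sum(p[k] for p in window) / len(window) for k in range(num_classes)]

# Toy forecast intentions: class 0 for 5 frames, then a switch to class 1.
c_hat = [[1.0, 0.0]] * 5 + [[0.0, 1.0]] * 5

# With gamma = 1 the decoder only sees the current frame's intention ...
current = aggregate_intentions(c_hat, 4, 1)
# ... while a look-ahead window already blends in the upcoming action,
# letting the decoder prepare the transition between the two motions.
blended = aggregate_intentions(c_hat, 4, 4)
```

With `gamma = 1` the aggregate stays at the old action until the switch happens, whereas the look-ahead window shifts probability mass toward the next action several frames early.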
If $\gamma = 1$, the decoder does not look ahead and takes only the estimated intention labels up to the current frame into account. We will show in the experiments that this results in less plausible sequences since the decoder cannot prepare the transition between two types of motion when they change, e.g., between leaning down and standing as shown in Figure 1. If we allow the decoder to look ahead, the transitions are smoother and more plausible. We found that $\gamma = 10$ (0.4s) is sufficient to obtain good results. Before adding the probabilities $\hat{c}_\tau^{\tau-1+\gamma}$ to the decoder, we aggregate them by $f$, which is a temporal convolutional layer with kernel size $\gamma$.

During training, we optimize the adversarial loss

$L^{adv} = \min_{d_p} \max_{D_{pose}} \; \mathbb{E}_x\left[\log D_{pose}(x)\right] + \mathbb{E}_{x,c,z}\left[\log\left(1 - D_{pose}(d_p(x, c, z))\right)\right]$   (7)

where $D_{pose}$ is a two-hidden-layer feed-forward network.

While there is usually not a high variability of the plausible human motion directly after the last observed frame, the diversity increases the longer the time horizon gets, as shown in Figure 2. We therefore additionally utilize a reconstruction loss with decreasing impact as $\tau$ increases:

$L^{rec} = \frac{1}{J \cdot (T-t)} \sum_{\tau=t+1}^{T} \sum_{j=1}^{J} \lambda(\tau) \, \| x_{\tau j} - \hat{x}_{\tau j} \|_2$   (8)

where $J$ is the number of joints in the pose, and $x_{\tau j}$ and $\hat{x}_{\tau j}$ denote the ground truth and the model prediction of joint $j$ at time frame $\tau$, respectively. The weight $\lambda(\tau)$ decreases linearly over time with $\lambda(t) = 1$ and $\lambda(t + \tau_{rec}) = 0$. In our experiments, we show that $\tau_{rec} = 15$ (0.6s) is sufficient.

For training the network, we use all four loss terms, where the loss terms $L_{sym}$ (4) and $L_{sym}^{adv}$ (5) supervise the intention forecasting (green) and the loss terms $L^{adv}$ (7) and $L^{rec}$ (8) supervise the human motion forecasting (yellow), as shown in Figure 3.

3.3. Intention Labels

In order to obtain the intention labels $c_t \in C$ for training, we cluster the training sequences. We first cluster the poses of all training sequences using k-means and assign each frame to a cluster.
Since these clusters only consider poses but not motion, we sequentially generate intention labels by detecting cycles of cluster ids in the training sequences. For all datasets, we use 8 intention labels. More details are provided in the supplementary material, where we also evaluate the impact of the size of $C$.

4. Long-term Human Motion Quality Score

As discussed in Section 3, we need for evaluation a score that measures the plausibility of forecast human motion for longer time horizons beyond one second. Furthermore, the measure needs to assess the quality of a set of forecast sequences $\hat{X}_{t+1}^T$ instead of a single sequence.

We therefore propose a novel quality measure that correlates better with human perception. The main idea is that

Table 1: NPSS measure from [10] for long-term motion anticipation.

a plausible sequence of poses should be close to a real sequence. For long time horizons, however, the sequences are too long to compare them directly. Instead, we divide all sequences that have the same semantic meaning but that are not part of the training data into overlapping short motion sequences of fixed length $\kappa$. We call these short motion sequences motion words, and we use $\kappa = 8$ for sequences with 25Hz. This results in a very large motion database $D$.

When evaluating a sequence $\hat{x}_{t+1}^T \in \hat{X}_{t+1}^T$ for observation $x_1^t$, we split the sequence into overlapping motion words as well, where we include the last $\kappa - 1$ observed frames, i.e., $\hat{x}_{t+2-\kappa}^{t+1}, \hat{x}_{t+3-\kappa}^{t+2}, \dots, \hat{x}_{T+1-\kappa}^{T}$. We include the last observed frames such that the transition between observed and forecast motion is also taken into account. This is important since discontinuities between observed and forecast frames are perceived by humans as highly unrealistic. Using the motion words of all sequences of $\hat{X}_{t+1}^T$, we can then compute the plausibility score by measuring the similarity of the motion words of $\hat{X}_{t+1}^T$ with the motion words in $D$:

$f_{sim}\left(\hat{X}_{t+1}^T\right) = \frac{1}{Z} \sum_{\hat{x}_{t+1}^T \in \hat{X}_{t+1}^T} \; \sum_{\tau=t+2-\kappa}^{T+1-\kappa} g\left(\hat{x}_\tau^{\tau+\kappa-1}, D\right),$   (9)

where $Z = (T-t) \cdot |\hat{X}_{t+1}^T|$ is the normalization factor. For computing the plausibility of a motion word, we find the closest motion word in $D$ using nearest neighbor search (NN) and compute the normalized directional motion similarity (NDMS), which is discussed in Section 4.1:

$g\left(\hat{x}_\tau^{\tau+\kappa-1}, D\right) = \mathrm{NDMS}\left(\hat{x}_\tau^{\tau+\kappa-1}, \mathrm{NN}\left(\hat{x}_\tau^{\tau+\kappa-1}, D\right)\right).$   (10)

The function $g$ is 1 when $D$ contains the exact motion word $\hat{x}_\tau^{\tau+\kappa-1}$ and $0 \leq g < 1$ otherwise.
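The per-word score in (10) and the NDMS similarity it relies on (Section 4.1) can be sketched in a few lines. This is a toy implementation under stated assumptions: the database $D$ is a plain list searched exhaustively by Euclidean distance (the paper does not prescribe an index structure), all joints of the toy words are used for the search, and `EPS` plays the role of the small $\epsilon$ from Section 4.1.

```python
import math

EPS = 1e-8  # the small epsilon from Sec. 4.1 that prevents division by zero

def velocities(word):
    """Frame-to-frame 3D velocity of each joint in a motion word."""
    return [[[b[k] - a[k] for k in range(3)] for a, b in zip(f0, f1)]
            for f0, f1 in zip(word, word[1:])]

def ndms(x, y):
    """Normalized Directional Motion Similarity of two motion words."""
    score = 0.0
    vx, vy = velocities(x), velocities(y)
    for fx, fy in zip(vx, vy):
        psi = 0.0
        for jx, jy in zip(fx, fy):
            nx = math.sqrt(sum(v * v for v in jx))
            ny = math.sqrt(sum(v * v for v in jy))
            dot = sum(a * b for a, b in zip(jx, jy))
            direction = dot / (nx * ny + EPS)              # ~1: same direction
            magnitude = min(nx, ny) / (max(nx, ny) + EPS)  # ~1: similar speed
            psi += 0.5 * (direction + magnitude)
        score += psi / len(fx)          # average over joints
    return score / len(vx)              # average over time steps

def nearest(word, database):
    """Euclidean nearest-neighbor motion word in the database D."""
    def dist(w):
        return sum((p - q) ** 2
                   for fw, fx in zip(w, word)
                   for jw, jx in zip(fw, fx)
                   for p, q in zip(jw, jx))
    return min(database, key=dist)

def g(word, database):
    """Plausibility of one motion word (Eq. (10))."""
    return ndms(word, nearest(word, database))

# Toy motion words: 2 joints over 4 frames, moving right vs. moving left.
word_a = [[[float(i), 0.0, 0.0], [float(i), 1.0, 0.0]] for i in range(4)]
word_b = [[[-float(i), 0.0, 0.0], [-float(i), 1.0, 0.0]] for i in range(4)]
```

Summing `g` over the motion words of every forecast sequence and dividing by the factor $Z$ then gives the set-level score of (9). A word identical to a database entry scores close to 1, while a word moving in the opposite direction scores close to 0.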
Using motion words and not single poses ensures that the score evaluates motion quality and consistency and not just pose quality, while the nearest neighbor approach ensures that the multi-modality of human motion is addressed. Due to the normalization factor $Z$, $f_{sim}$ (9) provides a plausibility score between 0 and 1 for a set of forecast human motions.

4.1. Normalized Directional Motion Similarity

In order to compare two motion words $x$ and $y$, we need to define a similarity measure. The Euclidean distance of the poses is insufficient as it favours sequences that remain close to the mean pose. Similarly, using the mean square error of the velocities favours small motion over larger motion, as we discuss in the supplementary material. Instead, we measure the similarity of the motion direction and the ratio of the motion magnitudes.

Specifically, the proposed Normalized Directional Motion Similarity (NDMS) compares two motion words $x, y$ of length $\kappa$ by

$\mathrm{NDMS}(x, y) = \frac{1}{\kappa - 1} \sum_{t=1}^{\kappa-1} \frac{1}{J} \sum_{j=1}^{J} \Psi_t^j(x, y)$   (11)

$\Psi_t^j(x, y) = \frac{1}{2} \left( \frac{\dot{x}_{t,j}^T \dot{y}_{t,j}}{\|\dot{x}_{t,j}\| \cdot \|\dot{y}_{t,j}\| + \epsilon} + \frac{\min\left(\|\dot{x}_{t,j}\|, \|\dot{y}_{t,j}\|\right)}{\max\left(\|\dot{x}_{t,j}\|, \|\dot{y}_{t,j}\|\right) + \epsilon} \right)$   (12)

where $J$ represents the number of joints of the human pose and $\dot{x}_{t,j}$ is the 3D velocity of joint $j$ at time $t$. The first part in (12) yields large values when the $j$-th joint of $x$ and $y$ moves in the same direction, while the second part yields large values when the magnitudes of the velocity vectors are similar. To prevent division by zero, we add a small $\epsilon > 0$.
This way, $\Psi_t^j(x, y)$ produces values close to 1 when the motions of $x$ and $y$ are similar and values close to 0 when they are very dissimilar.

It is important to note that the proposed quality measure (10) has several advantages compared to existing measures: a) the measure penalizes discontinuities in motion; b) it penalizes unrealistic motion at a fine-grained level; c) it can be used to measure the quality of deterministic as well as stochastic approaches; d) it measures the plausibility of all forecast sequences even if they deviate from the observed future sequence; e) it correlates better than other measures with human perception.

4.2. Implementation Details

For the nearest neighbor search, we use the joint positions of wrists, elbows, shoulders, hips, knees, and ankles. For evaluation, we populate $D$ with all relevant test sequences, e.g., all basketball test sequences for evaluating basketball. This way, models have to produce sequences that have the same semantic meaning as the current test set (e.g., walking, eating) and not just produce common motion patterns observed in all sequences.

5. Experiments

We evaluate our method on two standard large-scale motion capture datasets: Human3.6M [14] and CMU Mocap [1]. We first analyze the quality of the forecast sequences using different measures, including a user study for long-term forecasting. In the supplementary material, we

Table 2: NDMS scores on Human3.6M [14] for the actions walking, eating, smoking, discussion, and posing as well as averaged over all 15 actions, for time horizons from 0.4 to 4 seconds (methods: Seq2Seq [23], Trajectory [22], History [21], Grammar [28], Mix&Match [4], and ours). For Mix-and-Match and our approach we report the mean score over 50 samples for a given input sequence.

Table 3: NDMS scores on Human3.6M [14] using the 17-joint 3D representation from DLow [37] (methods: VAE [37], DLow [37], and ours). We report the mean score over 50 samples for a given input sequence.

Table 4: User study for the results on Human3.6M [14] (methods: Seq2Seq [23], Trajectory [22], History [21], Grammar [28], Mix&Match [4], DLow [37], and ours). 28 users were randomly asked to judge 4 seconds of forecast human motion. The users could only choose between realistic and not realistic, where we count realistic as 1 and not realistic as 0. In the table, we report the mean values; sequences rated close to 1 are deemed highly realistic.
(* indicates a sequence length of 3.2 seconds.)

provide additional results for short-term forecasting and an additional analysis of the quality measure.

5.1. Comparison to State-of-the-Art

Long-Term Forecasting: For evaluating long-term human motion forecasting, we first report NPSS as described in [10], utilizing the publicly available implementation. The results for the long-term time scale of 2-4 seconds can be seen in Table 1, where our method slightly outperforms current state-of-the-art methods. Grammar [28] achieves competitive results. We will, however, later show that the sequences generated by Grammar are less realistic than the sequences of other state-of-the-art methods. This indicates that NPSS is not a very reliable measure for the plausibility of the forecast human motion.

Table 5: Average pairwise distance (APD) of recent state-of-the-art methods ([36], [34], [5], Mix&Match [4], DLow [37], and ours) on Human3.6M [14] and CMU [1]. Results for DLow [37] are taken from [3].

We therefore compare the methods using the proposed NDMS metric (see Section 4) with motion word size $\kappa = 8$ on Human3.6M. For each of the 15 actions in Human3.6M, we calculate the scores independently, where we populate the database $D$ with the test sequences of the given action

only, to ensure that the forecast sequences are semantically meaningful and consistent with the action. The results for up to 4 seconds are reported in Tables 2 and 3.

The results in Table 2 show that our approach outperforms stochastic and deterministic methods in terms of quality. On cyclic motion such as walking, [21] produces very strong results over long time periods. However, the motion freezes on non-periodic motion such as discussion and posing. As expected, other approaches, including the deterministic approaches [23, 22], perform fairly well for short sequences of up to 1.2 seconds. For such short time horizons, the results are quite similar to our approach. However, for longer time horizons the benefit of forecasting the intention becomes evident, and our approach outperforms the other methods by a large margin. Since DLow [37] uses a different skeleton representation than the other methods, we also report the NDMS score for the skeleton from [37] in Table 3. On average, our approach outperforms DLow and a variational autoencoder (VAE). It should be noted that both DLow and the VAE suffer from a motion discontinuity between the observed frames and the forecast frames. Their NDMS score is therefore relatively low for the shortest time horizon (0.4s).

To validate our results, we conducted a user

