Emotion Reinforced Visual Storytelling


Nanxing Li† — School of Software, Tsinghua University & Beijing National Research Center for Information Science and Technology (BNRist), Beijing, China; linanxingthu@gmail.com
Bei Liu — Microsoft Research Asia, Beijing, China; bei.liu@microsoft.com
Zhizhong Han — Department of Computer Science, University of Maryland, College Park, Maryland, USA; h312h@umd.edu
Yu-Shen Liu‡ — School of Software, Tsinghua University & Beijing National Research Center for Information Science and Technology (BNRist), Beijing, China; liuyushen@tsinghua.edu.cn
Jianlong Fu — Microsoft Research Asia, Beijing, China; jianf@microsoft.com

Both authors contributed equally to this research.
† This work was conducted when Nanxing Li was a research intern at Microsoft Research.
‡ Corresponding author.

ABSTRACT
Automatic story generation from a sequence of images, i.e., visual storytelling, has attracted extensive attention. The challenges mainly derive from modeling rich visually-inspired human emotions, which results in generating diverse yet realistic stories even from the same sequence of images. Existing works usually adopt sequence-based generative adversarial networks (GAN) by encoding deterministic image content (e.g., concepts, attributes), while neglecting probabilistic inference from an image over the emotion space. In this paper, we take one step further to create human-level stories by modeling image content with emotions, and generating textual paragraphs via emotion reinforced adversarial learning. Firstly, we introduce the concept of emotion into visual storytelling. The emotion feature is a representation of the emotional content of the generated story, which enables our model to capture human emotion. Secondly, stories are generated by a recurrent neural network and further optimized by emotion reinforced adversarial learning with three critics, in which visual relevance, language style, and emotion consistency can be ensured. Our model is able to generate stories based not only on emotions produced by our novel emotion generator, but also on customized emotions. The introduction of emotion brings more variety and realism to visual storytelling. We evaluate the proposed model on the largest visual storytelling dataset (VIST). Superior performance over state-of-the-art methods is shown with extensive experiments.

CCS CONCEPTS
Computing methodologies: Natural language generation; Image representations; Neural networks.

KEYWORDS
Storytelling; Multi-Modal; Emotion; Reinforcement Learning

ACM Reference Format:
Nanxing Li, Bei Liu, Zhizhong Han, Yu-Shen Liu, and Jianlong Fu. 2019. Emotion Reinforced Visual Storytelling. In International Conference on Multimedia Retrieval (ICMR '19), June 10–13, 2019, Ottawa, ON, Canada. ACM, New York, NY, USA, 9 pages.

1 INTRODUCTION
In recent years, we have seen a rapidly growing body of research on bridging the gap between vision and language.
Driven by the availability of large-scale pairs of images and natural language descriptions, and by the successful adoption of recurrent neural networks (RNN), encouraging progress has been made in language generation from images [16, 25, 35]. In this paper, we tackle the problem of generating a story that consists of several sentences from a sequence of images, i.e., visual storytelling. Compared with image captioning and image paragraphing, which take one single image as input, visual storytelling is a more subjective task that requires an overall understanding of the connections among all images and aims to generate sentences with consistent semantics.

Visual storytelling has been explored in many studies in recent years [13, 19, 23, 26, 28, 30]. Most of them focus on modeling the embedding between images and sentences, and they consider the sequence of image contents as the most important factor for story generation. In other words, they only leverage low-level features that come from image content as input for decoding sentences to fit the distribution of words at the language level.

Figure 1: Example of stories annotated by different users for the same four images (a)-(d).
User 1: (a) Today we went to the zoo and were amazed by the wild life. (b) Our little boy was really excited to see all those cute little animals. (c) We also saw the elephants wandering around in the field. (d) We ended the day relaxing with our very own animals at home.
User 2: (a) We visited the zoo last weekend. (b) The kid was a bit upset because the animals wouldn't come out from their caves. (c) The elephants were fun to watch though. (d) We were exhausted after the day and saw our dogs lying lazily on the couch.
For image (b), user 1 reads excitement from the kid while user 2 captures upset. For image (d), the day feels relaxing to user 1 while the family is exhausted in the eyes of user 2. Different interpretations from the two users result in sentences that contain different content for these two images.

However, the process of making stories from a sequence of images, especially for human beings, is rather subjective. Different interpretations of the same image sequence end in different stories. Even for the same person, making a story for the same image sequence at different times will result in different stories if he/she holds different moods. Existing models that only take visual content into consideration produce stories with general sentences, because they tend to model the common features of different sentences for the same image while failing to model the specific features that lead to different sentences.

To simulate the real process of storytelling by humans, we consider emotion as an important factor that differentiates our interpretations of a given image sequence and guides us to different stories. In the example of Figure 1, different emotion interpretations for image (b) and image (d) result in different sentences for these images and also affect the trend of the whole story.

To generate a human-level story from an image sequence, we face the following challenges. First of all, emotion prediction from images is rather difficult, as it usually involves cross-modality inference. Besides, to simulate the human process of making stories from an image sequence, one image may correspond to several emotions. Moreover, the emotion of each image can be influenced by its contextual images in a time sequence. To address the above challenges, we propose a novel emotion reinforced visual story generator, which is the first to introduce emotion as an important feature in visual storytelling. We consider two means of obtaining the emotion features: 1) automatically generated from images, and 2) manually customized by users.

Our novel emotion generator is based on the conditional generative adversarial network (cGAN) [24], so that we are able to generate diverse but realistic emotions in a continuous space for the same image. In order to make the generated emotions coherent within one sequence, we connect the cGANs of the emotion generative model for each image in the image sequence in a recurrent way and sequentially update the generated emotions for each image. Furthermore, our model is capable of generating stories based on customized emotion features. We believe this capability is crucial to visual storytelling, since the same image sequence can be interpreted differently in terms of emotion. The introduction of the emotion feature expands the possibilities of visual storytelling.

By leveraging emotion, stories are finally generated by a recurrent neural network and further optimized by policy gradient.
Twodiscriminators and emotion affirmation are jointly used to providerewards for story generation approximation. The two discriminator networks are designed to guarantee the generated sentences’relevance to the visual content of image sequence and accordancewith story language style. While emotion affirmation is designed tomeasure the consistency of the generated story and input emotion.We conduct experiments on the largest visual storytelling dataset(VIST) [13]. The generated stories are evaluated in both objectiveand subjective ways. We define automatic evaluation metrics interms of visual relevance, emotion relevance and expressiveness.User studies are conducted concerning visual relevance, coherence,expressiveness, and emotiveness. Besides generated emotion, wealso test stories generated by having customized emotions as input.The contribution of this paper can be concluded as follows: We introduce emotion as an important factor to generatestory from image sequence. To the best of our knowledge,this is the first attempt to put forward emotion for visualstory generation, which enables a machine to generate variable stories for the same image sequence. We propose an emotion reinforced visual story generatorincorporating the emotion feature. It consists of an imageencoder and a story generator with image and emotion sequence as input, in which two discriminators and an emotionaffirmation measurement provide rewards for measuring image relevance, story style and emotion consistency. We conduct extensive experiments to demonstrate the effectiveness of our approach compared with several baselinemethods in both objective and subjective evaluation metrics.2RELATED WORKThere are many studies conducted on generating sentence(s) fromimages. We will review them based on two categories: visual description generation and visual storytelling.2.1Visual Description GenerationVisual description generation (image captioning and paragraphing)aims to find or generate sentence(s) to describe one image. It wasfirst considered as a retrieval problem so that to describe a givenimage, the algorithm returns several sentences with similar semantic meaning [9, 14]. The problem of search-based generation is thatit cannot provide accurate sentences for images. Template fillingmethod is thus proposed to overcome this problem [17]. Recently,with the development of deep neural network, integration of convolutional neural network (CNN) and recurrent neural network(RNN) is boosting the sentence generation research for readablehuman-level sentences [1, 7, 16, 20, 25, 31–34]. Later on, generativeadversarial network (GAN) [11] is utilized to improve generatedsentences for different problem settings [6, 35]. Latest work [2, 5]strive to discover other structures for this task. However, as wehave addressed, the target of image description generation is to

2.2 Visual Storytelling
Visual storytelling is a rather new topic but has attracted much attention. Generating several sentences for the purpose of storytelling is more challenging than visual description of one image: relationships between different visual contents need to be considered to form a good story, and the sentences of a story have to be coherent. Similar to visual description research, early works mainly focus on search-based methods to retrieve the most suitable sentence combination for an image sequence [23]. [19] proposes a skip Gated Recurrent Unit to deal with the semantic relation between the image sequence and the generated sentences. Then methods from image captioning, especially the CNN-RNN framework, were extended to story generation [13]. Recently, we have seen works that utilize reinforcement learning and generative networks for better story generation [12, 26, 28]. Though topic is introduced in [12], existing works still lack the subjective perception of humans when making stories, which we first introduce in this paper.

3 APPROACH
In this research, we aim to generate a story for a sequence of images. Existing research on this task ignores the diversity of stories for the same image sequence in the training data, which usually results in generating general and neutral sentences. In order to model the factor that leads to different stories for the same image sequence, we introduce emotion as an important factor. For this reason, our story generator generates stories considering both visual contents and emotions. The framework is shown in Figure 2.

3.1 Overview
We design our model as an encoder-decoder framework, implemented with a hierarchical recurrent neural network (RNN) structure. This is intuitive since our target is to generate a sentence sequence based on the input image sequence, and each sentence can be considered as a sequence of words. Given an image sequence, we first apply a convolutional neural network (CNN) to extract features of each image and use them as input of the image encoder.

The image encoder is an RNN with a gated recurrent unit (GRU) [8] as its cell. It is used to encode the story feature of each image. At the i-th time step, we feed the image feature of the i-th image to the GRU as its input, and take the hidden state of the GRU cell as its story feature output. The story feature of the i-th image is as follows:

{c_i} = E_I({x_i}),    (1)

where E_I denotes the image encoder and {c_i} denotes the whole sequence of story features, where i ranges from 1 to N, with N being the total number of images in the sequence. Each story feature corresponds to one image. Coherence among images is enhanced by the RNN, which is crucial to story generation from several image inputs. Therefore we consider the output of the image encoder as the story feature of the current image.

In our encoder-decoder framework, the story generator serves as the decoder. We use a language model that predicts the best possible sentence to generate the story based on both the story feature and the emotion feature of the current image:

{s_i} = Decoder({c_i ⊕ e_i}),    (2)

where s_i denotes the sentences decoded by the decoder given the story feature and the emotion feature e_i of the input image, and ⊕ denotes concatenation.

The emotion features can be generated automatically or defined with customized emotions. To generate emotion automatically, we design a sequential conditional generative adversarial network based on the conditional generative adversarial network (cGAN) [21] with an additional GRU layer. With the GRU layer, the generator for one image is affected by both the image and its contextual predicted emotions. To generate diverse yet realistic emotions, we adopt this GAN structure instead of a classification network for generating emotion. Our model is also capable of handling customized emotions to guide the generation of stories with a given emotion feature. Since the same sequence of images can be interpreted differently by different annotators in terms of emotion, we believe that being able to customize emotion is crucial for visual storytelling.

Inspired by [26], we implement reinforcement learning [29], where the reward r consists of three parts: image-relevance r_I, story-likeness r_S and emotion-consistency r_E. The first two parts, image-relevance and story-likeness, are scored by two separate discriminators. Emotion-consistency is proposed to measure the consistency between the emotion of the generated story and the input emotion (either generated or customized).

3.2 RNN as Image Encoder
We utilize a GRU as the image encoder. Since the GRU generates its output based on the current input and the previous inputs, the output contains the contextual information of the current image. Therefore, coherence among images is enhanced, which is a crucial part of storytelling. We consider the output of the GRU, c_i, as the story feature of the input image:

c_i, h_i = GRU(x_i, h_{i-1}),    (3)

where h_i denotes the hidden state of the GRU. In order to further enhance the coherence of our generated story features, we feed the mean pooling of all the image features from the sequence to the GRU as the initial hidden state.

3.3 Emotion Feature
The emotion feature serves as one of the inputs to our story generator, which guides the latter to include emotion in the generated stories. Existing research has shown that there is a pattern between visual content and human-perceived emotions [3, 27]. Given an input sentence, we first predict the emotions conveyed by the sentence through a probability distribution over multiple emotions:

e_i(a) = p(a | s_i, θ_DM),  a ∈ F,    (4)

where F denotes the event space (i.e., all the possible emotions), and θ_DM denotes the parameters of the emotion analysis model. p(a | s_i, θ_DM) denotes the probability of emotion a in the distribution given by the analysis model. This probability distribution is then used as the emotion feature in our experiments for training. We use emotion features extracted from ground-truth sentences as input to the decoder during training.
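To make the encoder of Eqs. (1)-(3) concrete, the following is a minimal PyTorch-style sketch of the sequential image encoder, assuming pre-extracted CNN features x_i (e.g., 4,096-dimensional fc7 vectors) and a single-layer GRU. The module names, the linear projections, and the exact sizes are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class ImageSequenceEncoder(nn.Module):
    """Sketch of the story-feature encoder: a GRU run over per-image CNN features.

    The initial hidden state is derived from the mean-pooled image features, as
    described in Sec. 3.2. Names and sizes are illustrative assumptions.
    """
    def __init__(self, feat_dim=4096, hidden_dim=1000):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden_dim)       # project CNN feature to GRU input size
        self.init_proj = nn.Linear(feat_dim, hidden_dim)  # mean-pooled feature -> initial hidden state
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, image_feats):
        # image_feats: (batch, N, feat_dim), N = number of images in the sequence
        h0 = torch.tanh(self.init_proj(image_feats.mean(dim=1))).unsqueeze(0)  # (1, batch, hidden)
        inputs = self.proj(image_feats)                                        # (batch, N, hidden)
        story_feats, _ = self.gru(inputs, h0)                                  # c_i for each image
        return story_feats                                                     # (batch, N, hidden)

# Usage: c = ImageSequenceEncoder()(torch.randn(2, 5, 4096))  # -> (2, 5, 1000)
```

The decoder would then consume the concatenation c_i ⊕ e_i per image, as in Eq. (2).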

Figure 2: The framework of the emotion reinforced visual story generator. The model can be considered as an encoder-decoder with a hierarchical recurrent neural network structure. The sequential image encoder (b) encodes the images with a GRU as story features. Emotion representations are either generated by our proposed emotion generator conditioned on images (e) or predetermined with customized emotions (f). Both story features and emotion features serve as input to the story generator (c). In addition, we apply reinforcement learning with a multi-modal discriminator (g), a story-style discriminator (h) and emotion affirmation (i) to provide rewards for the optimization of the story generator.

The extracted emotion features are also utilized to train our emotion generator. The emotion generator is designed to generate emotion from images automatically. Since we consider emotion generation as a creative process rather than a predictive process, we design the generator as a generative model, which has shown its creativity in recent work [4, 15]. Our emotion generator is based on cGAN [21] and follows the generator-discriminator structure. The generator is a multi-layer perceptron followed by a GRU. Since the emotion features we need are probability distributions, we add a softmax function to the end of the generator.

The emotion generator takes the image features {x_i} of an image sequence and a sequence of random noise vectors {z_i} as input, and learns to generate a sequence of creative yet plausible emotion vectors. The GRU in the generator enables it to generate context-aware emotions. The emotion generator can be denoted as:

y_i = MLP(x_i, z_i),    (5)
ê_i, h_i = GRU(y_i, h_{i-1}),    (6)
e_i = softmax(ê_i),    (7)

where ê_i and h_i denote the output and the hidden state of the GRU, respectively.

We design discriminators at two levels. An instance-level discriminator measures the image-relevance of the generated emotions, and a sequence-level discriminator measures the consistency of the generated story sequence. The generator and the discriminators are jointly trained in the same way as cGAN [21].

The story generator is also capable of generating stories based on customized emotion features, which we believe is crucial to storytelling since the same image sequence can be interpreted with different emotions. There are many ways to customize emotions. How customized emotion features are obtained in our experiments will be further discussed in the experiment section.
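As a rough illustration of Eqs. (5)-(7), the sketch below implements the generator side of the sequential cGAN: an MLP maps each image feature and a noise vector to an intermediate code, a GRU makes the sequence context-aware, and a softmax yields a probability distribution over emotion categories. The dimensions (64 emotion classes, 100-dimensional noise, hidden sizes) are assumptions for illustration, and the two discriminators and the adversarial training loop are omitted.

```python
import torch
import torch.nn as nn

class EmotionGenerator(nn.Module):
    """Generator of the sequential cGAN sketched from Eqs. (5)-(7):
    y_i = MLP(x_i, z_i); e_hat_i, h_i = GRU(y_i, h_{i-1}); e_i = softmax(e_hat_i).
    All sizes are illustrative assumptions, not the paper's exact configuration."""
    def __init__(self, feat_dim=4096, noise_dim=100, hidden_dim=256, num_emotions=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + noise_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # GRU output dimension equals the number of emotion categories,
        # so its output can be passed directly through a softmax.
        self.gru = nn.GRU(hidden_dim, num_emotions, batch_first=True)

    def forward(self, image_feats, noise):
        # image_feats: (batch, N, feat_dim); noise: (batch, N, noise_dim)
        y = self.mlp(torch.cat([image_feats, noise], dim=-1))  # (batch, N, hidden_dim)
        e_hat, _ = self.gru(y)                                 # context-aware emotion logits
        return torch.softmax(e_hat, dim=-1)                    # e_i: distribution over emotions

# Usage: e = EmotionGenerator()(torch.randn(2, 5, 4096), torch.randn(2, 5, 100))  # (2, 5, 64)
```

At inference time, sampling different noise sequences {z_i} for the same images would yield different, yet contextually coherent, emotion sequences.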

3.4 RNN Decoder as Story Generator
Given the predicted story feature and emotion feature, the decoder predicts the best possible sentence. We use an RNN language model as the decoder, which predicts sentences by predicting each word in a sequence according to the story and emotion features, as well as all the previously predicted words.

3.5 Reinforcement Learning
We incorporate reinforcement learning [29] in our approach by considering the story generator as the agent, and the process of picking each word as an action given the situation. The generator is guided by a reward that consists of three parts: image-relevance r_I, story-likeness r_S and emotion-consistency r_E. The first two measurements are judged by two discriminators, an instance-level discriminator D_I that measures the image-relevance and a sequence-level discriminator D_S that measures the story-likeness, as described by [26]. The instance-level discriminator is trained to distinguish paired sentences and images from randomly selected sentences and generated sentences, while the story-level discriminator is trained to distinguish real stories picked from the dataset from stories formed with randomly selected sentences and from generated stories.

For the first two rewards, we simply use the probability, predicted by the discriminators, that our generated sentences are ground-truth sentences as the reward functions:

r_I(s_i | x_i) = P_{D_I}(groundtruth | s_i, x_i),    (8)

r_S({s_i}) = P_{D_S}(groundtruth | {s_i}).    (9)

Emotion-consistency measures the distance between the input emotion and the emotion feature of the generated story. Since we use probability distributions as emotion features, the total variation distance is applied as the distance between two emotion features, and the negative distance is used as the emotion-consistency reward:

r_E(s_i | e_i) = − sup_{a ∈ F} |e_i(a) − p(a | s_i)|,    (10)

where e_i(a) and p(a | s_i) denote the probability of emotion a in the input emotion feature e_i and in the emotion feature extracted from the generated story s_i, respectively.

The final reward is the weighted sum of the aforementioned three parts:

r({s_i} | {x_i}, {e_i}) = γ_1 {r_I(s_i | x_i)} + γ_2 r_S({s_i}) + (1 − γ_1 − γ_2) {r_E(s_i | e_i)}.    (11)

With this reward, we incorporate policy gradient to train our network. Since we consider the story generator as the agent and each word picked as an action, the policy is defined as:

p_θ(w_{i,t} | x_i, e_i, w_{i,t−1:0}),    (12)

where w_{i,t} denotes the t-th word picked in the i-th sentence. Thus, given the reward r, the loss function to minimize can be denoted as:

L(θ) = − Σ_{i=1}^{N} Σ_{t=1}^{T} p_θ(w_{i,t} | x_i, e_i; w_{i,t−1:0}) r(s_i | x_i, e_i).    (13)

Following [26], we utilize policy gradient to minimize the loss function and approximate it with Monte-Carlo samples by sampling each w_{n,t}. The approximated gradient can be similarly denoted as:

∇_θ L(θ) ≈ − Σ_{i=1}^{N} Σ_{t=1}^{T} r(s_i | x_i, e_i) ∇_θ p_θ(w_{i,t} | x_i, e_i; w_{i,t−1:0}).    (14)

As shown by [26], reinforcement learning with both the instance-level and sequence-level discriminators greatly improves the quality of the generated stories in terms of language. We take one step further by introducing the emotion-consistency reward, which helps our model generate stories that are more emotive.
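The sketch below shows one way the three rewards of Eqs. (8)-(11) could be combined and turned into a policy-gradient training signal, assuming the discriminator probabilities and the emotion distributions are already available as tensors. The weights γ_1 = 0.72 and γ_2 = 0.18 follow the values reported in Sec. 4.2; the use of log-probabilities in the surrogate loss is the standard REINFORCE formulation rather than a claim about the authors' exact objective, and the batching conventions are assumptions.

```python
import torch

def total_reward(p_inst, p_seq, e_input, e_gen, gamma1=0.72, gamma2=0.18):
    """Combine the three critics into the reward of Eq. (11).
    p_inst:  (batch, N)    probability from the instance-level discriminator D_I
    p_seq:   (batch,)      probability from the sequence-level discriminator D_S
    e_input: (batch, N, E) input emotion distributions e_i
    e_gen:   (batch, N, E) emotion distributions extracted from generated sentences
    The sup-distance form of r_E follows Eq. (10); broadcasting the story-level
    reward over sentences is an assumption made for this sketch."""
    r_I = p_inst                                                       # Eq. (8)
    r_S = p_seq.unsqueeze(1).expand_as(p_inst)                         # Eq. (9), shared over sentences
    r_E = -(e_input - e_gen).abs().max(dim=-1).values                  # Eq. (10): negative sup distance
    return gamma1 * r_I + gamma2 * r_S + (1 - gamma1 - gamma2) * r_E   # Eq. (11)

def policy_gradient_loss(log_probs, rewards):
    """REINFORCE-style surrogate corresponding to Eqs. (13)-(14).
    log_probs: (batch, N, T) log p_theta of each sampled word w_{i,t}
    rewards:   (batch, N)    per-sentence reward r(s_i | x_i, e_i)
    The reward is detached so that backpropagation only flows through the
    policy's log-probabilities, matching the Monte-Carlo gradient estimate."""
    return -(rewards.detach().unsqueeze(-1) * log_probs).sum(dim=(1, 2)).mean()
```

In practice the sampled stories, their discriminator scores, and their emotion distributions would be recomputed at each training step before evaluating this loss.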
4 EXPERIMENT
4.1 Dataset and Emotion Analysis
We conduct our experiments on the VIST dataset created by [13]. The VIST dataset is the current largest dataset created specifically for the task of visual storytelling. It contains 81,743 images obtained from the Flickr website, arranged into 20,211 image sequences. Each image sequence is annotated with corresponding stories through AMT (Amazon Mechanical Turk). Each sequence contains 5 images, and most sequences have multiple annotations.

For pre-processing, we filtered out sequences with images that are no longer available in the dataset, leaving 40,143 images and 26,890 sequences for the training set, and 5,055 images and 1,011 sequences for the testing set. In addition, we tokenized the sentences and filtered out words with fewer than 4 occurrences, creating a vocabulary of 10,698 words.

To investigate the importance of emotions for stories, we analyze the sentences in the VIST dataset in terms of emotion diversity. We check the diversity of emotions among annotated sentences. We utilize a state-of-the-art method for emotion analysis from text, DeepMoji [10], which is trained on more than 1 billion sentences and their corresponding emotions, achieving as high as 82.4% human agreement [10]. Figure 3 shows the distribution of emotions (each corresponding to an emoji) in the training data. We can see that more than half of the sentences convey obvious emotions, and the different emotions are equally distributed among these emotive sentences.

Figure 3: Distribution of emotions in the training data of VIST. Emotions are analyzed by DeepMoji [10] and clustered into 9 categories, each represented by a typical emoji; the remaining portion corresponds to sentences with neutral emotion, i.e., stories not displaying strong emotion.
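For reference, the vocabulary construction described in the pre-processing step above (tokenize the annotations, drop words appearing fewer than 4 times) can be sketched as follows. The whitespace tokenizer, lowercasing, and the special tokens are illustrative assumptions, not the authors' exact pipeline.

```python
from collections import Counter

def build_vocab(stories, min_count=4):
    """Build the word vocabulary used by the decoder.

    stories: iterable of annotation strings. Words occurring fewer than
    `min_count` times are dropped, as described in Sec. 4.1; the tokenizer
    and the special tokens here are assumptions for illustration."""
    counts = Counter()
    for story in stories:
        counts.update(story.lower().split())
    vocab = ["<pad>", "<start>", "<end>", "<unk>"]
    vocab += [w for w, c in counts.most_common() if c >= min_count]
    return {w: i for i, w in enumerate(vocab)}

# Usage: word2id = build_vocab(["we went to the zoo .", "the kid was excited ."], min_count=1)
```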

4.2 Implementation Details
In our experiments, we use the outputs of the fc7 layer of a pre-trained VGG16 model, which have 4,096 dimensions, as our image features. The sizes of the hidden states of the story encoder RNN and the language decoder RNN are 1,000 and 1,025, respectively. We utilize DeepMoji [10] to extract emotion features for sentences. The emotion features have 64 dimensions and are embedded into a space of 25 dimensions.

Before training our model with reinforcement learning, we first pre-train our generator without the discriminators. The weights of the three rewards used for reinforcement learning are empirically set to 0.72 for image-relevance, 0.18 for story-likeness and 0.1 for emotion-consistency.

4.3 Stories with Auto-Generated Emotions
We first conducted experiments with automatically generated emotions, using images as the only input. In this way, we can compare fairly with previous work and demonstrate our model's ability to generate diverse and emotive stories. We conducted both objective and subjective evaluation of the results.

4.3.1 Compared Methods. We compare the results of our models with four baseline methods, including several image/video captioning models and the previous state-of-the-art model for story generation. The models are:
• Sentence-Concat [25]: a classic method that applies the basic CNN-RNN framework to image captioning. For story generation, we simply concatenate the individual outputs for each image into the complete story.
• Regions-Hierarchical [16]: a hierarchical recurrent neural network that generates several sentences for one image based on regions and bidirectional retrieval. We use images instead of regions to generate the corresponding sentences.
• SRT [26]: a state-of-the-art visual storytelling model, which is the first to incorporate reinforcement learning in the task of storytelling. We test their method with two settings, SRT w/o D and SRT w/ D, for fair comparison with our model with and without discriminators.
• Our Model: to examine the effectiveness of the three critics as rewards, we train our model with the following settings: the pre-trained model without critics (Ours w/o critics), with discriminators only (Ours w/ Dm & Ds), with emotion affirmation only (Ours w/ E), and with all critics (Ours).

4.3.2 Objective Evaluation Metrics. For objective evaluation, similar to other visual storytelling research, we compare the generated stories with reference stories and compute language similarity with an NLP metric (BLEU [22]). As discussed by [28], sentence translation metrics are not ideal for the task of visual storytelling, as they rely heavily on the correlation between predicted sentences and ground-truth sentences, which is heavily biased. Thus, we define two more objective evaluation metrics to measure the emotiveness and novelty of sentences. The metrics are as follows:
• Relevance: BLEU is an evaluation metric for machine translation. It calculates scores based on the correlation between the generated stories and the ground-truth stories. Note that the relevance here only indicates the relevance between generated stories and reference stories, and cannot reflect whether the generated stories are really relevant to the image sequence.
• Emotiveness: Since we introduce emotion into story generation, we define emotiveness as the extent to which the sentences can express emotions. We use the confidence score, i.e., the sum of the probabilities of the top-5 candidate emotions predicted by DeepMoji [10], for the measurement. The score therefore ranges from 0 to 1, with a higher score indicating a more emotive story. Similar to the inception score for GANs, since DeepMoji is a classification model, the confidence of its prediction can be considered as the clearness and strength of emotion in the sentences.
• Novelty: With the introduction of emotion, our generated stories are expected to include more diverse and novel words, so we also evaluate our method with novelty. Novelty for stories is proposed in [28] for their subjective evaluation. We quantify it by counting the less frequent words used in the sentences. Following the novelty definition in [18], we compute the proportion of N-grams that occur in the training dataset excluding the most frequent 10%. We use both bi-grams and tri-grams as measurements here because expressive wording usually resides in phrases rather than single words. (A computation sketch of the emotiveness and novelty metrics is given at the end of this section.)

4.3.3 Subjective Evaluation. We conducted human evaluation since objective metrics are not capable of perfectly evaluating the performance of generated stories, as the evaluation of stories
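To make the two proposed metrics concrete, here is a hedged sketch: emotiveness as the summed probability of the top-5 emotions in a sentence's emotion distribution, and novelty as the proportion of generated n-grams that occur in the training data but fall outside its most frequent 10% of n-grams. The `emotion_probs` argument stands in for the output of an emotion classifier such as DeepMoji, whose exact API is not assumed here, and the handling of n-grams unseen in training is one possible reading of the definition above.

```python
from collections import Counter

def emotiveness(emotion_probs, k=5):
    """Confidence-style emotiveness of one sentence (Sec. 4.3.2): sum of the
    top-k probabilities in its emotion distribution. Obtaining the distribution
    (e.g., from DeepMoji) is outside this sketch."""
    return sum(sorted(emotion_probs, reverse=True)[:k])

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def novelty(generated_stories, training_stories, n=2, frequent_fraction=0.10):
    """Proportion of generated n-grams that occur in the training data but are
    not among its most frequent 10% (one interpretation of the metric; the
    treatment of unseen n-grams is an assumption)."""
    train_counts = Counter()
    for story in training_stories:
        train_counts.update(ngrams(story.lower().split(), n))
    ranked = [g for g, _ in train_counts.most_common()]
    frequent = set(ranked[:int(len(ranked) * frequent_fraction)])
    total, novel = 0, 0
    for story in generated_stories:
        for g in ngrams(story.lower().split(), n):
            total += 1
            if g in train_counts and g not in frequent:
                novel += 1
    return novel / max(total, 1)
```

Running `novelty` with n=2 and n=3 gives the bi-gram and tri-gram variants used in the evaluation.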

