A Pedagogical Analysis of Online Coding Tutorials

Ada S. Kim and Amy J. Ko
The Information School, Mary Gates Hall, University of Washington

SIGCSE '17, March 8–11, 2017, Seattle, WA, USA. DOI: http://dx.doi.org/10.1145/3017680.3017728

ABSTRACT
Online coding tutorials are increasingly popular among learners, but we still have little knowledge of their quality. To address this gap, we derived several dimensions of pedagogical effectiveness from the learning sciences and education literature and analyzed a large sample of tutorials against these dimensions. We sampled 30 popular and diverse online coding tutorials and analyzed what and how they taught learners. We found that tutorials largely taught similar content, organized content bottom-up, and provided goal-directed practice with immediate feedback. However, few were tailored to learners' prior coding knowledge, and only a few informed learners how to transfer and apply learned knowledge. Based on these results, we discuss strengths and weaknesses of online coding tutorials and opportunities for improvement, and recommend that educators point their students to educational games and interactive tutorials over other tutorial genres.

Keywords
Online learning; coding tutorials; curriculum; pedagogy

1. INTRODUCTION
In recent decades, desire to learn programming has increased dramatically, while major government and non-profit efforts such as the Hour of Code, CS Education Week, and CS For All have begun to create infrastructure for broad-scale learning of computing and coding. To meet this high demand, a variety of online resources for learning how to code have emerged. Some of these tutorials are open-ended, creative platforms such as Scratch [24] and Alice [8]. Others are lecture-style courses provided by massively open online courses (MOOCs) like Coursera (coursera.org) and edX (edx.org). Some are tutorial-style curricula such as Khan Academy (khanacademy.org) and Codecademy (codecademy.com), which offer a range of content to teach popular programming languages and platforms. There are also many evidence-based educational programming games like Gidget [19], Lightbot [15], and Code Hunt [2], which aim to teach coding by gamifying some form of programming activity. There are of course also many reference guides with substantial example code, including W3Schools (w3schools.com) and Tutorials Point (tutorialspoint.com), and more social forums such as Stack Overflow (stackoverflow.com) that provide significant reference resources for learners. Popular tools such as the online Python Tutor even allow learners to visualize program execution [16].
Millions of people are using these resources every day to learn independently, but we have only just begun to understand their effectiveness. Recent work, for example, has explored the learning outcomes of open-ended creative environments and MOOCs, finding that while many learners use sophisticated programming language constructs [9, 11], there is still little evidence that they produce robust programming knowledge [18, 20, 34]. There is some evidence that explicit instruction and guidance through tutorials can improve learning [17], and more recent evidence that while e-books for CS teacher training can engage learners, learning is a continued challenge [33].

This evidence has several limitations. First, the evidence is sparse, investigating only a few types of tutorials, most of them research prototypes [2, 15, 16, 19]. This means that we still know little about the current content of the popular tutorials that learners are using. Second, most of the evidence is narrow, in that it focuses on specific measurements of learning and engagement, overlooking many important factors in learning that are more difficult to measure and control for. The result is that teachers have little holistic guidance about how to choose effective tutorials, and researchers have little insight into the broader set of online materials and how they differ.

To address these problems, we took an analytical approach to evaluating online coding tutorials, investigating what online tutorials currently teach and how they teach it by analyzing tutorials against a set of curriculum design dimensions. The benefit of an analytical approach is that we could assess a large set of tutorials, and we could assess aspects of tutorials that are difficult to measure quantitatively. This approach is inspired by a long history of curriculum evaluation frameworks, which offer principles and rubrics grounded in theories of learning [10, 28, 29]. To improve the actionability of our results, we generated pedagogical principles specific to coding tutorials, deriving them from more general pedagogical design principles.

In the rest of this paper, we discuss our sampling and assessment approach in detail, describing how we derived our assessment model. We then discuss our results and their implications.

2. METHOD

2.1 Selecting Tutorials
To begin, we generated a list of tutorials to evaluate (Table 1). One of our criteria for selecting tutorials was popularity. Using the Google search engine with two query terms, "online coding tutorial" and "coding tutorial," we sampled and reviewed active coding tutorial websites that appeared in the first 10 pages of results. We ensured that Google's personalized search was turned off to prevent any effects from the search history in the browser. We excluded websites that simply aggregated content from other sites.

In addition to search result popularity, we also estimated the amount of web traffic to each tutorial using Alexa (alexa.com) on July 29, 2016. We used Alexa's global rank, an estimate of a site's popularity relative to all other sites over the past three months, updated daily. The rank was based on a combined measure of unique visitors (the number of unique Alexa users who visited a site on a given day) and page views (the total number of Alexa user URL requests for a site); the site with the highest combination of unique visitors and page views was ranked first. Based on the global rank provided by Alexa, we included tutorial websites that ranked below 100,000.

We also considered popularity in educational settings. For example, Scratch [24] and Alice [8] are broadly used in classrooms but had relatively high Alexa rankings of 4,397,390 and 212,300, respectively. Educational games such as Lightbot, powered by Hour of Code, ranked 214,897. As this paper aimed to compare pedagogical approaches across genres of online coding tutorials, we also included these tutorials.
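As a rough illustration of the resulting inclusion rule (a minimal sketch, not the authors' analysis code; the candidate list, classroom flags, and the Codecademy rank are hypothetical, while the Scratch, Alice, and Lightbot ranks are those reported above), a site qualified if its Alexa global rank was below 100,000, or if it was broadly used in classrooms despite a higher rank:

```python
# Illustrative sketch of the inclusion rule described above.
# Tutorial names, classroom flags, and Codecademy's rank are placeholders;
# the Scratch, Alice, and Lightbot ranks are the ones reported in the paper.

CANDIDATES = [
    # (name, alexa_global_rank, broadly_used_in_classrooms)
    ("Codecademy", 2_000, False),
    ("Scratch", 4_397_390, True),
    ("Alice", 212_300, True),
    ("Lightbot", 214_897, True),
    ("SomeAggregatorSite", 5_000_000, False),
]

def include(rank: int, classroom_popular: bool) -> bool:
    """Include web-popular sites (rank < 100,000) and classroom-popular tutorials."""
    return rank < 100_000 or classroom_popular

selected = [name for name, rank, classroom in CANDIDATES if include(rank, classroom)]
print(selected)  # ['Codecademy', 'Scratch', 'Alice', 'Lightbot']
```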
Next, because many tutorial sites taught multiple programming languages, we also focused our assessments on the tutorials for popular languages. To do this, we referred to four online sources of programming language popularity: GitHub, tag rankings in Stack Overflow, the TIOBE programming community index (www.tiobe.com/tiobe-index), and the PopularitY of Programming Language index (pypl.github.io/PYPL.html). We chose courses and curricula that taught one of the six most popular languages (Java, Python, PHP, JavaScript, C#, C) overlapping across all four sources.

Our resulting sample included 30 tutorials, shown in Table 1. To help compare the tutorials, we also categorized them under one of five genres of resources:

- Interactive tutorials required learners to interact with a command window, text editor, or equivalent in order to pass successive stages. This genre included sites such as Codecademy and Code School. Some of these tutorials focused on specific functionality, such as Regex Golf (regex.alf.nu) and Regex101.com.
- Web references played the role of a "dictionary." Tutorials under this genre, such as Tutorials Point, help learners properly code against a library, API, or platform. Some web references, such as W3Schools or Learnpython.org, provided code editors or command windows for learners who might want extra practice with reference code.
- MOOCs had a hierarchical structure with step-by-step stages, incorporating text-based quizzes and exams after a sequence of instruction. This genre included popular lectures on Lynda.com, edX, and Coursera.
- Educational games provided goals, story, and immediate feedback, and often provided a more visually rich graphical environment. They often provided scores based on achievement, or game items which could be consumed within the system. This genre included games such as Gidget [19], Code Combat, and Code Hunt.
- Creative platforms provided learners with an editor and content, but little instruction other than a reference guide and no explicit goals. This genre included Scratch [24] and Alice [8].

2.2 Dimensions for analysis
Here we describe our process for obtaining dimensions for analyzing the tutorials. First, we needed a framework of learning sciences principles against which to evaluate the tutorials. We based our evaluation on findings from the learning sciences, focusing on the nine groups of 24 dimensions shown in Table 1. These groups spanned four core pedagogical principles: 1) connecting to learners' prior knowledge [22, 23], 2) organizing declarative knowledge [3], 3) practice and feedback [1, 13], and 4) encouraging meta-cognitive learning [14, 21]. We adapted these four principles from the major effort over a decade ago to synthesize the seminal theoretical and empirical discoveries in learning sciences and education research into actionable principles for teaching and learning [3]. We decided to exclude principles related to collaborative learning, as most of the coding tutorials in our sample are not explicitly social experiences.

To assess the degree to which the tutorials in our sample followed the four principles, we generated 24 pedagogical dimensions specific to individual learning in coding tutorials. Each of the 24 dimensions we derived had significant nuance, but to simplify analysis and reporting, we reduced all but one dimension to a binary yes or no, where "yes" constituted satisfying a particular pedagogical design dimension, as defined by a written criterion. For example, the criterion for the utilization dimension (Table 1.2) was "The instruction of the new stage explicitly requires use of at least one command or one function taught in the previous stage." If a tutorial met this criterion, we marked a "yes," and a "no" otherwise.

After analyzing the first few tutorials with a prototype of several dimensions we initially created, we evaluated the dimensions through discussion, refining the detail of each dimension, assessing its necessity for tutorial analysis, and removing unhelpful or uninformative dimensions. We iterated until all dimension criteria were sufficiently described and assessable.

With the final dimensions and criteria, we accessed each tutorial online and went through the course to check the criteria for each dimension. When a dimension had more than two answer categories (Table 1.4), we recorded all answers. We also marked "NA" when a dimension was not applicable to a tutorial (Table 1.3). We analyzed at least one module of each tutorial, and in some cases analyzed an entire tutorial. It took approximately 2 hours per tutorial to check all criteria, and about 60 hours overall.
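To make the coding scheme concrete, the record produced for a single tutorial can be pictured as follows (a minimal sketch with abbreviated, partly hypothetical dimension labels and invented values; it is not the authors' instrument, and the full set of 24 dimensions appears in Table 1):

```python
# Sketch of the per-tutorial record implied by the coding scheme above.
# Dimension labels are abbreviated and the values are invented for illustration.

assessment = {
    "1.1.a personalization by age range": "no",
    "1.1.b personalization by education status": "yes",
    "1.1.c personalization by prior coding knowledge": "no",
    "1.2   utilization of previously taught commands": "yes",
    "1.3   content taught (FCS1 objectives)": "NA",       # NA when not applicable
    "1.4.a organization of information": "bottom-up",     # the non-binary dimension (Table 1.4)
    "1.7.b immediate feedback": "yes",
}

def count_satisfied(record: dict) -> int:
    """Count the binary dimensions marked 'yes' for one tutorial."""
    return sum(1 for value in record.values() if value == "yes")

print(count_satisfied(assessment))  # 3
```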

3. RESULTS
Our final data set appears in Table 1, showing that tutorials varied widely in their compliance with our pedagogical principles, though some genres were more principled than others. In this section, we discuss each of the dimensions we evaluated in detail, organizing our discussion by our four core principles.

Table 1. Thirty tutorials analyzed across 24 dimensions. Each check mark represents satisfaction of a pedagogical principle.

3.1 Connecting to learners' prior knowledge
Our first set of dimensions concerned tutorials' approach to learners' prior knowledge. It is now widely accepted in the learning sciences that people construct new knowledge based on what they already know and believe [6, 7, 22, 23, 31, 32]. Regardless of age [23], learners bring prior knowledge in the form of facts, perceptions, beliefs, values, and attitudes to the new learning context [6, 7, 22]. While accurate and complete prior knowledge facilitates learning new knowledge, inaccurate and incomplete preconceptions hinder it. Therefore, in any learning context, it is important to understand learners' prior knowledge and deeply connect instruction to this prior knowledge.

To evaluate how the coding tutorials in our sample connected to learners' prior knowledge, we analyzed two groups of dimensions: personalization (Table 1.1) and utilization (Table 1.2) of knowledge. First, personalization of knowledge represented whether the tutorials customized teaching materials to meet prior knowledge along three dimensions: whether tutorials helped learners select an appropriate learning material based on their age range (Table 1.1.a), educational status (Table 1.1.b), or prior coding knowledge (Table 1.1.c). Among many ways to personalize learning materials, we chose these three because they are common factors that curricula use to differentiate instruction in formal educational systems. When we evaluated these three dimensions, many coding tutorials did not personalize what they teach for their learners. Only Code.org, Lightbot, and Scratch explicitly indicated an appropriate age range for tutorial selection. Fourteen out of 30 tutorials considered learners' education status, but rather superficially, such as vaguely separating beginner, intermediate, and advanced levels to indicate difficulty. None of the tutorials recommended specific stages or modules based on learners' prior coding experience.

For the "utilization" dimension, we analyzed how the tutorials helped learners leverage the knowledge they accumulated throughout the tutorial (Table 1.2). For example, some tutorials summarized the knowledge from prior lessons and showed learners how to apply it to new concepts; others taught material once and never mentioned it again. All educational games required learners to apply knowledge from one stage to subsequent stages, which helped learners connect knowledge better than other genres of tutorials. A few interactive tutorials like Code School and CodingBat Python (codingbat.com/python), and MOOCs like Coursera and edX, also had structural stages that utilized the information taught in one stage in new ones. Only one web reference tutorial, FunProgramming.org, had a similar structure. Only Codecademy, Khan Academy, and Code Avengers (codeavengers.com) required knowledge from previous stages to pass an overview stage at the end of a module.

3.2 Organizing declarative knowledge
Transforming factual information into robust declarative knowledge is another key principle for effective learning [3]. For successful transformation, binding together a large set of disconnected facts is as important as connecting prior knowledge to new knowledge [1, 23, 31, 32]. Providing a conceptual framework for organizing information into meaningful knowledge helps learners gain a deeper understanding of learning material [4, 5].

To apply these principles to our evaluation, we analyzed the content of what tutorials taught (Table 1.3), focusing on the eight learning objectives in the FCS1 assessment [28], which included basic programming language concepts such as variables, arrays, loops, and functions. All five genres of tutorials taught all eight learning objectives, except a few tutorials that focused on specific abstractions (namely Regex101 and Regex Golf). Most of the educational games did not teach, or at least not explicitly, the concept of objects or object-orientation.

How information is organized can influence application of declarative knowledge [1]. Experts often organize information hierarchically, indicating their deeper understanding of how various pieces of information fit within a complex structure. In that sense, we analyzed the organization of information (Table 1.4), noting whether the tutorials structured information "bottom-up" (starting with basic concepts and building up to complex ones) or "top-down" (successively breaking down complex concepts into smaller ones) (Table 1.4.a), and whether the structured information was in a hierarchical form or not (Table 1.4.b).

Tutorials organized information differently across the genres. For example, web references and interactive tutorials organized content bottom-up, starting with the most elementary concepts and using one or two commands or functions at a time to solve a problem. The most common case was teaching how to print "Hello World" in a certain programming language; to do this, they explained what kind of command (e.g., print() in Python 3) should be typed in the text editor or interpreter, and displayed how it worked.
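For instance, the entirety of such a first stage often reduces to a single built-in call, along the lines of this generic Python 3 snippet (illustrative only, not taken from any particular tutorial):

```python
# The canonical first exercise in a bottom-up tutorial: one command, typed into an
# editor or interpreter, with its effect displayed immediately.
print("Hello World")   # output: Hello World
```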
In contrast, educational games, MOOCs, and creative platforms combined bottom-up and top-down approaches, or used a top-down approach more than web references and interactive tutorials did. For example, Scratch suggested a goal ("Make the cat in the screen dance") that learners could model, and provided step-by-step instruction to reach the goal, but also allowed a high level of freedom for users to apply the instruction to design their own code.

Organizing information hierarchically can help learners connect scattered facts [1, 5]. All games structured information hierarchically, which included many simple stages teaching one command or function at a time under particular programming topics. MOOCs and interactive tutorials with high Alexa rankings, like Codecademy and Khan Academy, did the same. For example, a module teaching conditional statements often included many sub-stages about how to correctly write if and while structures.
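A sub-stage in such a module typically isolates one construct at a time, as in this illustrative Python sketch (not drawn from any specific tutorial):

```python
# Sub-stage 1: a simple if/else decision.
temperature = 30
if temperature > 25:
    print("hot")
else:
    print("mild")

# Sub-stage 2: a while loop that counts down.
count = 3
while count > 0:
    print(count)
    count -= 1
```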
Finally, we analyzed the context in which the information was organized (Table 1.5), judging the story, background, and other concrete details in which content was presented [1, 3, 4]. We considered three dimensions of context. The first was the use of lectures, presenting content authoritatively (Table 1.5.a); MOOCs and the popular interactive tutorials used lecture-based contexts heavily. We also considered the use of goal-driven project contexts, in which learners were given a high-level goal to achieve by learning lower-level content (Table 1.5.b). Such goals can help learners actively engage in goal-based practice [13]. Only a few tutorials, primarily educational games and creative platforms, offered project-based contexts that provided an explicit goal for a stage or a module. For example, one of the goals in a Gidget stage was to operate a small robot, named Gidget, to carry a kitten to a basket; to achieve the goal, learners had to think not only about which functions to write, but also how to arrange them. Finally, we considered the use of story-based contexts (Table 1.5.c), which were used to connect learning goals. For example, Code Hunt positioned learners as "hunters" who carry out a secret project by fixing fragments of code. Most of the web references did not establish a specific learning context for what they taught, whether an authoritative lecture-based context, a goal-driven context, or a story-based context.

3.3 Practice and feedback
Evidence is clear that deliberate practice helps learners achieve mastery in a particular domain [12, 25]. Clearly structured and articulated goals are critical to enhancing the effectiveness of deliberate practice [13]. Deliberate practice, however, must be coupled with appropriately targeted feedback, including information about learners' progress to guide them toward goals [1]. To evaluate whether the tutorials supported deliberate practice, we analyzed two groups totaling three dimensions: learner actionability (Table 1.6) and feedback (Table 1.7).

Deliberate practice becomes effective when learners actively engage in it; the best way to practice coding is to write code. Therefore, our learner actionability dimension (Table 1.6) measured whether the tutorial required learners to actually write programs of some kind in order to learn. We found that all genres of tutorials offered some kind of interactive editor requiring learners to provide input, with the exception of a few web references that provided read-only information. We also found interesting diversity in the types of editors across the interactive tutorials and educational games. For example, Gidget was equipped with a sophisticated editor panel so that learners could even see error messages and syntax errors in the editor, which was more instructive than just providing a text guideline for practice. Khan Academy provided a visualized walkthrough with the editor panel so that learners could modify and run the code to see how their edits changed the contents of the walkthrough.

Prior work has shown that immediate, targeted feedback is critical for meta-cognitive monitoring [1, 3, 5]. Therefore, to analyze feedback, we judged two dimensions: whether tutorials provided feedback at all (Table 1.7.a) and whether that feedback was immediate (Table 1.7.b). All interactive tutorials and educational games with a code editor provided some form of immediate feedback, but much of it was shallow. For example, almost half of the tutorials did not provide feedback when learners made errors. These tutorials fell into two cases: 1) some tutorials, like Scratch, provided open-ended practice but did not provide feedback about right or wrong code relative to a goal, or 2) a tutorial's code editor did not produce feedback about error messages. These latter tutorials were usually web references that allowed learners to test functionality but did not explain failures.
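In many of these tutorials, the immediate but shallow feedback described above amounts to little more than running the learner's code and comparing its output against an expected answer. The following is a hypothetical sketch of that pattern (the function, messages, and checking logic are our illustration, not code from any sampled tutorial):

```python
# Hypothetical sketch of a shallow, immediate feedback loop: run the learner's code,
# compare its output to the expected output, and report pass/fail without explaining
# *why* an attempt failed.

import io
import contextlib

def check_submission(learner_code: str, expected_output: str) -> str:
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(learner_code, {})          # run the learner's program
    except Exception as error:
        return f"Error: {error}"            # raw error text, no guidance on how to fix it
    if buffer.getvalue().strip() == expected_output:
        return "Correct! Moving to the next stage."
    return "Incorrect, try again."          # immediate, but shallow

print(check_submission('print("Hello World")', "Hello World"))  # Correct! Moving to the next stage.
print(check_submission('print("Hi")', "Hello World"))           # Incorrect, try again.
```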

Some tutorials provided feedback through instructor or peer communication. For example, MOOCs provided some opportunities to communicate with instructors or peers, and some resources had online communities in which learners could ask questions. While this feedback was available, none of it was immediate, and learners had no guarantee of receiving answers to their questions.

3.4 Encouraging meta-cognitive learning
Two key ideas of meta-cognitive learning are learners' ability to predict the outcomes of their learning tasks and to monitor their understanding [4, 5]. Focusing self-reflection on what worked and what needs improving helps learners transfer what they learned to new settings and events [14, 21, 26, 27].

To evaluate whether tutorials encouraged meta-cognitive learning, we analyzed whether they taught how, when, and why learners should use a particular command or function, to help learners transfer or apply knowledge learned from the tutorials (Table 1.8). Few tutorials guided learners in transferring and applying knowledge to learning contexts outside of the curricula provided by the tutorials. Most of the tutorials strongly emphasized how to use particular functions and commands in coding. Only five tutorials, across three genres (web references, educational games, and MOOCs), attempted to explain when and why learners should use a particular command or function.

We also analyzed whether the tutorials provided support through additional materials outside the curriculum, so that learners monitoring their understanding could seek answers to their own questions beyond the tutorial content (Table 1.9). Almost all genres of tutorials provided some form of additional support, whether a discussion forum or additional references or resources. Four tutorials attempted to indicate where a learner's performance ranked and how high it was, which might encourage learners to self-monitor their progress in learning. For example, Code Hunt provided information about how fast and accurate the learner's performance was after passing every stage, which has been shown to enhance learners' engagement and level completion speed [20]. Five tutorials proactively helped learners recognize errors in their actions. Gidget was a good example: the tutorial notified learners when they omitted a required expression at the end of a function (e.g., "When I try to access an object in the world, I need to terminate its name with a '/' character.").

3.5 Tutorial Recommendations
Despite their limitations, interactive tutorials and educational games satisfied the majority of the pedagogical principles reflected in our dimensions' criteria. All tutorials in both genres required learners' active engagement in writing code, and most of them provided a structured hierarchy including several stages of goal-directed practice with subsequent application of learned knowledge. The educational games in particular offered many forms of context, which may help learners actively engage in deliberate practice. The educational games also provided the most immediate and personalized feedback, likely improving the gains from deliberate practice. Therefore, from a pedagogical perspective, we recommend games such as Gidget, Lightbot, and Code Hunt, and the tutorials provided by Code.org, as the tutorials most likely to be effective at producing learning.

4. DISCUSSION
Our results reveal several trends in coding tutorial pedagogy:

- They largely teach the same content.
- Most teach content bottom-up, starting with low-level programming concepts and building up to high-level goals.
- Most require learners to write programs.
- Most provide some form of immediate feedback in response to learner actions, but this feedback is shallow.
- Few explain when and why a particular concept is useful in programming.
- Few provide guidance for common errors.
- None provide personalization based on prior coding experience or learner goals, other than rudimentary age-based differentiation.

Despite the diversity of languages and content, most of the coding tutorials shared a similar paradigm. They dissected coding into the most detailed, elemental level. This bottom-up approach to organizing information enabled the tutorials to provide goal-based practice with one simple task for each stage. For example, most of the tutorials gave instruction about how to write a few simple lines of code (e.g., var1 = 1, var2 = 2, then what would be var1 + var2?) and to test it by typing the answer, to see whether they passed the stage or not. In that sense, the tutorials might fulfill one important criterion of effective learning: providing clearly structured and articulated goals for practice in the beginning stages.
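In interpreter-style tutorials, such a stage often reduces to predicting the value of a trivial expression, as in this illustrative Python rendering of the example above (the variable names follow the example; the surrounding prompt text varies by tutorial):

```python
# A typical single-task stage: assign two variables, then predict a simple expression.
var1 = 1
var2 = 2
print(var1 + var2)   # the learner predicts the result: 3
```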
This paradigm has several limitations. First, coding tutorials gave more attention to emphasizing how to practice particular commands and functions than to providing contextual information, like when and why to use them. More generally, none of the tutorials provided detailed and systematic problem-solving instruction other than one- or two-sentence hints when learners made errors in a stage. These pedagogical choices might limit tutorials' ability to teach learners to apply skills to broader learning contexts outside of the curriculum.

Lacking personalized instruction might also limit effectiveness. As our first learning principle emphasizes, it is important to connect existing knowledge to new knowledge, and to consider learners' incomplete understandings and false beliefs in that connecting process. However, most of the tutorials did not provide access to any sort of agent or instructor to give personalized feedback to guide deliberate practice. Second, we found that while tutorial feedback was immediate, it was rarely precise enough to improve learners' conceptions of the material, and it was not customized at all to learners' prior knowledge. This is a major area for future work that has yet to be deeply explored.

5. LIMITATIONS
There are many limitations in our study to address in future work. Although our sample was diverse, it is not necessarily representative of all of the tutorials used around the world, particularly those in languages other than English. Although we tried to measure popularity by using the Google search engine and Alexa, these methods could provide only general information about how many learners visited a website per day, not about their actual progress. Moreover, online coding tutorials are constantly evolving as companies seek ways to improve learning and engagement.

Our analysis also has limitations. Most of our criteria were binary judgments, even though many of the dimensions have substantially more nuance. The first author was also the only one who assessed all of the tutorials, and so there may have been systematic bias in her evaluations that was not eliminated through redundant coding.

Another major limitation of our study is that we analytically assessed tutorials, rather than measuring learning outcomes directly. It may be possible that many of the tutorials are effective despite failing to satisfy many of the learning principles we identified in prior work, as those principles might have been met in subtle ways not observed in this study.

6. CONCLUSION
Our results suggest that most online coding tutorials are still immature and do not yet achieve many key principles in the learning sciences. Future research and commercial development needs to better emphasize personalized support and precise, contextualized feedback, and to explore ways of explaining to learners why and when to use particular coding concepts. Based on our sampled tutorials, we recommend that teachers be very selective in their use of materials, focusing on the more evidence-based tutorials, particularly the educational games. All educational games in the list provide hierarchical structure, immediate feedback, and opportunities for learners to actively write code and use subsequent knowledge for coding throughout the tutorial. With future …

