Next-Paradigm Programming Languages: What Will They Look Like and What Changes Will They Bring?


Yannis Smaragdakis
Department of Informatics and Telecommunications
University of Athens
smaragd@di.uoa.gr

    [W]e've passed the point of diminishing returns. No future language will give us the factor of 10 advantage that assembler gave us over binary. No future language will give us 50%, or 20%, or even 10% reduction in workload.
    —Robert C. Martin [18]

Abstract

The dream of programming language design is to bring about orders-of-magnitude productivity improvements in software development tasks. Designers can endlessly debate on how this dream can be realized and on how close we are to its realization. Instead, I would like to focus on a question with an answer that can be, surprisingly, clearer: what will be the common principles behind next-paradigm, high-productivity programming languages, and how will they change everyday program development? Based on my decade-plus experience of heavy-duty development in declarative languages, I speculate that certain tenets of high-productivity languages are inevitable. These include, for instance, enormous variations in performance (including automatic transformations that change the asymptotic complexity of algorithms); a radical change in a programmer's workflow, elevating testing from a near-menial task to an act of deep understanding; and a change in the need for formal proofs.

CCS Concepts: • Software and its engineering → General programming languages; • Social and professional topics → History of programming languages.

Keywords: programming paradigms, next-paradigm programming languages

ACM Reference Format:
Yannis Smaragdakis. 2019. Next-Paradigm Programming Languages: What Will They Look Like and What Changes Will They Bring? In Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward! '19), October 23–24, 2019, Athens, Greece. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3359591.3359739

1 Introduction

Since the 1950s, high-level programming languages have resulted in orders-of-magnitude productivity improvements compared to machine-level coding. This feat has been a great enabler of the computing revolution, during a time when computer memories and conceptual program complexity have steadily grown at exponential rates. The history of computing is testament to language designers' and implementers' accomplishments: of the 53 Turing awards to present (from 1966 to 2018), a full 16 have been awarded for contributions to programming languages or compilers.¹

¹ The count is based on occurrences of "programming language(s)" or "compiler(s)" in the brief citation text of the award, also including Richard Hamming, who is cited for "automatic coding systems" (i.e., the L2 precursor of Fortran). Notably, the number does not include John McCarthy or Dana Scott, who are well known for languages contributions, yet the terms do not appear in the citation.

At this time, however, the next steps in programming language evolution are hard to discern.
Large productivity improvements will require a Kuhnian paradigm shift in languages. A change of paradigm, in the Kuhn sense, is a drastic reset of our understanding and nomenclature. It is no surprise that we are largely ineffective at predicting its onset, its nature, or its principles.

Despite this conceptual difficulty, the present paper is an attempt to peer behind the veil of next-paradigm programming languages. I happen to believe (on even days of the month) that a change of paradigm is imminent and all its technical components are already here. But you do not have to agree with me on either point—after all, the month also has its odd days.

Reasonable people may also differ on the possible catalysts of such a paradigm shift. Will it be machine learning and statistical techniques [23], trained over vast data sets of code instances? Will it be program synthesis techniques [9], employing symbolic reasoning and complex constraint solving? Will it be mere higher-level language design combined with technology trends, such as vast computing power and enormous memories?

Regardless of one's views, I hope to convince the reader that there is reasonable clarity on some features that next-paradigm programming languages will have, if they ever dominate. Similarly, there is reasonable clarity on what changes next-paradigm programming languages will induce in the tasks of everyday software development.

For a sampling of the principles I will postulate and their corollaries, consider the following conjectures:

- Next-paradigm programming languages will not display on the surface the computational complexity of their calculations. Large changes in asymptotic complexity (e.g., from O(n^4) to O(n^2)) will be effected by the language implementation. The language will not have loops or explicit iteration. The programmer will often opt for worse asymptotic complexity factors, favoring solution simplicity and countering performance problems by limiting the inputs of algorithms (e.g., applying an expensive computation only locally) or accepting approximate results.

- Next-paradigm programming languages will need a firm mental grounding, in order to keep program development manageable. This grounding can include: a well-understood cost model; a simple and clear model on how new code can or cannot affect the results of earlier code; a natural adoption of parallelism without need for concurrency reasoning.

- Development with next-paradigm programming languages will be significantly different from current software development. Minute code changes will have tremendous impact on the output and its computation cost. Incremental development will be easier. Testing and debugging will be as conceptually involved as coding. Formal reasoning will be easier, but less necessary.

In addition to postulating such principles, the goal of the paper is to illustrate them. I will use examples from real, deployed code, written (often by me) in a declarative language—Datalog. My experience in declarative programming is a key inspiration for most of the observations of the paper. It is also what makes the conjectures of the paper "real". All of the elements I describe, even the most surprising, are instances I have encountered in programming practice. I begin with this personal background before venturing to further speculation.

    I've seen things, that's why I'm seeing things.
    —me

2 Where I Come From

In the past decade, I have had the opportunity to write declarative, logic-based code of uncommon volume and variety, under stringent performance requirements. This experience underlies my speculation on the properties of next-paradigm programming languages.

Declarative code—lots of it. Most of my research (and the vast majority of my personal coding effort) in the past decade has been on declarative program analysis [24]. My group and a growing number of external collaborators have implemented large, full-featured static analysis frameworks for Java bytecode [4], LLVM bitcode [2], Python [14], and Ethereum VM bytecode [6, 7]. The frameworks have been the ecosystem for a large number of new static analysis algorithms, leading to much new research in the area.

These analysis frameworks are written in the Datalog language. Datalog is a bottom-up variant of Prolog, with similar syntax. "Bottom-up" means that no search is performed to find solutions to logical implications—instead, all valid solutions are computed, in parallel.
This makes the language much more declarative than Prolog: reordering rules, or clauses in the body of a rule, does not affect the output. Accordingly, computing all possible answers simultaneously means that the language has to be limited to avoid possibly infinite computations. Construction of new objects (as opposed to new combinations of values) is, therefore, outside the core language and, in practice, needs to be strictly controlled by the programmer. These features will come into play in later observations and conjectures.

The Datalog language had been employed in static program analysis long before our work [10, 15, 22, 29]. However, our frameworks are distinguished by being almost entirely written in Datalog: not just quick prototypes or "convenient" core computations are expressed as declarative rules, but the complete, final, and well-optimized version of the deployed code, as well as much of the scaffolding of the analysis. As a result, our analysis frameworks are possibly the largest Datalog programs ever written, and among the largest pieces of declarative code overall. For instance, the Doop codebase [4] comprises several thousands of Datalog rules, or tens of thousands of lines of code (or rather, of logical specifications). This may seem like a small amount of code, but, for logical rules in complex mutual recursion, it represents a daunting amount of complexity. This complexity captures core static analysis algorithms, language semantics modeling (including virtually the entire complexity of Java), logic for common frameworks and dynamic behaviors, and more.

Emphasis on performance, including parallelism. The Datalog code I have contributed to in the past decade aims for performance at least equal to a manually-optimized imperative implementation. That is, every single rule is written with a clear cost model in mind. The author of a declarative rule knows, at least in high-level terms, how the rule will be evaluated: in how many nested loops and in what order, with what indexing structures, with which incrementality policy for faster convergence when recursion is employed. Optimization directives are applied to achieve maximum performance. Shared-memory parallelism is implicit, but the programmer is well aware of which parts of the evaluation parallelize well and which are inherently sequential. In short, although the code is very high-level, its structure is anything but random, and its performance is not left to chance. Maximum effort is expended to encode highest-performing solutions purely declaratively.

Designing declarative languages. Finally, I have had the opportunity to see declarative languages not just from the perspective of a power user and design advisor, but also from that of a core designer and implementer [20, 21]. This dual view has been essential in forming my understanding of the principles and effects of next-paradigm languages.

Many domains. My experience in declarative programming extends to several domains, although static analysis of programs has been the primary one. Notably, I have served as consultant, advisor, and academic liaison for LogicBlox Inc., which developed the Datalog engine [1] used in Doop until 2017. The company built a Datalog platform comprising a language processor, JIT-like back-end optimizer, and specialized database serving as an execution environment. All applications on the platform were developed declaratively—even UI frameworks were built out of a few externally-implemented primitives and otherwise entirely logic-based rules. The company enjoyed a lucrative acquisition, mostly based on the value of its deployed applications and customers. All applications were in the domain of retail prediction—about as distant from static analysis as one can imagine. For a different example, an algorithm for dynamic race detection developed in the course of my research [25] was implemented in Datalog and all experiments were over the Datalog implementation. An imperative implementation would be substantially more involved and was never successfully completed in the course of the research.

"Declarative languages aren't good for this." A repeat pattern in my involvement with declarative languages has been to encode aspects of functionality that were previously thought to require a conventional (imperative or functional) implementation. The possibility of the existence of the Doop framework itself was under question a little over a decade ago—e.g., Lhoták [16] writes: "[E]ncoding all the details of a complicated program analysis problem [.] purely in terms of subset constraints [i.e., Datalog] may be difficult or impossible." In fact, writing all analysis logic in Datalog has been a guiding principle of Doop—extending even to parts of the functionality that might be harder to write declaratively. Very few things have turned out to be truly harder—it is quite surprising how unfamiliarity with idioms and techniques can lead to a fundamental rejection of an approach as "not suitable". Most recently, we got great generality and scalability benefits from encoding a decompiler (from very-low-level code) declaratively [6], replacing a previous imperative implementation [28]—a task that the (highly expert) authors of the earlier decompiler considered near-impossible.
    [A]ll programming languages seem very similar to each other. They all have variables, and arrays, a few loop constructs, functions, and some arithmetic constructs.
    —Patrick S. Li [17]

3 Principles of Next-Paradigm Languages

Before going on, I will emphasize again that the reader does not need to agree with me on the usefulness of declarative languages. Next-paradigm programming languages could be based on any of several potential technologies—e.g., perhaps on machine learning and statistical techniques, or on SMT solvers and symbolic reasoning. Regardless of the technology, however, I think that some elements are near-inevitable and these are the ones I am trying to postulate as "principles". I will illustrate these principles with examples from declarative programming, because that's the glimpse of the future I've happened to catch. But other glimpses may be equally (or more) valid.

3.1 Programming Model: Cost

Principle 1 (Productivity and Performance Tied Together). If a language can give orders-of-magnitude improvements in productivity, then its implementation has the potential for orders-of-magnitude changes in performance.

Large variations in both productivity and performance are functions of a language being abstract. Neither is possible with the current, ultra-concrete mainstream languages. If one needs to explicitly specify "loops and arrays", neither large productivity gains, nor large performance variations are possible. Instead, the language implementation (or "compiler" for short²) of a high-productivity, next-paradigm language will likely be able to effect orders-of-magnitude performance differences via dynamic or static optimization. For performance variability of such magnitude, the asymptotic complexity of the computation will also likely change.

² A language implementation consists of an interpreter or compiler (ahead-of-time or just-in-time) and a runtime system. The term "compiler", although not always accurate, seems to encompass most of the concepts we are used to in terms of advanced language implementations.

Corollary 1.1. Programs ≠ Algorithms + Data Structures. Instead: Compiler(Program) = Algorithms + Data Structures.

Programs in next-paradigm languages will likely not be the sum of algorithms and data structures, contradicting Wirth's famous equality. Instead, programs will be specifications—carefully written to take into account an execution model that includes a search process (done by the compiler) over the space of implementations. Major algorithmic elements and key data structure decisions will be determined automatically by this search. The compiler will be a mere function from programs to concrete implementations, consisting of algorithms and data structures.

Example: Choice of Algorithm. Language optimization that can affect the asymptotic complexity of the computation is hardly new. Relational query optimization is a prime realistic example.³ In our setting, we can revisit it, with Datalog syntax, before building further on it. Computation in Datalog consists of monotonic logical inferences that apply to produce more facts until fixpoint. A Datalog rule "C(z,x) :- A(x,y), B(y,z)." means that if there exist values of x, y, and z, such that A(x,y) and B(y,z) are both true, then C(z,x) can be inferred to also be true.

³ Relational database languages, such as SQL, are a limited form of declarative programming. Due to the simplified setting and commercial success, many ideas we discuss have originated in that domain.

A common static analysis rule, responsible for interpreting calls as assignments from actual to formal arguments, is shown below:

Assign(formal, actual) :-
  CallGraphEdge(invocation, method),
  FormalParam(index, method, formal),
  ActualParam(index, invocation, actual).

The logic just says that if we have computed a call-graph edge from instruction invocation to a method, then the i-th (index) actual argument of the call is assigned to the i-th formal parameter of the method.

In terms of declarative computation, this rule is evaluated via a relational join of the current contents of (conceptual) tables CallGraphEdge, FormalParam, and ActualParam. But it is up to the compiler to decide whether to start the join from table CallGraphEdge or one of the others. This decision may be informed by dynamic statistics—i.e., by current knowledge of the size of each of the three tables and of the past selectivity of joining each two tables together. It could well be that our input consists of overwhelmingly zero-argument functions. Thus, the join of CallGraphEdge and FormalParam will be small. It is wasteful (up to asymptotically so) to start by iterating over all the contents of CallGraphEdge, only to discover that most of them never successfully match a method with an entry in table FormalParam. Instead, the join may be much quicker if one starts from functions that do take arguments, i.e., from table FormalParam. The LogicBlox Datalog engine [1] performs precisely this kind of dynamic, online query optimization, based on relation sizes and expected selectivities.
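To make the example concrete, here is a minimal, self-contained version of the rule in the syntax of the Soufflé engine discussed below. The relation names come from the rule above; the type declarations and the sample facts are illustrative assumptions of mine, not code from any real analysis framework:

// A minimal, runnable Soufflé sketch. The facts are hypothetical:
// one call site, "call1", invoking a one-argument method "f".
.decl CallGraphEdge(invocation: symbol, method: symbol)
.decl FormalParam(index: number, method: symbol, formal: symbol)
.decl ActualParam(index: number, invocation: symbol, actual: symbol)
.decl Assign(formal: symbol, actual: symbol)
.output Assign

CallGraphEdge("call1", "f").
FormalParam(0, "f", "x").
ActualParam(0, "call1", "arg1").

// The engine, not the programmer, decides which table the join starts from;
// permuting the three body atoms leaves the derived facts unchanged.
Assign(formal, actual) :-
  CallGraphEdge(invocation, method),
  FormalParam(index, method, formal),
  ActualParam(index, invocation, actual).

Evaluating this program derives the single tuple Assign("x", "arg1"), under any ordering of the rule's body.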
Example: Choice of Data Structures. Data structure choice is already standard practice in relational languages. For instance, the Soufflé [11] implementation of Datalog automatically infers when to add indexes to existing tables, so that all rule executions are fast [27]. In our earlier example, Soufflé will add an index (i.e., a B-tree or trie) over table FormalParam, with the second column, method, as key, and similarly for ActualParam, with either column as key. Then, if computation starts from an exhaustive traversal of CallGraphEdge, only the matching subsets of the other two tables will be accessed, using the index to identify them. We illustrate below, by denoting the partial, indexed traversal by a Π prefix on the accessed-by-index tables, and by marking with underscores the variables bound by earlier clauses during the evaluation:

Assign(formal, actual) :-
  CallGraphEdge(invocation, method),
  ΠFormalParam(index, _method_, formal),
  ΠActualParam(_index_, _invocation_, actual).

Note that such choice of data structure is not based on local constraints, but on all uses of the table, in any rule in a (potentially large) program. However, per our discussion of trends, it is typically fine for the compiler to maintain an extra data structure, if this will turn an exhaustive traversal into an indexed traversal, even if the benefit arises in very few rules.
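In imperative terms, the indexed evaluation above corresponds, roughly, to a loop nest like the following sketch. This is only an approximation of the kind of code an engine might generate, not actual Soufflé output:

// for each (invocation, method) in CallGraphEdge:          exhaustive scan
//   for each (index, formal) in FormalParam[method]:       indexed lookup, keyed on method
//     for each actual in ActualParam[index, invocation]:   indexed lookup, keyed on (index, invocation)
//       add (formal, actual) to Assign

The two inner loops touch only the tuples matching the already-bound variables, which is exactly what the Π prefix denotes.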

Generally, I believe it is almost a foregone conclusion that next-paradigm programming languages will perform automatic data structure selection. The language will likely only require the programmer to declare data and will then automatically infer efficient ways to access such data, based on the structure of the computation. Both technology trends and data structure evolution conspire to make this scenario a near certainty:

- Although many potential data structures exist, a logarithmic-complexity, good-locality, ordered structure (such as a B-tree or trie) offers an excellent approximation of most realistic data traversals. Both random access and ordered access are asymptotically fast, and constant factors are excellent. (Accordingly, most scripting languages with wide adoption in recent decades have made a standard "map" their primary data type.) If one adds a union-find tree, abstracted behind an "equivalence class" data type, there may be nearly nothing more that a high-productivity language will need for the vast majority of practical tasks. Of course, there are glaring exceptions to such broad generalizations—e.g., there is no provision for probabilistic data structures, such as bloom filters, cryptographically secure structures, such as Merkle trees, or other classes of structures essential for specific domains. However, the use of such structures is substantially less frequent. Additionally, a theme for next-paradigm languages will be escaping the language easily—as I argue later (Section 3.3).

- Adding an extra data structure vs. not adding a data structure is no longer a meaningful dilemma, under current memory and speed trends. The cost of additional ways to organize data only grows linearly, while the speed benefit can be asymptotic. Therefore, when in doubt, adding an extra B-tree or trie over a set of data is an easy decision.
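The "equivalence class" data type mentioned above is not hypothetical. Soufflé, for instance, exposes it directly through its eqrel relation qualifier, which backs a relation with a union-find structure. A minimal sketch follows; the relation name and facts are mine, and qualifier availability may vary across Soufflé versions:

// An equivalence relation maintained by union-find, not by explicit rules.
.decl SameAs(x: symbol, y: symbol) eqrel
.output SameAs

SameAs("p", "q").
SameAs("q", "r").
// SameAs("p", "r") is now derivable with no symmetry or transitivity rules:
// the union-find representation supplies the closure at near-linear cost,
// instead of a quadratic-or-worse join computing it pair by pair.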
Example: Auto-Incrementalization. Another realistic example of asymptotic complexity improvements offered routinely in declarative languages is automatic incrementalization. Our earlier example rule is, in practice, never evaluated as a full join of tables CallGraphEdge, FormalParam, and ActualParam. The reason is that other rules in a typical program analysis will use the resulting relation, Assign, in order to infer new call-graph edges (e.g., in the case of virtual calls). This makes the computation of Assign mutually recursive with that of CallGraphEdge. Therefore, the rule will be evaluated incrementally, for each stage of recursive results. The rule, from the viewpoint of the Datalog compiler, looks like this:

ΔAssign(formal, actual) :-
  ΔCallGraphEdge(invocation, method),
  FormalParam(index, method, formal),
  ActualParam(index, invocation, actual).

This means that the new-stage (denoted by the Δ prefix) results of Assign are computed by joining only the newly derived results for CallGraphEdge. Tuples in CallGraphEdge that existed in the previous recursive stage do not need to be considered, as they will never produce results not already seen. (The other two tables involved have their contents fixed before this recursive fixpoint.) In practice, such automatic incrementalization has been a major factor in making declarative implementations highly efficient—often much faster than hand-written solutions, since incrementalization in the case of complex recursion is highly non-trivial to perform by hand.

Incrementalization also exhibits complex interplay with other algorithmic optimizations. For instance, the latest delta of a table is likely smaller than other relations, in which case the exhaustive traversal of a join should start from it.
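The mechanics are easiest to see on plain transitive closure, the textbook beneficiary of this optimization (known as semi-naive evaluation). In the sketch below, with generic relation names of my choosing, the engine joins only the per-iteration delta of Path with Edge, instead of recomputing the full join at every step:

.decl Edge(x: number, y: number)
.decl Path(x: number, y: number)
.output Path

Edge(1, 2).
Edge(2, 3).
Edge(3, 4).

Path(x, y) :- Edge(x, y).
// Evaluated, conceptually, as: ΔPath(x, z) :- ΔPath(x, y), Edge(y, z).
Path(x, z) :- Path(x, y), Edge(y, z).

The programmer writes the naive recursive rule; the incremental evaluation strategy is entirely the engine's doing.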

Corollary 1.2 (Cheapest is Hardest). "Easy" in terms of (sequential) computational complexity may mean "hard" to express efficiently in next-paradigm languages.

The shortcomings of next-generation languages may be more evident in the space where human ingenuity has produced incredibly efficient solutions, especially in the low end of the computational complexity spectrum (i.e., linear or near-linear algorithms). In the ultra-efficient algorithm space, there is much less room for automated optimization than in more costly regions of the complexity hierarchy.⁴

⁴ This general conjecture may be easily violated in specialized domains where symbolic search already beats human ingenuity. E.g., program synthesis has already exhibited remarkable success in producing optimal algorithms based on bitwise operators [8].

Example: Depth-First Algorithms and Union-Find Structures. Current declarative languages are markedly bad at expressing (without asymptotic performance loss) efficient algorithms based on depth-first traversal. For instance, declarative computation of strongly-connected components in directed graphs is asymptotically less efficient than Tarjan's algorithm. Also, union-find trees cannot be replicated and need special-purpose coding.

Generally, algorithms that are hard to parallelize (e.g., depth-first numbering is P-hard) and data structures that heavily employ imperative features (both updates and aliasing) are overwhelmingly the ones that are a bad fit for declarative programming. It is reasonable to speculate that this observation will generalize to any next-paradigm programming language. After all, a high-productivity language will need to be abstract, whereas imperative structures and non-parallelizable algorithms rely on concrete step ordering and concrete memory relationships (i.e., aliasing). If this speculation holds, it is a further argument for the inevitability of next-paradigm programming languages. In most foreseeable technological futures, parallelism and non-random-access memory are much more dominant than sequential computation and a shared, random-access memory space. The algorithms that will dominate the future are likely amenable to general automatic optimization in a high-productivity language.

Corollary 1.3 (Even Asymptotics May Not Matter). Asymptotically sub-optimal computations may become dominant, for limited, well-supervised deployment.

Asymptotic performance degradation factors are impossible to ignore, since they typically turn a fast computation into an ultra-slow or infeasible one. However, in next-paradigm languages, a programmer may routinely ignore even asymptotic factors and favor ultra-convenient programming. To avoid performance degradation in a realistic setting, the applicability of inefficient computations may be limited to a local setting, or approximate results may be acceptable [5].

Example: Inefficient Graph Computations. In Datalog code I have often favored quadratic, cubic, or worse solutions, as long as they are applied only locally or other constraints ensure efficient execution. Graph concepts offer generic examples. (In practice the computation is rarely about a literal graph, but binary relations are often conveniently viewed in graph terms.) For instance, I have often used code that computes all pairs of predecessors of a graph node, generically written as:

BothPredecessors(pred1, pred2, next) :-
  Edge(pred1, next),
  Edge(pred2, next),
  pred1 != pred2.

As long as the in-degree of the graph is bounded, the "wasteful" all-pairs concept costs little to compute and can be quite handy to have cached.

Similarly, a wasteful but convenient concept is that of directed graph reachability without going through a given node:

ReachableExcluding(node, node, notInPath) :-
  IsNode(node),
  IsNode(notInPath),
  node != notInPath.
ReachableExcluding(source, target, notInPath) :-
  Edge(source, target),
  IsNode(notInPath),
  source != notInPath,
  target != notInPath.
ReachableExcluding(source, target, notInPath) :-
  ReachableExcluding(source, interm, notInPath),
  Edge(interm, target),
  target != notInPath.

Importantly, the computation is worst-case bounded only by an n^4 polynomial, for n graph nodes—the last rule enumerates near-all possible node 4-tuples: source, target, interm, and notInPath.

Written as above, the computation would be infeasible for any but the smallest graphs. However, if we limit our attention to a local neighborhood (for whatever convenient definition, since this pattern applies in several settings⁵) the computation is perfectly feasible, and, in fact, common in production code:

ReachableExcluding(node, node, notInPath) :-
  InSameNeighborhood(node, notInPath),
  node != notInPath.
ReachableExcluding(source, target, notInPath) :-
  Edge(source, target),
  InSameNeighborhood(source, target),
  InSameNeighborhood(source, notInPath),
  source != notInPath,
  target != notInPath.
ReachableExcluding(source, target, notInPath) :-
  ReachableExcluding(source, interm, notInPath),
  Edge(interm, target),
  InSameNeighborhood(source, target),
  target != notInPath.

⁵ Examples of neighborhoods in different settings include: basic blocks in the same function; data points in the same time epoch; web pages in the same site or domain.

Generally, I believe that programmers will be quite inventive in reshaping a problem in order to employ ultra-high-level but inefficient computations. Coding simplicity and correctness clarity are excellent motivators for questioning whether a full, exact answer is strictly needed.

Corollary 1.4 (Implicit Parallelism). In any high-productivity setting, parallelism will be pervasive but implicit.

A next-paradigm programming language, offering orders-of-magnitude productivity improvements, will very likely heavily leverage parallelism, yet completely hide it! There is no doubt that shared-memory concurrency correctness is among the thorniest programming challenges in existence. High-productivity and explicit synchronization, of any form, are very unlikely to be compatible. High levels of abstraction also seem to mesh well with automatic data partitioning and replication solutions.
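Present-day Datalog engines already give a taste of this. None of the rules shown in this paper contain a thread, a lock, or any synchronization construct, yet an engine such as Soufflé will evaluate the same source on multiple cores when asked at invocation time, e.g.:

  souffle -j8 analysis.dl    # -j selects a thread count; the file name is hypothetical

The program text is identical in the sequential and the parallel runs; parallelism is purely an implementation concern.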
