Wednesday, October 30, 2019
Issues in Secondary Education Essay Example
This stage, which is preceded by the Concrete Operational stage, begins more or less at age 11, which is the beginning of puberty, and continues well into adulthood (Abbeduto, 2006, 131). This stage is characterized by the acquisition of the ability to think abstractly and draw conclusions from the information available. This stage is also important to our topic because students are entering puberty around this time, which leads them to many questions about themselves and their bodies, further leading to questions about just who they think they are. According to Erik Erikson, there are eight stages of human development. Along with the stages themselves, Erikson also states that a psychosocial crisis occurs during each of the stages; for this stage, the psychosocial crisis is titled identity versus role confusion. As stated by Erikson, this stage is when the adolescent student becomes concerned with how they appear to others. Many adolescents ask themselves "Who am I? Where am I going in life?", and confusion occurs because of the cognitive and bodily changes happening to the learner. Peer groups also play a role in this stage, because they too affect who a student thinks they are (Blair, 2006, 53). The confusion of trying to decide which peer group you fit into exacerbates the possible identity crisis. Finally, there is Abraham Maslow, with his hierarchy of needs. While the above-mentioned theorists described various stages of development, Maslow's theory is a little different: it holds that, regardless of a person's age, everyone seeks to have their needs fulfilled (Cooper & McIntyre, 2008, 383). The needs that relate most to this topic are the belonging and esteem needs.
Sunday, October 27, 2019
Linguistic Automatic Generation Natural Language
1. Introduction

1.1. The Problem Statement
This thesis deals with the problem of automatic generation of a UML model from natural language software requirement specifications. It describes the development of Auto Modeler, an automated software engineering tool that takes natural language software system requirement specifications as input, performs an automated OO analysis, and tries to produce a UML model (a partial one in its present state, i.e. static class diagrams only) as output. The basis for Auto Modeler is described in [2][3].

1.2. Motivation
We conducted a short survey of the software industry in Islamabad in order to determine what sorts of automated software engineering tools were required by the software houses. The results of the survey (see Appendix-I for the survey report) indicated that there is demand for a tool such as Auto Modeler. The tools of this kind that have already been developed, i.e. [2][3], are either not available in the market or are very expensive, and thus out of the reach of most software houses. We therefore decided to build our own tool that the software industry can use in order to become more productive and competitive. At present Auto Modeler is not ready for commercial use, but it is hoped that future versions will be able to cater to the needs of the software houses.

1.3. Background

1.3.1. The need for Automated Software Engineering Tools
In this era of information technology, great demands are placed on software systems and on all those involved in the SDLC. The developed software should not only be of high quality but should also be developed in a minimal amount of time. When it comes to software quality, the software must be highly reliable, it should meet the customer's needs, and it should satisfy the customer's expectations. Automated software engineering tools can assist software engineers and software developers in producing high-quality software in a minimal amount of time.

1.3.2. Requirements Engineering
Requirements engineering consists of the following tasks [6]:
* Requirements elicitation
* Requirements analysis
* Requirements specification
* Requirements validation / verification
* Requirements management
Requirements engineering is recognized as a critical task, since many software failures originate from inconsistent, incomplete or simply incorrect system requirements specifications.

1.3.3. Natural Language Requirement Specifications
Formal methods have been successfully used to express requirements specifications, but often the customer cannot understand them and therefore cannot validate them [4]. Natural language is the only common medium understood by both the customer and the analyst [4], so system requirements specifications are often written in natural language.

1.3.4. Object Oriented Analysis
The system analyst must manually process the natural language requirements specification document, perform an OO analysis, and produce the results in the form of a UML model, which has become a standard in the software industry. The manual process is laborious, time consuming and often prone to errors: some specified requirements might be left out, and if there are problems or errors in the original requirements specifications, they may not be discovered in the manual process. OOA applies the OO paradigm to models of proposed systems by defining classes, objects and the relationships between them.
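To ground these OO terms, here is a minimal sketch in Python (an illustrative language choice; the Library/Book classes are invented for this example and are not taken from the thesis) of two classes with attributes, operations, and an association between them:

    # Illustrative only: two classes with attributes, operations, and an
    # association, mirroring the OO concepts identified during OOA.
    class Book:
        """A candidate class, typically found as a noun in the requirements text."""
        def __init__(self, title: str):
            self.title = title      # attribute: holds a property value
            self.reserved = False   # attribute suggested by phrases like "reserved book"

        def reserve(self) -> None:  # operation: what can be done to the object
            self.reserved = True

    class Library:
        """Another candidate class; holds an association to Book."""
        def __init__(self):
            self.books: list[Book] = []  # one-to-many association (aggregation)

        def add_book(self, book: Book) -> None:
            self.books.append(book)

    # Instantiating objects: each object inherits the operations and
    # attributes identified in its class.
    lib = Library()
    lib.add_book(Book("Requirements Engineering"))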
Classes are the most important building block of an OO system, and from these we instantiate objects. Once an individual object is created, it inherits the same operations, relationships, semantics, and attributes identified in the class. Attributes of classes, and hence objects, hold values of properties. Operations, also called methods, describe what can be done to an object/class [1]. A relationship between classes/objects can show various attributes such as aggregation, composition, generalization and dependency. Attributes and operations represent the semantics of the class, while relationships represent the semantics of the model [1].

The KRB seven-step method, introduced by Kapur, Ravindra and Brown, proposes how to find classes and objects manually [1]:
1. Identify candidate classes (nouns in NL).
2. Define classes (look for instantiations of classes).
3. Establish associations (capturing verbs to create an association for each pair of classes in 1 and 2).
4. Expand many-to-many associations.
5. Identify class attributes.
6. Normalize attributes so that they are associated with the class of objects that they truly describe.
7. Identify class operations.
From this process we can see that one goal of OOA is to identify NL concepts that can be transformed into OO concepts, which can then be used to form system models in particular notations. Here we shall concentrate on UML [1].

1.3.5. Natural Language Processing (NLP)
If an automatic analysis of the NL requirements document is carried out, then it is not only possible to quickly find errors in the specifications, but with the right methods we can quickly generate a UML model from the requirements. Natural language is inherently ambiguous, imprecise and incomplete; often a natural language document is redundant, and several classes of terminological problems (e.g., jargon or specialist terms) can arise to make communication difficult [2]. Although natural language processing with holistic objectives has been proven to be a very complex task, it is possible to extract sufficient meaning from NL sentences to produce reliable models. The complexities of language range from simple synonyms and antonyms to such complex issues as idioms, anaphoric relations or metaphors. Efforts in this particular area have had some success in generating static object models from some complex NL requirement sentences.

1.3.5.1. Linguistic analysis
Linguistic analysis studies NL text at different linguistic levels, i.e. words, sentence and meaning [1].
(i) Word-tagging analyses how a word is used in a sentence. In particular, a word's role can change from one sentence to another depending on context (e.g. "light" can be used as noun, verb, adjective and adverb, and "while" can be used as preposition, conjunction, verb and noun). Tagging techniques are used to specify the word-form of each single word in a sentence, and each word is tagged as a Part Of Speech (POS); e.g. an NN1 tag would denote a singular noun, while VBB would signify the base form of a verb [1].
(ii) Syntactic analysis applies phrase-marker, or labeled bracketing, techniques to segment NL into phrases, clauses and sentences, so that the NL is delineated by syntactical/grammatical annotations. This shows how words are grouped and connected to each other in a sentence [1].
(iii) Semantic analysis is the study of meaning. It uses discourse annotation techniques to analyze open-class or content words and closed-class words (i.e. prepositions, conjunctions, pronouns).
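To make the word-tagging step in (i) concrete, the following minimal sketch uses the NLTK toolkit (an assumed tool choice for illustration; the thesis does not prescribe it). NLTK's default tagger produces Penn Treebank tags such as NN and VBZ rather than the CLAWS-style NN1/VBB tags mentioned above:

    # Minimal POS-tagging sketch. Assumes: pip install nltk, plus
    # nltk.download('punkt') and nltk.download('averaged_perceptron_tagger').
    import nltk

    tokens = nltk.word_tokenize("The librarian reserves a book.")
    print(nltk.pos_tag(tokens))
    # Typical output:
    # [('The', 'DT'), ('librarian', 'NN'), ('reserves', 'VBZ'),
    #  ('a', 'DT'), ('book', 'NN'), ('.', '.')]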
The POS tags and syntactic elements mentioned previously can be linked in the NL text to create relationships. Applying these linguistic analysis techniques, NLP tools can carry out morphological processing, syntactic processing and semantic processing. The processing of NL text can be supported by a Semantic Network (SN) and corpora that provide a knowledge base for text analysis. The difficulty of OOA is not just due to the ambiguity and complexity of NL itself, but also to the gap in meaning between NL concepts and OO concepts [1].

1.3.6. From NLP to UML Model Creation
After NLP, the sentences are simplified in order to make the identification of UML model elements from NL elements easy. Simple heuristics are used to identify UML model elements from natural text (see Chapter 7):
* Nouns indicate a class.
* Verbs indicate an operation.
* Possessive relationships and verbs like "to have", "identify" and "denote" indicate attributes.
* Determiners are used to identify the multiplicity of roles in associations.
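A minimal sketch of how such heuristics could be applied mechanically is shown below (illustrative only: a simplification under the same NLTK assumption as before, not Auto Modeler's actual implementation; the helper name is invented):

    # Toy heuristic pass over one sentence: nouns become candidate classes,
    # verbs become candidate operations.
    import nltk

    def candidate_elements(sentence: str):
        tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
        classes = [w for w, tag in tagged if tag.startswith('NN')]     # nouns
        operations = [w for w, tag in tagged if tag.startswith('VB')]  # verbs
        return classes, operations

    print(candidate_elements("A member borrows a book from the library."))
    # Typically: (['member', 'book', 'library'], ['borrows'])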
1.5. Plan of the thesis
In Chapter 2 we present a brief survey of previous and similar work. Chapters 3, 4, 5, 6 and 7 describe the theoretical basis for Auto Modeler. Chapter 8 describes the architecture of Auto Modeler. In Chapter 9 we describe Auto Modeler in action with a case study. In Chapter 10 we present conclusions.

2. Literature Survey
The first relevant published technique attempting to provide a systematic procedure for producing design models from NL requirements was Abbott's. Abbott (1983) proposes a linguistics-based method for analyzing software requirements, expressed in English, to derive basic data types and operations [1]. This approach was further developed by Booch (1986). Booch describes an object-oriented design method where nouns in the problem description suggest objects and classes of objects, and verbs suggest operations [1].

Saeki et al. (1987) describe a process of incrementally constructing software modules from object-oriented specifications obtained from informal natural language requirements. Their system analyses the informal requirements one sentence at a time. Nouns and verbs are automatically extracted from the informal requirements, but the system cannot determine which words are relevant for the construction of the formal specification. Hence an important role is played by the human analyst, who reviews and refines the system's results manually after each sentence is processed [1].

Dunn and Orlowska (1990) describe a natural language interpreter for the construction of NIAM (Nijssen's, or Natural-language, Information Analysis Method) conceptual schemas. The construction of conceptual schemas involves allocating surface objects to entity types (semantic classes) and the identification of elementary fact types. The system accepts declarative sentences only and uses grammar rules and a dictionary for type allocation and the identification of elementary fact types [1].

Meziane (1994) implemented a system for the identification of VDM data types and simple operations from natural language software requirements. The system first generates an Entity-Relationship Model (ERM) from the input text and then generates VDM data types from the ERM [1].

Mich and Garigliano (1994) and Mich (1996) describe an NL-based prototype system, NL-OOPS, that is aimed at the generation of object-oriented analysis models from natural language specifications. This system demonstrated how a large-scale NLP system called LOLITA can be used to support the OO analysis stage [1].

V. Ambriola and V. Gervasi [4] have developed CIRCE, an environment for the analysis of natural language requirements. It is based on the concept of successive transformations that are applied to the requirements in order to obtain concrete (i.e., rendered) views of models extracted from the requirements. CIRCE uses CICO, a domain-based, fuzzy-matching parser, which parses the requirements document and converts it into an abstract parse tree. This parse tree is encoded as tuples and stored in a shared repository by CICO. A group of related tuples constitutes a T-Model. CIRCE uses internal tools to refine the encoded tuples (called extensional knowledge) and knowledge about the basic behavior of software systems (called intentional knowledge, derived from modelers) to further enrich the tuple space. When a specific concrete view of the requirements is desired, a projector is called to build an abstract view of the data from the tuple space. A translator then converts the abstract view to a concrete view. In [5] V. Ambriola and V. Gervasi describe their experience of automatic synthesis of UML diagrams from natural language requirement specifications using their CIRCE environment.

Delisle et al., in their project DIPETT-HAIKU, capture candidate objects, linguistically differentiating between Subjects (S) and Objects (O), and processes, Verbs (V), using the syntactic S-V-O sentence structure. This work also suggests that candidate attributes can be found in the noun modifier of compound nouns, e.g. reserved is the value of an attribute of "reserved book" [1].

Harmain and Gaizauskas developed an NLP-based CASE tool, CM-Builder [2][3], which automatically constructs an initial class model from NL text. It captures candidate classes, rather than candidate objects. Börstler constructs an object model automatically based on pre-specified key words in a use case description. The verbs in the key words are transformed into behaviors and the nouns into objects [1]. Overmyer and Rambow developed an NLP system to construct UML class diagrams from NL descriptions. Both of these efforts require user interaction to identify OO concepts [1]. The prototype tool developed by Perez-Gonzalez and Kalita supports automatic OO modeling from NL problem descriptions into UML notations, and produces both static and dynamic views. The underlying methodology includes theta roles and semi-natural language [1].

3. Software Requirements Engineering
Software requirements engineering is the science and discipline concerned with establishing and documenting software requirements [6]. It consists of:
* Software requirements elicitation: the process through which the customers (buyers and/or users) and the developer (contractor) of a software system discover, review, articulate, and understand the users' needs and the constraints on the software and the development activity.
* Software requirements analysis: the process of analyzing the customers' and users' needs to arrive at a definition of software requirements.
* Software requirements specification: the development of a document that clearly and precisely records each of the requirements of the software system.
* Software requirements verification: the process of ensuring that the software requirements specification is in compliance with the system requirements, conforms to document standards of the requirements phase, and is an adequate basis for the architectural (preliminary) design phase.
* Software requirements management: the planning and controlling of the requirements elicitation, specification, analysis, and verification activities.

In turn, system requirements engineering is the science and discipline concerned with analyzing and documenting system requirements. It involves transforming an operational need into a system description, system performance parameters, and a system configuration. This is accomplished through the use of an iterative process of analysis, design, trade-off studies, and prototyping. Software requirements engineering has a similar definition as the science and discipline concerned with analyzing and documenting software requirements. It involves partitioning system requirements into major subsystems and tasks, then allocating those subsystems or tasks to software. It also transforms allocated system requirements into a description of software requirements and performance parameters through the use of an iterative process of analysis, design, trade-off studies, and prototyping.

A system can be considered a collection of hardware, software, data, people, facilities, and procedures organized to accomplish some common objectives. In software engineering, a system is a set of software programs that provide the cohesiveness and control of data that enables the system to solve the problem [6]. The major difference between system requirements engineering and software requirements engineering is that the origin of system requirements lies in user needs, while the origin of software requirements lies in the system requirements and/or specifications. Therefore, the system requirements engineer works with users and customers, eliciting their needs, schedules, and available resources, and must produce documents understandable by them as well as by management, software requirements engineers, and other system requirements engineers. The software requirements engineer works with the system requirements documents and engineers, translating system documentation into software requirements which must be understandable by management and software designers as well as by software and system requirements engineers. Accurate and timely communication must be ensured all along this chain if the software designers are to begin with a valid set of requirements [6].

4. Automated Software Engineering Tools
Software engineering is concerned with the analysis, design, implementation, testing, and maintenance of large software systems. Automated software engineering focuses on how to automate or partially automate these tasks to achieve significant improvements in quality and productivity; it applies computation to software engineering activities, with the goal of partially or fully automating them. This includes the study of techniques for constructing, understanding, adapting and modeling both software artifacts and processes. Automatic and collaborative systems are both important areas of automated software engineering, as are computational models of human software engineering activities. Knowledge representations and artificial intelligence techniques applicable in this field are of particular interest, as are formal techniques that support or provide theoretical foundations [7]. Automated software engineering approaches have been applied in many areas of software engineering.
These include requirements definition, specification, architecture, design and synthesis, implementation, modeling, testing and quality assurance, verification and validation, maintenance and evolution, configuration management, deployment, reengineering, reuse and visualization. Automated software engineering techniques have also been used in a wide range of domains and application areas, including industrial software, embedded and real-time systems, aerospace, automotive and medical systems, Web-based systems and computer games [7].

Research into automated software engineering includes the following areas:
* Automated reasoning techniques
* Component-based systems
* Computer-supported cooperative work
* Configuration management
* Domain modeling and meta-modeling
* Human-computer interaction
* Knowledge acquisition and management
* Maintenance and evolution
* Model-based software development
* Modeling language semantics
* Ontologies and methodologies
* Open systems development
* Product line architectures
* Program understanding
* Program synthesis
* Program transformation
* Re-engineering
* Requirements engineering
* Specification languages
* Software architecture and design
* Software visualization
* Testing, verification, and validation
* Tutoring, help, and documentation systems

5. Natural Language Processing
Natural language processing (NLP) is a subfield of artificial intelligence and linguistics. It studies the problems of automated generation and understanding of natural human languages. Natural language generation systems convert information from computer databases into normal-sounding human language, and natural language understanding systems convert samples of human language into more formal representations that are easier for computer programs to manipulate.

5.1. Language Processing
Language processing can be divided into two tasks [11]:
* Processing written text, using lexical, syntactic, and semantic knowledge of the language as well as any required real-world information [11].
* Processing spoken language, using all the information needed above, plus additional knowledge about phonology as well as enough additional information to handle the further ambiguities that arise in speech [11].

5.2. Uses for NLP
5.2.1. User interfaces. Better than obscure command languages: it would be nice if you could just tell the computer what you want it to do. Of course, we are talking about a textual interface, not speech [10].
5.2.2. Knowledge acquisition. Programs that could read books and manuals or the newspaper, so you don't have to explicitly encode all of the knowledge they need to solve problems or do whatever they do [10].
5.2.3. Information retrieval. Find articles about a given topic. The program has to be able somehow to determine whether the articles match a given query [10].
5.2.4. Translation. It sure would be nice if machines could automatically translate from one language to another. This was one of the first tasks they tried applying computers to. It is very hard [10].

5.3. Linguistic Levels of Analysis
Language obeys regularities and exhibits useful properties at a number of somewhat separable levels [10]. Think of language as transfer of information. It is much more than that, but that is a good place to start. Suppose that the speaker has some meaning that they wish to convey to some hearer [10]. Speech (or gesture) imposes a linearity on the signal. All you can play with is the properties of a sequence of tokens. Actually, why tokens?
Well, for one thing, that makes it possible to learn [10]. The other thing to play with is the order in which the tokens can occur. So somehow a meaning gets encoded as a sequence of tokens, each of which has some set of distinguishable properties, and is then interpreted by figuring out what meaning corresponds to those tokens in that order [10]. Another way to think about it is that the properties of the tokens and their sequence somehow elicit an understanding of the meaning. Language is a set of resources to enable us to share meanings, but isn't best thought of as a means for *encoding* meanings. This is a sort of philosophical issue perhaps, but if this point of view is true, it makes much of the AI approach to NLP somewhat suspect, as it is really based on the encoded-meanings view of language [10].

The lowest level is the actual properties of the signal stream:
* phonology: speech sounds and how we make them
* morphology: the structure of words
* syntax: how the sequences are structured
* semantics: meanings of the strings
There are important interfaces among all of these levels. For example, sometimes the meaning of sentences can determine how individual words are pronounced [10]. This many levels is obviously needed, but language turns out to be more clever than this. For example, language can be more efficient by not having to say the same thing twice, so we have pronouns and other ways of making use of what has already been said: "A bear went into the woods. It found a tree." Also, since language is most often used among people who are in the same situation, it can make use of features of the situation: this/that, you/me/they, here/there, now/then. The mechanism whereby features of the context are exploited, whether it is the context created by a sequence of sentences or the actual context where the speaking happens, is called pragmatics [10].

Another issue has to do with the fact that the simple model of language as information transfer is clearly not right. For one thing, we know there are at least the following three types of sentences: statements, imperatives and questions. Each of them can be used to do a different kind of thing. The first *might* be called information transfer. But what about imperatives? What about questions? To some degree the analysis of such sentences can involve the notion of speech acts as a basic notion of meaning [10].

There are other, higher levels of structuring that language exhibits. For example, there is conversational structure, where people know when they get to talk in a conversation and what constitutes a valid contribution. There is narrative structure, whereby stories are put together in ways that make sense and are interesting. There is expository structure, which involves the way that informative texts (like encyclopedias) are arranged so as to usefully convey information. These issues blend off from linguistics into literature and library science, among other things [10]. Of course, with hypertext and multi-media and virtual reality, these higher levels of structure are being explored in new ways [10].

5.4. Steps in Natural Language Understanding
The steps in the process of natural language understanding are [11]:
5.4.1. Morphological analysis. Individual words are analyzed into their components, and non-word tokens (such as punctuation) are separated from the words. For example, in the phrase "Bill's house" the proper noun "Bill" is separated from the possessive suffix "'s" [11].
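A small illustration of this separation step, using NLTK's tokenizer (a representative tool, not one prescribed by the text):

    # The Treebank-style tokenizer splits the possessive suffix from the
    # proper noun, as in the "Bill's house" example above.
    import nltk

    print(nltk.word_tokenize("Bill's house"))
    # ['Bill', "'s", 'house']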
5.4.2. Syntactic analysis. Linear sequences of words are transformed into structures that show how the words relate to one another. This parsing step converts the flat list of words of the sentence into a structure that defines the units represented by that list. Constraints imposed include word order ("manager the key" is an illegal constituent in the sentence "I gave the manager the key"), number agreement and case agreement [11].
5.4.3. Semantic analysis. The structures created by the syntactic analyzer are assigned meanings. In most universes, the sentence "Colorless green ideas sleep furiously" [Chomsky, 1957] would be rejected as semantically anomalous. This step must map individual words onto appropriate objects in the knowledge base, and must create the correct structures to correspond to the way the meanings of the individual words combine with each other [11].
5.4.4. Discourse integration. The meaning of an individual sentence may depend on the sentences that precede it and may influence the sentences yet to come. The entities involved in the sentence must either have been introduced explicitly or be related to entities that were. The overall discourse must be coherent [11].
5.4.5. Pragmatic analysis. The structure representing what was said is reinterpreted to determine what was actually meant [11].

5.5. Syntactic Processing
Syntactic parsing determines the structure of the sentence being analyzed. Syntactic analysis involves parsing the sentence to extract whatever information the word order contains. Syntactic parsing is computationally less expensive than semantic processing [10]. A grammar is a declarative representation that defines the syntactic facts of a language. The most common way to represent grammars is as a set of production rules, and the simplest structure for them to build is a parse tree, which records the rules and how they are matched [10]. Sometimes backtracking is required (e.g., "The horse raced past the barn fell"), and sometimes multiple interpretations may exist for the beginning of a sentence (e.g., "Have the students who missed the exam ...") [10]. Example: syntactic processing interprets the difference between "John hit Mary" and "Mary hit John".
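To make the production-rule and parse-tree ideas concrete, here is a toy sketch using NLTK's chart parser (the grammar is invented for this example and covers only the sentence pair above):

    # A toy context-free grammar for the "John hit Mary" example. The parse
    # tree records which production rules matched, so "John hit Mary" and
    # "Mary hit John" receive different trees.
    import nltk

    grammar = nltk.CFG.fromstring("""
        S -> NP VP
        VP -> V NP
        NP -> 'John' | 'Mary'
        V -> 'hit'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse(['John', 'hit', 'Mary']):
        print(tree)
    # (S (NP John) (VP (V hit) (NP Mary)))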
5.6. Semantic Analysis
After (or sometimes in conjunction with) syntactic processing, we must still produce a representation of the meaning of a sentence, based upon the meanings of the words in it. The following steps are usually taken to do this [10]:
5.6.1. Lexical processing. Look up the individual words in a dictionary. It may not be possible to choose a single correct meaning, since there may be more than one. The process of determining the correct meaning of individual words is called word sense disambiguation or lexical disambiguation. For example, "I'll meet you at the diamond" can be understood, since "at" requires either a time or a location. This usually leads to preference semantics when it is not clear which definition we should prefer [10].
5.6.2. Sentence-level processing. There are several approaches to sentence-level processing. These include semantic grammars, case grammars, and conceptual dependencies [10]. Example: semantic processing determines the correct sense of "pen" in a sentence such as "The ink is in the pen" (a writing instrument rather than an enclosure).
5.6.3. Discourse and pragmatic processing. To understand most sentences, it is necessary to know the discourse and pragmatic context in which they were uttered. In general, for a program to participate intelligently in a dialog, it must be able to represent its own beliefs about the world, as well as the beliefs of others (and their beliefs about its beliefs, and so on) [10]. The context of goals and plans can be used to aid understanding. Plan recognition has served as the basis for many understanding programs; PAM is an early example [10].

5.7. Issues in Syntax
For various reasons, a lot of attention in computational linguistics has been paid to syntax. Partly this has to do with the fact that real linguists have spent a lot of work on it, and partly because it needs to be done before just about anything else can be done. I won't talk much about morphology. We will assume that words can be associated with a set of features or properties. For example, the word "dog" is a noun, it is singular, and its meaning involves a kind of animal. The word "dogs" is related, obviously, but has the property of being plural. The word "eat" is a verb, it is in what we might call the base form, and it denotes a particular kind of action. The word "ate" is related; it is in the past tense form. You can imagine, I'm sure, that the techniques of knowledge representation that we have looked at can be applied to the problem of representing facts about the properties and relations among words [11].

The key observation in the theory of syntax is that the words in a sentence can be more or less naturally grouped into what are called phrases, and those phrases can often be treated as a unit. So in a sentence "The dog chased the bear", the sequence "the dog" forms a natural unit, as does "chased the bear", as does "the bear" [11]. Why do I say that "the dog" is a natural unit? Well, one thing is that I can replace it by another sequence that has the same referent, or a related referent. For example, I could replace it by [11]:
* "Snoopy" (a name)
* "it" (a pronoun)
* "my brother's favorite pet" (a more complex description)
What about "chased the bear"? Again, I could replace it by:
* "died" (a single word)
* "was hit by a truck" (a more complex event)
This basic structure, in English, is sometimes called the subject-predicate structure. The subject is a nominal, something that can refer to an object or thing; the predicate is a verb phrase, which describes an action or event. Of course, as in the example, the verb phrase can also contain other constituents, for example another nominal [11].

These phrases also have structure. For example, a noun phrase (a kind of nominal) can have a determiner, zero or more adjectives, and a noun, maybe followed by another phrase, like "the big dog that ate my homework". Verb phrases can have complicated verb groups like "will not be eaten". Syntactic theories try to predict and explain what patterns are used in a language. Sometimes this involves figuring out what patterns just don't work. For example, the following sentences have something wrong with them [11]:
* the dogs runs home
* he died the book
* she saw himself in the mirror
* they told it to she
Figuring out exactly what is wrong with such sentences allows linguists to create theories that help understand the way that sentences are structured.
Friday, October 25, 2019
Language Codes
The construct of elaborated and restricted language codes was introduced by Basil Bernstein in 1971, as a way of accounting for the relatively poor performance of working-class pupils on language-based subjects, when they were achieving as well as their middle-class counterparts on mathematical topics. Interestingly, it was stimulated directly by his experience of teaching in further education.

It is frequently misunderstood, largely because of Bernstein's unfortunate choice of labels. The "restricted" code does not refer to restricted vocabulary, and the "elaborated" code does not entail flowery use of language. There is an issue of "linguistic impoverishment" in the educational problems of some pupils, but Bernstein is not on the whole concerned with such extreme cases.

One of Bernstein's research studies involved showing a group of children a strip cartoon and recording their account of what it depicted. Some said things like:

"They're playing football and he kicks it and it goes through there it breaks the window and they're looking at it and he comes out and shouts at them because they've broken it so they run away and then she looks out and she tells them off"

while others said:

"Three boys are playing football and one boy kicks the ball and it goes through the window the ball breaks the window and the boys are looking at it and a man comes out and shouts at them because they've broken the window so they run away and then that lady looks out of her window and she tells the boys off." (from Bernstein, 1971, p. 203 [re-arranged])

As Bernstein points out, the first account makes good sense if you have the strip cartoon in front of you, but means much less without it. This is an example of restricted code. The second can "stand on its own", and is an example of elaborated code. See Bernstein's own work for detailed accounts of the research behind the construct.
Thursday, October 24, 2019
Tears of a Tiger
Justin Evans

Goodbye to Love: This song goes along with the mood of the story because it is a sad song about losing a loved one. In the story, after Rob and Andy pass away, their families were very sad about losing them. This was a major plot point in the story.

Eye of the Tiger: "Eye of the Tiger" relates to the story because Andy tells his brother multiple times in the story "it's ok to put dragons in the jungle and tears on a tiger"; the title of the story comes from this quote.

Basketball: A major theme of Tears of a Tiger is basketball. The rising action of the story is when Andy, Rob, and B.J. were celebrating after a huge win.

Basketball by Lil Bow Wow: I could imagine that this song is Robert Washington's favorite song. The song talks about playing basketball. Rob always wanted to be on the court or practicing his shot. That is why I think this song was Robert's favorite song.

Heaven Wasn't So Far Away: After Andy and Rob died, their families wished they could see them again. This song talks about going to heaven for a day, which is what they wanted to do.

Wish You Were Here: In the climax of this story, Rob Washington dies. The song "Wish You Were Here" fits the mood of the story because everybody wishes Rob was there. Rob's girlfriend wishes Robert was back several times in the story.
Wednesday, October 23, 2019
Elections in Africa Essay
The aim of this essay is to assess whether elections in Africa are a good measure of democracy. It analyses the advantages and disadvantages of elections in an argument form. The issue of how citizens influence policymakers is central to an understanding of a democratic political system. We normally agree that democracy should allow the people to participate in policy making. Hence elections are one of the ways to establish connections between citizens and policymakers, and through elections citizens encourage policymakers to pay attention to their interests. However, there is disagreement about whether and how elections serve to link citizens to policymakers: a number of schools put more emphasis on accountability, while others stress representativeness. Even though there has been a great deal of theoretical debate about this issue, there have been few attempts to test the effect of competitive elections on popular attitudes towards the legislature.

An election is a formal decision-making process in which the population chooses an individual to hold public office. Elections have been the mechanism by which modern representative democracy has operated since the 17th century. According to the Business Dictionary (BD), an election is the act of casting votes to choose an individual for some type of position. It may involve a public or private vote depending on the position. Most positions in local, state and federal government are filled through the same type of election.

According to Abraham Lincoln, the word democracy means "the government of the people, by the people and for the people". Democracy is a term that comes from Greek and is made up of two other words: demos, which means people, and kratein, which means to govern or to rule. Democracy can then be literally translated by the following terms: government of the people, or government of the majority.

Electoral systems are conventionally divided into two categories: majoritarian and proportional representation (Lijphart 1999). Majoritarian systems usually employ exclusively single-seat districts with plurality rule and tend to give greater representation to the two parties that receive the most votes. Proportional representation (PR) systems employ multi-seat districts, usually with party lists, and typically produce parliamentary representation that largely mirrors the vote shares of multiple parties.

However, elections, be they of the proportional representation (PR) or majoritarian type, are instruments of democracy to the degree that they give the people influence over policymaking (Powell 2000). One fundamental role of elections is the evaluation of the incumbent government: citizens use elections to reward or punish incumbents. On the other hand, increasingly competitive elections raise the risk of increased election violence, and this risk can arise in two ways. Firstly, closer elections can increase tension throughout the electoral process; when the outcome of the election is in doubt, all stages of the process, including the appointment of the members of the electoral management body; the registration of parties, candidates, and voters; campaigning; voting; and vote counting and tabulation, become more heated. For example, Kenya erupted in chaos in 2007 when incumbent president Mwai Kibaki was sworn in hours after being declared the winner in the country's closest presidential election ever; the ensuing violence left 1,500 dead and 300,000 displaced.
Secondly, as long-term incumbents witness the growing strength of opposition candidates, they may feel increasingly imperilled and crack down more fiercely on perceived threats. For example, after losing the first round of Zimbabwe's 2008 presidential elections and subsequently manipulating results to force a run-off, president Robert Mugabe presided over a wave of widespread and brutal violence against supporters of Morgan Tsvangirai to ensure himself victory in the second round.

While the above examples demonstrate the potential of elections to create conflict, elections are often used as a means to end conflict and solidify peace. For this reason, elections usually form a key part of the agreements ending civil wars or conflict. The basic principle behind these post-conflict or transitional elections is that of ballots over bullets: citizens choosing their political leaders by voting rather than fighting. However, the 1992 Angolan elections, which were intended to end the civil war, instead reignited conflict for another ten (10) years. Cases such as these have led many to argue that elections are not appropriate for post-conflict environments. In most cases, however, there is no viable alternative to post-conflict elections as a means of achieving legitimate governance; a non-elected government is far more susceptible to accusations of illegitimacy than one chosen by the people, and legitimate governance must be achieved as soon as possible following a conflict. Moreover, elections have the potential to create a government broadly representative of all disputing political factions. Demonstrably, several countries have recently held remarkably successful post-conflict elections. For example, Liberia's elections in 2005, intended to end over a decade of civil war, were remarkably peaceful and hailed as generally free and fair. Another example is the DRC's 2006 elections, the first multi-party elections in 46 years, which were also relatively successful, especially when considering the tremendous logistical challenges that had to be overcome. In these cases, therefore, elections facilitated an ongoing transition from devastating conflict toward greater stability and development.

Based on a multi-level analysis of Afrobarometer survey data from 17 sub-Saharan African countries, the study examines the influence of these two types of electoral systems, majoritarian and proportional representation, on popular confidence in African parliaments. Controlling for a variety of individual and macro-level characteristics, it was found that citizens' perceptions of Members of Parliament (MPs') representation have a positive and significant effect on their trust in the legislature. In addition, the results suggest that the effect of political representation is mediated by electoral systems.

Powell (2000) distinguishes between two versions of elections as instruments of democracy: accountability and representation. The accountability model tries to use elections to bring the power of the people directly to bear on policymakers. Elections offer citizens a periodic opportunity to change the policymakers; citizens will have control because they will be able, at least occasionally, to reject elected officials who are doing wrong. Competitive elections create pressure on all incumbents, the current policymakers, to worry about the next elections and make policy with the voters' review in mind.
On the other hand, the representation model emphasises that citizens should be treated equally at the decisive stage of public policy making. Elections are instruments of citizens' influence in policy making; they should bring an equitable reflection of all points of view into the legislature. They work as an instrument to choose representatives who can bargain for their voters' interests in post-election policy making.

Elections are not only integral to all these areas of democratic governance, but are also the most visible representation of democracy in action. They are also in most cases the most complicated and expensive single event a country will ever undertake. Good governance, upholding the rule of law, and supporting civil society: this essay examines all these areas in the context of elections. International support to electoral processes is crucial if democracy is to continue developing on the African continent. Indeed, the very purpose of elections is to achieve participatory governance without violence, through political rather than physical competition, and this has succeeded in a number of African countries. South Africa and Botswana, for example, have proven themselves among the continent's most stable democracies, while Ghana, Mali, and Benin have emerged as democratic strongholds in West Africa. Moreover, countries such as Sierra Leone and Liberia, among the poorest in the world and only recently emerged from civil war, have demonstrated the power of elections to foster and solidify peace.

In reality, then, Africa's experience with electoral democracy has been mixed; progress has been made but challenges remain. The various elections of the past several years, from Kenya and Zimbabwe to Ghana and Sierra Leone, have become historical landmarks for different reasons, varying drastically in their conduct and outcome. This mix of electoral experience has generated considerable debate and passion on the subject of transparent, free and fair electoral processes among election stakeholders, especially as democratic progress itself can come with further challenges: as more elections are held and these elections become increasingly competitive, one-party and military regimes face potentially destabilizing challenges that could increase the risk of fraud and violence.

In conclusion, elections, especially free and fair, competitive, multi-party elections, are assumed to be a critical component of democratization in emerging democracies, even though an election can intensify the polarization of a society along ethnic lines. Competitive elections can force political elites to legitimate their rule through the ballot box. However, we are still debating how elections serve to link voters and elected officials. While one group of scholars emphasises the directness and clarity of the connection between voters and policymakers, others stress the representation of all factions in society. Elections help voters to send Members of Parliament (MPs) representing their interests to the parliament; to some extent, elections constitute a principal avenue of citizens' involvement in political life. Understanding their effects on public attitudes towards the legislature, and the role of the individual therein, has important implications for theories of democratic governance in emerging democracies. Therefore, with this information, elections are a good measure of democracy in that they give citizens a participatory right in policy making through their elected representatives.
This makes it a government of the people, by the people and for the people; that is democracy according to Abraham Lincoln's definition.

BIBLIOGRAPHY
1. Powell, G. Bingham (2000), Elections as Instruments of Democracy.
2. Easton, David (1965), A Systems Analysis of Political Life. New York: John Wiley.
3. Norris, Pippa, ed. (1999), Critical Citizens: Global Support for Democratic Governance. New York: Oxford University Press.
4. LeBas, Adrienne (2006), Comparative Politics 38: 419-438.
5. Margolis, M. (1979), Viable Democracy.
6. Tordoff, W. (1993), Government and Politics in Africa. London: Macmillan.
7. Rose, Richard, William Mishler and Christian Haerpfer (1998), Democracy and Its Alternatives.
8. Sisk, Timothy D. and Andrew Reynolds, eds (1998), Elections and Conflict Management in Africa. Washington: United States Institute of Peace Press.
9. Powell, G. Bingham (1982), Contemporary Democracies: Participation, Stability and Violence. Cambridge, MA: Harvard University Press.
10. Cyllah, Almami L., Democracy and Elections in Africa.