Monday, September 30, 2019

Virtual Merchants

A virtual merchant is any website that offers goods or services for sale in return for remuneration (Tatum, 2010). Virtual merchants are essentially the same as a retail outlet, except that they operate only online; Amazon is the classic example. They give the consumer easy, instant access to view and purchase merchandise at the click of a button, anytime and anywhere. Online stores are now commonly called e-tailers, and they are highly popular with the general consumer. In 2008, for example, Amazon had over "76 million active customer accounts and order fulfilment to more than 200 countries" (Chaffey, 2008). Amazon's success and dominance in the marketplace are well known. Customers tend to stay loyal to the e-tailer because it is extremely reliable on delivery, has an easy, user-friendly online interface, and is constantly learning about and building trading relationships with its customers, for example through personalised "you might like" e-mails. Maintaining this loyalty is a problem most virtual merchants face, and in Amazon's case the customer-oriented strategy it employs seems very effective at sustaining brand loyalty: "Relentlessly focus on customer experience by offering our customers low prices, convenience, and a wide selection of merchandise" (Chaffey, 2008). Many trading merchants add a virtual channel to their existing physical business to stay competitive and diverse within the marketplace. A retail outlet coupled with a virtual storefront allows the business to appeal to a larger target market, reaching both the convenience shopper (online) and the physical shopper. This type of merchant is referred to as a bricks-and-clicks merchant; Wal-Mart is an example. The value proposition defines how a company's product or service fulfils the needs of customers (Kambil, Ginsberg and Bloch, 1998).
In Amazon's case the value proposition is quite simple: it aims to offer the world's biggest selection of certain goods while being extremely customer-focused. Amazon offers a personalised, customised service at a very competitive cost, all at the click of a mouse. According to Kambil (1997) and Bakos (1998), personalisation, customisation of product offerings and a reduction in product search costs are extremely important factors in developing a company's value proposition. A company's revenue model defines how it intends to generate profit and a return on investment, and in the virtual-merchant marketplace there are several ways profit can be generated. First, there is the direct sales margin: Amazon does not have to rent retail outlets on busy high streets, only warehousing for merchandise, and this, together with its online trading medium, keeps overheads such as direct customer contact and sales-support costs to a minimum, allowing Amazon to offer an unrivalled selection and value for money and making it extremely competitive within the marketplace. Second, Amazon sells advertising space on its web pages to other businesses, for example Hewlett-Packard, Thomson Holidays and Travelodge. Virtual merchants constantly look to be dynamic and diverse in the service they provide and in the ways they target new customers, and the e-tailer market keeps growing as new users and accounts are set up every day. Amazon began in 1995 and gained its competitive advantage quickly, generating over $5 billion in sales in under a decade; by comparison, it took Wal-Mart (a bricks-and-clicks merchant) twenty years to reach that sales figure. This is an indicator of how large and fast-expanding the e-tailer market is.

Sunday, September 29, 2019

Global Crimes Analysis Essay

Global crime analysis allows individuals to understand crime around the world, from the least to the most dangerous offences committed. The United States crime rate is well known through news media, newspapers and online news sources, and the size of the United States' prison population makes it evident that an overwhelming amount of crime is committed; the same is true of the global crime rate. Unfortunately, crime occurs everywhere in the world, wherever people are, and this affects justice systems internationally. This paper identifies various major global crimes and criminal issues that have a global impact on national and international justice systems and processes, and compares and contrasts how various international criminal justice systems address them.

Identification of Various Major Global Crimes

Major global crimes are spreading across countries, a phenomenon of the increasing globalization of criminal acts. Consider the following crimes:

• Ecstasy, manufactured mainly in the Netherlands, trafficked to the United States and other countries by sophisticated Israeli crime groups.
• Computer viruses sent from the Philippines that crashed computers at numerous United States government entities for a week.
• Russian organized crime groups found laundering money through a prominent United States bank.
• Crime groups in Colombia using computerized bank ledgers at roadblocks to choose wealthy drivers for abduction.

Examples such as these illustrate the latest versions of criminal activity. The degree of unlawful conduct has increased tremendously in the wake of globalization, and those involved show no regard or loyalty to nation, border, or authority.
International crimes such as terrorist acts, trafficking in people and smuggling contraband involve extreme barbarity and bodily injury (Dobriansky, 2001).

Criminal Issues with a Global Impact on National and International Justice Systems and Processes

Criminal issues that have a global impact on national and international justice systems and processes are major problems for the United States. International crime poses serious dangers on a few interconnected fronts. First, it affects surrounding communities: a significant number of people enter the U.S. illegally every year, and crime groups secretly bringing in drugs, weapons, stolen vehicles, child pornography and other contraband are seized at U.S. borders on a broad scale. Second, as American businesses expand around the world, new opportunities open for internationally based offenders; whenever American enterprises overseas are victimized, the repercussions may include lost profit, lost production and lost work for American citizens at home. Third, international offenders take part in acts that present serious risks to national security and to the stability and prosperity of the world, such as the acquisition of weapons of mass destruction, trade in prohibited or harmful materials, and the illegal buying and selling of women and children. Corrupt law enforcement and the massive flow of illegally produced proceeds are dangerous risks to the stability of democratic institutions and free-market economies around the world.

Comparing and Contrasting International Criminal Justice Systems

Various international criminal justice systems, and the ways they address these major global crimes and criminal issues, are not much alike. The first step in comparing them is identifying their differing definitions of crime.
Recorded crime levels, along with some perspectives on trends, allow comparison of a few forms of crime such as murder and burglary. Comparing crime statistics from different jurisdictions is a hazardous undertaking. To begin with, the classes of criminal acts recorded depend on what conduct is unlawful in a given country; if the definition of an offence differs from country to country, which is usually the case, a comparison will not involve equal kinds of criminal acts. In law enforcement, discretion is used in recording some offences, or their relevance to the authorities varies. For instance, differences in the interpretation of aggravated versus common assault across legal jurisdictions will be reflected in the number of incidents reported (Shaw, Dijk & Rhomberg, 2003). International comparison of crime plays a pertinent role in understanding how criminal justice systems function and can aid in their improvement. Every jurisdiction has one criminal justice system, and comparing evidence about its performance with methods observed in other countries helps distinguish it. Likewise, while policy initiatives proposed for the justice system are at times 'home-grown', it is also normal for policies from overseas to influence them (Ministry of Justice, 2012).

Conclusion

In conclusion, global crime analysis describes the danger of advanced crime nationally. Major global crimes are spreading across countries, a phenomenon of the increasing globalization of criminal acts, and criminal issues that have a global impact on national and international justice systems and processes are major problems for the United States.
Global crime dynamics are at a stage of advanced criminal activity that countries face no matter where people go. Its national and international impact poses major threats to the United States and challenges the justice system under which Americans live. International criminal justice systems differ considerably, yet global crime often transcends borders and finds its way into the country. As the criminal justice system comes close to resolving one crime situation, another arises, which makes law enforcement's job of controlling what is left behind extremely difficult.

References

Dobriansky, P. (2001, August). The explosive growth of globalized crime. U.S. Under Secretary of State for Global Affairs, 6(2), 1-41. Retrieved from http://guangzhou.usembassy-china.org.cn/uploads/images/sqVFYsuZI0LECJTHra1S_A/ijge0801.pdf

Ministry of Justice. (2012, February). Comparing international criminal justice systems. National Audit Office Briefing, 1-51. Retrieved from http://www.rethinking.org.nz/assets/Newsletter_PDF/Issue_101/NAO_Briefing_Comparing_International_Criminal_Justice.pdf

Stephens, M. (1996, January 6). Global organized crime as a threat to national security. Retrieved from http://www.fas.org/irp/eprint/snyder/globalcrime.htm

Shaw, M., Dijk, J., & Rhomberg, W. (2003, December). Determining trends in global crime and justice: An overview of results from the United Nations surveys of crime trends and operations of criminal justice systems. Crime and Society, 3(1-2), 1-62. Retrieved from http://www.unodc.org/pdf/crime/forum/forum3_Art2.pdf

Saturday, September 28, 2019

Human resources Management Essay Example | Topics and Well Written Essays - 750 words

Human resources Management - Essay Example Recruitment of the manpower required by these stores will have to be undertaken as soon as possible, and the best way to begin is with a job analysis. A job analysis is the process undertaken to pinpoint and establish in detail particular job duties and requirements and their relative importance; the analysis is conducted on the job, not the person (www.hr-guide.com). It is an initial step towards subsequent human resource management actions such as defining a job domain, writing a job description, selection and promotion, training needs assessment, compensation, and organizational analysis/planning (en.wikipedia.org). While the whole process may take some time to complete and will entail some costs, the benefits of a job analysis far outweigh the time and costs involved, as the results will help determine the success of the proposed stores. The proposed job analysis may take the form of structured or unstructured interviews with incumbent employees, direct observation of employees at work, or the administration of questionnaires to existing employees. Compared with the interview and questionnaire methods, a job analysis undertaken through direct observation makes it possible to gather first-hand knowledge and information about the job being analyzed, as it allows the analyst to see, or in some cases experience, the work environment, the tools and equipment used, the relationships among workers, and the complexity of the job. However, the observations may not be conclusive, as the presence of observers may alter the normal work behavior of the employees being directly observed (www.jobanalysis.net).
The interview method of job analysis, on the other hand, requires that the interviewer possess effective listening skills, as concentration can easily be disturbed by interruptions, the interviewer's own thought processes, and the difficulty of remaining neutral

Friday, September 27, 2019

COMMUNICATION PLAN for Nestle Company Essay Example | Topics and Well Written Essays - 3750 words

COMMUNICATION PLAN for Nestle Company - Essay Example For perishable products such as milk and vegetables, Nestlé has a direct procurement process with specific requirements so that the excess is not wasted. Nestlé invests sufficiently in sustainable agriculture in collaboration with its direct suppliers so that high-quality food products are delivered. Multinationals such as Nestlé focus on long-term partnerships with suppliers so that resources are available at a reasonable cost whenever required. These long-term contracts minimize various risks for the company as well as for the suppliers; for instance, such a supply chain system acts as a hedge against fluctuations in the agricultural market (Handfield & Nichols, 1999; Nestlé, 2009; 2014b). The Nestlé company has gone through some publicity issues in the past. There were many cases in which children died after being fed the company's products, which led to massive protests against Nestlé. Nestlé was accused of aggressively marketing its breast-milk substitutes and of dressing its saleswomen as nurses. For the infant formula, the powder has to be mixed with water, which in most poor countries is often contaminated and unhygienic, leading to the deaths of children. Another issue was that even when parents knew the hygiene standards they had to meet, they lacked the means to sterilise the equipment they used and therefore had no choice but to use the contaminated water. Women in poor countries sometimes could not afford the formula and would end up using less than the required amount, mixing it with more water so that a can would last longer. This meant the infants got fewer nutrients than they required.
Basically, children who are fed on breast milk are more protected than children who are fed on formula and thus have better health compared to the

Thursday, September 26, 2019

Homework Research Paper Example | Topics and Well Written Essays - 750 words - 3

Homework - Research Paper Example The company manager must ensure that the best-qualified marketers and salespeople are in the company so that the company's offerings meet the requirements of the customers. If particular offerings are not available in the company, staff should refer customers to the right partner companies operating in the same sphere. When selling services outside the company's environment, Better Sms must have qualified salespeople and marketers in the field with excellent working experience in the same technology field. The team must work closely with the technical department so that it can advertise the messaging services clearly and avoid technical mistakes when marketing them. On the same note, the company should have a dedicated customer care department ready to educate customers about the services. Better Sms Ltd has twenty-four-seven online customer care that communicates with customers about any problem using an interactive video response method. There are numerous sales methods a salesperson can use, but Better Sms will use the methods that best fit the company's services by prospecting the right targets for text messaging and bulk-message services. Understanding and acquiring the right customers for the business is a hard task with many setbacks, because it requires patience and tolerance. There is a lot of frustration in this area of sales and marketing, largely because it is a new service being introduced into the information technology world, and few companies like Better Sms educate their salespeople on how to prospect the services effectively. Better Sms will look to prospect its services through networking while interacting with the

Wednesday, September 25, 2019

The 1968 Theft Act Assignment Example | Topics and Well Written Essays - 1000 words

The 1968 Theft Act - Assignment Example This research will begin with the statement that the 1968 Theft Act was supposed to introduce simplified rules and policies meant to eliminate the many dilemmas and confusions that criminal lawyers had faced earlier. Unfortunately, soon after its introduction into legislation, more complexities started to arise. Most of the concepts designed to bring clarity to theft cases turned out to bring more confusion in the various proceedings that followed. The most controversial concept was appropriation, which had been introduced into the law to simplify matters by replacing 'taking and carrying away'. The term was only partially defined, in section 3(1) of the Theft Act, and the legislature's lack of further explanation is what caused more problems. Two issues arose: first, what the relationship between consent and appropriation was; and second, whether it was possible to appropriate property acquired in a transaction unimpeachable at civil law. This essay analyzes these questions using the issues raised in two important cases, R v Hinks and DPP v Gomez, and also draws on Shute's views from his article 'Appropriation and the law of theft'. The purpose of this research is to investigate the following: consent and appropriation; unimpeachable transfers and appropriation; and an evaluation of the arguments in the Hinks case. There are other valid grounds on which a transaction is censured at common law: the transaction may be a product of duress, or it may have involved deception, undue influence, fraud, or misrepresentation (Horder & Shute, 1993, p. 549). Sometimes a transaction can be vitiated because there was sufficient reason to believe it was unconscionable.
In the event that any of the above occurs, the transaction remains valid until the transferor successfully repudiates it. Unimpeachable Transfers and Appropriation This issue arises where a transfer of property is unimpeachable both at equity and at common law. The matter arose in Mazo (1997), where a house cleaner took advantage of her employer's mental incapacity to dishonestly receive and cash cheques made payable to her by her employer. The house cleaner was sentenced to jail after being found guilty of five counts of theft and one count of attempted theft. On her appeal, the court held that the case was consistent with Lawrence v Metropolitan Police Commissioner (1972), where the House of Lords decided that where a valid gift has been made there can be no theft. However, it was not clear from Viscount Dilhorne's speech in Lawrence whether the receiver of a valid gift could nevertheless be charged with theft, a point consistent with the ruling in the Gomez case. This question was addressed in the case of Hinks, decided after Gomez and Lawrence. In Hinks, the defendant was convicted of five counts of theft despite the argument that the sums received were gifts and loans and therefore could not be appropriated

Tuesday, September 24, 2019

Mona Lisa Essay Example | Topics and Well Written Essays - 750 words

Mona Lisa - Essay Example There is a general consensus among historians that the Mona Lisa was painted between 1503 and 1519. The painting was commissioned by Francesco del Giocondo, the subject's husband and a rich silk merchant. Lisa Gherardini, Giocondo's wife, came from a prominent family. The Mona Lisa is thought to have been painted to celebrate the completion of the couple's house in 1503 and to mark the birth of Andrea, the couple's second son, in 1502. The identity of the sitter had long been a subject of speculation, but in 2005 her real identity was confirmed (Earls 113). The Mona Lisa is a half-length portrait of a beautiful lady. The lady's hair is covered by a delicate dark veil; during the Renaissance a dark veil was considered a mourning veil, and it may represent the subject's mourning of her daughter, who had died. Her clothing is simple: the scarf wrapped around her shoulders, the pleated gown and the yellow sleeve show no signs of nobility. The Mona Lisa was painted on a realistic scale. The portrait is half length, presenting the woman from the head to the waist. She sits in an armchair with her left arm resting on the chair's arm. The chair is situated in front of a loggia, characterized by two fragmentary pillars that frame the figure and form a window facing the background. The aesthetic nature of this artwork highlights the influence of Lombard and Florentine art of the late 15th and early 16th centuries. Features such as the architectural setting, the hands folded in the foreground, and the view of the sitter against the landscape were common in Flemish portraits of the late 15th century. However, Leonardo managed to introduce several unique and special features in the Mona Lisa: the sheer equilibrium of the painting, its monumentality, and its atmospheric illusionism (Kemp 79).
The Mona Lisa is a unique oil painting whose support is a cottonwood panel, unlike most of the paintings done by other artists during Leonardo's period, which were commonly commissioned as oil on canvas. The use of a cottonwood panel as the surface of the Mona Lisa is one of the factors that have been credited with its fame, and the panel medium has contributed to its durability. The Mona Lisa has survived for more than five centuries without major alteration or repair, a factor that sets it apart from other artworks. Although most Renaissance artworks treated biblical themes, the Mona Lisa portrays no religious theme; it was created to mark Giocondo's achievements (Earls 114). The painting shows Leonardo's mastery of identifiable techniques: the use of shadowing at the corners of the eyes and lips gives the portrait its delightful and lifelike appearance. Leonardo also developed a background with attractive scenery and an aerial perspective. The technique Leonardo used when painting did not leave

Monday, September 23, 2019

My Synthetic Journey Essay Example | Topics and Well Written Essays - 500 words

My Synthetic Journey - Essay Example I say again, "The streets are always wet; my ashes can hardly fly and make a nuisance of my dark overcoat." But it is a matter of no importance, I decided then and there. The wet floor becomes puddles in places, and I try to skip past them and nibble at the only question that nags my mind: "Am I really regular?" I try to dally with the answer for bedtime soporific musings. Then I think, if I must go home now, there will be so much to do with the rest of the day. For instance, I will have to avoid being alone amidst the whole of the neighborhood, praying before dinner, holding hands across fences or already making love in their kitchens. In the street, I only need to fear the rain and the sky that is chequered with the fate of the stars. It is never regular and yet always the forgotten limit. The street is now a little darker; every window looks warm, lost in a velvety warmth that has withstood the daylight's assault. There! That's my home, my house, my shelter. I will have the darkness to stir from the porch to the bed till I leave a wake of flooded ennui. I am lost within my own rhythm of chores. A sensitized journey along the streets to the unique shelter I call my home is complete, and homeostasis is reached for the day, until the day begins again and I start from the same point. I was supposed to know you by name, but I shall call you 'My Synthetic Journey'.

Sunday, September 22, 2019

The Stroke Risk Calculator Coursework Example | Topics and Well Written Essays - 1250 words

The Stroke Risk Calculator - Coursework Example The user is placed in a particular age group and then their probability of suffering a stroke is determined. The results obtained are based on the age group the person falls under, and the rating is given as lower than average, average, or higher than average for a person in that age group. To analyze the stroke probability in a person, the tool asks about several risk factors. First, the gender of the user is required, and the user is then placed in an age group. Questions about the person's health status follow; for instance, the tool asks about the medical history of conditions like diabetes, irregular pulse, fibromuscular dysplasia and transient ischemic attack (UCLA Stroke Center, 2015). Social factors like smoking are then analyzed. The elderly population is the age group most likely to suffer a stroke; in an argument by Birkett (2012), this population carries too many risk factors, as influenced by aging, and thus a great stroke risk. However, the risk factors in older adults are significantly influenced by lifestyle at a younger age. For this reason, the younger age groups are as significant a target population as older adults. In addition, the risk calculator can be of greater importance to younger adults than to older adults, because many risk-causing factors in older adults are irreversible, while in younger adults changes in lifestyle and seeking good health care may reduce the probability of suffering a stroke at an older age (Birkett, 2012). For the older population the tool may also be effective in analyzing their stroke-risk status; similarly to the younger age groups, older adults may feel the need to change their lifestyle to minimize their stroke risk. For instance, an older adult may be advised to stop smoking or drinking due to a high probability of having a stroke.
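The rating logic described above can be sketched in a few lines of Python. This is purely an illustrative sketch: the factor names, scoring scheme, and cut-offs below are hypothetical assumptions, not the UCLA Stroke Center's actual model.

```python
# Illustrative only: the scoring and thresholds are invented,
# not taken from the real UCLA stroke risk calculator.
def stroke_risk_rating(risk_factors):
    """Map a dict of yes/no risk factors to a rating relative to
    the user's age group."""
    score = sum(1 for present in risk_factors.values() if present)
    if score <= 1:
        return "lower than average"
    if score <= 3:
        return "average"
    return "higher than average"

user = {"diabetes": False, "irregular_pulse": True,
        "fibromuscular_dysplasia": False,
        "transient_ischemic_attack": False, "smoker": True}
print(stroke_risk_rating(user))  # two factors present -> "average"
```

A real calculator would weight factors differently per age group; the point here is only the shape of the input (yes/no factors) and the three-band output described in the text.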

Saturday, September 21, 2019

Fast Food Essay Example for Free

Fast Food Essay Visit at least two different fast-food restaurants that make hamburgers and observe the basic differences in the following processes: How are in-store orders taken? How are the hamburgers prepared? How are special orders handled? How are the burgers cooked? How are the burgers assembled? Is a microwave used? How are other items such as fries and drinks handled? The two fast-food restaurants that I visited were McDonald's and In-N-Out. The main differences I found between the two restaurants were freshness and customer service. When it comes to in-store orders at McDonald's, you wait in line to place your order. The process seems not very customer-service driven: you give them your order, which feels informal and not that personable, then you pay and stand off to the side until they call your order. They announce that your order is ready by saying it out loud, for example "number 2 with a Diet Coke", without your name attached. At McDonald's the hamburgers are prepared from frozen processed meat, which is then cooked on the grill. As far as special orders go, you must tell the cashier exactly what you don't want, because the burgers come as they are. The cashier then inputs the data into the computer, which in turn allows the kitchen to make the arrangements. The employees in the kitchen then prepare the burgers, using something of an assembly line to make sure the right ingredients get put on the right burger. At McDonald's you do not have full visibility of the kitchen staff preparing the food. While at McDonald's I did not see a microwave used. When it comes to fries, McDonald's has its fries frozen in a large plastic bag and cooks them in oil. With drinks, you serve yourself. When I went into In-N-Out it was a little different an experience, and seemed more personable.
The orders here are taken similarly to McDonald's, but they take your name and give you a number; you wait off to the side and they then call your name and number aloud. The hamburgers at In-N-Out are grilled using higher-quality meat, without preservatives, and they use local beef distributors. When it comes to special orders, it seems every order at In-N-Out is a special order: they ask you exactly what you want, whereas McDonald's doesn't ask, they just assume, and you must be the one to request changes. The cashier then inputs the data into the computer, which in turn allows the kitchen to make the arrangements. The burgers are cooked on a grill just like at McDonald's. When it comes to assembly, In-N-Out also uses something of an assembly line to add ingredients. In-N-Out uses fresh ingredients; the kitchen is open, and you can see the employees making the food right in front of you. As far as I saw, a microwave was not used. As for fries, they use fresh potatoes without preservatives instead of bagged fries. As far as drinks go, at In-N-Out you also serve yourself. This assignment was quite interesting; I would definitely choose In-N-Out over McDonald's. They have fresh ingredients, you can have it your way, and the experience is more personable.

Friday, September 20, 2019

Software testing

1.0 Software Testing Activities

We start testing activities from the first phase of the software development life cycle. We may generate test cases from the SRS and SDD documents and use them during system and acceptance testing. Hence, development and testing activities are carried out simultaneously in order to produce good-quality, maintainable software in time and within budget. We may carry out testing at many levels and may also take the help of a software testing tool. Whenever we experience a failure, we debug the source code to find the reasons for it. Finding the reasons for a failure is a very significant testing activity; it consumes a huge amount of resources and may also delay the release of the software.

1.1 Levels of Testing

Software testing is generally carried out at different levels. There are four such levels, namely unit testing, integration testing, system testing, and acceptance testing, as shown in figure 8.1. The first three levels of testing are done by the testers, and the last level (acceptance testing) is done by the customer(s)/user(s). Each level has specific testing objectives. For example, at the unit testing level, independent units are tested using functional and/or structural testing techniques. At the integration testing level, two or more units are combined and testing is carried out to examine the integration-related issues among the various units. At the system testing level, the system is tested as a whole, and primarily functional testing techniques are used; non-functional requirements like performance, reliability, usability and testability are also tested at this level, as is load/stress testing. The last level, acceptance testing, is done by the customer(s)/user(s) for the purpose of accepting the final product.

1.1.1 Unit Testing

We develop software in parts/units, and every unit is expected to have defined functionality.
We may call it a component, module, procedure, function, etc., which will have a purpose and may be developed independently and simultaneously. A. Bertolino and E. Marchetti have defined a unit as [BERT07]: a unit is the smallest testable piece of software, which may consist of hundreds or even just a few lines of source code, and generally represents the result of the work of one or a few developers. The purpose of unit test cases is to ensure that the unit satisfies its functional specification and/or that its implemented structure matches the intended design structure [BEIZ90, PFLE01].

There are also problems with unit testing. How can we run a unit independently? A unit may not be completely independent; it may call a few units and also be called by one or more units. We may have to write additional source code to execute a unit. A unit X may call a unit Y, and a unit Y may call a unit A and a unit B, as shown in figure 8.2(a). To execute a unit Y independently, we may have to write additional source code which handles the activities of the calling unit X and the activities of the called units A and B. The additional source code that handles the activities of the calling unit X is called a driver, and the additional source code that handles the activities of the called units A and B is called a stub. The complete additional source code written for the design of stubs and drivers is called scaffolding; the scaffolding should be removed after the completion of unit testing. The small size of a unit may help us to locate an error easily, and many white box testing techniques are effectively applicable at the unit level. We should keep stubs and drivers simple and small in size to reduce the cost of testing. If we design units in such a way that they can be tested without writing stubs and drivers, we are efficient and lucky; in practice, however, this may be difficult, and the requirement for stubs and drivers may not be eliminated.
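The stub-and-driver arrangement described above can be sketched in Python; all unit names here (X, Y, A, B) are hypothetical, mirroring figure 8.2(a). The test function acts as the driver standing in for the caller X, and two small lambdas act as stubs standing in for the callees A and B:

```python
# A minimal sketch of testing a unit Y in isolation.  The unit names
# (X, Y, A, B) are hypothetical, mirroring figure 8.2(a).

def unit_a(n):
    # Real callee, perhaps not yet written or tested.
    raise NotImplementedError

def unit_b(n):
    raise NotImplementedError

def unit_y(n, a=unit_a, b=unit_b):
    """Unit under test: combines the results of its callees."""
    return a(n) + b(n)

def test_unit_y():
    # Stubs replace the called units A and B with fixed behaviour.
    stub_a = lambda n: n * 2
    stub_b = lambda n: n + 1
    # The driver supplies controlled input in place of the caller X.
    assert unit_y(3, a=stub_a, b=stub_b) == 10   # 6 + 4

test_unit_y()
print("unit Y passed in isolation")
```

Keeping the stubs this small is exactly the textbook's advice: they should be simple and cheap, just enough to let the unit under test run.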
We may only minimize the requirement for scaffolding, depending upon the functionality and its division into various units.

1.1.2 Integration Testing

A software product may have many units. We test units independently during unit testing after writing the required stubs and drivers. When we combine two units, we may like to test the interfaces amongst them. We combine two or more units because they share some relationship; this relationship is represented by an interface and is known as coupling. Coupling is the measure of the degree of interdependence between units. Two units with high coupling are strongly connected and thus dependent on each other; two units with low coupling are weakly connected and have low dependency on each other. Hence, highly coupled units are heavily dependent on other units, and loosely coupled units are comparatively less dependent on other units, as shown in figure 8.3. Coupling increases as the number of calls amongst units increases or the amount of shared data increases. A design with high coupling may have more errors. Loose coupling minimizes interdependence, and some of the steps to minimize coupling are:
(i) Pass only data, not control information.
(ii) Avoid passing undesired data.
(iii) Minimize parent/child relationships between calling and called units.
(iv) Minimize the number of parameters to be passed between two units.
(v) Avoid passing complete data structures.
(vi) Do not declare global variables.
(vii) Minimize the scope of variables.
The different types of coupling are data (best), stamp, control, external, common and content (worst). When we design test cases for interfaces, we should be very clear about the coupling amongst units; if it is high, a large number of test cases should be designed to test that particular interface. A good design should have low coupling, and thus interfaces become very important; when interfaces are important, their testing is also important.
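The first guideline above, pass only data and not control information, can be illustrated with a small hypothetical example contrasting control coupling with data coupling:

```python
# Illustration of "pass only data, not control information".
# All functions here are hypothetical.

# Control coupling (worse): a flag from the caller steers the
# callee's internal logic.
def format_value(value, as_percent):
    if as_percent:
        return f"{value * 100:.1f}%"
    return f"{value:.3f}"

# Data coupling (better): two small units, each receiving only the
# plain data it needs, with no control flag.
def format_ratio(value):
    return f"{value:.3f}"

def format_percent(value):
    return f"{value * 100:.1f}%"

print(format_percent(0.125))  # 12.5%
print(format_ratio(0.125))    # 0.125
```

The data-coupled version also needs fewer interface test cases: each unit has one behaviour, rather than one behaviour per flag value.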
In integration testing, we focus on the issues related to interfaces amongst units. There are several integration strategies, which really have little basis in a rational methodology; they are given in figure 8.4. Top-down integration starts from the main unit and keeps adding all called units of the next level. This portion should be tested thoroughly, focusing on interface issues. After completion of integration testing at this level, the next level of units is added, and so on, until we reach the lowest-level units (the leaf units). There is no requirement for drivers; only stubs need to be designed. In bottom-up integration, we start from the bottom (i.e. from the leaf units) and keep adding upper-level units till we reach the top (i.e. the root node); here there is no need for stubs. A sandwich strategy runs from top and bottom concurrently, depending upon the availability of units, and may meet somewhere in the middle.

[Figure 8.4: (b) bottom-up integration (focus starts from edges i, j and so on); (c) sandwich integration (focus starts from a, b, i, j and so on).]

Each approach has its own advantages and disadvantages. In practice, the sandwich integration approach is more popular, as it can be started as and when two related units are available. We may use any functional or structural testing technique to design test cases; the functional techniques are easy to implement with a particular focus on the interfaces, and some structural techniques may also be used. When a new unit is added as part of integration testing, the software is considered changed software: new paths are created, new input and output conditions may emerge, and new control logic may be invoked. These changes may also cause problems with units that previously worked flawlessly.

1.1.3 System Testing

We perform system testing after the completion of unit and integration testing. We test the complete software along with its expected environment.
We generally use functional testing techniques, although a few structural testing techniques may also be used. A system is defined as a combination of the software, hardware and other associated parts that together provide product features and solutions. System testing ensures that each system function works as expected, and it also tests non-functional requirements like performance, security, reliability, stress, load, etc. This is the only phase of testing which tests both functional and non-functional requirements of the system. A team of testers does the system testing under the supervision of a test team leader. We also review all associated documents and manuals of the software; this verification activity is equally important and may improve the quality of the final product. Utmost care should be taken with the defects found during the system testing phase, and a proper impact analysis should be done before fixing a defect. Sometimes, if the system permits, instead of being fixed the defects are just documented and mentioned as known limitations; this may happen when fixing is very time-consuming or technically not possible in the present design. Progress in system testing also builds confidence in the development team, as this is the first phase in which the complete product is tested with a specific focus on the customer's expectations. After the completion of this phase, customers are invited to test the software.

1.1.4 Acceptance Testing

This is an extension of system testing. When the testing team feels that the product is ready for the customer(s), they invite the customer(s) for a demonstration. After the demonstration, the customer(s) may like to use the product to build their own satisfaction and confidence. This may range from ad hoc usage to systematic, well-planned usage of the product. Such usage is essential before accepting the final product. The testing done for the purpose of accepting a product is known as acceptance testing.
This may be carried out by the customer(s) or by persons authorized by the customer. The venue may be the developer's site or the customer's site, depending on mutual agreement; generally, acceptance testing is carried out at the customer's site. Acceptance testing is carried out only when the software is developed for particular customer(s). If we develop software for anonymous customers (like operating systems, compilers, CASE tools, etc.), then acceptance testing is not feasible. In such cases, potential customers are identified to test the software, and this type of testing is called alpha/beta testing. Beta testing is done by many potential customers at their own sites without any involvement of developers/testers, whereas alpha testing is done by some potential customers at the developer's site under the direction and supervision of testers.

1.2 Debugging

Whenever a piece of software fails, we would like to understand the reason(s) for the failure. After knowing the reason(s), we may attempt to find a solution and make the necessary changes in the source code; these changes will hopefully remove the reason(s) for that failure. The process of identifying and correcting a software error is known as debugging. It starts after receiving a failure report and completes after ensuring that all corrections have been rightly placed and the software does not fail with the same set of input(s). Debugging is quite a difficult phase and may become one of the reasons for software delays. Every bug detection process is different, and it is difficult to know how long it will take to find and fix a bug. Sometimes, it may not be possible to detect a bug, or, if a bug is detected, it may not be feasible to correct it at all; such situations should be handled very carefully. In order to remove bugs, the developer must first discover that a problem exists, then classify the bug, locate where the problem actually lies in the source code, and finally correct the problem.
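The first of these steps, discovering that a problem exists and recreating it reliably, is often captured as a small automated test. A sketch with a hypothetical buggy function:

```python
# Capturing a reported failure as a repeatable check.  The buggy
# function and the failure report are both hypothetical.

def average(values):
    return sum(values) / len(values)   # fails on an empty list

def replicates_reported_failure():
    # Failure report: "crashes when the input list is empty".
    try:
        average([])
    except ZeroDivisionError:
        return True    # bug recreated under controlled conditions
    return False

assert replicates_reported_failure()
print("bug replicated; safe to start fixing")
```

Once the failure is pinned down like this, the same check later verifies that the fix actually removed it.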
1.2.1 Why is debugging so difficult?

Debugging is a difficult process, probably due to human involvement and psychology. Developers become uncomfortable after receiving any request for debugging; it is taken as a blow to their professional pride. Shneiderman [SHNE80] has rightly commented on the human aspect of debugging: it is one of the most frustrating parts of programming. It has elements of problem solving or brain teasers, coupled with the annoying recognition that we have made a mistake. Heightened anxiety and the unwillingness to accept the possibility of errors increase the task difficulty. Fortunately, there is a great sigh of relief and a lessening of tension when the bug is ultimately corrected. These comments explain the difficulty of debugging. Pressman [PRES97] has given some clues about the characteristics of bugs: the debugging process attempts to match symptom with cause, thereby leading to error correction. The symptom and the cause may be geographically remote; that is, the symptom may appear in one part of the program while the cause is actually located in another part. Highly coupled program structures may further complicate this situation. A symptom may also disappear temporarily when another error is corrected. In real-time applications, it may be difficult to accurately reproduce the input conditions. In some cases, a symptom may be due to causes that are distributed across a number of tasks running on different processors. There may be many reasons which make the debugging process difficult and time-consuming; however, psychological reasons tend to prevail over technical ones. Over the years, debugging techniques have substantially improved, and they will continue to develop significantly in the near future. Some debugging tools are available, and they minimize the human involvement in the debugging process. However, it is still a difficult area and consumes a significant amount of time and resources.
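Pressman's point that the symptom and the cause may be geographically remote shows up even in tiny programs. In this hypothetical sketch, the cause (a shared mutable default argument) sits in one call, while the symptom only surfaces in a later, seemingly unrelated call:

```python
# The cause is in record_score: its default list is created once and
# shared across all calls.  The symptom only appears in the *second*
# call, far from the line that is actually wrong.

def record_score(score, scores=[]):   # bug: mutable default argument
    scores.append(score)
    return scores

first = record_score(10)
second = record_score(20)
print(second)   # [10, 20] instead of [20]: the earlier call leaks in
```

A reader staring only at the second call would see nothing wrong with it; the fault lies in the definition, one of the classic ways cause and symptom drift apart.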
1.2.2 Debugging Process

Debugging means detecting and removing bugs from programs. Whenever a program generates unexpected behaviour, it is known as a failure of the program. The failure may be mild, annoying, disturbing, serious, extreme, catastrophic or infectious, and depending on the type of failure, appropriate actions are required. The debugging process starts after receiving a failure report, either from the testing team or from users. The steps of the debugging process are: replicate the bug, understand the bug, locate the bug, fix the bug, and retest the program.

(i) Replication of the bug: The first step in fixing a bug is to replicate it, which means recreating the undesired behaviour under controlled conditions. The same set of input(s) should be given under similar conditions, and the program, after execution, should produce the same unexpected behaviour; if it does, we have replicated the bug. In many cases, this is simple and straightforward: we execute the program on a particular input, or we press a particular button on a particular dialog, and the bug occurs. In other cases, replication may be very difficult; it may require many steps or, in an interactive program such as a game, precise timing. In the worst cases, replication may be nearly impossible. But if we do not replicate the bug, how will we verify the fix? Hence, failure to replicate a bug is a real problem: any action which cannot be verified has no meaning, however important it may be. Some of the reasons for non-replication of a bug are:
· The user incorrectly reported the problem.
· The program failed due to hardware problems like memory overflow, poor network connectivity, network congestion, non-availability of system buses, deadlock conditions, etc.
· The program failed due to system software problems; the reason may be the usage of a different operating system, compiler, device driver, etc.
Any of the above reasons may explain the failure of the program even though there is no inherent bug in the program for that particular failure. Our effort should be to replicate the bug; if we cannot do so, it is advisable to keep the matter pending till we are able to replicate it. There is no point in playing with the source code for a situation which is not reproducible.

(ii) Understanding the bug: After replicating the bug, we may like to understand it, that is, to find the reason(s) for the failure. There may be one or more reasons, and this is generally the most time-consuming activity. We must understand the program very clearly in order to understand a bug. If we are the designers and source code writers, there may not be any problem in understanding the bug; if not, we may face more serious problems. If the readability of the program is good and the associated documents are available, we may be able to manage. If readability is poor (which happens in many situations) and the associated documents are not proper, the situation becomes very difficult and complex. We may call the designers; if we are lucky, they may still be with the company and we may get them. Imagine otherwise: what will happen? This is a real challenge, and in practice we often have to struggle with source code and documents written by persons no longer available with the company. We may have to put in effort to understand the program, starting from the first statement of the source code to the last, with a special focus on critical and complex areas of the source code. We should be able to know where to look in the source code for any particular activity; this should also tell us the general way in which the program acts. The worst cases are large programs written by many persons over many years.
Such programs may lack consistency and may become poorly readable over time due to various maintenance activities. We should simply do our best and try to avoid making the mess worse. We may also take the help of source code analysis tools for examining large programs. A debugger may also be helpful for understanding the program: a debugger inspects a program statement by statement and may be able to show the dynamic behaviour of the program using breakpoints. Breakpoints are used to pause the program at any desired point; at every breakpoint, we may look at the values of variables, the contents of relevant memory locations, registers, etc. The main point is that in order to understand a bug, program understanding is essential. We should put in the desired effort before seeking the reasons for the software failure; if we fail to do so, we may waste effort that is neither required nor desired.

(iii) Locate the bug: There are two portions of the source code which need to be considered for locating a bug. The first portion is the one which causes the visible incorrect behaviour, and the second portion is the one which is actually incorrect. In most situations the two portions overlap, but sometimes they lie in different parts of the program. We should first find the source code which causes the incorrect behaviour; after knowing the incorrect behaviour and its related portion of the source code, we may find the portion which is actually at fault. Sometimes, it may be very easy to identify the problematic source code (the second portion) by manual inspection; otherwise, we may have to take the help of a debugger. If we have core dumps, a debugger can immediately identify the line which fails. A core dump is a printout of all registers and relevant memory locations; we should document core dumps and retain them for possible future use.
We may set breakpoints while replicating the bug, and this process may also help us to locate it. Sometimes simple print statements may help us to locate the sources of the bad behaviour. This simple technique shows the status of various variables at different locations in the program for a specific set of inputs, and a sequence of print statements may also portray the dynamics of variable changes. However, it is cumbersome to use in large programs, and it may generate superfluous data which is difficult to analyze and manage. Another useful approach is to add check routines to the source code to verify that data structures are in a valid state; such routines may help us to narrow down where data corruption occurs. If the check routines are fast, we may want to enable them always; otherwise, leave them in the source code and provide some mechanism to turn them on when needed. The most useful and powerful way is source code inspection, which may help us to understand the program, understand the bug and finally locate it. A clear understanding of the program is an absolute requirement of any debugging activity. Sometimes the bug may not be in the program at all; it may be in a library routine, in the operating system, or in the compiler. These cases are very rare, but if everything else fails, we may have to look at such options.

(iv) Fix the bug and retest the program: After locating the bug, we may like to fix it. Fixing a bug is a programming exercise rather than a debugging activity. After making the necessary changes in the source code, we have to retest it in order to ensure that the corrections have been made correctly and at the right place. Every change may affect other portions of the source code as well; hence, an impact analysis is required to identify the affected portions, which should also be retested thoroughly.
This retesting activity is called regression testing, which is a very important part of any debugging process.

1.2.3 Debugging Approaches

There are many popular debugging approaches, but the success of any approach depends upon the understanding of the program. If the persons involved in debugging understand the program correctly, they may be able to detect and remove the bugs.

(i) Trial and error: This approach depends on the ability and experience of the persons debugging. After a failure report is received, it is analyzed and the program is inspected; based on experience and intelligence, and using hit-and-trial techniques, the bug is located and a solution is found. This is a slow approach and becomes impractical in large programs.

(ii) Backtracking: This can be used successfully in small programs. We start at the point where the program gives an incorrect result, such as an unexpected output being printed. After analyzing the output, we trace backward through the source code manually until a cause of the failure is found. The source code from the statement where the symptom of failure is found to the statement where the cause of failure is found is analyzed properly. This technique brackets the location of the bug in the program, and subsequent careful study of the bracketed location may help us to rectify it. An obvious variation of backtracking is forward tracking, where we use print statements or other means to examine a succession of intermediate results to determine at what point the result first became wrong. These approaches (backtracking and forward tracking) may be useful only when the size of the program is small; as program size increases, they become difficult to manage.

(iii) Brute force: This is probably the most common and efficient approach to identify the cause of a software failure. In this approach, memory dumps are taken, run-time traces are invoked, and the program is loaded with print statements.
When this is done, the information produced may yield a clue that leads to the identification of the cause of a bug. Memory traces are similar to memory dumps, except that the printout contains only certain memory and register contents, and printing is conditional on some event occurring. Typical conditional events are the entry, exit or use of one of the following:
(a) a particular subroutine, statement or database;
(b) communication with I/O devices;
(c) the value of a variable;
(d) timed actuations (periodic or random) in certain real-time systems.
A special problem with trace programs is that the conditions are entered in the source code, and any change requires a recompilation. A huge amount of data is generated which, although it may help to identify the cause, may be difficult to manage and analyze.

(iv) Cause elimination: Cause elimination is driven by induction or deduction and also introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each. We may thus rule out causes one by one until a single one remains for validation; the cause is then identified, properly fixed and retested accordingly.

1.2.4 Debugging Tools

Many debugging tools are available to support the debugging process, and some of the manual activities can be automated using a tool. We may need a tool that executes the statements of a program one at a time and prints the value of any variable after each statement, freeing us from inserting print statements into the program manually. Thus, run-time debuggers were designed. In principle, a run-time debugger is nothing more than an automatic print statement generator: it allows us to trace the program path and the variables without having to put print statements in the source code. Most compilers available in the market come with a run-time debugger.
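That claim, that a run-time debugger is in essence an automatic print statement generator, can be imitated in a few lines of Python: sys.settrace reports the local variables at every executed line of a traced function without a single print statement being added to it. This is only a toy sketch, not a real debugger:

```python
# A toy "automatic print statement generator" built on sys.settrace.
# The traced function buggy_sum is hypothetical and left unmodified.

import sys

def auto_print(frame, event, arg):
    # Report the locals for each line executed inside buggy_sum only.
    if event == "line" and frame.f_code.co_name == "buggy_sum":
        print(f"line {frame.f_lineno}: {frame.f_locals}")
    return auto_print

def buggy_sum(values):
    total = 0
    for v in values:
        total += v
    return total

sys.settrace(auto_print)        # switch tracing on
result = buggy_sum([1, 2, 3])
sys.settrace(None)              # switch tracing off
print("result:", result)
```

Each traced line prints the current values of `total` and `v`, which is exactly the variable trace a run-time debugger would show step by step.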
A run-time debugger allows us to compile once and then run the program, rather than modifying the source code and recompiling as we try to narrow down the bug. Run-time debuggers may detect bugs in the program but may fail to find the causes of failures; we may need a special tool to find the causes of failures and correct the bug. Some errors, like memory corruption and memory leaks, may be detected automatically. This automation changed the debugging process, because it automated the process of finding the bug: the tool detects an error, and our job is simply to fix it. Such tools are known as automatic debuggers and come in several varieties. The simplest ones are just a library of functions that can be linked into a program; when the program executes and these functions are called, the debugger checks for memory corruption and reports any it finds. Compilers are also used for finding bugs, although of course they check only syntax errors and particular types of run-time errors. Compilers should give proper and detailed error messages, which are of great help to the debugging process. Compilers may give all such information in the attribute table, which is printed along with the listing; the attribute table contains the various levels of warnings picked up by the compiler scan. Compilers now come with error detection features, and there is no excuse for designing a compiler without meaningful error messages. We may apply a wide variety of tools, like run-time debuggers, automatic debuggers, automatic test case generators, memory dumps, cross-reference maps, compilers, etc., during the debugging process. However, tools are not a substitute for careful examination of the source code after thorough understanding.

1.3 Software Testing Tools

The most effort-consuming task in software testing is designing the test cases; the execution of these test cases may not require much time and resources.
Hence, the designing part is more significant than the execution part. Both parts are normally handled manually. Do we really need a tool? If yes, where and when can we use it: in the first part (designing of test cases), the second part (execution of test cases), or both? Software testing tools may be used to reduce the time of testing and to make testing as easy and pleasant as possible. Automated testing may be carried out without human involvement; this may help us in areas where a similar data set is to be given as input to the program again and again. A tool may do the repeated testing unattended, during nights or weekends, without human intervention. Many non-functional requirements may also be tested with the help of a tool. Suppose we want to test the performance of a software product under load, which may require many computers, manpower and other resources: a tool may simulate multiple users on one computer, including a situation where many users access a database simultaneously. There are three broad categories of software testing tools: static, dynamic and process management. Most tools fall clearly into one of these categories, but there are a few exceptions, like mutation analysis systems, which fall into more than one category. A wide variety of tools are available with different scope and quality, and they assist us in many ways.

1.3.1 Static Software Testing Tools

Static software testing tools are those that perform analysis of the programs without executing them at all. They may also find the source code which will be hard to test and maintain. As we all know, static testing is about prevention and dynamic testing is about cure; we should use both kinds of tools, but prevention is always better than cure. Static tools may also find more bugs than dynamic testing tools (where we execute the program). There are many areas for which effective static testing tools are available, and they have shown their results for the improvement of the quality of the software.
(i) Complexity analysis tools: The complexity of a program plays a very important role in determining its quality. A popular measure is the cyclomatic complexity discussed in chapter 4, which gives an idea of the number of independent paths in the program and depends on the number of decisions in the program. A higher value of cyclomatic complexity may indicate poor design and risky implementation. The measure may also be applied at module level: modules with a higher cyclomatic complexity value may either be redesigned or tested very thoroughly. There are other complexity measures used in practice, like the Halstead software size measures, the knot complexity measure, etc. Tools are available based on these complexity measures; such a tool takes the program as input, processes it, and produces a complexity value as output. This value may be an indicator of the quality of the design and implementation.

(ii) Syntax and semantic analysis tools: These tools find syntax and semantic errors. Although the compiler detects all syntax errors during compilation, early detection of such errors may help to minimize other associated errors. Semantic errors are very significant, and compilers are helpless to find many of them. There are tools in the market that may analyze the program and find such errors. Non-declaration of a variable, double declaration of a variable, divide-by-zero issues, unspecified inputs, and non-initialization of a variable are some of the issues which may be detected by semantic analysis tools. These tools are language dependent; they may parse the source code, maintain a list of errors and provide implementation information. The parser may find semantic errors as well as make an inference as to what is syntactically correct.

(iii) Flow graph generator tools: These tools are language dependent; they take the program as input and convert it to its flow graph.
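As a rough sketch of what a complexity analysis tool of the kind described in (i) does internally, the snippet below parses a program, walks the resulting syntax tree (a close cousin of the flow graph), counts decision constructs and reports V(G) = decisions + 1. Real tools are considerably more thorough; this is only an illustration:

```python
# A rough cyclomatic-complexity estimator using the rule of thumb
# V(G) = number of decisions + 1.  It counts Python branching
# constructs in the parsed source; the sample function is hypothetical.

import ast

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While,
                          ast.ExceptHandler, ast.BoolOp))
        for node in ast.walk(tree)
    )
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in (2, 3, 5):
        if n % d == 0:
            return "divisible"
    return "other"
"""
print(cyclomatic_complexity(code))  # 4: one if, one for, one nested if, plus 1
```

A tool like this flags high-complexity modules for redesign or extra-thorough testing, exactly as described above.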
The flow graph may be used for many purposes like complexity calculation, path identification, generation of definition-use paths, program slicing, etc. These

Thursday, September 19, 2019

From Coexistence to Conflict

From Coexistence to Conflict in 19th Century Mount Lebanon

Mount Lebanon has been a troubled region throughout much of Lebanese history. Through most of the 19th century, the Maronite and Druze inhabitants of the Mount Lebanon region successfully coexisted in an intricate inter-sectarian system. True to the words of Leila Fawaz, "Lebanon was at peace, as it had been for most of its history." Excessive foreign intervention, however, caused the status in Mount Lebanon to move from coexistence to conflict, which ultimately led to the civil war of 1860.

The first step that led to the emergence of inter-sect rule in Lebanon was the gaining of autonomy by local rulers. Fakhr al-Din al-Maani was the first prince in the region, and he was awarded that title and responsibility by the Ottomans as a reward for his loyalty to them. Prior to Fakhr al-Din, Lebanon did not have an autonomous ruler; it was fully controlled by the Ottomans. The Maanis, however, were supported not only by the Ottomans but by the local citizens as well, and this common support for a single ruler helped bring about inter-sectarianism. The Druze-Maronite inter-sectarian system gained its roots during the reign of Fakhr al-Din II, who raised the Maronites to the same civil status as their Druze counterparts. This equal status allowed both sects to live peacefully among each other. Fakhr al-Din's reign came to an end in 1635, though, when the Ottomans, who had control over Lebanon at the time, captured and executed Fakhr al-Din for trying to expand the area under his control. By upsetting the balance between local and Ottoman rule, Fakhr al-Din brought about the end of his reign as prince. After two insignificant rulers, the princedom fell to the Shihab family, which would rule the Mount Lebanon region from 1697 to 1842. During the long reign of the Shihab family, the Maronites slowly started to gain power as the Druze began to weaken.
The most notable of the Shihabs was Prince Bashir II. During his reign, Prince Bashir II developed a strong relationship with Sheikh Bashir Jumblatt. The Jumblatt family was originally of Sunni Kurdish descent and they later became accepted as part of the Druze community. After the end of the Maani dynasty, the Jumblatts took their place as lords of the Shouf and rapidly rose to power. Consequently, the Jumblatts were able to influence other areas of the region.

Wednesday, September 18, 2019

Essay --

Professional Development for Strategic Managers

Introduction

Professional development provides the drive to progress your career and keeps managers across the industry competitive. Mostly, professional development is something you do every day without even thinking about it; however, being aware of the development you undertake allows you to record it and develop in a practical way. In order to maximise your prospects for lifetime employability, it is important to maintain high levels of professional competence by continually improving your skills and knowledge. It is essential to take ownership of your career and its continuing professional development because, in this ever-changing market environment, you may no longer be able to depend on your employers to identify and satisfy your development needs. The impact of such changes, together with the swift advancement of technology in organisations, has increased the demands on professionals to maintain documentary evidence of their continued competence. It is very important to develop a personal portfolio of your professional activities and their relevance to your current job, your continuing career and your future ambitions.

Task 1. Be able to assess personal and professional skills required to achieve strategic goals

T1.1: Using appropriate methods to evaluate personal skills required to achieve strategic goals

Professional skills are those skills obtained by an individual and necessary for use in a particular assignment or profession. These skills are developed over a period of time and are endlessly sharpened by working in the particular professional area. The skills are mostly used in businesses and professional organisations to expand the ... ...port personal development at the individual level in the organisation.
At the individual level, self-development involves the following:
• Improvement of social abilities
• Developing talents or strengths
• Improving knowledge
• Improvement in self-awareness
• Improving or identifying potential
• Executing or defining a personal development plan

Conclusion

In order to be effective, the objectives set at personal development and performance review should relate in part to the organisation’s key strategic objectives. The job description should have a connection, where suitable, with the strategic organisational and departmental goals. These strategic outcomes need to be translated for practical application at departmental and individual level. It is important for staff to understand what their organisation is trying to achieve and the implications for their work.

Tuesday, September 17, 2019

Family Values, Personal Values Essay -- Ethnicity Culture Families Val

Family Values and Unity

There are so many various types of people with different ethnic backgrounds, cultures and manners of living that are the cause of distinct values in a family. These families have poor, mediocre or virtuous family values; however, what one may consider a mediocre family value may seem poor to someone else and vice versa. These family values differ from family to family world-wide. The most significant values are family unity, honesty and education. Family unity is a family being together in blissful harmony on holidays. Family unity is knowing that, regardless of how bad a situation may be, it will bring us closer together and make our bond stronger. Family unity is my family watching me grow from infancy to adulthood, guiding me with good values. Family unity is communicating with each other. Unfortunately, my parents were seldom around during my childhood stages. Therefore they were rarely home to guide me through good family values. Now that I am an adult my parents persist in spending time with me and teaching me values not taught to me when I was a child. I believe it’s like teaching an old dog new tricks. A child needs direction from childhood up to adulthood, not the reverse. I recall coming home from school to an empty house. My parents were working to provide us with a home, things we needed and wanted. Regardless, to a child, family was just as important. A popular soul singer, Luther Vandross, sang a song whose lyrics expla...

Monday, September 16, 2019

Jungle Rot

Tropical ulcers (also commonly known as Jungle Rot) are necrotic, painful lesions that result from a mixed bacterial infection. These ulcers are common in hot, humid tropical or subtropical areas. They are usually found on the lower legs or feet of children and young adults. Typically, the ulcers have a raised border and a yellowish necrotic base. The ulcers may heal spontaneously, but in many instances extension may occur, resulting in deep lesions that can penetrate into muscles, tendons and bone. If the so-called Jungle Rot goes untreated it can result in much scar tissue and disability. A person can contract this skin disease from pre-existing abrasions or sores that sometimes begin from a mere scratch. The majority of tropical ulcers occur below the knee of the patient, usually around the ankle. These lesions can sometimes also occur on the arms, but they are more likely to occur on the lower parts of the body. Most of the people who get this ulcer are subjects with poor nutrition, which puts them at higher risk, or people who do not wear socks or proper footwear and clothing. Jungle Rot has been described as a disease of the ‘’poor and hungry’’. Urbanization of populations could be a factor in the disorder, seeing as tropical ulcers are usually a rural problem. Sometimes outbreaks can occur; one was recorded in Tanzania among sugarcane workers cutting the crops while barefoot. Another piece of information on these ulcers is that males are more commonly infected than females. There are not really any warning symptoms before a tropical ulcer appears; you are simply infected in some way and the ulcer develops. It is initially circular, superficial, very painful, and has purple edges. It will enlarge rapidly across the skin and down into deeper tissues such as the muscle or even the periosteum, which is the fibrous membrane covering the surface of bones.
Tropical ulcers (or Jungle Rot) are known to reach several centimeters in diameter after a couple of weeks. The edges will become thickened and raised at this stage of the ulcer’s growth. The central crater may also become necrotic, or blackened due to the death of tissue. Sometimes the ulcer becomes foul-smelling and, quite simply, very nasty looking. Luckily, there are some known treatments for these ulcers, although not all of the ulcers are treatable. In the early stages of the ulcer’s growth, antibiotics such as penicillin or metronidazole can be used in combination with a topical antiseptic to reduce the size of the ulcer and ultimately clear it up altogether. For other subjects, simply improving nutrition and adding vitamins to the diet can heal the ulcer. Sometimes, if you just keep the infected area clean or elevated, the area becomes well. In extreme cases amputation is necessary, but most of the time the tropical ulcer can be treated with success. The treatments are usually quite affordable; it all just depends on the person being treated and the amount of money they have. This disorder is also curable. The ulcers are known to go away in as little as a week after being treated. Once a person has been rid of the ulcer, life can go on as normal if the treatment was successful. Sometimes there are complications with the skin pigmentation of the patient after treatment. Victims have been known to have different colors such as bright red, blue, and green around and on the infected area. It is even rare for there to be a color change from regular pigmentation to orange. Although life goes on normally for some, for others it is different. If a patient’s ulcer grew deep into large muscles or a bone, they can be left walking with a limp, or unable to use their arm or fingers in ways they used to, such as lifting things.
There are also more serious cases involving amputation that can put a person in a handicapped position, such as having to use crutches to help walk or only having one arm, which limits very many things. There are known to be outbreaks of tropical ulcers, but nothing is said about a person spreading the infection to another person physically.

Sunday, September 15, 2019

China Global Imbalances, Reserve Currency and Global Economic

Global Imbalances, Reserve Currency, and Global Economic Governance

The accepted hypotheses for the root cause of global economic imbalances are:

1) East Asian economies’ export-led growth: the recent integration with international markets led to an import and export expansion, making the trade surpluses in East Asia (EA) increase dramatically. It had great success in EA, producing higher living standards and declining poverty rates. This cannot be the main cause of the emergence of large global imbalances in 2000 and thereafter, since before 2000 EA economies’ trade balances were roughly balanced.

2) Self-insurance motivation for foreign currency reserve accumulation: after the financial crises of the late 1990s, emerging market economies in EA increased their current account (CA) surpluses substantially, and they experienced rising international reserves. After 2005, however, Chinese surpluses and reserves were too large to be justified by the self-insurance motivation.

3) China’s exchange rate policy: the global imbalances started to grow in 2002, and China has been accused of causing the imbalance by sustaining a large undervaluation of its real exchange rate since 2003, but this is not true because:
• China’s trade surplus did not become large until 2005
• The RMB appreciated against the US$ by 20% in 2005-2008, but the global imbalances continued to grow
• Most other developing countries also increased their CA surpluses in the same period (if the exchange rate were the cause, the other countries that compete with China would have experienced declining trade surpluses and reserves)

>The need for an alternative hypothesis: these hypotheses imply that the EA economies are driving the global imbalances, but this is not consistent with the basic statistics. While the US trade deficits with China did increase substantially, the share of the US trade deficit due to EA economies as a region actually declined significantly. The three hypotheses surely contributed, but they cannot be the main cause of the global imbalances.
>An alternative hypothesis consistent with the data: it views the global imbalances as a result of the status of the US dollar as the major global reserve currency, combined with:
• The lack of appropriate financial sector regulation due to deregulation in the 1980s
• The Federal Reserve’s low interest rate policy following the burst of the ‘’dotcom’’ bubble in 2001

These policy changes led to excessive risk-taking and higher leverage, producing excess liquidity and ‘’bubbles’’ in the US markets, which enabled the US overconsumption that increased the US CA deficit. As China had become the major producer of labor-intensive processed consumer goods by 2000, the US ran a large deficit with China, which in turn ran trade deficits with the EA economies that provided intermediate products to China. The excess liquidity also led to the large outflow of capital to developing countries, which enhanced their investment and consequently resulted in large trade surpluses in capital-goods exporting countries and natural resources exporting countries. Since the US is the reserve currency issuing country, the foreign reserves accumulated through trade/capital account surpluses in other countries would return to the US, leading to a US capital account surplus.

>Why did China stand out in the global imbalances?: the large CA surplus in China reflects high domestic savings. There are several commonly accepted hypotheses about China’s high household saving rate, such as the lack of a well-developed social safety net and the demographics of an aging population. But the uniqueness of China’s savings is the large share of corporate savings, which are driven by the excessive concentration of the financial system that serves the big firms, low taxation on natural resources, and monopolies in some sectors. Reforms are required for removing these distortions and increasing consumption.
>The role of the reserve currency in global imbalances: the status of the dollar as the major global reserve currency, combined with the financial deregulation of the 1980s and the low interest rate policy of the 2000s, led to the emergence of global imbalances. To prevent their recurrence, the ultimate solution is to replace national currencies as global reserve currencies with a new global currency, but the US is unlikely to give up its reserve-issuing privilege to a global body (IMF). A more likely scenario is the emergence of a basket of reserve currencies, with some changes in the basket’s composition and weights.

>A win-win solution for the global recovery: the most urgent challenges are high unemployment and the large excess capacity in high-income industrialized countries. Win-win solutions for the global recovery and long-term growth could be based on new international financial arrangements along with structural reforms in both high-income and developing countries. On the financial front, a global recovery fund could be created (supported by hard-currency countries and large-reserve countries and managed by multilateral development banks) to finance investments to release bottlenecks and enhance productivity in developing countries. These investments would increase the demand for capital goods produced in high-income countries, reduce their unemployment now, and enhance the developing countries’ growth in the future. The fund could be complemented by structural reforms in high-income and developing countries to create space for investment and to improve the efficiency of investment.

How does Arthur Miller present the flaws and limitations of the American Dream in ‘Death of a Salesman’ Essay

The American Dream is an object of desire for many Americans, as it is what they strive for their whole life. The American Dream is based mainly on wealth and materialism. The sense of freedom is what people are striving for; freedom from bills and debt is what Willy Loman is striving for in ‘Death of a Salesman’. The American Dream is seen as a perfect life, which consists of a house with a white picket fence and a perfect family: husband, wife, two children and a dog, all living happily and comfortably without any financial troubles. But very few Americans achieve that goal in their lifetime, because there is also competition when everyone is aiming for it. Every person is competing with their friends and neighbours. These flaws show through in ‘Death of a Salesman’ as Willy tries to get to grips with his life while trying to pay off his house. ‘Death of a Salesman’ has been used by Arthur Miller to show what the American Dream is really like. The play is based around an average family man, Willy Loman, who has struggled all his life to make something of it; to strike it lucky, but his chance never came. He is presented as a ‘normal’ character; the average ‘middle American’, who wants to pay off all his debts and bills. This shows the lack of contentment in his life. He is not content having a roof over his head, or having a job, because he wants more. Willy wants to achieve more, just like his brother, Ben, who struck it lucky, because he happened to get lost and stumble upon some diamond mines, but Willy blames himself for not going to explore the world with him, ‘There’s just one opportunity I had with that man…’ Willy regrets not going with his brother, but what he doesn’t realise is that he was too young to go with him; he was only 3 years old when his brother left, whereas Ben was 17. But, despite this fact, he still admires his brother. Yet there is barely any mention of his father, who earned his living and fulfilled the American Dream by working hard.
Willy has a very flawed way of trying to fulfil the American Dream. He does everything the wrong way, and what he doesn’t realise is that it takes some hard work. This may be the reason why there is a feeling of failure in the play. Both Willy and his sons Biff and Happy are failures in achieving what they wanted, and this shows how Arthur Miller is presenting the flaws of the American Dream, because it can really take its toll on people’s lives and practically ruin their relationships with other people, such as their friends and neighbours. Willy has constantly been competing with his neighbour, Charley. However, Charley is running his own business, whereas Willy is still in the same job that he’s been in for years. Selling. The character of Willy Loman is perfect for presenting the flaws of the American Dream, because he’s just an average man; an average ‘Joe Bloggs’ and basically a nobody, because he hasn’t achieved the things that he wanted to achieve. He continues dreaming of making it big and he keeps on chasing this dream, because there is a feeling of hope in him every time his sons go for a job interview or have an appointment with their boss. He refuses to listen to what his sons have to say, because it’s not what he wants to hear. So, instead, he just fills their mouths with words or keeps on interrupting them. Willy holds a lot of false hope of something that he won’t be able to achieve, and this is reflected within the play and its setting. The play is set in Willy’s house, and this is one of the main reasons why there is a lack of contentment in the play, because he hasn’t been able to pay off the mortgage for the house. The setting gives off a boxed-in feeling because of the towering apartment buildings, and the lack of greenery works as a metaphor, as nothing can flourish or grow. This is why it is regarded as a limitation of the American Dream, which Arthur Miller presents in the play and through Willy.
The lack of contentment is also shown through both sons. Happy’s name is pretty ironic, because his life doesn’t seem to be happy, even though he pretends it is. Both Biff and Happy have a vengeful streak in them, as they both take revenge on their bosses in one way or another. Happy has a tendency to sleep with his boss’s girlfriends/fiancées/wives, whereas Biff steals from his boss. But the reason they are like this is because their father has made them think they can do anything and get anywhere without qualifications, ‘You filled us up with hot air!’ However, Biff seems to go against his father, probably due to the fact that he knows about his father’s affair. He has always gone against his father’s wishes, such as wanting to work with his hands rather than work in an office job. But Willy is still very stubborn and proud. He doesn’t realise his children are happy doing what they want. This is why his pride has got in the way of him being able to achieve anything. He has also made his sons proud, too, by making them think that it’s their personalities that will get them a successful job. This represents another limitation of the American Dream; people have to work hard to get where they want. Bernard, Biff’s high school friend, is an example of a hard-working person, because he worked hard to get where he wanted and yet he never mentioned it to Willy, ‘The Supreme Court! And he didn’t even mention it!’ This shows that Bernard isn’t the type to boast about how well he’s doing, even though he is climbing the ladder towards the American Dream. He has overtaken Biff, and Willy regrets that, but isn’t quite sure who to blame. Himself or Biff? Willy is blinded by false hope and great aspirations of striking it rich, but he’s doing all this for his children, so that they don’t have to struggle the way he did.
But Bernard and Charley show that people have to do things themselves to achieve what they want to achieve, because Bernard is a top lawyer and he did this without anyone’s help. He doesn’t need Charley to provide for him, nor is he working for him either. The only things that Willy has ever been able to achieve in his life are solid material goods, such as his house, fridge, car and vacuum cleaner. But he doesn’t think that it’s enough, so he decides to go and crash the car and kill himself, just because he wants his children to lead a comfortable life. His death brings in money for his children, but it shows what lengths Willy went to just so that his children could lead the perfect life of an American Dream.

Saturday, September 14, 2019

Abducted by a UFO: prevalence information affects young children’s false memories for an implausible event Essay

SUMMARY This study examined whether prevalence information promotes children’s false memories for an implausible event. Forty-four 7–8 and forty-seven 11–12 year old children heard a true narrative about their first school day and a false narrative about either an implausible event (abducted by a UFO) or a plausible event (almost choking on a candy). Moreover, half of the children in each condition received prevalence information in the form of a false newspaper article while listening to the narratives. Across two interviews, children were asked to report everything they remembered about the events. In both age groups, plausible and implausible events were equally likely to give rise to false memories. Prevalence information increased the number of false memories in 7–8 year olds, but not in 11–12 year olds, at Interview 1. Our findings demonstrate that young children can easily develop false memories of a highly implausible event. Copyright © 2008 John Wiley & Sons, Ltd. Both recent studies (e.g. Pezdek & Hodge, 1999; Strange, Sutherland, & Garry, 2006) and legal cases have demonstrated that children can develop memories of events that never happened, so-called false memories (Loftus, 2004). A well-known legal case is the ‘McMartin Preschool’ trial in which several teachers were accused of ritually abusing hundreds of children across a 10-year period (Garven, Wood, & Malpass, 2000; Garven, Wood, Malpass, & Shaw, 1998; Schreiber et al., 2006). Some of the children recalled extremely bizarre, implausible events such as flying in helicopters to an isolated farm and watching horses being beaten with baseball bats. The charges against the teachers, however, were eventually dropped; videotapes of the investigative interviews indicated that the children were suggestively interrogated and many experts concluded that the children’s memories were almost certainly false.
Controversial cases like the McMartin trial have inspired researchers to investigate how children develop false memories of implausible experiences (Pezdek & Hodge, 1999; Strange et al., 2006), yet the precise antecedents of implausible false memories are still ill-understood. The question we ask here is whether prevalence information—that is, details about the frequency of a false event—is a potential determinant of children’s implausible false memories. [Correspondence to: Henry Otgaar, Faculty of Psychology, Maastricht University, PO Box 616, 6200 MD, Maastricht, The Netherlands. E-mail: henry.otgaar@psychology.unimaas.nl] What do we know about the role of prevalence information in the development of false memories? Mazzoni, Loftus, and Kirsch (2001) describe a three-step process that explains how false memories are formed. According to this model, three conditions must be satisfied to create false memories. First, an event has to be considered plausible. Second, the event has to be evaluated as something that genuinely happened. Finally, images and thoughts about the event have to be mistaken as memory details. Consider, now, just the first stage of Mazzoni et al.’s model (event plausibility) and how prevalence information might affect perceived plausibility. Recent experiments have shown that prevalence information enhances the perceived plausibility of implausible events (Hart & Schooler, 2006; Mazzoni et al., 2001; Pezdek, Blandon-Gitlin, Hart, & Schooler, 2006; Scoboria, Mazzoni, Kirsch, & Jimenez, 2006). Mazzoni et al. (2001) asked undergraduates to read false newspaper articles describing demonic possession. The articles implied, among other things (i.e.
a description of what happens in a typical possession experience), that possessions were more common than people previously thought, and after reading the articles participants were more likely to believe they had witnessed a demonic possession in the past. Other studies investigating the role of prevalence information in eliciting false beliefs have produced similar striking effects (Hart & Schooler, 2006; Mazzoni et al., 2001; Pezdek et al., 2006; Scoboria et al., 2006). What we do not know, however, is whether prevalence information influences the development of false memories (stage 3 of Mazzoni et al.’s model) and not just false beliefs per se. This is an important issue in the false memory literature because several authors have argued that memories and beliefs, although related, are definitely not the same (Scoboria, Mazzoni, Kirsch, & Relyea, 2004; Smeets, Merckelbach, Horselenberg, & Jelicic, 2005). Moreover, the effect of prevalence information has only ever been tested on adults’ beliefs. To date, no study has examined whether prevalence information affects the generation of children’s false memories. What do we know about event plausibility in the development of children’s false memories? In short, research has produced interesting but varied results. Early studies showed that children were more likely to create false memories of plausible than implausible events (Pezdek & Hodge, 1999; Pezdek, Finger, & Hodge, 1997), and researchers suggested that it may be difficult to implant false memories of an implausible event (i.e. receiving a rectal enema). In contrast, one recent study shows that children will falsely recall both plausible and implausible events to a similar extent (Strange et al., 2006). Three different explanations might account for these mixed findings. First, Strange et al. presented children with a doctored photograph of the false event whereas Pezdek and colleagues used false descriptions.
Doctored photographs might be considered an extreme form of evidence - one that is very difficult for children to refute. It is probable, then, that the doctored photographs skewed the children’s plausibility judgments, which in turn caused them to develop false memories for the plausible and implausible event at a similar rate. Second, Strange et al. compared false events that were either plausible or implausible, whereas Pezdek and colleagues (1997, 1999) contrasted false events that differed in terms of script knowledge (i.e. description of what typically occurs in an event). Specifically, they compared a high script knowledge event (i.e. lost in a shopping mall) with a low script knowledge event (i.e. receiving a rectal enema). However, the exact relation between script knowledge and plausibility is not clear (Scoboria et al., 2004). Third, the two false events used in Strange et al.’s and Pezdek et al.’s studies differed with respect to valence. Strange et al.’s events were positive (i.e. taking a hot air balloon ride and drinking a cup of tea with Prince Charles), whereas Pezdek and colleagues implanted false negative events in children’s memory (i.e. lost in a shopping mall and receiving a rectal enema). Studies have shown that valence affects the development of children’s false memories (Ceci, Loftus, Leichtman, & Bruck, 1994; Howe, 2007). Since plausibility, valence and script knowledge seem to play a role in the development of false memories, the false events used in the current study were matched on these factors. To examine whether prevalence information can lead children to develop full-blown false memories of plausible and implausible events, and to examine developmental differences in the development of false memories, we adapted the false narrative procedure (e.g.
Garry & Wade, 2005; Loftus & Pickrell, 1995; Pezdek & Hodge, 1999; Pezdek et al., 1997), and exposed some 7–8 year old children and some 11–12 year old children to one true description and one false description of past experiences. Previous studies have shown that these age groups differ developmentally with respect to suggestibility and false memory formation (e.g. Ceci, Ross, & Toglia, 1987). The true description described the child’s first day at school. The false description was either plausible and described almost choking on a candy, or implausible and described being abducted by a UFO. Half of the children in each group also received prevalence information in the form of a newspaper article. The article suggested that the target false event was much more common than the children probably thought. Our predictions were straightforward: based on the prevalence literature with adults, we predicted that children who heard false prevalence information would be more likely to report false memories than children without false prevalence information. With respect to the role of event plausibility, two predictions can be formulated. Based on studies by Pezdek and colleagues (1997, 1999), we would predict that, regardless of prevalence information, plausible events would elicit more false memories than implausible events. However, based on a recent study by Strange et al. (2006), we would expect that plausible and implausible events are equally likely to elicit false memories. Finally, because younger children are more suggestible than older children (for an overview see Bruck & Ceci, 1999), we expected that younger children would be more likely to develop false memories than older children. METHOD Participants The study involved 91 primary school children (48 girls) from two different age groups (n = 44, 7–8 year olds, M = 7.68 years, SD = 0.52; n = 47, 11–12 year olds, M = 11.64 years, SD = 0.53).
Children participated after parents and teachers had given informed consent. All children received a small gift in return for their participation. The study was approved by the standing ethical committee of the Faculty of Psychology, Maastricht University. Materials True narratives True narratives described children’s first day at school. This event was chosen because it was a unique event that had happened to all children at age 4. Children’s parents were contacted by telephone to obtain the following personal details about each child’s first school day: the family members or friends who escorted the child to school, and the teacher’s and school’s name. These details were incorporated in the true narratives. An example of a true narrative was: Your mother told me that when you were 4 years old, you went for the first time to the elementary school. The name of the elementary school was Springer and it was located in Maastricht. The name of your teacher was Tom. Your mother took you to school. False narratives False events were selected from a pilot study. In that study, 49 children (M = 8.02 years, SD = 1.20, range 6–10; see footnote 1) rated the plausibility and valence of 29 events on child-friendly 7-point Smiley scales (anchors ranging from implausible/negative to plausible/positive), with bigger smiley faces referring to more plausible/more positive events. Specifically, children had to indicate how likely the events were to happen to them (e.g. ‘How likely is it that you almost choke on a candy?’; i.e. personal plausibility; Scoboria et al., 2004) and how pleasant the events were for them (e.g. ‘How pleasant is it that you almost choke on a candy?’). To ensure that they understood the events, all children rated two practice items.
Furthermore, 19 children (M = 8.74 years, SD = 1.05, range 7–10) were instructed to report everything they knew about each event, and the total number of idea units served as our measure of children's script knowledge about the events (Scoboria et al., 2004). Based on their ratings, we selected two events: almost choked on a candy and abducted by a UFO. These events were equal in terms of valence (choking: M = 1.65, SD = 1.48; UFO: M = 1.94, SD = 1.98; t(47) < 1, n.s.) and script knowledge (choking: M = 1.11, SD = 0.99; UFO: M = 0.74, SD = 1.05; t(18) = 1.20, n.s.), but differed in terms of plausibility, with mean plausibility ratings being higher for the choking event (M = 5.86, SD = 2.02) than for the UFO event (M = 1.63, SD = 1.75; t(47) = 10.07, p < .001). Age did not correlate with plausibility, valence, or script knowledge for the two events (ps > .05). Children's parents confirmed that their child had never experienced the false events. The false narratives were:

Almost choked on a candy: 'Your mother told me that you were at a birthday party when you were 4 years old. At this party you received a bag of candies. When you were at home again, you were allowed to have one candy. Your mother saw that you turned blue and she panicked. Then she hit you on the back and the candy came out.'

Abducted by a UFO: 'Your mother told me that when you were 4 years old, you were abducted by a UFO. This happened when you were alone outside. Your mother was inside the house. Then she suddenly saw through the window that a UFO took you.'

False newspaper articles

For the true and false events, a newspaper article was fabricated describing that the event took place quite frequently when participants were age 4. These false newspaper articles were similar in appearance to a local newspaper. Moreover, to personalize the newspaper articles, we included the children's hometown in the articles.
The newspaper articles were used in the prevalence information condition (described below).

[1] Because the age range of our pilot sample did not completely overlap with the age groups of our study, we conducted a 2 (pilot group: younger vs. older children) × 2 (event: UFO vs. choking) ANOVA, with the latter factor being a within-subject factor, to examine the effect of age on plausibility judgments. No significant interaction emerged (p > .05), indicating that age did not have an impact on the plausibility ratings of our two events. Therefore, the plausibility ratings of our pilot sample can be extended to the older group of our study.

Procedure

Children were randomly assigned to the plausible or implausible event and to the prevalence or no prevalence information condition. Each child was interviewed individually twice over seven days. All interviews were audio taped and transcribed. During the interviews, one true narrative and one false narrative were read aloud, with the latter always being presented in the second position. The procedure of the interviews was similar to that used by Wade, Garry, Read, and Lindsay (2002). At the start of Interview 1, children were told that we were interested in their memories for events that had happened when they were 4 years old. Children were instructed to report everything they remembered about the events. In the prevalence information condition, they were told that, to help them remember the events, they would be provided with a newspaper article. Subsequently, the interviewer read out the article to the child. Children who did not describe details of the target event were told that 'many people can't recall certain events because they haven't thought about them for such a long time. Please concentrate and try again'. If they still did not recall any details, the interviewer made use of context reinstatement and guided imagery. The purpose of these retrieval techniques was to take the children mentally back to the scene of the event.
Specifically, children were told to close their eyes, and they were asked to think about their feelings, who was with them, and about the time of the year. After this, children were asked again to recall any details about the event. If they still did not come up with details, the next narrative was presented or the interview was stopped. At the end of Interview 1, children were asked to think about the events every day until the next interview, and they were instructed not to talk with others about the events. Parents were asked not to discuss these events with their children. Interview 2 was similar to Interview 1. At the end of Interview 2, children were debriefed using ethical guidelines for false memory research with children (Goodman, Quas, & Redlich, 1998).

RESULTS AND DISCUSSION

A considerable number of children were extremely surprised during the debriefing when they were told that the false event did not happen to them. For example, one 8-year old child responded 'It really did happen', whereas another said 'I really can remember seeing the UFO'. After the debriefing, 39% (n = 13) of the children remained absolutely confident that they had experienced the false events. We debriefed these children until they understood the events were false. Together, these findings suggest that the false memories in this study were not the result of children falsely assenting or trying to please the interviewer.

True events

True memories were categorized as either remembered or not remembered. To be categorized as remembered, children had to report at least two of the three personal details correctly. Children's true recall was near ceiling: they remembered 88 (97%) events at Interview 1 and 89 (98%) events at Interview 2, χ²(1) = 0.07, n.s.

False events

For the false events, two independent judges classified each memory report as no false memory, images but not memories, or false memory, according to criteria used by Lindsay, Hagen, Read, Wade, and Garry (2004).
Appl. Cognit. Psychol. 23: 115–125 (2009). DOI: 10.1002/acp

If a child attempted to recall the false event but did not have any memory of the event, or did not report any details beyond the false description, the report was categorized as no false memory. A report was judged as an image when children speculated about details and described images related to the false events. For example, one child reported: 'I think I almost choked on a candy on the birthday of Mauk. I am not sure. It was not a pleasant feeling'. To be classified as a false memory, children had to indicate that they remembered the event and provide details beyond those mentioned in the narrative, but related to the narrative. To give an example of such a detail, one child stated that he remembered being taken to the UFO through a blue beam of light. If children stated that they thought the event and/or certain details could have happened, this was not scored as a false memory. Furthermore, to minimize the effect of demand characteristics, direct responses to interviewer prompts were not classified as a false memory. The following dialogue from Interview 2 illustrates a child's false memory of the UFO abduction:

Child: 'I saw cameras and flashes and some people in the UFO'.
Interviewer: 'How many people did you see?'
Child: 'Approximately nine or ten'.
Interviewer: 'What kind of people?'
Child: 'People like me, children'.
Interviewer: 'What else did you see?'
Child: 'I saw some people, and also some blue/green puppets were passing'.

Inter-rater agreement for classification of the memory reports was high: κ = 0.92 for Interview 1 and κ = 0.94 for Interview 2. Collapsing across the conditions, at Interview 1, 33% (n = 30) of the children developed a false memory.
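The inter-rater statistic used here, Cohen's kappa, corrects raw percentage agreement for the agreement two judges would reach by chance alone. A minimal sketch of the computation (the judge codings below are hypothetical, not the study's data):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical judgments."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items both raters coded identically
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: probability both raters independently pick the same category
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of six memory reports into two categories
judge_a = ["no_fm", "no_fm", "fm", "fm", "no_fm", "fm"]
judge_b = ["no_fm", "no_fm", "fm", "no_fm", "no_fm", "fm"]
print(round(cohen_kappa(judge_a, judge_b), 3))  # → 0.667
```

A kappa near the reported 0.92–0.94 indicates agreement far above chance; values of 1.0 mean perfect agreement.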
Thirty per cent (n = 9) of these children assented to the false events immediately, that is, prior to guided imagery and context reinstatement. Thirty-six per cent of the children (n = 33), with 67% (n = 20) immediately assenting, 'remembered' the false events at Interview 2, χ²(1) = 26.61, p < .001, Cramér's V = 0.54. Some of the children who rejected the false events at Interview 2 indicated, despite the explicit instruction at Interview 1, that they had discussed the false events with their parents. The increase in false memories over time is in line with previous studies with adults and children (e.g. Lindsay et al., 2004; Strange et al., 2006; Wade et al., 2002). Furthermore, 10% (n = 9) of the children were classified as having an image of the false events at Interview 1. At Interview 2, this percentage decreased to 7% (n = 6), χ²(1) = 58.53, p < .001, Cramér's V = 0.80. Recall that the primary question in this study was whether prevalence information boosts the likelihood of plausible and implausible false memories. Table 1 shows the percentage and number of children who reported false memories as a function of interview and condition. To examine the role of age, event type, and prevalence information in the development of false memories, we conducted a logistic regression analysis with the dependent variable being false memory (0 = no false memory/images, 1 = false memory). In this analysis, we focused only on 'genuine' false memories and did not collapse across false memories and images. Although non-parametric methods such as logistic regression often lack the statistical power to detect interactions (Sawilowsky, 1990), there are four important points to note about these data. First, the only significant interaction found was an Age × Prevalence information interaction at Interview 1.
Prevalence information enhanced the development of 7–8 year old children's false memories but not 11–12 year old children's false memories, and this effect occurred at Interview 1 (B = 2.16, SE = 0.96,
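The χ²(1) and Cramér's V statistics reported above can be reproduced for any 2×2 contingency table with a short helper; for a 2×2 table, Cramér's V reduces to the phi coefficient. A minimal sketch (the example counts are hypothetical, not taken from Table 1):

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (1 df) and Cramér's V for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Shortcut formula for a 2x2 table, equivalent to summing (O - E)^2 / E
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    v = math.sqrt(chi2 / n)  # for a 2x2 table this equals the phi coefficient
    return chi2, v

# Hypothetical counts: rows = two conditions, columns = false memory yes/no
chi2, v = chi_square_2x2(10, 20, 20, 10)
print(round(chi2, 2), round(v, 2))  # → 6.67 0.33
```

Note that because the same children were interviewed twice, a test for paired proportions (e.g. McNemar's test) would be the matching design for the Interview 1 vs. Interview 2 comparison; the helper above only illustrates how the reported effect-size values are scaled.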