Wednesday, July 31, 2019

Employment and Unemployment in the 1930s

The Great Depression is to economics what the Big Bang is to physics. As an event, the Depression is largely synonymous with the birth of modern macroeconomics, and it continues to haunt successive generations of economists. With respect to labor and labor markets, its stylized facts include wage rigidity, persistently high unemployment rates, and long-term joblessness. Traditionally, aggregate time series have provided the econometric grist for distinguishing explanations of the Great Depression.

Recent research on labor markets in the 1930s, however, has shifted attention from aggregate to disaggregate time series and towards microeconomic evidence. This shift in focus is motivated by two factors. First, disaggregated data provide many more degrees of freedom than the decade or so of annual observations associated with the Depression, and thus may prove helpful in distinguishing macroeconomic explanations. Second, disaggregation has revealed aspects of economic behavior hidden in the time series but which may be essential to their proper interpretation and, in any case, are worthy of study in their own right.

Although the substantive findings of recent research are too new to judge their permanent significance, I believe that the shift towards disaggregated analysis is an important contribution. The paper begins by reviewing the conventional statistics of the United States labor market during the Great Depression and the paradigms used to explain them. It then turns to recent studies of employment and unemployment using disaggregated data of various types. The paper concludes with discussions of research on other aspects of labor markets in the 1930s and on a promising source of microdata for future work.

My analysis is confined to research on the United States; those interested in an international perspective on labor markets might begin with Eichengreen and Hatton's chapter in their edited volume, Interwar Unemployment in International Perspective, and the various country studies in that volume. I begin by reviewing two standard series of unemployment rates, Stanley Lebergott's and Michael Darby's, and an index of real hourly earnings in manufacturing compiled by the Bureau of Labor Statistics (BLS).

The difference between Lebergott's and Darby's series, which is examined later in the paper, concerns the treatment of persons with so-called "work relief" jobs. For Lebergott, persons on work relief are unemployed, while Darby counts them as employed. Between 1929 and 1933 the unemployment rate increased by over 20 percentage points, according to the Lebergott series, or by 17 percentage points, according to Darby's series. For the remainder of the decade, the unemployment rate stayed in, or hovered around, double digits. On the eve of America's entry into World War Two, between 9.5 and 14.6 percent of the labor force was out of work, depending on how unemployment is measured. In addition to high levels of unemployment, the 1930s witnessed the emergence of widespread and persistent long-term unemployment (unemployment durations longer than one year) as a serious policy problem. According to a Massachusetts state census taken in 1934, fully 63 percent of unemployed persons had been unemployed for a year or more. Similar amounts of long-term unemployment were observed in Philadelphia in 1936 and 1937.

Given these patterns of unemployment, the behavior of real wages has proven most puzzling. Between 1929 and 1940 annual changes in real wages and unemployment were positively correlated.
Real wages rose by 16 percent between 1929 and 1932, while the unemployment rate ballooned from 3 to 23 percent. Real wages remained high throughout the rest of the decade, although unemployment never dipped below 9 percent, no matter how it is measured. From this information, the central questions appear to be: Why did unemployment remain persistently high throughout the decade? How can unemployment rates in excess of 10 to 20 percent be reconciled with the behavior of real wages, which were stable or increasing? One way of answering these questions is to devise aggregative models consistent with the time series, and I briefly review these attempts later in the paper. Before doing so, however, it is important to stress that the aggregate statistics are far from perfect. No government agency in the 1930s routinely collected labor force information analogous to that provided by today's Current Population Survey.

The unemployment rates just discussed are constructs, the differences between intercensal estimates of labor force participation rates and employment-to-population ratios. Because unemployment is measured as a residual, relatively small changes in the labor force or employment counts can markedly affect the estimated unemployment rate. The dispute between Darby and his critics over the labor force classification of persons on work relief is a manifestation of this problem. Although some progress has been made on measurement issues, there is little doubt that further refinements to the aggregate unemployment series would be beneficial. Stanley Lebergott has critically examined the reliability of BLS wage series from the 1930s. The BLS series drew upon a fixed group of manufacturing establishments reporting for at least two successive months. Lebergott notes several biases arising from this sampling method. Workers who were laid off, he claims, were less productive and had lower wages than average. Firms that went out of business were smaller, on average, than firms that survived, and tended to have lower average wages.

In addition, the BLS oversampled large firms, and Lebergott suspects that large firms were more adept at selectively laying off lower-productivity labor; more willing to deskill, that is, reassign able employees to less-skilled jobs; and more likely to give able employees longer work periods. A rough calculation suggests that accounting for these biases would produce an aggregate decline in nominal wages between 1929 and 1932 as much as 48 percent larger than that measured by the BLS series.

Although the details of Lebergott's calculation are open to scrutiny, the research discussed elsewhere in the paper suggests that he is correct about the existence of biases in the BLS wage series. For much of the period since World War Two, most economists blamed persistent unemployment on wage rigidity. The demand for labor was a downward sloping function of the real wage, but since nominal wages were insufficiently flexible downward, the labor market in the 1930s was persistently in disequilibrium.

Labor supply exceeded labor demand, with mass unemployment the unfortunate consequence. Had wages been more flexible, this viewpoint holds, employment would have been restored and the Depression averted. The frontal attack on this conventional wisdom came from Robert E. Lucas and Leonard Rapping. The original Lucas-Rapping set-up continued to view current labor demand as a negative function of the current real wage.
Current labor supply was a positive function of the real wage and the expected real interest rate, but a negative function of the expected future wage.

If workers expect higher real wages in the future or a lower real interest rate, current labor supply would be depressed, employment would fall, unemployment would rise, and real wages would increase. Lucas and Rapping offer an unemployment equation, relating the unemployment rate to actual versus anticipated nominal wages, and actual versus anticipated price levels. Al Rees argued that the Lucas-Rapping model was unable to account for the persistence of high unemployment coincident with stable or rising real wages. Lucas and Rapping conceded defeat for the period 1933 to 1941, but claimed victory for 1929 to 1933.

As Ben Bernanke pointed out, however, their victory rests largely on the belief that expected real interest rates fell between 1929 and 1933, while "ex post, real interest rates in 1930-33 were the highest of the century". Because nominal interest rates fell sharply between 1929 and 1933, whether expected real rates fell hinges on whether deflation — which turned out to be considerable — was unanticipated. Recent research by Steven Cecchetti suggests that the deflation was, at least in part, anticipated, which appears to undercut Lucas and Rapping's reply.

In a controversial paper aimed at rehabilitating the Lucas-Rapping model, Michael Darby redefined the unemployment rate to exclude persons who held work relief jobs with the Works Progress and Work Projects Administrations (the WPA) or other federal and state agencies. The convention of the era, followed by Lebergott, was to count persons on work relief as unemployed. According to Darby, however, persons with work relief jobs were "employed" by the government: "From the Keynesian viewpoint, labor voluntarily employed on contracyclical ... government projects should certainly be counted as employed. On the search approach to unemployment, a person who accepts a job and withdraws voluntarily from the activity of search is clearly employed." The exclusion of persons on work relief drastically lowers the aggregate unemployment rate after 1935. In addition to modifying the definition of unemployment, Darby also redefined the real wage to be the average annual earnings of full-time employees in all industries. With these changes, the fit of the Lucas-Rapping unemployment equation is improved, even for 1934 to 1941. However, Jonathan Kesselman and N. E. Savin later showed that the improved fit was largely the consequence of Darby's modified real wage series, not the revised unemployment rate. Thus, for the purpose of empirically testing the Lucas-Rapping model, the classification of WPA workers as employed or unemployed is not crucial.

Returning to the questions posed above, New Deal legislation has frequently been blamed for the persistence of high unemployment and the perverse behavior of real wages. In this regard, perhaps the most important piece of legislation was the National Industrial Recovery Act (NIRA) of 1933.

The National Recovery Administration (NRA), created by the NIRA, established guidelines that raised nominal wages and prices, and encouraged higher levels of employment through reductions in the length of the workweek (worksharing). An influential study by Michael Weinstein econometrically analyzed the impact of the NIRA on wages.
Using aggregate monthly data on hourly earnings in manufacturing, Weinstein showed that the NIRA raised nominal wages directly through its wage codes and indirectly by raising prices.

The total impact was such that "[i]n the absence of the NIRA, average hourly earnings in manufacturing would have been less than thirty-five cents by May 1935 instead of its actual level of almost sixty cents (assuming unemployment to have been unaltered)". It is questionable, however, whether the NIRA really had this large an impact on wages. Weinstein measured the direct effect of the codes by comparing monthly wage changes during the NIRA period (1933-35) with wage changes during the recovery phase (1921-23) of the post-World War One recession (1920-21), holding constant the level of unemployment and changes in wholesale prices.

Data from the intervening years (1924-1932) or after the NIRA period were excluded from his regression analysis (p. 52). In addition, Weinstein's regression specification precludes the possibility that reductions in weekly hours (worksharing), some of which occurred independently of the NIRA, had a positive effect on hourly earnings. A recent paper using data from the full sample period and allowing for the effect of worksharing found a positive but much smaller impact of the NIRA on wages (see the discussion of Bernanke's work later in the paper).

Various developments in neo-Keynesian macroeconomics have recently filtered into the discussion. Martin Baily emphasizes the role of implicit contracts in the context of various legal and institutional changes during the 1930s. Firms did not aggressively cut wages when unemployment was high early in the 1930s because such a policy would hurt worker morale and the firm's reputation, incentives that were later reinforced by New Deal legislation. Efficiency wages have been invoked in a provocative article by Richard Jensen.

Beginning sometime after the turn of the century, large firms slowly began to adopt bureaucratic methods of labor relations. Policies were "designed to identify and keep the more efficient workers, and to encourage other workers to emulate them." Efficiency wages were one such device, which presumably contributed to stickiness in wages. The trend towards bureaucratic methods accelerated in the 1930s. According to Jensen, firms surviving the initial downturn used the opportunity to lay off their least productive workers, but a portion of the initial decline in employment occurred among firms that went out of business.

Thus, when expansion occurred, firms had their pick of workers who had been laid off. Personnel departments used past wage histories as a signal, and higher-wage workers were a better risk. Those with few occupational skills, the elderly (who were expensive to retrain) and the poorly educated faced enormous difficulties in finding work. After 1935 the "reserve army" of long-term unemployed did not exert much downward pressure on nominal wages because employers simply did not view the long-term unemployed as substitutes for the employed at virtually any wage.

A novel feature of Jensen's argument is its integration of microeconomic evidence on the characteristics of the unemployed with macroeconomic evidence on wage rigidity. Other circumstantial evidence is in its favor, too. Productivity growth was surprisingly strong after 1932, despite severe weakness in capital investment and a slowdown in innovative activity.
The rhetoric of the era, that "higher wages and better treatment of labor would improve labor productivity", may be the correct explanation.

If the reserve army hypothesis were true, the wages of unskilled workers, who were disproportionately unemployed, should have fallen relative to the wages of skilled and educated workers, but there is no indication that wage differentials were wider overall in the 1930s than in the 1920s. It remains an open question, however, whether the use of efficiency wages was as widespread as Jensen alleges, and whether efficiency wages can account empirically for the evolution of productivity growth in the 1930s. In brief, the macro studies have not settled the debate over the proper interpretation of the aggregate statistics.

This state of affairs has much to do with the (supreme) difficulty of building a consensus macro model of the depression economy. But it is also a consequence of the level of aggregation at which empirical work has been conducted. The problem is partly one of sample size, and partly a reflection of the inadequacies of discussing these issues using the paradigm of a representative agent. This being the case, I turn next to disaggregated studies of employment and unemployment. In a conventional short-run aggregate production function, the labor input is defined to be total person-hours.

For the postwar period, temporal variation in person-hours is overwhelmingly due to fluctuations in employment. However, for the interwar period, variations in the length of the workweek account for nearly half of the monthly variance in the labor input. Declines in weekly hours were deep, prolonged, and widespread in the 1930s. The behavior of real hourly earnings, however, may not have been independent of changes in weekly hours. This insight motivates Ben Bernanke's analysis of employment, hours, and earnings in eight pre-World War Two manufacturing industries.

The (industry-specific) supply of labor is described by an earnings function, which gives the minimum weekly earnings required for a worker to supply a given number of hours per week. In Bernanke's formulation, the earnings function is convex in hours and also discontinuous at zero hours (the discontinuity reflects fixed costs of working or switching industries). Production depends separately on the number of workers and weekly hours, and on nonlabor inputs. Firms are not indifferent "between receiving one hour of work from eight different workers and receiving eight hours from one worker." A reduction in product demand causes the firm to cut back employment and hours per week. The reduction in hours means more leisure for workers, but less pay per week. Eventually, as weekly hours are reduced beyond a certain point, hourly earnings rise. Further reductions in hours cannot be matched one for one by reductions in weekly earnings. When hourly earnings increase in this way, the real wage appears to be countercyclical. To test the model, Bernanke uses monthly, industry-level data compiled by the National Industrial Conference Board covering the period 1923 to 1939.

The specification of the earnings function (describing the supply of labor) incorporates a partial adjustment of wages to prices, while the labor demand equation incorporates partial adjustment of current demand to desired demand.
Except in one industry (leather), the industry demand for workers falls as real product wages rise; industry demands for weekly hours fall as the marginal cost to the firm of varying weekly hours rises; and industry labor supply is a positive function of weekly earnings and weekly hours.

The model is used to argue that the NIRA lowered weekly hours and raised weekly earnings and employment, although the effects were modest. In six of the industries (the exceptions were shoes and lumber), increased union influence after 1935 (measured with a proxy variable of days idled by strikes) raised weekly earnings by 10 percent or more. Simulations revealed that allowing for full adjustment of nominal wages to prices resulted in a poor description of the behavior of real wages, but no deterioration in the model's ability to explain employment and hours variation.

Whatever the importance of sticky nominal wages in explaining real wage behavior, the phenomenon "may not have had great allocative significance" for employment. In a related paper, Bernanke and Martin Parkinson use an expanded version of the NICB data set to explore the possibility that "short-run increasing returns to labor", or procyclical labor productivity, characterized co-movements in output and employment in the 1930s. Using their expanded data set, Bernanke and Parkinson estimate regressions of the change in output on the change in labor input, now defined to be total person-hours.

The coefficient on the change in the labor input is the key parameter; if it exceeds unity, then short-run increasing returns to labor are present. Bernanke and Parkinson find that short-run increasing returns to labor characterized all but two of the industries under study (petroleum and leather). The estimates of the labor coefficient are essentially unchanged if the sample is restricted to just the 1930s. Further, a high degree of correlation (r = 0.9) appears between interwar and postwar estimates of short-run increasing returns to labor for a matched sample of industries.

Thus, the procyclical nature of labor productivity appears to be an accepted fact for both the interwar and postwar periods. One explanation of procyclical productivity, favored by real business cycle theorists, emphasizes technology shocks. Booms are periods in which technological change is unusually brisk, and labor supply increases to take advantage of the higher wages induced by temporary gains in productivity (caused by the outward shift in production functions).

In Bernanke and Parkinson's view, however, the high correlation between the pre- and post-war estimates of short-run increasing returns to labor poses a serious problem for the technological shocks explanation. The high correlation implies that the "real shocks hitting individual industrial production functions in the interwar period accounted for about the same percentage of employment variation in each industry as genuine technological shocks hitting industrial production functions in the post-war period". However, technological change per se during the Depression was concentrated in a few industries and was modest overall. Further, while real shocks (for example, bank failures, the New Deal, international political instability) occurred, their effects on employment were felt through shifts in aggregate demand, not through shifts in industry production functions.
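To make the estimation idea concrete, the sketch below runs that kind of regression on invented numbers; it is purely illustrative and does not reproduce the NICB data or Bernanke and Parkinson's actual specification.

    # Illustrative sketch only: the regression idea behind "short-run increasing
    # returns to labor", using invented numbers rather than the NICB industry
    # data that Bernanke and Parkinson actually use.
    import numpy as np

    # Hypothetical log changes in output and in total person-hours for one industry
    d_log_output = np.array([0.08, -0.12, 0.05, -0.20, 0.15, 0.03, -0.07, 0.10])
    d_log_hours = np.array([0.05, -0.08, 0.03, -0.15, 0.10, 0.02, -0.05, 0.07])

    # OLS of output growth on labor-input growth (with an intercept)
    X = np.column_stack([np.ones_like(d_log_hours), d_log_hours])
    beta, *_ = np.linalg.lstsq(X, d_log_output, rcond=None)

    print(f"estimated labor coefficient: {beta[1]:.2f}")
    # A coefficient above 1 is the signature of short-run increasing returns:
    # output moves more than proportionately with the labor input over the cycle.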
Other leading explanations of procyclical productivity are true increasing returns or, popular among Keynesians, the theory of labor hoarding during economic downturns.

Having ruled out technology shocks, Bernanke and Parkinson attempt to distinguish between true increasing returns and labor hoarding. They devise two tests, both of which involve testing whether proxies for labor utilization can be excluded from their regressions of industry output. If true increasing returns were present, the observed labor input captures all the relevant information about variations in output over the cycle. But if labor hoarding were occurring, the rate of labor utilization, holding employment constant, should account for output variation.

Their results are mixed, but are mildly in favor of labor hoarding. Although Bernanke's modeling effort is of independent interest, the substantive value of his and Parkinson's research is enhanced considerably by disaggregation to the industry level. It is obvious from their work that industries in the 1930s did not respond identically to decreases in output demand. However, further disaggregation to the firm level can produce additional insights. Bernanke and Parkinson assume that movements in industry aggregates reflect the behavior of a representative firm.

But, according to Lebergott (1989), much of the initial decline in output and employment occurred among firms that exited. Firms that left, and new entrants, however, were not identical to firms that survived. These points are well illustrated in Timothy Bresnahan and Daniel Raff's study of the American motor vehicle industry. Their database consists of manuscript census returns of motor vehicle plants in 1929, 1931, 1933, and 1935. By linking the manuscript returns from year to year, Bresnahan and Raff have created a panel dataset capable of identifying plants that exited, surviving plants, and new plants.

Plants that exited between 1929 and 1933 had lower wages and lower labor productivity than plants that survived. Between 1933 and 1935 average wages at exiting plants and new plants were slightly higher than at surviving plants. Output per worker was still relatively greater at surviving plants than at new entrants, but the gap was smaller than between 1929 and 1933. Roughly a third of the decline in the industry's employment between 1929 and the trough in 1933 occurred in plant closures. The vast majority of these plant closures were permanent.

The shakeout of inefficient firms after 1929 ameliorated the decline in average labor productivity in the industry. Although industry productivity did decline, productivity in 1933 would have been still lower if all plants had continued to operate. During the initial recovery phase (1933-35) about 40 percent of the increase in employment occurred in new plants. Surviving plants were more likely to use mass-production techniques; the same was true of new entrants. Mass-production plants differed sharply from their predecessors (custom production plants) in the skill mix of their workforces and in labor relations.

In the motor vehicle industry, the early years of the Depression were an "evolutionary event", permanently altering the technology of the representative firm. While the representative firm paradigm apparently fails for motor vehicles, it may not for other industries. Some preliminary work by Amy Bertin, Bresnahan, and Raff on another industry, blast furnaces, is revealing on this point.
Blast furnaces were subject to increasing returns, and the market for the product (molten iron) was highly localized.

For this industry, reductions in output during a cyclical trough are reasonably described by a representative firm, since "localized competition prevented efficient reallocation of output across plants" and therefore the compositional effects occurring in the auto industry did not happen. These analyses of firm-level data have two important implications for studies of employment in the 1930s. First, aggregate demand shocks could very well have changed average technological practice through the process of exit and entry at the firm level.

Thus Bernanke and Parkinson's rejection of the technological shocks explanation of short-run increasing returns, which is based in part on their belief that aggregate demand shocks did not alter industry production functions, may be premature. Second, the empirical adequacy of the representative firm paradigm is apparently industry-specific, depending on industry structure, the nature of product demand, and initial (that is, pre-Depression) heterogeneity in firm sizes and costs.

Such "phenomena are invisible in industry data," and can only be recovered from firm-level records, such as the census manuscripts. Analyses of industry and firm-level data are one way to explore heterogeneity in labor utilization. Geography is another. A focus on national or even industry aggregates obscures the substantial spatial variation in bust and recovery that characterized the 1930s. Two recent studies show how spatial variation suggests new puzzles about the persistence of the Depression as well as provides additional degrees of freedom for discriminating between macroeconomic models.

State-level variation in employment is the subject of an important article by John Wallis. Using data collected by the Bureau of Labor Statistics, Wallis has constructed annual indices of manufacturing and nonmanufacturing employment for states from 1930 to 1940. Wallis' indices reveal that declines in employment between 1930 and 1933 were steepest in the East North Central and Mountain states; employment actually rose in the South Atlantic states, however, once an adjustment is made for industry mix.

The South also did comparatively well during the recovery phase of the Depression (1933-1940). Wallis tests whether the southern advantage during the recovery phase might reflect lower levels of unionization and a lower proportion of employment affected by the passage of the Social Security Act (1935), but controlling for percent unionized and percent in covered employment in a regression of employment growth does not eliminate the regional gap. "What comes through clearly," according to Wallis, "is that the [employment] effects of the Depression varied considerably throughout the nation," and a convincing explanation of the South/non-South difference remains an open question. Curtis Simon and Clark Nardinelli exploit variation across cities to put forth a particular interpretation of the economic downturn in the early 1930s. Specifically, they study the empirical relationship between "industrial diversity" and city-level unemployment rates before and after World War Two.

Industrial diversity is measured by a city-specific Herfindahl index of industry employment shares. The higher the value of the index, the greater is the concentration of employment in a small number of industries.
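As a minimal illustration of the index itself (the employment counts below are invented for the sketch, not Simon and Nardinelli's data):

    # Herfindahl index of industry employment shares: sum of squared shares.
    def herfindahl(employment_by_industry):
        total = sum(employment_by_industry)
        shares = [e / total for e in employment_by_industry]
        return sum(s * s for s in shares)

    # A specialized city (most jobs in one industry) versus a diversified one
    specialized = [9000, 500, 300, 200]
    diversified = [2500, 2500, 2500, 2500]

    print(round(herfindahl(specialized), 3))   # 0.814 -- employment concentrated
    print(round(herfindahl(diversified), 3))   # 0.25  -- employment evenly spread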
Using data from the 1930 federal census and the 1931 Special Census of Unemployment, Simon and Nardinelli show that unemployment rates and the industrial diversity index were positively correlated across cities at the beginning of the Depression.

Analysis of similar census data for the post-World War Two period reveals a negative correlation between city unemployment rates and industrial diversity. Simon and Nardinelli explain this finding as the outcome of two competing effects. In normal economic circumstances, a city with a more diverse range of industries should have a lower unemployment rate (the "portfolio" effect), because industry-specific demand shocks will not be perfectly correlated across industries and some laid-off workers will find ready employment in expanding industries.

The portfolio effect may fail, however, during a large aggregate demand shock (the early 1930s) if firms and workers are poorly informed, misperceiving the shock to be industry-specific rather than a general reduction in demand. Firms in industrially diverse cities announce selective layoffs rather than reduce wages, because they believe that across-the-board wage cuts would cause too many workers to quit (workers in industrially diverse cities think they can easily find a job in another industry elsewhere in the same city), thus hurting production.

Firms in industrially specialized cities, however, are more likely to cut wages than employment because they believe lower wages "would induce relatively fewer quits" than in industrially diverse cities. Thus, Simon and Nardinelli conclude, wages in the early 1930s were more rigid in industrially diverse cities, producing the positive correlation between industrial diversity and unemployment. Improvements in the quantity, quality, and timeliness of economic information, they conjecture, have caused the portfolio effect to dominate after World War Two, producing the postwar negative correlation.

Although one can question the historical relevance of Simon and Nardinelli's model, and the specifics of their empirical analysis, their paper is successful in demonstrating the potential value of spatial data in unraveling the sources of economic downturn early in the Depression. Postwar macroeconomics has tended to proceed as if aggregate unemployment rates applied to a representative worker, with a certain percentage of that worker's time not being used. As a result, disaggregated evidence on unemployment has been slighted.

Such evidence, however, can provide a richer picture of who was unemployed in the 1930s, a better understanding of the relationship between unemployment and work relief, and further insights into macroeconomic explanations of unemployment. To date, the source that has received the most attention is the public use tape of the 1940 census, a large, random sample of the population in 1940. The 1940 census is a remarkable historical document. It was the first American census to inquire about educational attainment, wage and salary income, and weeks worked in the previous year; and the first to use the "labor force week" concept in soliciting information about labor force status. Eight labor force categories are reported, including whether persons held work relief jobs during the census week (March 24-30, 1940).
For persons who were unemployed or who held a work relief job at the time of the census, the number of weeks of unemployment since the person last held a private or nonemergency government job of one month or longer was recorded.

The questions on weeks worked and earnings in 1939 did not treat work relief jobs differently from other jobs. That is, earnings from, and time spent on, work relief are included in the totals. I have used the 1940 census sample to study the characteristics of unemployed workers and of persons on work relief, and the relationship between work relief and various aspects of unemployment.

It is clear from the census tape that unemployed persons who were not on work relief were far from a random sample of the labor force.

For example, the unemployed were typically younger, or older, than the average employed worker (unemployment followed a U-shaped pattern with respect to age); the unemployed were more often nonwhite; and they were less educated and had fewer skills than employed persons, as measured by occupation. Such differences tended to be starkest for the long-term unemployed (those with unemployment durations longer than a year); thus, for example, the long-term unemployed had even less schooling than the average unemployed worker.

Although the WPA drew its workers from the ranks of the unemployed, the characteristics of WPA workers did not merely replicate those of other unemployed persons. For example, single men, the foreign-born, high school graduates, urban residents, and persons living in the Northeast were underrepresented among WPA workers, compared with the rest of the unemployed. Perhaps the most salient difference, however, concerns the duration of unemployment. Among those on work relief in 1940, roughly twice as large a share had been without a non-relief job for a year or longer as among unemployed persons not on work relief.

The fact that the long-term unemployed were concentrated disproportionately on work relief raises an obvious question. Did the long-term unemployed find work relief jobs after being unemployed for a long time, or did they remain with the WPA for a long time? The answer appears to be mostly the latter. Among nonfarm males ages 14 to 64 on work relief in March 1940 and reporting 65 weeks of unemployment (that is, the first quarter of 1940 and all of 1939), close to half worked 39 weeks or more in 1939. Given the census conventions, they had to have been working more or less full time for the WPA.

For reasons that are not fully clear, the incentives were such that a significant fraction of persons who got on work relief stayed on. One possible explanation is that some persons on work relief preferred the WPA, given prevailing wages, perhaps because their relief jobs were more stable than the non-relief jobs (if any) available to them. Or, as one WPA worker put it: "Why do we want to hold onto these [relief] jobs? ... [W]e know all the time about persons ... just managing to scrape along ... My advice, Buddy, is better not take too much of a chance. Know a good thing when you got it." Alternatively, working for the WPA may have stigmatized individuals, making them less desirable to non-relief employers the longer they stayed on work relief. Whatever the explanation, the continuous nature of WPA employment makes it difficult to believe that the WPA did not reduce, in the aggregate, the amount of job search by the unemployed in the late 1930s.
In addition to the duration of unemployment experienced by individuals, the availability of work relief may have dampened the increase in labor supply of secondary workers in households in which the household head was unemployed, the so-called "added worker" effect.

Specifically, wives of unemployed men not on work relief were much more likely to participate in the labor force than wives of men who were employed at non-relief jobs. But wives of men who worked for the WPA were far less likely to participate in the labor force than wives of otherwise employed men. The relative impacts were such that, in the aggregate, no added worker effect can be observed as long as persons on work relief are counted among the unemployed.

Although my primary goal in analyzing the 1940 census sample was to illuminate features of unemployment obscured by the aggregate time series, the results bear on several macroeconomic issues. First, the heterogeneous nature of unemployment implies that a representative agent view of aggregate unemployment cannot be maintained for the late 1930s. Whether the view can be maintained for the earlier part of the Depression is not certain, but the evidence presented by Jensen and in my own work suggests that it cannot.

Because the evolution of the characteristics of the unemployed over the 1930s bears on the plausibility of various macroeconomic explanations of unemployment (Jensen's use of efficiency wage theory, for example), further research is clearly desirable. Second, the heterogeneous nature of unemployment is consistent with Lebergott's claim that aggregate BLS wage series for the 1930s are contaminated by selection bias, because the characteristics that affected the likelihood of being employed (for example, education) also affected a person's wage.

Again, a clearer understanding of the magnitude and direction of bias requires further work on how the characteristics of the employed and unemployed changed as the Depression progressed. Third, macroeconomic analyses of the persistence of high unemployment should not ignore the effects of the WPA — and, more generally, those of other federal relief policies — on the economic behavior of the unemployed.

In particular, if work relief was preferred to job search by some unemployed workers, the WPA may have displaced some growth in private sector employment that would have occurred in its absence.

An estimate of the size of this displacement effect can be inferred from a recent paper by John Wallis and Daniel Benjamin. Wallis and Benjamin estimate a model of labor supply, labor demand, and per capita relief budgets using panel data for states from 1933 to 1939. Their coefficients imply that elimination of the WPA starting in 1937 would have increased private sector employment by 2.9 percent by 1940, which corresponds to about 49 percent of persons on work relief in that year. Displacement was not one-for-one, but it may not have been negligible.

My discussion thus far has emphasized the value of disaggregated evidence in understanding certain key features of labor markets in the 1930s — the behavior of wages, employment and unemployment — because these are of greatest general interest to economists today. I would be remiss, however, if I did not mention other aspects of labor markets examined in recent work. What follows is a brief, personal selection from a much larger literature.
The Great Depression left its mark on racial and gender differences.

From 1890 to 1930 the incomes of black men increased slightly relative to the incomes of white men, but the trend in relative incomes reversed direction in the 1930s. Migration to the North, a major avenue of economic advancement for Southern blacks, slowed appreciably. There is little doubt that, if the Depression had not happened, the relative economic status of blacks would have been higher on the eve of World War Two. Labor force participation by married women was hampered by "marriage bars", implicit or explicit regulations which allowed firms to dismiss single women upon marriage or which prohibited the hiring of married women. Although marriage bars existed before the 1930s, their use spread during the Depression, possibly because social norms dictated that married men were more deserving of scarce jobs than married women. Although they have not received as much attention from economists, some of the more interesting effects of the Depression were demographic or life-cycle in nature. Marriage rates fell sharply in the early 1930s, and fertility rates remained low throughout the decade.

An influential study by the sociologist Glen Elder, Jr. traced the subsequent work and life histories of a sample of individuals growing up in Oakland, California in the 1930s. Children from working-class households whose parents suffered from prolonged unemployment during the Depression had lower educational attainment and less occupational mobility than their peers who were not so deprived. Similar findings were reported by Stephan Thernstrom in his study of occupational mobility among Boston men.

The Great Depression was the premier macroeconomic event of the twentieth century, and I am not suggesting we abandon macroeconomic analysis of it. I am suggesting, however, that an exclusive focus on aggregate labor statistics runs two risks: the facts derived may be artifacts, and much of what may be interesting about labor market behavior in the 1930s is rendered invisible. The people and firms whose experiences make up the aggregates deserve to be studied in their diversity, not as representative agents.

I have mentioned census microdata, such as the public use sample of the 1940 census or the manufacturing census manuscripts collected by Bresnahan and Raff, in this survey. In closing, I would like to highlight another source that could be examined in future work. The source is the "Study of Consumer Purchases in the United States" conducted by the BLS in 1935-36. Approximately 300,000 households, chosen from a larger random sample of 700,000, supplied basic survey data on income and housing, with 20 percent furnishing additional information.

The detail is staggering: labor supply and income of all family members, from all sources (on a quarterly basis); personal characteristics (for example, occupation, age, race); family composition; housing characteristics; and a long list of durable and non-durable consumption expenditures (the 20 percent sample). Because the purpose of the study was to provide budget weights to update the CPI, only families in "normal" economic circumstances were included (this is the basis for the reduction in sample size from 700,000 to 300,000).

Thus, for example, persons whose wages were very low or who experienced persistent unemployment are unlikely to be included in the 1935-36 study.
A pilot sample, drawn from the original survey forms (stored at the National Archives) and containing the responses of 6,000 urban households, is available in machine-readable format from the Inter-University Consortium for Political and Social Research at the University of Michigan (ICPSR Study 8908).

Robert A. Margo
Vanderbilt University

Case: Supply Chain Management and Application Software Packages

Info from case: total revenue for the last reporting period = 110 million. The CIO reviewed the following three implementation strategies:
- Classic disintermediation – removal of intermediaries in a supply chain; connects suppliers directly with customers.
- Remediation – working more closely with existing middlemen partners; this strategy could be affected by high contracting risks.
- Network – building alliances and partnerships with both existing and new suppliers and distributors, involving a complex set of relationships. Networks tend to reduce search costs for obtaining information, products and services.

The CIO selected remediation because it best fits the firm's goal of simplifying data sharing throughout the supply chain. The firm also had a long-term and positive relationship with its primary distributors, which would ameliorate the high contracting risk. "The firm purchased stock woods from a number of producers and processed them to meet specific customer specifications. Approx. 60 percent of WoodSynergy sales were in high-end furniture."

Problems

1 – Choice of implementation plan is wrong – LONG TERM
- The CIO chose remediation because it best fit the firm's goal of simplifying data sharing throughout the supply chain; furthermore, the CIO noted that WoodSynergy had a long-term and positive relationship with its primary distributors, which would ameliorate the high contracting risk issue.
- The best way of simplifying data sharing is eliminating any unnecessary party that the information needs to travel through.
- Remove the distributors and engage the customers directly.
- Who are we to decide how the existing distributors will feel after contracts are amended to add a new information system to the SCM that ultimately creates more overhead for them? The business model of WoodSynergy suggests that "the firm was committed to delivering information to the right people at the right time so that strategic and operational decisions were made properly and quickly."
- The benefit of going national is prevented by reliance on local distributors; if WoodSynergy engages its end users directly, it will promote better customer relationships as well as open potential national and international markets.

Causes
- Long-term relationships with distributors
- Contracts with distributors
- The CIO's decision seems biased

Alternatives
- Choose classic disintermediation
- Stay with remediation
- Choose networking

Solution: Choose classic disintermediation
• Removes the middleman.
• The middleman's share shifts to the suppliers, WoodSynergy and the customer, making the company more profitable and increasing customer loyalty.
• Efficiency – instead of suppliers shipping first to WoodSynergy and then WoodSynergy shipping the products to the customer, the supplier can ship straight to the customer.

Implementation (implementing the plan – find the need, develop the program, implement it and then evaluate it):
• Business need
• System investigation
• System analysis
• System design
• Programming and testing
• Implementation
• Operation and maintenance

2 – Prototype built – SHORT TERM problem ***
- "Due to budget and time constraints the project team chose to build a gateway prototype without addressing problems of integrity and timeliness with the system's data. The project team decided to improve the data quality at a future date."
- Customers' data needs to be secure. Period. For any duration, no matter how short.
â€Å"Two of the key drivers included in gateway design were data standardization and real-time interface† -It should be real-time interface and data integrity as aligned with Woodsynergy’s business goals. -release data standardization at a later time instead of data integrity Causes -budget -time constraint -phase 1 of prototype does not directly correlate to business goals Alternatives -cloud system from 3rd party -key drivers in phase 1 = data integrity and real-time interface/data standardization at future date/release †¢Application software packages – off the shelves. ONE MORE alt Solution: †¢Application software packages – off the shelves. oPrewritten, pre-coded application software commercially available for sale oA lot of choices, with rating/reviews from its customers/users oOther companies are already using them oSome software companies even let you try them oQuicker solution, gives the it team to work on the bigger problem or new software oIt may be cheaper than labour and resources spent building prototype that may put company`s customer`s information at risk Implementation – . Identify potential vendors 2. Determine the evaluation criteria a. Functionality of the software b. Cost and financial terms c. Vendor`s reputation – success stories/customer reviews d. System flexibility e. Security f. Required training g. Data handling h. Ease of internet interface i. User friendly 3. Evaluate ven dors and packages 4. Chose vendor and package 5. Negotiate a contract 6. Implement the software 7. Train the staff/users 3 – Project Team Questionable – Short term and Long term? *** Causes launched multiple it based supply chain management initiatives -researched how gateways are used in their business and understand the different of technology on the internet† in first few weeks – this should take a few days at most -phase 1 of prototype not aligned with business goals –decision criteria— this is what I think would be the criteria, we can discuss if you have others *** -budget – need better coaching on team goal and better planning -increase customer satisfaction -be consistent with corporate mission -Time constraint – implement fairly quickly -improve profits within acceptable risk parameters Solution – BE consistent with corporate mission Implementation †¢Be consistent with corporate mission oTrain and remind the m in every morning huddles oBefore implementing the any new plan or developing new software or making the decision to devolve a new software, correlate it with the business strategy oDelegate effectively to team members oHold them accountable – stay on top of their performance oGive the team budget – quarterly yearly or project based – so there will not be any wastages Source: /http://plato. acadiau. ca/courses/Busi/IntroBus/CASEMETHOD. html/

Tuesday, July 30, 2019

Dang It’s Him Essay

Hassan considers Amir his friend, and in Amir's eyes Hassan is more than a servant, yet Amir cannot accept him as a friend. Amir is unable to accept Hassan as a friend because Hassan is a Hazara and, in his mind, due to peer pressure, he considers Hazaras to be lower in status than he is. Amir constantly tests Hassan's loyalty because he is jealous of that loyalty and therefore wants Hassan to slip up. Amir is jealous that he does not treat Hassan with the trust of a friend that Hassan gives him, so he wants Hassan to slip up so he can feel like they are equal. He resents Hassan because of the love that Baba gives Hassan and how Baba never forgets Hassan's birthday. Baba always compares the two boys; consequently, he would mention that he is more proud of Hassan than of Amir. We begin to understand early in the novel that Amir is constantly vying for Baba's attention and often feels like an outsider in his father's life, as seen in the following passage: "He'd close the door, leave me to wonder why it was always grown-ups' time with him. I'd sit by the door, knees drawn to my chest. Sometimes I sat there for an hour, sometimes two, listening to their laughter, their chatter." Discuss Amir's relationship with Baba. After hearing Amir's story, Hassan asks, "Why did the man kill his wife? In fact, why did he ever have to feel sad to shed tears? Couldn't he have just smelled an onion?" How does this story epitomize the difference in character between Hassan and Amir? Refer to the beginning of Chapter 4. How might Baba's treatment of Ali have influenced Amir's understanding of how to treat Hassan? What moral lessons does Baba convey to Amir, and are any of them contradictory?

1. After Amir wins the kite running tournament, his relationship with Baba undergoes significant change. However, while they form a bond of friendship, Amir is still unhappy. What causes this unhappiness and how has Baba contributed to Amir's state of mind? Eventually, the relationship between the two returns to the way it was before the tournament, and Amir laments "we actually deceived ourselves into thinking that a toy made of tissue paper, glue, and bamboo could somehow close the chasm between us" (93). Discuss the significance of this passage.

2. As Amir remembers an Afghan celebration in which a sheep must be sacrificed, he talks about seeing the sheep's eyes moments before its death. "I don't know why I watch this yearly ritual in our backyard; my nightmares persist long after the bloodstains on the grass have faded. But I always watch, I watch because of that look of acceptance in the animal's eyes. Absurdly, I imagine the animal understands. I imagine the animal sees that its imminent demise is for a higher purpose" (82). Why do you think Amir recalls this memory when he witnesses Hassan's tragedy in the alleyway? Why does Amir respond the way that he does?

3. What role does Rahim Khan play in Amir's life? What are the requirements for a true friendship? How can a friendship be damaged? Make sure to refer to a specific example from your experience AND a specific example from The Kite Runner.

Monday, July 29, 2019

Mid term paper comparing and contrasting 'One Perfect Day' and 'The American Way of Death'

Mid term paper comparing and contrasting 'One Perfect Day' and 'The American Way of Death' - Term Paper Example

Take, for example, two notoriously lavish industries today: the wedding industry and the funeral industry. These two industries are discussed, respectively, in "One Perfect Day" by Rebecca Mead and "The American Way of Death" by Jessica Mitford. Both books are written exposés of the real deal behind two events in one's life, a wedding and a funeral. The former reveals the issues behind the wedding industry, which accounts for one hundred sixty billion dollars in the United States economy ("Synopsis"). The latter talks about the highly commercialized funeral service in America. Both authors highlighted the "costs" of having either of the two. It is observed that the wedding and funeral industries have become more and more expensive. In "One Perfect Day," the main topic is a wedding ceremony, which highlights two central figures, the bride and the groom. Nonetheless, the majority of the exposés are associated with the whims and caprices of the bride, from the gown to the wedding's order of events. Plausibly, the bridal gown, which is the central object, with its matching accessories such as the shoes, veil and many others, is also considered by the author in exposing the evils behind the wedding industry. Normally, in a wedding, it is the bride who initially plans everything while the groom only approves or makes some modifications. This is the normal behavior during the planning stage. In most cases, the bride and the groom hire a wedding planner to set up everything for them. The author then highlights the disadvantages of hiring a wedding planner (Mead). The author's explanation does not really dwell on the skill of the wedding planner but, instead, on the accessory role of such a person in the wedding and its correlative effect on the substantive aspect of the ceremony. Obviously, there is much to spend

Sunday, July 28, 2019

Service Quality and Operations (A REPORT) Essay

Service Quality and Operations (A REPORT) - Essay Example

The Marriott hotel empire started as a small company in 1927 in Washington, D.C., founded by John Marriott of Utah. Due to its consistent efforts, today this company serves in 67 countries and has about 3,150 lodging properties. Different operation strategies, marketing strategies, maintenance of quality, employee empowerment and customer satisfaction are the key points that have enabled the success of this organization. Marriott has adopted a rigorous marketing policy whereby the company caters to almost all market segments; that is, it serves not only the high-end class but also the business class and the budget class. According to research conducted in 2008, the company had built a strong network with its suppliers, customers and employees. It has also built a strong sense of teamwork among employees and maintained a positive and supportive management style. They regard customers as their guests, and because of this they have enforced strict quality measures and strict quality control in all their hotels and motels. Kandampully et al. (2001) report in their book that J.W. Marriott himself stated his philosophy of treating employees in the following statement: "take care of your employee and they will take care of you". Every company, whether it belongs to a manufacturing industry or a service sector, has to carry out daily operations and transactions. Because of this, operational management is extremely important.

OPERATIONAL MANAGEMENT

Not just at Marriott: there are ten basic tasks or critical decisions that every company, manager and employee has to undertake in order to effectively manage operations. The book by Heizer & Render (2006) lists these ten strategic operations management decisions, such as: service and product design, quality management, process and capacity design, location, layout design, human resource and job design, supply chain management,

Saturday, July 27, 2019

What is Branding Essay

What is Branding - Essay Example

An example of a company with a strong brand is Starbucks Café. Another company that has excelled due to its marketing strategy is McDonalds. McDonalds spends over $2 billion in advertising each year to solidify its brand value (O'Brian). One of the greatest benefits of a branding strategy is that it improves customer loyalty. Customer loyalty is a great benefit because it provides companies with a steady inflow of income. Not all products are suited to the application of a branding strategy. Commodities such as gold, silver, copper, petroleum, and rice are not suitable for a branding strategy because their prices fluctuate daily on exchanges such as the NYSE, NASDAQ, and LSE. Companies that operate in industries in which there is intense competition do not benefit from branding strategies as much as firms in other industries, but branding can be used to differentiate the company. Differentiation allows firms to operate in niche markets where branding can be effective. Another advantage of using branding to differentiate is that it reduces competition. In the case of a pricing war the use of branding is not suitable, because the cost associated with the implementation of a branding strategy will further deplete the operating margins of the company.

O'Brian, K. "How McDonalds Came Back Bigger Than Ever." The New York Times, 4 May 2012. Accessed 8 February 2013.

Friday, July 26, 2019

Explain Einstein's theory of relativity and its impact upon science and society Essay

Explain Einstein's theory of relativity and its impact upon science and society - Essay Example

He treated matter and energy as exchangeable, not distinct. In doing so, he laid the basis for controlling the release of energy from the atom. Thus, Einstein was one of the fathers of the nuclear age (Kevles, 1989). Setting out from the discoveries of the new quantum mechanics, he showed that light travels through space in quantum form (as bundles of energy). This was clearly in contradiction to the previously accepted theory of light as a wave. In effect, Einstein revived the old corpuscular theory of light, but in an entirely different way. Here light was shown to be a new kind of particle with a dual nature, simultaneously displaying the properties of a particle and a wave. This startling theory made possible the retention of all the great discoveries of 19th century optics, including spectroscopy, as well as Maxwell's equations. Einstein's discovery of the law of equivalence of mass and energy is expressed in his famous equation E = mc², which expresses the colossal energies locked up in the atom. This is the source of all the concentrated energy in the universe. The symbol 'E' represents energy (in ergs), 'm' stands for mass (in grams) and 'c' is the speed of light (in centimeters per second). The actual value of c² is 900 billion billion. That is to say, converting the energy locked up in one gram of matter will release a staggering 900 billion billion ergs. Einstein predicted that the mass of a moving object would increase at very high speeds. The discoveries of quantum mechanics demonstrated the correctness of the special theory of relativity, not only qualitatively, but quantitatively. The predictions of special relativity have been shown to correspond to the observed facts. Scientists discovered by experiment that gamma-rays could produce atomic particles, transforming the energy of light into matter. They also found that the minimum energy required to create a particle depended on its
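As a quick sanity check of the arithmetic quoted above, the snippet below evaluates the formula in CGS units (grams, centimeters per second, ergs); the rounded value of c is the only assumption made here.

    # Check of E = m * c^2 for one gram, in CGS units.
    m = 1.0                 # mass in grams
    c = 3.0e10              # speed of light, roughly 3 x 10^10 cm/s
    E = m * c ** 2          # energy in ergs
    print(f"{E:.1e} ergs")  # 9.0e+20 ergs, i.e. 900 billion billion ergs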

Which is bigger Feel the Fear or The Giant Speech Presentation

Which is bigger Feel the Fear The Giant - Speech or Presentation Example Mathematics is used in numerous ways, including describing important real-world situations, testing ideas and making predictions about the real world, among others (Berry et al. 1995, p. 24). Generally, mathematical modelling gives a procedure or method that can be used for solving certain situations and problems in mathematics. In this report, the process of mathematical modelling was intended to be used effectively in solving the problems and situations in the report (Berry et al. 1995, p. 24). The report will also use the model to analyze the given data and mathematically provide solutions to the research question. The analysis section of this report involved mathematical calculations of several problems whose solutions were found through differentiation. The first problem tackled focused on determining the difference in the altitude of each coaster. The determination was performed through manual mathematical calculations, and all the steps used in conducting this calculation are highlighted and explained appropriately in the report. The following are the solutions, including all the steps used to solve the three mathematics problems during the research: In conclusion, the report used mathematical modelling to solve and evaluate the questions asked in the report. Mathematical modelling, such as differentiation using first derivatives, is used in the report to find accurate answers. In order to approach different problems in the correct form, the mathematical modelling process was used: a prediction is tested to give the data to be formulated, and formulation gives the model, which is analyzed to give the conclusion as well as the answers to the problems. The methods used to analyze the report were applied accurately and appropriately.
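As an illustrative sketch of the differentiation step described above (the report's actual height functions are not reproduced here, so the two functions below are purely hypothetical), the peak altitude of each coaster can be found by setting the first derivative to zero:

import sympy as sp

t = sp.symbols("t", real=True)
feel_the_fear = -2 * t**2 + 28 * t     # hypothetical height model, metres
the_giant = -1.5 * t**2 + 21 * t       # hypothetical height model, metres

def peak_height(h):
    # Maximise h(t): solve h'(t) = 0 and evaluate h at the critical point(s).
    critical_points = sp.solve(sp.diff(h, t), t)
    return max(h.subs(t, point) for point in critical_points)

difference = peak_height(feel_the_fear) - peak_height(the_giant)
print(peak_height(feel_the_fear), peak_height(the_giant), difference)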

Thursday, July 25, 2019

How Ethics Provides a Standard for the Recourses of Action in the Assignment

How Ethics Provides a Standard for the Recourses of Action in the World - Assignment Example In the contemporary era, one cannot deny that globalization has been the dominating force used to exert influence and control over other countries and groups of people. To a certain extent, the universalizing drive of globalization can gravely deform the concept of universal ethics. Kant purported that universal ethics is something that is generally agreed upon by people because its principles are applicable to almost everyone (Gregor 1998, p. 47). If carefully scrutinized, the globalized condition of the 21st century indeed distorts universal ethics through the imposition of a certain standard to make it universal. Thus, it can detrimentally devalue the universality of ethical principles. To a certain extent, universal ethics becomes an imposed universal ethics. Why is this possible? It happens because of the cultural complications that come with globalization as a phenomenon. The ethical question that one must assess is whether it is reasonable to impose new cultural conventions, 'under the banner of one world, one culture', to achieve the universality being aspired to (Steger 2003). Several issues definitely arise here. Some people choose to comply with the standards imposed by globalization because of the benefits these can offer them at the moment. However, it is costly, given that one must give up certain conventions just to accommodate the latest trend in the world. The trend becomes a standard for universality in this case. Is this ethical? Yes, for globalized trends of the 21st century. However, for those countries that place so much primacy on their culture, how are they assured of preserving their innate cultural values and attitudes, which can be considered universal and morally correct for them? Thus, it can be considered ethically wrong; but, owing to the conditioning of people's mindset about what is acceptable and ethical, the incursion of globalized conventions deforms universal ethics.

Wednesday, July 24, 2019

Information Systems Security and Ethical Issues - Finance Management Assignment

Information Systems Security and Ethical Issues - Finance Management - Assignment Example Computerized information systems are becoming the de facto way to communicate business information, especially financial information. As Whitman and Mattord (2011) say, there are however many security issues which have to be addressed, ranging from internal threats to external threats from hackers. Managerial information is more of a product of the financial management department than something it collects. This information is derived from raw data from other sources such as POS and inventory data. The sources of financial information can be either primary or secondary. Primary data is data derived from direct transactions, such as POS data, while secondary data is data derived from other sources, such as internal financial resources, which include the cash flow statement, the trial balance, the income statement, etc. Financial accounting systems and principles are also useful in avoiding errors. They are designed in such a way that if an error is made, the error is detected. However, some errors (such as compensating errors) may not be detected in this way, as the sketch at the end of this excerpt illustrates. Timely: information must be timely in order to be of any use; the right information provided at the wrong time is not useful to anyone. For instance, if there is going to be a fall in demand for a specific product, getting this information in time to plan for the change in the market is very useful to the business; however, if this information comes at the wrong time, it will not be of any use and the business will still have to suffer the consequences. Relevant: information has to be relevant to a business and to a specific situation. For instance, information about a fall in market demand for cars may not be relevant to a retail store, unless there is a direct correlation between the two.
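To illustrate the error-detection point above, the snippet below is a small hypothetical sketch (not from the original assignment) of a trial-balance check: it flags a one-sided posting error but lets a compensating error pass undetected.

def trial_balance_ok(entries):
    # entries: list of (account, debit, credit) tuples, amounts in the same currency
    total_debits = sum(debit for _, debit, _ in entries)
    total_credits = sum(credit for _, _, credit in entries)
    return total_debits == total_credits

correct = [("Cash", 100, 0), ("Sales", 0, 100)]
one_sided_error = [("Cash", 100, 0), ("Sales", 0, 90)]       # one leg understated
compensating_error = [("Cash", 90, 0), ("Sales", 0, 90)]     # both legs wrong by the same amount

print(trial_balance_ok(correct))             # True
print(trial_balance_ok(one_sided_error))     # False -> error detected
print(trial_balance_ok(compensating_error))  # True  -> error slips through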

Tuesday, July 23, 2019

Organizational Ethics Essay Example | Topics and Well Written Essays - 1250 words

Organizational Ethics - Essay Example In most cases in an organization, unethical behavior is normally a result of the subordinates' actions. In order to foster a decision-making and ethical climate in an organization, it is important for the manager concerned to create and give enough freedom to the workers. As a result, the workers are likely to exhibit more loyalty to the organization, and this makes them less prone to unethical behavior such as theft. The results of a study carried out by Graham indicate that employees are more likely to be attracted to and more committed to ethical organizations. As a manager in an organization, I would devote valuable employee time to training on ethical reasoning and ethical behavior; I would insist that ethical conduct should be exhibited even in the midst of aggressive competition. This would play a critical role in creating a positive reputation for the organization. It would also enhance the ethical climate as well as improving the decision-making process. Unethical behaviors are very prominent in business settings and include a wide variety of different activities. Myer (123) states that there are limitless reasons why many people and organizations exhibit unethical behavior. However, the most prominent ones relate to one's personality and the ethical framework an individual holds. This is due to the fact that the framework may conflict with the ethical framework that the organization holds. In reference to the outlined and recent scandals, any individual is likely to fall into them. Therefore, this does not exempt me as a person, mainly because I also hold onto some ethical framework which may conflict with what others and the organization hold. It is therefore wise to devise ways in which to manage the potential ethical pitfalls in any organization, business, or company. To start with, individuals in prominent positions in an organization should encourage ethical consciousness in a concerned and supportive manner. Secondly, a clear written policy should be given to all individuals involved; they should carefully read and sign it to indicate that they have clearly understood the terms and conditions and that they are ready to abide by the requirements. This would play a critical role in the promotion of ethical behavior. In most instances, the personal ethics of leaders either positively or negatively impact the ethical behavior of an organization. They therefore play a big role in determining the kind of ethical behavior that is portrayed in an organization. In relation to the leaders, my ethical framework may either positively or negatively influence my organization. Myer, Craig. Contemporary Business. 2nd ed. New York: Oxford UP, 2000. The framework for the ethical decision-making process includes 10 stages which have been grouped into 5 steps (Greg 65). It provides a well-defined direction which one is expected to follow in order to achieve the best results during the decision-making process and the formulation of ethical behavior. The first step involves recognition of an ethical issue, in which the facts are collected. Evaluation of alternative actions follows, whereby a decision and a test are made. Lastly, one is expected to

Monday, July 22, 2019

Cougar or Coyote Essay Example for Free

Cougar or Coyote Essay The trickster is an important archetype in any religion or myth because it provides an outlet for all of the chaotic and destructive emotions and tendencies of a people that are controlled by a larger social construct. It is through a trickster figure that people of a religion or society are able to explore the more untamed side of their nature while additionally being presented with the consequences of those desires. The trickster is a figure that at once mocks social morals and at the same time reinforces those morals by showing the pandemonium and trouble that arise if the people do not follow the rules that are in place. The trickster also allows the people of a religion to express ideas and desires that might not ordinarily be acceptable in their society. In this way the trickster plays a very important and cathartic role in a religion or myth. Penelope, from Homer's The Odyssey, is a woman of grit and spirit. Ellen Shull declares in her essay "Valuing Multiple Critical Approaches: Penelope, Again and Again" that Penelope is "the paragon of resilient womanhood" (32). However, a trickster god, like Monkey from Wu Cheng-en's novel Monkey, and a mortal woman like Penelope appear to have nothing in common. Their roles are so different and their apparent purposes are even more so. On the surface it may seem as though Penelope from The Odyssey bears very little resemblance to a trickster god. However, when one takes a closer look the similarities become more obvious. Penelope is at once a powerful figure who adheres to the social norms of her patriarchal society while still rebelliously challenging the acknowledged rules of how a woman should behave. This can be seen as akin to how a trickster like Monkey is used in myth to subvert a society's own beliefs. Penelope is the other side of the coin of what it means to be a trickster. She is the female version, as it were. Penelope may not be male, amoral, animal, or supernatural, but she is cunning, childish, inventive, and also a subversive figure within her patriarchal society. The most obvious source of incompatibility with Penelope being a trickster is that she is female while the trickster is usually male, like Monkey. Now, unless Penelope was even more deceitful than anyone had ever imagined, it is safe to say that she is not a trickster god based on that one quality alone. Leeming states that "the trickster is always male" (163). Obviously, Penelope is not male, which means that she is, according to Leeming, not a trickster god, no exceptions. But if Leeming were to make an exception then Penelope would be one. Penelope is a woman who must work against all the restrictions and suffocating bounds that her society uses to leash women in order to trick the people surrounding her, and she does. "She deceives the suitors and even her own husband" (Mueller, 337). Penelope even has long-lasting deceptions that fool people for years. The sexual organs Penelope was born with seem to be of little importance when compared to the massive opposing powers and influences that she is forced to undermine and battle against. The next point of disparity between Penelope and a trickster figure like Monkey is that the trickster is seen as a philandering, unprincipled hooligan. The trickster is considered to be an ethically neutral figure with a propensity for getting into humorous predicaments. Leeming calls the trickster "amoral... outrageous... [and is] untamed by the larger social conscience."
Monkey is a perfect example of this side of a trickster. Monkey is not exactly immoral; he just has his own sense of what the right thing to do is, and he is overwhelmingly selfish. Every action and quest he takes at the beginning of his story is motivated by his desire to be immortal and to gain power. Even when Monkey protects his other monkey subjects he does so because he wants to maintain his kingship more than out of fear for their safety and wellbeing. One could even posit that the monkeys would be better off without him because he brings the wrath of heaven down upon them. Monkey has all the qualities that Leeming states a trickster is comprised of. Penelope, on the other hand, is none of these things. In fact, she is usually remembered for her faithfulness to her husband even though he was gone for twenty years. Penelope "waits in Ithaca for Odysseus. She looks after his home, his son and his estate. She weeps lonely tears but nothing induces her to betray her husband and to neglect her duties; not even under pressure from the suitors does she contemplate infidelity" (Smit, 393-394). Her unwavering loyalty to her husband and her devotion to the gods are not the sort of characteristics seen in the trickster, who typically represents lower or baser instincts and functions. Penelope is a classy lady, but again she also has that side to her that rebels at the rules of her culture. Some might even call her a vain tease for keeping her suitors around for so long while never picking one or giving in to their masculine power. Penelope also does not fit in the trickster category because she is only human while a trickster is usually an animal. Leeming states that a trickster "takes animal form" (163). Monkey obviously fits into this category. Not only is he a monkey but he has mystical origins: he was born from a stone. In fact Monkey's animal form is a point of ire for him, because in Monkey he tries to become more and more human-like. He starts wearing clothes and stands upright in an attempt to appear more human. This fight between animal and human characteristics is vital in a trickster figure because that animal quality is in part what allows them to get away with their mischief. Penelope is no dog. Or any animal for that matter. She is in fact a very desirable woman with scores of suitors fighting for her hand in marriage. This does not help her in the trickster category but it does, however, show how her beauty and desirability are in part what allow her to get away with her schemes. Her beauty can even be seen as her animal side because it basically serves the same function that the animal form serves the trickster. An animal form, or in the case of Penelope her beauty, is a metaphor for who they are, and it allows them to be more completely that character and to do things that would not ordinarily be acceptable within their society. Penelope's beauty is what allows her to subvert her patriarchal culture because her beauty gives her power over her suitors. She is a woman but she uses that to her advantage. It could also be seen that being a woman in the time of The Odyssey was akin to being an animal, because it was such a male-dominant culture where women were little more than chattel or bargaining pieces. Maybe Penelope has more trickster qualities than are first apparent. The last way that Penelope does not fit into the trickster category is that she has no supernatural powers.
She has no magical powers, which shows even further how she is not like a trickster. The trickster is almost always a supernatural figure. This category obviously denotes that a trickster has otherworldly abilities with which to influence outcomes. Penelope works entirely in the realm of her intelligence to bring about the results and tricks that she has concocted. This can make Penelope seem more skilled than a god who needs magic to bring about the outcome that he so desires. When compared to Penelope, supernatural powers might be viewed as cheating.

Sunday, July 21, 2019

Cinema In France Film Studies Essay

Cinema In France Film Studies Essay Select a national cinema of your choice to examine its position in articulating a cultural identity. Attempt to present your answer by a close reading of at least two films. (2,000 words) Cinema in France has always been a key issue in society, the arts and culture in general. This can be understood through many different aspects. The first is the very invention of cinema in France by the Lumière brothers, with the first public projection in the world taking place in Paris in 1895. There are also many other key elements, such as Georges Méliès being considered the first director and the inventor of scenarios and special effects, down to more recent features such as the Nouvelle Vague, the movement of rejection by young film-makers against more academic ways of film-making and acting, which has influenced cinema worldwide to this day. In other words, cinema in France is alive and very active, with production, exports, audiences and talented directors remaining steady. The number of art houses and festivals is higher than anywhere else in the world, and France has the highest number of screens per million inhabitants, as well as the ceremony of the Césars, the equivalent of the Oscars in France. This places the French movie industry third in the world, behind the USA and India, and makes it the strongest in Europe, producing 22% of European films and holding the largest market share of nationally produced films in Europe. This is due to its long history in the cinema industry, but also to its more recent policies concerning French films and what is known as l'exception culturelle. This French concept, basically meaning the French cultural exception, defends everything that is cultural, as opposed to a mere product for the market, and shields it from free enterprise through quotas. This is because French society, most culturally represented by its language, needs to protect itself against any competition that would harm French culture and replace it with another. Everything that counts as Culture in France (writers, musicians, film-makers, and more) is protected against market laws, and this is the State's role; hence there is a Minister of Culture. This is ultimately a reaction against globalization, seen as dangerous in this sense, and a will to maintain or reinforce a national identity. Before World War 1, Pathé and Gaumont dominated the industry and French cinema was first worldwide in terms of quality, quantity and diversity. But after the war, this cultural status was overtaken by American cinema. This struggle of course concerns the USA more than any other country, as it is the leading country in the industry, and American hegemony in the rest of the world is evident. Therefore, France came up with a unique financing system to fight against the main threats to French cinema: television and North American cinema. In the 1980s the French State put in place television quotas in favor of audiovisual and cinematographic oeuvres. The main television channels have to allocate 3.2% of their revenue to cinema, of which a minimum of 2.5% goes to French films. At least 50% of the films broadcast must be French. This is where the now very popular pay channel, Canal+, has helped a lot, as it must devote 20% of its income to buying rights. And on each cinema ticket, a tax (11%) is paid into a support fund, which also covers foreign films as long as they are co-produced with a French producer. As a result, over 160 films per year are made, and France ranks third worldwide.
Moreover, an important factor concerning television is the amount of cultural programming broadcast on public channels, which relates to the exception culturelle concept and helps one understand French cinema better, in the sense that a movie in France is considered a message made by the director, on top of its entertainment aspect. Compared to most countries, French audiences are very aware of their audiovisual landscape and experience more films in cinemas and on all television channels, often at primetime, giving them a very different cinematic experience, closer to culture. In the 1980s, the Socialist government of the time, and more particularly the Minister of Culture, Jack Lang, made many efforts to help and promote a more cultural cinema. The goal was to marry popular and cultural cinema and to distribute French cinema domestically and abroad, also as a way to offset Hollywood's domination. Jack Lang wanted a cultural cinema for the masses, promoting films that were associated with French cultural heritage but that could also provide popular entertainment for a wide public. These particular heritage films, or films de patrimoine, have played an important part in the French audiovisual landscape since the late 1980s. The approach was successful, as the key aspects put together worked very well, being neither too frankly popular nor too highly cultural. This genre seems to dominate international perceptions of French cinema, although of course there is much more diversity. The first prominent example of this kind was Claude Berri's movie Jean de Florette, in 1986, a box-office success and the first high-budget film in France, featuring French stars such as Yves Montand, Gérard Depardieu (indicative of old-school French cinema and often regarded as the contemporary equivalent of Jean Gabin or Maurice Chevalier), and the rising Daniel Auteuil, for whom this movie marked the beginning of his career as a serious actor. It draws upon the very popular novels of the French author Marcel Pagnol, continuing and further developing the tradition of literary adaptations. This combination of elements, along with the natural locations in Provence, evoking nostalgia and celebrating the landscape, the history and the culture of France, actually contemporizes the film as a whole. At the same time, Jean de Florette marks continuity in French cinema, with its central locations mainly being Paris and the South, often opposing the two. In this film the focus is on the past: past values and past issues. But it is a past that is not so far away, as it has marked and still marks France's national identity, and the film was made to reinforce this through a whole aesthetic of nostalgia, tending to idealize the past, the regions and the nation's geography, taking part in the protectionist cultural imperatives. France relies a lot on its past to convey its national identity, and that is why canonical source texts by the greatest French authors were and are often used as the basis for films. The past, in Jean de Florette, is used as a spectacle; the nation's territory, the landscape of Provence, evokes the nation's nostalgia as it idealises its rural past, showing the French industry's will to affirm itself through the representation of its past.
This is because it offers a firm cultural point, marked in the nation's history, at a time when notions of national identity were, and still are, unstable, with globalization and the issues of immigration in the 1980s. These concerns can be found in the story itself, with questions of greed, materialism, identity and exclusion concerning the main characters: Jean, the outsider, and Papet Soubeyran and Ugolin, the established peasants. At the time it was suggested that the way Jean was treated by the locals represented the anti-immigration movement growing at the time. Now, it could be said that in the film the past, represented by Provence itself, is the main character. Through a mix of panoramic and static tableau shots, Berri shows it as an idyllic place, providing visual sites for national identification, as not only is it one of the most symbolic regions in France, but it often speaks to the spectator, who in many cases may have childhood recollections of journeys down south to visit family. This feeling can be experienced in the opening sequence, where a car journey is shown without showing the character, which gives a feeling of intimacy. The spectator has a view from the window and a feeling of return to the past, going back to nature, from urban to rural, with many elements that could be seen as stereotypical, such as the long winding roads, the crowing cock in the morning, the magnificence of the mountains. Therefore the emphasis on the geographical setting is the most important aspect of the film, along with the somewhat stereotypical images of Provence. The characters, first of all, include a patriarch, loud southerners, an outsider farmer, an introverted peasant, and of course a bad guy. These characters all take on traditional rural activities, and the action takes place in the most emblematic Provençal and rural places: the café, the market, the fountain, the square, as well as the main spaces of the action in the film, namely Jean's house and garden, the Soubeyrans' property, the village and the mountain, which build up a sense of place and identity. Of course, another main aspect of the region is very much reliant on dialogue, which reinforces the specificity of the film within the region. The accent of Provence is very marked and clearly illustrates the difference between the locals and Jean, with his standard spoken French, who represents Frenchness for many foreigners through Gérard Depardieu, and marks the binary of Paris/province, meaning anywhere outside of Paris. Similarities to some of Paul Cézanne's paintings can be found in some of the bar scenes, recalling the Card Players series and The Smoker, and also in the mountain panoramas, recalling his famous paintings of Mont Sainte-Victoire. The background characters also provide local color and credibility, with the game of boules and the pastis also being typical associations. In essence, Berri used this film to emphasize Provence as a French, cultural, historical region, representing the past and everything the French can identify with in the region. Right after Jean de Florette, the sequel, Manon des Sources, came out. They were filmed as a whole over a period of seven months. In the long term, they did much to promote tourism in the region, attracting interest internationally, as the film was very successful, conveying a true authenticity of rural France.
Of course, many successful films of the kind followed, most notably Cyrano de Bergerac, with Depardieu, also a literary adaptation, which was nominated for the Best Foreign Language Film Oscar in 1990 and contributed to expanding and reviving France's historical national identity. Now, a binary opposition was mentioned above, and it comes with the notion of films set in Paris. Paris: the capital, the city of love, of the arts, and of course of cinema. For many, Paris truly represents France; of course this is a more international perception, but it still maintains its position in France's history and among the key elements of the nation's culture. A film that recently played upon many key cultural elements, giving it worldwide success in 2001, is Le Fabuleux Destin d'Amélie Poulain, by Jean-Pierre Jeunet. Again it can be said that Amélie Poulain celebrates nostalgia: the nostalgia of typically French and Parisian aspects of life. The action is set in Montmartre, a quartier of Paris well known as the place where many artists established themselves living la bohème, and also a classic setting seen in many films, such as Les 400 coups (Truffaut, 1959), French Cancan (Jean Renoir, 1955), Lautrec (Roger Planchon, 1998) or Zazie dans le métro (Louis Malle, 1960). The particular element of the film is that it is seen through the eyes of the main character, Amélie, which gives it a romantic and idealized aspect, picturesque and clearly serving many stereotypes, a reason for its national and international success. Many key elements are present: the grocer's, the café, the metro station, the scooter, the old painter, and the different views of Paris in general. At different moments in the film, Amélie is watching Jules et Jim on television, a classic by François Truffaut, which is a testimony to the importance of French cinema and the influence of the New Wave on current film-makers. The photography of the film is very special and contributes to this nostalgic feeling, mainly displaying two colors, red and green. The story is very simple and could be considered a modern fairytale, but it is the way it is told, and the backdrop and atmosphere of the whole, that give it an aspect that can be considered culturally French. This very atmosphere is also largely due to the magnificent music that accompanies Amélie everywhere she goes. The young composer, Yann Tiersen, used music from his earlier albums but also composed 19 songs and variants for the film. The main motif of the film appears in different variations, expressing different moods. Tiersen's music mainly features accordion and piano, and what can the accordion refer to more than Frenchness: a marker of the past, of the time of the guinguettes, open-air dancing establishments outside the center. The accordion conveys a well-known cliché, but also nostalgia and marginality, and is practically the real center of the film. This retrospective look at the guinguettes is reprised in different ways, with references to the Moulin de la Galette, a Montmartre guinguette, which was painted by Toulouse-Lautrec, Renoir and Van Gogh in the 1870s and 1880s. The reference to Renoir is also repeated with the character of Dufayel, the old painter neighbour, who repaints the same Renoir painting every year, The Luncheon of the Boating Party (1881). This obsession and repetition aim to make what was in the past present. This is also marked in the many repetitions of the accordion theme, which anchor the film nostalgically in the period of the guinguettes, between 1880 and 1940.
The accordion signifies a national identity, but one that is very specific to Paris and the imaginary this place evokes: romanticism, and a touch of exoticism. At the time, the two presidential candidates for 2002, Jacques Chirac and Lionel Jospin, publicly marked their appreciation of the film, and audiences were seen clapping eagerly at the end of screenings, a very rare happening in France, which testifies to the important role cinema has in French culture and society. France treats cinema very seriously,

Capital Asset Pricing Model | Analysis

Capital Asset Pricing Model | Analysis Since 1970 financial companies have used the Capital Asset Pricing Model (CAPM) to evaluate portfolio performance and the cost of capital. However, there are many asset-pricing models that attempt to identify the riskiness of assets, and many researchers, such as Mossin (1966), Sharpe (1964) and Lintner (1965), have developed the capital asset pricing model (CAPM) and contributed to the pricing of risky financial assets. The CAPM measures the risk of assets by measuring the risk premium per unit of risk across assets, using the market beta. Therefore, the CAPM posits a linear relationship between the market beta and the risk premium of an asset, where beta captures systematic risk. Moreover, the CAPM implies that an asset's return fluctuates with the value of the asset's market beta (Fazil, 2007).

Advantages of CAPM

The Capital Asset Pricing Model (CAPM) is useful for examining the performance of portfolios, evaluating the cost of equity for companies, and testing theories of asset pricing. Before the CAPM was founded by John Lintner (1965) and William Sharpe (1964), there were no models that could help with asset pricing and predictions about returns and risk. The attraction of the capital asset pricing model is that it is considered powerful in assessing risk and determining the relationship between risk and expected return. In contrast, the simplicity of the CAPM also reflects real failings, and its poor empirical record may invalidate the way it is used in applications. Also, the inadequacy of the empirical tests and of the proxies for the market portfolio has contributed to the model's failures. However, if problems with the market proxy invalidate tests of the model, they also invalidate many applications, which typically borrow the market proxies used in empirical tests. As for expectations about expected return and risk, the researcher will start with the logic of the model, and after that will illustrate previous empirical applications of the model and explain the challenges posed by the shortcomings of the Capital Asset Pricing Model (CAPM) (Fama and French, 2003).

Fama and French model

The assessment of the cost of equity and the expected return for an individual investor or an individual share is an important input for financial decisions, for instance for investors engaged in capital budgeting, performance evaluation and portfolio management. There are two main alternatives for this purpose. Firstly, we can use a one-factor model, which is the Capital Asset Pricing Model (CAPM). Secondly, we can use the three-factor model known as the Fama and French model. Although there are many indications from the academic literature for assessing and evaluating portfolio returns, there are many users of the two models, such as Bruner, Eades, Harris and Higgins (1998) and Graham and Harvey (2001), who prefer the CAPM to assess and evaluate the cost of equity (Bartholdy and Peare, 2005). The CAPM depends on the accuracy of the chosen market portfolio proxy, and the difference in the returns of the security is the only appropriate source of systematic risk. Consequently, the risk premium on a portfolio of securities or on an individual security is a function of systematic risk, which can be measured by the beta on the appropriate benchmark index.
In contrast, Fama and French (1993) extended the capital asset pricing model (CAPM) to three factors. Firstly, portfolios capture the variation in the returns of companies with high versus low book-to-market (B/M) ratios. Secondly, portfolios capture the difference in the expected returns of large and small companies (SMB). Finally, the risk premium on a security is a reward for systematic risk and can be measured by the betas. Moreover, Carhart (1997) added a new factor to the Fama and French risk-return relation, bringing in a fourth factor known as the price momentum factor. This factor captures the tendency of companies with positive past returns to earn positive future returns, and of companies with negative past returns to earn negative future returns. This model (the Fama and French model) is estimated by the following statistical regression:

r_jt - r_ft = a_j + b_1 (r_mt - r_ft) + b_2 SMB_t + b_3 HML_t + e_jt   (1)

where
r_jt is the realized return on security j over period t;
r_mt is the return on the market over period t; the series of realized excess market returns, (r_mt - r_ft), is taken from Ken French's website, where it is defined as the value-weighted return on all NASDAQ, AMEX and NYSE shares (from CRSP) minus the one-month Treasury bill rate;
r_ft is the risk-free rate over period t, proxied here by the monthly return on the three-month Treasury bill;
a_j is the intercept, which under arbitrage pricing should equal zero;
b_1 to b_3 are the betas on the three risk factors, namely the excess return on the market, SMB and HML;
e_jt is the residual return on portfolio j over period t;
SMB_t is the difference in returns between small and large companies over period t;
HML_t is the difference in returns between companies with a high book-to-market (B/M) ratio and companies with a low B/M ratio.

Carhart (1997) extended this model (the Fama and French model) as follows:

r_jt - r_ft = a_j + b_1 (r_mt - r_ft) + b_2 SMB_t + b_3 HML_t + b_4 MOM_t + e_jt   (2)

The price momentum factor (MOM) is the return on a high-prior-return portfolio minus the average return on low-prior-return portfolios; that is, the average return on securities with the best performance over the previous year minus the average return on securities with the worst performance (Bello, 2008).
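The following snippet is a minimal illustrative sketch (not part of the original text) of how equation (1) could be estimated by ordinary least squares in Python; the return and factor series are simulated stand-ins, since the actual CRSP/Ken French data are not reproduced here.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_months = 120

# Simulated monthly factor realizations (stand-ins for the Ken French data).
mkt_excess = rng.normal(0.005, 0.04, n_months)   # r_mt - r_ft
smb = rng.normal(0.002, 0.03, n_months)          # SMB_t
hml = rng.normal(0.003, 0.03, n_months)          # HML_t

# Simulated excess portfolio return r_jt - r_ft with known factor loadings.
excess_return = 0.9 * mkt_excess + 0.2 * smb + 0.3 * hml + rng.normal(0, 0.02, n_months)

X = sm.add_constant(np.column_stack([mkt_excess, smb, hml]))
fit = sm.OLS(excess_return, X).fit()
print(fit.params)   # estimates of a_j, b_1, b_2, b_3 in equation (1)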
Criticism of CAPM

The Capital Asset Pricing Model does not give a clear account of average stock returns. In particular, the CAPM does not explain why, over the previous 40 years, small shares have done better than large shares. Also, the CAPM does not explain why companies with a high book-to-market (B/M) ratio have done better than companies with low B/M ratios. Moreover, it does not explain why shares that have continued to achieve high returns do better than shares that have achieved low returns. The aim of this research is to understand whether a version of the CAPM can explain these patterns. Jensen (1968), Dybvig and Ross (1985), and Jagannathan and Wang (1996) argued that the Capital Asset Pricing Model (CAPM) can hold perfectly, period by period, even though shares are mispriced by the unconditional CAPM. Also, the unconditional alpha can differ from zero even when the conditional alpha is zero, if beta fluctuates over time and is related to market volatility or the equity premium. In other words, the market portfolio can be conditionally mean-variance efficient (Hansen and Richard, 1987). Furthermore, many studies have argued that time-varying betas can explain the effects of B/M and size. Also, Zhang (2005) contributes by developing a model in which high B/M stocks carry a high risk premium. Moreover, researchers such as Lettau and Ludvigson (2001), Lustig and Van Nieuwerburgh (2005), Jagannathan and Wang (1996), and Santos and Veronesi (2006) have explained that the betas of small, high-B/M shares vary over the business cycle and, according to these researchers, this goes a long way toward explaining why those shares have good alphas (Lewellen and Nagel, 2006). Fama and French (1992) document a value premium in U.S. share returns after 1963: shares that have a high ratio of the book value of equity to the market value of equity have higher returns than shares with a low book-to-market ratio. Extending the tests back to 1926, Fama and French (2000) document a value premium in the returns of the earlier period as well. Moreover, Fama and French (1993) show that the capital asset pricing model (CAPM) of Sharpe (1964) and Lintner (1965) does not explain the value premium. Also, Loughran (1997) argued that the value premium from 1963 to 1995 is in any case specific to small shares. This paper has three aims: firstly, to give a clear picture of how the value premium varies with company size; secondly, to evaluate whether β is related to average returns in the way the capital asset pricing model (CAPM) predicts; and finally, to measure whether the market βs of the capital asset pricing model (CAPM) explain the value premiums. The results on variation in the value premium are easily summarized. Loughran's (1997) evidence that there is no value premium among large shares appears to be specific to (1) using the book-to-market ratio as the value-growth indicator, (2) the post-1963 period, and (3) restricting the tests to U.S. shares. During the period 1926 to 1963, the value premium is similar for small and big U.S. shares, and when the earnings-price ratio rather than the book-to-market ratio is used to distinguish value and growth stocks, the period 1963-2004 shows only a small difference between the value premiums of big and small U.S. shares. Moreover, as another out-of-sample test, they measure the international value premium during the period 1975 to 2004 in 14 major markets outside the United States of America (USA); the results for B/M or E/P on international stocks show that the value premium is similar for big and small shares. The evidence on the U.S. value premium and the capital asset pricing model (CAPM) is a bit more complicated. The overall value premium in U.S. average returns is very similar before and after 1963, while Franzoni (2001) found that market βs fluctuated dramatically. In the later period, value stocks tend to have lower βs than growth stocks, the reverse of what the capital asset pricing model (CAPM) needs to explain the value premium. Accordingly, the capital asset pricing model fails the test during the period 1963 to 2004, whether or not one allows for time variation in βs over that period. In the earlier period, by contrast, value stocks had higher βs than growth stocks, and Ang and Chen (2005) find that the capital asset pricing model goes a long way toward capturing the value premium.
And it is tempting to conclude that the capital asset pricing model gives a good explanation of average returns before 1963.

Conclusion

The CAPM suggests that all differences in β across securities are rewarded in expected returns in the same way. On the other hand, Fama and French (1992) suggest that the difference in β connected to size shows up in average returns when portfolios are formed on size and β, but the difference in β unconnected to size appears to go unrewarded. This suggests that, contrary to the CAPM, it is size, or a non-β risk linked to size, that counts, not β. The tests here extend this result. When portfolios are formed on size, B/M, and β, they find that variation in β linked to B/M and size is compensated in average returns for 1928 to 1963, whereas variation in β unconnected to size and B/M goes unrewarded during that period (Fama and French, 2006). In conclusion, the evidence that variation in β unrelated to B/M and size goes unrewarded in average returns is as strong for large shares as for small shares. This should lay to rest the common claim that empirical violations of the capital asset pricing model are inconsequential because they are limited to small shares and, consequently, to a small fraction of invested wealth.

Communication: A Literature Review

Communication: A Literature Review
Chapter 2: Literature Review
2.0 Introduction
People communicate because they are part of society. Speech plays the main role in communication, since it can express complicated ideas through meaningful tone and a wide range of means. However, the function of speech is not only to convey information or messages; it is also connected with the interaction between people. This interaction is supposed to be polite, as etiquette in the absolute majority of cultures requires, so that people can feel comfortable while communicating. Due to this, politeness should be applied in daily conversation. Politeness is a phenomenon that has been drawing a lot of attention in recent years. According to Huang (2008), everyone perceives it as natural and understands what it means. According to many linguists, the importance of politeness strategies lies in maintaining social order, and politeness is seen as "a precondition of human cooperation" (Brown and Levinson, 2000, xiii). Lakoff said that the purpose of politeness is to avoid conflicts (Lakoff, 1989, p. 101). Politeness strategies are learned when your mother tells you to thank someone who has, for example, given you a present for your fifth birthday. It seems to be very important to stick to these conventions, which have developed for as long as human beings have existed. The politeness theory of Brown and Levinson is widely accepted and utilized as the basis for research not only in sociolinguistics but also in psychology, business, and so on (Yuka, 2009). This study will focus on the use of Brown and Levinson's politeness strategies among Universiti Tunku Abdul Rahman (UTAR) students and measure how frequently they use them.
2.1 Politeness Theory/Principle
Politeness theory is the theory that accounts for the redressing of the insults to face posed by face-threatening acts toward addressees.
Politeness theory, derived from Goffman's (1967) understanding of "facework," suggests that all individuals hold two primary desires: positive face (the desire to be liked by others) and negative face (the desire to have one's actions unconstrained by others). In our interpersonal interactions, we occasionally threaten others' face needs, or desires, by exposing them to criticisms (positive face-threatening acts) and requests (negative face-threatening acts). According to Simpson (1997), face-threatening acts (FTAs) are utterances that disrupt the balance of face maintenance. Thus, the manner in which we criticize or make requests of another is influenced by the degree of politeness that we wish to convey. Goffman (1967) argued that maintaining face feels good and reflects an emotional attachment to the face that we maintain; disruptions of this, or losing face, result in a loss of the internal emotional support that protects oneself in a social situation. Moreover, maintaining face is the expression of the speaker's intention to mitigate face threats carried by certain face-threatening acts toward another (Mills, 2003, p. 6). Relying on a Gricean framework, Leech proposed the Politeness Principle (PP) and elaborated on politeness as a regulative factor in communication through a set of maxims (Grice, 1989). Politeness, it was found, is a facilitating factor that influences the relation between 'self', meaning the speaker, and 'other', that is, the addressee and/or a third party. Besides, it minimizes the expression of impolite beliefs, since such beliefs are unpleasant or come at a cost to the other (Leech, 1983). Politeness was later formulated by Brown and Levinson (1978; 1987). Politeness theory has since expanded academia's perception of politeness. In an extension of Goffman's (1967) discussion of face, Brown and Levinson (1978) also used the two types of face that Goffman mentioned. Another scholar, Yule (2006), defines positive face as the pro-social person you present yourself as, while negative face suggests giving space to disagreement or refusal, having freedom of action and not being imposed on by others. In addition, the politeness theory of Brown and Levinson (1978; 1987) is a dynamic theory of human behavior describing linguistic strategies associated with politeness behavior. Because of its all-encompassing nature and ability to accommodate diverse aspects of human behavior, such as cross-cultural differences, gender roles, exchange theory, and interpersonal address, this theory has been considered both exemplary and a desirable ideal for experimental social psychology as a whole (R. Brown, 1990). However, this study does not examine face conceptualization, as in the past study by Rudick (2010), in which the researcher tried to capture students' perceptions by combining politeness strategies and face concepts with classroom justice scales. Rather, this study focuses on the use of politeness strategies among Universiti Tunku Abdul Rahman (UTAR) students and the frequency of each strategy.
2.2 Brown and Levinson's politeness strategies
Brown and Levinson's approach is based on Goffman's study of the notion of face. Goffman (1967) defines face as an image of self delineated in terms of approved social attributes. The moment a certain face is taken, it will have to be lived up to. Here he coins the expressions 'to lose face' and 'to save one's face'.
From these concepts, the following expressions are derived: 'to have, be in or maintain face', which stands for an internally consistent face; to be 'in the wrong face', which refers to the situation when information clashes with the face which a person sustains; and to be 'out of face', which means that a participant's expected line is not yet prepared for a certain situation (Goffman, 1967). Goffman claims that interaction, especially face-to-face talk, is ruled by a mutual acceptance that participants in an encounter will tend to maintain their own face, a defensive orientation, as well as other participants' faces, a protective orientation. "To study face-saving", he states, "is to study the traffic rules of social interaction" (1967: 12). According to him, face-saving actions are usually standardized practices which differ from one society to another as well as among subcultures and even individuals. Despite the differences, everyone is expected to have some knowledge and experience of how facework is used. Brown and Levinson borrowed these concepts and elaborated them somewhat in order to define the strategies that speakers follow when constructing messages. They treat the aspects of face as 'basic wants', and they address the universality of the notion of face. According to them, face has a twofold character: positive face, which stands for the desire to be approved of, and negative face, which responds to the desire that one's actions are not hindered (Brown and Levinson 1987). They coin the term face-threatening acts (FTAs), and agree with Goffman that interlocutors will try to maintain other people's faces as well as their own. Therefore, the effect of FTAs will be minimized as much as possible through linguistic strategies (Brown and Levinson 1987). There are four politeness strategies according to Brown and Levinson, namely positive politeness, negative politeness, bald on record and bald off record. However, in this study, bald off record is not included because the strategy is not explained in depth and it is difficult to collect such data in classroom interaction. According to David A. Morand (2003), this difficulty is encountered when the researcher needs to detect sentences with ambiguous meaning. Based on a past study by Scollon and Scollon (1995), negative politeness is often preferred over positive politeness among British people. In this study, the researcher again uses Scollon and Scollon's hypothesis to gather qualitative data among UTAR students, who can clearly be described as Asian. To fulfill the needs of this study, the researcher applied three out of the four politeness strategies. The first strategy is positive politeness, which means an expression of solidarity: appreciating the addressee's positive face, sharing the same values, and acting with sympathy towards the addressee. In other words, no inference is required (Hirschova, 2006). Meanwhile, based on Brown and Levinson (1987), positive politeness is a sender's attempt to communicate intimacy with receivers. This kind of intimacy can be noticed in a friendly and familiar conversation in which the relationship between addresser and addressee is close. The second type is negative politeness, which enables the speaker to avoid conflict while communicating by hesitating and softening the utterance with devices such as modality or indirect questions (Rudick, 2010).
In other words, the key aspect is that the addresser shows respect towards the addressee by giving him or her the freedom to react freely. In fact, it uses more deliberate, careful phrasing, with a set of polite formulas such as "Could you..." or "Sorry to bother you, but...". The addresser is extremely indirect so as not to harm the addressee's negative face and hurt their feelings. Usually this strategy occurs where there is unfamiliarity between the addresser and addressee or a difference in their social status. The third type is bald on record, which can be defined as a direct way of saying things, without any minimization of the imposition, in a direct, clear, unambiguous and concise way (Brown and Levinson, 1978; 1987), for example "Do it!". Brown and Levinson (1987) claim that the primary reason for bald on record usage may be generally stated as follows: whenever the speaker wants to do the FTA with maximum efficiency more than s/he wants to satisfy the hearer's face, even to any degree, s/he will choose the bald on record strategy. The final type is bald off record, which Brown and Levinson (1987) define as a communicative act done in such a way that it is not possible to attribute one clear communicative intention to the act. In this case, the actor leaves her/himself an "out" by providing her/himself with a number of defensible interpretations. S/he cannot be held to have committed her/himself to just one particular interpretation of her/his act. In other words, Brown and Levinson claim, the actor leaves it up to the addressee to decide how to interpret the act. Off record utterances are essential in the indirect use of language. One says something that is rather general. In this case, the hearer must make some inference to recover what was intended. For example, if somebody says "It is hot in here", the hidden meaning of the utterance can be a request to open the window or to switch on the air conditioner. However, because of this hidden meaning and ambiguity, this strategy will not be used to collect data on the use of politeness among UTAR students. This point is supported by the scholar David A. Morand (2003) in his book 'Gender talk at work', which mentions that this difficulty is encountered when a researcher needs to detect sentences with ambiguous meaning. To sum up, the politeness strategies may be applied in this study: the study will investigate how students use Brown and Levinson's (1978; 1987) politeness strategies with their instructors, based on the open-ended questions given, and finally it will measure the frequency with which students use each of the three types of politeness strategies.
2.3 Classroom interaction
Language classrooms can be seen as sociolinguistic environments (Cazden, 1988) and discourse communities (Hall and Verplaetse, 2000) in which interaction is believed to contribute to learners' language development. According to a review of studies in the area of classroom interaction and language learning presented by Hall and Verplaetse (2000), interactive processes are not strictly individual or equivalent across learners and situations; language learning is a social enterprise, jointly constructed, and intrinsically linked to learners' repeated and regular participation in classroom activities. According to Ghosh (2010), classroom interaction is a practice that enhances the development of two very important language skills, speaking and listening, among the learners. This device helps the learner to become competent enough to think critically and share their views among their peers.
A major goal is to provide prospective teachers with sufficient knowledge, skills and behaviors to enable them to function effectively in their future teaching experience. Interaction has a similar meaning in the classroom. We might define classroom interaction as a two-way process between the participants in the learning process. The teacher influences the learners and vice versa. The teacher's role is important in influencing the learner. It is the responsibility of the teacher to create a learning atmosphere inside the classroom. It is through these interactive sessions that the teacher can extract responses from learners and motivate them to come up with new ideas related to the topic. The teacher is an observer who helps the learners to construct an innovative learning product through group discussions, debates and much more. The teacher also acts as a planner who plans the modules of interaction that would be most effective in drawing the learners into classroom interaction (Ghosh, 2010). Meanwhile, in the other direction, learners influence the teacher through their sense of social relatedness in the classroom (Connell and Wellborn, 1991). When students experience a sense of belonging at school and supportive relationships with teachers and classmates, they are motivated to participate actively and appropriately in the life of the classroom.
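As a rough illustration of the frequency measurement described in section 2.2 (a hypothetical sketch only; the coded labels below are invented and are not data from the study), responses coded by strategy could be tallied as follows:

from collections import Counter

# Hypothetical coding of students' open-ended responses by politeness strategy.
coded_responses = [
    "positive politeness", "negative politeness", "bald on record",
    "negative politeness", "positive politeness", "negative politeness",
]

frequency = Counter(coded_responses)
total = sum(frequency.values())
for strategy, count in frequency.most_common():
    print(f"{strategy}: {count} ({count / total:.0%})")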