The practice, however, releases carbon to the atmosphere and increases the probability of soil erosion.

Similar results were found for tropical silvopastoral systems, where canopy cover and tree density were positively related to migratory bird flocks. In a comparison across levels of structural complexity, silvopastoral systems supported less diverse bird flocks than shade coffee in Colombia, but shade coffee likely supports more bird species than open grazing lands. In Romania, traditional silvopastoral systems of open oak woodland forest-pastures harbored more diverse and significantly different bird communities than modern, intensively managed open pastures. In Europe and the North American prairies, bird populations are declining due to agricultural intensification, resulting from the reduction of crop rotations, the use of pesticides, and landscape simplification. In terms of habitat preferences, Skylarks were significantly positively influenced by legumes and set-aside, i.e., fields with no or low agrochemical inputs, with weeds and wild flowers/grasses and low or no tillage. In a meta-analysis, Van Buskirk and Willi found that set-aside in conventional agriculture increased bird richness and abundance, although set-aside parcels had greater benefit when surrounded by low-intensity agriculture.

Floral diversity associated with diversified farms was shown to support a variety of grains and plant biomass for granivorous bird species that use arable land as habitat, but depletion of these food resources via agricultural simplification and herbicide use was found to negatively impact granivorous populations. The interactions between bird habitat associations at the plot/local scale and landscape-scale factors are relevant for implementing conservation strategies in arable lands; for instance, granivorous species were prone to local extinction when arable land decreased in the landscape. Lowland rice fields are useful for bird conservation due to their similarity to natural wetlands. In Asia, the integration of wildlife with rice crops in wetland areas drives synergies between nutrient cycling and pest control. In the case of rice fields, ducks provide ecosystem services such as consumption of weed seeds, aquatic arthropods, and rice pests, release of nutrients, and soil aeration, while rice fields provide ducks with shelter, nesting places, and food resources. In an exclusion experiment in winter-flooded rice in France, Brogi et al. reported that domesticated mallards mimicked wild mallard populations by removing rice crop residues, independent of duck density, thereby accelerating post-harvest decomposition in the field. Long et al. compared duck-rice agroecological management with conventional rice farms and showed that duck-rice management effectively controlled insects, improved soil fertility and organic matter content, and reduced greenhouse gas emissions. In addition, Cagauan et al. recommended integrating water fern and fish in order to increase nutrient cycling within the duck-rice agroecosystem. Agroecological management of duck-rice farming not only reduces the use of agrochemicals but also improves yields and economic returns for low-income farmers in Bangladesh.

To enhance conservation of waterbirds in rice fields, Elphick et al. suggested key agricultural management practices such as decreased use of pesticides and promotion of field flooding. Harvest method, crop residue management, water management, agricultural management, manipulation of field edges, and provision of ponds and ditches in the landscape are all practices that can benefit birds. Orchards provide vertical structures that can be used as habitat by birds. In central Chile, Muñoz-Sáez et al. found that orchard land cover at two spatial scales favored the abundance of five species, including granivorous, insectivorous, and omnivorous birds. Woody crops such as olive orchards can be beneficial for frugivorous birds, especially in winter. Castro-Caro et al. related bird richness to the effects of ground cover in olive orchards within homogeneous and heterogeneous landscapes. Their results showed that herbaceous ground cover benefited passerine birds independent of the landscape category, suggesting that landscape structure did not play a significant role in determining bird species diversity in fruit orchards. Almond orchards in eastern Australia were reported to host higher bird richness than apple orchards, vineyards, and Eucalyptus woodlots, including the threatened Regent Parrot. Apple orchards can also be useful for bird species, particularly when managed organically, increasing the number of insectivores in comparison with conventional orchards, and can serve as habitat for woodpeckers. Orchards can also be sources of tree cavities that are relevant for secondary cavity-nesting birds. Agroforestry is proposed as a multifunctional system in which agricultural and forestry productivity, ecosystem services, and biodiversity conservation are simultaneously enhanced. In a meta-analysis, Torralba et al. found that biodiversity, soil fertility, and soil erosion control are higher in agroforestry systems than in conventional agriculture and forestry.

Leakey highlights the functional roles of trees in agroforestry: increasing soil fertility, increasing water infiltration and habitat for mycorrhizae, and providing habitat for wildlife, among others. Several studies have shown that multi-strata and diverse agroecosystems can provide habitat for many species and increase conservation potential in these areas. In tropical ecosystems, cacao and coffee are usually grown in agroforestry systems, as these crops are naturally adapted to shade conditions. In a comparison between shade and sun coffee, Johnson et al. found higher diversity and abundance of insectivorous and resident birds in shade coffee than in sun coffee. Van Bael et al. reported a similar magnitude of bird predation in agroforestry systems as in adjacent forests, with reduced insect abundance and plant damage, particularly due to migratory birds. Nájera and Simonetti found that increasing complexity can enhance biodiversity in forest plantations, although different species respond differently to management. Colorado et al. found that highly structured coffee agroforests favor Neotropical migratory birds, including by providing overwinter habitat. In Malaysia, oil palm forest supports fewer birds and has lower abundance of insectivores, omnivores, and granivores, but higher abundance of raptors and wetland birds, in comparison with logged peat swamp forest. Although the bird communities supported by agroforestry remain rich, bird diversity is notably not the same as in native forest. These agroecosystems were inhospitable for some forest specialist birds that rely primarily on food and resources present only in native ecosystems. Interestingly, it has been reported that bird richness can be higher in cacao agroforest plantations than in native forest, although the species inhabiting cacao agroforests were common generalists. However, the bird diversity supported by agroforestry systems is higher than that supported by agricultural monocultures.

Although the adoption of agricultural techniques and practices dates back 10,000 years, agricultural landscapes were traditionally considered incompatible with conservation biology goals. More recently, agricultural management and the adoption of different techniques and approaches have shown the potential to reduce negative impacts on bird populations and even enhance them. The use of pesticides has been recognized as detrimental to non-target species since Rachel Carson published Silent Spring, documenting the impacts of DDT on bird breeding. After the banning of DDT in several countries, a new family of chemicals appeared, with new impacts on wildlife. Neonicotinoid pesticides have heavily impacted bird populations by depleting insect populations, a key bird food resource. Neonicotinoid pesticides have also been shown to disrupt the thyroid gland in birds, which affects the endocrine and reproductive systems. Neonicotinoids and other pesticides can persist in soil and soil water over the long term, and their indirect effects are poorly understood. Reduced reproduction rates of secondary cavity nesters were reported in apple orchards sprayed with organochlorine pesticides. Granivorous birds are also affected by the ingestion of pesticide-treated seeds. It has been reported that cumulative pesticide applications over multiple years decrease bird community specialization, favoring a few generalist species capable of persisting in simplified arable monocultures.
In contrast, Henderson et al. showed that the use of polycultures and reduced pesticide amounts increased grassland bird populations. In a risk assessment using birds as indicator species to compare alternative farming systems with genetically modified herbicide-tolerant crop production in the UK, wildlife-friendly farm management provided better conservation outcomes, via better provisioning of food resources and nesting habitats, than transgenic crops.

Misuse of pesticides has unintended consequences for non-target organisms and undermines environmental health for wildlife and humans. Organic agriculture is widely conceived of as a management approach that relies on optimization of natural cycles; avoids the use of synthetic fertilizers, pesticides, and genetically modified crops; and minimizes external local impacts to the environment, although specific practices vary greatly between farms and certification schemes. Common practices include organic matter additions, crop rotations, greater reliance on biological nitrogen fixation by legumes, mechanical weed control, and the use of biological control for pest regulation. Organic management enhances biodiversity, as reported by Rahmann in 82% of the 396 papers analyzed. Although certified organic agriculture covers only about 1% of global agricultural land, organic agriculture has been reported as beneficial for wildlife in comparison to conventional agriculture. Tuck et al., in their meta-analysis, documented a 34% increase in species richness across multiple taxa in comparison with conventional agriculture, a gain that remained stable for three decades. Plants benefited most, but positive effects on arthropods, birds, and microbes were also reported. In a meta-analysis for Europe and North America, Wilcox et al. found that, for arable crops, bird abundance was higher under organic than under conventional management, although the response varied by species traits. Several studies on different taxa corroborate the positive effects of organic farming, within the field and at its boundaries, on plants, birds, insect pollinators, natural predators, soil microbial activity, beneficial arthropods, and earthworms. A smaller number of studies have reported a neutral influence of organic agriculture on biodiversity in comparison with intensive management, such as for birds within vineyards in Italy. Some authors contend that the benefit of organic farming varies with the taxa studied and the landscape context. For example, Gabriel et al. reported that landscape context interacts significantly with organic management practices, such that conservation strategies should be considered from a 'local perspective'. The effectiveness of organic practices in increasing biodiversity likely depends not only on agricultural management, but also on structural diversity at different landscape scales. In terms of bird dietary groups, Genghini et al. found that insectivorous birds in Italian fruit orchards were more abundant in organically managed fields than in conventionally managed fields. Similarly, Smith et al. reported that organic farms positively influenced insectivorous birds only in monocultures surrounded by borders of semi-natural grassland. However, organic management had a positive effect on non-passerine bird richness independent of the vegetation context. Tillage is a common management practice worldwide, used to homogenize and standardize the land cover surface, to increase contact between the seed or crop root and soil, and for weed control and pest turnover. Tilled croplands are not considered high-quality habitat due to the frequent soil disturbance, typically monoculture planting, and high agrochemical application levels common in intensive agriculture.
In terms of soil conservation, zero tillage or conservation tillage is a more environmentally friendly technique due to its low impact on soil structure, reduced soil erosion, and increased soil carbon sequestration, with similar medium- and long-term yields in comparison to conventional tillage; however, the high levels of herbicide application in such systems remain a concern. Positive impacts of conservation tillage relative to conventional tillage have been reported for some farmland birds in the UK, particularly in late winter. Zero tillage can also increase seed and invertebrate availability, which are key food sources for birds. In a comparison between no-till and conservation tillage in a soy monoculture, VanBeek et al. reported a higher abundance of birds of conservation value, higher nesting density, and higher nest success in no-till than in tilled farms. Shutler et al. reported higher abundance of birds in no-till than in conventional farms in Canada, although all of the species were more prevalent in wetlands and wild sites than on farms. In Illinois, a study comparing the nesting success of birds under till and no-till soy crops found higher nesting density and nest success in no-till farms, probably due to the shelter and food resources provided by crop residues and weeds. However, it has been reported that no-till in corn-soy rotations can act as an ecological trap: birds perceive no-till cropland as suitable habitat, yet nesting suffers from the farm equipment traffic and herbicide/pesticide applications common in such systems.

The cost is that it may measure something different from the impact of climate on long-run profit.

One possible measure is farmland value, which presumably reflects long-run profitability, and which has been used quite extensively since Mendelsohn et al. introduced the so-called Ricardian approach for assessing the impact of climate change on agriculture. With this approach, one estimates a Ricardian or hedonic regression relating farmland value to climate variables and to other variables that control for non-climate factors which might also affect farmland value. This approach is intended to capture the long-run impact of climate on farmland value, and it allows for farm-level adaptations that might be undertaken as climate varies. In their paper, DG question whether there are unmeasured and omitted variables that also influence farmland value and that might be correlated with the climate variables in such a way as to bias the coefficient estimates in the hedonic regression. DG's important methodological innovation is to propose an alternative approach using fixed effects to control for time-invariant idiosyncratic features of the county within a panel data setting. However, for this approach to work, one cannot use climate variables as regressors since, in practice, these are likely to be fixed over the duration of the panel and, hence, perfectly collinear with the county fixed effects.
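Schematically, such a hedonic regression can be written as follows (a stylized rendering in our notation, not the exact specification of Mendelsohn et al. or DG):

\[
V_i \;=\; \alpha \;+\; \boldsymbol{\beta}'\mathbf{C}_i \;+\; \boldsymbol{\gamma}'\mathbf{X}_i \;+\; \varepsilon_i ,
\]

where \(V_i\) is farmland value in county \(i\), \(\mathbf{C}_i\) is a vector of climate variables (e.g., growing-season degree days and precipitation), and \(\mathbf{X}_i\) collects the non-climate controls. In a panel with county fixed effects \(\alpha_i\), the time-invariant \(\mathbf{C}_i\) would be absorbed by \(\alpha_i\); this is the collinearity problem just described.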

For this reason, DG use annual weather rather than long-run climate, since weather does vary over the course of the panel. Also, because of their annual focus, DG use annual "profit" as their metric of value rather than farmland value; this becomes the dependent variable in their regression. They are thus measuring the effect of weather on short-run profit rather than that of climate on long-run farming profitability or land value. They find no statistically significant relationship between U.S. agricultural profits, proxied by sales minus costs as reported in county-level data of the 1987, 1992, 1997, and 2002 agricultural censuses, and weather variables in the same years. They also find no statistically significant relationship between weather and the yields of the major field crops, corn and soybeans. They conclude that if short-run weather fluctuations have no influence on agricultural profits or output, then in the long run, when adaptations are possible, climate change is likely to have no impact or will even prove beneficial. With any measurement strategy there are benefits and costs, and the ultimate effectiveness of the strategy is an empirical question. The benefit of DG's strategy is that it is less vulnerable to unmeasured and omitted time-invariant factors.
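In the same stylized notation (again ours, not DG's), the fixed-effects specification replaces climate with realized weather:

\[
\pi_{it} \;=\; \alpha_i \;+\; \theta_t \;+\; \boldsymbol{\beta}'\mathbf{W}_{it} \;+\; \varepsilon_{it} ,
\]

where \(\pi_{it}\) is county \(i\)'s profit in census year \(t\), \(\mathbf{W}_{it}\) is the weather realized that year, \(\theta_t\) is a year effect, and the county fixed effect \(\alpha_i\) absorbs all time-invariant factors, long-run climate included.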

DG argue that their measure overstates any possible long-run adverse impact of climate because it reflects the short-run response to fluctuations in weather and therefore does not allow for longer-run adaptation, which could only be less costly. But there are also many ways farmers cope with short-run shocks that would be more costly and/or less sustainable in the long run. For example, if the short-run response to a sudden increase in temperature is to pump more groundwater, this strategy may be less sustainable and/or more costly over the long run with a permanent increase in temperature than in the short run, due to depletion of the groundwater resource. In that case, the short-run impact of a fluctuation in weather would understate the long-run impact of a permanent shift in climate. Besides the conceptual issues associated with DG’s measurement strategy, there are serious questions about how they implement it and whether it actually produces the results they claim. Perhaps most importantly, there are some unusual features of the data used by DG and their representation of climate change scenarios that appear to influence their results, in each case in a direction away from finding any potential negative impact of the change. In our own research we have considered regression models that use both cross-sectional climate variations and time-series weather variations. In Schlenker et al. we show that a better-specified hedonic model that accounts for the influence of irrigation on farmland values is robust and predicts large negative impacts from projected climate changes.

In Schlenker and Roberts we find a strong relationship between corn, soybean, and cotton yields and weather, a relationship indicating that extremely warm temperatures sharply reduce yields for all three crops. Adaptation to warmer, or even extreme, temperatures is suggested by DG and others, and this is of course possible, especially over time with the development of new crop varieties. But it is worth noting that we find no evidence of greater heat tolerance in yield regressions in warmer regions in the South as compared to cooler regions in the North, and no evidence that relative heat tolerance has grown over time. The relationship is strong and robust, very similar whether derived from time-series variations in weather or cross-sectional variations in climate, and comparable in the cross-section of farmland values. Thus, while one cannot predict whether adaptations to extreme heat may occur in the future, there appears to have been little or no adaptation in the past. Climate models, in turn, project that the frequency of extremely warm temperatures will increase significantly. Holding fixed the locations where crops are grown, we predict potential losses in yields for key crops of approximately 30-40% by the end of the century under a slow-warming scenario and 60-80% under the fastest-warming, "business as usual" scenario. These predictions also accord with our research using the hedonic approach, where potential losses in farmland value range from approximately 30% to 70% for the same scenarios over the same time period. What explains the stark differences between our empirical findings and those of DG? With regard to DG's results on profits and yields, we present evidence showing that the difference stems from several sources: coding and data errors in the weather data that magnify within-state temperature fluctuations by a factor of seven; an unusual and, in our judgment, incorrect characterization of climate change across the units of observation; differences in underlying climate change scenarios, in particular DG's reliance on an earlier and more optimistic climate projection than that used in the Fourth IPCC Assessment and in our analysis; and DG's omission of storage, and perhaps other financial or technological mechanisms, that smooth their measure of short-run profits in the presence of weather-induced output fluctuations and cause the short-run impact of weather on profit to understate the long-run impact of a permanent shift in climate.

To investigate these differences we downloaded DG's data and STATA code from the AER website. We found several irregularities in their weather and climate data, and these irregularities explain a large portion of the differences in findings. DG have two weather variables in their data set: dd89, which measures growing degree days for each year and county, and dd89_7000, which measures the average number of degree days in each county between 1970 and 2000. These two variables do not appear consistent with each other. The correlation of the county-level average of the four-year panel and the 31-year average given in their data is only 0.39. Given the wide variation in temperatures in the cross-section, one would expect a stronger correlation between the 4-year and 31-year averages across counties. We reconstruct the same weather variables from raw data sources and find a correlation of 0.99. We also find the average of dd89 is much lower, and the standard deviation much higher, than in our replication. Second, DG's baseline climate measure has a value of zero degree days for 163 counties. If correct, this measure implies temperatures do not exceed 8°C in those counties during the growing season. Temperatures this low would seem implausible in any state, yet many of these counties are in warm southern states. Anomalies caused by missing or incorrect measurements, which as we shall show have an important influence on estimated impacts of climate change, are illustrated in Figures 1 and 2. We independently calculate the degree days variable dd89_7000 used by DG and display it in the bottom panel of Figure 1. Note the much smoother pattern as compared to the large discontinuous changes in the top panel. Average temperatures vary smoothly across space: counties at the same latitude tend to have comparable average temperatures that increase as one moves southward. Exceptions to this rule are mountain chains like the Rockies in the West or the Appalachians in the East, where temperatures are cooler due to gains in altitude. The discontinuous pattern induces incorrect weather variation, which has an especially large influence on parameter estimates in regression models that use state-by-year fixed effects. Within-state temperature deviations in our replicated data set are roughly one seventh of DG's. Third, DG's predicted changes under warming scenarios are discontinuous in space and range from a decrease of 880 growing degree days to an increase of 6,572 growing degree days. This pattern is odd given that the underlying climate model does not predict cooling anywhere in the U.S., and the variance of the projected changes far exceeds that of any climate model. Predicted changes in DG's model and in our replication are shown in Figure 2. Again, compare the discontinuities in the top panel with the more coherent patterns in the bottom. The large variability of DG's predicted climate changes stems from the way they combine observed weather and climate-change forecasts. The difficulty arises from the fact that general circulation models generate climate predictions on a coarser geographic scale than the data available in historic records. DG use historic county-level data as a baseline combined with climate predictions that are uniform across each state. Thus, after climate change, Los Angeles and San Francisco, the Salinas and San Joaquin Valleys, Mount Whitney and Death Valley are all assumed to have the same climate since all are in California.
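For concreteness, growing degree days of the kind captured by dd89 are conventionally accumulated above a base temperature (8°C here) and capped at a ceiling (typically 32°C). The sketch below is a generic implementation under those conventional assumptions, not DG's or our exact code:

```python
# Growing degree days: a generic sketch, assuming an 8°C base and a
# 32°C ceiling over an April-September growing season (conventional
# choices; not necessarily the exact definition used by DG).
def daily_degree_days(tmean_c, base=8.0, ceiling=32.0):
    """Degree days contributed by one day with mean temperature tmean_c."""
    return max(0.0, min(tmean_c, ceiling) - base)

def season_degree_days(daily_means_c):
    """Sum daily contributions over a growing season of daily mean temps."""
    return sum(daily_degree_days(t) for t in daily_means_c)

# A county whose growing-season days never exceed 8°C would record zero
# degree days: the implausible value DG's baseline assigns to 163
# counties, including some in warm southern states.
print(season_degree_days([25.0] * 183))  # 3111.0 degree days
```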
Much of the within-state variation in DG's representation of climate change, however, is maintained in the baseline values, which are county-level averages. Such a representation of climate change therefore displays regression toward the mean, with cooler counties becoming much warmer and some very warm counties becoming cooler. This regression-toward-the-mean effect is accentuated by apparent errors in the baseline degree-day measure. Consider, for example, Fresno, Kings, and Tulare counties in the southern San Joaquin Valley of California. In DG's data, Fresno is predicted to have a decrease of 414 degree days; Kings county has an increase of 403 degree days and Tulare an increase of 4,685 degree days. Tulare's large increase is the result of a zero baseline. But even for Kings and Fresno counties, for which there are no missing baselines, the predicted climate changes are too different for bordering counties. This treatment of climate change is unusual. We are not aware of any other application of the Hadley GCM that predicts decreasing average temperatures by the end of the century in any U.S. location. Rather, the standard approach is to add regional predicted changes from the climate models to the sub-regional baseline. While there are differences between DG's and our own model of yields, much of the difference in our predicted impacts stems from the data issues described above. We generally find large negative projected climate impacts from replicated profit regressions as well, though results here are somewhat mixed and less likely to be significant, for reasons we discuss in the next section. Comparisons of the original and replicated yield and profit regressions are summarized in Tables 1 and 2. In our replications we fix the observations so they exactly match those used by DG. This excludes some agriculturally important counties that are missing in DG. For example, 66 of Iowa's 99 counties are missing from their data set, yet Iowa is the largest producer of corn and soybeans, in turn the nation's two largest crops. On the other hand, most of Nevada's counties, which are highly irrigated, are included. Irrigation poses a problem for estimating the effects of climate variables in both a cross-section and a panel. In a cross-section, such as the Ricardian or hedonic approach, the problem is that since irrigation tends to be correlated with temperature and precipitation, it can bias estimates if omitted, as we discuss later in the section on robustness. In a panel, the effect of weather fluctuations depends on water availability. DG deal with this problem by estimating regressions with separate coefficients for irrigated and rainfed counties.

The main impact on MA changes during this sample period was from output demand.

Even based on observed pK changes, CMA,pK = 0.044. If weighted by the greater increases in p*K, the MA demand-augmenting impact of capital price changes would be C*MA,pK = 0.056. The implied higher growth of the virtual compared to the measured price of capital could result from various factors. Its drivers could include substantive and rising adjustment costs, environmental or safety standards, or taxes that are not effectively captured in the measured user cost of capital. These capital costs motivate a substitution effect toward primary agricultural products. In turn, growth in the scale of production, or output demand, has had a greater-than-proportional effect on the augmentation of MA demand; eMA,Y = 1.095 on average for the full sample, implying CMA,Y = 0.024. And although eMA,Y > 1 implies scale effects are MA-using, they are even more MF-using, so in this sense they are relatively MA-saving. In contrast to the positive substitution and scale influences on MA use, disembodied technological shift impacts on MA demand have been negative and, in a direct sense, quite large. That is, an input-cost-diminution impact on MA demand is evident (-0.008 on average) that is typically interpreted as deriving from disembodied technical change.
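A plausible reading of these contribution measures, under the assumption (ours) that each C term is the corresponding demand elasticity weighted by the annual log-change of its argument, is

\[
C_{MA,x} \;=\; \epsilon_{MA,x}\,\Delta\ln x ,
\qquad
\epsilon_{MA,x} \;=\; \frac{\partial \ln MA}{\partial \ln x} ,
\]

so that, for example, \(\epsilon_{MA,Y} = 1.095\) combined with annual output growth of roughly 2.2 percent would yield \(C_{MA,Y} \approx 1.095 \times 0.022 \approx 0.024\).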

This negative trend impact is statistically significant; the eMA,t estimates are significantly different from zero for most individual observations. And this tendency was augmented post-1980 (-0.021). The direct t and t2 impacts are, however, much greater in magnitude than these total measures, since much of the direct trend effect is counteracted by effective price trends that may be interpreted as embodied technical change or adjustment costs, as alluded to above. These patterns can be seen from the decompositions of the total trend and structural change impacts in the first section of Table 2, which arise from the inclusion of t terms in the p*MA and p*K specifications. For our scenario, although eMA,pMA < 0, since the trend component of p*MA is negative, the indirect p*MA effect on MA demand is positive, as is the p*K effect, since K is a substitute but p*K is rising. This evidence is consistent with the embodied technical change interpretations of the t impacts on effective prices implied by the discussions of the p*MA and p*K as compared to pMA and pK changes above. Declines in effective as compared to measured pMA, and the reverse for pK, both tend to augment MA use. Escalation of the equipment-to-structures ratio, representing another form of embodied technical change, also had a positive impact on the demand for MA; CMA,ES = 0.014.

Total Cost Implications

In addition to the specific MA impacts, the total cost effects of adaptations in the economic and technological climate are of interest individually, as well as providing indications of input biases.

The cost effect most directly associated with the use of MA is represented by the eTC,pMA = 0.025 elasticity, indicating the impact on costs of pMA changes, which depends on the input intensity, or average share, of MA for industries that use agricultural commodities. This is larger than the corresponding elasticity for any other input; rising pMA has a substantive positive impact on production costs, and thus on output production/price, in the food processing industries. Note, however, that the overall pMA contribution to total cost increases of CTC,pMA = 0.014 is not only smaller than that for capital, but is even lower if the smaller increase in effective pMA is recognized within this measure. The eTC,Y estimate of 0.868, which implies significantly increasing returns to scale, also deserves attention. This evidence is largely driven by a very small capital-output elasticity, which counteracts the eMA,Y elasticity of slightly more than 1 and an eMF,Y elasticity that is even higher, suggesting that scale expansion is somewhat MA-using, and significantly K-saving and MF-using. This is of particular interest since this conclusion is closely linked to the inclusion of t in the lK and lMA specifications. When t is not included as an argument in these specifications, output increases instead appear MA-saving, and both the eK,Y and eTC,Y elasticity estimates are much closer to 1, implying close to constant returns to scale. These patterns highlight two issues alluded to above. First, apparent declines in the MA-input-intensity of output production in the food industries are partly associated with increases in effective or quality-adjusted MA inputs, perhaps due to embodied technical change. Second, adjustment costs for capital, implied by a higher and more quickly rising p*K than pK, may mean that these estimates should be interpreted as short-run, or at least capital-adjustment-constrained, estimates.
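For reference, under the standard cost-function definition the degree of scale economies is the inverse of the output elasticity of total cost, so the reported estimate implies (our arithmetic)

\[
SCE \;=\; \frac{1}{\epsilon_{TC,Y}} \;=\; \frac{1}{0.868} \;\approx\; 1.15 ,
\]

that is, a 1 percent increase in output raises total cost by only 0.868 percent, consistent with significantly increasing returns to scale.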

Both of the impacts just noted, embodied technical change in MA inputs and capital adjustment costs, if ignored, affect estimation of the scale or output effects. Finally, the elasticities associated with disembodied and capital-embodied technical change deriving from t and ES changes, and with structural changes in the 1980s, suggest other technological forces have contributed to cost diminution. The negative values for both CTC,t = -0.004 and CTC,t2 = -0.012, augmented by the embodied technical change impact CTC,ES = -0.041, highlight such trends, their enhancement in the 1980s, and the contribution of technological advance embodied in equipment. However, the total disembodied technical change impact becomes positive (CTC,t = 0.0004) when the higher cost of capital is recognized, even though the analogous effect for p*MA is in the opposite direction. By contrast, CTC,t2 is even more negative than its direct counterpart, since CTC,p*K,t2 = -0.0025 outweighs CTC,p*MA,t2 = 0.001. Note also that the input-specific CMA,t = -0.0525 measure is much larger than the associated overall input declines captured by CTC,t = -0.004, and the total MA effect CMA,t is negative whereas that for TC, CTC,t, is positive, indicating that "technical change" has been both relatively and absolutely MA-input-saving. Over time there has been a technical change bias toward reducing MA use more than other inputs for a given level of output.

Marginal Cost and Output Price

To move toward consideration of the pass-through of MA prices to output price, as well as its impact on scale economies, we can compare these estimates to those for marginal cost in the third panel of Table 1. Note that the input price effects for the materials and labor inputs are slightly larger for MC than for total cost, implying a depressing impact on scale economies. The reverse is true, however, for the pK and pE elasticities, supporting the notion that capital is subject to adjustment costs, and "lumpiness", that are driving forces for returns to scale. This is also consistent with the virtually nonexistent MC impacts of changing output, and with the fact that marginal cost has decreased significantly over time, in terms of both the direct and indirect effects, largely due to the smaller impact of pK on MC than on TC. Comparing these measures to those for pY provides some insights about markup behavior and its determinants. The average epY,pMA = 0.272 elasticity is larger than either eTC,pMA or eMC,pMA. So a 1 percent increase in pMA drives a somewhat larger increase in AC than MC, and an even greater adaptation in pY than MC. This implies a higher markup pY/MC associated with a rise in pMA, but also an increase in the scale economies that support such markups. Note also that pY decreases somewhat more than MC as time progresses, primarily due to the larger p*MA effect.

Temporal and Industrial Variations

In addition to the indicators for the data averaged over the entire sample, it is useful to briefly consider variations in the estimates over time and by industry, which are presented in Tables 3 and 4, respectively.

The temporal decompositions presented in Table 3 show a much smaller depressing contribution of pMA increases to MA demand post-1980, resulting from low pMA growth; the measured eMA,pMA elasticity is actually larger later in the sample. Also note that the trend in the effective price of MA is actually downward for the post-1980 period, so the full contribution of own-price changes to MA demand is positive. This tendency is particularly worth highlighting since measured pMA actually dropped after the end of our sample period, which implies that the patterns suggested by these measures may have intensified. It also appears that although the growth rate of MA demand in the 1980s was larger than in the 1970s, the individual input price contributions were generally smaller, with less of the growth arising from output increases. In fact, a large proportion of MA demand expansion seems to have arisen from t-effects. In particular, the indirect p*MA effect has increased over time to the point where the total CMA,t is positive post-1980, although the direct impact reported in Table 2 remains negative in the later time period. The TC measures for the 1970s as contrasted with the 1980s, presented in Table 3, indicate a much smaller average annual percentage increase in total costs for the food processing industries overall post-1980, which is only in part due to a slower output growth rate. All the contributions of individual TC determinants are smaller, although they remain statistically significant. In particular, the eTC,pMA elasticity is slightly lower in the 1980s, but the contribution falls more since pMA increased so little. The estimate of the actual TC change in the 1980s seems to be driven by capital price effects, which appear in the CTC,pK measure of 0.014, as well as a positive CTC,p*K,t measure of 0.009, which augments the direct CTC,t = 0.004. Although a full analysis of the 3-digit industries within the food processing aggregate is beyond the scope of this study, it is worth briefly considering the differences in MA demand that are apparent across these sub-samples, as reported in Table 4. First note that for the meat products industries very little substitution is apparent, as might be expected. Note also that the t-effect is very small, at only about 10% of the magnitude of that for these industries as a whole. For the dairy industry, the own- and cross-substitution responses seem similar to those for the overall food processing industries. But the t impact in total is very slightly positive, since the indirect adjustment, particularly the CMA,p*MA,t component, is quite large. The vegetables sector of the industry seems to be fairly responsive to the own price of MA. The p*K contribution, as well as the t elasticities, are also large. The substantial t impacts on p*MA and p*K in fact suggest a particularly significant amount of embodied technology in the primary agricultural vegetable inputs, as well as high and increasing adjustment costs, likely due to the great scale and processing expansion in this industry. The grain mill and oil industries have exhibited quite different patterns. We find a negative output impact on MA demand for grains, due both to the very low eMA,Y elasticity and to observed output declines for some observations. Responsiveness to other factors seems generally low in this industry, except perhaps for ES.
For the oil industries, we find the own-price contribution to be smaller than for most industries, and even less responsiveness to the prices of other inputs, and thus less substitutability; the cross-demand contributions are only about half those for the food industries as a group. By contrast, the output response is the largest of any industry on average. For sugar and confectionery products the own-price contribution is, by contrast, very large, although other substitution effects are somewhat small relative to the other industries. The pK impact is somewhat smaller, and the CMA,t impact somewhat larger, than for the industry as a whole. And industries in the miscellaneous category have exhibited substitutability patterns similar to those apparent for the overall industry, except for very small capital/energy and technological contributions.

Impacts of MA Price Changes

Finally, in Table 5 we report elasticities that facilitate an evaluation of responsiveness to pMA changes, which may be thought of as the converse experiment to the evaluation of MA demand changes that began our discussion of empirical results. These measures facilitate investigation of the potential implications of the declines in pMA experienced by the food industries during the remainder of the 1990s, not represented by our data sample.

Pasture still accounts for over 90 percent of animal nutrition in Brazil.

In the piece, Boyd describes a heterogeneous network of firms, governments, and advocates developing REDD institutions simultaneously. He uses the metaphor of DNA to describe how frameworks and approaches from any particular effort can be combined and recombined into other efforts without passing through the bureaucracy of the UN. One respondent noted that a good deal of finance under the banner of REDD has already begun to address drivers, well before the concept has acquired substantive meaning in the UNFCCC. "It's hard to know which is the cart and which is the horse sometimes," observed the respondent. For party and advocacy interviewees alike, the drivers theme is closely associated with a gathering debate about the appropriate uses for REDD finance. Several respondents noted that agricultural supply chains might be better suited to REDD finance than traditional government targets and forest landholders. Among this group, there was a sense that existing lending had not only been ineffective but had, as a consequence, been heavily delayed by challenges in finding appropriate borrowers and grantees. On the flip side, some advocates who are currently beneficiaries of REDD finance schemes argued that drivers are a low-priority distraction that could threaten a "successful" model for REDD.

Yet another position questioned the wisdom of finance targeted at monitoring and evaluation of REDD at the expense of direct support for private sector activities. This subject argued that drivers might be helpful in rechanneling finance out of government hands and into the private sector. One respondent remarked that the insertion of drivers indicates a balkiness about the role of offsets from compliance markets in funding REDD activities. Drivers, this subject argued, indicate a growing credence in the "fund model" of REDD. In my interviews, several respondents keyed in on the "all parties" language in Paragraph 68 as a signal of a focus on policy targets beyond landholders themselves, along land use-intensive supply chains. Several respondents saw the opportunity, and some the need, for synergy between formal REDD activities and other complementary government and NGO initiatives. Potential linkages mentioned included the commodity round tables. Others expressed hope that the legal model used to crack down on the international trade in illegal tropical timber could be adopted for agricultural products. However, several parties have expressed skepticism about the political and legal potential of such a measure. The European Commission, meanwhile, has called for further research on the matter. In sum, demand-side and supply chain approaches to REDD represent a very different set of guiding principles than REDD as usual. In a sense, such approaches constitute the fourth cell of the 2×2: the public sphere as the root of the problem, and the private sector as the site of intervention.

Finally, how drivers relate to reference levels was a less common, but highly charged, theme. Reference levels are estimates of the rate of deforestation in the absence of REDD. They are important to parties because they will form part of the basis for determining performance payments. In turn, drivers are important to reference levels because drivers are factors associated with deforestation. One subject speculated that the high stakes of reference levels could pull along work to define drivers, but only if drivers are included in the reference levels discussion. Another disagreed, worrying that linking the charged atmosphere of the reference levels debates with the efforts to define drivers could derail progress on defining drivers. The concern was that reference levels might politicize drivers and thereby limit progress toward defining them rigorously.

Previous studies suggest that accelerating adoption of higher-output, semi-intensive Brazilian cattle pasture intensification technologies can reduce direct GHG emissions, slow deforestation rates, and secure GHG benefits from bio-fuels production. Adoption of one or more semi-intensification technologies can directly reduce emissions from cattle systems by limiting enteric methane produced per unit of meat and by reducing nitrous oxide emissions.

Direct reductions are notable, but they are an order of magnitude smaller than the reductions estimated from avoided deforestation due to adoption of semi-intensification systems. Such reductions could deliver a substantial fraction of the GHG mitigation pledged in the PNMC, Brazil's Climate Law. Previous research suggests that land sparing from pasture intensification could also readily absorb the additional demand for land associated with bio-fuels mandates. However, previous estimates of the land sparing potential of cattle pasture intensification have shortcomings in their depictions of production systems, land use change processes, policies, and GHG impacts. One problem is the non-mechanistic representation of agricultural intensification processes. Rather than delineate all changes to the material and financial flows through the cattle systems, some studies depict boosted output alone. This approach does not allow representation of the full scope of mechanisms by which intensification shifts land use, and it therefore may misrepresent the outcomes of intensification processes. In such studies, intensification balances the spatial budget instead of depicting the land use change process using mechanisms consonant with economic principles and used at the cutting edge of land change science. Second, policies are often highly stylized, depicted through constraints without practical analogs. Some modeled policies are implemented globally, belying the limits to the authority of international law. Third, though greenhouse gas emissions are of global salience and land use change processes are increasingly mediated by global markets, previous studies have focused on land use and greenhouse impacts at scales ranging from local to national. Global impacts from national land sparing policies have not been an explicit focus.

This study investigates the global land use and GHG impacts of pasture intensification policies for achieving compliance with the cattle ranching intensification targets specified in the Brazil National Climate Plan. We focus explicitly on three aspects of the problem that previous studies have not adequately addressed: a bottom-up, mechanistic representation of cattle production systems; policy interventions with practical analogs; and the effects of global trade. We employ the global economic land use simulation model GLOBIOM, adapted to Brazil, in order to represent the global land use and GHG impacts of changes to land policies and pasture technologies in Brazil. The model consists of spatially explicit estimates of global crop, rangeland, and timber production potential; spatially explicit, product-specific internal transportation costs in Brazil; an economic model representing the competition for land among the forestry, agriculture, and livestock sectors; and international trade in crops and livestock products. We introduce a semi-intensive, pasture-based cattle ranching alternative production type into the model. This production system offers higher yield per hectare at higher production costs per hectare. The main model outputs are crop, livestock, and timber production, land use change, and GHG emissions from land use and the agricultural sector, at a 50x50 km resolution in Brazil for each 10-year period over 2000-2030. Minimum levels of global food, timber, and bio-energy demand are exogenously specified based on assumptions about population and economic growth.

We model two intensification policy mechanisms: a flat subsidy for all ranches adopting intensive alternatives, and a flatly levied tax on all ranches that do not adopt intensive alternatives. Both of these interventions reduce the cost gap between higher-cost, semi-intensive alternative pasture systems and lower-cost, business-as-usual pasture systems. We explore the extent to which international trade modulates the effects of these policies by comparing model outcomes when Brazilian external trade is frozen at the level observed in the baseline simulation with outcomes when external trade is allowed to adjust to the introduction of pasture intensification policies. The GHG impacts of the intensification policies are accounted as the difference between all GHG emissions in the modeled baseline scenario and all GHG emissions in each of the policy scenarios.

In recent decades, the cattle sector of Brazil has expanded, shifted north and west, intensified, and become more export-oriented. Brazil has the largest commercial cattle herd, the second largest share of international trade in cattle products, and widespread adoption of semi-intensive pasture technologies. Semi-intensive adoption is largely associated with good domestic market access and fertile soils: higher rates of output per area can be found in the South, Southeast, and parts of the Center West of the country. Meanwhile, the geographic center of cattle production has continually migrated north and west in Brazil as an integral part of the frontier settlement and deforestation process. Internal demand for Brazilian cattle products is strong and growing. Per capita beef consumption is high, and per capita consumption relative to GDP is higher still, despite the fact that prices are high relative to other countries of similar socio-economic status. Confinement facilities have emerged in the past decade, but they are cost-competitive with pasture only during the height of the dry season and only in the core agricultural regions where land prices are higher than is typical. Age at slaughter has declined somewhat in the past two decades, but it still lags well behind producers in the U.S. and Europe. Growth in the sector has been uneven: fluctuations in currency, purchasing power, and export market access temper and enhance the typical price-dependent managerial fluctuations in slaughter rate and age class ratios known as the cattle cycle.

Our baseline simulation depicts the period 2000-2030 in a fashion broadly concordant with the aforementioned patterns. Beef production in Brazil is simulated to increase to 13.4 million tons of carcass weight equivalent by 2030, a modest 50 percent rise over production in 2011. Most of this increase will be associated with exports. We simulate that Brazilian beef exports will grow to represent 40 percent of production in 2030, compared to 8 percent in 2000 and 24 percent in 2007 as reported by the United Nations Food and Agriculture Organization. The number of head of cattle is simulated to increase in all five political regions of Brazil across the simulation, but differentially. The Center West is simulated to remain the top cattle-producing region, but its share would decrease due to rapidly rising cattle production in the North region. Part of the shift is due to agricultural expansion in the Center West and Southeast regions: simulated crop farmers are willing to pay more for land than ranchers in these high-fertility, market-proximate regions.
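Both policy levers act on the same margin, the per-hectare cost gap between the two systems. The sketch below illustrates that mechanism with a simple threshold rule and hypothetical numbers; GLOBIOM resolves adoption within a full economic optimization, so this is an illustration of the policy logic, not the model's internals:

```python
# Illustrative adoption rule for the two policy levers: both shrink the
# per-hectare cost gap between semi-intensive and business-as-usual
# (BAU) pasture. All names and values are hypothetical.
def adopts_semi_intensive(cost_bau, cost_semi, revenue_gain,
                          subsidy=0.0, tax=0.0):
    """True if semi-intensive pasture is cheaper on net for this ranch.

    subsidy: paid per hectare to ranches that adopt the semi-intensive system.
    tax:     levied per hectare on ranches that remain business-as-usual.
    """
    net_semi = cost_semi - subsidy - revenue_gain  # effective cost if adopting
    net_bau = cost_bau + tax                       # effective cost if not
    return net_semi < net_bau

# With a $120/ha cost gap only partly offset by higher output, either a
# sufficiently large subsidy or tax tips the decision:
print(adopts_semi_intensive(200, 320, 60))              # False: gap remains
print(adopts_semi_intensive(200, 320, 60, subsidy=70))  # True
print(adopts_semi_intensive(200, 320, 60, tax=70))      # True
```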
Prior to 2004, growth and change in the Brazilian cattle sector closely paralleled conversion of savannah and tropical forest in the north and west of Brazil. Remote sensing studies report that cattle ranching occupies between 50 percent and 80 percent of cleared Amazon rainforest land at some point in the initial years following clearing (Brazilian Agricultural Research Corporation, 2011). However, in key frontier regions, the last five years have seen growth in cattle production unmatched by concomitant land conversion. This discontinuity has been interpreted as promising evidence of land sparing, but it also highlights the challenges of estimating the extent to which marginal growth in the cattle sector contributes to deforestation. We revisit this theme in our discussion section. Nevertheless, in some periods, cattle pasture and crops expand in a concerted fashion. New croplands arise closer to key infrastructure, while new pasture arises close to the agricultural frontier. However, it is often the case that croplands arise through conversion of pasture. The net effect seems to be that crops are expanding more rapidly than pasture. Past trends in the balance of pasture, croplands, and native vegetation in Brazil continue in our baseline simulation. Pasture and crops undergo modest gains and, for the most part, these come at the expense of native vegetation. We simulate less land abandonment than the rate that has been empirically observed in the Amazon biome (Brazilian Agricultural Research Corporation, 2011). Land use change GHG effects from beef are uncertain; nevertheless, beef is associated with substantially higher direct environmental impacts than other food products. Methane emissions, nitrous oxide emissions from soils, and nitrate water pollution are all major problems from cattle systems. Methane emissions aside, the bulk of the GHG effects are associated with land use changes that the literature attributes to cattle production systems. However, these studies employ techniques for attributing land use change to cattle ranching that use correlational relationships between cattle pasture and deforestation as proxies for causal patterns.

Fitz-Gibbon and Eisler similarly critiqued early meta-analyses of educational and psychology studies.

In contrast to the notion that seepage water from wetlands may be considered a source of groundwater nitrate contamination, this study shows that, under the conditions present in our wetland, seepage through wetland soils can actually prevent some nitrate contamination of groundwater. Before recommending constructed wetlands that utilize seepage as a beneficial-management practice for treating agricultural tail waters, further study is necessary to determine the fate and transport of other contaminants. Studies are also needed to evaluate long-term nitrate removal efficiency over the life of these wetlands. While sealing the constructed wetland floor is considered an important aspect of treatment-wetland design, as it prevents the seepage of contaminants into groundwater bodies, it is not economically practical in most agricultural settings. Moreover, sealing wetlands can discourage surface water exchange with soils, which is where denitrification is most favorable.

Initially developed by medical researchers to synthesize data from multiple clinical trials, systematic literature review and meta-analysis are increasingly popular in the agricultural sciences.

Systematic literature reviews apply a structured methodology to collect and analyse secondary data, with the objective of transparently reviewing all available research evidence. Systematic reviews contrast with traditional literature reviews, the former being thought of as more objective, defensible, and conclusive. Meta-analysis takes systematic literature review further, fundamentally changing how research syntheses are conducted. It extracts and assembles quantitative information from primary studies to build a database for analysis. This enables increased statistical power and the testing of hypotheses that can only be partially addressed through individual studies. Rosenthal and Schisterman suggest that meta-analysis permits researchers '…to formally and systematically pool together all relevant research in order to clarify findings and form conclusions based on all currently available information'. Most researchers conducting meta-analysis collect means and standard deviations of response variables to determine treatment effect size. Meta-analysis of combined data from papers that individually report non-significant or idiosyncratic relationships between variables can point to an underlying data structure across studies. Both Garg et al. and Borenstein et al. therefore argued that increased statistical power is a key reason for deploying meta-analysis to address conflicting research findings and resolve scientific debates. Doré et al. recommended that agronomists conduct meta-analysis to investigate patterns in cropping system performance.
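As an illustration of the computation this describes, the sketch below derives a log response ratio, one common effect-size choice in agronomic meta-analysis, and its sampling variance from the summary statistics meta-analysts typically extract; the function name and data are illustrative:

```python
import math

# Log response ratio (lnRR) from the statistics most meta-analysts
# extract: treatment/control means, standard deviations, sample sizes.
def log_response_ratio(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Return (lnRR, sampling variance) for treatment vs control."""
    lnrr = math.log(mean_t / mean_c)
    var = (sd_t**2) / (n_t * mean_t**2) + (sd_c**2) / (n_c * mean_c**2)
    return lnrr, var

# Hypothetical yield trial: 5.2 vs 4.6 t/ha, n = 12 plots per arm.
lnrr, var = log_response_ratio(5.2, 0.8, 12, 4.6, 0.7, 12)
print(f"lnRR = {lnrr:.3f}, i.e. ~{(math.exp(lnrr) - 1) * 100:.0f}% yield gain")
```

Pooling many such (lnRR, variance) pairs with inverse-variance weights is what gives meta-analysis its added statistical power.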

Over 1000 studies using meta-analysis in agriculture have been published since 1985, with 65% completed since 2012. Although described as one of the most objective and robust methods in agricultural research, the usefulness of meta-analysis has long been questioned in other fields. For example, Eysenck described meta-analyses of clinical psychotherapy interventions as 'an exercise in mega-silliness' and an 'abandonment of scholarship' because researchers commonly included studies 'mostly of poor design'. In a more recent evaluation of 9135 papers labelled as systematic reviews or meta-analyses in health care, Ioannidis found one in six studies to be misleading, and one in three redundant, unnecessary, or potentially biased. Additional methodological concerns with meta-analysis have been identified in other fields that may apply to the agricultural sciences. The first concern involves the criteria used to select and analyse literature. Failure to locate all available literature, or inclusion of primary studies with diverging or poorly implemented methods, can lead to contradictory or erroneous conclusions. The greater availability of publications in developed compared to developing countries, and the reduced accessibility of non-English literature, may also compromise the comprehensiveness of research results. Publication bias, a condition resulting from journals' preference for publishing studies with significant rather than non-significant results, is one of several related issues.
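One widely used diagnostic for the funnel-plot asymmetry that publication bias produces is Egger's regression test. The sketch below is a minimal illustration; the test itself is standard, but the effect sizes, standard errors, and variable names are ours:

```python
import numpy as np
from scipy import stats

# Egger's test: regress each study's standardized effect (effect / SE)
# on its precision (1 / SE). An intercept far from zero indicates
# funnel-plot asymmetry, a symptom of publication bias.
effects = np.array([0.42, 0.31, 0.55, 0.12, 0.48, 0.60, 0.25, 0.38])
ses = np.array([0.20, 0.12, 0.25, 0.08, 0.22, 0.30, 0.10, 0.15])

precision = 1.0 / ses
standardized = effects / ses

# Note: linregress's p-value tests the slope, not the intercept; a full
# Egger test would compute the intercept's own standard error.
slope, intercept, r, p_slope, se_slope = stats.linregress(precision,
                                                          standardized)
print(f"Egger intercept = {intercept:.2f} "
      "(values far from 0 suggest asymmetry)")
```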

Analytical techniques are now available to overcome publication bias, though they are inconsistently applied. Reviews of meta-analysis in agriculture include Philibert et al. and Brandt et al., who suggest that the methodological quality and application of meta-analytical techniques have been highly variable. Most meta-analyses in agronomy focus on crop yield response to experimental manipulation. Yield is however only one criterion by which the performance of cropping systems can be judged: yield stability and resilience, nutritional yield, and environmental and economic performance are additional relevant but less studied indicators. Aside from the constructive critiques of Philibert et al. and Brandt et al., critical appraisal of meta-analysis in the agricultural sciences is largely lacking. This paper addresses this research gap by considering a suite of as-yet-unaddressed issues of importance, starting with the ways in which meta-analytical research is framed. Framing can be defined as the way in which research questions and methods are selected, described and justified as contributing to solutions for particular problems, for example, agricultural productivity or environmental goals. When applied to rural development, Andersson and Sumberg refer to studies that reiterate these goals as belonging to ‘development-oriented agronomy’. Given heightened competition among agricultural scientists for decreasing research funds, research topics and investments are commonly justified using the language of development-oriented agronomy.

Meta-analysis may also be described as a descendant of the logical-positivist tradition of science that champions empirical and hypothesis-driven inquiry as the prime mechanism by which unbiased knowledge is generated and validated. Sumberg et al. and de Roo et al. conversely recognized the sociopolitically embedded nature of agricultural science. In doing so, they recognized the ways in which agricultural researchers in development-oriented agronomy experience tension between the generation of scientific evidence and the need to convince multiple audiences of the relevance of their research findings and types of agronomic practices. In addition to the narrative employed when agronomists design, interpret and discuss research results, we explore the ways in which this tension can influence the range of potential solutions to agricultural problems that may be proposed by agronomists conducting meta-analysis. Confirming the placement of meta-analysis within the logical-positivist tradition, researchers publishing meta-analyses in agronomy frequently highlight the size and representativeness of their datasets – which are usually constructed using observations from small-plot agronomic experiments – to answer agricultural development questions of continental or even global significance. Goulding et al. however cautioned that results from small-plot trials should be interpreted cautiously, as they may not capture higher-level processes and contextual interactions, and thus poorly approximate whole field- and farm-scale performance. Addressing these topics, we examine how meta-analysis has been used to support claims and counter-claims over organic agriculture and conservation agriculture. In doing so, we critically assess the suggestion that meta-analysis can provide unifying conclusions and settle topics of scientific debate. We adopt a ‘political agronomy’ perspective that recognizes the socio-politically embedded nature of agricultural science and suggests that agronomy can be an arena for contestation and debate. OA and CA are among the most widely disputed subjects in contemporary agronomy, with vigorous debate indicating large rifts in epistemological approaches and contrasting agricultural research and development paradigms. Considering these issues, we review prominent OA and CA meta-analyses published since 2007 and discuss whether meta-analysis has reduced or resolved research debate. We conclude by offering suggestions for how both scientists conducting meta-analyses and readers of the scientific literature can more carefully evaluate meta-analytical evidence, particularly when applied in the context of development-oriented agronomy.

OA is defined by the International Federation of Organic Agriculture Movements (IFOAM) as a production system that sustains the health of ecosystems and people, and that makes use of ecological processes and cycles to eliminate synthetic inputs.

OA is frequently equated with ‘ecological’, ‘agroecological’, ‘sustainable’ and/or ‘low-external-input’ agriculture, though each may differ in practice. OA is also generally contrasted with ‘conventional agriculture’, although the characteristics of conventional agriculture tend to be counterfactually defined as anything not organic. Conversely, OA is commonly framed as a holistic and sustainable alternative production system, as well as a philosophy. Although debate over OA has a long history and has been recognized as being rooted in schisms between different agricultural paradigms, a systematic review by Badgley et al., concluding that OA could produce more food than required to feed the global population, sparked much contemporary debate and paved the way for use of meta-analysis in organic-conventional systems comparisons. Successive meta-analyses examining OA each claimed increasingly large datasets and more comprehensive and conclusive analyses. This case study analyses key meta-analyses published following Badgley et al. and considers whether meta-analysis has resolved or contributed to further debate over the merits of OA.

CA involves three crop management principles: minimal soil disturbance, crop residue retention as mulch, and crop rotation or diversification. Practiced in combination, these principles are meant to reduce soil degradation while increasing yields and reducing production costs. Although reduced tillage dates to the 1930s, widespread adoption began only after 1970, following the release of herbicides, mechanized NT planters and, in the 1990s, the advent of herbicide-resistant, genetically modified crops. Erosion mitigation and reduced costs from the elimination of tillage appear to have been major drivers of adoption on large-scale farms in developed countries. These goals were however also considered imperative for smallholders in developing nations, sparking interest amongst international research and development organizations in CA. CA has since been widely reframed as a yield-enhancing technology to improve smallholder food security, with widespread promotion to smallholder farmers ensuing in sub-Saharan Africa and South Asia, in particular. This prompted critical debate over the suitability of CA in the context of development-oriented agronomy, and particularly the yield and adoption claims made for CA. This case study consequently considers eleven prominent meta-analyses on CA published since 2010, again asking whether meta-analysis has resolved or inadvertently contributed to further debate.

Contemporary debate over the productivity of OA emerged with Badgley et al., who framed their paper as a response to objections that OA could make significant contributions to the global food supply. Badgley et al. compiled what they referred to as a ‘global dataset’ of 239 yield response ratios for a diversity of crop, meat, and dairy products. An average YRR of 1.32 was reported, indicating higher organic than conventional yields, with ratios in developing and developed countries averaging 1.80 and 0.92, respectively. Average ratios were extrapolated to estimate whether OA could produce sufficient calories to meet global requirements. The authors concluded that OA could supply 17–50% more calories per person per day than the globally extrapolated average adult requirement.
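In the notation we adopt here (the symbols are ours, not the authors'), the yield response ratio and the caloric extrapolation described above can be written as:

```latex
% Yield response ratio for study i:
\mathrm{YRR}_i = \frac{Y_i^{\,\mathrm{organic}}}{Y_i^{\,\mathrm{conventional}}}
% Extrapolated organic calorie supply, multiplying current production
% C_j in each food category j by that category's mean ratio:
\hat{C}^{\,\mathrm{organic}} = \sum_{j} \overline{\mathrm{YRR}}_j \, C_j
```

Note that this extrapolation assumes the sampled ratios are representative of each category globally, which is precisely the assumption later critics contested.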
Badgley et al. also summarized 77 studies quantifying biological nitrogen fixation to estimate whether legumes could supply sufficient nitrogen annually to substitute for global use of synthetic fertilizer N. They concluded that OA could supply global food requirements without requiring additional land or fertilizer resources, and advocated strongly for increased institutional and public support for OA. The editors of Renewable Agriculture and Food Systems, which published the study, also permitted Badgley et al. to publicly reply to the Editor's and peer reviewers' concerns with their manuscript in a special Forum. Agronomists presented a range of technical critiques used to problematize Badgley et al.'s results and argue against increased OA research funding and support. Cassman, for example, critiqued the analysis of YRRs from singly grown crops, as opposed to the rotational systems commonly employed in OA. Use of grey literature and concern over yield data collected in different years, e.g. before and after farmers adopted organic practices, were flagged as methodologically invalid. Such before–after measurements comprised half of the data from developing countries presented by Badgley et al., and originated from a single report. Badgley and Perfecto however countered that organic-conventional comparisons were rare in developing countries, necessitating the use of before–after comparisons and grey literature. Importantly, Badgley et al.'s framing of OA was broad, including agroecological, sustainable or ecological practices that either exclude or make limited use of synthetic pesticides, and that improve soil quality. This definition differs from that of IFOAM and other certifying agencies, and was critiqued by Cassman as vague. The food policy analyst Dennis Avery argued that nearly half the studies in the Badgley et al. database used synthetic fertilizer or pest control products, which would disqualify them as organic under most certification programmes. Badgley et al. countered that practices using synthetic inputs in ways intended to reduce their application should still qualify as OA.

An increase in government spending naturally affects domestic aggregate demand

The devaluing pressure on the exchange rate would create an interest differential in favor of the foreign countries and a capital outflow, which would increase the depreciating pressure on the exchange rate. Money supply growth would thus decrease proportionally, and so would price inflation, at least initially. Real depreciation would then lead to an increase in demand. The increasing output demand would raise inflation, and thus real money balances would start to fall. Nominal interest rates would then increase, thereby restoring the capital account. Ultimately, real depreciation of the exchange rate and falling real money balances would bring the system back to equilibrium. With government intervention in agriculture, the government is able to “neutralize” any effect of foreign disturbances on domestic prices and demand. That is, the government can fix the target price at the existing domestic price level, allow domestic producers to sell on the domestic market at the world price, with the latter below the former, and pay the difference. This kind of intervention amounts to a complete “sterilization” of foreign disturbances on the trade balance, the exchange rate, and on monetary growth and domestic inflation.

In the long run, this strategy would lead to unsustainable cumulative budget deficits; if the foreign price decrease is permanent, an increase in domestic taxes is then necessary in order to finance the deficit. This, however, would lead to a decrease in disposable income with the consequent deflationary effects. Therefore, we define an intervention mechanism that allows zero-sum deficits in the long run. We will assume all non-farm government expenses are exactly balanced by tax revenues, so that all government spending in agriculture amounts to a deficit in its budget. The resulting budget deficit must be either monetized or debt financed. Both operations have obvious effects on the money market, the capital account, and the trade balance. Increases in government expenditure which are debt-financed directly increase domestic absorption and income but do not have any effect on the monetary base. Increases in government expenditure financed by money creation directly increase domestic absorption and income and obviously affect the money supply. In the former case, the overall long-run effect depends on the degree of capital mobility and can be either contractionary or expansionary. In the latter case there are no real long-run effects. Without ruling out either of the two possibilities, the budget deficit is specified to be debt financed and partly monetized. The introduction of government expenditure in agriculture has several implications. First, the real effects of changes in money and in the exchange rate appear to be dampened. This accounts for the sluggishness of movements in prices, output, and the trade balance that occur because of “institutional” factors.

In particular, by acting on the way agricultural output reacts to changes in the monetary variables, the standard model is modified and all price deflationary or inflationary effects are lessened. Under this new framework, money is still neutral in the long run; but it has non-neutral effects in the short run which are smaller than in the standard model, and the overshooting in the exchange rate is smaller as well. Second, the entire dynamics of farm prices is altered. In the standard model, the differences in the dynamics of the two prices are ultimately due to their different degrees of stickiness and to the overall GNP share of the two sectors. These differences can, for instance, put the farm sector in a “cost-price” squeeze if manufacturing prices increase more than farm prices in the short run. With government intervention, farm prices are more protected; and if the degree of intervention is high, the overall real effect on farm prices of monetary contractions or of exchange rate appreciations can be nil, and thus the differential speed effect can turn in favor of farm prices. Third, several feedback effects other than the ones already present in the standard model can occur in the new formulation. Since the main scope of government intervention is to counter unfavorable movements in relative prices and to dampen the negative effects of monetary shocks on the farm sector, the impact on the money market and on the entire economy resulting from the financing of the budget deficit now represents one more channel of feedback from prices to money and the exchange rate. In order to introduce government intervention in agriculture explicitly, we assume that agricultural producers have a notional supply function of the type defined by Barro and Grossman.

This supply function depends mainly on two types of forces: a combination of policy indicator variables and excess demand pressures. The policy indicator variables represented here are the target price, q, and a land reduction premium, v. Regardless of market prices, agents who participate in government programs are assured a price of q. The second policy indicator serves to lessen the financial burden of accumulating stocks resulting from target prices well above market prices; e.g., under current legislation, an acreage reduction program is employed to assist in reducing producers' supply. The coefficients (1 − ω) and ω measure the weight that producers attach to the two arguments in the supply function. If ω is close to one, agricultural output is essentially demand determined. Alternatively, if ω is close to zero, supply is essentially determined by policy variables and relative prices. In a competitive world, if foreign prices fall below a certain level, then domestic producers will be paid the price given by q fixed at that level. The difference between the target price, q, and the world price, e + P*, is paid to domestic producers by the government. Target prices set above market prices create excess supply and large government expenditures to finance the implied level of subsidy. In order to reduce wide excess supply accumulations from target price incentives, the government affects producers' decisions through the acreage reduction program. The higher the stock accumulation, the stronger will be the action of the government to reduce output supply. Although inventories are exogenous in our framework, it is clear that it is the interaction between inventories and production costs on one side and between inventories and interest rates on the other that plays a major role in determining the amount of acreage reduction intervention in agriculture. Given no trade barriers, an exogenous reduction in the foreign price of agricultural products would result in a shift of internal demand from domestic to foreign goods; a trade balance deficit; and, thus, depreciating pressure on the exchange rate. With no intervention policy in agriculture, the monetary authority would then intervene by contracting the supply of domestic money on the world market. The pressure on the exchange rate would create an interest differential in favor of the foreign countries and a capital outflow which would increase the pressure on the exchange rate. Money supply growth would thus decrease proportionately, as would price inflation. Real depreciation would lead to an increase in demand. The increasing output demand would raise inflation, and thus real money balances would start to fall. Nominal interest rates would then increase, thus restoring the capital account. Ultimately, real depreciation of the exchange rate and falling real money balances would bring the system back to equilibrium. With government intervention in agriculture, we can have a quite different scenario. Suppose that, in the extreme case, the government wants to neutralize any effect of foreign disturbances on domestic prices and demand. That is, suppose the government fixes the target price, q, at the existing level, P_A, and allows domestic producers to sell on the domestic market at the world price, e + P*, where e + P* < P_A, and pays them the difference.
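A minimal way to write this notional supply function, with the functional form and coefficient symbols reconstructed by us rather than taken from the paper, is:

```latex
% Notional agricultural supply as a weighted combination of policy
% variables and excess demand pressure (our reconstruction):
y^{s} = (1-\omega)\,\bigl(\alpha_{1}\, q - \alpha_{2}\, v\bigr) + \omega\, y^{d}
% q: target price; v: acreage-reduction premium; y^d: demand pressure.
% omega close to 1: output essentially demand determined;
% omega close to 0: output determined by policy variables and prices,
% matching the two limiting cases described in the text.
```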

This kind of intervention amounts to a complete “sterilization” of foreign disturbances on the trade balance, the exchange rate, and on monetary growth and domestic inflation. However, this could not last forever since, in the long run, it would lead to unsustainable cumulative budget deficits. If the foreign price decrease is permanent, an increase in domestic taxes would then be necessary in order to finance the deficit, but this would lead to a decrease in disposable income with the consequent deflationary effects. Therefore, we need to define an intervention mechanism that would allow zero-sum deficits in the long run. These equalities highlight the relationships among the farm policy instruments. The two types of intervention can be interpreted as follows. The rate of increase in government-financed price-target programs is guided by a rule that relates it to the excess of the domestic price increase over the increase of the exchange rate. The higher the “support” coefficient, the higher the response in the rate of growth of government expenditure in price-target programs. Hence, in the limit, as g1 tends to infinity, the rate of increase in domestic agricultural prices is kept equal to the rate of depreciation. In that case, the rate of increase of relative prices would be essentially zero. What such an intervention rule shows is that the higher government support is, the lower will be the gains in competitiveness due to movements in the exchange rate and/or in foreign prices. However, although temporary shifts in competitiveness are minimized, the increase in government expenditure for agriculture will have impacts on the government budget deficit and, ultimately, on the money market. On the other hand, the rate of increase in government-financed supply-reduction programs is guided by a rule that relates it to the excess of money supply growth over farm price inflation. The higher the intervention coefficient, g2, the higher the response of government expenditure in supply-reduction programs. In the limit, all excess supply is “absorbed” by government intervention so that, ultimately, any excess supply is eliminated and the price growth rate is kept equal to the money supply growth rate. This amounts to assuming, in the limit, that agricultural supply is kept at the market-clearing level. It is clear, however, that even in this case all the budget effects of the supply-reduction programs will have an impact on the money market, depending on the magnitude of the intervention coefficients. Government spending affects only investment and savings if the increased spending is financed by the sale of government bonds. The sale of bonds does not affect the domestic money supply, since the funds obtained by the government from the bond sale return to the public as the government spends them. Thus, the LM curve is not directly affected by changes in government debt-financed spending. Of course, if the increase in government expenditure is financed by the issuance of money, then both aggregate demand and the demand for money will be affected. Increases in government expenditure which are debt financed directly increase domestic absorption and income but do not affect the monetary base. However, the increase in income heightens the demand for money, driving interest rates up. At the same time, all else constant, the increase has a negative impact on the trade balance, raising the demand for imports.
The rise in interest rates attracts capital from abroad, restoring the capital account and counterbalancing the current account. Whether the balance of payments will be in surplus or in deficit will ultimately depend on the degree of capital mobility, the magnitude of the income multiplier, the willingness to save, and the propensity to import. If capital mobility is low, an increase in government expenditure has a negative effect on the balance of payments. Money supply decreases over time, due to the decumulation of international reserves, inducing income to decrease and partially offsetting the initial expansionary effect. If capital mobility is high, a balance of payments surplus will arise and money supply will increase, generating an additional income expansion over time. Increases in government expenditure financed by money creation directly increase domestic absorption and income and also affect money supply. The income effect generates a trade balance deficit through the increased demand for imports. The excess supply of money will be spent on foreign goods and also on foreign assets, thereby generating a capital account deficit. Complete adjustment will occur over time by means of net purchases of foreign goods. This slow adjustment will be reflected by trade deficits. In essence, the increase in the money supply through the printing of money will induce both the balance of trade and the balance of payments to worsen over the short run.
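The two intervention rules described above can be written compactly. The notation below is our hedged reconstruction (dots denote growth rates), not the paper's own equations:

```latex
% Price-target spending grows with the excess of farm price inflation
% over the rate of depreciation:
\dot{G}_{q} = g_{1}\,(\dot{p}_{A} - \dot{e})
% Supply-reduction spending grows with the excess of money growth over
% farm price inflation:
\dot{G}_{v} = g_{2}\,(\dot{m} - \dot{p}_{A})
% As g_1 \to \infty, \dot{p}_{A} \to \dot{e} (relative prices frozen);
% as g_2 \to \infty, \dot{p}_{A} \to \dot{m} (excess supply absorbed),
% matching the two limiting cases stated in the text.
```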

Our findings show that PFAS exposure dysregulates adipogenesis

Barrier disruptions were attributed to the toxic effects of PFOS on f-actin, microtubule, and gap junction organization. Specifically, dose-dependent PFOS exposure disassembled the tight and gap junctions responsible for the barrier function of the blood-testis barrier, increased blood-testis barrier permeability, and disrupted spermatogenesis. PFAS chemicals have also been shown to open tight junctions in brain endothelial cells, a main component of the blood-brain barrier, and to increase reactive oxygen species that raise endothelial permeability. PFOS was found to induce the remodeling of actin filaments through ROS production in human endothelial cells and increased gaps/breaks between endothelial cells in monolayers, which resulted in increased permeability. The authors observed the formation of central actin stress fibers and of lamellipodia and filopodia structures at the cell periphery. In alignment with the previous findings summarized, we observed a shift in f-actin expression toward the center of N/TERT-1 cells, which could indicate formation of central stress fibers. The representative images shown display increased fiber formation near the center of the cells. We also observed the formation of filopodia structures at cell edges, which Qian et al. attributed to PFOS-induced ROS production.

Although we did not measure ROS production, it is likely that these cell models are being affected in the same way; PFOS may increase ROS production by the host cell and in turn dysregulate the cytoskeleton. Importantly, previous literature showed that the disruptions caused to the cytoskeleton by PFAS increase the permeability of crucial barriers such as the blood-brain barrier and the blood-testis barrier. Skin keratinocytes are responsible for formation of the skin barrier, but instead of working in a monolayer, keratinocytes differentiate into four epidermal layers, with the top corneal layer acting as the barrier. N/TERT-1 cells are the skin keratinocytes used for the epidermal monolayer cultures in this work. We used these cells as a general model to study PFAS effects, but the implications of PFAS toxicity for epidermal skin cells are important in understanding PFAS absorption through the skin and the dangers it might pose to human health. The study of PFAS effects on skin tissue and cells is lacking, with only a few investigations completed thus far using human tissue-engineered skin and animal models. PFAS was previously deemed not well absorbed through the skin by the US Environmental Protection Agency in 2002, but since then it has been demonstrated that, after topical exposure, PFOA permeates human tissue-engineered skin, human and rat skin explants, and mouse skin. In the mouse studies completed by Franko et al., dose-responsive increases in serum PFOA were demonstrated after dermal exposure, leading to the conclusion that PFOA is dermally absorbed. Han et al. showed that after 6 days of dermal exposure to PFOA, there was decreased skin thickness and portions of degeneration in the epidermal layers and necrotic fibroblasts in human tissue-engineered skin, but they did not observe decreases in cellular viability.

They also demonstrated that after 2 weeks of dermal exposure in rats to another type of PFAS, short-chain rather than long-chain perfluoroalkyl carboxylic acids, there were adverse effects on kidney, liver, testes, and skin that resulted in death via ulcerative dermatitis at application sites with high doses. Our findings of PFAS toxicity and cytoskeletal dysregulation in N/TERT-1 cells support the need for more research on dermal absorption of PFAS. Previous investigations have concluded that tubulin acetylation plays a role in cell migration and cell development, and that alpha-tubulin acetylation helps to protect microtubules against mechanical stress and enhances microtubule flexibility. Acetylation of alpha-tubulin is a common post-translational modification that occurs on stable microtubules. A disrupted acetylation process can impair cell polarization, cell division, adhesion, and motility, and has been linked to neurodegenerative disorders and tumor metastasis. To our knowledge, no studies have investigated tubulin acetylation specifically with regard to PFAS toxicity. The work presented here demonstrates dysregulation of acetylated tubulin, shown by differences in localization within N/TERT-1 cells, and further implicates PFAS as a cytoskeletal disrupter. These underlying roles of tubulin acetylation and f-actin, and the consequences of their dysregulation, may be involved in the developmental changes of children and metabolic/reproductive toxicity changes in adults. Many studies have focused on YAP localization and Hippo signaling regarding organ size and development.

YAP and its upstream regulators have been investigated in liver, pancreas, intestine, kidney, lung, and bone, but regulation through the Hippo pathway is different in each system. Disruption of YAP localization can cause several developmental problems in many organs. Notably, deletion of YAP in the lung epithelium of mice and humans was shown to cause defective lung development; correct YAP localization in the cytoplasm is required for proximal airway maturation, while nuclear YAP is required for progenitor specification. In the mouse kidney, deletion of YAP led to hypoplastic kidneys, fewer glomeruli, and defective formation of distal tubules and the loop of Henle; YAP is essential for nephron development and establishing kidney morphology and function. YAP knockout has also been found to decrease bone formation and increase bone marrow fat. The pancreas and liver have been a focus of Hippo signaling pathway and organ development research. Rather than YAP knockout and upregulation, the Hippo pathway was altered via deletion/knockout of Mst1/2 in the pancreas and deletion of Mst1/2 and Lats1/2 in the liver. Changes in the levels of the Hippo pathway regulators MST and LATS, and their effect on organ size, point to other mechanisms of organ development dysregulation in which Hippo is inactivated and nuclear YAP is upregulated. These too could drive changes in organ mass and decreased body weight during infancy; in mice they have been shown to decrease pancreatic mass, resulting in body weight differences. Although the current study did not investigate YAP knockout alongside PFAS effects, these summarized studies suggest that perturbation of YAP localization and Hippo signaling pathway activity by PFAS can lead to organ developmental changes and possibly decreased body weight due to changes in organ size/mass tied to developmental issues. To investigate the possibility that PFAS acts directly on the Hippo pathway effector YAP, we studied localization of YAP in the cytosol and nuclei under PFAS exposure in the N/TERT-1 and ASC52telo cell lines. Through this work, data show that in N/TERT-1 cells at the 72 h time point, expression of nuclear YAP increased for PFOS doses of 30 and 40 µM and PFOA doses of 100 and 125 µM. These data indicate that these PFAS chemicals regulate the Hippo pathway in N/TERT-1 cells through YAP/TAZ. Because of the crosstalk between the Hippo pathway and Rho/ROCK cytoskeletal perturbation, it is plausible that PFAS regulates Hippo through mechanosensing and extracellular matrix/focal adhesion changes that propagate through the Hippo mediator YAP/TAZ. At the 24 h time point in N/TERT-1 cells and at 48 h in ASC52telo cells, there were no differences in YAP localization. General mesenchymal stem cell polarization toward adipogenesis and osteogenesis is dependent on Hippo signaling and particularly related to YAP localization. Reduced expression of upstream regulators of YAP such as Lats1/2 inhibits Hippo signaling and promotes in vitro adipogenic and osteogenic differentiation, proliferation, and migration of bone marrow-derived mesenchymal stem cells. With increased nuclear localization of YAP, there is increased osteogenic differentiation. Conversely, with decreased nuclear localization of YAP, there is increased adipogenic differentiation.
Though we did not directly quantify YAP in mature adipocytes under PFAS exposure, to aid in the understanding of adiposity changes we explored the impact of PFAS on mesenchymal stem cell differentiation toward adipogenesis.
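YAP localization readouts of the kind reported above are commonly summarized as a nuclear-to-cytoplasmic intensity ratio. The sketch below illustrates that computation on a synthetic image; the masks, values, and helper name are ours, not the study's analysis pipeline.

```python
import numpy as np

def nuc_cyto_ratio(yap_img, nuc_mask, cell_mask):
    """Nuclear-to-cytoplasmic YAP intensity ratio for one cell.
    yap_img: 2D fluorescence image; nuc_mask/cell_mask: boolean masks
    (assumed to come from an upstream nuclear-dye segmentation)."""
    cyto_mask = cell_mask & ~nuc_mask
    nuc = yap_img[nuc_mask].mean()
    cyto = yap_img[cyto_mask].mean()
    return nuc / cyto   # > 1 suggests nuclear enrichment of YAP

# Toy example with a synthetic 64x64 image
rng = np.random.default_rng(0)
img = rng.uniform(80, 120, (64, 64))
nuc = np.zeros((64, 64), bool); nuc[24:40, 24:40] = True
cell = np.zeros((64, 64), bool); cell[8:56, 8:56] = True
img[nuc] *= 1.5                      # simulate nuclear enrichment
print(f"N/C ratio: {nuc_cyto_ratio(img, nuc, cell):.2f}")
```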

We found that some doses of PFOA and PFOS upregulated adipogenesis as measured by lipid droplet presence. qPCR results further support the finding that PFAS disrupts fat homeostasis, but instead of showing upregulation of fat markers, the data show downregulation of mRNA expression of the adipokine leptin and no change in adiponectin. PFOA and PFOS upregulated adipogenesis as indicated by lipid droplet analysis, but the mechanism of action still requires more investigation, especially to understand the non-supporting mRNA expression results. One avenue that may elucidate changes in the adipogenic process is specifically investigating the cytoskeletal changes that may occur when PFAS is present during adipogenesis. Here, we investigated the cytoskeleton in pre-adipogenic ASC52telo cells under PFAS, but not changes during or after exposure through 19 days of differentiation. Cytoskeletal components are involved in the maturation and homeostasis of fat cells. Our findings that PFAS chemicals disrupt cytoskeletal stability in skin and pre-adipose cells lend support to previous literature and suggest that the upregulation of adipogenesis could be due to the cytoskeletal perturbations. Palanivel et al. found, in rat cardiomyocytes, that an increase in adiponectin induces cytoskeletal remodeling and increases membrane microvillar-like protrusions and actin polymerization. It is clear that PFAS affects pre-adipose and adipose cells, but it is unclear whether PFAS upregulates adipokines in mature fat cells, which then disrupt the cytoskeleton through Rho/ROCK signaling, or acts directly on the cytoskeleton, or not at all. In their undifferentiated state, ASC52telo cells did exhibit cytoskeletal changes, including an increase of protrusions at the cell border, shifts in cytoskeletal component intensities and locations, and dysregulation of fibers. This dysregulation may be contributing to the PFAS-associated increase in adolescent obesity. Further investigation of lipid profiles and hormonal changes due to increased adipogenesis should be explored to help in understanding prior observations relating increased childhood adiposity to higher PFAS serum concentrations. The goal of this work was to better understand the mechanistic actions of PFAS on human cells, specifically their effect in altering the cell cytoskeleton and the Hippo signaling pathway. We investigated the PFAS chemicals PFOA and PFOS for their effects on an adipose-derived mesenchymal stem cell line and a keratinocyte skin cell line. In conclusion, these data support previous findings that PFAS chemicals affect the cytoskeletal integrity of f-actin and microtubules, demonstrated here in the human skin keratinocyte cell line N/TERT-1 and the adipose-derived mesenchymal stem cell line ASC52telo. We also investigated whether PFOA and PFOS directly affect the Hippo signaling pathway modulator YAP. Our results support a direct modification of YAP localization by PFOA and PFOS. Finally, to gain insight into the negative effects of PFAS chemicals on infant and adolescent body weight, we examined the adipogenic process under PFAS dosing. PFAS indeed upregulates the adipogenic process, as quantified by the presence of lipid droplets through ORO assays and by qPCR. Key adipokines were dysregulated in dose-dependent exposure to PFOA and PFOS.
Although more investigation is required to understand how PFAS specifically acts to upregulate adipogenesis, we speculate that the cytoskeletal changes involved in adipogenesis are being dysregulated, contributing to dysregulated fat maturation. These findings reinforce the need for more in-depth detection and more stringent limits on public exposure to PFAS, particularly for pregnant mothers and children.

In the 1960s, the state of Punjab led in the adoption of new high-yielding varieties of wheat and rice. Production of these new varieties required innovations in the use of fertilizer and water, which occurred in a complementary manner to the innovation in seed choices. Mechanization of several aspects of farming also became a supporting innovation. Agricultural extension services based in Punjab's public universities guided farmers in their transition to the new modes of production. Furthermore, an infrastructure of local roads and market towns had been developed by the state government: these, along with central government procurement guarantees, gave farmers access and security in earning income from their produce. In the private sector, new providers of seeds and fertilizer, as well as farm equipment and equipment maintenance services, also arose. All of these conditions together created what has been known as the Green Revolution economy. With the Green Revolution, Punjab quickly became the state with the highest per capita income. This ranking persisted into the 1990s, but underlying conditions became less favorable well before then. Gains in agricultural yields and productivity slowed, due to diminishing returns. While India began to grow faster after the trade and industrial policy liberalization of 1991 and subsequent creeping reforms in other sectors, agriculture remained locked into the old policies, and Punjab mostly into the old equilibrium.

Cell densities can be optimized for specific cells and research questions

Anatomically, skin is the largest organ in the human body; it is made up of three main layers and possesses a complex system of stromal, vascular, glandular, and immune/nervous system components in addition to epidermal cells. The epidermis itself is composed of four layers of cells that are continuously renewed to maintain barrier function and other structures of native skin. Skin physiology is important in immune function, wound healing, cancer biology, and other fields, leading researchers to use a wide range of models, from in vitro monocultures to in vivo animal models. Animal models offer the ability to study the full complexity of skin physiology; however, commonly used animal models such as mice have significant physiological differences compared to humans. These limitations, and the increased cost of animal models, have led many researchers to focus on developing in vitro models that more closely reflect the physiology of human skin. Of these, one of the simpler model types is the human epidermal equivalent, which is composed of only epidermal keratinocytes on an acellular dermal matrix but captures the epidermal differentiation and stratification seen in vivo. Building on this, models containing dermal and epidermal components are often referred to as human skin equivalents, full-thickness skin models, or organotypic skin constructs. Briefly, these models are generated by encapsulating dermal cells within gel matrices and seeding epidermal cells on top.

Epidermal differentiation and stratification can then be achieved via specialized media and air exposure. Skin equivalents have most often been generated through self-assembly techniques using dermal gels made of collagen type I, but similar models have incorporated other matrix components such as fibrin, fibroblast-derived matrices, cadaveric de-epidermized membranes, commercially available gels, and others. Currently, there are skin equivalents commercially available. However, these are primarily developed for therapeutic purposes and cannot be readily customized to specific research questions. HSEs have been applied in studies of wound healing, grafting, toxicology, and skin disease/development. Although 3D culture more comprehensively models functions of human tissue compared to 2D cultures, the inclusion of diverse cell types that more accurately reflect the in vivo population enables studies of cell-cell coordination in complex tissues. Most HSEs only include dermal fibroblasts and epidermal keratinocytes, although the in vivo skin environment includes many other cell types. Recent studies have started including more cell populations; these include endothelial cells in vasculature, adipocytes in subcutaneous tissue, nerve components, stem cells, immune cells, and other disease/cancer-specific models. Particularly important among these is vasculature; while some HSEs include vascular cells, overall they still lack comprehensive capillary elements with connectivity across the entire dermis, extended in vitro stability, and appropriate vessel density.

Further, HSE models are typically assessed through post-culture histological sectioning, which limits analysis of the three-dimensional structure of HSEs. Three-dimensional analysis allows for volumetric assessment of vascular density as well as regional variation of epidermal thickness and differentiation. Although HSEs are one of the most common organotypic models, there are many technical challenges in generating these constructs, including identification of appropriate extracellular matrix and cell densities, media recipes, proper air-liquid interface procedures, and post-culture analysis. Further, while HEE and HSE models have published protocols, a detailed protocol incorporating dermal vasculature and volumetric imaging rather than histological analysis does not exist. This work presents an accessible protocol for the culture of vascularized human skin equivalents from mainly commercial cell lines. This protocol is written to be readily customizable, allowing for straightforward adaptation to different cell types and research needs. In the interest of accessibility, availability, and cost, the use of simple products and generation techniques was prioritized over the use of commercially available products. Further, straightforward volumetric imaging and quantification methods are described that allow for assessment of the three-dimensional structure of the VHSE.

Translating this procedure into a robust and accessible protocol enables non-specialist researchers to apply these important models in personalized medicine, vascularized tissue engineering, graft development, and drug evaluation. Presented here is a protocol for generation of in vitro vascularized human skin equivalents using telomerase reverse transcriptase-immortalized keratinocytes, adult human dermal fibroblasts, and human microvascular endothelial cells. Additionally, the customizable nature of this protocol is highlighted by also demonstrating VHSE generation and stability when using commonly available lung fibroblasts instead of hDF. Generation of the VHSE is completed in the core protocol steps, while the remaining steps are optional end-point processing and imaging techniques optimized for these VHSEs. It is important to note that the VHSEs can be processed according to specific research questions, and the optional steps are not required to generate the construct. Volumetric imaging, analysis, and 3D renderings were completed to demonstrate a volumetric analysis method. These volumetric construct preparation and imaging protocols preserve VHSE structure at both the microscopic and macroscopic levels, allowing for comprehensive 3D analysis. Characterization of the epidermis and dermis shows appropriate immunofluorescence markers for human skin in the VHSE constructs. Cytokeratin 10 is an early-differentiation keratinocyte marker which usually marks all suprabasal layers in skin equivalents. Involucrin and filaggrin are late-differentiation markers in keratinocytes and mark the uppermost suprabasal layers in skin equivalents. A far-red fluorescent nuclear dye was used to mark nuclei in both the epidermis and dermis, with Col IV marking the vasculature of the dermis. Epidermal basement membrane components are not always properly expressed in HSE cultures, and Col IV staining of the BM is not consistently observed using this protocol. Research focused on BM components and structure would benefit from additional media, cell, and imaging optimization. Though confocal imaging through the bulk of the VHSE cultures often yields high-resolution images that are sufficient for computational analysis of the dermis and epidermis, the clearing method described allows for deeper tissue imaging. Clearing improves confocal laser penetration depth, and effective imaging in VHSEs can be achieved to over 1 mm for cleared samples. The described clearing technique sufficiently matches refractive index throughout the VHSE sample tissue.

Clearing the VHSE allowed for straightforward imaging through the entire construct without manipulation. Volumetric images allow for generation of 3D renderings to map vasculature throughout each construct. Briefly, confocal image sets were taken in dermal-to-epidermal orientation of several sub-volumes of VHSEs to detect the Collagen IV stain and nuclei. Image stacks are loaded into computational software and a custom algorithm is used for 3D rendering and quantification as described previously. This algorithm automatically segments the vascular component based on the Col IV stain. The volumetric segmentation is passed to a skeletonization algorithm based on fast marching. Skeletonization finds the definitive center of each Col IV-marked vessel, and the resulting data can be used to calculate vessel diameter as well as vascular fraction. Wide-field fluorescence microscopy is an accessible option if laser scanning microscopy is not available; the vascular network and epidermis can be imaged with wide-field fluorescence microscopy. Three-dimensional quantification is possible using wide-field imaging of VHSEs rather than laser scanning microscopy, although it may require more filtering and deconvolution of images due to out-of-plane light. This protocol has demonstrated a simple and repeatable method for the generation of VHSEs and their three-dimensional analysis. Importantly, this method relies on few specialized techniques or pieces of equipment, making it accessible to a range of labs. Further, cell types can be replaced with limited changes in the protocol, allowing researchers to adapt this protocol to their specific needs. Proper collagen gelation is a challenging step in establishing skin culture. Especially when using crude preparations without purification, trace contaminants could influence the gelation process. To help ensure consistency, groups of experiments should be performed with the same collagen stock that will be used for VHSE generation. Further, gelation should ideally occur at a pH of 7-7.4, and trace contaminants may shift the pH. Before using any collagen stock, a practice acellular gel should be made at the desired concentration and the pH should be measured prior to gelation. Completing this collagen quality check before beginning dermal component seeding will identify problems with gelation and collagen homogeneity prior to setting up a complete experiment. Instead of seeding acellular collagen directly onto a culture insert, seed some collagen onto a pH strip that covers the whole pH scale and verify a pH of 7-7.4. Gelation can be evaluated by applying a droplet of the collagen gel solution onto a coverslip or a tissue culture plastic well plate. After the gelation time, the collagen should be solid, i.e., it should not flow when the plate is tilted. Under phase contrast microscopy, the collagen should look homogeneous and clear. Occasional bubbles from collagen seeding are normal, but large amorphous blobs of opaque collagen within the clear gel indicate a problem, likely due to insufficient mixing, wrong pH, and/or failure to keep the collagen chilled during mixing. The cell seeding amounts and media may be adjusted. In the protocol above, the encapsulated cell amounts have been optimized for a 12-well insert at 7.5 × 10^4 fibroblasts and 7.5 × 10^5 endothelial cells per mL of collagen, with 1.7 × 10^5 keratinocytes seeded on top of the dermal construct.
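As a convenience, the densities just quoted can be turned into per-experiment cell counts. The small helper below is ours (names and defaults are illustrative), not part of the published protocol.

```python
def seeding_amounts(collagen_ml_per_insert, n_inserts,
                    fib_per_ml=7.5e4, ec_per_ml=7.5e5, kc_per_insert=1.7e5):
    """Total cells needed for an experiment, using the 12-well-insert
    densities quoted above (fibroblasts and endothelial cells scale with
    collagen volume; keratinocytes are seeded per insert surface)."""
    return {
        "fibroblasts":   fib_per_ml * collagen_ml_per_insert * n_inserts,
        "endothelial":   ec_per_ml * collagen_ml_per_insert * n_inserts,
        "keratinocytes": kc_per_insert * n_inserts,
    }

# e.g., 12 inserts with 1 mL of cellularized collagen each
print(seeding_amounts(collagen_ml_per_insert=1.0, n_inserts=12))
```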
Cell densities have been optimized for this VHSE protocol based on preliminary studies and previous research investigating 3D vascular network generation at various collagen concentrations and HSE generation. In similar systems, published endothelial cell densities are 1.0 × 10^6 cells/mL of collagen; fibroblast concentrations often range from 0.4 × 10^5 to 1 × 10^5 cells/mL of collagen; and keratinocyte concentrations range from 0.5 × 10^5 to 1 × 10^5 cells/cm². Three-dimensional cultures with contractile cells, such as fibroblasts, can contract, leading to viability reduction and culture loss. Preliminary experiments should be completed to test contraction of the dermal compartment and to test epidermal surface coverage. Additionally, the number of days in submersion and the rate of tapering the serum content can also be customized if excessive dermal contraction is occurring or a different rate of keratinocyte coverage is required. For example, if contraction is noticed during the period of dermal submersion or while keratinocytes are establishing a surface monolayer, moving more quickly through the serum tapering process and raising VHSEs to the air-liquid interface (ALI) can help prevent additional contraction. Similarly, if keratinocyte coverage is not ideal, changing the number of days that the VHSE is submerged without serum may help increase the epidermal monolayer coverage and mitigate contraction, since serum is left out. Changes in cell densities or other suggestions above must be optimized for the specific cultures and research goals. To establish proper stratification of the epidermis during the air-liquid interface period, it is critical to regularly check and maintain fluid levels in each well so that ALI and appropriate hydration of each construct are kept throughout the culture length. Media levels should be checked and tracked daily until consistent ALI levels are established. The epidermal layer should look hydrated, not dry, but there should not be pools of media on the construct. During ALI, the construct will develop an opaque white/yellow color, which is normal. The epidermal layer will likely develop unevenly. Commonly, the VHSEs are tilted due to collagen seeding or dermal contraction. It is also normal to observe a higher epidermal portion in the middle of smaller constructs and a ridge formation around the perimeter of the VHSE in the 12-well size. Contraction of the constructs may change these topographical formations, or they may not be observed at all. Staining and imaging of VHSEs introduce mechanical manipulation to the VHSEs. It is very important to plan for and limit manipulation of each culture. When manipulation is necessary, use gentle movements when removing VHSEs from the insert membranes, when adding staining or wash solutions to the construct surface, and when removing and replacing VHSEs in their storage/imaging wells during imaging preparation. Specifically, the apical layers of the epidermal component may be fragile and are at risk of sloughing off the basal epidermal layers. Apical layers of the epidermis are fragile and go through desquamation even in native tissue, but for accurate analysis of epidermal structure it is important to minimize damage or loss. If epidermal layers lift off the construct, they can be imaged separately.
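For readers who want to reproduce the volumetric vessel metrics described earlier without the custom fast-marching pipeline, the sketch below shows one simplified approach using scikit-image (≥ 0.20, where skeletonize accepts 3D arrays). The Otsu threshold, voxel spacing, and synthetic test volume are our assumptions, not the published algorithm.

```python
# pip install scikit-image scipy numpy
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize
from scipy import ndimage

def quantify_vasculature(col4_stack, voxel_um=(2.0, 0.5, 0.5)):
    """Simplified volumetric vessel analysis: segment the Col IV channel,
    skeletonize, and estimate vascular fraction and mean vessel diameter.
    The published pipeline uses fast-marching skeletonization; this is a
    rough stand-in for illustration only."""
    mask = col4_stack > threshold_otsu(col4_stack)    # vessel segmentation
    vascular_fraction = mask.sum() / mask.size
    skel = skeletonize(mask)                          # vessel centerlines
    # Distance from each centerline voxel to the vessel wall approximates
    # the local radius (most reliable for near-isotropic voxels).
    dist = ndimage.distance_transform_edt(mask, sampling=voxel_um)
    radii = dist[skel]
    return vascular_fraction, 2 * radii.mean()        # fraction, diameter (µm)

# Synthetic demo volume: one straight "vessel" along the stack
vol = np.zeros((20, 64, 64))
vol[:, 28:36, 28:36] = 1.0
noisy = vol + np.random.default_rng(0).normal(0, 0.05, vol.shape)
vf, d = quantify_vasculature(noisy)
print(f"vascular fraction {vf:.3f}, mean diameter ~{d:.1f} µm")
```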

The short wavelength of soft X-rays enabled high-resolution 3D imaging of the nuclear architecture

The 800 m transient data provide slight improvements in precipitation estimates for the state, particularly at high elevations dominated by snowpack. For future projections, a broader suite of GCMs and scenarios could be developed while awaiting the next iteration of projections from the IPCC. To improve the snow-driven module in BCM, the snow accumulation and snowmelt calculations will be calibrated to match measured snow-water equivalent at over 300 snow sensors and snow courses, and maps of persistent glaciers will also be included. The use of the 800 m PRISM climate data also contributes to a marked improvement in snowpack in some locations. SSURGO soils maps are far more detailed and accurate than STATSGO datasets and reflect topographic controls on soil properties. Therefore SSURGO soils maps are being developed statewide for model application. Geologic maps and local calibrations are being refined in those locations where geologic types are not well represented by the calibration basins, such as the volcanics of the Modoc Plateau and the upper Klamath River basin.

Herpesviruses target the cell nucleus because of their need for the cellular DNA reproduction machinery.

The nuclear entry of viral DNA is followed by the nuclear accumulation of viral proteins and replication of viral DNA, leading to the formation of viral replication compartments (VRCs). Electron microscopy and confocal microscopy studies have illustrated how profoundly these viruses transform the nuclear structure so as to optimise their multiplication. During cellular entry the nucleocapsid of herpes simplex virus type 1 undergoes partial disassembly at the nuclear pore complex, followed by viral genome delivery via the pore. As the infection proceeds, the VRCs emerge as distinct nuclear foci, which subsequently undergo fusion and expansion into a large, globular VRC. The infected cell expresses and multiplies only a limited number of input HSV-1 genomes. Each small VRC, initially located at the nuclear periphery, originates from a single viral genome. These small VRCs undergo actin-mediated directed movement, resulting in their fusion and eventual expansion throughout the nucleus. The expansion of the VRCs is accompanied by an increase in the nuclear volume and relocation of the host chromatin to the nuclear periphery. It is in these VRCs that viral nucleocapsid assembly occurs. Soon after assembly, each nucleocapsid penetrates the host chromatin layer and nuclear lamina to the inner nuclear envelope, the site of its egress. Despite their many achievements in biological applications, light- and electron-based imaging techniques suffer from fundamental limitations in 3D imaging of the subcellular architecture of the entire cell. Transmission electron microscopy and TEM tomography are limited by fixation-induced distortions of cellular features, damage caused by the electron beam, and the limited range of angular sampling. Fluorescence microscopy can determine the positions of specific molecules, but only of those selected based on existing information, and is constrained by the sizes of the labels and the density of the labelling. Furthermore, immunofluorescence imaging is prone to artefacts from fixation or permeabilization. We employed a strategy integrating 3D soft X-ray tomography imaging with cryogenic fluorescence microscopy, confocal and electron microscopy, and advanced data analysis in order to study in more detail the HSV-1-induced changes in nuclear architecture, the molecular organization of the host chromatin and, in particular, the formation of channels across the chromatin layer. Moreover, since X-rays can penetrate 15 μm-thick biological material, SXT allows for quantitative assessment of the entire cell. For SXT image acquisition, the cells were placed in thin-walled capillaries of diameter up to 15 μm. B cells were used because of their small size and HSV-1 susceptibility. In fact, SXT permits cell imaging in a near-native state, i.e., intact, unsliced, unstained, and fully hydrated. X-ray absorption depends on the concentration of organic material in each voxel. Therefore, SXT not only detects multiple cellular structures but can also provide quantitative assessment of their composition and structure. Contrast in soft X-ray microscopy is generated by differential attenuation of X-rays by the biomolecules in the specimen and is not muted by weakly absorbing water. Attenuation of X-rays by the specimen follows the Beer-Lambert law and is therefore both a linear and a quantitative measure of the thickness and the chemical species present at each point in the cell.
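In standard form, the attenuation relation just invoked is:

```latex
% Beer-Lambert attenuation underlying SXT contrast:
I = I_0 \, e^{-\mu x}
\qquad\Longrightarrow\qquad
\mu = -\frac{1}{x}\,\ln\frac{I}{I_0}
% The linear absorption coefficient (LAC) mu recovered per voxel scales
% linearly with the concentration of organic material along the path
% length x, which is what makes LAC comparisons between chromatin and
% nucleoplasm quantitative.
```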
To gain insight into the spatial localization of the host chromatin and of the viral and cellular proteins and nucleocapsids, we complemented 3D SXT with confocal and electron microscopy imaging techniques. To follow the progress of infection, we analysed the expression of viral immediate-early, early and late genes by cytometry and real-time RT-PCR, and the virus yields by plaque assay. We detected the presence of viral proteins, the expression of viral lytic genes of all three phases of the replication cycle, and substantial production of progeny virus from the B cells. The highest yield of virus was obtained at 24 h p.i. This suggests not only that HSV-1 could enter and infect B cells but also that its replication cycle was completed. Based on these findings, we decided to use a multiplicity of infection of 5 and the time point of 24 h p.i. in our subsequent experiments. To further analyse the nuclear architecture, we used SXT on hydrated cells in their near-native state. The image contrast of SXT is based on the absorption of X-rays, mainly by carbon and nitrogen. This allows measurement of the linear absorption coefficients of cellular structures, reflecting their concentration of cellular biomolecules. Owing to its high density of biomolecules, the heterochromatin region of the nucleus has a higher LAC than the less densely packed nucleoplasm, as is evident from computer-generated SXT ortho-slices through nuclei. The use of an HSV-1 strain expressing EYFP-ICP4 allowed the detection of infected cells with enlarged VRCs by the CFM used in the SXT studies. When SXT ortho-slices were aligned with CFM images of the same cell, EYFP-ICP4 was found to be localized in distinct nuclear foci or in a few enlarged foci in the heterochromatin-depleted nuclear regions. The distinct LAC values of SXT were used to automatically segment the nuclear structures for 3D visualization of the spatial information in HSV-1 infected cells. Surface-rendered 3D tomographic reconstructions of the nuclear periphery of the infected cells revealed low-density gaps in the compact layer of marginalized host chromatin. Statistical analysis showed that 24.7 ± 1.2% of the surface area of the chromatin had a low LAC value, suggesting the presence of low-density regions in the compact chromatin. In the non-infected cells the relative area of the region with a low LAC value was 7.5 ± 1.2%. In order to further study the low-LAC regions, we analysed their ‘skeletonized’ versions. The skeletonized structure revealed that the low-LAC regions formed channels in the 0.5 μm-thick layer of host chromatin close to the nuclear envelope, some of which penetrated the nuclear periphery across the layer of peripheral chromatin in both infected and non-infected cells. However, in the infected cells the total number of low-LAC breaks, 900 ± 300, was significantly higher than in the non-infected cells. Note that the average volume of the nucleus was significantly increased in the infected cells compared to the non-infected cells. Additionally, analysis of the density of these channels across the layer of marginalized heterochromatin as a function of distance from the nuclear envelope revealed that, in a 0.1 μm band close to the nuclear envelope, the area density of these channels was significantly higher in the infected than in the non-infected cells.
Furthermore, analysis of their local thickness as a function of distance from the nuclear envelope indicated that their diameter increased towards the nuclear centre and that they finally merged with the nucleoplasm of both the infected and non-infected cells. The smallest diameter of these channels at the nuclear periphery, at 0.1 μm from the nuclear envelope in both cases, was at least 200 nm. Their size was thus sufficient to allow the passage of at least one viral nucleocapsid at a time.

It is known that heterochromatin-free gaps are associated with NPCs and are involved in nucleo-cytoplasmic transport in non-infected cells. This prompted us to study whether the virus-induced channels are connected with NPCs. The number and distribution of low-DAPI gaps across the heterochromatin were compared with those of NPCs. Immunolabelling of cells with the Nup153 antibody revealed that NPCs frequently formed clusters in both infected and non-infected cells. Some of these clusters were significantly enlarged in the infected cells. The numbers of NPCs or NPC clusters in the infected cells were reduced compared to those in non-infected cells. Accordingly, in the infected cells the total number of low-LAC breaks was higher than the number of NPCs, whereas in the non-infected cells the number of breaks in the nuclear periphery was close to that of NPCs. Moreover, our studies showed that, in the infected cells, the low-DAPI regions were almost always located independently of NPCs, whereas in the non-infected cells the low-DAPI regions were located adjacent to the NPCs. In summary, these results demonstrated that HSV-1 infection increases the number of virus-capsid-sized, NPC-independent gaps, which may facilitate viral transport across the compacted chromatin layer.

Confocal microscopy was performed to detect the presence of nucleocapsid proteins in the low-density chromatin regions of the nuclear periphery. The viral capsid protein VP5 was frequently seen in low-DAPI regions in close proximity to the nuclear envelope. Next, we used transmission electron microscopy to examine the distribution of nucleocapsids with respect to the peripheral marginalized chromatin. The images showed that nucleocapsids were very often localized in the chromatin region next to the nuclear envelope. Consistent with the confocal data, these nucleocapsids were located in the low-density chromatin breaks and in narrow, virus-nucleocapsid-sized channels. The NPCs are mostly invisible in the infected cells, presumably because of virus-induced structural changes of the nuclear envelope. The distribution of chromatin and NPCs in the non-infected cells is shown in Supplementary Fig. S5. These observations are in line with recent studies showing that herpesvirus infection increases the porosity of the nucleus, leading to enhanced nucleocapsid motility. Altogether, our findings showed that HSV-1 infection induces breakages penetrating the cellular chromatin barrier to permit nucleocapsid access to the nuclear envelope.

Formation of a VRC as a result of HSV-1 lytic infection is followed by structural changes in the host chromatin. Profound reorganization of the nuclear chromatin by HSV-1 is known to include chromatin marginalization to the nuclear periphery. A compact layer of host heterochromatin constitutes an accessibility barrier for the translocation of viral nucleocapsids toward the inner nuclear envelope, across which they exit the nucleus.
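One plausible way to compute such a thickness-versus-depth profile is via Euclidean distance transforms (a sketch under assumed inputs; the function and variable names are illustrative, not the authors' code):

```python
import numpy as np
from scipy import ndimage

def channel_diameter_vs_depth(channels, envelope, voxel_nm, band_nm=100.0):
    """Estimate channel diameter as a function of distance from the nuclear envelope.

    channels -- boolean 3D mask of the low-LAC channels
    envelope -- boolean 3D mask of the nuclear envelope surface
    voxel_nm -- voxel edge length in nanometres
    """
    # Local-thickness proxy: twice the distance from a channel voxel to the
    # nearest non-channel voxel approximates the local channel diameter.
    diameter = 2.0 * ndimage.distance_transform_edt(channels) * voxel_nm

    # Distance of every voxel from the envelope (zero on the envelope itself).
    depth = ndimage.distance_transform_edt(~envelope) * voxel_nm

    # Average the diameter in concentric bands, e.g. 0-100 nm, 100-200 nm, ...
    d, z = diameter[channels], depth[channels]
    edges = np.arange(0.0, z.max() + band_nm, band_nm)
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_band = (z >= lo) & (z < hi)
        means.append(d[in_band].mean() if in_band.any() else np.nan)
    return means
```

On this kind of estimate, the reported minimum diameter of roughly 200 nm at 0.1 μm depth comfortably exceeds the ~125 nm width of an HSV-1 nucleocapsid discussed below.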
A previous indication of the ability of HSV-1 infection to disrupt marginalized chromatin, so as to allow access through it, came from immunofluorescence studies showing a fragmented distribution of histone H1, suggesting the presence of routes through the chromatin. By combining SXT, CFM and confocal imaging techniques with advanced image analysis, we gained new insight into the 3D structure and distribution of such breakages. We observed that the low-density regions in the host chromatin formed gaps through it. In the absence of infection, these openings were typically located near NPCs, in agreement with earlier results on the density distribution of the host chromatin. In the infected cells, the number of these low-density breakages was significantly increased, and they were most often located independently of NPCs. This independence is in agreement with the NPC-independent nuclear egress of the virus. The SXT analysis revealed channels wide enough to allow the passage of a 125-nm-wide HSV-1 nucleocapsid. However, the narrowest channels, with a diameter of 200 nm, allowed the passage of only one or a few nucleocapsids at a time. This is consistent with the presence of nucleocapsids in the discontinuous-density regions of the peripheral chromatin close to the nuclear envelope, and agrees well with the egress mechanism demonstrated previously. The exact nature of the molecular mechanisms involved in the intra-nuclear motility of the capsids remains controversial. It has been suggested that nucleocapsids are transported in the nucleoplasm via an active process mediated by intra-nuclear actin and myosin motor proteins. However, a very recent study showed that their motility is based on passive diffusion. In summary, we were able to create 3D reconstructions of intact, fully hydrated HSV-1-infected cell nuclei by using SXT.


Natalia Molina traces the history of public health in Los Angeles, comparing the experiences of Chinese residents with those of Japanese and Mexican residents of the city between 1879 and 1939. Taken together, these works remind us that Chinatown was only one of the many districts that composed Los Angeles's core in the first half of the twentieth century, and that the actions of Chinese people must be understood in relationship to the other people and ethnic groups that lived in this section of the city. Perhaps the most interesting work by an historian on Los Angeles Chinatown is that of Isabella Quintana, who highlights interactions between Mexican Americans and Chinese Americans in the Plaza area between 1871 and 1938, focusing on the racialized and gendered nature of space. In her essay, "Making Do, Making Home: Borders and the Worlds of Chinatown and Sonoratown in Early Twentieth Century Los Angeles," Quintana explores the architecture of Chinese and Mexican homes in the Plaza area to "imagine social worlds created by women that presented alternative ways of living to those dictated by colonialism, industrialization, exclusion, and segregation in Los Angeles." Given the paucity of first-hand accounts by Chinese and Mexican women during the period, Quintana shows how it is possible to understand the interactions between women from these two groups by looking closely at architectural records.

Old Chinatown, New Chinatown, and China City were located adjacent to the Los Angeles Plaza in one of the most diverse sections of Los Angeles. Nonetheless, all three districts came to be seen as Chinatowns by those outside the Chinese American community. The process by which these three communities came to be marked as Chinese in the popular imagination, despite the demographic reality of this part of the city, can only be understood when we begin to explore Chinatown's place in that imagination. Over the last century, representations of Chinatown have become an important site through which whites as well as other non-Asian Americans envision Asia and Asian people. In many ways, Elaine Kim's observation about depictions of Chinese people in Anglo-American literature as a whole holds especially true for Chinatown in particular. Kim writes that "many depictions of Chinese have been generalized to Asians, particularly since Westerners have found it difficult to distinguish among East Asian nationalities." Certainly, the racialization of Chinatown has had a profound influence not only on the way whites perceive Chinese American communities but also on the way whites perceive many Asian American communities beyond Chinatown. While there exists a substantial amount of scholarly work on the representation of Chinatowns in various works of literary fiction, scholarship that explores the place of Chinatown in the popular imagination more generally is much less common. Two of the most important studies to engage the relationship between Orientalism and Chinatown are Anthony Lee's Picturing Chinatown and Kay Anderson's Vancouver's Chinatown. Anthony Lee's Picturing Chinatown: Art and Orientalism in San Francisco explores the "history of imaginings" of San Francisco's Chinatown in the period between the 1850s and the 1950s, as represented through photography, paintings, and performance.

While the last two chapters do deal with the ways in which Chinese Americans represented themselves through art, representations made by Chinese Americans are not the primary focus of Lee's study. As Lee states in his introduction, outside of these last two chapters his book "has precious little to say about the representations of Chinatown by its actual inhabitants." What Lee is more interested in is recovering "something of the pressure exerted on the art by the daily lives and experiences of Chinatown's inhabitants." Lee sees these artistic representations of Chinatown as being generated by "unequal social and political relations between Chinese and non-Chinese" and thus believes these works ultimately tell us more about larger white society than about Chinatown itself. Equally important for understanding the relationship between Chinatown as a place and Chinatown as an idea is geographer Kay Anderson's Vancouver's Chinatown: Racial Discourse in Canada, 1875-1980. Drawing on the concept of hegemony advanced by Antonio Gramsci, Anderson states that Chinatown "has been a historically specific idea, a cultural concept rooted in the symbolic system of those with the power to define…" Anderson posits that Chinatown is at its heart related to a set of racial and ethnic ideas held by whites about a particular place. She writes that Chinatown "was not a neutral term, referring somehow unproblematically to the physical presence of China in Vancouver. Rather it was an evaluative term ascribed by Europeans no matter how the residents of the territory might have defined themselves."

The works of both Anderson and Lee have been deeply influential on the present study, and yet my goal is to complicate the idea that Chinatown is a product of the white imagination and that writing about Orientalism in Chinatown should focus primarily on the representations and actions of white leaders and cultural producers. The focus that scholars of American Orientalism have placed on the actions and cultural productions of white Americans is no doubt a legacy of Said's original work. Said's 1978 text Orientalism focuses entirely on the writings of Europeans, and yet in his theoretical elaboration of Orientalism, Said lays the groundwork for understanding the ways in which this discourse could be contested. In fashioning his theory of Orientalism, Said draws on two theorists with significantly different understandings of power: Michel Foucault and Antonio Gramsci. Drawing from Foucault, Said defines Orientalism as a discourse. Said writes that "no one writing, thinking, or acting on the Orient could do so without taking account of the limitations on thought and action imposed by Orientalism." He elaborates that Orientalism did not determine what could be said about the Orient, but rather that Orientalist "interests" were always involved in discussions of the Orient. But Said breaks with Foucault in one important way. Said writes, "Yet unlike Michel Foucault, to whose work I am greatly indebted, I do believe in the determining imprint of individual writers upon the otherwise anonymous collective body of texts constituting a discursive formation like Orientalism." This is an important ontological distinction that allows him to theorize the way the actions of individual authors created this discourse.

In advancing my theory of Chinese American Orientalism, I argue that the Chinese American merchant class utilized North American Chinatowns to articulate their own distinct cultural representation of China. This should not simply be seen as an act of "self-Orientalism." Chinese American merchants and others in the Chinese American community did not simply reproduce dominant ideas about China as presented in European and American literature and culture. Rather, Chinese American Orientalism was a distinct cultural formation that functioned, for a moment, counter-hegemonically. Because Chinese American Orientalism functioned within the larger framework of Orientalism, Chinese Americans were not free to present Chinatown to tourists any way they wished. But what they could do was subvert dominant expectations of the community in subtle ways, while still representing the district as a site of Otherness. While Chinese American Orientalism was deeply linked to visual culture, it manifested itself materially in Chinatowns across North America. Examples of Chinese American Orientalism include the architecture that came to define so many North American Chinatowns, Chinese American cuisine such as Egg Foo Yung and fortune cookies, and the embodied performances of race, gender, and nation enacted by Chinese American merchants and others in Chinatown. Chinese American Orientalism drew on all of the senses of the visiting tourist. Tourists did not simply watch Chinese Americans perform ethnicity from a distance.

In Chinatown, tourists could taste, smell, touch, hear, and see the version of the Orient presented to them by the merchant class. Tactile, visual, and edible, Chinese American Orientalism was also, in a way, political. It presented a unified, non-threatening image of China as a commodity that appealed to white sensibilities enough to make a profit, but did so without replicating the worst aspects of the Yellow Peril iconography that had left so many in the community disenfranchised.

Carbon is one of the most abundantly available elements on Earth. It is one of the building blocks of all forms of life. Commercially, carbon is essential to modern civilization through a multitude of applications: as a power source in the form of coal and hydrocarbons, and as a raw material for essential products such as steel and polymers, to name a few. Carbon exists in the form of different allotropes, such as graphite and diamond, each differing from the others in its physical properties; the allotrope employed depends on the application. In research, the more recently discovered allotropes, such as fullerenes, graphene, and nanotubes, are being studied extensively because of their interesting physical properties. The discovery of carbon nanotubes (CNTs) by S. Iijima in 1991 initiated intense research interest in them. Research over the following decade showed that CNTs exhibit a rare combination of mechanical strength, stiffness, and low density together with exceptional transport properties. These attributes have made CNTs an ideal choice for multifunctional materials that combine the best of CNTs with other materials such as polymers, ceramics, and metals. Thus, CNTs are blended with other materials to obtain tailored materials for applications such as transistors, hydrogen storage, probes and sensors, lightweight bicycles, antifouling coatings for ship hulls, electrostatic discharge shielding on satellites, and artificial muscles and actuators.

Actuators are utilized in a wide variety of applications in fields such as robotics, artificial muscles, micro-electro-mechanical systems, and micro-opto-mechanical systems. Actuation is achieved by converting other forms of energy into mechanical energy to move or control a mechanism or a system. On the macroscopic scale, actuators can be powered quite efficiently by materials such as piezoelectrics, conducting polymers, and shape-memory alloys. On the micro- and nanoscale, however, such materials are not suitable: critical issues such as scalability to the nanoscale, performance, and ease of operation severely impede their implementation in nanotechnology and biomedical applications. Under such circumstances, substitutes such as metallic nanoparticles, nanowires, and CNTs can be used instead. Nanotubes and nanowires have been used to configure dynamic systems such as cantilever beams and linear and torsional actuators at the nanoscale under laboratory conditions. Nanostructures, particularly those made of CNTs, exhibit actuation in response to applied external stimuli such as heat, electric voltage, and light. The aim of this dissertation is to investigate the photomechanical properties of a CNT-based composite. To this end, the dissertation is structured as follows: Chapter 2 discusses the general background of carbon nanotubes and Reactive Ethylene Terpolymer, the polymer matrix. Chapter 3 explores the fabrication process of the composite in detail.
Chapter 4 describes the experiments conducted to investigate photomechanical actuation and discusses their results. Chapter 5 suggests a mechanism describing the photomechanical actuation process. Chapter 6 concludes the dissertation with a summary of the study.

The structure of a multi-walled carbon nanotube (MWCNT) is slightly different from that of a single-walled carbon nanotube (SWCNT). Instead of just one nanotube, a MWCNT consists of multiple tubules nested within one another. The outer diameter is usually in the range of 20 to 30 nm, depending on the number of walls within the nanotube. The electronic properties of a MWCNT are determined by the outermost tubule, whose chirality results in either metallic or semiconductor-like behavior. Since its discovery, the CNT was expected to have superlative mechanical properties comparable to those of graphene. Computational simulations conducted by Overney et al. supported this expectation, predicting a Young's modulus of 1500 GPa. The first mechanical tests on MWCNTs were conducted in 1997 by Wong et al. using an atomic force microscope, which yielded a Young's modulus of 1.28 TPa and an average bending strength of 14 GPa. However, the generally accepted value of Young's modulus was measured by Yu et al. in 2000, obtained through stress-strain measurements of an MWCNT pinned at one end inside an electron microscope. Young's modulus was measured to be between 270 and 950 GPa. The outermost layer of the MWCNT failed at applied tensile stresses ranging between 11 and 63 GPa at strains of up to 12%. Using the same method, Yu et al. measured the Young's modulus of SWCNTs to be in the range of 370 to 1470 GPa and the tensile strength to be between 10 and 52 GPa.
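As a quick consistency check on these figures (an illustrative secant-modulus estimate, not a calculation from the source), the largest reported failure stress and strain for the MWCNT outermost layer imply:

```latex
% Secant Young's modulus from a stress-strain pair, E = sigma / epsilon,
% using the largest reported failure stress (63 GPa) and strain (12%):
E \approx \frac{\sigma}{\varepsilon}
  = \frac{63\ \text{GPa}}{0.12}
  \approx 525\ \text{GPa}
```

This falls within the 270-950 GPa range reported by Yu et al., so the quoted stresses, strains, and moduli are mutually consistent.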