Substrates play a prominent role in the quality of a printed feature

Since the first integrated circuits were fabricated at Texas Instruments and Fairchild Semiconductor in the early 1960s, the number of transistors that can fit on a chip has doubled approximately every two years, following the well-known Moore’s law. This has been made possible by reducing the minimum feature size. As of 2022, through advancements primarily related to the development of new processing techniques, the smallest transistors are at the 2 nm node – small enough to fit 50 billion on a chip the size of a fingernail. While ICs were first built using germanium, it was soon replaced by silicon for two key reasons. First, silicon is abundant in nature, providing a low-cost starting material for manufacturing electronics. Second, the processing advantages of silicon raised it above other semiconducting materials. CMOS is the technology used to make most conventional electronics. The MOS and bipolar structures are fabricated through repeated application of several processing steps, such as photolithography, etching, diffusion, oxidation, evaporation, sputtering, chemical vapor deposition, ion implantation, epitaxial growth, and annealing. The strength of this manufacturing strategy lies in the standardization of these processes; it is possible to reliably and repeatably make micro- and even nano-scale features by carefully following a process recipe.

Despite the benefits, there are some limitations to conventional electronic manufacturing technologies. For one, the processing temperatures required to obtain critical components in these devices range from hundreds to thousands of degrees Celsius, which drastically limits material selection. Nearly all of the materials that survive such processing are rigid at room temperature. Rigid materials are inherently difficult to integrate with biological applications; nature is full of organic curves and fractal patterns, while conventional electronics are Cartesian and rectilinear. There are also practical limitations to the size of silicon-based electronics: the largest commercially available silicon wafer is only 450 mm in diameter. These limitations motivate printing technologies, which circumvent these problems and offer additional benefits. Because the processing temperatures are much lower than those required for rigid silicon electronics, many more materials are compatible with printing that would otherwise melt or incinerate at foundry temperatures. These materials can be solution-processed, i.e., the discrete material particles can be suspended or dissolved in a liquid-phase solvent and deposited onto a substrate – or ‘printed’. Printing is an additive process, meaning material is deposited only where it is used, and there are no steps that require removing material as in conventional lithographic processes. Many printing processes can easily be scaled to roll-to-roll processing, making it possible to produce large volumes of printed electronics at minimal cost while retaining the ability to change the design of the printed device quickly.

Finally, the low temperatures of printed technologies also allow plastic, paper, or other flexible materials to serve as the base substrate. Despite these benefits, there are very few fully printed electronic systems in practice. It is challenging to build analogs to the transistor using printed technologies. While there has been a great deal of research on developing printed transistors, more work is needed for wider adoption. Thus, most self-proclaimed printed electronic systems are hybrid electronic systems in disguise.

A substrate is the material that is printed onto. Substrates determine the bulk mechanical properties of the device, which is a large part of why flexible films are most commonly chosen. Polymer films, for example, can be flexible and/or conformal, which can be leveraged in the design of manufacturing processes and used in application spaces where physical flexibility is essential. Where flexibility is not necessary, or where a specific substrate material is required for additional device features, it is still possible to print onto rigid substrates. When a substrate’s surface energy is high, the printed ink will tend to spread; when the surface energy is low, the ink will form islands or beads. The surface energy is primarily determined by the material properties, though processing of the material can modify it.

Conductors are the base structural block of all electronic devices – printed or otherwise. They carry power to the device, form the interconnections between device layers, and transmit data as electric signals. In printed electronics, the conductor is made from either a nanocomposite ink or an organic polymer. Nanocomposite inks are common conductors in printed electronics and are made up of conductive particles, polymer binders, a solvent, and sometimes other tuning components.
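Current flows through such a nanocomposite only if the conductive particles touch one another often enough to form a connected path across the binder. A toy site-percolation simulation (not from the text; the grid size, fill fractions, and trial counts are arbitrary choices for illustration) sketches why conduction appears only above a threshold particle loading:

```python
import random

def spanning_cluster_exists(n, fill_fraction, seed=0):
    """Place 'conductive particles' at random on an n x n grid and check
    whether a connected path of occupied sites runs from top to bottom."""
    rng = random.Random(seed)
    grid = [[rng.random() < fill_fraction for _ in range(n)] for _ in range(n)]
    # Depth-first search starting from every occupied site in the top row.
    frontier = [(0, c) for c in range(n) if grid[0][c]]
    seen = set(frontier)
    while frontier:
        r, c = frontier.pop()
        if r == n - 1:
            return True  # reached the bottom row: a percolated network exists
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False

# Sweep the particle loading: a spanning (conductive) cluster only appears
# reliably once the fill fraction crosses a percolation threshold.
for f in (0.3, 0.5, 0.7):
    hits = sum(spanning_cluster_exists(40, f, seed=s) for s in range(20))
    print(f"fill fraction {f}: spanning in {hits}/20 trials")
```

The same qualitative behavior is why the particle:binder ratio is such an important tuning knob for printed conductors: below the threshold the ink is effectively an insulator, however conductive the particles themselves are.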

The suspended conductive particles, when printed, form a percolated network within the non-conductive polymer binder while the solvent evaporates away. Silver, carbon, and copper inks are the most popular choices of conductive particles, though other metals are sometimes used as well. The selection of conductor and the particle:binder ratio can be altered to tune the conductivity of the printed feature. However, these changes affect other material properties of the composite as well.

Dispenser printing encompasses all printing techniques that employ a semi-continuous flow of ink through a nozzle or print head. A schematic of the process is shown in Figure 2.2F. The composition of inks in dispenser printers can vary widely, from solvent-less fused deposition modeling (FDM) printers to binder-less direct ink writing (DIW), and everything in between. In any case, dispenser printing utilizes a nozzle mounted on a 2D or 3D chassis. As with inkjet printers, CAD is used to generate digital designs of the printed pattern, and the computer generates a program that controls the print head and the flow rate through the nozzle to create the desired pattern. An exciting application of dispenser printing is the incorporation of electroactive particles into the polymer filaments used in FDM-type 3D printers. In this method, the ‘ink’ is a solventless blend of 3D-printing polymer and electroactive particles such as carbon black. The polymer pellets are heated beyond their melting point and mixed with the particles before being extruded into a wire-shaped filament that is coiled and later used in the FDM 3D printer. The 3D-printing process then remelts the filament by Joule heating of the nozzle while mechanically pushing the filament through the nozzle at a controlled rate as the chassis moves the nozzle to the desired location in the pattern. DIW is a similar process to FDM printing, with three distinct differences.
First, the ink in DIW always includes a solvent to control its viscosity, though sometimes the solvent is a gel-phase material such as PVDF. Second, the material is physically pumped through the nozzle, like squeezing ketchup from a bottle. Finally, the ink does not need to be heated in DIW, whereas the filament in FDM printing will only flow when brought past its glass transition temperature, and is generally brought close to or beyond the melting temperature of the polymer. These differences give DIW some advantages. Because DIW can be performed at lower temperatures and the rheology can be tuned by changing the volume of solvent, a wider range of materials can be used.

After printing, several heat-treatment steps may be required to complete the printed component layer. The most common of these are annealing, curing, and sintering. Annealing is a heat-treatment process used to relieve the internal stresses of a material. Annealing is more commonly used for the heat treatment of macro-scale metals, ceramic glasses, and high-performance polymers, though it is applied to printed electronic components as well.

A macro-scale example would be a cold-rolled steel billet annealed so that it can be worked further into final products. In printed electronics, annealing is more commonly used to reduce the internal stresses of the polymer binder or to alter the crystallinity of semi-crystalline polymers such as PLA. Curing is a process that accelerates a chemical reaction; in the case of printed electronics, it almost always refers to the cross-linking of a thermoset polymer binder. Thermoset polymers that do not set in a reasonable time at room temperature are cured at higher temperatures in an oven or vacuum oven. The monomers react much more rapidly at the curing temperature, hardening the polymer beyond what would otherwise be possible. Sintering is a process in which nano- or micro-scale particles become a monolithic bulk material by diffusion. Sintering may occur in the solid state, the liquid state, or both. Consider the extreme case of a composite of perfectly spherical particles in a polymer matrix. Regardless of the particle:polymer ratio, two perfect spheres can only contact one another at a single point. If these were the conductive particles in a printed conductor composite, the resistance would be very high despite the inherent conductivity of the particles, because the cross-sectional contact area would be infinitesimal. In sintering, the composite is heated to near or above the melting temperature of the conductive particles so that they diffuse into one another, increasing the cross-sectional contact area and improving particle-to-particle conduction.

Printing is a disruptive technology that fundamentally changes how electronics are made. However, considerable advances in printed and hybrid electronics are still needed to deliver on its market promises.

Part of the problem is that printed electronics are always benchmarked against conventional silicon microelectronics in areas that favor silicon. This is flawed thinking; printed electronics should not compete against conventional electronics in areas such as the charge mobility of printed semiconductors, the conversion efficiency of printed solar cells, or minimum feature size. While researchers should strive to improve in these areas, the future of printed electronics lies in its advantages over conventional electronics: large-area, flexible, and low-cost manufacturing of electronics at high volumes with a wider variety of viable materials. These advantages open a new world of possibilities that would be impractical to achieve with silicon alone.

Artificial Intelligence (AI) is the capability of a computer system to mimic human cognitive functions. Computer scientists commonly develop AIs to mimic how humans learn and solve problems. They do this by programming the computer system to use math and logic to simulate human reasoning in order to learn and make decisions. Machine Learning (ML) is a subcategory of AI. Specifically, it is the application of mathematical models to help a computer system improve – or ‘learn’ – without direct instruction. This enables a computer system to continue improving on its own based on its previous results or experiences. AI is what an ‘intelligent’ computer system uses to behave and perform tasks like humans; ML is how a computer system builds its intelligence. There are three primary strategies for building an ML program: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning has the defining characteristic of access to annotated training data. Supervised learning algorithms induce models from the training data, which can then be applied to classify other unlabelled test data.
A common analogy for supervised learning is that of a teacher teaching a student: the teacher trains the student with many practice problems, and then the student takes a test without the teacher’s help. If supervised learning is analogous to a classroom learning environment, then unsupervised learning is like throwing a child into the deep end of a pool to teach them how to swim. Unsupervised learning models do not have access to labeled training data. Instead, unsupervised learning algorithms learn by clustering the data in different ways and trying to find patterns. The child in the pool will learn that treading their legs and waving their arms brings them closer to the surface. Finally, reinforcement learning is akin to training a dog with treats. A dog will act however it wants, but it will learn that certain desired behaviors result in a treat, such as returning to its handler on the command ‘come’ or reclining onto its haunches on the command ‘sit’. In a reinforcement learning program, the ‘treat’ is a numerical output. In practice, a reinforcement learning program will perform a task by trial and error, the task result will be scored, and then the program will attempt the same task again with behavior similar to its highest-scored behavior in previous trials. Regardless of the learning strategy, all machine learning algorithms follow a process flow similar to the one shown in Figure 3.1: a computer model is built and executed, the simulation results are scored by calculating an error term, and the error drives adjustment of the computer model parameters.
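The generic loop just described – build a model, execute it, score the error, adjust the parameters – can be sketched in a few lines. This is a minimal illustration, not any particular algorithm from the text: a one-variable linear model fit by gradient descent, with data and learning rate invented for the example.

```python
# Toy data (made up for illustration): the underlying rule is y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0          # model parameters, initially arbitrary
lr = 0.02                # learning rate (an assumed tuning choice)
for _ in range(5000):
    # 1. Execute the model on the data.
    preds = [w * x + b for x in xs]
    # 2. Score the results by calculating an error term (mean squared error).
    errs = [p - y for p, y in zip(preds, ys)]
    mse = sum(e * e for e in errs) / len(xs)
    # 3. Let the error drive adjustment of the model parameters.
    grad_w = 2 * sum(e * x for e, x in zip(errs, xs)) / len(xs)
    grad_b = 2 * sum(errs) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned y = {w:.2f}x + {b:.2f}, final MSE = {mse:.6f}")
```

After enough iterations the parameters settle near the values that generated the data, which is the sense in which the program has ‘learned’ from its previous results.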

The systems approach has several important implications for second generation models

While there was a sense that “decision support” was important, the model developments nevertheless began with research tools that were motivated primarily to better understand basic processes and effects on system performance. As long as model development is motivated primarily by academic and research outcomes, it will remain only loosely connected to user needs. Therefore, to re-orient model development towards user needs, a new set of institutional arrangements and incentives is needed. Fig. 1 presents a diagram of how these new arrangements might be organized. The figure shows the linkages between a “pre-competitive space” of basic science and model development, and the “competitive space” of knowledge product development. The concept of “pre-competitive space” grew out of the efforts of the pharmaceutical industry to collaborate on basic research while competing in product development. The arrows between these two “spaces” point both ways to represent the inevitable and important give-and-take. The model development approach that now exists is largely missing the competitive space component shown in Fig. 1.

To the extent that such a competitive space does exist, it is in the private sector, where proprietary management support is being provided, and the linkages in Fig. 1 from competitive knowledge product development back to data and model development are largely missing. In Fig. 2 we show how this link from private decision makers to models and public data could be made by connecting on-farm decision support tools to databases that could be used for model development and analysis. Facilitating a pre-competitive environment is likely to require innovations in the way research organizations operate, and may need to involve public-private partnerships (PPPs) that clearly delineate boundaries and roles in creating specific NextGen products. PPPs are one way that science and industry can collaborate to generate new applied knowledge that can feed into the creation of new businesses and services. In PPPs it is common that both private and public partners provide funding and jointly formulate the research questions that can subsequently be tackled by research institutes and universities. There are a number of challenges in structuring PPPs. For example, in the European Union PPPs have been regulated to avoid unfair competition. The EU regulations stipulate that there always has to be more than one private partner involved and that intellectual property rights to the knowledge developed belong to the research partner, which can then license its use to private partners for commercial purposes. An important aspect for a NextGen community of practice is openness. Open here means: first, inviting and engaging others to join and become involved; second, being ready to set priorities jointly with a broader stakeholder community; and third, being transparent to scientific and public scrutiny of methods, tools and results through not-solely-scientific venues.

Only a few of the agricultural systems models and economic models now in use can be said to be “open” in the sense that both the model equations and programming code are fully documented and freely available to the community of science. Establishing an open approach consistent with the principles of good science, including sufficient documentation and sharing of code to allow replication of results with reasonable effort, should be a priority of the practitioner community. Such an approach would facilitate model improvement through peer review, model inter-comparison and more extensive testing, new modes of model improvement and development such as crowdsourcing, and education of the next generation of model developers and users. Creating this open approach will also raise challenges related to incentives and intellectual property that would need to be addressed. The recent experience with the Agricultural Model Inter-comparison and Improvement Project, a new community of science dedicated to an open approach, suggests that researchers are now more willing to participate, but it has also identified some of the challenges to an open collaborative approach. For example, obtaining funding for collaborative activities creates coordination issues among research institutions and funding agencies that need to be addressed. Another advantage of an open approach is that it will encourage the emergence of competing models and modeling approaches, rather than a single “super-model.” One dominant “super-model” could eventually emerge, but the only way to know that such a model is desirable is to allow a multi-model environment to flourish.

We also expect to see alternative approaches emerge as modelers tackle challenging features such as the representation of heterogeneity, dynamics, and linkages across scales. For models to be tractable, trade-offs have to be made, and an open approach is needed to facilitate the testing of alternative solutions. There are important examples of recent efforts at creating a more open approach to agricultural model development. The bio-economic farm model FSSIM was made available as open source in 2010, after completion of its main project-related development, and published with a license that allowed further use and extension. It is notable that the open sourcing of the model was combined with training sessions, but this did not lead to spontaneous community uptake and large-scale development of this relatively complex and data-demanding model. The DSSAT crop modeling community is undertaking an effort to make its code open source with the participation of more than 20 developers. The Global Trade Analysis Project has provided extensive documentation of its model and data and allows user modification of its standard model, and there is a large number of users of the model globally. The IMPACT model developed by the International Food Policy Research Institute is publicly documented and available to other researchers. The TOA-MD model for technology adoption and sustainability assessment of agricultural systems was developed based on experience which showed that potential users needed a user-friendly, transparent tool for impact assessment. The TOA-MD model is available to users with documentation and a self-guided learning course, and there is a growing community of users. To achieve the goal of demand-driven model development, it will be necessary to strengthen the linkages between the pre-competitive space of model development and the competitive space of knowledge product development.
The current state of affairs appears to be that, on the one hand, the modeling community is strong on analytical capability but weak on linkage to user demand; while on the other hand, the developers of user-related farm-level products are weak on analytics. Thus, there appears to be the opportunity for “gains from trade” by facilitating more interaction between the two communities. An important part of this interaction has to be to identify the key research that could enable better service delivery to knowledge-product users.

Additionally, as emphasized in the NextGen Use Cases, there is a public-good value to enhancing a broader community that can provide both data and analytics for public investment and policy decision-making. These ideas are further explored in the papers by Janssen et al. and by Capalbo et al.

The explosion in the availability of many kinds of data, and the capability to manage and use them, creates new opportunities for systems modeling at farm and regional or landscape scales. Fig. 2 presents an example of the possible types of private and public data that could be generated and used both for farm-level management Use Cases and for landscape-scale investment and policy analysis Use Cases. Some of these data would be generated and used at the farm level; others would be generated and used for landscape-scale analysis to support investment decision-making and science-based policy-making. While farm-level decision making and landscape-scale analysis have different purposes, they both depend on two kinds of data: private data, including site- and farm-specific characteristics of the land and the farm operation, and the site- and farm-specific management decisions that are made; and public data, i.e. weather, climate, soils, and other physical data describing a specific location, as well as prices and other publicly available economic data. Many farm-level data and decision tools from private and public sources are currently in use and are evolving rapidly. The left-hand side of Fig. 2 presents the generic structure of these tools, the data they use as inputs, and the outputs that are generated. The right-hand side of Fig. 2 shows the general structure of the data and models needed to carry out landscape-scale research and policy analysis.
A key feature of landscape-scale models is that they use public data for prices, weather forecasts, and policy information; private site- and farm-specific input use data; and outcome-based data that are useful for both farm-level management decisions and landscape-scale policy decisions. There are three broad categories of landscape-scale data: publicly available bio-physical data, including down-scaled climate and soils data; publicly available economic data, including prices and policy information; and confidential site- and farm-specific data obtained from producer- and industry-generated databases. Landscape management and policy analysis models require spatially and temporally explicit data that are statistically representative of the farms and landscapes in a geographic region in order to provide reliable information about economic and environmental impacts and trade-offs. Such data are not typically available in most parts of the world. As a result, implementation of these models relies on the publicly available information on farm management collected periodically through special-purpose surveys. Currently available data are inadequate for various reasons: many of these data are collected with samples that are not statistically representative of relevant regions or populations for landscape-scale analysis; many data are not spatially or temporally explicit, or are only available after substantial aggregation, thus limiting their usefulness; and data are often available only with long time lags between when the land management decisions are made, when the data are collected, and when they become available for research or policy purposes.

Longitudinal data that provide observations of the same farms over time are particularly important for policy research, but few such data are available. A key implication of the framework presented in Fig. 2 is the complementarity between knowledge product design, agricultural system models, and farm-level data collection. We return to this issue in Section 4.

The NextGen Use Cases show clearly the need for whole-farm system approaches. Agricultural systems are managed ecosystems comprised of biological, physical and human components operating at various scales. Farms are embedded within larger ecological and human systems operating at regional scales, as well as larger scales. The need for a system-level understanding, however, should not be construed as meaning that there is not also a need for component-level tools. Indeed, particularly in the more specialized, industrial systems, there will be a growing demand for tools to improve management of soil fertility, pests and diseases, and other elements of on-farm management. Nevertheless, until these components are integrated into a wider systems approach, they will not be able to achieve goals of sustainable management. For example, nutrient and pesticide use cannot be managed effectively to account for potential off-farm impacts on water quality without a systems approach. Within each system level, a set of interacting subsystems is involved. This suggests the possibility of constructing models of large, complex systems by combining models of modular sub-systems. The level at which modularization may be possible remains an important question, and this in turn has implications for software engineering. For example, as discussed in Jones et al., many crops are now modeled individually and separately from livestock.
Systems with multiple interacting crops, livestock, and crop-livestock interactions are needed for various Use Cases, showing the need for these interacting components to be incorporated in a modular “plug and play” system. These biophysical production system components also interact with economic-behavioral components and environmental components. Such interactions among sub-systems show the need for standard ways to link inputs and outputs among sub-systems. As we noted above, several more complex systems models have been developed, but as yet each modeling system uses its own approach to model linking, and model components from different developers cannot easily communicate with each other. Another important issue raised by the systems approach is the appropriate level of complexity for Use Cases, an issue discussed further in Section 3.8. Research in environmental modeling indicates there are often diminishing returns to complexity. Similarly, experience with economic modeling has shown the value of “minimum data” or “parsimonious” approaches. The need for both modularity and parsimony also relates to the need for generic approaches, particularly for complex agricultural systems models and economic models, so that model developers can move away from models that are application-specific.
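The “plug and play” linkage idea above can be sketched in code. Everything here is hypothetical: the component names, state variables, and toy update rules are invented stand-ins for real crop and livestock models. The point is only the convention – every sub-system exposes the same step() interface and exchanges inputs and outputs through a shared state, so components from different developers could be chained.

```python
class CropComponent:
    """Toy crop sub-system: biomass grows by taking up soil nitrogen."""
    def step(self, state):
        uptake = min(state["soil_n"], 2.0)   # invented uptake rule
        state["soil_n"] -= uptake
        state["biomass"] += uptake * 10.0
        return state

class LivestockComponent:
    """Toy livestock sub-system: grazing removes biomass, returns manure N."""
    def step(self, state):
        grazed = min(state["biomass"], 5.0)  # invented grazing rule
        state["biomass"] -= grazed
        state["soil_n"] += grazed * 0.1
        return state

def run_farm(components, state, days):
    """Drive any mix of components through the same linkage convention."""
    for _ in range(days):
        for comp in components:
            state = comp.step(state)
    return state

final = run_farm([CropComponent(), LivestockComponent()],
                 {"soil_n": 20.0, "biomass": 0.0}, days=5)
print(final)
```

Because both components read and write the same named state variables, swapping in a different crop model – or adding an economic-behavioral component – requires no change to the driver, which is the kind of standardization the text argues is currently missing.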

Eighty percent of almond plantings are now located in the San Joaquin Valley

We examine the half-century of changes in regional shares of production for the major commodity groupings – fruit and nut crops, vegetable crops, and dairy products.

Statewide acreage of fruit and nut crops increased throughout the last half-century, from about 1.5 million acres in 1950 to nearly 2 million in 1975 and 2.5 million in 2000. Yields per acre also increased, resulting in production increases far above those from acreage alone. Figure 8 shows that the share of the state’s acreage fell in the Central Coast region from 18 to 8 percent and in the Southern California region from 26 to 8 percent. There were significant increases in the San Joaquin Valley; many of the additional acres are located in newly developed areas supported by federal and state water-delivery systems. Commodity Example – Almonds. The shifting location of almond acreage reflects the southward shift of production within the Central Valley, from the Sacramento to the San Joaquin Valley, toward productive irrigated lands with newer cultural and management systems in the southern region. Urbanization displaced a large portion of the acreage in the Central Coast region.

In 1950 three of the “top five” almond-producing counties were located in the Sacramento Valley. In 2000 the top five counties accounted for more than two-thirds of statewide acreage, and all were located in the San Joaquin Valley. Commodity Example – Oranges. In 1950 four out of every five acres of oranges were in Southern California. The early dominance of Southern California counties waned within the next two decades, and acreage was progressively displaced northward to the east side of the San Joaquin Valley as CVP water deliveries began in the 1950s. San Joaquin Valley acreage rose by 85,000 acres between 1950 and 1975. Tulare County alone now accounts for more than half of the state’s 207,000 acres of oranges, and 82 percent of the harvested acreage is now located in the San Joaquin Valley production region. Orange County, which had 60,109 acres of oranges in 1950, retained only 115 acres in 2000. Appendix Table A2, Part B, identifies harvested acreages of oranges for the top five counties from 1950 to 2000. In 1950 the top five counties accounted for 85 percent of orange acreage; concentration in the top five counties is now 93 percent of statewide acreage.

Data presented in Table 2 confirm two fundamental trends in California agriculture. The first is the declining importance of Southern California in overall value. Los Angeles produced the highest value of production in 1949 but had disappeared from California’s top five by the 1960s. The second trend is the rising importance of the southern San Joaquin Valley; Fresno, Kern, and Tulare Counties accounted for 21 percent of California production in 1949 and 32 percent in 2000.

This reflects two things: the shift of high-value commodities out of Southern California, and the enormous productive potential of both east-side agriculture and the newly irrigated agricultural land on the west side of the valley. The share of total value coming from the top five counties increased sharply, from 35 percent to 49 percent, over the 50-year period. A few other points of note: California Department of Food and Agriculture preliminary data for 2001 put Tulare County in the number-one spot, confirming the rising importance of dairy production to California. Monterey County has steadily increased its share of production, which rose from 3 percent in 1949 to 11 percent in 2000, reflecting a rapid increase in demand for fresh vegetables. Table 3 lists the top ten California counties by value of crop and animal production. In 1950 Los Angeles County was number one, but it was shortly thereafter overtaken by Fresno County, which dominated throughout the last four decades of the 20th Century. The same six San Joaquin Valley counties are included in the 1950 and 2000 rankings, but their relative rankings changed from decade to decade. There are three Southern California counties in both the 1950 and 2000 rankings, but they are entirely different counties: in 1950 they were Los Angeles, Imperial, and Orange; in 2000, San Diego, Riverside, and Ventura. Increased concentration of statewide agricultural production occurred over the past half century. The top five counties accounted for about a third of the value of production in 1950 and nearly half in 2000. The top ten counties accounted for slightly more than half of statewide production in 1950 and 70 percent in 2000.

In summary, population growth and water availability have been the two dominant underlying forces affecting regional shifts in the location of agricultural production within the state. Rapid postwar and continuing urban and suburban population expansion forced relocation to the interior valleys – first from the Los Angeles basin and later from the Central Coast and San Francisco Bay Area. Only high-value vegetables, nursery, and specialty crops persist, because of climatic and locational advantages, in the remaining Central Coast and Southern California areas of production. Trees and vines have, when possible, moved from the Southern California and Central Coast regions to interior areas with more favorable soils and water supplies and less population pressure. The most favored area for increased intensive production, including dairies, is the San Joaquin production region. In general, the Sacramento Valley has had fewer opportunities to change the mix of commodities produced. In some cases, commodities traditionally grown in the Sacramento Valley have also found more productive locales in the newer crop areas of the San Joaquin Valley. Now, at the start of the 21st Century, urban development is placing pressure on agricultural production in the northern San Joaquin and southern Sacramento Valleys, setting in motion further dynamics affecting the future location of the state’s agricultural production.

California’s farms and ranches have always relied on exporting a significant share of total production to foreign markets, recently amounting to a fifth or more of the total value of production. The value of California agricultural exports ranged from $6.5 to $7 billion over the five-year period 1997–2001. Table 4 shows export values and rankings for California’s most important agricultural export commodities for 1997 and 2001. The rankings of the most important exported commodities did not change much between 1997 and 2001.
Almonds and cotton, the top two export commodities, each with exports exceeding $600 million, nonetheless saw export values decline significantly between 1997 and 2001.

The largest percentage increases in export values were for carrots and dairy; decreases of 30 percent or more occurred for beef and beef products, hay, lemons, and cotton. An improvement in commodity prices from lower price levels could significantly increase the value of agricultural exports in the 21st Century. Exports have always been important to California's farmers and ranchers. Over time, changes in the character of California agriculture have significantly changed the kinds of animal and plant commodities entering export markets. Table 5 compares the most recent list of the top 20 export commodities with that of two decades earlier. Comparable export values do not exist for these two periods, but the two lists of rankings do reflect the agricultural sector's shift toward production of higher-valued dairy, fruits, tree nuts, and vegetables. Export outlets are crucial for many of California's commodities. It is estimated that in 2001 the quantities exported were nearly half or more of the rice, pistachios, almonds, prunes, and cotton produced on California's farms and ranches. Note that the export of grapes and grape products appears first in this table as the most important agricultural export. The aggregate of fresh grape, wine, raisin, and grape juice exports was more than $1 billion, easily topping the value of almonds or cotton alone. In 2001, 17 percent of the quantity of production of the top 50 export commodities was exported. Economic conditions in East Asia and Europe are important to exporters. Shares of exports have not changed much over the recent past. Roughly a fifth is exported to each of the following markets: Japan, other East Asian nations, the European Union, Canada, and the rest of the world, including Mexico and Latin America.
Changes in foreign economic conditions, trading relationships, and exchange rates significantly affect the bottom line for California producers.

Land is an important asset for farmers and ranchers. Farm real estate values include land and buildings plus permanent appurtenances. USDA statistics show substantial appreciation over time in the value of land and buildings. The average value in California in 1950 was $154 per acre. The nominal value of $2,850 per acre in 2000 is 18.5 times larger than that for 1950. Real appreciation is about 250 percent when adjusted for inflation. There is, of course, wide variation in per-acre values depending on the location and the highest and best use of California's agricultural land. Select vineyard and vegetable lands are considerably higher in value and have displayed greater appreciation than statewide averages. Table 11 shows USDA-estimated values for several broadly defined statewide types of California agricultural land in 2000. Statewide averages are of limited use in reflecting the large variation in values of land in various areas of the state, even for land with similar highest and best uses. For example, the California Chapter of the American Society of Farm Managers and Rural Appraisers reported wine-grape values ranging from $3,500 to $180,000 per acre in 2000, along with considerable variation in opinions about market activity and price trends depending on location within the state. Vineyards in the North Coast region ranged in value from $12,000 to $180,000 per acre depending on other factors, such as location or root stock.
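As a quick arithmetic check on the appreciation figures above: the nominal ratio follows directly from the two per-acre averages, while the 1950-to-2000 price deflator below is an assumed round number chosen to match the roughly 250 percent real appreciation stated in the text, not a figure from the source.

```python
# Rough check of the land-value appreciation figures. The per-acre values
# come from the text; the CPI deflator is an ASSUMED 1950->2000 multiple.
value_1950 = 154.0    # $/acre, California average, 1950
value_2000 = 2850.0   # $/acre, California average, 2000 (nominal)

nominal_ratio = value_2000 / value_1950
print(f"nominal appreciation: {nominal_ratio:.1f}x")  # ~18.5x, as in the text

cpi_deflator = 5.3    # assumed price-level multiple, 1950 -> 2000
real_ratio = nominal_ratio / cpi_deflator
print(f"real appreciation: {(real_ratio - 1) * 100:.0f}%")  # roughly 250%
```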

Sales information for irrigated crop land reveals a range from $600 to $49,000 per acre depending on location, highest and best use, and water source. Central Valley sales values range from a low for Kings County lake-bottom irrigated crop land to a high for choice irrigated crop land in San Joaquin County. Coastal irrigated land values were substantially greater. In Monterey County, values ranged from a low of $9,000 per acre in the King City area to a range of $20,000 to $39,000 per acre in the prime vegetable production area of the lower Salinas Valley.

Net farm incomes to California farmers and ranchers were constant and without much variation during the 1960s. For most of the 1990s, incomes ranged from $5 to $6 billion with considerable year-to-year variation, presenting a difficult financial environment for agricultural producers. California farmers experienced reduced levels of net farm income beginning in 1997, as did all U.S. farmers. The two interim decades were expansive years, showing growth in production capacity statewide and cyclical variations that were mostly associated with offshore market opportunities gained and lost. California's share of U.S. net farm income increased from 9 to 11 percent over the period 1960–2000. In contrast, U.S. net farm income growth was more gradual through the 1980s, except for a spurt in the early 1970s due, again, to export market opportunities that were attractive to all U.S. crop and livestock producers. The more significant growth in net incomes occurred from the 1980s through the mid-1990s.

Undertaking a prognosis of the future is an exercise fraught with danger. Surely the future is unpredictable in absolute terms, but we believe that one may learn a great deal about what forces may shape the future by understanding the importance of past and enduring features that have influenced the growth and development of California agriculture.
Our stylized history suggests a set of historical factors that have influenced the development of California agriculture over most of its history. We identify six general categories of drivers of importance (biophysical; technology and inputs; access to capital and labor inputs; human capital; demand factors; and public investment) and, within these categories, a more specific total of 18 historical drivers. First, this chapter establishes a baseline for appreciating the historical influence of these identified drivers, against which we can speculate on their future influences. Then we give our evaluation of how important they were in each of the three historical periods: pre-20th Century, 1900–1950, and 1950–2000. Table 13 lists the six major categories and 18 historical drivers.

There appeared to be many harbingers of a dismal future for the industry

The next four columns vary that distance from 30 to 70 km in 10-km increments. As can be seen in Panel A, the impact of an additional fire is considerably larger when we focus on nearer fires, but this pattern of results no longer holds when we standardize our outcome measure based on the variability of test scores, as in Panel B. Unsurprisingly, the results become smaller as we include test takers further away from the fire. At a 70-km radius, as seen in Table 4, the results are no longer significant. Together, these results highlight the relatively localized impacts of agricultural fires. In the subsequent columns of Table 4, we explore the sensitivity of our results to alternative central angle measures used to determine whether an individual is upwind or downwind of a fire. Recall that our baseline model specification uses an angle of 45 degrees to define upwind and downwind fires. As we alter the angle to 30, 60, and 90 degrees, the estimates remain significant, but become smaller as the angles become larger. This pattern of results is consistent with standard models of pollution dispersion, as wider angles will expand the 'treated' upwind sample to include more individuals with peripheral levels of exposure. It also further validates that our upwind and downwind measures are doing a reasonable job of capturing the relevant transport of pollution from fires to test centers.
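The upwind/downwind assignment described above can be sketched geometrically. The function names, the bearing formula, and the exact convention below are illustrative assumptions, not the paper's actual implementation; the 45-degree default mirrors the baseline specification in the text.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360).
    Standard formula; used here to locate a fire relative to a test center."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def classify_fire(wind_from_deg, fire_bearing_deg, half_angle=45.0):
    """Label a fire 'upwind' if it lies within +/- half_angle of the direction
    the wind blows from, 'downwind' if within +/- half_angle of the opposite
    direction, and 'neither' otherwise (an illustrative convention)."""
    diff = abs((fire_bearing_deg - wind_from_deg + 180) % 360 - 180)
    if diff <= half_angle:
        return "upwind"
    if diff >= 180 - half_angle:
        return "downwind"
    return "neither"

# Wind blowing from due north (0 deg): a fire bearing 30 deg from the test
# center falls inside the 45-degree cone and counts as upwind.
print(classify_fire(0, 30))    # upwind
print(classify_fire(0, 170))   # downwind
print(classify_fire(0, 90))    # neither
```

Widening `half_angle` to 60 or 90 degrees, as in the sensitivity checks, admits more peripherally exposed fires into the upwind group, which is consistent with the attenuation the text reports.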

Table 5 experiments with alternative ways to define a fire. The first column reproduces our core results from Table 2, while the next takes a more aggressive approach to classifying fires as exogenous by limiting our attention to those fires within the 50-km radius of a county administrative center but that take place in a different county. While our use of wind direction is meant to capture the economic effects of agricultural fires, the enforcement of any policies designed to limit agricultural fires or protect air quality occurs primarily at the county level. Thus, our focus on non-local fires should help address any potential concerns about the endogeneity of local policies vis-à-vis testing outcomes. The results using this specification are largely unchanged. In the next column, we inverse-distance weight fires to better reflect the distance of the fire from the county administrative center. In the following one, we account for the intensity of the fire by weighting by the fire radiative power, in watts, of each event. The estimates remain statistically significant, but are slightly smaller in magnitude than those under our preferred specification. Finally, we use reliability measures from the fire dataset to adjust for the probability that a hotspot is genuinely a fire. The results after this adjustment are statistically significant and slightly larger in magnitude. In Table 6, we explore a final set of robustness checks. As before, the first column reproduces our core results for ease of comparability. We report estimates using alternative ways of clustering standard errors, either by prefecture or by county and by year. The estimates are robust to these different clustering approaches, suggesting that spatial and temporal autocorrelation is not a big concern in our setting. Next, we add controls for visibility.

These controls are important as impaired visibility may trigger avoidance behavior in the lead-up to the exam. In addition, gray skies can impair one's sense of psychological well-being, particularly for test takers worried that diminished air quality might affect their performance. In the next column, we expand our focus in Shandong to the third exam day, which takes place only in this province. In the following one, we add the data we have from Jiangsu Province, which covers only part of our study period. The coefficients barely budge across the first three checks. The results are slightly smaller and now significant only at the 10-percent level under the final one. In the end, our results appear quite robust to alternative methods of measuring fires, assigning exposure, clustering standard errors, and defining our sample population. That the magnitudes of the results change in expected directions as we tighten or liberalize the approach we use to assign fires to testing facilities is particularly reassuring.

In this section, we estimate the effect of agricultural fires on air pollution, both to confirm that air pollution is the channel through which agricultural fires affect students' exam scores and to place our results in a broader context. As described earlier, we do so by using data from the 2013–2016 period, for which daily air pollution measurements, even in more rural areas, are available.

The ideal design for this analysis would focus exclusively on the two-day exam period, but this leaves us with limited statistical power. Instead, we construct a panel of two-day moving averages of pollutant concentrations in June and link them with proximate agricultural fires during the same period. The empirical model for this estimation is nearly identical to the one described earlier, except that the dependent variable is now one of the six criteria air pollutants. Weather variables are now measured as two-day averages corresponding to each moving two-day period in June for which we have pollution measures. The results are shown in Table 7. The first two rows list the two-day averages and standard deviations of each pollutant in June during 2013–2016. The PM10 concentration is approximately 78 µg/m3 and the PM2.5 concentration is approximately 46 µg/m3, both of which greatly exceed World Health Organization guidelines. The other pollutant levels are more modest, although still higher than those typically found in developed countries. Turning to our estimates, we find a significant and substantial effect of upwind agricultural fires on PM10 and PM2.5. A one-point increase in upwind agricultural fires increases PM10 and PM2.5 concentrations by 0.476 µg/m3 and 0.262 µg/m3, respectively. We also detect a weak effect of downwind fires on PM10, and the coefficient on the upwind-downwind difference becomes insignificant, in contrast with that for PM2.5. This may be due to the fact that PM10 is heavier than PM2.5 and thus less responsive to wind direction. The impacts on PM2.5 are non-trivial: a one-standard-deviation change in the upwind-downwind difference is associated with a 5.6 percent standard-deviation change in PM2.5. Otherwise, downwind fires have no impacts on air quality, providing further validation of our empirical strategy to uncover the pollution-driven impacts of agricultural fires on NCEE test performance.
We find no effect of agricultural fires on other pollutants, including SO2, NO2, CO, and O3.
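The moving-average panel described above can be sketched in a few lines. The daily readings below are hypothetical placeholders, not values from the monitoring data; the point is only the two-day windowing that mirrors the two-day exam.

```python
# Two-day moving averages of daily PM2.5 readings, mirroring the construction
# of a June panel matched to the two-day exam window. Values are hypothetical.
daily_pm25 = [40.0, 52.0, 46.0, 61.0, 44.0, 38.0]  # June 1-6, ug/m3

# Each entry averages the current and previous day's concentration,
# so the smoothed series starts on the second day.
two_day_avg = [
    (daily_pm25[i - 1] + daily_pm25[i]) / 2
    for i in range(1, len(daily_pm25))
]
print(two_day_avg)  # [46.0, 49.0, 53.5, 52.5, 41.0]
```

Weather covariates would be averaged over the same rolling two-day windows before being merged with the fire counts.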

In general, these estimates are consistent with those found in the scientific literature and with the recent empirical analysis by Rangel and Vogl in Brazil, both of which find that agricultural fires primarily emit PM. Given that the samples differ between our estimates of the impacts of fires on pollution and of the impacts of fires on test performance, we are unable to provide an instrumental variable estimate of the effect of PM on student scores. As an alternative, we provide a rough estimate akin to a Wald estimator. Using the ratio of the reduced-form estimates over the first-stage estimates based on the differences in upwind and downwind fires, we find that a one-standard-deviation elevation in PM2.5 will lower average student scores by 13.6 percent of a standard deviation. While these magnitudes are quite modest, they are roughly three times as large as those found for the impact of PM on Israeli test takers. A simple transformation further shows that a 10 µg/m3 increase in PM2.5 reduces test scores by 4.6 percent of a standard deviation, which is larger than the 1.7 percent estimated by Ebenstein et al. This likely reflects the higher levels of pollution in our setting, but may also be the result of our empirical strategy, which relies on wind direction rather than an approach that assigns pollution equally to all of those within a certain distance of a pollution monitor. In addition, our estimates are also larger than those estimated for temperature. That said, our estimates here should be treated with some caution, as our 'two-stage approach' relies on data from adjacent but distinct time periods.

California agriculture in 2004 is a very different industry than it was in 1950, 1850, or, for that matter, at its beginning in the late 18th Century. California, until well into the 18th century, was one of the few remaining "hunter-gatherer" societies left in the world.
The origins of sedentary California agriculture lie in the development of the Spanish missions over the period 1769–1823. Over its brief history of 250 years, the character of California agriculture has been in a perpetual state of transition and adjustment: from early mission attempts to raise livestock, grow grains, and develop horticulture; to the era of ruminants; to the development of large-scale, extensive wheat and barley production; to the beginnings of intensive fruit, nut, and vegetable agriculture based on ditch irrigation and groundwater; to pioneering large-scale beef feedlot and dairy production; to the intensified and expanded production of an increasingly diverse portfolio of crops resulting from massive public irrigation schemes; to today's highly sophisticated, technologically advanced, management-intensive agricultural industry, which is embedded in a rich, urban state of 35 million people. It is a history of perpetual, profound, and often painful change. The turn of the millennium was marked by hard times in California agriculture: low prices seemingly across the board, water-supply woes, contracting growth in export markets, more stringent environmental regulations, and declining farm income. What does the future hold for California agriculture? Is it as bleak as it sounds? California agriculture has experienced recurrent challenges over its history and has survived. Can it do so again? This report is a modest attempt to throw some light on these questions. In this introduction we first review the situation in 2000–2002 to identify an assortment of "turn of the century" problems confronting California agriculture. In Chapter II we then place these symptoms/indicators in a historical context in a stylized, epochal history of California agriculture circa 1769 to the present.

Chapter III is a more detailed examination of major structural shifts from 1950 to 2000, providing a look at internal performance indicators as well as comparisons to the performance of U.S. agriculture. In Chapter IV we develop a list of major factors that have driven California agriculture from the early mission agricultural period through the 20th Century and then make our qualitative assessment of the importance of these “drivers” in the early 21st Century. Chapter V contains our thoughts about the future of California agriculture. Overall, California agriculture has always battled economic adversity. While blessing California with good weather and fertile soils, nature did not provide adequate rainfall in the right place or at the right time. The downside is that investments are needed to bring water to the soil to grow crops. The upside is that irrigation potentially allows watering crops at the precise time of need and in the correct amounts, greatly increasing the range of production options. Thus, water management is a critical additional dimension of complexity for California agriculture. California is a long distance from everywhere; therefore, importing and exporting have always been expensive in terms of both money and time. Finally, California, because of its subtropical, Mediterranean climate, has different and more complex problems with pests and diseases than does the rest of mainland agriculture. Yet, since the 1850s, California agriculture has grown and adjusted many times. Each time, the composition and character of agriculture have changed, but the state’s overall industry, in terms of value and volume of output, has grown steadily and has always returned to profitability. Will California agriculture be unable to adjust and grow in the 21st Century? We cannot find evidence to support this proposition. 
In what follows we try to justify that conclusion.

The 21st Century began with great uncertainty in the minds of many California farmers and ranchers. In the two sections that follow, and in no particular order of significance, we list some suggested indicators compiled from a variety of media. California has a highly diversified agriculture. Historically, when some prices have been low, fruit, nut, vegetable, or livestock prices were high, but this was not generally true in 2000–2001. Hence the widespread concern following recognition that the last time everything was down was during the Great Depression of the 1930s.

This indicates that the hydrogel did help increase drought tolerance in the tomato plants

In controlled greenhouse conditions, we grew tomato plants from seed for two weeks and then thinned the plants to one individual per pot, where pots either contained soil treated with 0.4 wt % hydrogel mixed into the top half of the soil, near the plant root zone, or were untreated controls. We continued watering the plants regularly for two weeks to ensure healthy plant growth before subjecting them to various watering treatments. Individual pots were subjected to one of three conditions: watering every two days to simulate normal growth conditions; drought for nine days followed by recovery for seven days; or drought for nine days followed by recovery for seven days, followed by another drought for ten days, where "-G" denotes a treatment with gel amendment. Here, drought conditions were simulated by discontinuing watering, while normal watering conditions and recovery were applied by watering every other day. We included treatments with drought recovery and a second drought to evaluate the effect of the hydrogels on the tomato plants' ability to recover and then be re-subjected to drought. Re-watering has demonstrated improved growth, stomatal re-opening, and photosynthetic recovery in plants. The recovery time of crop function after drought heavily determines the drought's impact on plant health, and wetter conditions typically expedite recovery.

Understanding how hydrogels affect recovery is therefore vital to assessing their potential as soil conditioners. Throughout the experiment, various measurements were taken to monitor the health and productivity of the plants. Chlorophyll content, leaf water potential, and stomatal conductance measurements were collected at various time points throughout the experiment. Soil plant analysis development (SPAD) chlorophyll measurements are highly correlated with chlorophyll concentration and are used as an indicator of maximal photosynthetic capacity. In drought conditions, plants often experience reductions in photosynthesis and therefore changes in chlorophyll content. We evaluated SPAD for the tomato plants on days 0, 5, 9, and 16 of the experiment. Here, we generally observed no significant difference in chlorophyll content between the various conditions, whether the plants were watered or not and whether they were planted in soil with hydrogel or not. As such, the hydrogel's presence is neither detrimental nor advantageous for leaf chlorophyll synthesis. The stomatal conductance of the tomato plant leaves was paired with Ψleaf measurements. The stomata are microvalves on leaf surfaces that facilitate the gas exchange of carbon dioxide and water between plants and the atmosphere. Stomatal opening enables leaf photosynthesis, plant growth, and water use, while stomatal closure is necessary for plant survival during drought. This regulation is crucial for plants, as a majority of water loss from transpiration occurs through stomatal openings.

As most leaf water exchange occurs through stomatal pores, drought stress causes leaf stomatal closure and therefore reduced stomatal conductance. Stomatal conductance was significantly reduced for treatments that underwent drought and then recovered after watering. Additionally, for plants that underwent one drought cycle, we observed a lower stomatal conductance in plants that were not treated compared to those treated with 0.4 wt % commercial hydrogel. The stomatal conductance of the plants that underwent two droughts was low and indistinguishable whether gel was present or not. This may be attributed to permanent damage of the stomatal pores; stomatal pores can reopen once rehydrated only if there is no irreversible cellular damage. Lastly, we compared the final dry biomass of all conditions, as water stress can impose negative effects on crop yield. Relative growth rate (RGR) was also calculated as the growth rate of each tomato plant relative to its size at a previously determined time point. Specifically, RGR determines the exponential relationship of the mass of plants at two time points relative to the length of time between measurements. RGR is related to plant health, as increased masses of roots, stems, and leaves, and thus higher yield, are indicative of a healthier plant. For all watering conditions, we observed significant differences in the RGR of tomato stems, leaves, and roots. Additionally, in regular watering conditions, tomato plants grown in soil treated with hydrogel demonstrated a higher RGR of reproductive leaves compared to the control.
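The RGR definition above corresponds to the standard formula RGR = (ln m2 - ln m1) / (t2 - t1). A minimal sketch follows; the masses are hypothetical examples, since no raw values appear in this passage.

```python
import math

def relative_growth_rate(mass_t1, mass_t2, days_between):
    """Relative growth rate: the exponential rate relating plant mass at two
    time points to the time elapsed, RGR = (ln m2 - ln m1) / (t2 - t1).
    With masses in grams and time in days, units are g g^-1 day^-1."""
    return (math.log(mass_t2) - math.log(mass_t1)) / days_between

# Hypothetical dry masses (grams) for one plant's leaves, 14 days apart.
rgr = relative_growth_rate(1.2, 3.0, 14)
print(f"RGR = {rgr:.3f} g/g/day")  # ~0.065
```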

However, for conditions OD and TD, where droughts were applied, reproductive leaves were not observed on any plants that were subjected to two droughts. RGR was also improved by the addition of Terra-sorb hydrogel for all treatments. A visual representation of OD and OD-G before harvesting can be seen in Figure 3.11. We also observed root penetration of the hydrogels, which has previously been suggested to increase plant-available water in studies that applied hydrogels to lettuce and barley seedlings and to tree cuttings. Using similar experimental conditions to those described for Terra-sorb, we applied trehalose hydrogel, also at a 0.4 wt % concentration. The following changes were made from the previous experiment: the droughts both lasted for 11 days instead of 9 days, and the SPAD, Ψleaf, and gs measurements were collected at slightly different time points; all were collected on days 0, 11, 18, and 28. Overall, there were no statistical differences in the SPAD, Ψleaf, gs, or growth measurements between the plants with or without gel under the same treatment. Plants that underwent two droughts are shown in Figure 3.21. The only statistical differences were between treatments that were watered and those subject to drought. It should be noted that the gels did not negatively affect plant growth and, as demonstrated in previous studies with other gels, may be effective at higher concentrations or in different environmental conditions, like a sandier soil. We believe the trehalose hydrogel did not work well when applied at 0.4 wt % because it is not as water absorbent as Terra-sorb. The hydrophobic styrenyl backbone of the hydrogel prevents high swelling ratios, so it is not effective as a soil conditioner in the treatments described in this report.
However, the gels generally did not negatively influence plant health, and so could potentially be applied in another agricultural context.

As the world population rapidly grows, global food demand is increasing at an unprecedented rate. At the same time, fewer land resources are available for food production due to urbanization, competition from non-food industries, and climate change-induced desertification of arable land. It follows that 90 % of growth in crop production is projected to come from enhancing harvest yields rather than from increased land use. By controlling harmful insect, weed, or fungal populations, pesticides are a critical technology for agriculture. However, the utilization efficiency of pesticides is low due to undesirable properties like high volatility, mobility through soil, and vulnerability to degradation. This ultimately leads to nonspecific over-application, where up to 75 % of pesticides never reach their desired target.

The unused pesticides cause undesirable effects, including environmental pollution, toxicity to non-target organisms, and the inducement of resistant pests. Propesticides are relatively inactive derivatives of pesticides that are converted to more active parent compounds upon a transformation event, such as hydrolysis or enzymolysis. Through tuning the physicochemical properties and pharmacokinetic behavior of active ingredients, propesticides have demonstrated improved targeting capabilities with reduced off-site movement and less premature degradation. Despite the additional synthetic steps to create conjugates, they are considered greener holistically, since lower application rates are required to reach a similar level of efficacy as the unmodified compound, thus reducing environmental impact. As a result, propesticides are widely used and account for a significant portion of pesticide sales. In particular, proherbicides made up 37 % of total herbicide sales in 2015. Cyclic β-triketone herbicides are 4-hydroxyphenylpyruvate dioxygenase (HPPD) inhibitors and are applied globally to protect a variety of crops, including two staple crops, maize and rice. In 2017, they made up 4.4 %, or $1.3 billion, of the global herbicide market. Currently, there is one β-triketone proherbicide on the market, benzobicyclon, which is approved for use in rice paddy fields and transforms into its active hydrolysate form upon hydrolysis. β-Triketone proherbicides that can transform in other relevant conditions, like maize cultivation, have the potential to improve their sustainable use in agriculture. Furthermore, there have been recent developments of transgenic crops that are resistant to HPPD-inhibiting herbicides, and complementary herbicide formulation technologies will further increase their agrochemical utilization efficiency.
Mesotrione, 2-[4-(methylsulfonyl)-2-nitrobenzoyl]-1,3-cyclohexanedione, is a top-selling, highly selective β-triketone herbicide commonly used to protect maize from broadleaf weeds and annual grass weeds, but it is susceptible to degradation and dissipation in soil. One study demonstrated that mesotrione had a half-life of two to 18 days in soils ranging from pH 4.2 to 8.3. Additionally, mesotrione and its degradation products have shown significant toxic effects in microorganisms, aquatic species, and off-target crops. Despite the popularity of and need for improved application of mesotrione, the synthesis and evaluation of proherbicide mesotrione conjugates have not been reported.

We hypothesized that by creating reversible modifications to mesotrione, its properties could be improved without sacrificing the herbicidal activity of the parent compound. Specifically, the addition of hydrophobic moieties could reduce soil mobility, as lipophilic compounds have high affinity for soil particles. Herein, we report the synthesis of mesotrione thioether, ester, and enamine conjugates, detailing the unique reactivity of the β-triketone moiety. The hydrolytically reversible proherbicides were then utilized for the controlled delivery of mesotrione in aqueous conditions and through soil. Finally, the efficacy of the ester conjugate against a common broadleaf weed, Chenopodium album, was evaluated. To create an effective proherbicide, the enolizable β-triketone moiety of mesotrione was chosen as a target location for conjugation. The anionic form of mesotrione is more water soluble, which could lead to environmental problems through groundwater leaching. We postulated that modifying the enol with hydrophobic moieties would reduce mesotrione's water solubility and soil leaching potential. Moreover, other β-triketones are used for a variety of applications, including additional herbicides, human therapeutics, recyclable plastics, and solid-phase peptide synthesis. Therefore, conjugation strategies developed for mesotrione's β-triketone functional group could be transferable to other important compounds. The key structural components of mesotrione that determine the reactivity, stability, and properties of its conjugates are outlined in Scheme 4.1.
The β-triketone moiety readily tautomerizes into its β-keto-enol form due to its low pKa of 3.12. The resulting enol/enolate of mesotrione provides a site for conjugation and is also the group that coordinates with HPPD active sites for inhibition. It is well known that more acidic cyclic β-triketones are more active against weeds, so these herbicides commonly have electron-withdrawing benzoyl substituents to further decrease their pKa. Mesotrione is especially acidic due to the nitro and methylsulfonyl groups in the 2- and 4-positions of the benzene ring, respectively. With the aim of making the conjugate reversible in agriculturally relevant conditions, we focused on creating linkages that were susceptible to hydrolytic degradation. First, a phenyl thioether conjugate was created using a method similar to the synthesis of benzobicyclon, which is known to degrade in aqueous solutions. A diketovinyl chloride derivative was first synthesized with a Vilsmeier-type reagent, oxalyl chloride with a catalytic amount of DMF. Chlorinated mesotrione was then added directly to an alkaline solution of phenyl thiol to form the final product. An alkyl thioether derivative was synthesized using a similar method to form an ethyl thioether analogue of mesotrione as well. Although it is possible for multiple isomers to form, via endocyclic or exocyclic substitution, 1H-NMR and 13C-NMR spectroscopy of the conjugates confirmed that the cyclohexane 4- and 6-methylene groups have distinct shifts. This indicates that the endocyclic substitution likely formed, as the exocyclic substitution would have rendered the methylene groups equivalent. Next, we created a phenyl ester derivative using benzoyl chloride and the isolated mesotrione-TEA salt. Initially, we attempted to synthesize the ester from a solution of mesotrione in base. However, after screening base and electrophile equivalents, we observed that the disubstituted mesotrione product formed favorably.
We hypothesize that the pKa of the proton at position 4 or 6 of the cyclohexane ring decreases significantly upon formation of the monosubstituted ester, thus favoring the disubstituted product in the presence of base. As such, we pre-isolated the mesotrione enolate salt by reacting mesotrione with an excess of TEA and recrystallizing the product. The salt was then used to prepare the monosubstituted phenyl ester derivative of mesotrione. We then aimed to produce alkyl ester derivatives from acetyl chloride and hexanoyl chloride, but these proved unstable to purification attempts.

The key assumption was that the rate of injury and illness was the same on large and small farms

Employers use OSHA Form 300 to record each incident, including the employee's name, job title, the date, a brief description of the incident, days absent, and other pertinent data. Employers sum the numbers within categories each year. The BLS Office of Safety, Health, and Working Conditions surveys roughly 250,000 firms and state and local government agencies, collecting annual OSHA Form 300 summaries and compiling them into the SOII. Based on these data, the BLS Safety Office publishes annual estimates of the numbers of non-fatal injuries and illnesses, employment, and incidence rates within detailed industries, including crop and animal farms. Our data on injuries and illnesses are drawn directly from the SOII. Our employment data are drawn from the QCEW. Incidence rates are for full-time equivalent (FTE) workers. The Safety Office estimates FTE workers using numbers of injuries and illnesses from the SOII, employment from the QCEW, annual work hours from the SOII, and a formula that defines full-time employment as 2,000 work hours per year. QCEW employment data "are derived from the quarterly tax reports submitted to State workforce agencies by employers, subject to State unemployment insurance laws" as well as by federal agencies. The QCEW does not explicitly exclude farms with <11 employees. Nevertheless, some state laws do not require farms with <10 employees to provide unemployment insurance, and these small farms may not be included in QCEW counts.

The state with the largest farm worker employment, California, requires UI even for small farms. The QCEW nevertheless recognizes limitations in its ability to capture all employment within agriculture. The QCEW estimates that it misses 0.2 million employees in all agricultural industries combined and captures 1.2 million, suggesting it misses 14.3% of farm workers. In 2011, for crop farms in the SOII, the estimated number of injuries and illnesses was 19,700; the employment estimate was 413,800; the case rate was 5.5 cases per 100 FTE. For animal farms, the corresponding numbers were 12,400 injury or illness cases, 163,600 employed, and 6.7 cases per 100 FTE. The 2011 QCEW employment figures were 531,245 for crop and 230,610 for animal farms. Our first methodological adjustment was to increase the SOII injury and illness case estimates in proportion to the difference between the SOII and QCEW employment estimates. For crops, the SOII estimate of 413,800 employed persons must be multiplied by 1.2838 to bring it up to the QCEW estimate of 531,245 employed persons. If we similarly inflate the number of SOII-reported injury and illness cases, the result is 25,291 cases. The same procedure applied to animal farms yielded 17,479 cases. The second methodological adjustment pertains to the QCEW underestimate of employment in agriculture.
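The first adjustment is simple proportional scaling; a minimal sketch, using the 2011 figures quoted above:

```python
# First adjustment from the text: inflate SOII case counts in proportion to
# the QCEW/SOII employment ratio. All inputs are the 2011 figures cited above.

def scale_cases(soii_cases: int, soii_employment: int, qcew_employment: int) -> int:
    """Scale reported cases by the ratio of QCEW to SOII employment."""
    factor = qcew_employment / soii_employment
    return round(soii_cases * factor)

crop_cases = scale_cases(19_700, 413_800, 531_245)    # factor ~1.2838
animal_cases = scale_cases(12_400, 163_600, 230_610)  # factor ~1.4096
print(crop_cases, animal_cases)  # -> 25291 17479
```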

The QCEW is likely to underestimate the number of employees in all industries, but especially in agriculture. In all industries, employers have an incentive to underreport numbers of employees because greater numbers result in higher total payments for both unemployment and workers' compensation insurance. This incentive is especially strong in agriculture because significant numbers of workers are undocumented (roughly 53% on crop farms). It is likely that undocumented workers are much less likely than documented workers to apply for unemployment compensation. In addition, our estimate of the undercount is likely affected by varying UI statutes across states. Legal requirements on employers are not as strict for farms as for other industries. In most states, UI applies only to farms with 10+ employees. BLS recognizes that the QCEW has limitations in measuring agricultural employment: "the QCEW program does provide partial information on agricultural industries…". We therefore sought to adjust the QCEW estimates upward to reflect employment undercounting. We could not find QCEW undercounting estimates for agriculture in scientific journals. We used alternative QCEW data indicating that 0.2 million employees, out of a total of 1.4 million, were omitted from published QCEW tables. These data suggested that the QCEW estimates on which we rely missed 14.29% of workers. This implies that the observed figure should be multiplied by 1/(1 − 0.1429), or 1.1667, to yield the adjusted figure of 1.4 million employees. The next adjustment used data from the BLS's Current Population Survey (CPS), provided by Steven Hipple. We calculated an adjustment factor for the expected number of cases as the ratio of all CPS participants to the subset who are wage and salary workers.

For CPS crop workers, of the total 966,000 participants, 634,000 are wage and salary workers; thus our adjustment factor is 966,000/634,000 = 1.5237. Our crop estimate from above, which accounted for employees on farms with <11 employees as well as the QCEW underestimate of all agricultural workers, was multiplied by 1.5237, yielding 44,959 cases. This estimate relied on the assumption that the case rate for farm owners and family members was the same as for wage and salary workers. A corresponding adjustment factor was also applied to animal production cases. Employers and employees may deliberately or carelessly fail to report an injury or illness. We refer to this as a behavioral rather than an institutional cause. Employers' incentives for underreporting may include a desire to reduce workers' compensation insurance premiums, whereas employees may fear that reporting an injury will jeopardize their employment or may not be aware that they should report it. The extent of willful and negligent non-reporting is unknown, although there are estimates. An earlier review of the literature suggested an 11% to 59% rate for the SOII and a 28% to 75% rate for occupational conditions eligible for workers' compensation coverage. More recent studies, described below, have generated estimates within these ranges. According to Boden and Ozonoff's analysis of six states, the SOII missed 27% to 57% of cases due to willful and negligent underreporting. For Michigan, Rosenman et al. estimated that the SOII missed 67.6%. Bonauto et al. analyzed data from ten states and found that 23% to 53% of cases were missed by the workers' compensation system. Lakdawalla, Reville, and Seabury estimated that workers' compensation missed from 39% to 74% of cases in their most recent years of analysis.

These two recent workers' compensation studies therefore suggest a range from 23% to 74%. But if workers' compensation systems are more complete than the SOII, then the SOII likely missed more cases than previous estimates suggest. Following two earlier studies, we assumed an underreporting rate of 40%. Our sensitivity analysis allowed for a lower bound of 27% and an upper bound of 57%, following Boden and Ozonoff. These might be low estimates given that such a high percentage of employees are undocumented. But our estimates assumed that undocumented workers reported cases at the same rate as BLS-SOII workers, and the latter likely contain a high percentage of documented workers precisely because undocumented workers are less likely to report injuries and illnesses. We assumed, in other words, that undocumented workers reported as frequently as documented workers before we took willful and negligent non-reporting into account. The 40% underreporting rate corresponds to a multiplication factor of 1/(1 − 0.40) = 1.667. For crop farms, this yielded 44,959/(1 − 0.40) = 74,932 cases. The same factor was applied to the animal farm estimate. We also conducted a sensitivity analysis in which key assumptions were altered and new estimates were generated. These altered assumptions were organized into five scenarios, each with one lower and one upper bound. The first scenario addressed the assumption that farms with <11 employees experienced the same case rate as farms with 11+ employees. This scenario used SOII data on 2011 case rates for farm establishments with 11–49 employees, 50–249 employees, 250–999 employees, 1,000+ employees, and all sizes combined. The SOII data display an inverted U-shape: establishments with the fewest and the greatest numbers of employees have the lowest rates, and establishments with 50–249 and 250–999 employees have the highest rates.
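Putting the crop-farm adjustments together with the underreporting factor, the chain of calculations can be sketched as follows; small differences from the figures in the text arise from intermediate rounding.

```python
# Crop-farm adjustment chain described in the text (2011 figures).
soii_cases = 19_700                       # SOII injury/illness estimate
employment_factor = 531_245 / 413_800     # SOII -> QCEW employment (~1.2838)
undercount_factor = 1 / (1 - 0.1429)      # QCEW misses ~14.29% of workers (~1.1667)
self_employed_factor = 966_000 / 634_000  # all CPS workers / wage-and-salary (~1.5237)
underreport_factor = 1 / (1 - 0.40)       # assumed 40% willful/negligent underreporting

step1 = soii_cases * employment_factor    # small farms included (~25,291)
step2 = step1 * undercount_factor         # QCEW undercount applied
step3 = step2 * self_employed_factor      # owners/family included (~44,959)
final = step3 * underreport_factor        # underreporting applied (~74,932)
print(round(step3), round(final))  # close to the paper's 44,959 and 74,932
```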
For the lower bound, we used the ratio of the rate for establishments with 11–49 employees to the mean rate for all establishments. For crops this ratio was 4.8/5.5. The mean rate for all establishments was in the denominator because it corresponded to our assumption that farms with <11 employees had the same rate as farms with 11+ employees. For the upper bound, the ratio was the highest rate to the mean for all establishments. For crops, this ratio was 6.4/5.5. Because these adjustments were derived directly from injury and illness rates, they did not apply to the full QCEW employment multiplication factors of 1.2838 and 1.4096 for crop and animal farms. For example, for the lower bound for crops, we used 4.8/5.5 = 0.8727, or 87.27% of the original estimate for cases from farms with <11 employees. The QCEW employment was 28.38% more than the SOII employment, so the 87.27% was applied to the 28.38% only, and the adjustment factor was 1 + 0.2838 × 0.8727 = 1.2477. The second scenario applied to the assumption that the QCEW missed 14.29% of worker employment, with an adjustment factor of 1/(1 − 0.1429), or 1.1667. This 14.29% was drawn from the 2011 estimate of the QCEW employment undercount. For the second scenario, we instead used QCEW estimates from 2010 and 2009.
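The scenario-one lower-bound factor for crops can be reproduced directly:

```python
# Scenario 1 lower bound for crops: farms with <11 employees are assumed to have
# the 11-49-employee case rate (4.8) instead of the all-size mean rate (5.5).
ratio = 4.8 / 5.5               # ~0.8727 of the originally assumed small-farm cases

# The 87.27% applies only to the 28.38% employment increment that was added
# for small farms, so it replaces the original factor of 1.2838:
adjustment = 1 + 0.2838 * ratio
print(round(adjustment, 4))     # -> 1.2477
```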

In 2010, the estimate was 15.38%, with an adjustment factor of 1.1818. In 2009, the estimate was 8.3%, with a multiplication factor of 1.0909. The third scenario involved the assumption that case rates were the same for farmers and family members as for employees. The preferred estimate above used employment data from the CPS; for example, for CPS crop workers, of the total 966,000 employed, 634,000 were wage and salary workers, and the corresponding adjustment factor was 966,000/634,000 = 1.5237. Steven Hipple at the BLS provided us with standard errors and 90% confidence intervals for each CPS mean employment figure. Our interest, however, centered on the ratio of means. The standard error of a ratio requires information on the covariance between the numerator and denominator, which we did not have. We therefore applied the 90% confidence intervals to numerators and denominators simultaneously. For example, for the upper bound of the crop ratio, we added the upper limit to 966,000 and subtracted the lower limit from 634,000. For the lower bound, we subtracted the upper limit from 966,000 and added the lower limit to 634,000. For the lower bound in crops, for example, the calculation yielded a ratio of 1.3299, and 1.3299/1.5237 = 87.28% of the preferred estimate. Our approach estimated the undercount of nonfatal occupational injuries and illnesses on crop and animal farms utilizing data from the SOII, QCEW, and CPS and assumptions from the literature. Whereas the SOII estimated 32,100 cases in 2011, we estimated 143,436, indicating that the SOII missed 77.6% of cases. A sensitivity analysis suggested the percentage missed by the SOII ranged from 61.5% to 88.3%. The reasons for this undercount are straightforward and, for the most part, readily acknowledged by BLS. We refer to these as institutional causes of the undercount. First, the SOII explicitly excludes farms with <11 employees and all self-employed farmers and family members.
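The bounding procedure for the ratio of CPS means can be sketched as follows; the confidence-interval half-widths shown are hypothetical placeholders, since the actual CPS limits are not reproduced in the text.

```python
# Bounding a ratio of survey means by applying CI limits to the numerator and
# denominator simultaneously. The half-widths below are hypothetical
# placeholders; the actual 90% CI limits for the CPS figures are not given here.
num, den = 966_000, 634_000     # all CPS crop workers / wage-and-salary workers
m_num, m_den = 30_000, 20_000   # hypothetical 90% CI half-widths

upper = (num + m_num) / (den - m_den)  # most favorable to a large ratio
lower = (num - m_num) / (den + m_den)  # most favorable to a small ratio
assert lower < num / den < upper       # the point estimate sits inside the bounds

# The text reports a lower-bound ratio of 1.3299, i.e. 87.28% of 1.5237:
print(round(1.3299 / 1.5237, 4))  # -> 0.8728
```

Because the true covariance between numerator and denominator is unknown, these bounds are conservative: they assume the errors move in the worst possible directions at once.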
Second, the SOII, QCEW, and CPS all acknowledge data-gathering problems in agriculture due to the transient nature of the work and the extent of employment accounted for by undocumented workers. These institutional causes account for nearly one-half of the undercount. Third, there is considerable evidence that workers and employers in all industries underreport cases due to willfulness and negligence. This third cause, which we label behavioral, accounts for a little over one-half of the undercount. The QCEW is not the only data set with information on agricultural employment; the CPS and the Census of Agriculture also generate estimates. We preferred the QCEW because it serves as the basis for estimates in the SOII. It is nevertheless useful to compare employment estimates. The QCEW estimates 531,245 and 230,610 employees for crop and animal farms, respectively, in 2011. In the Current Population Survey for 2011, for private sector employees, these numbers were 626,000 for crop farms and 447,000 for animal farms. Daniel Carroll recently analyzed Census of Agriculture data from 2007 and estimated 1,358,020 farm workers on crop farms and 434,953 on animal farms. But none of these estimates is for FTEs, and agriculture is well known for transient and part-time work. Thus, each of these data sets, including the QCEW, has deficiencies. The CPS and Census of Agriculture data suggest an employment undercount by the QCEW.

Yield improvement is not the only form of varietal improvement

During the early 1980s, the adoption of drip expanded, and local dealers and personnel developed the skills to design and improve the systems. Currently, much of the design is done at the dealer level, and dealerships often have sales engineers who can design sophisticated drip systems. Some large farms are able to design their own systems with the help of professional designers. Advantages associated with the introduction of drip in high-value crops in California include reduced chemical use and the replacement of unskilled laborers with a smaller number of more highly skilled employees. Continuous processes of adaptation and improvement of the technology reduced the fixed cost of drip systems, and the effectiveness of use increased because of "learning-by-using" by farmers. Some farmers combine drip with computer technology to allow irrigation activities to respond to environmental conditions. This version of precision agriculture has been found in some areas to increase yield and reduce water use significantly. In the future, the combination of drip and sprinkler irrigation with automated computerized systems that use weather and other data to adjust timing and flow will almost certainly become more popular. Public investment in the provision of weather information, in the form of the California Irrigation Management Information System (CIMIS), has given impetus to the development of computerized and automated irrigation systems.

About 100 weather stations have been established throughout the state to provide detailed weather information via telephone, e-mail, and other modes of communication. Water districts, irrigation consultants, and growers have gradually joined the CIMIS system, and the annual benefits are estimated at about 20 times its cost. The introduction of this public weather system has reduced the cost of information to farmers and resulted in a proliferation of consultants who use the data, develop software, and provide farmers with irrigation advice. These consultants have gradually changed the way California agriculture operates. CIMIS has also provided a means to increase productivity and incomes; in the future, the use of consultants, computers, weather stations, and more precise irrigation is likely to expand beyond the regions and crops in which they are currently used. The California experience suggests that immense benefits are associated with the provision of knowledge that enables the introduction and improvement of technologies. Public policies that support the provision of infrastructure and favorable economic conditions are crucial for technological development. However, past policies involving the transfer of water were not particularly conducive to increased irrigation efficiency. Water markets may offer an opportunity to transfer water away from agriculture; on the other hand, they may also provide a significant impetus for improving water use efficiency. As water markets develop in response to water scarcity, we may expect to see an increase in the adoption of modern irrigation practices and more rapid development of new, improved practices.

In many cases in the past, the expansion of crop acreage was slowed by labor availability and the costs associated with harvesting. The complexity of fruit and vegetable crop harvesting, partly related to the fragility of the produce, has combined with relatively small markets for equipment to make the introduction of harvesting equipment slower for these crops than for some major field crops. For many fruit and vegetable crops, mechanical harvesters were not introduced or significantly adopted until the 1960s or 1970s, and a range of significant commodities continue to be harvested by hand because mechanical harvesting technology remains unavailable or costly. Available data on the introduction and adoption of mechanical harvesters are sketchy and incomplete. Relatively good information is available on the cotton harvester and the tomato harvester, the latter of which received particular attention from economists because it was controversial. University research has played a major role in developing harvesting technology for tomatoes, wine grapes, and lettuce. Economic considerations often delayed the introduction of such technologies once they were available, but also helped promote their adoption later. The processing tomato industry, in particular, was dependent on the Bracero Program, which was terminated in 1965. Introduced in the post-World War II period, the program contributed to the expansion of labor-intensive crops in California and to the transfer of production of major vegetable crops, especially tomatoes, from other states to California. That same year a mechanical tomato picker was introduced, which coincided with the introduction of a new variety suited for mechanical harvesting. The design for the tomato harvester was devised by a private company, based on a design developed at the University of California at Davis.
The machines worked better with new varieties of processing tomatoes bred especially for mechanical handling, which were also developed by the University. Following the cancellation of the Bracero Program, adoption of the tomato harvester was remarkably swift; by 1968, 95 percent of California's processing tomatoes were mechanically harvested. Not only was the technology beneficial to growers, reducing labor uncertainty and decreasing costs, it also improved the lot of consumers by reducing the cost of tomato products. Critics charged, however, that the introduction of the tomato harvester negatively affected farm workers.

The case is not altogether clear. California's processing tomato industry today employs many more workers than it did when the tomato harvester was first introduced. If the harvester were banned, the California processing tomato industry would be so adversely affected that the effects on workers would be clearly negative. Such longer-term consequences of the introduction of so-called labor-saving technology have not always been fully appreciated. The total impact of harvest mechanization on farm workers depends on both the effect on labor intensity and the effect on the scale of production. The introduction of the mechanical lettuce harvester also seemed to be a response to labor-supply problems. With the advent of the lettuce harvester, however, labor demand in both harvesting and postharvest activities declined. On the other hand, productivity increased significantly. Because owners needed more commitment and responsibility from workers, they began contracting with unions, and contracts brought workers higher pay and longer employment, although in many fewer jobs. In the year following the Bracero Program, illegal immigration of farm workers to California increased. The transaction costs associated with the recruitment of seasonal labor during the Bracero Program, and especially afterwards, stimulated the use of farm labor contractors (FLCs), who take responsibility for the recruitment of laborers. The use of FLCs was further stimulated by the introduction of the Immigration Reform and Control Act of 1986, which was intended to reduce the flow of illegal immigrants and changed the risk to farmers of directly employing potentially undocumented workers. Although the literature raises doubts about the effectiveness of the changing regulations in controlling the flow of immigrants, the rules have affected the nature and reliability of the agricultural labor force as well as the costs of labor.
Such factors are likely to continue to be an incentive for farmers to seek labor-saving alternatives. Harvesting technology has played a major role in the California cotton industry, as documented by Musoke and Olmstead. California's cotton industry expanded rapidly in the immediate post-World War II years, with the adoption of mechanical harvesting being a major reason. California cotton growers adopted mechanical harvesters more rapidly and more completely than farmers in other states. Musoke and Olmstead attribute this rapid adoption to factors such as the relatively large size of California farms and dry weather during the harvest season, factors that may also have contributed to California's relatively rapid adoption of other mechanical technologies. By 1960, over 90 percent of California's cotton was mechanically harvested; by 1965, virtually 100 percent. Mechanical harvesting and bulk handling equipment have been important innovations in California's horticultural industries. In many fruit and vegetable industries, especially those whose products were destined for processing, harvesting innovations came in the 1960s or earlier and became standard technology by the 1970s. For instance, Zahara and Johnson reported 100 percent mechanical harvesting in 1978 for a variety of processing vegetables, including snap beans, carrots, sweet corn, onions, green peas, and potatoes.

However, none of the fresh or processing fruits used significant mechanical harvesting except prunes, dates, and tart cherries. In fresh vegetables, mechanical harvesting was important only for carrots and potatoes. Mechanical harvesters for wine grapes were introduced in California in the late 1960s; by 1974 between 5 and 10 percent of the crush was mechanically harvested, and by 1978, 20 percent. Currently, perhaps half of the crush is mechanically harvested. On the other hand, by 1975 virtually all almonds, pecans, filberts, and walnuts, most of them produced in California, were mechanically harvested. Other examples of genetic improvement have been entirely the result of local efforts. California's almond yields per acre roughly tripled between 1950 and 1990 as a result of a combination of improved varieties that allow higher planting densities and other improvements in technology. Other cost-saving improvements, such as improved irrigation methods and mechanical harvesting, together with overall quality enhancement, have helped spur the growth of the almond industry in California to the point where it now dominates the world market. Similar developments in technology and management have been an important impetus in many of California's other "Cinderella" industries, including other nuts, fruits, and vegetables. In several industries, varietal improvement has brought improvements in quality, though sometimes at the expense of yield, or an increase in the number of varieties available, which offers more choice for consumers or an extension of the season for short-season fruits. Table grapes are a good example. In 1953 there were only three important table grape varieties. By 1993, eight specific table grape varieties were planted on over 2,000 acres each; several of these are superior-quality seedless varieties.
The extension of the season and the range of varieties are thought to have provided an important stimulus to demand for fresh grapes. California's grape industry has been devastated in the past by pests such as phylloxera, and it is currently threatened by Pierce's disease, transmitted by the glassy-winged sharpshooter. The use of resistant rootstocks, a form of genetic improvement, was the solution for phylloxera, and genetic resistance is seen by many as the long-term solution for Pierce's disease as well. Technological regulation is likely to become more important over time as elements of society become more concerned about the consequences of today's production methods for issues such as food safety, environmental contamination, and animal welfare. Technological regulation attempts to exercise control over production methods so as to safeguard product quality, worker safety, animal welfare, and the environment. Technological regulation may also allow one group of producers to profit at the expense of others, and perhaps at the expense of society as a whole. An important example of this has been the regulation of variety choices in the California cotton industry under a law introduced in 1925, which restricted production to a single variety of Acala cotton, supposedly to promote demand. Constantine, Alston, and Smith showed that evidence of an important stimulus to demand is lacking; moreover, the one-variety law had a depressing effect on yield in some parts of the San Joaquin Valley, while growers in other parts of the Valley benefited both from having suitable planting material for their conditions and from a higher price for their cotton. Overall, the beneficiaries outnumbered the losers, and the law remained in force for over 50 years, until a 1978 amendment opened the industry to private breeders. Barriers to the development and adoption of new technologies include market, social, and other economic factors as well as regulatory constraints.
Taken together, these factors present substantial barriers to the development and adoption of genetically engineered crop varieties generally, and for California's specialty crops these barriers may preclude access to new varieties developed by genetic engineering. The same types of factors may leave many California crops as orphans with respect to conventional pest control technologies as well: for many such crops the market is too small and the research, regulatory, and other costs are too large to allow profitable development of new, specific pest-control technologies. To a large extent, the ability of California farmers to grow more than 200 different crops stems from their ability to develop and apply technologies enabling plants to resist a multitude of diseases and pests that prevent them from being grown elsewhere.

Removal of riparian vegetation to control PD is being hotly debated in California

Brown et al. estimated the optimal length of a barrier for an 800-foot-long row of grapes originating at a riparian zone. They found that the length of the barrier declined with the effectiveness of the barrier crop and with the profits of grapes relative to the barrier crop. The average profit per acre of grapes without PD was $5,230, and the baseline return from Christmas trees was $1,764 per acre. Barriers characterized by effectiveness parameters of .05, .25, and 1.0 require the grower to plant strips only 69, 21, and 12 feet wide, respectively, while reducing average profit per acre from $5,230 to $4,856, $5,127, and $5,175, respectively. Without any barrier, the average profit per acre declines to $3,054, as most of the rows near the riparian zone will be decimated. Thus, a barrier crop with .25 effectiveness allows the grower to earn 98 percent of the profit earned in the case of no PD, while an effectiveness of .05 leads to a loss of about 9 percent of profits, and, without a barrier, close to 40 percent of average profits are lost to PD. Brown et al. also considered a mixed strategy that allows removal of riparian vegetation in addition to riparian buffers. Their analysis assumed a baseline price of $1,489 per ton of grapes. The U.S. Fish and Wildlife Service, which has jurisdiction over riparian areas, opposes clearing vegetation.

The results of the simulations suggest that partially removing the host vegetation is sub-optimal regardless of society's willingness to pay for riparian habitat. As the price of grapes rises, the break-even social value of riparian vegetation increases linearly. At the baseline price of $1,489 per ton, the removal of a 6-foot by 100-foot strip of riparian vegetation would be socially optimal if the strip provided less than $5,481 in environmental benefits. Alternatively, if the riparian zone is maintained, its implicit value per strip is above $5,481. Recent research focuses on modification of the riparian zone, replacing plants that host the bacterium and its vector while maintaining a riparian zone. This insect, and the PD it carries, is not just a threat to raisin, table, and wine grapes; it also has the potential to spread the disease to other important agricultural commodities. A joint state-federal plan has dedicated a total of $36 million to eradicate and prevent the spread of the glassy-winged sharpshooter (GWS), a new arrival in California. The federal government will allocate $22 million to augment state and private agricultural industry efforts to control the spread of the GWS and to support research to find methods to cure PD. The GWS is active in warmer climates. It has already decimated most of the grapevines in Temecula in Southern California, and it is a problem in Los Angeles and Kern counties. Purcell's simulations predicted that the GWS will spread to 15 grape-growing counties, including Fresno and Tulare, which produced over $500 million worth of grapes in 2000.

The damage potential of PD spread through the GWS can reach billions of dollars over time. The GWS transmitted PD from oleanders and other host crops, especially citrus, and it is now being controlled by spraying pesticides in host citrus orchards adjacent to grapevines. The bacterium Xylella fastidiosa affects other crops besides grapes, including almonds, peaches, and oleander. Brown et al. suggest that the net present value of potential damage is greater than $2 billion. Ongoing research aims to find biological control and biotechnology solutions to these pests, but for now the solution is the application of chemical pesticides. A driving factor behind pesticide regulation in the United States is the desire to protect consumers from harmful residues on food. The Food Quality Protection Act (FQPA) was unanimously passed by the U.S. Congress in 1996 and hailed as a landmark piece of pesticide legislation. It amended the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Federal Food, Drug, and Cosmetic Act (FFDCA), and it focused on new ways to determine and mitigate the adverse health effects of pesticides. The FQPA differs from past legislation; it is based on the understanding that pesticides can have cumulative effects on people and that policy should be designed to protect the most vulnerable segments of the population. Recent research described below has investigated some of the impacts that the FQPA's provisions, many of which have yet to be fully implemented, may have on California growers and consumers. Pesticides are also regulated to mitigate impacts on worker health and the greater environment. Of particular interest to many California growers is the pending ban on methyl bromide, an extremely effective soil fumigant that is being phased out because of its impact on the ozone layer.

The publication of the National Research Council report Pesticides in the Diets of Infants and Children showed that pesticide residues have disproportionate effects on children. Children eat and drink more as a percentage of their body weight than adults; they also consume fewer types of food. These dietary differences account for a large part of the exposure differences between adults and children. The committee also found that pesticides have qualitatively different impacts on children because children are growing at such a rapid pace. This concern for the differential impact pesticides have on children is reflected in regulatory changes required by the FQPA. For instance, the “10X” provision of the FQPA requires an extra ten-fold safety margin for pesticides that are shown to have harmful effects on children and women during pregnancy. The FQPA has also resolved the “Delaney Paradox” created by the Delaney Clause of FFDCA. Prior to FQPA, the Delaney Clause prohibited the use of any carcinogenic pesticide that became more concentrated in processed foods than the tolerance for the fresh form. This was supposed to protect consumer health, yet it had the paradoxical effect of promoting non-carcinogenic pesticides that created other health risks for consumers. FQPA standardizes the tolerances for pesticide residues in all types of food, and looks at all types of health risks. The federal Environmental Protection Agency (EPA) must now ensure that all tolerances are “safe,” defined as “a reasonable certainty that no harm will result from aggregate exposure to the pesticide”. Historically, pesticide exposure was regulated through single pathways, either through food, or water, or dermal exposure. Now the EPA must consider all pathways of pesticide exposure, including cumulative exposure to multiple pesticides through a common mechanism of toxicity.
This means that even though pesticides may be sufficiently differentiated that they are used on different crops to control different pests, they can have similar health effects on people. The result is that in some instances, pesticide tolerances for seemingly different insecticides must be regulated together based on their cumulative effects. When FQPA was first signed into law, 49 organophosphate (OP) pesticides were registered for use in pest control throughout the United States, and accounted for approximately one third of all pesticide sales.

OP insecticides are highly effective insect control agents because of their ability to depress the levels of cholinesterase enzymes in the blood and nervous system of insects. It has been suggested that while dietary exposure to a particular OP may be low, the cumulative effects of simultaneous exposure to multiple OP insecticides could cause some segments of the U.S. population to exceed acceptable daily allowances. Reducing the risk from these aggregate effects is specifically addressed in the FQPA and is one of the reasons the EPA has chosen OP pesticides for the first cumulative risk assessment. Due to their popularity and widespread use, many in the agricultural community are worried that FQPA implementation will result in increased restrictions on OP pesticides. By the time EPA released the Revised OP Cumulative Risk Assessment in 2002, 14 pesticides had already been canceled or proposed for cancellation, and 28 others had had considerable risk mitigation measures taken. Risk mitigation may include limiting the amount, frequency, or timing of pesticide applications; changes in personal protective equipment requirements; ground/surface water safeguards; specific use cancellations; and voluntary cancellations by the registrant. Economic theory suggests that these increased restrictions and cancellations from the eventual implementation of FQPA will result in a reduced supply of commodities currently relying on OP pesticides for pest control. This will result in higher prices for consumers and a lower quantity sold. In order to estimate the possible welfare effects on the state of California, University of California researchers conducted a study on the effects of a total OP pesticide ban on 15 crops. The estimated price and quantity changes are presented in Table 1. Results of the economic analysis suggest that the total loss to producers and consumers in California from banning all OP use will be approximately $200 million.
There is significant uncertainty as to the final level of OP restrictions; this is only an order-of-magnitude estimate of the effects. However, these effects represent only about 2 percent of the total revenue generated by the 15 crops studied in California. While the overall effects seem small, they may be more intense in some segments than others. The researchers found that the degree of impact rests on the effectiveness of the alternative pest control strategies producers have to choose from when faced with an OP ban. In some cases, OP pesticides have no close substitute, and cancellation will have larger effects. For instance, the losses in broccoli, one of the crops most sensitive to an OP ban, are driven by the lack of an alternative insecticide to treat cabbage maggot.

As illustrated above, it is generally true that removing a pesticide from the production process will result in an increase in the price of the treated commodity. If consumers respond to the increased prices by reducing consumption of the affected fruits and vegetables, they may suffer a loss of health benefits associated with the change in consumption.
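The scale of the $200 million estimate can be sanity-checked against the 2 percent revenue share quoted above (a back-of-the-envelope check, assuming the two figures refer to the same 15 crops):

```python
# Sanity check: the ~$200 million welfare loss is stated to be about
# 2 percent of the total revenue of the 15 crops studied.
estimated_loss = 200e6      # dollars
share_of_revenue = 0.02     # 2 percent

implied_revenue = estimated_loss / share_of_revenue
print(f"Implied total revenue of the 15 crops: ${implied_revenue / 1e9:.0f} billion")
```

The implied revenue base of roughly $10 billion is consistent with the claim that the aggregate welfare effect, while large in absolute terms, is small relative to the crops’ total value.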

Scientific evidence is accumulating which shows a protective effect of fruits and vegetables in the prevention of cancer, coronary heart disease, ischemic stroke, hypertension, diabetes mellitus, diverticulosis, and other common diseases. The level of protection suggested by these studies is often quite dramatic. A recent review of several studies found that “the quarter of the population with the lowest dietary intake of fruits and vegetables compared to the quarter with the highest intake has roughly twice the cancer rate for most types of cancer”. Negative health outcomes from a change in dietary behavior may offset the direct health benefits of a pesticide ban, such as reduced exposure to carcinogenic residues on produce. A recent study by Cash investigates the possible magnitude of such offsetting health effects. Using data on what over 18,000 people eat and previous findings on how people respond to changes in the price of fruits and vegetables, the author simulated some of the health effects of a small increase in produce prices. Specifically, Cash examined the effects of a one-percent increase in the price of broad categories of fruits and vegetables on coronary heart disease and ischemic stroke, two of the most common causes of death in the United States. The results are reported in Table 2. For a one-percent increase in the average price of all fruits and vegetables, the simulations indicate an increase of 6,903 cases of coronary heart disease and 3,022 ischemic strokes. In order to offset these 9,925 cases in a population of 253.9 million people, a pesticide action would have to prevent 1 in 25,580 cancers. This is almost four times as protective as the mean risk of pesticide uses that were banned between 1975 and 1989.
Although these results cannot be applied directly to most individual pesticide bans—which typically affect the price of only a few crops—the study shows that pesticide regulations that reduce relatively small risks at high cost may actually have a negative impact on overall consumer health. Furthermore, the research also suggests that low-income consumers may be hardest hit by the negative health impacts of price-induced dietary changes, whereas high-income consumers tend to reap the greatest direct benefits from reduced residue exposures. Economic theory tells us that regulatory intervention is justified in the presence of market failures. In the case of pesticide residues on food, the two most salient sources of failure are externality and incomplete information.
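The offsetting-risk arithmetic from the Cash simulation above can be reproduced directly from the figures quoted in the text:

```python
# Reproduce the offsetting-health-effect arithmetic quoted above.
chd_cases = 6_903        # additional coronary heart disease cases
stroke_cases = 3_022     # additional ischemic strokes
population = 253.9e6     # population used in the study

total_cases = chd_cases + stroke_cases      # 9,925 additional cases
one_in_n = population / total_cases         # cancers a ban must prevent
print(f"{total_cases} cases -> must prevent ~1 in {one_in_n:,.0f} cancers")
```

The ratio of roughly 1 in 25,580 is the break-even point: a pesticide action preventing fewer cancers than this would, on these figures, do net harm through price-induced dietary change.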

An easy test of the degree of follow-the-crop migrancy is to check turnover in a farm labor center

Employment relationships on California farms are also different from those on stereotypical U.S. family farms. Unlike family farmers, who do most of the farm’s work with their own hands every day, the managers responsible for most of California’s labor-intensive crops rarely hand-harvest themselves. Indeed, many are unable to communicate with the workers in their native languages: most managers are U.S.-citizen non-Hispanic whites, while most farm workers are Hispanic immigrants. A familiar adage captures many of the differences between California agriculture and mid-western family farms: California agriculture is a business, not a way of life. Production and employment are concentrated on the largest 5 percent of the state’s farms, and in most commodities, the 10 largest producers account for 30 to 50 percent of total production. However, there are many small farmers and small farm employers, which tends to obscure the degree of concentration. Dole Food Company is probably the largest California farm employer, issuing over 25,000 W-2 employee-tax statements annually. However, Dole does not show up in state employment records as a farm employer. Dole’s Bud of California vegetable growing operation is one of the largest employers in Monterey County, and is considered in the business of selling Groceries & Related Products, not farming. Sun World International is also classified in Groceries & Related Products, as is Grimmway Farms.

Similarly, Beringer Blass Wine Estates is classified as a Beverages manufacturer, as are Giumarra Vineyards Corp. and Ironstone Vineyards. Many of these non-farm operations use custom harvesters and labor contractors to bring workers to their farms, and they are required to report their employment and wages to EDD. During the 1990s, when average annual farm employment rose to a peak of 413,000 in 1997, so did the percentage of workers on farms whose employers were non-farmer intermediaries—usually labor contractors, who are classified as farm services by EDD. The percentage of workers on farms whose employer is a non-farmer intermediary is about 45 percent, up sharply from less than 30 percent in the mid-1980s.

Employment in California agriculture is highly seasonal. The most labor-intensive phase of production for most commodities is harvesting, and the peak demand for labor shifts around the state in a manner that mirrors harvest activities. Harvesting fruits and vegetables occurs year-round, beginning with the winter vegetable harvest in Southern California and the winter citrus harvest in the San Joaquin Valley. However, the major activity during the winter months between January and March is pruning—cutting branches and vines to promote the growth of larger fruit. Pruning often accounts for 10 to 30 percent of production labor costs but, because pruning occurs over several months, there are fewer workers involved, and many pruners are year-round residents of the area in which they work. Harvesting activity moves to the coastal plains in the second quarter of April-June, as lemons and oranges are harvested in Southern California and vegetable crops are thinned and then harvested in the Salinas Valley of Northern California.

June marks the second highest month of employment on the state’s farms, as workers harvest strawberries and vegetables as well as early tree fruits, including cherries and apricots; melons and table grapes are harvested in the desert areas. Other workers thin peaches, plums, and nectarines, remove leaves in some vineyards, and thin large-acreage crops such as cotton. Farm employment peaks in September, during the third quarter, reflecting the harvests of crops from Valencia oranges to tomatoes to tree fruits in the Central Valley of the state. However, the single largest labor-intensive harvest involves raisin grapes—some 40,000 to 50,000 workers are hired to cut bunches of 20 to 25 pounds of green grapes and lay them on paper trays to dry into raisins. The workers typically receive $0.20 a tray, and the contractor who assembles them into crews of 30 to 40, and acts as their employer, receives another $0.05 a tray. During September, there is something of an early morning traffic jam as vans ferry workers to fields and orchards, and employers who want to wait as long as possible before harvesting to raise the sugar content of their grapes worry that not enough workers will show up. During the fourth quarter, harvesting activities slow, and after the last grapes, as well as olives and kiwi fruit, are harvested in October, most seasonal farm and food processing workers are laid off. Most workers remain in the areas in which they have worked—most workers are not migrants who follow the ripening crops—but many were born in Mexico, and some return to Mexico with their families for the months of December and January. If workers were willing to follow the ripening crops, and to switch between citrus and grapes, they could have harvest work for 6 to 8 months a year. But few workers migrate from one area to another, and few switch crops within an area.
In the mid-1960s, when migrancy was at its peak, a careful survey of farm workers found that only 30 percent migrated from one of California’s six major farming regions to another.
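A rough sketch of the raisin-harvest piece-rate arithmetic above, assuming a 2,000-pound ton and taking the midpoint of the 20-25 pound tray weights quoted (both are illustrative assumptions, not figures from the text):

```python
# Rough piece-rate arithmetic for the raisin grape harvest, using the
# rates quoted above. Tray weight and ton size are assumptions.
lbs_per_ton = 2_000               # assumed short ton
tray_weight = (20 + 25) / 2       # midpoint of the 20-25 lb range
worker_rate = 0.20                # dollars per tray to the worker
contractor_rate = 0.05            # dollars per tray to the contractor

trays_per_ton = lbs_per_ton / tray_weight
cost_per_ton = trays_per_ton * (worker_rate + contractor_rate)
print(f"~{trays_per_ton:.0f} trays per ton, ~${cost_per_ton:.2f} harvest cost per ton")
```

On these assumptions, a worker must fill roughly 89 trays to harvest a ton of green grapes, for a combined worker-plus-contractor piece-rate cost on the order of $22 per ton.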

A 1981 survey of Tulare County farm workers found only 20 percent had to establish a temporary residence away from their usual home because a farm job took them beyond commuting distance, and surveys of California farm workers in the 1990s found that fewer than 12 percent followed the crops. A 2000-01 survey of 300 farm workers found 19 percent who had moved in the previous two years to find farm work; fewer than 25 percent planned to move in the current year to find a farm job. There are many reasons why most farm workers stay in one area of California: the harvesting of many fruits and vegetables has been stretched out for marketing and processing reasons; the availability of unemployment insurance makes migration less necessary; and some farm workers with children who are not likely to follow them into the fields realize that migrancy makes it very difficult for children to obtain the education needed to succeed in the U.S. If follow-the-crop migrants filled the center, workers and families would be constantly arriving and departing as they moved on to another job in a distant area. In fact, most migrant centers fill as soon as they open, and keep the same tenants for the season: workers know that they can obtain services for themselves and their children, especially in the state-run centers, and it is very hard to find alternative housing if the family packed up and sought another job in the manner of John Steinbeck’s Joad family.

Until the 1940s, it was common for the wives of field workers to be employed in the packing houses that canned, froze, or dried fruits and vegetables. However, after unions pushed packing-house wages to twice field-worker levels in the 1950s and 1960s, packing-house jobs became preferred to field-worker jobs, often representing a first rung up the American job ladder for field workers.
About 40,000 workers are employed in the preserved fruits and vegetables subsection of the state’s manufacturing industry, down from 50,000 in the early 1990s.

Trends in farm and near-farm jobs are mixed. In the case of some vegetables and melons, non-farm packing and processing jobs have been turned into farm worker jobs by field packing, having workers in the field put broccoli or cantaloupes directly into cartons rather than having the crop picked by field workers and packed by non-farm workers in packing houses. In other cases, farm jobs have become non-farm jobs, as when the cutting and packing of lettuce in the field is replaced by fewer workers simply cutting lettuce, and when there are more non-farm jobs in packing plants as lettuce is cut and bagged: bagged lettuce accounts for almost 40 percent of U.S. lettuce.

Employment on California farms was expected to drop sharply in the 1960s, as the end of the Bracero program, which brought Mexicans to work in U.S. fields between 1942 and 1964, was followed by sharply rising wages and unionization—the United Farm Workers union won a 40 percent wage increase in its first table grape contract in 1966. Processing tomatoes provide an example of the sharp drop in farm worker employment as a result of labor-saving mechanization.

In 1960, a peak of 45,000 workers, 80 percent of them Braceros, hand-picked 2.2 million tons of the processing tomatoes used to make ketchup from 130,000 acres. In 2000, about 5,000 workers were employed to sort 11 million tons of tomatoes from 350,000 acres that were picked by machines. The keys to tomato harvest mechanization included cooperation between scientists and among farmers, government, and processors. Plant scientists developed smaller tomatoes that were more uniform in size, ripened at the same time, and were firm enough that the stalk could be cut, and the tomatoes shaken off, without damage. Engineers developed a machine to cut the plant, shake off the tomatoes, and use electronic eyes to distinguish red and green tomatoes and discard the green ones. Processors agreed to accept tomatoes in 12.5-ton truck-mounted tubs rather than 60-pound lugs, and the government established grading stations at which random samples were taken to determine quality and price. The cost of mechanizing the tomato harvest was relatively small—less than $1 million—and the estimated rate of return was hundreds of percent.

The rapid diffusion of tomato harvesting machines in California—none were harvested by machine in 1960, and all were harvested by machine by 1970—was expected to usher in an era of machines replacing men on farms; economists and engineers boldly predicted that, by 2000, there would be practically no jobs left for unskilled seasonal farm workers.

However, the cooperation between researchers, farmers, processors, and the government that transformed the processing tomato industry in the 1960s proved to be the exception, not the rule. Farmers remained very interested in and supportive of mechanization research during the 1970s, when there were hundreds of public and private efforts to develop uniformly ripening crops and machines to harvest them, but interest waned in the late 1970s due to rising illegal immigration and a lawsuit.
Mexico devalued the peso in 1976, and in 1977, for the first time, apprehensions of unauthorized Mexicans in the U.S. topped 1 million. Apprehensions remained at about 1 million a year until after 1982, when another peso devaluation caused them to jump by 25 percent, and the rising number of unauthorized Mexicans, many of whom were from rural Mexico and sought jobs on U.S. farms, guaranteed an ample supply of hand workers. Meanwhile, the UFW and California Rural Legal Assistance in 1979 filed a lawsuit against the University of California, charging that efforts to develop labor-saving machines were an unlawful expenditure of public funds because they displaced small farmers and farm workers. The suit asked that UC mechanization research be halted and that a fund be created to assist small farmers and farm workers equal in size to what UC earned from royalties and patents on agricultural innovations.

The suit was eventually settled by establishing a committee to review research priorities, but public and private support for mechanization research decreased, and scientists and engineers moved on to other issues. Most labor-saving research today is conducted by the private sector, and most of it is far less visible than machines replacing 90 percent of the hand harvesters, as in processing tomatoes. Precision planting and improved herbicides have dramatically reduced the need for thinning and hoeing labor. Many farmers have planted dwarf trees to increase yields, which can also reduce harvest labor needs. Much of today’s mechanization is motivated as much by non-labor reasons as by the desire to save on labor costs. For example, drip irrigation systems reduce the need for water as well as irrigator labor, and a machine harvesting wine grapes at night results in higher-quality grapes and uses less labor.

In the 19th century, U.S.
agriculture in general and California agriculture in particular were considered land-abundant and labor-short, a shortage compounded in California by the dominance of large and specialized farms. California began producing fruits in the 1870s, when the completion of the transcontinental railroad and falling interest rates encouraged a shift from grazing cattle and growing grain without irrigation to labor-intensive, irrigated fruit and vegetable farming.
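The productivity shift in the processing-tomato example above can be quantified directly from the 1960 and 2000 figures quoted:

```python
# Productivity arithmetic for the processing-tomato mechanization
# example, using the figures quoted above.
tons_1960, workers_1960, acres_1960 = 2.2e6, 45_000, 130_000
tons_2000, workers_2000, acres_2000 = 11e6, 5_000, 350_000

per_worker_1960 = tons_1960 / workers_1960    # tons handled per worker
per_worker_2000 = tons_2000 / workers_2000
yield_1960 = tons_1960 / acres_1960           # tons per acre
yield_2000 = tons_2000 / acres_2000

print(f"Tons per worker: {per_worker_1960:.0f} -> {per_worker_2000:.0f}")
print(f"Tons per acre:   {yield_1960:.1f} -> {yield_2000:.1f}")
```

Output per worker rose from roughly 49 tons to 2,200 tons, a roughly 45-fold increase, while yields per acre nearly doubled—together explaining how a fivefold-larger crop could be handled by one ninth the workforce.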

Existing inspection rules ensure that foreign and domestic meats meet the same standards

Products exempt from the mandatory COOL regulation include ingredients in a processed food item and food sold in restaurants or through the food service channel. AMS defines an ingredient in a processed food item as either “a combination of ingredients that result in a product with an identity that is different from that of the covered commodity” or “a commodity that is materially changed to the point that its character is substantially different from that of the covered commodity”. Examples of the former definition could be peanuts in a candy bar or salmon in sushi. Under this definition, a bag of frozen mixed vegetables would remain a covered commodity because it maintains its identity, but the peanuts and salmon in the earlier example would not. Examples of the latter definition include anything cooked, cured, or dried, such as corned beef briskets or bacon. These are considered functionally different products than the meat the processor began with, whereas vacuum-packed steaks or roasts retain their identity after processing and thus require mandatory labeling under COOL. COOL regulations do not affect restaurants, but have implications for nearly everyone else within the unprocessed food chain.

The law states that “the Secretary may require that any person that…distributes a covered commodity for retail sale maintain a verifiable record keeping audit trail…to verify compliance” for a period of up to two years. This includes foreign and domestic farmers and ranchers, distributors and processors, and retailers. We discuss the ramifications of this audit trail requirement for the cost of compliance below.

The cost of COOL implementation can only be estimated at this time. The major direct costs of the program include the costs of segregating and tracking product origins, the physical cost of labels, and enforcement costs. AMS itself projects that domestic producers, food handlers, and retailers will spend $2 billion and 60 million labor hours on COOL in the first year, though these figures were questioned by the GAO in a 2003 report. The GAO reports that the Food and Drug Administration has estimated that the cost of monitoring COOL for producers will be about $56 million annually. The costs of implementation for produce will likely be lower than those for meats, as some fruits and vegetables are already labeled by country of origin. From a policy perspective, whether these uncertain costs outweigh the benefits to society of the program, and the extent to which retailers, producers, and consumers will share these costs, are of equal importance. The extent to which COOL may benefit domestic producers depends on two considerations: whether country-of-origin information will induce and/or allow consumers to demand more domestic products relative to their foreign counterparts, and whether the costs of COOL implementation will be differentially higher for foreign suppliers than for domestic suppliers.
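The AMS projection above implies a rough first-year cost per labor hour (a back-of-the-envelope check on the two quoted figures, not an official estimate):

```python
# Back-of-the-envelope check on the AMS first-year COOL projection.
projected_cost = 2e9       # dollars, AMS first-year estimate
projected_hours = 60e6     # labor hours, AMS first-year estimate

cost_per_hour = projected_cost / projected_hours
print(f"Implied cost per labor hour: ${cost_per_hour:.2f}")
```

The implied figure of roughly $33 per labor hour bundles wages with the non-labor costs of segregation, tracking, and labels, which is one reason the GAO questioned the projection.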

If COOL costs foreign suppliers more to comply with than it costs domestic suppliers, the transaction costs imposed by COOL will be lower for domestic suppliers than for foreign suppliers. Even if the price elasticity of demand for foreign and domestic goods is the same, demand for foreign products will fall more than demand for domestic products, and some consumers who previously bought foreign goods will switch to buying domestic ones. This effect will be exacerbated to the extent that labels themselves affect consumers’ preferences or allow them to act upon preferences that were unsatisfied before mandatory labeling. If consumers truly prefer domestic products relative to foreign ones, all other characteristics being equal, COOL will be accompanied by increased demand for domestic goods. If this effect and the differentially higher compliance costs for foreign goods are large enough, they could theoretically offset the reduced demand for labeled goods occasioned by the transaction costs imposed by COOL. Gains to domestic producers are limited by the size of the market share claimed by foreign producers prior to the introduction of COOL, but in this case domestic producers would benefit from the regulation. Consumers could be net beneficiaries as well if mandatory labeling satisfied a preference that the market previously failed to serve. Economic theory and empirical evidence both suggest, however, that the benefits of COOL are unlikely to outweigh the costs of compliance. Both consumers and suppliers are likely to be worse off as a result of this regulation.
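The differential-cost reasoning above can be sketched with a toy demand calculation. The elasticity and the two cost wedges below are illustrative assumptions, not estimates from the text:

```python
# Toy sketch: with equal demand elasticity, a compliance cost that
# raises foreign prices more than domestic prices cuts foreign
# quantity demanded more. All numbers are illustrative assumptions.
def quantity_change(price_increase_pct, elasticity=-1.0):
    """Approximate % change in quantity for a given % price increase."""
    return elasticity * price_increase_pct

domestic_cost_wedge = 1.0   # assumed % price increase from COOL, domestic
foreign_cost_wedge = 4.0    # assumed % price increase from COOL, foreign

dq_domestic = quantity_change(domestic_cost_wedge)
dq_foreign = quantity_change(foreign_cost_wedge)

print(f"Domestic quantity change: {dq_domestic:+.1f}%")
print(f"Foreign quantity change:  {dq_foreign:+.1f}%")
```

Even with identical elasticities, the larger foreign cost wedge produces a larger fall in foreign quantity demanded, which is the mechanism by which COOL could shift purchases toward domestic goods.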

The major support for this conclusion comes from the concept of “revealed preference.” In the absence of market failures, the fact that producers have not found it profitable to provide COOL to customers voluntarily is strong evidence that willingness to pay for this information does not outweigh the cost of providing it. If the benefits outweighed the costs, profit-maximizing firms would already have exploited this opportunity. Of course, this argument depends on whether the market for agricultural products functions well and would be responsive to consumer demands for COOL if they existed. In this section, we argue that this is indeed the case, and provide empirical support for the theoretical argument that the costs of COOL exceed its benefits. These findings are consistent with the conclusion of the U.S. Food Safety and Inspection Service that there is no evidence that “a price premium engendered by country of origin labeling will occur, and, if it does, [that it] will be large or persist over the long term.” There is little evidence that imperfections in the food market prevent producers from providing country-of-origin labeling. Asymmetric information, where one party in a potential transaction has better information than the other, can indeed lead to inefficient outcomes. However, in standard economic theory this result arises either because a seller would like to signal that his product is of high quality but is unable to do so convincingly, or because a seller with a low-quality product can pretend that it is high quality. But this situation does not plausibly apply in the case of COOL in agriculture. There is nothing now that inhibits producers from “signaling” the national origin of their products. Whatever their revealed preference, do consumers have a stated preference for country-of-origin labeling? The GAO summarizes survey evidence as indicating that American consumers claim they would prefer to buy U.S.
food products if all other factors were equal, and that consumers believe American food products are safer than foreign ones. However, surveys also suggest that labeling information about freshness, nutrition, storage, and preparation tips is more important to consumers than country of origin. Revealed preference arguments in their simplest form suggest that if consumers truly preferred domestic food products, it would only take one grocer limiting store items to domestic-only products before other stores saw this grocer’s success and followed suit. Producers of organic products have voluntarily labeled their products to attempt to capture a premium, as have producers of “dolphin-safe tuna.” If demand for information exists, agricultural producers have generally been adept at seizing the opportunity. Similarly, many lamb imports from Australia and New Zealand already bear obvious country-of-origin labels going beyond legal requirements because consumers prefer this product to domestic lamb and lamb from the rest of the world. Thus, Australian and New Zealand suppliers have an incentive to label their lamb products because they infer a positive net benefit from doing so, while producers and retailers who abstain from the practice must know that sales will not increase enough to offset labeling costs. There are other, non-economic arguments used to support mandatory COOL that relate to food safety. It is possible that COOL would make tracing disease outbreaks easier, thus reducing the health costs of food-related diseases. This is less likely than might initially seem to be the case, because of the long delay between disease outbreaks and the shipment of contaminated products.

If domestic products are systematically safer than foreign products, substitution toward domestic goods could also increase the average safety level of food consumed. However, there is little evidence that foreign food products are systematically less safe than domestic products. Foreign fruits and vegetables do not systematically carry more pesticide residue than their domestic counterparts. There is insufficient evidence to determine whether bacteria levels differ between foreign and domestic produce.

Not surprisingly, in light of revealed preference arguments, many retailers have argued that the cost of COOL implementation will be excessive and burdensome. As noted above, AMS has forecast an annual cost of $2 billion to implement the regulation. These costs will be borne by the private sector, as the Farm Bill provides no funds to alleviate industry costs for developing and maintaining the necessary record-keeping systems. In addition, the statute prohibits the development of a mandatory identification system for certification purposes. Instead, USDA must “use as a model certification programs in existence on the date of this Act”. As discussed earlier, USDA is also allowed to require a verifiable record keeping audit trail from retailers to verify compliance. These seemingly contradictory directions to the USDA—no mandatory identification system is allowed, but an audit trail from retailers may be required—could limit the AMS’s ability to implement the COOL legislation, but are likely intended to act as a prohibition against any efforts to mandate full-scale “trace back” requirements that would track products from the farm gate to the grocery store. Such a formal trace-back requirement would impose costs with legal incidence on producers in the field, unlike a certification program, where the legal incidence of the costs of regulation falls mostly on retailers and processors.
Of course, the economic incidence of the costs of this regulation will be determined by the price elasticity of demand for products, as explained in the discussion that conceptualized COOL as a transaction cost. While retailers’ organizations, like the Food Marketing Institute, have generally been against mandatory COOL, perhaps the loudest complaints about the cost of COOL have come from the meat packing and processing industry. In particular, the costs of tracking and labeling the origin of ground meat products are expected to be relatively high. For example, the president of the American Meat Institute, a trade group representing meat packers and processors, has claimed that COOL regulation will be costly and complicated and that it will “force companies to source their meat not based on quality or price, but based on what will simplify their labeling requirements”. The National Pork Producers Council also opposed COOL legislation, and has since funded a study that estimates that the cost of COOL implementation will translate into a $0.08 per pound increase in the average retail cost of pork. A key element of this study is the argument that, whatever the intention of the authors of the COOL legislation, implementation will in practice require complete “trace back” capability from the farm to the retail level. With the 2003 discovery of BSE in the U.S., a comprehensive trace-back system for livestock may receive greater political support. Agricultural ranchers and growers have largely welcomed the COOL legislation. The California Farm Bureau, the Rocky Mountain Farmers Union, and the Western Growers Association, among other such organizations, have endorsed this regulation. These organizations generally argue that consumers “want” labeling, that consumers have a “right” to country-of-origin information, and that the legislation is a valuable “marketing tool”. The first of these arguments is weakened by the logic of revealed preference.
In the case of meat products, the comments of the president of the American Meat Institute above explain the logic of the third justification: packers may demand more domestic inputs if this lowers the cost of COOL compliance. There is also some suggestion that the alleged market power exercised by the relatively concentrated meat-packing industry has created rents that COOL will dissipate. That is, the bargaining position of producers relative to packers will be improved as a result of these rules. This is at least in part because legal liability for failure to comply with COOL will rest with retailers, not with suppliers closer to the farm gate.

COOL has been justified as an attempt to favor domestic products in the U.S. market, and early indications suggest that foreign suppliers believe it will do so. Canadian cattle groups have suggested that beef be given a “North American” label if it comes from any country in NAFTA.