Logue was a leading figure in co-operative studies and the employee ownership movement

While the GUC is an economic and cultural engine, it is surrounded by seven low-income and racially segregated neighborhoods. The median income for households in these communities is $18,500; it is $47,626 in the rest of the city. Unemployment sits at 24 percent, nearly three times the rate for the city at large. The asymmetry between the economic dynamism of the GUC and the racialized poverty of its adjacent neighborhoods points toward the unevenness of Cleveland’s economic recovery. Evergreen finds one of its origin points in a larger effort undertaken by the Cleveland Foundation, a local community foundation, to harness the wealth of the GUC for the economic benefit of the neighborhoods surrounding it. A product of Cleveland’s wealthy past, the Cleveland Foundation is one of America’s largest foundations, with an endowment of $1.8 billion. In 2005 it launched the Greater University Circle Initiative, which sought to capture some of the $3 billion spent by GUC institutions each year for the purposes of local economic development. In 2006, India Pierce Lee, a program director with the Cleveland Foundation (CF), heard Ted Howard from the Democracy Collaborative give a presentation on his vision of community wealth building. The Democracy Collaborative is a leader in the growing “new economy movement,” which seeks systemic change by challenging the imperative for constant economic growth and by promoting economic equality and democracy.

The Collaborative defines community wealth building as “improving the ability of communities and individuals to increase asset ownership, anchor jobs locally, expand the provision of public services, and ensure local economic stability”. A key part of DC’s community wealth-building strategy is harnessing procurement flows from anchor institutions, whose deep rootedness in a community creates an incentive to prioritize local economic development. Ted Howard’s strategy for community wealth building mirrored the priorities of the Cleveland Foundation’s recent GUC initiative. Shortly after hearing his presentation, India Pierce Lee invited him and DC to do a feasibility study for enacting their community wealth-building strategies in Cleveland. The initial plan was to encourage pre-existing Community Development Corporations to incubate new social enterprises that could harness procurement flows, but no takers could be found; the plan was too risky for local CDCs whose expertise was rooted in affordable housing development. The worker-run co-operative model was a secondary plan that developed through conversations between Howard and John Logue from the Ohio Employee Ownership Center. Logue had written on the successful models in Mondragon, Emilia Romagna, and Quebec; Evergreen was designed with these models in mind, especially Mondragon. The final plan was for a network of worker-owned co-operatives designed to capture procurement flows from anchor institutions, especially strategic opportunities in the emerging green economy. It is important to understand Evergreen in the context of the Democracy Collaborative’s larger vision and work.

The DC describes its mission as the pursuit of “a new economic system where shared ownership and control creates more equitable and inclusive outcomes, fosters ecological sustainability, and promotes flourishing democratic and community life”. In 2015, DC launched the “Next System Project,” an initiative seeking to “launch a national debate on the nature of ‘the next system’ … to refine and publicize comprehensive alternative political-economic system models that are different in fundamental ways from the failed systems of the past and capable of delivering superior social, economic and ecological outcomes”. The NSP includes a statement signed by a broad assemblage of the political left. Position papers on eco-socialism, commoning, and solidarity economics have been advanced as part of the effort to debate and actualize the “next system”. Evergreen, then, should be understood as a local experiment in next system design. As we unpack below, there are limits to Evergreen’s nationwide replication. But the movement building and systemic thinking that it is part of are crucial to the growth of the co-operative economy, and the transformation of neoliberal capitalism. Given Evergreen’s roots in the “next system” vision of the Democracy Collaborative, it is surprising that it would attract support from the Cleveland Foundation, a wealthy charitable foundation established by a banker and governed by members of Cleveland’s economic elite. Indeed, once the plan for Evergreen was finalized, the CF pledged $3 million of seed funding to the project. The Foundation’s support for a network of worker-owned co-operatives reveals some openness among local higher-ups to the idea of systemic reform. India Pierce Lee, for example, had previously held a post as a Director of the Empowerment Zone with the City of Cleveland’s Department of Economic Development. Empowerment zones are federally designated high-distress communities eligible for a combination of tax credits, loans, grants, and other publicly funded benefits.

In this post, Pierce Lee saw millions of public dollars being spent on economic development – all of it directed to employers – with minimal to zero effect: few new businesses, few new jobs created. Brenner and Theodore have described empowerment zones as “neoliberal policy experiments”. Pierce Lee had experienced these experiments as failures, and was keen to try the alternative that Evergreen represented. Similarly, Howard told us how some CF board members raised ideological concerns over worker ownership, but that a willingness to try alternatives prevailed. Cleveland’s painful history of de-industrialization, out-migration, and persistent racialized poverty likely facilitated elite openness to new forms of economic development. The Cleveland Foundation provided Evergreen with crucial seed funding and technical support. “I cannot stress enough that without the people at the Cleveland Foundation, Evergreen would not have happened,” noted Candi Clouse from Cleveland State University’s Center for Economic Development during our interview. Replicating the “Cleveland Model” is challenging when an integral piece is a supportive and wealthy community foundation. The current conjuncture of “contested neoliberalism,” however, does make the availability of this support more probable. Research by DC on foundations experimenting with funding alternative economic development strategies found examples in Atlanta, Denver, and Washington, D.C. But the authors also note how “many community foundation leaders talked about conservative boards, isolated from new ideas, who were reluctant to take up seemingly risky new ways”. Popular and elite frustration with neoliberalism means that foundation support for co-operative development is more possible than in previous eras, but this support remains contingent on local circumstance. Elite frustration with mainstream economic development mechanisms also played a key role in the city’s support of Evergreen. While the Cleveland Foundation provided seed funding and technical assistance, the city played a key role by helping to secure financing. Tracy Nichols, Director of Cleveland’s Department of Economic Development, had seen the plans for Evergreen and wanted to help finance it. Evergreen lost a bank loan in the 2008 financial crisis, and having the City’s support in securing financing was a big step towards actualizing the plan. For Nichols, the fact that the Cleveland Foundation was providing seed funding and logistical support helped legitimate Evergreen as a safe bet for municipal resources. All three Evergreen co-operatives are capital intensive. Without clear policy frameworks for funding, putting financing in place required ingenuity. Evergreen is almost entirely debt-financed. The majority of its funds have come from two federal social financing programs: Department of Housing and Urban Development (HUD) Section 108 funds, and New Market Tax Credits (NMTCs). Of the nearly $24 million raised from federal sources for Evergreen’s development, approximately $11.5 million came from HUD Section 108 loans and approximately $9.5 million came from NMTCs. The HUD Section 108 funds were established to provide communities with a source of financing for “economic development, housing rehabilitation, public facilities, and large-scale physical development projects.” The HUD low-interest loans were inaccessible without the City’s sustained help; funds flow through state and local governments.

Monies from HUD provided the core financing for the $17 million, 3.25-acre greenhouse that now houses Green City Growers. The greenhouse is located on land that included residential housing prior to Evergreen’s development. Not only did Cleveland’s Department of Economic Development play a central role in securing financing, but it also facilitated the purchase of homes that needed to be demolished before construction of the greenhouse could begin. Unlike HUD Section 108 funds, New Market Tax Credits could be accessed directly by Evergreen’s founders without the City serving as intermediary. But NMTCs are also complex and would have been very challenging to negotiate without the Cleveland Foundation’s technical assistance. “We call the New Market Tax Credits a full employment program for lawyers and accountants,” reflected Howard, “because there are hundreds of thousands of dollars of fees”. The Clinton Administration launched the NMTC program in 2000 as a for-profit community development tool. The goal of the program is to help revitalize low-income neighborhoods with private investment that is incentivized through federal income tax credits. The NMTC program is meant to be a “win-win” for investors and low-income communities, but investors win more, and at public expense. The NMTC program is arguably an example of what Peck and Tickell call “roll-out neoliberalism”; they argue that the neoliberal agenda “has gradually moved from one preoccupied with the active destruction and discreditation of Keynesian-welfarist and social-collectivist institutions to one focused on the purposeful construction and consolidation of neoliberalized state forms, modes of governance, and regulatory relations”. The NMTC program creates new profit opportunities for private investors at public expense: the privatization of gain and socialization of loss common to neoliberal economic policy. A framework that made federal loans available directly to community organizations would arguably be more efficient and less bureaucratic. Lacking this option, Evergreen’s founders pragmatically harnessed whatever resources they could access. Evergreen’s emergence would have been greatly facilitated by policy mechanisms that made financing and technical assistance more readily available. The international co-operative movement has prioritized supportive legal frameworks as a key constituent of co-op growth, but there is not a robust literature on policy support for co-operatives. Supportive legal frameworks for co-operatives are a “deeply under-researched area”. Based on our review of existing literature, however, we found six primary forms of policy support that have been successfully deployed internationally: co-op recognition, financing, sectoral financing, preferential taxation, supportive infrastructure, and preferential procurement. The most developed examples of these policies are found in areas of dense co-operative concentration: the Basque region of Spain, Emilia Romagna in Northern Italy, and Quebec, Canada. Below is a table summarizing how these six policy forms are deployed in the co-op-dense regions. The table is included to facilitate further research in the understudied area of co-operative policy, and to clarify policy successes for organizers in the co-operative movement interested in emulating them.
In the next section we examine “private” or ad hoc versions of the policy supports that Evergreen used and explore what these improvisations reveal about the legislative needs of the US co-op movement more broadly. Of the six enabling policy forms, Evergreen has benefitted from private and ad hoc versions of recognition, financing, supportive infrastructure, and procurement. The Mayor of Cleveland, Frank Jackson, and Ohio Senator Sherrod Brown spoke at the opening ceremonies of Green City Growers. These high-level endorsements, while not written into policy, conferred legitimacy and boosted media coverage. In terms of financing, having a wealthy foundation backstopping the initiative was a crucial first step. The Foundation’s support helped bring the City on board, giving the Department of Economic Development the confidence to access HUD 108 funds to support the co-op. As Nichols reflected: “the loans have some risk for us, but I know the Foundation is backing this initiative, and I know they don’t want it to fail. They have money. I don’t have to worry”. Without technical assistance from the Cleveland Foundation and the Ohio Employee Ownership Center (OEOC), securing New Market Tax Credit funds would have been even more of a byzantine process. The Foundation and the OEOC offered legal support and co-op training, respectively. The OEOC has received both state and federal funding, but its state funding has been largely cut. Again, the technical support Evergreen received was ad hoc and private.

It is not sufficient that farmers and consumers perceive net benefits from GM crop varieties

The GM varieties that have been developed and adopted extensively to date have not experienced significant price discounts because of buyer resistance. This can probably be attributed to the nature of the crops. For feed grains, the buyers are other farmers who are comfortable with the technology, and for fiber crops such as cotton the food safety concerns do not apply. For the major food grains, wheat and rice, even if the farm-production economics potential of GM varieties is as large as for feed grains, market acceptance may differ sufficiently to limit their adoption. Rather than another farmer, the relevant buyer for these crops is a food processor, manufacturer or retailer who may be reluctant to risk negative publicity or to risk losing consumers who would prefer a biotech-free label or who may not be confident that the biotech and non-biotech grain can be segregated. The adoption of biotechnology must provide net benefits to other participants in the marketing chain, such as food processors and retailers. Pricing of the technology may be a critical factor. Even if the new technology is more cost-effective than the traditional alternative, monopolistic pricing could mean that the technology supplier retains a large share of the benefits.

The cost savings passed on to processors and consumers may be a small fraction of the total benefits, rendering incentives for processors, retailers and consumers to accept the technology comparatively small. Processors and retailers can effectively block a new technology if it does not clearly benefit them, even if there would be net benefits to the general public. The size of the market matters. The cost to develop a new variety is essentially the same whether it is adopted on one acre or a million acres, but the benefits are directly proportional to the number of acres on which the variety is adopted. This is why biotech companies have focused on developing technologies for more widely planted agronomic crops, especially feed-grain and fiber crops for which market barriers are lower. The technology developer must also obtain regulatory approvals. It is difficult to obtain precise information on costs of regulatory approval for biotech crops and chemical pesticides, but according to available estimates, the total cost of R&D — from “discovery” to commercial release of a single new pesticide or herbicide product — exceeds $100 million, and regulatory approval alone costs more than $10 million. A new technology must generate enough revenue for the developer over its lifetime to cover these costs, and for some crops the total acreage is simply not sufficient. Given the large fixed costs associated with regulatory approvals for specific uses, agricultural chemical companies have concluded that the potential market is too small to warrant the development of pesticides for many of California’s specialty crops, which have become technological orphans.
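The fixed-cost arithmetic above can be made concrete with a minimal sketch. The $100 million R&D figure is the rough estimate quoted in the text; the per-acre developer revenue and product lifetime are hypothetical assumptions added for illustration:

```python
# Break-even adoption for a new crop-protection technology.
# RD_COST comes from the rough estimate quoted above; the per-acre
# revenue and commercial lifetime are hypothetical assumptions.

RD_COST = 100e6           # discovery-to-release R&D cost, $ (quoted estimate)
REVENUE_PER_ACRE = 10.0   # developer's net revenue per acre per year, $ (assumed)
LIFETIME_YEARS = 15       # commercial lifetime of the product, years (assumed)

break_even_acres = RD_COST / (REVENUE_PER_ACRE * LIFETIME_YEARS)
print(f"Break-even adoption: {break_even_acres:,.0f} acres per year")
# ~667,000 acres/year: easily reached by feed grains planted on tens of
# millions of acres, but out of reach for many low-acreage specialty crops.
```

Under these assumptions, the break-even acreage alone explains why developers favor widely planted agronomic crops and why smaller specialty crops become orphans.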

It does not follow that the government should invest in developing new conventional or GM pest-control technologies for these orphan crops. If the current regulatory policy and process are appropriate and efficiently implemented, then the high cost is not excessive; if a new technology cannot generate benefits sufficient to pay those costs, then it is simply not economical to develop that technology. The question for technology orphan crops is whether it is possible to reduce the costs of R&D and regulatory approval sufficiently to make it profitable for the nation and the private sector to change their orphan status. On the supply side, “horticulture” includes an enormous diversity of fruit and vegetable crops, but it also includes many nonfood species, such as ornamentals, flowers and recreational turfgrass. Collectively these horticultural crops compare well with major agronomic crops in terms of total value in the United States. However, they use much less acreage, and the market size for some biotech products depends on both acreage and production value. In 2000, the United States produced fruits, nuts and vegetables with a total value of more than $28 billion, of which California contributed about $14 billion. In addition, horticulture includes a small number of larger-scale crops as well as a large number of smaller-scale crops. At current costs for R&D and regulatory approval, it is unlikely that biotechnology products will be developed and achieve market acceptance for many of these smaller-scale crops in the near future. Further, experimentation with perennials such as grapes, nuts and fruit trees is comparatively expensive, and it is costly to bring new acreage into production or replace an existing vineyard or orchard with a new variety. On the demand side, the market for horticultural products, especially fresh fruits and vegetables, is undergoing important changes associated with the changing structure of the global food industry.

Increasingly fewer and larger supermarket chains have been taking over the global market for fruits and vegetables, especially fresh produce, and changing the way these products are marketed. Because fresh produce is perishable and subject to fluctuations in availability, quality and price, it presents special problems for supermarket managers compared with packaged goods. Supply-chain management, and the increasing use of contracts that specify production parameters as well as characteristics and price, is replacing spot markets for many fresh products. A desire for standardized products, regardless of where they are sourced around the world, could limit the development and adoption of products targeting smaller market segments, unless retailers perceive benefits and provide shelf space for diversified products — such as biotech and non-biotech varieties of particular fruits and vegetables. On the other hand, an increasingly wealthy and discriminating consuming public can be expected to continue to demand increasingly differentiated products — with an ever-evolving list of characteristics such as organic, low-fat, low-carbohydrate and farm-fresh. Hence retailers will have to balance the cost savings and convenience associated with global standardization against the benefits from providing a greater range of products, which will include GM products when retailers begin to perceive benefits from stocking them. Unlike other types of foods, fruits and vegetables are often consumed fresh and in clearly identifiable and recognizable form. This has implications for perceptions of quality and food safety that may influence consumer acceptance — perhaps favorably, for instance, if a genetically modified sweet corn could be marketed as reduced-pesticide. Other elements of GM horticulture — such as nonfood products, ornamentals or turfgrass — have advantages in terms of potential market acceptance. GM trap crops, which provide pest protection for other crops, and GM sentinel crops, which signal the presence of pests or provide other agronomic indicators, may be used in food production without overcoming barriers of acceptability to market middlemen or consumers. Biotechnology products designed for home gardeners may be more readily accepted because the grower is the final consumer. Market acceptance in the United States is also linked to continued access to export markets, particularly in the European Union and Japan where restrictions have been applied to biotech foods. The relative importance of the domestic market could help to account for the success of the GM feed-grain technologies in the United States, and it may also help to account for the success of these and other GM technologies in China. China is comparatively important in horticultural biotechnology — its investment in agricultural biotechnology is second only to that of the United States, but with a different emphasis, including significant investment in horticultural biotechnology.

The technological potential for GM horticultural crops appears great, particularly when we look beyond the “input” traits that have dominated commercial applications to date, to opportunities in “output” traits, such as pharmaceuticals and shelf-life enhancements. Because delays in socially beneficial technologies mean forgone benefits, there may be a legitimate role for the government in facilitating a faster rate of development and adoption of horticultural biotechnology products. For instance, the government could reform property-rights institutions to increase efficiency and reduce R&D costs. IPRs apply to research processes as well as products, and limited access to enabling technology, or simply the high cost of identifying all of the relevant parties and negotiating with them, may be retarding some lines of research — a type of technological gridlock. Nottenburg et al. suggest a government role in improving access to enabling technologies. Similarly, the government could revise its regulations to increase efficiency and reduce costs for regulatory approvals. Instead of requiring a completely separate approval for each genetic transformation “event,” it may be feasible to approve classes of technologies with more modest specific requirements for individual varieties. The government could also reduce some barriers to adoption, especially market acceptance of biotech food products, by providing information about their food safety and environmental implications. The biotech industry and agriculture can have an influence here, too. The general education of consumers and market intermediaries about biotech products may be facilitated in a process of learning by experience with products — such as nonfood applications, or home garden applications — that have good odds of near-term success because of low barriers to market acceptance and good total benefits.

If and when genetically engineered horticultural products become more widely available and adopted, they will enter an expanding marketplace that is becoming globally integrated and more consolidated. Fewer, larger firms will control access to a rising share of the world’s population, including rapidly growing middle-income consumers in the developing world. Consumers everywhere will be increasingly focused on convenient, ready-to-eat and value-added products. In order to compete on a global scale, GE produce must meet the challenges of the quickly evolving market for fruits and vegetables. In the United States alone, the estimated final value of fresh produce sold through retail and food-service channels surpassed $81 billion in 2002. Europe-wide fresh produce sales through supermarket channels alone were estimated to exceed $73 billion in 2002, and total final sales to exceed $100 billion. Worldwide, consumption and cultivation of fruits and vegetables is increasing. Between 1990 and 2002, global fruit and vegetable production grew from 0.89 billion tons to 1.3 billion tons, and per capita availability expanded from 342 pounds to 426 pounds. Much of this growth has occurred in China, which is aggressively pursuing agricultural biotechnology. The global fresh fruit-and-vegetable marketing system is increasingly focused on adding value and decreasing costs by streamlining distribution and understanding customer demands. In the United States and Europe this dynamic system has evolved toward predominantly direct sales from shippers to supermarket chains, reducing the use of intermediaries.
Food-service channels are absorbing a growing share of total food volume and are also developing more direct buying practices. The year-round availability of fresh produce is now seen as a necessity by both food service and retail buyers. Product form and packaging are also changing as more firms introduce value-added products, such as fresh-cut produce, salad greens and related products in consumer-ready packages. Estimated U.S. sales of fresh-cut produce were over $12 billion in 2002. Fresh-cut sales are even higher in Europe and beginning to develop in Latin America and Asia as well. The implications of this trend may become as important to the biotechnology industry as the changes in market structure, since fresh-cut processors are increasingly demanding specific varieties bred with attributes beneficial to processing quality. The streamlining of marketing channels poses both challenges and opportunities for horticultural biotechnology. A smaller number of larger firms, controlling more of world food volume, now act as food-safety gatekeepers for their consumers, reflecting the diversity of consumer preferences in their buying practices. Where consumers perceive products utilizing biotechnology to be beneficial, retail and food-service firms will provide them. Products with specialized input traits valued by consumers, such as unique color, flavor, size or extended shelf-life, are the most likely to succeed in today’s marketplace. While large food-service and retail buying firms and international traders may offer easy access to consumer markets, if major buyers adopt policies unfavorable to GE foods, distribution obstacles could become insurmountable. Such policies are common among European food retailers, reflecting strong consumer concern there over GE products. The challenge to supply seasonal, perishable products year-round has favored imports, and increased horizontal and vertical coordination and integration among shippers regionally, nationally and internationally.

Nutrients would diffuse and advect from the bulk soil toward the root zone

However, more work is required to determine how to better represent coupled microbe–plant nutrient competition and transport limitations. For example, we have previously shown that one can apply a homogeneous soil environment assumption and include the substrate diffusivity constraint in the ECA competition parameters. In this approach, the diffusivity constraint can be directly integrated into the substrate affinity parameter. The “effective” KM would be higher than the affinity measured, e.g., in a hydroponic chamber. We hypothesize that our calibrated KM value, which led to an excellent match with the observations, effectively accounted for this extra diffusive constraint on nutrient uptake. A second approach would be to explicitly consider fine-scale soil fertility heterogeneity, explicitly represent nutrient movement, and apply the ECA framework at high resolution throughout the rhizosphere and bulk soil. However, to test, develop, and apply such a model requires fine-scale measurements of soil nutrient concentrations, microbial activity, and rhizosphere properties and dynamics; model representation of horizontal and vertical root architecture and microbial activity; effective nutrient diffusivities; and, potentially, computational resources beyond what is practical in current ESMs.
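A minimal sketch of the first (“effective KM”) approach, assuming a simple Michaelis-Menten uptake form; the additive diffusive penalty below is an illustrative assumption, not the calibrated ESM parameterization:

```python
import numpy as np

def uptake_rate(conc, vmax, km_intrinsic, diffusive_penalty=0.0):
    """Michaelis-Menten uptake with an 'effective' affinity parameter.

    Folding the soil diffusivity constraint into the affinity parameter
    yields km_eff > km_intrinsic; the additive form of the penalty is an
    assumption made for illustration only.
    """
    km_eff = km_intrinsic + diffusive_penalty  # higher than the hydroponic KM
    return vmax * conc / (km_eff + conc)

# Hypothetical values: hydroponically measured affinity vs. soil-effective affinity.
conc = np.linspace(0.0, 2.0, 9)                                   # g N m^-3
hydroponic = uptake_rate(conc, vmax=1.0, km_intrinsic=0.1)
in_soil = uptake_rate(conc, vmax=1.0, km_intrinsic=0.1, diffusive_penalty=0.4)
# At every concentration the diffusion-limited curve predicts slower uptake,
# mimicking the transport constraint without explicitly resolving it.
```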

Yet, there is potential value in the fine-scale approach if we can produce a reduced-order version of the fine-scale model that is reasonable and applicable to ESMs. A third approach, of intermediate complexity, would be to simplify the spatial heterogeneity of root architecture, soil nutrient distributions, and nutrient transport. Roots could be conceptually clustered in the center of the soil column, where nutrients would become depleted and competition between microbes, roots, and abiotic processes would occur. The “radius of influence” concept that defines a root influencing zone could be used to simplify heterogeneity, with CT5 competition applied to this root influencing zone. More model development, large-scale application, and model-data comparisons are needed to justify such an approach. As we argued above, the choice of nutrient competition theory used by ESMs faces a dilemma between necessary model simplification and accurate process representation. Our goal is to rigorously represent nutrient competition in ESMs with a simple framework that is consistent with theory and appropriate observational constraints while not unduly sacrificing accuracy. We conclude that our ECA nutrient competition approach meets this goal, because it is simple enough to apply to climate-scale prediction and is based on reasonable simplifications to the complex nutrient competition mechanisms occurring in terrestrial ecosystems.

Over the past two decades, the ecological significance of anadromous Pacific salmon has been well documented in aquatic ecosystems throughout the Pacific coastal ecoregion.

The annual return of salmon to fresh waters and the associated decomposition of post-reproductive carcasses result in the transfer of marine-derived nutrients (MDN) and biomass to largely oligotrophic receiving ecosystems. Such inputs have been shown to increase primary production, invertebrate diversity and biomass, and fish growth rates. Since rates of primary production are typically low in many coastal salmon-bearing streams, even small nutrient inputs from anadromous salmon may stimulate autotrophic and heterotrophic production and produce cascading effects through the entire aquatic food web. In addition to subsidizing riverine biota, salmon-borne MDN also benefit vegetation within the riparian corridor. Marine nutrients are delivered to the terrestrial environment via deposition of carcasses during flood events, absorption and uptake of dissolved nutrients by riparian vegetation, and removal of carcasses from the stream by piscivorous predators and scavengers. Empirical studies have shown that as much as 30% of the foliar nitrogen in terrestrial plants growing adjacent to salmon streams is of marine origin and that growth rates of riparian trees may be significantly enhanced as a result of salmon-derived subsidies. Nitrogen availability, in particular, is often a growth-limiting factor in many temperate forests, and annual inputs of marine-derived nitrogen may be critical to the maintenance of riparian health and productivity in Pacific coastal watersheds. Our understanding of the ecological importance of MDN subsidies has been greatly advanced by the application of natural abundance stable isotope analyses. A biogenic consequence of feeding in the marine environment is that adult anadromous salmon are uniquely enriched with the heavier isotopic forms of many elements relative to terrestrial or freshwater sources of these same elements.

When fish senesce and die after spawning, these isotopically heavy nutrients are liberated and ultimately incorporated into aquatic and terrestrial food webs. Our research has determined that the stable nitrogen isotope “fingerprint” of adult anadromous salmon returning to coastal California basins is 15.46 ± 0.27‰, a value markedly higher than most other natural N sources available to biota in coastal watersheds. This salient isotopic signal allows the application of stable isotope analyses to trace how salmon-borne nutrients are incorporated and cycled by organisms in the receiving watersheds. Researchers interested in the utilization of MDN by riparian trees have generally inferred sequestration and incorporation from foliar δ15N values. Nitrogen is a very minor constituent of wood cellulose, and natural abundance levels of δ15N in tree rings have rarely been determined. Poulson et al. were among the first to successfully analyze δ15N from trees, but analysis required combustion of prohibitively large quantities of material per sample. Since that time, advancements in stable isotope analytical techniques have made it possible to detect δ15N from small samples of material. This permits non-destructive sampling of live trees via increment cores and provides a novel opportunity to assess the transfer of salmon-derived nitrogen into the riparian zones of salmon-bearing watersheds. Reimchen et al. recently reported that wood samples extracted from western hemlock trees in British Columbia yielded clear evidence of salmon-derived (SD) nitrogen incorporation with reproducible δ15N values. Intuitively, if spawning salmon represent a significant source of nitrogen to riparian trees in salmon-bearing watersheds, information on salmon abundance may be recorded in the growth and chemical composition of annual tree rings. By quantifying the nitrogen stable isotope composition of tree xylem it would be possible to explore changes in SD-nitrogen over decadal or sub-decadal time increments and determine whether the nutrient capital of riparian biota has been affected by diminished salmon returns. Moreover, if δ15N can be quantified from annual growth rings, it may be possible to infer changes in salmon abundance over time and reconstruct historical salmon returns for periods and locations where no such information presently exists. Nearly all salmon recovery programs are built upon very uncertain estimates of population sizes prior to European settlement. The development of robust paleoecological methods to determine historical salmon abundance and variability would greatly assist resource managers in identifying and establishing appropriate restoration targets.

For our initial pilot study we collected increment cores from 10 extant riparian Douglas-fir trees growing adjacent to WB Mill Creek. All trees were located within 10.0 m of the active stream channel. Core samples were collected on 17 January 2004 from a 250 m section of riparian zone located immediately downstream of the 2.7 km index stream reach used by Waldvogel to derive minimum annual escapement estimates. Small-diameter increment core samples were extracted from each tree and prepared for dendrological analysis using standard methods. We concurrently collected a second, large-diameter increment core from four trees for determination of annual nitrogen content and natural abundance stable nitrogen isotope ratios.
Diameter at breast height (DBH), distance from the active stream channel, and general site characteristics were also recorded for each tree sampled. Increment core samples from Waddell Creek were collected on 16-18 October 2005.

Collections were made from two distinct areas within the watershed: a ~750 m length of riparian zone adjacent to the creek where salmon spawning is known to occur, and a ~500 m section of riparian zone located above a natural barrier to salmon migration. We collected paired increment cores from a total of 10 Douglas-fir and 16 coast redwood trees in the Waddell Creek watershed. Distances from the stream channel, DBH, and site characteristics were also determined for each tree sampled. Two coast redwood cores from the upstream control site were later determined to be damaged or unreliable and excluded from our analyses.

Small-diameter increment core samples were air dried, mounted, and sanded for analysis of annual growth rings. Prepared cores were converted to digital images and ring widths were measured to the nearest 0.001 mm using an OPTIMAS image analysis system. Each increment core was measured in triplicate and mean ring-width values were used to generate a time series for each tree. Each time series was then detrended using the tree-ring program ARSTAN to remove trends in ring width due to non-environmental factors such as increasing age and tree size. Detrending was accomplished using a cubic smoothing spline function that preserved ~50% of the variance contained in the measurement series at a wavelength of 32 years. Individual growth index values were derived by dividing the actual ring-width value by the value predicted from ARSTAN regression equations. Chronologies using the growth index values were subsequently combined into a robust growth index series for each sample site. Cross-dating of coast redwood trees from Waddell Creek was not successful due to the presence of anomalous rings, a high degree of ring complacency in some cores, and small sample sizes. Previous dendrochemical research has found that nitrogen may be highly mobile in the xylem of some tree species. Although the degree to which coast redwood and Douglas-fir trees exhibit radial translocation of nitrogen is largely unknown, such mobility could potentially obscure interpretation of nitrogen availability at the time of ring formation. To minimize potentially confounding effects associated with the translocation of nitrogenous products across ring boundaries, increment core samples from Waddell Creek were pretreated to remove soluble forms of nitrogen following the “short-duration” protocol outlined in Sheppard and Thompson. Briefly, tree-ring samples were sequentially Soxhlet extracted for 4 h in a mixture of ethyl alcohol and toluene, 4 h in 100% ethyl alcohol, and 1 h in distilled water. Increment cores collected from Mill Creek as part of our pilot study were not treated or extracted prior to stable isotope analysis.

This model suggested a less significant decline in salmon abundance over the period 1946-1979. Results indicated years of high salmon escapement in 1947, 1949 and 1953, with estimated returns of 126, 116 and 116 fish·km⁻¹, respectively. Conversely, low escapement was predicted in 1977, 1979 and 1957. For both reconstructions, present-day escapement to WB Mill Creek is as strong as the historical maxima predicted by the models. Collectively, the initial results from WB Mill Creek suggest that salmon abundance may be reflected in riparian tree-ring variables such as indexed growth, [N], or δ15N.
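The detrending step described above can be sketched as follows; ARSTAN’s spline (a 50% variance cutoff at a 32-year wavelength) is approximated here with a generic smoothing spline, and the ring-width series and smoothing factor are hypothetical stand-ins:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic ring-width series: a declining age/size trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1946, 1980)
ring_width = 1.5 * np.exp(-0.02 * (years - 1946)) + 0.1 * rng.normal(size=years.size)

# Fit a cubic smoothing spline to capture the non-environmental growth trend.
# The smoothing factor s is a placeholder, not ARSTAN's 32-year/50% criterion.
trend = UnivariateSpline(years, ring_width, k=3, s=0.5)

# Growth index = actual ring width / trend-predicted ring width.
growth_index = ring_width / trend(years)
# Index values from several trees at a site would then be averaged into a
# site chronology; values > 1 indicate wider-than-expected rings.
```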
However, interpretation of results in this watershed is hindered by two important issues: the lack of comparative experimental controls, and the potentially confounding effects associated with the translocation of nitrogenous compounds across annual tree-ring boundaries. In order to infer the uptake of MD-nitrogen by riparian trees it is necessary to have ecologically analogous reference sites that are uninfluenced by anadromous salmonids. Robust experimental reference sites could potentially include proximate drainages or stream reaches located above impoundments that block access to anadromous salmon. No such sites were available in the vicinity of WB Mill Creek, however, as all salmon-free locations had significantly different site characteristics, especially with respect to stream gradient, floodplain development and soil characteristics. It is especially important to note that increment cores from WB Mill Creek demonstrated enriched δ15N and elevated [N] in the last 3-5 years of growth. Since we did not pre-treat our increment cores prior to analysis, it is unclear whether this enrichment is a true environmental signal resulting from increased salmon abundance and MD-nitrogen availability, or the product of internal translocation of nitrogen by the trees. Whatever the case, this late enrichment significantly influenced the results of our reconstructions, since both high [N] and δ15N coincided with years of very high escapement to WB Mill Creek. Few paleoecological studies have successfully modeled changes in either marine-derived nutrient transfer or salmon abundance prior to European settlement. Finney et al. used stratigraphic variation in lake sediment δ15N and fossilized cladocerans and diatoms to infer changes in sockeye salmon abundance in Alaskan nursery lakes. Their ≈2200-year reconstruction revealed that large fluctuations in sockeye abundance were commonplace and that multi-decadal to century-long climatic regimes largely drove population cycles.
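For reference, the marine-derived nitrogen fraction implied by a given δ15N value follows from the standard two-source mixing model; a minimal sketch using the salmon end-member reported above, with a hypothetical terrestrial background end-member (in practice measured at salmon-free reference sites):

```python
def percent_marine_nitrogen(d15n_sample, d15n_marine=15.46, d15n_background=0.0):
    """Two-source isotope mixing model.

    d15n_marine is the salmon end-member reported above (15.46 +/- 0.27 per mil);
    the 0.0 per mil background end-member is a placeholder that would be
    replaced by measurements from salmon-free reference sites.
    """
    return 100.0 * (d15n_sample - d15n_background) / (d15n_marine - d15n_background)

# Example: a foliar value of 4.6 per mil against a 0 per mil background
# implies ~30% marine-derived nitrogen, the upper bound cited earlier.
print(round(percent_marine_nitrogen(4.6), 1))  # 29.8
```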

The revision will be implemented in steps and could facilitate the field-based production of PMPs

As a consequence, a larger amount of product can be delivered earlier, which can help to prevent the disease from spreading once a vaccine becomes available. In addition to conventional chromatography, several generic purification strategies have been developed to rapidly isolate products from crude plant extracts in a cost-effective manner. Due to their generic nature, these strategies typically require little optimization and can immediately be applied to products meeting the necessary requirements, which reduces the time needed to respond to a new disease. For example, purification by ultrafiltration/diafiltration is attractive for both small and large molecules because they can be separated from plant host cell proteins (HCPs), which are typically 100–450 kDa in size, under gentle conditions such as neutral pH to ensure efficient recovery. This technique can also be used for simultaneous volume reduction and optional buffer exchange, reducing the overall process time and ensuring compatibility with subsequent chromatography steps. HCP removal triggered by increasing the temperature and/or reducing the pH is mostly limited to stable proteins such as antibodies, and especially the former method may require extended product characterization to ensure that the function of products such as vaccine candidates is not compromised.
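The size-based reasoning behind UF/DF capture can be sketched simply: a product well below the 100–450 kDa HCP range can be collected in the permeate, while a very large product (e.g., a virus-like particle) can be retained. The one-third rule of thumb for choosing a molecular weight cutoff (MWCO) is a common membrane-selection heuristic, not a validated process parameter:

```python
# Sketch of UF/DF mode selection by size; the thresholds and the 1/3 rule
# of thumb are assumptions for illustration, not process-validated values.

HCP_MIN_KDA, HCP_MAX_KDA = 100.0, 450.0  # typical plant HCP size range (from text)

def suggest_uf_mode(product_kda: float) -> str:
    if product_kda <= HCP_MIN_KDA / 3:
        # Small product passes the membrane while HCPs are retained.
        return f"collect in permeate; MWCO between ~{3 * product_kda:.0f} and {HCP_MIN_KDA:.0f} kDa"
    if product_kda > HCP_MAX_KDA:
        # Very large product (e.g., a VLP) is retained while HCPs pass.
        return "retain product; choose MWCO above the HCP range"
    return "size overlap with HCPs; UF/DF alone is unlikely to resolve them"

print(suggest_uf_mode(25.0))    # small subunit antigen (hypothetical)
print(suggest_uf_mode(150.0))   # antibody-sized product overlaps the HCP range
```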

The fusion of purification tags to a protein product can be tempting to accelerate process development when time is pressing during an ongoing pandemic. These tags can stabilize target proteins in planta while also facilitating purification by affinity chromatography or non-chromatographic methods such as aqueous two-phase systems. On the downside, such tags may trigger unwanted aggregation or immune responses that can reduce product activity or even safety. Some tags may be approved in certain circumstances, but their immunogenicity may depend on the context of the fusion protein. The substantial toolkit available for rapid plant biomass processing and the adaptation of even large-scale plant-based production processes to new protein products ensure that plants can be used to respond to pandemic diseases with at least an equivalent development time and, in most cases, a much shorter one than conventional cell-based platforms. Although genetic vaccines for SARS-CoV-2 have been produced quickly, they have never been manufactured at the scale needed to address a pandemic, and their stability during transport and deployment to developing world regions remains to be shown.

Regulatory oversight is a major and time-consuming component of any drug development program, and regulatory agencies have needed to revise internal and external procedures in order to adapt normal schedules for the rapid decision-making necessary during emergency situations. Just as important as rapid methods to express, prototype, optimize, produce, and scale new products is the streamlining of regulatory procedures to maximize the technical advantages offered by the speed and flexibility of plants and other high-performance manufacturing systems.

Guidelines issued by regulatory agencies for the development of new products, or the repurposing of existing products for new indications, include criteria for product manufacturing and characterization, containment and mitigation of environmental risks, stage-wise safety determination, clinical demonstration of safety and efficacy, and various mechanisms for product licensure or approval to deploy the products and achieve the desired public health benefit. Regardless of which manufacturing platform is employed, the complexity of product development requires that continuous scrutiny be applied from preclinical research to drug approval and post-market surveillance, thus ensuring that the public does not incur an undue safety risk and that products ultimately reaching the market consistently conform to their label claims. These goals are common to regulatory agencies worldwide, and higher convergence exists in regions that have adopted the harmonization of standards as defined by the International Council for Harmonization (ICH) in key product areas including quality, safety, and efficacy.

Both the United States and the EU have stringent pharmaceutical product quality and clinical development requirements, as well as regulatory mechanisms to ensure product quality and public safety. Differences and similarities between regional systems have been discussed elsewhere and are only summarized here. Stated simply, the United States, EU, and other jurisdictions generally follow a two-stage regulatory process, comprising clinical research authorization and monitoring, followed by results review and marketing approval. The first stage involves the initiation of clinical research via submission of an Investigational New Drug (IND) application in the United States or its analogous Clinical Trial Application (CTA) in Europe.

At the preclinical-clinical translational interface of product development, a sponsor must formally inform a regulatory agency of its intention to develop a new product and the methods and endpoints it will use to assess clinical safety and preliminary pharmacologic activity. Because the EU is a collective of independent Member States, the CTA can be submitted to a country-specific regulatory agency that will oversee development of the new product. The regulatory systems of the EU and the United States both allow pre-submission consultation on the proposed development programs via discussions with regulatory agencies or expert national bodies. These are known as pre-IND (PIND) meetings in the United States and Investigational Medicinal Product Dossier (IMPD) discussions in the EU. These meetings serve to guide the structure of the clinical programs and can substantially reduce the risk of regulatory delays as the programs begin. PIND meetings are common, albeit not required, whereas IMPD discussions are often necessary prior to CTA submission. At intermediate stages of clinical development, pauses for regulatory review must be added between clinical study phases. Such end-of-phase review times may range from one to several months depending on the technology and disease indication. In advanced stages of product development, after pivotal, placebo-controlled randomized Phase III studies are complete, drug approval requests typically require extensive time for review and decision-making on the part of the regulatory agencies. In the United States, the Food and Drug Administration (FDA) controls the centralized marketing approval/authorization/licensing of a new product, a process that requires in-depth review and acceptance of a New Drug Application (NDA) for chemical entities, or a Biologics License Application for biologics, the latter including PMP proteins. The EU follows both decentralized processes as well as centralized procedures covering all Member States. The Committee for Medicinal Products for Human Use, part of the European Medicines Agency (EMA), has responsibilities similar to those of the FDA and plays a key role in the provision of scientific advice, evaluation of medicines at the national level for conformance with harmonized positions across the EU, and the centralized approval of new products for market entry in all Member States.

The statute-conformance review procedures practiced by the regulatory agencies require considerable time because the laws were established to focus on patient safety, product quality, verification of efficacy, and truth in labeling. The median times required by the FDA, EMA, and Health Canada for full review of NDAs were reported to be 322, 366, and 352 days, respectively. Collectively, typical interactions with regulatory agencies will add more than 1 year to a drug development program. Although these regulatory timelines are the status quo during normal times, they are clearly incongruous with the needs for rapid review, approval, and deployment of new products in emergency use scenarios, such as emerging pandemics.

Plant-made intermediates, including reagents for diagnostics, antigens for vaccines, and bioactive proteins for prophylactic and therapeutic medical interventions, as well as the final products containing them, are subject to the same regulatory oversight and marketing approval pathways as other pharmaceutical products. However, the manufacturing environment as well as the peculiarities of the plant-made active pharmaceutical ingredient (API) can affect the nature and extent of requirements for compliance with various statutes, which in turn will influence the speed of development and approval. In general, the more contained the manufacturing process and the higher the quality and safety of the API, the easier it has been to move products along the development pipeline. Guidance documents on quality requirements for plant-made biomedical products exist and have provided a framework for development and marketing approval. Upstream processes that use whole plants grown indoors under controlled conditions, including plant cell culture methods, followed by controlled and contained downstream purification, have fared best under regulatory scrutiny. This is especially true for processes that use non-food plants such as Nicotiana species as expression hosts. The backlash over the ProdiGene incident of 2002 in the United States has refocused subsequent development efforts on contained environments. In the United States, field-based production is possible and even practiced, but such processes require additional permits and scrutiny by the United States Department of Agriculture (USDA). In May 2020, to encourage innovation and reduce the regulatory burden on the industry, the USDA’s Animal and Plant Health Inspection Service revised its regulations covering the interstate movement or release of genetically modified organisms into the environment in an effort to regulate such practices with higher precision [SECURE Rule revision of 7 Code of Federal Regulations 340].

In contrast, the production of PMPs using GMOs or transient expression in the field comes under heavy regulatory scrutiny in the EU, and several statutes have been developed to minimize environmental, food, and public risk. Many of these regulations focus on the use of food species as hosts. The major perceived risks of open-field cultivation are the contamination of the food/feed chain, and gene transfer between GM and non-GM plants. This is true today even though containment and mitigation technologies have evolved substantially since those statutes were first conceived, with the advent and implementation of transient and selective expression methods; new plant breeding technologies; use of non-food species; and physical, spatial, and temporal confinement. The United States and the EU differ in their philosophy and practice for the regulation of PMP products. In the United States, regulatory scrutiny is at the product level, with less focus on how the product is manufactured. In the EU, much more focus is placed on assessing how well a manufacturing process conforms to existing statutes. Therefore, in the United States, PMP products and reagents are regulated under pre-existing sections of the United States CFR, principally under various parts of Title 21, which also apply to conventionally sourced products. These include current good manufacturing practice (cGMP) covered by 21 CFR Parts 210 and 211, good laboratory practice toxicology, and a collection of good clinical practice requirements specified by the ICH and accepted by the FDA.
In the United States, upstream plant cultivation in containment can be practiced using qualified methods to ensure consistency of vector, raw materials, and cultivation procedures and/or, depending on the product, under good agricultural and collection practices. For PMP products, cGMP requirements do not come into play until the biomass is disrupted in a fluid vehicle to create a process stream. All process operations from that point forward, from crude hydrolysate to bulk drug substance and final drug product, are guided by 21 CFR 210/211. In Europe, biopharmaceuticals, regardless of manufacturing platform, are regulated by the EMA and, in the United Kingdom, by the Medicines and Healthcare products Regulatory Agency. Pharmaceuticals from GM plants must adhere to the same regulations as all other biotechnology-derived drugs. These guidelines are largely specified by the European Commission in Directive 2001/83/EC and Regulation No 726/2004. However, upstream production in plants must also comply with additional statutes. Cultivation of GM plants in the field constitutes an environmental release and has been regulated by the EC under Directive 2001/18/EC, and under Regulation 1829/2003/EC if the crop can be used as food/feed. The production of PMPs using whole plants in greenhouses or cell cultures in bioreactors is regulated by the “Contained Use” Directive 2009/41/EC, whose requirements are far less stringent than those for an environmental release and do not necessitate a fully-fledged environmental risk assessment. Essentially, the manufacturing site is licensed for contained use and production proceeds in much the same manner as in a conventional facility using microbial or mammalian cells as the production platform. With respect to GMP compliance, the major differentiator between the regulation of PMP products and the same or similar products manufactured using other platforms is the upstream production process. This is because many downstream processing (DSP) techniques are product-dependent and, therefore, similar regardless of the platform, including most of the DSP equipment, with which regulatory agencies are already familiar.

A more robust strain could have higher resistance to salts present in the effluent

The dry biomass measurement showed highly unexpected results, with the TP-Effluent having more biomass accumulation than even TAP. One explanation for this result is that the residual chemicals in the effluent were not properly washed away or evaporated during the dry biomass collection process. KHCO3 is a salt that could have been retained in the dry biomass, which could explain the relatively large dry biomass measurement for the TP-Effluent cultures. Microscopic analysis showed that the 75% TP-Effluent and 50% TP-Effluent cultures had the highest cell densities. The TP-Effluent and 25% TP-Effluent cultures more closely resembled the negative control TP cultures, with poor growth relative to the other cultures. Over longer growth periods, there could be a larger difference between the different acetate concentrations, but for the purposes of this experiment, it was discovered that using smaller doses of effluent was practical and could increase the use efficiency of the costly effluent. This experiment, in conjunction with the Drop-Out experiment, also helped our collaborators at the University of Delaware understand how to optimize the chemical composition of the effluent. For the growth experiment where the effluent produced from the electrocatalytic process was procured and incorporated into the media, heterotrophic growth of algae was demonstrated successfully. Figure 4A shows that all three cultures grown with effluent media exhibit clear growth after 4 days. This growth is comparable to TAP, as shown in Figures 4B-D. It was also found that performing cell counts through hemocytometry, although more labor-intensive, significantly decreased the errors between triplicates.
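Hemocytometer counts convert to cell densities with the standard chamber-volume factor (each large square of a Neubauer chamber holds 0.1 µL, so cells/mL = mean count per square × dilution × 10⁴); a minimal sketch with hypothetical counts:

```python
# Standard hemocytometer conversion: each large square covers 1 mm x 1 mm
# with a 0.1 mm chamber depth, i.e. 0.1 uL, so multiply by 1e4 for cells/mL.
# The counts and dilution factor below are hypothetical.

counts = [42, 38, 45, 40]   # cells counted in four large squares
dilution = 2                # sample diluted 1:2 before loading (assumed)

mean_count = sum(counts) / len(counts)
cells_per_ml = mean_count * dilution * 1e4
print(f"{cells_per_ml:.2e} cells/mL")  # 8.25e+05 cells/mL
```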

From this final experiment, the first instance of algal growth completely decoupled from photosynthesis was achieved. For future continuation of this project, the next steps are to optimize the growing process by media treatment or to employ the use of highly controlled bioreactors. The use of other algal species or other strains of Chlamydomonas reinhardtii can be considered as well. By doing so, there is an opportunity to develop a system that exceeds the efficiency of conventional photosynthetic systems and can be applied to agriculture for food and to biotechnology industries such as biofuel production. This project was presented as an online presentation at the 2021 Undergraduate Research and Creative Activity Symposium at the University of California, Riverside.

The Paharpur Business Centre and Software Technology Incubator Park is a seven-story, 50,400 ft² office building located near Nehru Place in New Delhi, India. The occupancy of the building at full normal operations is about 500 people. The building management philosophy embodies innovation in energy efficiency while providing full service and a comfortable, safe, healthy environment to the occupants. Provision of excellent indoor air quality (IAQ) is an expressed goal of the facility, and the management has gone to great lengths to achieve it. This is particularly challenging in New Delhi, where ambient urban pollution levels rank among the worst on the planet. The approach to provide good IAQ in the building includes a range of technical elements: air washing and filtration of ventilation intake air from the rooftop air handler, the use of an enclosed rooftop greenhouse with a high density of potted plants as a bio-filtration system, dedicated secondary HVAC/air handling units on each floor with re-circulating high-efficiency filtration and UVC treatment of the heat exchanger coils, additional potted plants for bio-filtration on each floor, and a final exhaust via the restrooms located at each floor.

The conditioned building exhaust air is passed through an energy recovery wheel and chemisorbent cartridge, transferring some heat to the incoming air to increase HVAC energy efficiency. The management uses "green" cleaning products exclusively in the building. Flooring is a combination of stone, tile, and "zero-VOC" carpeting. Wood trim and finish appear to be primarily of solid sawn materials, with very little evidence of composite wood products. Furniture is likewise in large proportion constructed from solid wood. The overall impression is that of a very clean and well-kept facility. Surfaces are polished to a high sheen, probably with wax products. There was an odor of urinal cake in the restrooms. Smoking is not allowed in the building.

The plants used in the rooftop greenhouse and on the floors comprise a number of species selected for the following functions: daytime metabolic carbon dioxide absorption, nighttime metabolic CO2 absorption, and volatile organic compound and inorganic gas absorption/removal for air cleaning. The building contains a reported 910 indoor plants. Daytime metabolic species reported by the PBC include Areca Palm, Oxycardium, Rubber Plant, and Ficus alii, totaling 188 plants. The single nighttime metabolic species is the Sansevieria, with a total of 28 plants. The "air cleaning" plant species reported by the PBC include the Money Plant, Aglaonema, Dracaena Warneckii, Bamboo Palm, and Raphis Palm, with a total of 694 plants. The 161 plants in the greenhouse are grown hydroponically, with room air blown by fan across the plant root zones; the plants on the building floors are grown in pots and are located on floors 1-6.

We conducted a one-day monitoring session in the PBC on January 1, 2010. The date of the study was based on the availability of the measurement equipment that the researchers had shipped from Lawrence Berkeley National Lab in the U.S.A.

The study date was not optimal because a large proportion of the regular building occupants were not present, it being New Year's Day. An estimated 40 people were present in the building throughout January 1. That said, the building systems were in normal operation, including the air handlers and other HVAC components. The study focused primarily on measurements in the greenhouse, the 3rd- and 5th-floor environments, and the rooftop outdoors. Measurements included a set of volatile organic compounds and aldehydes, with a more limited set of observations of indoor and outdoor particulate and carbon dioxide concentrations. Continuous measurements of temperature and relative humidity were made at selected indoor and outdoor locations.

Air sampling stations were set up in the greenhouse, Room 510, Room 311, the 5th- and 3rd-floor air handler intakes, the building rooftop HVAC exhaust, and an ambient location on the roof near the HVAC intake. VOC and aldehyde samples were collected at least once at all of these locations. Both supply and return registers were sampled in Rooms 510 and 311, as were a greenhouse inlet register from the air washer and an outlet register ducted to the building's floor level.

Air samples for VOCs were collected and analyzed following U.S. Environmental Protection Agency Method TO-17. Integrated air samples with a total volume of approximately 2 L were collected at the sites, at a flow rate of <70 cc/min, onto preconditioned multibed sorbent tubes containing Tenax-TA backed with a section of Carbosieve. The VOCs were desorbed and analyzed by thermodesorption into a cooled injection system and resolved by gas chromatography. The target chemicals, listed in Table 1, were qualitatively identified on the basis of a mass spectral library search, followed by comparison to reference standards. Target chemicals were quantified using multi-point calibrations developed with pure standards and referenced to an internal standard. Sampling was conducted using Masterflex L/S HV-07553-80 peristaltic pumps assembled with quad Masterflex L/S Standard HV-07017-20 pump heads.

Concentrations of formaldehyde, acetaldehyde, and acetone were determined following U.S. Environmental Protection Agency Method TO-11a. Integrated samples were collected by drawing air through silica gel cartridges coated with 2,4-dinitrophenylhydrazine at a flow rate of 1 L/min, with an ozone scrubber cartridge installed upstream of the sample cartridge. Sample cartridges were eluted with 2 mL of high-purity acetonitrile, analyzed by high-performance liquid chromatography with UV detection, and quantified with a multipoint calibration for each derivatized target aldehyde. Sampling was conducted using Masterflex L/S HV-07553-71 peristaltic pumps assembled with dual Masterflex L/S Standard HV-07016-20 pump heads.

Continuous measurements of PM2.5 were made with TSI Dustrak model 8520 monitors in Room 510 and at the rooftop sampling site from about 13:30 to 16:30 on the sampling day. The indoor particle monitor was located on a desk in Room 510 and the outdoor monitor on a surface elevated above the roof deck. Carbon dioxide spot measurements of about 10-minute duration were made throughout the building during the afternoon using a portable data-logging real-time infrared monitor. Temperature and RH were monitored in the greenhouse, Room 510, and Room 311 using Onset HOBO U12-011 data loggers at one-minute recording rates. Outdoor T and RH were not monitored.
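As a quick consistency check on the TO-17 parameters above, a ~2 L integrated sample at a flow below 70 cc/min implies a collection period of at least roughly half an hour; the snippet below only makes that arithmetic explicit.

```python
# Minimum TO-17 sampling duration implied by the parameters quoted above:
# a ~2 L integrated sample collected at a flow below 70 cc/min.
sample_volume_cc = 2000
max_flow_cc_min = 70
print(sample_volume_cc / max_flow_cc_min)   # ~28.6, i.e. at least ~29 minutes
```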

The measured VOC concentrations, as well as their limits of quantitation (LOQ) by the measurement methods, are shown in Table 2. Figures 1-6 show bar graphs of these VOCs. Unless otherwise shown, all measured compounds were above the minimum detection level, but not all measurements were above the LOQ; those below the LOQ should be considered approximations. These air contaminants are organized by possible source categories: carbonyl compounds that can be odorous or irritating; compounds often emitted by building cleaning products; those associated with bathroom products; those often emitted from office products, supplies, materials, occupants, and in ambient air; those from plant and wood materials as well as some cleaning products; and finally plasticizers commonly emitted from vinyl and other flexible or resilient plastic products. The groupings in this table are for convenience; many of the listed compounds have multiple sources, so the attribution provided may be erroneous.

The carbonyl compounds include formaldehyde, which can be emitted from composite wood materials, adhesives, and indoor chemical reactions; acetaldehyde, from building materials and indoor chemistry; and acetone, from cleaners and other solvents. Benzaldehyde sources can include plastics, dyes, fruits, and perfumes. Hexanal, nonanal, and octanal can be emitted from engineered wood products. For many of these compounds, outdoor air can also be a major source. Formaldehyde and acetone were the most abundant carbonyl compounds observed in the PBC. For context, the California 8-h and chronic non-cancer reference exposure level (REL) for formaldehyde is 9 µg m⁻³ and the acute REL is 55 µg m⁻³. The 60-minute average formaldehyde concentrations observed in the PBC exceeded the REL by up to a factor of three. Acetone has low toxicity, and the observed levels were orders of magnitude lower than concentrations of health concern.

Hexanal, nonanal, and octanal are odorous compounds at low concentrations; their established odor thresholds are 0.33 ppb, 0.53 ppb, and 0.17 ppb, respectively. Average concentrations observed within the PBC building were 3.8±0.8 ppb, 3.5±0.6 ppb, and 1.4±0.2 ppb for these compounds, respectively, roughly ten times higher than the odor thresholds. Concentrations of these compounds in the supply air from the greenhouse were substantially lower, although still in excess of the odor thresholds. Concentrations of hexanal and nonanal roughly doubled relative to ambient as the outside air passed through the greenhouse; octanal concentrations were roughly similar in the ambient air and in the air exiting the greenhouse.

Concentrations of benzene, d-limonene, n-hexane, naphthalene, and toluene all increased in the greenhouse air in either the AM or PM measurements. The measured levels of these compounds were far below any health-relevant standards, although naphthalene concentrations reached close to 50% of the California REL of 9 µg m⁻³. The concentrations of these compounds were generally somewhat higher indoors relative to the greenhouse. The concentration of toluene in the building exhaust was 120 µg m⁻³, more than double the highest level measured indoors, suggesting a possible toluene source in the restrooms. The cleaning compound 2-butoxyethanol was slightly higher indoors, but at very low concentrations; the same was true of trichloroethylene, which was observed at extremely low levels indoors.
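The odor-threshold comparisons above mix ppb with the µg m⁻³ units used elsewhere in the text; the conversion between the two at 25 °C and 1 atm is a one-liner. The sketch below applies it to the three aldehydes, using standard molecular weights that are not taken from the study itself.

```python
# ppb -> ug/m3 at an assumed 25 C and 1 atm (molar volume 24.45 L/mol).
# Molecular weights are standard reference values, not from the study.
def ppb_to_ug_m3(ppb, mw_g_mol, molar_volume_l=24.45):
    return ppb * mw_g_mol / molar_volume_l

for name, ppb, mw in [("hexanal", 3.8, 100.16),
                      ("nonanal", 3.5, 142.24),
                      ("octanal", 1.4, 128.21)]:
    print(name, round(ppb_to_ug_m3(ppb, mw), 1), "ug/m3")
```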
The compounds listed in this category have many sources, including outdoor air. For the most part there was little difference across the building spaces for these compounds, and little difference from the ambient air measurement. The single exception is methylene chloride, which appears to increase by about a factor of ten indoors. It is possible that this compound is in use as a cleaning solvent, or it may be present in computer equipment or other supplies; methylene chloride is also used as a spot remover in dry cleaning and may be carried into the building on occupant clothing. The levels of this compound were low relative to health standards.

Credible and prediction intervals in the shoot at harvest were similar for both models

The viral decay rate in the soil determined by Roberts et al. was adopted because the experimental temperature and soil type are more relevant to lettuce growing conditions than those of the other decay study. Decay rates in the root and shoot were taken from the hydroponic system predictions. The transport model was fitted to log10 viral concentration data from DiCaprio et al., extracted from the graphs therein using WebPlotDigitizer. In these experiments, NoV of a known concentration was spiked into the feed water of hydroponic lettuce and monitored in the feed water, root, and shoot over time. While fitting the model, an initial feed volume of 800 mL was adopted, and parameter sets producing final volumes below 200 mL were rejected. To fit the model while accounting for uncertainty in the data, a Bayesian approach was used to maximize the likelihood of the data given the parameters. A posterior distribution of the parameters was obtained with the differential evolution Markov chain (DE-MC) algorithm, which can be parallelized and can handle multi-modality of the posterior distribution without fine-tuning the jumping distribution.
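For readers unfamiliar with DE-MC, the sketch below shows its core update rule: each chain proposes a move along the scaled difference of two other chains, accepted by a Metropolis step. It is a minimal illustration assuming a user-supplied log_posterior function and box-shaped search bounds; it is not the authors' implementation.

```python
# Minimal DE-MC sketch (after ter Braak's scheme). log_posterior and
# bounds are assumed inputs; all names here are illustrative.
import numpy as np

def de_mc(log_posterior, bounds, n_chains=10, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(lo)
    gamma = 2.38 / np.sqrt(2 * d)                 # standard DE-MC jump scale
    X = rng.uniform(lo, hi, size=(n_chains, d))   # initialize chains in the box
    logp = np.array([log_posterior(x) for x in X])
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            # propose along the scaled difference of two other chains
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                                size=2, replace=False)
            prop = X[i] + gamma * (X[r1] - X[r2]) + rng.normal(0, 1e-6, d)
            if np.any(prop < lo) or np.any(prop > hi):
                continue                          # zero prior outside bounds
            lp = log_posterior(prop)
            if np.log(rng.uniform()) < lp - logp[i]:
                X[i], logp[i] = prop, lp          # Metropolis acceptance
        samples.append(X.copy())
    return np.array(samples)                      # shape (n_iter, n_chains, d)
```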

The rationale behind the model fitting procedure and the diagnostics are discussed in Supplementary section S1H. The initial viral concentration in the irrigation water was drawn from an empirical distribution reported previously by Lim et al. for NoV in activated-sludge-treated secondary effluent. As justified by Lim et al., the sum of the concentrations of two genotypes known to cause illness was used to construct the distribution. To estimate the risk from consumption of lettuce, the daily viral dose was computed using Eq. 10 for the kth day. The body weight was drawn from an empirical distribution for all ages and genders in the United States, from a report of percentile data of body weight. The lettuce consumption rate was drawn from an empirical distribution constructed from data reported by the Continuing Survey of Food Intakes by Individuals (CSFII). The 'consumer only' data for all ages and genders were used, and hence the reported risk is only for those who consume lettuce. It is important to note that the daily viral dose was computed from the model output using the shoot density ρshoot, to be consistent with the consumption rate reported in CSFII. Several different NoV dose-response models have been proposed based on limited clinical data. The validity of these models is a matter of much debate, which is beyond the scope of this study.
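A minimal sketch of this Monte Carlo dose draw follows. Since Eq. 10 is not reproduced in this excerpt, the dose is assumed, for illustration only, to be the shoot viral load per gram times the grams of lettuce eaten (intake rate times body weight); the empirical distributions below are placeholders for the CSFII and body-weight data.

```python
# Hedged sketch of the daily-dose Monte Carlo draw; distributions and the
# form of Eq. 10 are assumptions, not the study's actual inputs.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Placeholder empirical distributions; the study drew from CSFII
# 'consumer only' intake data and U.S. body-weight percentile data.
body_weight_kg = rng.choice([55.0, 70.0, 85.0], size=n, p=[0.3, 0.5, 0.2])
intake_g_per_kg_day = rng.choice([0.5, 1.0, 2.0], size=n, p=[0.5, 0.3, 0.2])
c_shoot_per_g = 10 ** rng.normal(1.0, 0.5, size=n)  # NoV per g shoot (illustrative)

# Assumed reading of Eq. 10: dose = viral load per gram of shoot
# times grams of lettuce consumed per day.
daily_dose = c_shoot_per_g * intake_g_per_kg_day * body_weight_kg
```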

These models differ in their assumptions, resulting in large variability of predicted risk outcomes. To cover the range of potential outcomes of human exposure to NoV, we estimated and compared risk outcomes using three models: 1) approximate Beta-Poisson; 2) hypergeometric; and 3) fractional Poisson. In the risk estimation, we assumed that NoV in the lettuce tissue exists as individual viral particles and used the disaggregated NoV models. The model equations are given by Eqs. 11-13, Table 1. Ten thousand samples of the daily infection risks were calculated from the BP and FP models using MATLAB R2016a; Wolfram Mathematica 11.1 was used for the hypergeometric model estimation as it was faster. Using a random set of 365 daily risk estimates, the annual infection risk was calculated according to the Gold Standard estimator using Eq. 14, Table 1. While there appears to be some dose dependence for illness resulting from infection (Pill|inf), this has not been clearly elucidated for the different dose-response models. Hence, we adopted the procedure used in Mara and Sleigh and calculated annual illness risk with Eq. 15.

Under the assumption of first-order viral decay, NoV loads in water at two time points did not fall in the credible region of model predictions, indicating that mere first-order decay was unsuitable to capture the observed viral concentration data. The addition of the attachment-detachment (AD) factor into the model addressed this inadequacy and, importantly, supported the curvature observed in the experimental data.
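The Gold Standard annual-risk estimator referenced above composes the 365 daily infection risks into an annual probability; a sketch under that reading of Eq. 14 follows, with the resampling scheme being an assumption rather than the authors' stated procedure.

```python
# Annual infection risk from daily risks, as we read Eq. 14:
# P_annual = 1 - prod_k (1 - p_k) over 365 days.
import numpy as np

def annual_risk(daily_risks):
    return 1.0 - np.prod(1.0 - np.asarray(daily_risks))

# Resample 365 of the Monte Carlo daily risks many times to get a
# distribution of annual risks (resampling scheme assumed).
def annual_risk_samples(p_daily, n_sets=10_000, seed=2):
    rng = np.random.default_rng(seed)
    draws = rng.choice(p_daily, size=(n_sets, 365), replace=True)
    return 1.0 - np.prod(1.0 - draws, axis=1)
```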

This result indicates that the AD of viruses to the hydroponic tank wall is an important factor to include in predicting viral concentration in all three compartments. The adequacy of the model fit was also revealed by the credible intervals of the predicted parameters for the model with AD. Four of the predicted parameters, at, bt, kdec,s, and kp, were restricted to a smaller subset of the search bounds, indicating that they were identifiable. In contrast, the viral transfer efficiency η and the kinetic parameters spanned the entirety of their search space and were poorly identifiable. However, this does not suggest that each parameter can independently take any value in its range, because the joint distributions of the parameters indicate how fixing one parameter influences the likelihood of another. Hence, despite the large range of an individual parameter, the coordination between the parameters constrained the model predictions to produce reliable outcomes. Therefore, the performance of the model with AD was considered adequate for estimating parameters used for risk prediction.

Risk estimates for lettuce grown in the hydroponic tank or soil are presented in Fig. 4. Across these systems, the FP model predicted the highest risk while the 1F1 model predicted the lowest. For a given risk model, higher risk was predicted in the hydroponic system than in the soil; this is a consequence of the very low detachment rates in soil compared to the attachment rates. Comparison of results from Sc1 and Sc2 for soil-grown lettuce indicated lower risks and disease burdens under Sc1. Against the safety guidelines, the lowest risk predicted in the hydroponic system is higher than the U.S. EPA-defined acceptable annual drinking water risk of 10−4 for each risk model, and the annual burdens are above the 10−6 benchmark recommended by the WHO. In the case of soil-grown lettuce, neither Sc1 nor Sc2 met the U.S. EPA safety benchmark. Two risk models predicted borderline disease burden according to the WHO benchmark for soil-grown lettuce under Sc1, but under Sc2 the risk still did not meet the safety guideline. Neither increasing the holding time of the lettuce to two days after harvesting nor using bigger tanks significantly altered the predicted risk. In comparison, the risk estimates of Sales-Ortells et al. are higher than the range of soil-grown lettuce outcomes presented here for two of the three models.

The SCSA sensitivity indices are presented in Fig. 5. For hydroponically grown lettuce, the top three factors influencing daily risk are the amount of lettuce consumed, the time since last irrigation, and the term involving consumption and ρshoot. The risk estimates are also robust to the fitted parameters, despite the low identifiability of some model parameters. For soil-grown lettuce, kp appears to be the most influential parameter, followed by the input viral concentration in irrigation water and the lettuce harvest time. Scorr is near zero, suggesting little influence of correlation in the input parameters.

In this study, we modeled the internalization and transport of NoV from irrigation water to the lettuce using ordinary differential equations to capture the dynamic processes of viral transport in lettuce.

This first attempt is aimed at underscoring the importance of the effect of time in determining the final risk outcome. The modeling approach from this study may be customized for other scenarios, for the management of water reuse practices, and for developing new guidelines for food safety. Moreover, this study identifies critical gaps in the current knowledge of pathogen transport in plants and calls for further lab and field studies to better understand the risk of water reuse.

To develop an adequate model of viral transport in plant tissue, it is necessary to couple mathematical assumptions with an understanding of the underlying biogeochemical processes governing virus removal, plant growth, growth conditions, and virus-plant interactions. For example, although a simple transport model without AD could predict the viral load in the lettuce at harvest, it failed to capture the initial curvature in the viral load in the growth medium. An alternative to the AD hypothesis that could capture this curvature is the existence of two populations of viruses, as used in Petterson et al., one decaying more slowly than the other. However, a closer examination of the double exponential model revealed that it was not time-invariant: the time taken to decay from a concentration C1 to C2 is not unique and depends on the history of the events that occurred. Other viral models, such as those used in Peleg and Penchina, face the same issue. The incorporation of AD made the model time-invariant, always yielding the same decay time between two given concentrations. This model fitting experience showcases how mathematics can guide the understanding of biological mechanisms.
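The time-invariance contrast can be made explicit. The notation below is assumed, since the excerpt does not reproduce the equations: under first-order decay the transit time between two concentrations depends only on their ratio, whereas under a double-exponential model it depends on when the starting concentration is reached.

```latex
\[
\text{First order: } C(t) = C_0 e^{-kt}
\;\Longrightarrow\;
\Delta t_{C_1 \to C_2} = \frac{1}{k}\,\ln\frac{C_1}{C_2}
\quad \text{(depends only on } C_1/C_2\text{)}
\]
\[
\text{Double exponential: } C(t) = A e^{-k_1 t} + B e^{-k_2 t}
\;\Longrightarrow\;
\Delta t_{C_1 \to C_2} \text{ depends on the time } t_1 \text{ at which } C_1 \text{ is reached.}
\]
```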

The hypothesis of two different NoV populations is less plausible than that of viral attachment to and detachment from the hydroponic tank. While it appears that incorporating the AD mechanism does not significantly improve viral load prediction in the lettuce shoot at harvest, this is a consequence of force-fitting the model to data under the given conditions; changing the conditions, for example by reducing the viral attachment rate to the tank wall, would lead to underestimated virus loads in the lettuce shoot in the absence of AD. Through this model fitting exercise, we also acknowledge that the model can be significantly improved with new insights on virus-plant interactions and more data on viral transport through plants.

A potential cause for concern in the model fit is the wide intervals. However, there is significant uncertainty in the data as well, suggesting that the transport process itself is noise-prone. Moreover, from the perspective of risk assessment, the variability between dose-response models is higher than the within-model variability. Since within-model variability stems from uncertainty in viral loads at harvest, among other factors, the wide intervals do not exert a bigger effect than the discordance among the dose-response models.

Some parameters are identifiable to a good degree through model fitting, but there is a large degree of uncertainty in the viral transfer efficiencies and the AD kinetic parameters. While this could be a consequence of fitting a limited number of data points with several parameters, the viral load at harvest and the risk estimates were well constrained. This combination of large parameter variation with 'usefully tight quantitative predictions' is termed the sloppiness of parameter sensitivities, and it has been observed in physics and systems biology. Well-designed experiments may simultaneously reduce uncertainty in the parameters as well as in the predictions, thereby increasing confidence in the predictions. One possible experiment to reduce parameter uncertainty is recording the transpiration and growth rate to fit Eq. 6 independently and so acquire at and bt.

An interesting outcome of our analysis is the strong association of risk with plant growth conditions. The health risks from consuming lettuce irrigated with recycled wastewater are highest for hydroponically grown lettuce, followed by soil-grown lettuce under Sc2, and lowest for soil-grown lettuce under Sc1. This difference in risk estimates stems to a large degree from the difference in AD kinetic constants. Increasing katt,s will decrease risk, as more viruses become attached to the growth medium, while increasing kdet,s will have the opposite effect, as more detached viruses are available for uptake by the plant. The combined effect of the AD parameters depends on their magnitudes and is portrayed in Supplementary Fig. S5. This result indicates that a better understanding of the virus's interaction with the growth environment can lead to an improved understanding of risk. More importantly, it indicates that soil plays an important role in the removal of viruses from irrigation water through adsorption of viral particles; an investigation focused on the influence of soil composition on viral attachment would help refine the transport model. The risk predicted by this dynamic transport model is greater than the EPA annual infection risk as well as the WHO annual disease burden benchmark. The reasons for this outcome are many-fold.
First, there is significant variability in the reported internalization of viruses in plants.

Mycorrhizal inoculation had no significant effect on the total weathering losses of any of the elements examined

When assessing the effects of CO2 on growth or weathering parameters, the different seedling treatments were combined and only planted columns were considered. Statistical analysis was performed using JMP software version 5.01a. Two-way ANOVA was used to determine treatment and interaction effects. Significant differences between CO2 treatments were assessed with a Student's t-test, and significant differences between ectomycorrhizal treatments with Tukey's HSD test, using a one-way ANOVA.

Upon harvest we observed highly variable rates of mycorrhizal colonization, and we classified the seedlings as abundantly, moderately, or sparsely colonized. Of the 20 seedlings in the mycorrhizal treatments, 6, 4, and 10 seedlings were abundantly, moderately, and sparsely colonized, respectively, at the time of harvest. While the root systems extended 12-21 cm down the 30 cm columns, no mycorrhizae were found deeper than 6 cm. There were no extraradical hyphae or ectomycorrhizae resembling Piloderma fallax or Suillus variegatus observed in the non-mycorrhizal treatment, but there were turgid, smooth, black root tips observed in the non-mycorrhizal treatments that may have been thelephoroid mycorrhizae. Chitin analysis of roots and growing medium showed almost no chitin in our unplanted controls and significant chitin in both mycorrhizal treatments. There were also significant amounts of chitin in the non-mycorrhizal planted treatments. In both mycorrhizal treatments most chitin was found in the mineral mix, whereas in the non-mycorrhizal treatment most chitin was found in the rhizosphere and roots. Combining the two mycorrhizal treatments, we found moderately more chitin in the elevated CO2 treatment: 86 mg chitin/column vs. 50 mg chitin/column. When chitin content was expressed per gram of seedling biomass, the mycorrhizal treatments did have significantly more chitin than the non-mycorrhizal treatment, and elevated CO2 was associated with higher chitin content in the Piloderma treatment.

Formic, lactic, and acetic acids made up the majority of measured low molecular weight organic acids (LMWOAs), comprising 82%, 12%, and 4% of the total, respectively. Much smaller amounts of malonate, oxalate, fumarate, and succinate were occasionally detected, but their occurrence in measurable quantities was not associated with any treatment. Non-planted columns had significantly lower LMWOA levels than planted columns, while P. fallax columns had significantly higher LMWOA concentrations than non-planted columns but significantly lower levels than either the non-mycorrhizal or S. variegatus columns. These differences were driven by differences in the production of formate and lactate. Columns exposed to elevated CO2 produced significantly more total LMWOAs, a difference driven primarily by significantly greater formic acid production. When LMWOA amounts are calculated per gram seedling DW, there are no significant differences between mycorrhizal or CO2 treatments.

Solution pH was measured in the leachate from seven sampling dates. Leachate pH was consistently alkaline; the nutrient solution used to water the columns was pH 5.
There was no significant difference in leachate pH between elevated and ambient CO2 treatments, nor between the non-mycorrhizal and mycorrhizal treatments, but the leachate of the non-planted controls was significantly lower in pH than that of the planted treatments on five of the seven sampling dates, and moderately but not significantly lower on the other two. The pH of leachate from the planted columns did not change appreciably over time, while that from the unplanted columns decreased slightly.

The needle concentrations of Ca, K, Mg, Fe, and P were all at or above deficiency thresholds for P. sylvestris, though P was near the lower limit for optimal growth. CO2 treatment had no effect on needle nutrient concentrations, and mycorrhizal treatment affected only needle [Ca]. Neither CO2 level nor mycorrhizal treatment had a significant effect on total seedling contents of Ca, K, or Mg, despite significant differences in seedling biomass between CO2 treatments. The differences in seedling Ca and K contents between treatments correlated tightly with seedling DW, while the opposite trend was found for Mg. Ectomycorrhizal colonization, particularly with Piloderma, increased seedling Mg concentration.

Whole-column weathering budgets for the four elements Si, Ca, K, and Mg show no effect of CO2 on total weathering losses. The presence of seedlings, mycorrhizal or not, did significantly enhance weathering losses compared to non-planted controls. More Si and Mg were weathered in columns planted with P. fallax-inoculated seedlings, but these differences were not significant. The major pool of weathering losses for Mg, K, and Ca in the seedling treatments was uptake into the growing seedlings; losses of magnesium were particularly dominated by seedling uptake. Si losses were fairly evenly distributed between ΔCEC and column leachate, while seedling uptake was negligible. As stated previously, ΔCEC was negative for calcium and slightly negative for some K and Mg treatments. When the nutrients added during watering were subtracted from the final budgets, the net weathering losses of Ca, K, and Mg in the non-planted treatment were negative or only slightly positive, suggesting a "missing sink" for weathering products.

Elevated CO2 increased the biomass of the Pinus sylvestris seedlings. Other studies on coniferous seedlings have generally found a growth stimulation with elevated CO2, and this trend includes several studies on mycorrhizal Pinus sylvestris seedlings, although some other studies have failed to find an effect of elevated CO2 on growth. We found a slight reduction in root:shoot ratio with elevated CO2. Most studies on ectomycorrhizal P. sylvestris seedlings that have noted a growth stimulation from elevated CO2 have also found an increase in root:shoot. A decrease in root:shoot does not necessarily indicate reduced belowground C allocation, as carbon could be allocated to EM fungi rather than root biomass. If we use Ekblad and Nasholm's estimate that 9.5% of fungal biomass is chitin, then our observed 21 mg average difference in chitin between elevated and ambient CO2 treatments would correspond to about 220 mg more fungal biomass supported by seedlings in the elevated CO2 treatment. This difference would bring the root:shoot ratios of the two CO2 treatments to near parity.
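The chitin-to-biomass conversion above is a single division; the snippet below makes it explicit, using only the numbers quoted in the text.

```python
# Chitin-to-fungal-biomass conversion quoted above (values from the text).
chitin_diff_mg = 21        # mean elevated-minus-ambient CO2 chitin difference
chitin_fraction = 0.095    # Ekblad and Nasholm: chitin ~9.5% of fungal biomass
print(round(chitin_diff_mg / chitin_fraction))   # ~221 mg, the ~220 mg cited
```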
Additionally, a given amount of ectomycorrhizal biomass typically has a significantly higher respiration rate than the same mass of fine roots, and thus represents more belowground carbon allocation. This same explanation for potentially higher belowground carbon allocation despite lower root:shoot applies to our finding of slightly lower root:shoot in the two mycorrhizal treatments relative to the non-mycorrhizal treatment.

Low molecular weight organic acid production was strongly associated with seedling biomass across both CO2 and mycorrhizal treatments. While EMF are often mentioned in the literature as producing significant amounts of LMWOAs, our findings fall in line with the majority of studies examining the EMF role in LMWOA production, which fail to find an increase in LMWOA production when comparing EMF and non-EMF seedlings. However, many of these studies do find that EMF significantly alter the composition of LMWOAs produced, particularly increasing oxalic acid concentrations, which we did not. Similar to Fransson and Johansson, we did not find that elevated CO2 increased LMWOA production beyond its effect on seedling biomass.

Elevated CO2 was associated with significantly increased chitin content in seedlings colonized by Piloderma fallax, but not in seedlings colonized by Suillus variegatus; this increase was due to elevated chitin levels found in the growth matrix, not on the seedling roots. Increased mycorrhizal growth is commonly found in elevated CO2 treatments, and many studies show that this response is highly fungal-species-specific. It is interesting to note that Fransson and Johansson, who assessed the effects of elevated CO2 on the mycorrhizal growth of five ectomycorrhizal species, also found that this strain of Piloderma fallax responded far more to elevated CO2 than any other fungal species examined.

Our finding of no increase in seedling biomass in the mycorrhizal seedlings is not uncommon. Despite the fact that Pinus sylvestris is considered obligately mycorrhizal, many studies have found a growth depression of P. sylvestris with mycorrhizal colonization when mycorrhizal and non-mycorrhizal seedlings are compared. Given the high needle nutrient concentrations in our non-mycorrhizal seedlings and the very dense rooting we observed, it seems likely that high nutrient availability and a very restricted rooting zone were the primary reasons we observed no growth stimulation by mycorrhizae. The generally low levels of mycorrhizal colonization we observed were likely a result of nutrient levels being too high, or of insufficient drainage. The significant amounts of chitin observed in the non-mycorrhizal treatments may indicate either thelephoroid mycorrhizal contamination or saprotrophic fungi growing in the mineral mix. Our visual observations of shiny, turgid, smooth, black roots, and the fact that the majority of the chitin in the non-mycorrhizal treatments was found in the roots rather than in the mineral mix, suggest some thelephoroid mycorrhizal contamination. The larger size of the non-mycorrhizal seedlings and their lower chitin levels indicate that thelephoroid infection was minor.

Nutrient levels were sufficient for healthy, balanced growth. Leachate concentrations of nutrient cations were steady, suggesting that we increased the nutrient amounts sufficiently to keep up with the increasing nutrient demand of the growing seedlings.
The fact that none of the needle nutrient concentrations were below the sufficiency threshold further suggests that none of the seedlings had severe nutrient deficiencies that could have compromised carbon allocation physiology.

The overall negative weathering observed for some mineral nutrients suggests a missing sink somewhere in our weathering budget. The two most likely possibilities are the formation of secondary precipitates that were not extracted upon harvest, or a large pulse of weathering in the initial three weeks of equilibration, before seedlings were planted. We used a chemical speciation and equilibria model, Visual MINTEQ, to determine whether secondary precipitates may have formed. Given the makeup of LMWOAs observed, the pHs measured, and the elemental concentrations in the drainage as input parameters, the only compound likely to have precipitated would have been Ca-oxalate, and only in very small quantities (<3 µM). This leaves the initial "equilibration flush" as the likely missing sink in our weathering budget. All the columns were treated equally before planting, so this missing sink should not affect the merit or interpretation of our results.

Overall, and in every individual elemental flux, seedlings had a significant effect on weathering. For the nutrient cations K, Mg, and Ca, extra weathering products were taken up by the seedlings, while for Si, which was not taken up in appreciable quantities, extra weathering products were found on exchange sites in the mineral matrix. Mycorrhizal colonization did not significantly increase weathering rates or nutrient uptake, but seedlings colonized by Piloderma fallax did exhibit a trend toward increased weathering. More Si and Mg were mobilized in the P. fallax treatment despite the fact that P. fallax-colonized seedlings were on average smaller, though these differences were not statistically significant. P. fallax-colonized seedlings also had significantly higher Mg concentrations. Elevated CO2 had no effect on the weathering losses of Ca, K, or Si, but increased Mg losses. While seedlings grown in elevated CO2 did have higher plant and fungal biomass and higher total seedling elemental contents, these increases in nutrient uptake were balanced by reduced leaching losses, not by enhanced mineral dissolution.

Soil biota are capable of stimulating the weathering of aluminosilicate minerals by four distinct mechanisms. Proton promotion: positively charged hydronium ions exuded by biota bind with partially charged negative surface sites on minerals, displacing cations from the mineral surface and destabilizing Si or Al on mineral surfaces, facilitating their release into solution. Plant growth is generally a net acidifying phenomenon, as a plant's greater uptake of positively charged nutrient cations than negatively charged nutrient anions leads to net exudation of protons. Ligand promotion: an anion, either inorganic or organic, binds to mineral surface cations, again destabilizing the bond energy at the mineral surface and stimulating the release of surface cations and framework Si or Al. Removal of transport limitation: the removal of weathering products from the surface boundary layer via nutrient uptake or enhanced solution flow eliminates or reduces the constant readsorption of weathering products that occurs in concert with dissolution, enhancing net dissolution.
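The column bookkeeping described above reduces to a simple per-element mass balance; the sketch below states it explicitly. Function and variable names are illustrative, since the study reports no code.

```python
# Whole-column weathering budget implied by the text, per element:
# net loss = seedling uptake + change in exchangeable pool (dCEC)
#          + leachate export - nutrients supplied in the watering solution.
def net_weathering_mg(uptake, delta_cec, leachate, watering_inputs):
    return uptake + delta_cec + leachate - watering_inputs

# A negative budget, as found for Ca in the non-planted columns, flags a
# "missing sink" such as the initial equilibration flush discussed above.
```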

We found our only common Tomentella species to be nitrophobic

We had a very low sequencing success rate, and this appeared to be due to contamination of many samples by saprotrophic or other fungal infection. Our sampling was done in midsummer, which may be a time of high root turnover. The levels of nitrogen fertilization applied in this study, termed "high" and "low," would be considered "extremely high" and "very high" in natural settings. The highest rates of nitrogen deposition in the eastern US are generally below 20 kg N/ha/yr, and many forests considered to be exhibiting nitrogen-saturation-associated decline receive below 15 kg N/ha/yr. The high and low levels of fertilization employed here are 20- and 6-fold higher than the atmospheric deposition levels in this part of the northeast. Because there was no evidence of an interactive effect of nitrogen treatment and horizon on community composition, we will first discuss our findings in the context of nitrogen addition experiments and then discuss the implications of our findings for horizon preference. There is some evidence that deciduous forests may respond differently to nitrogen deposition than coniferous forests, so, where possible, we focus on studies done in deciduous forests. We also do not discuss studies that focus exclusively on ectomycorrhizal sporocarp inventories. Sporocarp inventories formed the majority of early studies on the effects of acid deposition on ectomycorrhizal communities, and could be considered the canary in the coal mine that spurred two decades of research following the seminal work by Arnolds. However, there is ample evidence that sporocarp abundance may not reflect mycorrhizal abundance.

We will address the results of our colonization intensity measurements in the context of rooting distribution, ecosystem biogeochemistry, and ecosystem responses to nitrogen deposition. Our finding of a significant shift in ectomycorrhizal species composition is generally in agreement with other investigations of the effects of N deposition on ectomycorrhizal communities. Avis et al. and Lucas and Casper found that nitrogen fertilization significantly impacted ECM community composition in oak forests of the eastern US, as have numerous studies in coniferous forests. Our results stand apart from those of Avis et al. and Lucas and Casper in that they observed significant effects of N fertilization on ECM community composition at fertilization levels between 20 and 35 kg/ha/yr, and in as little as two years, while we found no significant effects of 18 years of 50 kg/ha/yr of N fertilization on ectomycorrhizal community composition. Avis et al. also failed to find a significant community change after 18 years of fertilization at 54 kg/ha/yr. The majority of N addition studies in coniferous plots have found a significant impact of 30-50 kg N/ha/yr on ECM community composition, though Ishida and Nordin failed to find an effect of 12 years of 50 kg N/ha/yr on ECM community composition in a spruce forest. Wallenda and Kottke identified a general threshold of 20-30 kg N/ha/yr before marked changes in ECM community composition are likely to be observed, although they caution that forest-specific factors may shift this threshold a great deal in either direction; at the time of their review, there were no studies on the effects of N addition on ECM communities in deciduous forests.

Our finding of a marked reduction in ECM diversity with high N fertilization is also in line with Avis et al. and Lucas and Casper, who found significantly reduced ECM diversity in their high N treatments, though again, they noted significant decreases in ECM diversity at levels comparable to our "low" N treatment, while we did not. Avis et al. failed to find a significant effect of N addition on ECM diversity. Among studies in coniferous forests, a reduction in ECM diversity with N addition seems to be the general finding, but studies have also found no reduction in ECM diversity in forests fertilized with moderate or very high levels of N. We could clearly identify certain species as nitrophilic or nitrophobic, but we could not make such characterizations for any higher-level phylogenetic groups other than the Clavulinaceae. We found the Clavulinaceae to be generally nitrophobic, in agreement with the findings of Avis et al. Among our five most abundant Lactarius species, one was nitrophilic, two were more common in the low nitrogen treatment, and nitrogen had no effect on the abundance of the other two. Lactarius quietus was also quite abundant in Avis et al.'s study on oak, but they found no consistent reaction to nitrogen fertilization. In their study across a depositional gradient in an Alaskan spruce forest, Lilleskov et al. found Lactarius theiogalus to be dramatically nitrophilic, shifting from 7% to 69% of all root tips from the low to the high end of their nitrogen gradient; we found L. theiogalus to be only mildly nitrophilic, occurring in greatest abundance in the low N treatment. Cox et al.'s study across European Picea abies forests identified the genera Lactarius and Thelephora/Tomentella as nitrophilic.

Our most abundant species was Cenococcum geophilum, and it was termed nitrophobic due to its sharply reduced abundance in the high N treatment. Avis et al. found Cenococcum abundance across N treatments variable, with one TRFLP type being nitrophilic and another nitrophobic.

Lucas and Casper found that Cenococcum geophilum abundance increased on oak roots in response to nitrogen fertilization. Avolio et al. looked at ECM community composition on pine and oak seedlings in response to nitrate fertilization and found that Cenococcum abundance increased markedly on oaks and decreased markedly on pines. Cenococcum geophilum is widely thought to be a cryptic species complex, and we examined the possibility that we had sub-taxa within C. geophilum with distinct affinities for nitrogen or horizon. Clustering our C. geophilum sequences at different percent similarities did yield different clusters, but did not divide this large group according to any horizon or nitrogen affinity. This inability to define Cenococcum's niche is not uncommon; in a 2007 commentary, Dickie identified Cenococcum as "one notable exception to the rule of niche differentiation."

The ECM communities in the organic and mineral soil were quite distinct, both across all treatments and within a given nitrogen treatment. Our finding of different ECM communities in the mineral and organic horizons is common to other studies that have examined the two separately, though to date few studies have done so. We found evidence of increased diversity in the mineral soil. Rosling et al. likewise found higher diversity in the mineral soil, while Dickie et al. found lower diversity there, and Scattolin et al. found moderately increased diversity of ECM species in the mineral soil; all three of these studies were done in coniferous forests. Certain species in our study exhibited clear preferences for one horizon or another, and these trends also applied at higher phylogenetic levels. In particular, the Russulaceae exhibited a preference for the organic horizon, while members of the genus Inocybe, the order Agaricales, and the families Tricholomataceae and Clavulinaceae were significantly more abundant in the mineral soil. In contrast to our findings, Baier et al. and Scattolin et al. found that Lactarius and Russula were generally more abundant in the mineral soil, though they found individual species within both genera that were more abundant in the organic soil; both studies were done in high-elevation coniferous forests. Tedersoo et al. found, as we did, that the Agaricales exhibited a strong preference for mineral soil, though they also found the Clavulinaceae to be significantly more abundant in the organic horizon, in contrast to our findings.

In general, our data suggest greater specialization in the mineral horizon than in the organic horizon. Of the 65 species we found, 31 were found only in the mineral soil, while only eight were found exclusively in the organic horizon. Rosling et al. also found a higher proportion of species occurring exclusively in mineral soil.

Across all treatments, ectomycorrhizal colonization intensity was very similar between the mineral and organic horizons, which stands in some contrast to the published literature. Very few studies have exhaustively sampled the ectomycorrhizal community in the mineral soil, but those that have found as many, if not more, ECM in the mineral soil as in the organic soil. More studies have sampled the upper layers of the mineral soil, and ECM colonization intensity, measured as either percent root length colonized (%RLC) or the percentage of fine root tips that are mycorrhizal, is generally lower in the mineral soil. However, Scattolin et al. found that the A and B horizons in a montane spruce forest in Italy had significantly higher mycorrhizal colonization intensity than the organic soil, so our finding of equal or slightly elevated colonization intensity in the mineral soil is not unprecedented. Magill et al. assessed the fine root biomass in the same plots we did, two years before we sampled these stands. They found that between 55% and 62% of all fine roots were in the mineral soil, though they only sampled the top 20 cm of mineral soil. Thus, our finding of equivalent or higher colonization intensity in the mineral soil can be interpreted as equivalent or higher mycorrhizal biomass in the mineral soil.

Nitrogen addition had no effect on colonization intensity in the low N treatment or in the organic horizon of the high N treatment, but it increased the percent root length colonized in the mineral soil in the high N treatment. According to Magill et al., fine root biomass in the organic horizon in these same stands is 25% and 27% lower in the low and high N treatments, respectively. Fine root biomass in the top 20 cm of the mineral horizon was not significantly affected by nitrogen treatment; thus the percentage of fine roots found in the mineral horizon increased marginally, from 55% to 62%, with nitrogen addition. We can therefore infer that a significantly higher fraction of the total ectomycorrhizae occur in the mineral soil in the high N treatment.

There is considerable variation in the literature in the reported responses of ECM colonization and fine root biomass to nitrogen addition. The consensus seems to be that nitrogen addition decreases both, but the large number of reports finding the opposite suggests that characteristics of the individual forest being examined may be important to consider. In a meta-analysis of six studies conducted in 14 forest stands, Treseder et al. found a moderate decrease in %RLC of 5.8% with nitrogen fertilization, but also noted that the responses to N were very heterogeneous. In a meta-analysis of boreal forests' responses to nitrogen addition, Cudlin et al. found a non-significant decrease of 10% in percent mycorrhizal colonization. Carfrae et al. and Koren and Nylund found significant increases in %RLC with nitrogen fertilization. Wöllecke et al. noted a sharp decrease in %RLC with nitrogen fertilization, but this was much more pronounced in the organic horizon than in the mineral horizon, also indicating an increased fraction of total mycorrhizal activity in the mineral soil under nitrogen addition.
To the authors' knowledge, no published studies have examined the effects of nitrogen on ECM colonization intensity in deciduous forests. The literature on the effects of nitrogen on rooting activity is no more consistent. While many micro- and mesocosm studies have demonstrated a decreased ratio of root biomass to shoot biomass with increased N availability, and the mechanisms for this are well understood, the effect is generally driven by increases in aboveground biomass, and the effects of nitrogen fertilization on belowground biomass in forests vary considerably. Cudlin et al. found an increase in root biomass of 10% in their meta-analysis of 22 forest studies, while Ostonen et al. found a 20% decrease in specific root length in a meta-analysis of 54 studies; both reported such high variation in responses to N that neither trend was significant. The altered mycorrhizal colonization we observed in the high N plot may be an adaptation to limitation or co-limitation by phosphorus or base cations.

The strength of the dry season promoted the enrichment of all water sources with respect to earlier samplings

Leaf organic δ13C and δ18O values support this observation, because P. piscipula showed consistently higher δ13C values than G. floribundum, coupled with lower δ18O, indicating that the decrease in photosynthetic carbon isotope discrimination was associated with greater stomatal conductance and greater photosynthesis. Greater photosynthesis in P. piscipula is consistent with maintaining a canopy of leaves later into the dry season. Thus, our results are most consistent with maintenance of plant water potential to maximize carbon gain during the onset of the dry season. The observation that P. piscipula appeared to use shallower water sources and maintained its canopy of leaves later into the dry season was not expected. Part of this pattern is driven by the capability of P. piscipula to utilize dynamic sources of water, such as the cold front precipitation during the frontal season. This makes sense, because the Laja bedrock layer was a poor source of water at all times, and soil pockets, which are available but heterogeneous in distribution, were always better sources of water than rock layers. The water content of soil and bedrock sources changed over the year, suggesting different seasonal contributions to plant water uptake. The gravimetric water content (θg) of topsoil in the wet and frontal seasons was very similar, and three times greater than the values measured in the dry season. The Sascab bedrock layer could be a significant source of water in the wet and frontal seasons, but not in the dry season.

Soil pockets had twice as much water as topsoil in the dry season, suggesting that they could be an important source of water for trees at that time. In the dry season, θg values in the rock profile ranged up to 5%, but nearly all were less than 1%. These values were slightly lower than those reported by Querejeta et al. and Hasselquist et al. in nearby areas, which suggests that the bedrock was subject to greater evaporation during this study. The δ18O values of water in this study integrated processes ranging from evaporation of soil and bedrock water sources to transpiration of tree species and precipitation events. In the wet season, enriched δ18O values of water in topsoil at 10-15 cm and in trees revealed the occurrence of a depleted precipitation event on October 21, one day before sampling, which brought 19.7 mm of water. Furthermore, a frontal system including cold front #3 and tropical wave #37, as well as the remnants of tropical storm Kiko, which formed in the Pacific, converged on the study area in the days before the wet season sampling in October 2007. Hurricanes, tropical storms, and cold fronts generally have lower stable isotope ratios than convective precipitation events. For example, Perry et al. recorded δ18O values of −9.91‰ for precipitation during tropical storm Mitch in 1998, and precipitation events ranging from −6 to −10‰ in δ18O have been recorded in the vicinity of the study area. Consequently, depleted oxygen isotope values in soil at 15-30 cm and in P. piscipula and G. floribundum trees can be attributed to precipitation originating from these events. Soil pockets also showed more negative values than rock, suggesting that depleted rainwater reached this layer.

During dry season measurements in February 2008, the δ18O of topsoil at 0-15 cm was more positive than groundwater, suggesting another depleted source of water. Cold front #29, which occurred four days before sampling and brought 33.9 mm of rain, could be the main source. However, the more negative value of topsoil from 0 to 5 cm could reflect dew water, since this soil sampling was done early in the morning. More negative δ18O values in topsoil than in groundwater have also been observed by Saha et al. under similar environmental conditions in Miami, associated with water condensation occurring at night in the upper soil layers. Condensation has been shown to deplete the δ18O of soil water at 10-15 cm depth by 5‰, and condensation can account for up to 47% of total transpiration. Surface dew is easily generated when temperatures fall below the dew point at night or in the early morning. Under tropical conditions in Tahiti, Clus et al. reported average dew yields of 0.102 mm during the dry season. Therefore, condensed water could be an important source for P. piscipula. Overall, our results indicate that the variation in phenology between these two deciduous tropical dry forest tree species, which differ in the timing of their deciduousness, is not akin to the relatively large variation in rooting depth that can occur between tropical evergreen and deciduous species, but rather reflects the diversity of plant physiological strategies that occur in tropical forests.

The excremental body has been defined by Mark Featherstone as the disposable body that is set up within US society as a foil to the formalized, white, utopian American body in order to assert the US's global power through corporeal poetics. Furthermore, the excremental body has been demonized as a way to justify the disembodied, mechanized body of supermodernity, which represents the perfect, posthuman body that does not break down, feel pain, or expire.

Ultimately, the excremental body is the pathologized other that feels pain and ultimately represents "the horrific real of the vulnerable body." With regard to US involvement in Vietnam during the Cold War era, American society was, and still is, culturally and psychologically unwilling to recognize and empathize with the suffering and tortured bodies in the Global South and within its own borders. The pathologization of the "natural" body, soil, water, urine, and all that is "other" allows US society to relentlessly exercise and grow its accursed share through neocolonial agendas.

As the devastating effects of the Anthropocene become ever more acute, the contemporary artists Jun Nguyen-Hatsushiba and Jae Rhim Lee re-present the excremental body at the forefront of their respective projects in order to remediate disembodiment, highlight the interdependency of all life forms, and hold US necropolitics accountable for countless ecological atrocities. The "Anthropocene," popularized in 2000 by the atmospheric chemist Paul Crutzen and the ecologist Eugene Stoermer, has been deemed by geologists to begin in the 1950s due to the dramatic effects of modern and nuclear weapons upon ecosystems during WWII and the Cold War battles fought between communism and capitalism. The Anthropocene critically links human actions with the rapid change and depletion of earth's systems, but its history begins long before the 1950s. When the German biologist Ernst Haeckel coined the term "ecology" in 1866, it became a discipline that facilitated the domination of colonial anthropocentrism over passive human and nonhuman worlds, effectively becoming what T.J. Demos calls the "science of empire." Much has changed for the worse since 1866: ecology's colonial origins persist, and turbo-capitalism's apathy towards the biosphere exacerbates a psychological and cultural fear of morbidity due to a "uniquely modern form of egoism [which] has broken [the] interdependence between the living and the dead. With disastrous results for the living, who now think of the dead as eliminated."

This perversion of death is further clarified by Achille Mbembe's theory of necropolitics, "let live and make die," which responds directly to Michel Foucault's concept of biopolitics, "make live and let die." Coined by Foucault in the 1970s, during the Cold War era, biopolitics meant to "make live and let die," in line with the capitalist liberal governmentality that sought to make life "cozy" for First World nation-states. Necropolitics, on the other hand, delineates a management of life in the neoliberal capitalist world: let all those who hold wealth and power live, and make the rest die through systematic abandonment.

The Vietnam War is particularly relevant to the artists Jun Nguyen-Hatsushiba and Jae Rhim Lee, since their projects relate to the necropolitics of US chemical warfare that sparked the formation of an ecological consciousness in American society. In these works, the contaminated human and environmental "body" collide. Staged in South Korea and later shown to a Japanese audience, Jun Nguyen-Hatsushiba's film Memorial Project Waterfield: The Story of the Stars aims to commemorate the losses incurred by ecological, political, and economic violence during the American War in Vietnam. Focusing on the violent trauma inflicted on the body, land, and water, Nguyen-Hatsushiba re-presents the act of urinating as a way to memorialize the embodied struggles of Vietnamese people who suffer from slow ecological violence and US cultural hegemony. By creating a multidimensional space within his work, he offers a heterogeneous re-presentation of Vietnamese society and suffering that has been systematically erased by hegemonic US narratives of the Vietnam War. Jae Rhim Lee's Infinity Burial Project, on the other hand, offers consumers an opportunity to remediate the imperceptible accumulation of industrial toxins within the human body through the use of mushroom mycelia, in order to promote environmental stewardship and unsettle the psychological structures surrounding cultural death denial. By uncovering the ubiquity of invisible chemical contamination in America and offering a green burial alternative that facilitates a physical transfer of nutrients from a decomposing human body to the soil, her work expands one's understanding of how human death is linked to vibrant, nonhuman systems. In order to ease people's fear of death stoked by supermodernity, her work attempts to bridge the dichotomy between "man" and "nature" through its focus on interspecies relationships within the soil.

Both artists utilize alienated substances, such as dejecta and decomposing bodies, in order to probe colonial logics and animacy hierarchies that are socially and racially charged. Matter labeled as "waste" is often steeped in logics of purity and danger that justify necropolitical classifications of people as valuable or disposable. Residues of human life, ranging from excreta and corpses to industrial toxins and landmines, serve as important reminders of humans' undeniable entanglement with ecological systems that lie beyond human control. Although these residues are out of sight, and often imperceptible, they expose anthropocentric frameworks and heighten the agency and animacy of nonhuman "objects" or systems. In this paper I will show how both artists seek to halt the repetitious calamities caused by humans by transforming substances considered taboo, urine and corpses, into opportunities for ecological, psychological, and cultural remediation.

In 2006, Jun Nguyen-Hatsushiba staged Memorial Project Waterfield: The Story of the Stars as a performance installation at the 6th Gwangju Biennale in South Korea, with the main intention of turning it into a video performance. The physical performance space was confined to a prison-like structure constructed of 8-meter-tall walls lined with long metal poles. Visitors could only view the work from the aerial perspective afforded by standing on the bridge that overlooks the courtyard and connects the Gwangju Biennale's two gallery buildings.
A total of 26,000 plastic water bottles packed the entire ground surface of the 10-meter-wide by 14-meter-long space. During the live performances, three groups of five men and women alternated shifts every hour. Some performers came from Vietnam in order to assist the artist with training, while the rest of the cast consisted of Korean volunteers aged 18–30. These volunteers were cast based on their interest in contemporary art, youthful appearance, and overall good health. During shifts, the performers were tasked with drinking water, urinating into containers, injecting urine into recently emptied bottles, carefully wading through the bottles, and obeying the spontaneous orders of the shift leader to take cover, sit, or lie down. Over the span of about 20 days, these repetitive, carefully orchestrated tasks gradually formed 50 urine-colored stars, each 1 meter wide, that together constituted the image of the 50 stars of the American flag hybridized with the yellow star of the Vietnamese flag. This symbolic relationship between the two flags' stars reflects the entwined historical, cultural, and economic relationships between the US and Vietnam. Nguyen-Hatsushiba attributes the chemical destruction of US and Vietnamese ecosystems to militarization and neoliberal capitalism.

Replicability in EcoFABs has recently been demonstrated in a ring trial across multiple laboratories.

Studies in EcoTrons will increase in the near future and will provide unprecedented insights into ecosystem functioning; for example, Roscher et al. found that the functional composition of communities is key in explaining carbon assimilation in grasslands. Mesocosms, which we call EcoPODs, are smaller versions of EcoTrons with higher experimental throughput that bridge laboratory and field studies. Existing EcoPODs have a footprint of approximately 2.1 m² and can be filled with up to 1.23 m³ of soil to a depth of 0.8 m. Using the EcoPOD lysimeter technology, intact soil monoliths can be retrieved from the field and studied under controlled conditions in the laboratory. The aboveground portion is approximately 1.5 m tall and therefore allows the study of a number of different plants in soil, together with macro- and microorganisms, in the context of environmental changes. The contained nature of EcoPODs allows accurate mass balance calculations. EcoPODs allow precise conditioning of above- and belowground temperature and moisture and can therefore simulate seasonal changes and enable short- as well as long-term experiments. They are equipped with state-of-the-art sensor technology allowing in situ measurements of key environmental parameters, activities of organisms, and ecosystems at micrometer to meter scales.
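To illustrate why enclosure matters, the following is a minimal sketch of the kind of water mass-balance bookkeeping an enclosed mesocosm enables. All flux names and values are hypothetical, and the closure check against a lysimeter-measured storage change is an assumed workflow, not a documented EcoPOD procedure.

```python
# Minimal sketch of water mass-balance bookkeeping for a closed mesocosm.
# All flux names and values are hypothetical illustrations (litres per day),
# not specifications of a real EcoPOD.
water_in = {"irrigation": 12.0, "condensation_return": 0.8}
water_out = {"evapotranspiration": 9.5, "drainage": 2.1}

delta_storage = sum(water_in.values()) - sum(water_out.values())
print(f"Implied change in soil water storage: {delta_storage:+.2f} L/day")

# Comparing this implied change against an independently measured storage
# change (e.g., from a weighing lysimeter) closes the balance; a large
# residual flags an unmeasured flux or a sensor error -- the kind of closure
# check that open field plots generally cannot provide.
```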

EcoPODs can be equipped with multi- and hyperspectral cameras that track plant biomass and physiological states. In conjunction with highly controlled physical and chemical conditions, researchers will be able to track the microbial activity within the system using a variety of genomic tools, including DNA or RNA shotgun metagenomics, proteomics, and metabolomics. This will facilitate tracking of microbial recruitment and activity across all life stages of the plant, including under simulated seasonal changes. Broadly, this system can be used for fundamental research questions about biogeochemical cycles and the role of biodiversity in ecosystem processes, as well as for applied studies that include biological or chemical components requiring increased safety clearance that cannot easily be tested in the field. Because soil ecosystem and phytobiome experiments increasingly rely on in situ sensing over time, EcoPODs can also serve as a test bed for novel and improved sensing capabilities. Complementary approaches to developing more field-relevant laboratory growth systems are composed of one or more single-plant chambers, such as RootChips, GLO-Roots, EcoFABs, and other systems that enable detailed characterization. For example, RootChips provide a high-throughput system for rhizosphere imaging, GLO-Roots enables direct imaging of root architecture within soils, and EcoFABs are "fabricated ecosystems" aimed at creating model ecosystems on par with the model organisms used for genetic and biological studies. EcoFABs comprise a chamber, biological and abiotic components, and any measurement technologies.
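As a concrete example of camera-based tracking, plant vigor is commonly summarized with the normalized difference vegetation index (NDVI), computed from red and near-infrared bands. The sketch below is a generic NDVI computation over hypothetical reflectance rasters, not a description of any particular EcoPOD imaging pipeline.

```python
# Minimal sketch: computing NDVI, a standard vegetation index, from two
# spectral bands of the kind a multispectral camera records. Band choices
# and array shapes are illustrative assumptions.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); values fall in [-1, 1]."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-9, None)

# Hypothetical 100 x 100 pixel reflectance rasters:
red = np.random.rand(100, 100)       # ~660 nm (red) band
nir = np.random.rand(100, 100)       # ~840 nm (near-infrared) band
print(float(ndvi(red, nir).mean()))  # mean canopy greenness for this frame
```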

EcoFABs allow real-time microscopy for high-resolution imaging of plant root architecture and are currently designed to provide sufficient material for metabolomic, geochemical, and sequence-based analyses. They are made using widely accessible 3D printing technologies to fabricate controlled microbiome habitats that can be standardized and easily disseminated between labs. This approach provides flexibility that enables scientists to add or change variables while monitoring microorganisms and their interactions with plants. EcoFABs are also envisioned to facilitate standardization of phytobiome research, because construction materials are cheap and construction instructions are available. Analogous to medical drug-testing pipelines, which generally begin as high-throughput laboratory screens and are gradually scaled up to relevant mammalian models and, finally, to human clinical trials, we envision phytobiome research studies similarly following a throughput-versus-relevance gradient from EcoFABs to EcoPODs and, finally, to field studies. Although this suite of fabricated ecosystems is not aimed at simulating the real world, the enhanced control over abiotic and biotic factors in these experimental platforms enables plant root microbiome interaction studies that are not possible in field experiments, because fields generally display greater complexity and unpredictability or do not allow for manipulations. Thus, the use of fabricated ecosystems can reveal important correlations and causations, from individual metabolic reactions up to biogeochemical cycles. Foreseeable or already encountered challenges include the relatively short experimental durations that can be executed in EcoFABs and EcoPODs, owing to the size limitations of the respective platforms, and potential increases in parasite pressure as a result of air and water flow limitations.

On the other hand, EcoTrons are not set up for quick-turnover experiments and require expensive infrastructure to start and end experiments. Although insights obtained from greenhouse experiments have often not been replicable in the field, we expect that EcoFABs can serve as reproducible systems in which microscopy and metabolomics can be applied to low-complexity microbiomes in the context of plant roots. Data obtained from individual microorganisms can inform microbially based biogeochemical models, as discussed below. We expect EcoPODs and EcoTrons to facilitate in situ sensing, climate manipulations, and deep soil monolith access. Links and extrapolations among fabricated ecosystems and the field can be achieved by generating and testing hypotheses across platform scales. For example, field observations may be tested under replicable conditions in an EcoTron or EcoPOD, and promising microbial candidates could be isolated and further studied in EcoFABs. A reverse workflow is also imaginable, in which promising microbial isolates or plants resulting from EcoFAB experiments are tested in an EcoPOD or EcoTron before being potentially released into field experiments. Furthermore, extrapolations could be tested beforehand by taking advantage of archived datasets from sources such as long-term observatories, including NEON. Generally, challenges for extrapolating results from these fabricated ecosystems to realistic field conditions stem from the limited complexity of laboratory systems; for example, microbial isolates often perform predictably under laboratory conditions but may be inactivated by night temperatures or competitors. There are still a number of unknown unknowns that may significantly affect plant performance, microbial community dynamics, and soil nutrient cycling, and that vary from ecosystem to ecosystem, resulting in a disconnect between studies conducted in the laboratory and in the field. Other challenges are presented by natural climate variability in the field and by the uncertainty in climate change predictions, which are significantly affected by socioeconomic drivers. Although laboratory experiments may be conducted based on historic field data, or even in tandem with real-time field data measurements (for example, using sensor platforms coupled to edge computing, as sketched below), results may have limited applicability under future climate scenarios. However, this is also true for the reproducibility of field experiments in general. Studying plant microbiome interactions and soil processes under defined conditions can assist in the identification and evaluation of such unknown unknowns, which, in turn, will improve the applicability of laboratory results to the field.
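The sketch below illustrates, purely hypothetically, what running a chamber in tandem with real-time field data might look like; both the sensor feed and the EcoPODController interface are invented placeholders, not real APIs or products.

```python
# Hypothetical sketch of driving laboratory chamber setpoints from a
# real-time field data feed. read_field_temperature() and EcoPODController
# are invented placeholders for illustration only.
import random
import time

def read_field_temperature() -> float:
    """Stand-in for a live feed from a field sensor platform."""
    return 18.0 + random.uniform(-3.0, 3.0)  # degrees C, simulated

class EcoPODController:
    """Invented stand-in for a mesocosm climate-control interface."""
    def set_air_temperature(self, celsius: float) -> None:
        print(f"air temperature setpoint -> {celsius:.1f} C")

controller = EcoPODController()
for _ in range(3):  # mirror the field once per polling cycle
    controller.set_air_temperature(read_field_temperature())
    time.sleep(1)   # polling interval, shortened for the demo
```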

Microbial communities found on healthy plants are incredibly taxonomically diverse and include bacteria, archaea, fungi, oomycetes, algae, protozoa, nematodes, and viruses.
This microbial complexity makes it impossible to definitively establish causal relationships among plant and microbial genotypes and phenotypes as well as environmental factors. Instead, representative synthetic communities (SynComs) of defined complexity enable systematic bottom-up approaches in gnotobiotic systems under controlled and reproducible conditions to determine causal relationships. In order to systematically test plant microbial community dynamics and functions in relation to the chemical composition of the surrounding environment, comprehensive strain collections representing the phylogenetic and functional diversity of the plant microbiota have been established, thanks to the cultivability of an unexpectedly large fraction of the members of the plant microbiota. This high cultivability of plant-associated bacteria is likely based on low-complexity food webs, continuous substrate supply by the plant, and an essentially aerobic environment. Beyond cultivation and subsequent whole-genome analysis, screening SynComs of various complexity for interactions and metabolic activity in correlation with environmental parameters has been a bottleneck. Microfluidics tools such as massively parallel on-chip coalescence of microemulsions enable screening of 100,000 communities per day. For example, bacterial isolates can be screened individually and in combinatorial sets as SynComs for various useful properties, including plant growth-promoting functions such as suppression of pathogens or degradation of harmful substrates, for their potential in biofuel production, or as environmental remediation agents. Such tools, coupled with high-throughput DNA or RNA sequencing and long-read sequencing platforms (including PacBio and Oxford Nanopore), as well as metabolomics and various activity assays, now allow rapid profiling of the composition, function, and activity of SynComs as well as of complex native microbial communities residing in soils and on plants.

The quantity of data generated by the new technologies described above surpasses the capabilities of traditional analysis methods. Nevertheless, to gain insight, we need to integrate and fuse different data streams. To accomplish this, we must overcome heterogeneous data types and the lack of standards for data exchange. Ultimately, we need systems that can dynamically pull in diverse data from different devices and experimental modalities and intelligently interpret them using background knowledge in order to derive new hypotheses or make predictions, such as predicting the consequences of specific environmental changes on plant health as mediated by the microbiome. Machine learning (ML) methods and, in particular, deep learning (DL) have proven especially useful for classification problems involving large datasets, such as environmental data generated from technologies including thermal sensing and LiDAR. Supervised ML techniques learn to classify entities based on vectors of data characteristics, trained on prelabeled data. DL techniques involve the use of multilayer neural network (NN) architectures. Different DL architectures can be applied to different problems: convolutional networks can be applied to image detection and recognition problems, whereas recurrent architectures such as long short-term memory (LSTM) networks can be applied to time-series data.
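As a minimal illustration of the time-series case, the sketch below defines a small LSTM classifier for hypothetical phytobiome sensor data; the architecture, tensor shapes, and labels are illustrative assumptions rather than a published pipeline.

```python
# Minimal sketch of an LSTM classifier for hypothetical phytobiome sensor
# time series (e.g., hourly readings -> a plant-health class). Shapes,
# class count, and the random data are illustrative assumptions.
import torch
import torch.nn as nn

class SensorLSTM(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 32, n_classes: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(x)   # x: (batch, time, features)
        return self.head(h_n[-1])    # logits: (batch, n_classes)

model = SensorLSTM()
x = torch.randn(16, 24, 8)           # 16 series, 24 time steps, 8 sensors
y = torch.randint(0, 3, (16,))       # prelabeled classes (dummy labels)
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                      # one illustrative gradient step
```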
One of the challenges of phytobiome data is the paucity of sample data or the lack of resolution in imaging and instrumentation. One DL architecture designed to address this is the generative adversarial network (GAN). A GAN can generate plausible synthetic data by utilizing two NNs that are trained together in an adversarial scenario: one network attempts to distinguish real examples from fake ones, while the other creates plausible example data to fool the first. Over time, both models improve, and the generated examples become more plausible, reflecting real-world characteristics of the domain without the need for explicit encoding of priors. In the context of phytobiome data, a GAN could, for instance, help to synthesize and denoise imaging data. Although DL has seen tremendous gains and achieved much over the last decade, a number of challenges remain. The input data must be in vector form, which is straightforward for sensor data; however, complex biological information must be embedded in a suitable fashion. NNs are famously inscrutable: they do not provide any explanation of why they produce a particular result. This is particularly problematic in the face of adversarial attacks, in which the NN is deliberately fooled by fake data designed to elicit a misclassification. The burgeoning field of explainable artificial intelligence attempts to use a variety of techniques to make DL decision-making less of a black-box process.

The field of DL and ML has advanced rapidly in recent years but, in many cases, DL methods may not yield improvements over traditional methods. DL methods are best applied to complex multidimensional data, such as imaging data, or to predictions involving complex latent nonlinear mechanisms, as found, for example, in ecosystem models. Some have successfully applied DL methods to modeling distinct ecosystem parameters, such as soil temperature over a soil depth profile, and processes such as ice-shelf melting as part of the Energy Exascale Earth System Model. DL methods will also gain importance in microbe-enabled soil biogeochemical models that aim to predict links between climate change, elevated CO2 concentrations, plant–microbe interactions, and soil nutrient cycling. For example, the ecosys model allows for the incorporation of microbially based models using traits such as growth rate, optimal temperature, and resulting enzyme activity, as well as genome size. Microbial traits, which can be obtained from genomic data, help to identify and quantify trade-offs.
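As a toy illustration of how such traits enter a model, the sketch below encodes a Q10 temperature response for a microbial process rate; the Q10 form and all parameter values are generic textbook assumptions, not the ecosys model's actual formulation.

```python
# Toy sketch of a trait-based temperature response using the classic Q10
# rule of thumb. The Q10 form and parameter values are generic assumptions,
# not the ecosys model's actual formulation.
def q10_rate(t_celsius: float, rate_ref: float = 1.0,
             t_ref: float = 25.0, q10: float = 2.0) -> float:
    """Relative process rate; with Q10 = 2 the rate doubles per 10 C."""
    return rate_ref * q10 ** ((t_celsius - t_ref) / 10.0)

# A hypothetical mesophilic isolate parameterized from lab growth data:
for t in (5, 15, 25, 35):
    print(t, round(q10_rate(t), 2))   # -> 0.25, 0.5, 1.0, 2.0
```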