The phytohormone JA is a 13-LOX oxylipin derived from linolenic acid

Based on this linear model, we developed a PATOWAS pipeline to analyze traits through multiple ome-wide association studies. Therefore, the proposed model and PATOWAS can be used to study not only GWAS for G2P but also TWAS for T2P and MWAS for M2P, which is progress toward an integrative omics. To test this presumption and verify our consideration, we used PATOWAS to analyze the rice RIL datasets with two agronomic traits and three different omics markers. PATOWAS accepts 2D omics marker matrix data and 1D phenotypic trait data as inputs.

PATOWAS results for one specific associative omics mainly include three parts: variance component analysis for the partition of phenotypic variance, a 1D association map for the direct biological markers, and a 2D association map for the interaction of biological marker pairs. Of the three variance components, the additive component for the markers' direct effects and the additive–additive component for the marker pairs' interaction effects are biologically meaningful and can be explained by the linear model; the higher the sum of these two components, the more of the phenotypic variance is captured by the markers and marker pairs. Among all markers' and marker pairs' effects, those with higher −log10 values indicate markers or marker pairs that are more relevant to the phenotypic trait.

In the present study, we sequentially submitted three omic marker datasets to PATOWAS to analyze the two field traits, YIELD and KGW, and downloaded the results after completion of the analyses. Based on these results, multiple associative omics and their biological insights can be compared and integrated. For example, combining the 1D association mapping results across G2P and T2P can help identify genotype and expressed transcript markers with consistent physical positions; comparison of the metabolites from 1D M2P association mapping can uncover the biochemical relevance of tissue-specific metabolites to the traits being analyzed; and investigation of the major biomarker pairs from 2D association mapping can be used to build an association network. All of these together provide a systems biology view into the analyzed traits, leading toward an answer to how genes, transcripts, proteins, and metabolites work together to produce an observable phenotype.

Based on the variance component analysis results, we generated six pie charts displaying the three variance components of the two traits across associative genomics, associative transcriptomics, and associative metabolomics. We found that the two biologically meaningful variance components accounted for nearly all of the phenotypic trait variance in associative transcriptomics and associative metabolomics but not in associative genomics. Also, YIELD was a more complex trait than KGW, as the two biologically meaningful variance components accounted for only 66% of the total phenotypic variance in associative genomics but nearly 100% of the total phenotypic variance in associative transcriptomics and metabolomics.
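To make the variance partition concrete, the sketch below (a minimal illustration under our own assumptions, not the PATOWAS implementation) builds an additive relationship matrix from a samples-by-markers matrix and an additive-by-additive relationship matrix as its Hadamard square; a mixed-model/REML solver would then use these two matrices to estimate the two biologically meaningful variance components plus a residual.

```python
# Minimal sketch (not the PATOWAS code): construct relationship matrices for the
# additive (direct marker) and additive-by-additive (marker pair) variance
# components from a samples-by-markers matrix X.
import numpy as np

def relationship_matrices(X):
    """X: (n_samples, n_markers) genotype/transcript/metabolite marker matrix."""
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # column-standardize markers
    K_add = Z @ Z.T / Z.shape[1]                        # additive kinship-like matrix
    K_aa = K_add * K_add                                # Hadamard square ~ additive-by-additive
    return K_add, K_aa

# Toy example; real inputs would be the RIL marker matrices (e.g., 210 lines x 1,619 bins).
X = np.random.rand(210, 1619)
K_add, K_aa = relationship_matrices(X)
# K_add and K_aa would then be passed to an REML solver to partition the phenotypic
# variance into additive, additive-by-additive, and residual components.
```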

The variance component results demonstrate that a chain of environmentally responsive genes and metabolites can be observed and explained at the transcriptomic and metabolomic levels but not at the genomic level. We noticed, however, that the number of transcript markers was roughly one order of magnitude higher than that of the other two marker types. Considering marker-by-marker interactions, the number of pairwise transcript combinations reaches ~250 million, about two orders of magnitude larger than for the other two kinds of omic markers. To test whether the higher proportion of biologically explainable variance observed in the TWAS result was simply due to the larger number of transcripts used, we produced a reduced transcript gene set with a marker count comparable to the genotypes and metabolites, submitted it separately to PATOWAS, and checked the variance component analysis result. The procedure to generate the reduced gene set was as follows. First, we mapped the 22,584 transcript genes into the 1,619 genotype bins; one genotype bin may contain anywhere from zero to hundreds of transcript genes. Based on the 1D association mapping result, at most one representative transcript was selected per bin, namely the transcript with the highest −log10 value. We then generated a reduced transcript gene set for each phenotypic trait, which is essentially a data matrix with dimensions of 1543 × 210. Its number of markers was comparable to those of the analyzed genotypes and metabolites. The same approach was also used to generate the two positionally comparable 1D G2P and T2P association mapping results in the following section.
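As a concrete illustration of this bin-reduction step (an assumed data layout, not the authors' code), the following sketch keeps at most one transcript per genotype bin, selected by the highest −log10 value:

```python
# Minimal sketch: pick at most one representative transcript per genotype bin,
# keeping the transcript with the highest -log10 association value.
import pandas as pd

def reduce_transcripts(assoc: pd.DataFrame) -> pd.DataFrame:
    """assoc columns: 'transcript', 'bin', 'neg_log10_p' (one row per transcript)."""
    idx = assoc.groupby("bin")["neg_log10_p"].idxmax()   # best transcript per bin
    return assoc.loc[idx].reset_index(drop=True)

# Toy rows; real input would hold the 22,584 transcripts mapped to 1,619 bins.
assoc = pd.DataFrame({
    "transcript": ["t1", "t2", "t3"],
    "bin": ["bin001", "bin001", "bin002"],
    "neg_log10_p": [2.1, 4.7, 3.3],
})
print(reduce_transcripts(assoc))     # keeps t2 (bin001) and t3 (bin002)
```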

We submitted the reduced transcript data and the two phenotypic traits, KGW and YIELD, to PATOWAS for further study. Based on the variance component analysis results, two additional pie charts displaying the three variance components of the two traits in associative transcriptomics were plotted. Again, we observed that the two biologically meaningful components explained nearly 100% of the phenotypic variance, with only a fluctuation between the two components. Thus, we conclude that the much larger number of transcripts used in TWAS is not the reason for the higher explanatory ratio of phenotypic variance in associative transcriptomics.

Modern GWAS applications often involve a panel with hundreds of thousands, or even millions, of genetic variants but only several hundred individual samples. Statistical modeling of such cases is usually challenging because the sample size is substantially smaller than the number of covariates. This is the well-known "large p, small n" problem and requires careful assessment of the statistical characteristics. Our proposed method can indeed explain more of the phenotypic variance, but at the cost of generating a large number of pairwise covariates. It is therefore worthwhile to assess the heritability estimated by the proposed LMM, particularly for such high-dimensional data. First, we applied predictability, represented by the squared correlation coefficient between the observed and predicted phenotypic values. The squared correlation is approximately equal to R2 = 1 − PRESS/SS, where PRESS is the predicted residual error sum of squares and SS is the total sum of squares of the phenotypic values. In principle, we treated each transcript or metabolite marker as an intermediate phenotypic trait and predicted all of these intermediate phenotypic values from the genotypic data. Therefore, each transcript or metabolite has an R2 value, its predictability (PRED). We then used the HAT method to calculate the PREDs for all transcripts and metabolites, applied a series of variable thresholds to the PREDs, and selected the transcript and metabolite markers. Finally, we submitted the subsets of selected transcript genes and metabolites to PATOWAS for variance component analysis and calculated the broad-sense heritability, H. Figure 3 shows the assessment of broad-sense heritability for the markers selected by PRED thresholding. We found that the number of selected markers continued to decrease as the PRED threshold increased; however, the broad-sense H provides a very different perspective across traits and associative omics. Only ~1,000 and fewer than 100 transcripts are needed to explain more than 97% of the phenotypic variance for YIELD and KGW, respectively. In associative metabolomics, only 30 metabolites are enough to explain more than 90% of the phenotypic variance. In general, the trait KGW is more conserved than YIELD, and associative metabolomics is more conserved than associative transcriptomics.
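The predictability calculation can be illustrated as follows; this is a generic hat-matrix, leave-one-out formulation of R2 = 1 − PRESS/SS, with a small ridge term of our own to keep the system well posed when markers outnumber samples, and not necessarily the exact HAT implementation used in the study.

```python
# Sketch of the HAT-based predictability (PRED) for one intermediate trait.
# y: one transcript or metabolite; X: genotype marker matrix.
import numpy as np

def predictability(X, y, ridge=1e-6):
    """R^2 = 1 - PRESS/SS via the hat matrix; the small ridge keeps X'X invertible
    in the 'large p, small n' case."""
    n = X.shape[0]
    Xc = np.column_stack([np.ones(n), X])                  # add intercept column
    H = Xc @ np.linalg.solve(Xc.T @ Xc + ridge * np.eye(Xc.shape[1]), Xc.T)
    resid = y - H @ y                                      # ordinary residuals
    loo_resid = resid / (1.0 - np.diag(H))                 # leave-one-out residuals
    press = np.sum(loo_resid ** 2)
    ss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss

# Toy example; real inputs would be the RIL genotype bins and one transcript/metabolite.
rng = np.random.default_rng(0)
X = rng.normal(size=(210, 50))
y = 0.8 * X[:, 0] + rng.normal(scale=0.5, size=210)
print(round(predictability(X, y), 3))
```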
Variance component analysis provides a big picture by partitioning the phenotypic variation into three components. The two biologically meaningful components, for individual markers' direct effects and for marker pairs' interaction effects, can be further illustrated by 1D and 2D association mapping, respectively. For the trait YIELD, three 2D association mapping results were analyzed, and each association matrix was illustrated as a scaled image with pseudocolor. By comparison, we found that genotypic markers were neighbor dependent, as evidenced by the clustering of dots, whereas expressed transcript gene and metabolite markers were neighbor independent, as evidenced by a random distribution of dots. This phenomenon can be explained by the existence of linkage disequilibrium blocks in population genetics. We are usually interested in the significant (−log10 ≥ Significance_Th) marker pairs rather than in all marker pairs. As in 1D association mapping, we can set a significance threshold to generate a binarized version of the 2D association matrix; a minimal sketch of that step is given below.
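A minimal version of the thresholding step (assuming the matrix stores −log10 values and Significance_Th is the chosen cutoff) is:

```python
# Binarize a 2D association matrix of -log10 values so that only marker pairs
# at or above the significance threshold remain.
import numpy as np

def binarize(assoc_2d: np.ndarray, significance_th: float) -> np.ndarray:
    return (assoc_2d >= significance_th).astype(np.uint8)

# Example: a toy 3x3 pairwise matrix with a threshold of 3.0.
A = np.array([[0.5, 3.2, 1.1],
              [3.2, 0.7, 4.5],
              [1.1, 4.5, 0.9]])
print(binarize(A, 3.0))
```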

We further zoomed in to a specified local region for each associative omics and found that associative genomics showed a 2D local rectangular array, while associative transcriptomics and associative metabolomics showed a 1D local strip. The specificity of the 2D local structure pattern for associative genomics is due to the existence of LD blocks at the genomic level; furthermore, the dimensions of the 2D local rectangular array correspond to the LD block size.

In 2015 the global average concentration of carbon dioxide in the atmosphere reached a record high of 400 μmol CO2 mol−1 air, and if current trends continue it could surpass 800 μmol CO2 mol−1 air by the end of this century. Rising [CO2] is largely responsible for changes in our climate, including increased temperatures and altered precipitation patterns. These changes in weather patterns will ultimately influence crop productivity and are predicted to be particularly detrimental to summer crops, such as maize, which will likely experience severe episodes of drought. Maize represents an essential part of the world's grain food and feed supply, and most maize cropping systems depend on natural precipitation. Maize uses the C4 photosynthetic mechanism, which is not limited by [CO2]; therefore, yields will benefit from rising [CO2] only under conditions of drought, when the indirect effect of reduced stomatal conductance enhances the plant's water-use efficiency, allowing photosynthesis to continue despite limited water. Nevertheless, in addition to abiotic stress, plant diseases and insect pests are also major limiting factors of maize productivity, yield, and quality; however, our understanding of how the combination of elevated [CO2] and drought will affect maize susceptibility to biotic stressors is limited. The mycotoxigenic fungal pathogen Fusarium verticillioides (Fv) not only reduces maize yield by causing rot in all parts of the plant but also produces carcinogenic polyketide-derived mycotoxins termed fumonisins, which render harvested grain unsafe for human or animal consumption. Mycotoxins such as fumonisins are among the top food safety concerns with regard to climate change because environmental conditions predicted for the future are important factors contributing to fumonisin contamination. Warmer temperatures increase evapotranspiration, further intensifying drought, which has been shown to correlate with Fv disease development and to enhance fumonisin accumulation in grain. Recently, we demonstrated that elevated [CO2] also enhances maize susceptibility to Fv infection, but the increase in fungal biomass did not correspond with greater fumonisin levels, resulting in an overall reduction in fumonisin per unit fungal biomass. Following Fv inoculation, the accumulation of maize soluble sugars, free fatty acids, lipoxygenase (LOX) transcripts, jasmonic acid (JA) and salicylic acid (SA) phytohormones, and terpenoid phytoalexins was dampened at elevated [CO2]. An influx of fatty acid substrate is essential for the burst of JA that initiates the defense signaling process. JA and other oxylipins are synthesized from free fatty acids through the LOX pathway. The fatty acids are oxidized by LOX enzymes at either the 9 or 13 carbon position to produce 9-LOX or 13-LOX oxylipins, respectively. The defense-related functions of 9-LOX metabolites are not well characterized, but they have been implicated in the stimulation of mycotoxin production.
Elevated [CO2] appears to affect both 9- and 13-LOX oxylipin biosynthesis at the level of fatty acid substrate supply and LOX-gene transcription. Lower concentrations of defensive phytochemicals, such as the zealexin and kauralexin terpenoid phytoalexins, due to compromised JA biosynthesis and signaling are consistent with increased Fv proliferation. Additionally, a dampened response of 9-LOX metabolites could reduce the ratio of fumonisin per unit Fv biomass. Elevated [CO2] similarly enhances C3 crop susceptibility to herbivory by compromising LOX-gene transcription, JA biosynthesis, and JA-dependent antiherbivore defenses. However, in C3 crops, JA-regulated defenses appeared to be compromised by an antagonistic boost in SA production, which does not occur in maize. Furthermore, the effects of elevated [CO2] were negated when soybean plants were simultaneously exposed to drought stress. Whether drought will negate the effects of elevated [CO2] on maize susceptibility to Fv is unknown, and it is unclear what the interactive effects of elevated [CO2] and drought will do to fumonisin levels. Although individual stress responses display measurable specificity, plants are frequently challenged by several stress factors simultaneously, resulting in the activation of multiple signals that engage in cross-talk and alter individual responses.

Post-hoc analysis of mean differences was performed using Tukey's HSD test
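As an illustration of this type of post-hoc comparison (the study's analysis was run in SAS/JMP; the snippet below is a hedged Python equivalent using statsmodels, with made-up group labels and values):

```python
# Hedged illustration of a Tukey HSD post-hoc comparison; the group labels and
# values below are placeholders, not data from the study.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
values = np.concatenate([rng.normal(10, 1, 8),   # treatment A (hypothetical)
                         rng.normal(12, 1, 8),   # treatment B (hypothetical)
                         rng.normal(11, 1, 8)])  # treatment C (hypothetical)
groups = ["A"] * 8 + ["B"] * 8 + ["C"] * 8

print(pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05))
```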

Multiple measurements of plant growth were evaluated in order to comprehensively assess the potential of BC as an alternative to peat in soil-free substrates. Germination rates were determined by daily counts for the first 10 days following sowing, after which seedlings were thinned to one per pot. Pots with zero germination received transplanted seedlings; these replacement seedlings came from substrates with equivalent %BC but no pH adjustment and were the same age as seedlings in the experimental trial. Weekly measurements over 9 weeks were taken for plant height and for relative chlorophyll content (as leaf greenness) using a SPAD 502 Plus Chlorophyll Meter. SPAD meters measure the difference between red and infrared light absorbance, and for a given species and cultivar under the same growing conditions, SPAD values can be used as an indicator of relative chlorophyll content. To ensure accurate measurement of new leaf tissue, four separate points were consistently measured on the second fully extended leaf from the top of the plant. SPAD measurements were taken between the tip and apex of the leaf to better reflect chlorophyll content and reduce measurement variability. At early-stage flowering in week 9, above-ground biomass was harvested. Fresh and dry biomass was measured individually for shoots, flowers, and buds. Total N was determined separately for non-flowering and flowering biomass by dry combustion using an elemental analyzer. Total above-ground biomass N was calculated from the non-flowering and flowering shoot biomass and N measurements.

To examine fertigation effects on substrate properties over the 9-week growing period, root-free substrates were analyzed for pH, electrical conductivity, and plant-available nitrogen and phosphorus. Available N was determined by extraction with 2 mol L−1 KCl with shaking for 60 min. Ammonium and nitrate N in the centrifuged extract were measured colorimetrically using the salicylate-hypochlorite method and the vanadium chloride reduction method, respectively. Available P was determined by extraction with 0.5 mol L−1 NaHCO3 at pH 8.5 with shaking for 30 min, and orthophosphate P in the filtered extract was estimated as molybdate-reactive P. Available N and P in post-harvest substrates were corrected for substrate moisture content, which was determined gravimetrically by drying at 105 °C.

Analysis of variance (ANOVA) was used to test for differences among treatments in plant growth and substrate properties. Assumptions of normality and homoscedasticity of residuals were tested with the Shapiro-Wilk and Levene tests, respectively, using SAS Version 9.4. Data were transformed when possible to meet these assumptions, including log transformation, square root transformation, and Poisson transformation for variables with zero values. ANOVA was first performed using an exploratory model to test for potential interactions of substrate and pH adjustment for each response variable. If there was no interaction, simple mean differences of response variables were evaluated. If there was a significant interaction of BC substitution and pH adjustment, effects were analyzed separately for each factor. If transformations were not successful, non-parametric analysis was performed with JMP Version 11 using a Welch ANOVA, and significant differences in means for BC substitution treatments relative to the non-substituted control were evaluated using the Steel test. Relationships among post-harvest substrate properties were explored using linear correlation analysis with PROC CORR in SAS v9.4. A sketch of this decision workflow is given below.
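The analysis-of-variance decision logic described above can be sketched as follows; the study used SAS 9.4 and JMP 11, so this Python version is only an illustrative equivalent, and the group arrays are assumed inputs rather than the study's data.

```python
# Illustrative sketch: check ANOVA assumptions, then fall back to a Welch ANOVA
# when residuals cannot be made normal/homoscedastic by transformation.
import numpy as np
from scipy import stats

def welch_anova(groups):
    """Welch's one-way ANOVA for unequal variances; returns (F, df1, df2, p)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v
    grand = np.sum(w * m) / np.sum(w)
    num = np.sum(w * (m - grand) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) * lam / (k ** 2 - 1)
    F = num / den
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * lam)
    return F, df1, df2, stats.f.sf(F, df1, df2)

def analyze(groups, alpha=0.05):
    """groups: list of 1D arrays, one per treatment (e.g., BC substitution levels)."""
    resid = np.concatenate([g - np.mean(g) for g in groups])
    normal = stats.shapiro(resid).pvalue > alpha        # Shapiro-Wilk on residuals
    equal_var = stats.levene(*groups).pvalue > alpha    # Levene's test
    if normal and equal_var:
        return "one-way ANOVA", stats.f_oneway(*groups).pvalue
    return "Welch ANOVA", welch_anova(groups)[-1]
```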

By evaluating an alkaline BC at high volumetric rates in soil-free substrates, this study addresses a potential obstacle to the feasibility of BC-based substrates for plant production.

The present data demonstrate that substituting a softwood BC with strongly alkaline pH for peat at high rates in soil-free substrates does not require pH adjustment under common greenhouse conditions, because germination, shoot biomass and N content, and flowering of marigold did not significantly differ between substrates with and without initial adjustment to pH 5.8. BC substitution may even improve plant growth, as marigold plants with intermediate BC substitution exhibited higher relative chlorophyll content than those at 0% BC. These results offer mixed support for the stated hypothesis because BC substitution and pH adjustment effects on marigold depended on the stage of growth. As hypothesized, increasing BC substitution decreased plant height and chlorophyll content in the early stages of marigold growth. Though pH adjustment of BC substrates negatively affected germination and height, this may have been due to phytotoxicity of the pyroligneous acid (PLA) used to decrease the pH of high %BC substrates. By week 9, plant growth was similar regardless of BC substitution and initial pH adjustment, failing to support the hypotheses that high BC substitution rates would impair plant growth and that this would be alleviated by pH adjustment. However, since fertigation provided excess nutrients, pH was likely less important for nutrient availability. Equivalent and slightly positive effects of BC substitution at high rates and without pH adjustment can be partially attributed to the convergence of substrate pH to 4.4–7.4 over 9 weeks of fertigation and plant growth. As this high-temperature softwood BC has a higher pH than most BCs and was used at high substitution rates, it represents a 'worst-case scenario' liming effect. BCs produced from other feedstocks and/or at lower temperatures may not have as pronounced liming effects. Decreases in substrate pH over time could reflect a number of processes: a residual liming effect of BC, which could also account for the slight upward pH drift of substrates initially corrected to pH 5.8; nitrification; and rhizosphere acidification due to cation uptake. Though downward pH drift in peat-based substrates initially limed to a circumneutral pH has been found to be inverse to the base saturation of peat, the 0% BC substrates initially limed to pH 5.8 in this study did not exhibit significant pH changes.

The availability and plant uptake of N may be impacted by substrate pH, as indicated by extractable inorganic N, relative differences in chlorophyll content, and above-ground plant N. This may explain the initially decreased plant height and relative chlorophyll content in high BC substrates with high initial pH. Foliar chlorosis in ornamental plants, including marigold, grown in high pH substrates has been induced by liming in peat substrates and could reflect non-N deficiencies such as iron and manganese. Similar above-ground biomass and total N despite greater relative chlorophyll content in high BC substrates by week 9 indicate that initial differences in chlorophyll content with BC substitution did not persist and that the initially greater chlorophyll content of marigold in high BC substrates did not necessarily translate to greater biomass and N uptake. A lack of N deficiency under fertigation is further evidenced by overall high concentrations of available N in substrates at week 9 and by the absence of correlation between available N and marigold above-ground biomass and N content. Elevated chlorophyll content with BC substitution may therefore reflect enhanced plant access to non-N nutrients. Available N was inversely related to SPAD values in week 9 and did not reflect the similar above-ground plant N concentrations. The disparity between marked differences in substrate N availability under fertigation yet similar above-ground biomass N content could be explained by pH-dependent gaseous losses of N in pH-unadjusted substrates and/or differences in extractability influenced by pH-dependent binding. That extractable inorganic P did not differ as much as inorganic N across the pH gradient of pH-unadjusted substrates could indicate similar anion exchange capacities of the substrates. High available N and P in substrates challenges the hypothesis that BC substitution can influence marigold growth by affecting the availability of nutrients added by fertigation. For example, post-harvest available P was positively correlated with marigold biomass but was two orders of magnitude higher than deficiency thresholds. Though high C:N substrates such as peat can entail sufficient N immobilization to compromise plant growth, N fertilization as in this study would be expected to rapidly alleviate N deficiency. This time-dependent effect may have manifested as lower chlorophyll content in high BC substrates in week 1 but not week 9. Similarly, N fertilization alleviated the slightly lower biomass accumulation of marigolds grown in pine wood-based substrates compared to peat. Though the experimental design of this study removed water and nutrient limitations by daily fertigation, the present findings indicate a potential benefit of BC for water availability in soil-free substrates. The increase in WHC with BC substitution, which peaked at 30% BC, supports this hypothesized benefit of BC at high rates for soil-free substrates, as well as in inorganic matrices like soils. Marigold germination and growth responses to BC substitution in pH-adjusted substrates were likely due to the use of PLA to decrease pH.

An increasing amount of PLA was applied to reduce the increasingly elevated pH at high rates of the alkaline BC used. Since pH-adjusted substrates had the same target pH, the difference can be attributed to a non-pH effect of the almond shell PLA used in this study. PLAs are a complex mixture of organic compounds of varying biological activity, including toxicity; these include organic acids, phenols, ketones, phenyl ethers, and furan and pyran derivatives. The survival and equivalent growth of marigold seedlings transplanted into pH-adjusted substrates with no seed germination suggest greater sensitivity of seeds than of seedlings to PLA effects and are consistent with previous findings of PLA inhibition of germination. Parallel in vitro experiments revealed full inhibition of marigold and lettuce germination at PLA ≥ 2.50% and ≥ 1.25%, respectively, though a similar response occurred for acetic acid, a major PLA component, at the same concentrations. Studies indicate mixed effects of PLA on biological activity, including plant-growth-promoting, toxic, and antimicrobial effects. For example, PLA improved in vitro rooting of pear and, at rates of up to 6%, increased fruiting of edible mushrooms in sawdust-based substrates. On the other hand, germination of cress was inhibited by exposure to volatiles from pyrolysis, which are captured via condensation in the production of PLA. Similarly, cress germination was inhibited by BCs with high volatile contents. As with BC, feedstock and production conditions can significantly affect PLA composition and anti-biological activity, and thus the negative impacts of PLA observed in this study may be specific to the almond shell PLA used here.

The potential of pyrolyzed biomass in soil-free substrates has been investigated since the mid-20th century. For example, Kono investigated the utility of charcoal to improve substrate physical properties such as water holding capacity and bulk density for orchid production. However, the rapidly expanding body of knowledge on BC, including the ability to design BCs based on feedstock and pyrolysis conditions, means that BCs can be engineered to target additional benefits for soil-free substrates. Significant enrichment in available N and P over the course of 9 weeks of fertigation reflects the high-input conditions of greenhouse production systems. Compared to peat, the longer decomposition half-life of high-temperature BCs such as the one in this study, and the potential of nutrient ions to bind to BC and re-solubilize when applied to soils, raise the possibility of re-using BC-based substrates as fertilizers. BC substitution may also increase the longevity of peat-based substrates under the conditions of high nutrient availability common in their use. Decomposition of peat during long grow periods, particularly under high N additions, can compromise physical and chemical properties. Partially replacing peat with less decomposable materials can decrease the overall decomposition rate of the remaining peat component of substrates even under N fertilization, raising the possibility of extending the lifetime of peat-based substrates with partial BC substitution. The availability of BC as a secondary product of bio-energy production and/or waste stream management, as well as lower transportation costs made possible by regional or on-site BC production, could further leverage economic advantages over peat and peat alternatives.
Recent studies support the unique ability of BC to mediate biological interactions with benefits for greenhouse production, such as enhanced pathogen and pest suppression. For example, 1–5% additions of citrus wood BC to peat-based substrates increased expression of pathogen defense genes in strawberry and thereby suppressed fungal disease; for tomato and pepper, such additions delayed and reduced disease from fungal pathogens and mites.

Leaves were ground to a fine powder using a mortar and pestle under liquid nitrogen

Fremont “forager” research was stimulated by the work of Steve Simms at Topaz Slough in Utah's west desert. Topaz Slough was characterized as a temporary Fremont occupation and seemed to evidence an adaptation quite different from that of the more often investigated Fremont village sites. As a consequence of Simms' findings, which built on the work of Madsen, the term 'Fremont foragers' has become something of a buzz phrase in Fremont studies. Despite the clear interest in Fremont strategic variability, and the fact that many west desert open sites with Fremont occupations are known, few have been excavated, and reports of that work have tended to be brief. The present monograph, reporting the excavation of a "Fremont village" on a large dune on the extreme southwest margin of the Great Salt Lake Desert, is the first detailed site report of an extensively excavated Fremont forager site. This review briefly describes the contents of the monograph by chapter and comments on the contributions of the work. The Introduction states the reasons for the Buzz-Cut Dune excavations. The site was discovered during an effort to blade the top off a large dune to improve the line-of-sight between communication towers at Dugway Proving Ground. This activity exposed several possible structures, some of which contained Fremont diagnostics.

Excavations were undertaken in order to salvage the clearly significant site information.

Anthrax is a severe infectious disease caused by Bacillus anthracis. The spores can be produced easily and released into the air as a biological weapon, leading to a fatality rate of 86–89%. Bacillus anthracis secretes anthrax toxin, which is composed of a cell-binding protein, protective antigen (PA), and two enzymatic proteins called lethal factor (LF) and edema factor (EF). Cellular toxicity starts with the binding of PA to anthrax toxin receptors, after which the bound PA is cleaved by a furin-family protease, leaving a 63 kDa fragment bound to the receptors. The receptor-PA complex then self-assembles into a heptamer, allowing binding of LF and EF, and is internalized into the cytosol through endocytosis, causing disruption of normal cellular physiology. Antitoxins based on receptor-decoy binding show promising advantages over an antibody-based strategy, since it is difficult to engineer toxins to escape the inhibitory effect of the decoy without compromising binding to the cellular receptor. The extracellular domain of the main anthrax toxin receptor, the Capillary Morphogenesis Gene 2 protein (CMG2), can be produced recombinantly (rCMG2) and used as a prophylaxis or post-exposure treatment to neutralize anthrax toxins in the blood, preventing cellular toxicity. Additionally, fusing an Fc domain to rCMG2 increases the serum half-life through interaction with the salvage neonatal Fc receptor and lowers the renal clearance rate. These factors make rCMG2-Fc a promising anthrax decoy protein, which retains high binding affinity to PA along with a longer blood circulatory half-life than rCMG2.

We used a plant-based expression system for protein expression due to its rapid production rate and inherent scalability, which is critical for rapid response under emergency conditions. Moreover, plants rarely carry animal pathogens and are capable of post-translational modification, making them an appealing alternative to traditional protein expression systems such as mammalian cell culture or microbial fermentation. N-glycosylation can affect protein folding, structural integrity, and function, which makes it an important design consideration for glycoprotein-based therapeutics. In some cases, proteins with proper glycosylation exhibit optimal efficacy. For example, Fc glycosylation is required to elicit the effector functions of human IgG1. Thus, it should be preserved when immune defense is desired, for instance when expressing antitumor mAbs. On the other hand, for drugs that treat chronic conditions, the absence of glycosylation is desired to avoid effector functions and associated inflammatory responses. Another important consideration is that glycosylated proteins are less susceptible to proteases, such as pepsin, than their aglycosylated counterparts, which should be considered to maximize protein yield. Although the impacts of protein N-glycosylation have been studied, typically only one or two aspects were examined at a time, and these studies were done on antibodies. This study provides a comprehensive approach utilizing a combination of experimental and computational techniques to evaluate the effects of N-glycosylation on rCMG2-Fc fusion protein properties. Protein expression, toxin neutralization efficacy, binding kinetics, thermostability, and structural configuration were studied experimentally and compared among three rCMG2-Fc glycoform variants.

In addition, we employ atomistic molecular dynamics (MD) simulation to understand the structure and dynamics of the predominant glycoform of the APO, ER, and Agly variants. Atomistic MD simulations are well suited to the study of biomolecular systems, providing full access to virtual, high-resolution, time-ordered atomic trajectories. MD simulations have been used to study many different biological systems, including lipid membranes, transmembrane proteins, and other glycoproteins. While fully atomistic protein simulation is a powerful tool to investigate structural and functional information, it is important to recognize the current limitations of the technique. In particular, protein folding is known to occur on the order of microseconds to seconds, while atomistic protein simulation is generally limited to hundreds of nanoseconds by available computing resources. This limitation generally prohibits the straightforward simulation of protein fold transitions. The length scale of atomistic protein simulations is also computationally restricted, allowing only one rCMG2-Fc dimer to be simulated. Despite these limitations, this work shows that MD simulation data can provide insight into the effects of glycosylation on protein structure and improve our understanding and interpretation of experimental observations. To the best of our knowledge, no study has been conducted on an Fc-fusion protein considering this many experimental and molecular simulation factors. This study provides an integrated experimental and computational approach to evaluate the impacts of Fc N-glycosylation on rCMG2-Fc properties, and it potentially serves as a guideline for general glycoprotein-based therapeutic design, especially for Fc-fusion proteins.

The codon-optimized CMG2-Fc sequence includes the extracellular domain of CMG2, followed by two serine residues, the upper hinge of IgG2, and the Fc region of human IgG1. The resulting sequence corresponds to the APO variant as described previously. A SEKDEL C-terminal motif was added to make the ER variant; a point mutation of N268Q on Fc was introduced to make the Agly variant. The genes encoding the rCMG2-Fc variants were codon-optimized for expression in Nicotiana benthamiana. The full construct consists of the CaMV 35S promoter, Ω leader sequence, and gene encoding the Ramy3D signal peptide, followed by the rCMG2-Fc gene and the octopine synthase terminator. Agrobacterium tumefaciens EHA105 carrying the helper plasmid was transformed with each of the resulting binary expression vectors separately via electroporation. A binary vector expressing P19 to suppress RNAi-mediated gene silencing in Nicotiana benthamiana plants was co-infiltrated with the rCMG2-Fc-APO binary vector as previously described.

Plant tissue was collected at day 6 after infiltration. To evaluate the average expression level, leaves from 10 plants were collected and stored at −80°C prior to extraction. The leaf powder was weighed and mixed with extraction buffer at a leaf mass to buffer volume ratio of 1:7. The mixture was incubated on a shaker at 4°C for 1 h and then centrifuged at 1,800g at 4°C for 1 h, followed by 0.22 μm filtration to remove insoluble particles.

Filtered plant extract was loaded onto a Protein A column and eluted with glycine-HCl buffer at pH 3.0. Purified protein was immediately titrated to neutral pH with 1 M Tris buffer and buffer-exchanged into 1X PBS through overnight dialysis at 4°C. Expression of rCMG2-Fc in crude plant extract was quantified by a sandwich ELISA. First, ELISA microplate wells were coated with Protein A at a concentration of 50 μg/ml in 1X PBS buffer for 1 h, followed by plate blocking with 5% nonfat milk in 1X PBS buffer for 20 min. Crude plant extracts and purified standards were loaded onto the plate and incubated for 1 h. The bound rCMG2-Fc was detected by incubating a horseradish peroxidase (HRP)-conjugated goat anti-human IgG at a concentration of 0.5 μg/ml for 1 h. Plates were washed three times with 1X PBST between each of these steps. All incubation steps were done at 37°C, with an incubation volume of 50 μl. Next, 100 μl of ELISA colorimetric TMB substrate was added to each well and incubated for 10 min, followed by the addition of 100 μl of 1 N HCl to stop the reaction. The absorbance at 450 nm was measured with a microplate reader. The absorbance of the protein standard was plotted as a function of rCMG2-Fc concentration and fitted to the 4-parameter model in SoftMax Pro software (a generic curve-fitting sketch is shown below). The concentration of rCMG2-Fc in crude plant extract was determined by interpolating from the linear region of the standard curve.

SDS-PAGE and Western blot analyses were performed on the purified rCMG2-Fc variants. Protein was denatured and reduced by treating samples at 95°C for 5 min with 5% 2-mercaptoethanol. For nonreducing SDS-PAGE, samples were denatured by heat treatment at 95°C for 5 min. Samples were loaded onto precast 4–20% SDS-Tris-HCl polyacrylamide gels and run at 200 V for 35 min. For SDS-PAGE, the gel was washed three times with water and stained with Coomassie Brilliant Blue R-250 Staining Solution. For Western blot analysis, samples were transferred to a nitrocellulose membrane by electrophoretic transfer using the iBlot Gel Transfer Device. For the Western blot detecting the CMG2 domain, the membrane was probed with a goat anti-CMG2 polyclonal antibody at a concentration of 0.3 μg/ml, followed by incubation with a polyclonal AP-conjugated rabbit anti-goat IgG antibody at 1:10,000 dilution. For the Western blot detecting the Fc domain, the membrane was incubated with a polyclonal AP-conjugated goat anti-human IgG antibody at 1:3,000 dilution. The blots were developed using SIGMAFAST BCIP/NBT according to the product instructions.

Simulations of about 100 ns of the Agly, MAN8, and GnGnXF rCMG2-Fc glycoforms were performed in GROMACS with the AMBER ff14SB and GLYCAM06-j force fields. The AMBER topology files were exported to GROMACS format using ACPYPE with updated modifications that enable simulations with the GLYCAM force field in GROMACS. The 100 ns production simulations were preceded by energy minimization in vacuum, solvation, solvated energy minimization, a 100 ps NVT equilibration, and finally a 100 ps NPT equilibration. Both energy minimizations were terminated with a maximum force tolerance of 1,000 kJ mol−1 nm−1. Each glycoform was solvated with explicit water with a minimum distance of 1.2 nm between the glycoprotein and the edge of the periodic box. The solvated systems were neutralized with either sodium or chloride ions and then brought to 0.155 M NaCl. The velocity-rescale thermostat was used with a reference temperature of 310 K and a time constant of 0.1 ps.
The isotropic Parrinello-Rahman barostat was used with a reference pressure of 1 bar, a time constant of 2 ps, and an isothermal compressibility of 4.5 × 10−5 bar−1. All nonbonded interactions employed a short-range cutoff of 1 nm, with vertically shifted potentials such that the potential at the cutoff is zero. The Particle-Mesh Ewald method with cubic interpolation was used to model long-range electrostatic interactions. All non-water bonds were constrained with LINCS, while water bonds were constrained with SETTLE. A 2 fs timestep was used with a sampling interval of 0.1 ns, for a total of 1,000 data points per 100 ns simulation.

Recombinant CMG2-Fc variants were transiently expressed in Nicotiana benthamiana whole plants via agroinfiltration under identical conditions, and the expression levels were determined in crude leaf extract at 6 days post infiltration with a sandwich ELISA detecting the Fc domain of rCMG2-Fc. The expression level rankings on a leaf fresh weight basis, from high to low, were: APO, ER, and Agly variants. Both the APO and Agly variants, which differ only in the N-glycosylation eliminated by the N268Q point mutation, were targeted to the plant apoplast. The only N-glycosylation site within rCMG2-Fc is located in the CH2 domain of the IgG1 Fc. The significantly higher expression of the APO variant relative to the Agly variant might be due to stabilizing effects of N-glycans on protein accumulation in planta. This observation is consistent with previous studies, in which proteins were more susceptible to protease cleavage after deglycosylation. In many cases, targeting proteins to the ER results in a greater protein yield than targeting to the cytosol or apoplast.
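Referring back to the ELISA quantification above, the 4-parameter standard-curve fit and interpolation can be sketched as follows. This is a generic 4PL implementation with hypothetical standard-curve values, not the SoftMax Pro routine or the study's data.

```python
# Minimal sketch: fit a 4-parameter logistic (4PL) ELISA standard curve and
# interpolate an unknown rCMG2-Fc concentration from a sample absorbance.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """a = lower asymptote, b = slope, c = inflection (EC50), d = upper asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard-curve data (concentration in ug/ml vs A450).
conc = np.array([0.016, 0.063, 0.25, 1.0, 4.0, 16.0])
a450 = np.array([0.05, 0.12, 0.35, 0.90, 1.80, 2.40])

params, _ = curve_fit(four_pl, conc, a450, p0=[0.05, 1.0, 1.0, 2.5])

def interpolate(y, a, b, c, d):
    """Invert the 4PL model to recover concentration from absorbance."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(interpolate(0.6, *params))   # concentration estimate for a sample with A450 = 0.6
```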

The mean intake frequency of local sheep protein was reported to be one day per week

Studies identified mutton as a core food staple, comprising 6% of the total energy and 10% of the total protein consumed in the Diné diet. The current study participants reported that 35% of their total meat protein intake comes from local O. aries. In the study community, the typical serving sizes are reported to be 76.54 g of sheep muscle, 377.5 g of roasted or boiled whole liver, 76 g for one roasted whole kidney, and 111.5 g of roasted or boiled lung. Using the typical serving size and the maximum HM concentration for each food item, we calculated the weekly intake of HM from the current diet of the study population. The calculations for Mo and Se are based on the information provided by harvesters and reflect the average sheep meat consumption of one day per week. The Recommended Dietary Allowance (RDA) for Mo was exceeded by more than a factor of 2, but the tolerable upper intake level (UL) was not exceeded. The liver alone exceeded the Mo RDA by a factor of 1.8; by our estimates, an individual would have to consume half of the typical serving size to meet the RDA. The Mo levels in all sheep food products consumed comprised 4.6% of the UL. The Se Reference Dietary Intake (RDI) for adult males and females is 55 µg per day, and the Tolerable Upper Intake Level is set at 400 µg/day.

For all sheep products, the harvesters exceeded the Se RDI by more than a factor of seven and were slightly below the tolerable upper intake level of 400 µg/day. Liver intake alone comprised 80% of the tolerable upper limit for Se. Hypothetically, if one consumed liver more than once a week in the current scenario, the Se UL would be exceeded. The reported values account only for the sheep protein intake evaluated in this study and exclude non-subsistence and other dietary sources. In summary, in this U mining-impacted area, our calculations indicate that the Se levels found in locally harvested sheep exceeded the RDI significantly but were marginally below the established daily tolerable upper intake level. Similarly, the RDA for Mo was exceeded, while the Mo UL was not. Consuming liver once a week already exceeds the Se RDI and Mo RDA, and eating more than one serving of liver per week is anticipated to exceed the Se UL. Dietary sheep intake should be adjusted to avoid exceeding the Cd PTWI and Se UL. Diversification of the overall dietary intake, or minimizing the intake of foods with high HM content, is recommended until further research can be done. Our study comprised adults only; therefore, our calculations are based exclusively on adult food intake. Recommendations based on tailored research are needed for those who are more sensitive to HM exposure, such as children, the elderly, pregnant women, or those with at-risk health conditions. There is no dietary intake guideline for the remaining HMs examined in this study.
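The exceedance arithmetic above follows a simple pattern: weekly intake = serving size × metal concentration × servings per week, compared against seven times the daily RDA/RDI or UL. The sketch below illustrates this for Mo; the serving sizes come from the text, while the concentrations are hypothetical placeholders rather than the study's measured values.

```python
# Minimal sketch of the weekly-intake arithmetic. Serving sizes are from the text;
# the Mo concentrations are HYPOTHETICAL placeholders, not measured values.
SERVING_G = {"muscle": 76.54, "liver": 377.5, "kidney": 76.0, "lung": 111.5}
MO_UG_PER_G = {"muscle": 0.1, "liver": 1.0, "kidney": 0.3, "lung": 0.1}  # placeholders
SERVINGS_PER_WEEK = 1          # reported mean intake of one day per week
MO_RDA_UG_DAY = 45             # adult RDA for molybdenum
MO_UL_UG_DAY = 2000            # adult tolerable upper intake level for molybdenum

weekly_intake = sum(SERVING_G[f] * MO_UG_PER_G[f] for f in SERVING_G) * SERVINGS_PER_WEEK
print(f"weekly Mo intake: {weekly_intake:.0f} ug")
print(f"fraction of weekly RDA: {weekly_intake / (MO_RDA_UG_DAY * 7):.2f}")
print(f"fraction of weekly UL:  {weekly_intake / (MO_UL_UG_DAY * 7):.2%}")
```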

Tropical regions are experiencing widespread and rapid conversion of native vegetation to agricultural land uses, often resulting in severe soil degradation. Several studies document rapid degradation of selected soil properties, with a focus on processes affecting carbon cycling, soil fertility, and soil physical properties. However, few studies report on land-use-driven changes to soil colloidal properties, such as mineralogical and surface charge characteristics. The few existing studies showed either no differences in clay mineralogy or minimal clay mineral degradation with strong acidification resulting from long-term ammonium fertilizer application. In contrast to soil organic carbon and fertility characteristics, the soil colloidal fraction is generally more resilient to changes in land use, and therefore any changes are difficult to detect in short-term studies. As a result, there is a distinct paucity of information regarding changes in soil mineralogical characteristics in response to land-use change, especially for tropical Andosols, which represent >50% of the global Andosol land area. Indonesia has 127 active volcanoes, resulting in a wide distribution of Andosols. These Andosols support high agricultural productivity, with some of the world's highest human-carrying capacities being found on volcanic soils in Indonesia. Andosols have several unique physical and chemical properties owing to a dominance of 'active' iron, aluminum, and aluminosilicate materials in their colloidal fraction. While volcanic soils have been extensively studied, there are relatively few studies examining the active Fe/Al fraction of tropical Andosols and its resilience to ecosystem perturbations, such as land-use change. It is expected that the active Fe/Al fraction will be more kinetically and thermodynamically susceptible to alteration from land-use conversion than crystalline clay minerals. As the active Fe/Al fraction imparts soil and ecosystem resiliency, it is important to document long-term changes in the soil colloidal fraction in response to land-use change, because this fraction has a strong effect on carbon cycling, soil fertility, and nutrient leaching.

In Indonesia, the land use/land cover of Andosols is primarily native rainforest, tea plantation, horticultural crops, paddy fields, and other food crops. Land-use conversion from native rainforest to agriculture has taken place over long periods of time, making assessments of long-term alteration of soil properties possible. Our previous assessment of changes in soil properties following conversion from rainforest to agronomic land use in Indonesia demonstrated strong resilience of soil physical properties, carbon stocks, and fertility factors. For example, converted soils showed increased bulk density and meso/microporosity that contributed to increased plant-available water retention capacity; increased soil carbon and nitrogen stocks with a redistribution of organic matter from topsoil to subsoil horizons; and lower carbon mineralization and microbial biomass, especially in topsoil horizons. This study expands upon our previous work to examine changes in the mineralogical and surface charge characteristics of the soil colloidal fraction following conversion of native rainforest to tea plantation and horticultural crops. Results of this study inform strategic policy and management practices involving tropical forest conservation/reforestation and sustainable agricultural production in tropical Andosols.

Under the perudic/isothermic soil climatic regime of West Java, Indonesia, chemical weathering rates are expected to be very high. Weathering rates are further accelerated by several characteristics of the basaltic-andesite tephra, such as its large surface area, high permeability, and a glass fraction dominated by highly weatherable colored glass. Shoji et al. measured aluminum release rates from colored glass that were 1.5 times greater than from noncolored glass, and weathering rates increased ~1.5 times for each 10 °C increase in temperature between 0 and 30 °C. Thus, it is not unexpected that the volcanic glass fraction was largely depleted within the ~8,700 and ~14,500 years since deposition. In particular, substantial desilication has occurred, as indicated by the decrease in the Si:Al molar ratio from ~2.9 in a typical basaltic-andesite material to the range of 0.94 to 1.36 in the investigated soils (see the short calculation sketch below). As the silica content of the soil becomes depleted, the clay-size mineralogy transforms from materials requiring high silica activities to those stable at lower silica activities. Without periodic tephra deposition to rejuvenate the weathering sequence, the intensity of the isothermic/perudic weathering environment would lead to severe desilication and the occurrence of Ultisols/Oxisols dominated by low-activity clays in the humid tropics. In terms of the land-use treatments, markedly higher weathering rates were suggested under PF1 and TP1 based on their higher total Ti and Zr concentrations, which accumulate during chemical weathering/leaching. For soil profiles developed from uniform parent materials, Ti and Zr concentrations decrease with increasing soil depth. However, in this study, Ti and Zr concentrations increased with increasing soil depth, reflecting the older, more highly weathered soil materials at depth. Higher weathering rates in the PF1 and TP1 pedons are further supported by their higher Fed:Fet ratios, indicating greater release of iron by chemical weathering.
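The Si:Al molar ratio cited above can be computed from total elemental analysis reported as oxide weight percentages; the sketch below shows that conversion with hypothetical oxide values (not measurements from this study).

```python
# Minimal sketch of the Si:Al molar ratio calculation from oxide weight percentages.
# The oxide values in the example calls are HYPOTHETICAL placeholders.
SIO2_MOLAR_MASS = 60.08    # g/mol
AL2O3_MOLAR_MASS = 101.96  # g/mol

def si_al_molar_ratio(sio2_wt_pct: float, al2o3_wt_pct: float) -> float:
    mol_si = sio2_wt_pct / SIO2_MOLAR_MASS
    mol_al = 2.0 * al2o3_wt_pct / AL2O3_MOLAR_MASS   # two Al atoms per Al2O3
    return mol_si / mol_al

print(round(si_al_molar_ratio(55.0, 16.0), 2))  # ~2.9, typical basaltic-andesite material
print(round(si_al_molar_ratio(30.0, 26.0), 2))  # ~1.0, strongly desilicated soil
```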

We posit that the higher apparent weathering rates may result from the lower pH and higher DOC concentrations found in the PF1 and TP1 pedons. We acknowledge that subtle differences in the chemical composition of the tephra deposits may contribute to differences among pedons. Therefore, it is difficult to assess whether the apparent differences in chemical weathering can be solely attributed to changes in land use over the ~100-yr period.

The dominance of poorly crystalline materials is likely associated with rapid weathering of the volcanic glass fraction. Rapid chemical weathering results in supersaturation of the soil solution, which kinetically favors the formation of metastable nanocrystalline and paracrystalline precipitation products. Further, the lack of a distinct dry season hinders the crystallization process, as Ostwald ripening and dehydration are energetically and kinetically favored by prolonged periods of high temperature and desiccation. These findings are similar to the weathering of tephra deposits in mesic/udic climatic regimes, such as in northern Japan and New Zealand, which yields a prevalence of poorly crystalline materials with few crystalline minerals at a similar stage of weathering. Allophanic materials are the favored weathering product in humid, temperate climates under conditions of low organic matter concentrations and pH(H2O) > 4.9. Soil genesis associated with the native pine forest, which was the dominant land use for the majority of the period of soil formation at all sites, resulted in high organic matter and low pH values that might be expected to favor formation of Al–humus complexes as opposed to allophanic materials, thereby leading to the formation of non-allophanic Andosols. However, another important factor controlling the fate of Al3+ in the non-allophanic Andosols of northern Japan is eolian input of 2:1 layer silicates from Asia, which incorporate Al3+ in their interlayer positions, rendering it unavailable for allophane formation. Eolian inputs of 2:1 layer silicates in Indonesia are expected to be minor compared to northern Japan. Therefore, the release of Al3+ by chemical weathering appears to exceed the incorporation of Al3+ into organic complexes and 2:1 layer silicates, leading to abundant formation of allophanic materials in this study. In terms of land use, the TP1 topsoil displayed a distinctly lower allophane content in the upper two horizons than the other land-use types. The lower allophane concentrations may be due to dissolution of allophanic materials by soil acidification associated with the high rates of ammonium fertilizer application to tea plantations. Dahlgren and Saigusa demonstrated rapid dissolution of allophanic materials under acidic conditions, with dissolution rates showing a strong H+ dependence. This interpretation is consistent with Takahashi et al., who documented a decrease in allophanic materials with a concomitant increase in Al–humus complexes associated with strong acidification of tea plantations. Notably, the lower allophanic material concentrations in the upper two horizons of the TP1 pedon correspond to the highest Al–humus complex concentrations found in this study. While the PF1 profile also displayed strong acidification, its allophane content was consistently high throughout the pedon compared to the other land-use types.
We posit that biogenic cycling of silica by forest vegetation, coupled with higher evapotranspiration in the native forest, which lowers the leaching potential, may compensate for the strong natural acidity by maintaining higher silica activities that support the stability of allophanic materials. The overall high Feo:Fed ratio across all soil profiles indicates the dominance of nanocrystalline iron oxides, such as ferrihydrite, whose formation is favored by high organic matter concentrations and high Al3+/silica activities that are posited to inhibit crystallization. Additionally, the perudic moisture regime prevents soil profile desiccation, limiting the Ostwald ripening and dehydration that promote crystallization. The 2Bw3 horizon of the HF1 profile was a notable exception to the overall iron oxide pattern, having a very high Fed content and a lower Feo:Fed ratio. This horizon occurs at a tephra-unit discontinuity and may represent a redox feature associated with imperfect vertical drainage. We posit that redox cycling driven by alternating anaerobic/aerobic conditions leads to ferrolysis, which contributes to intense proton generation and localized weathering, leading to Fe oxide and gibbsite enrichment. Higher Feo concentrations in the PF1 and TP1 profiles may be associated with their appreciably lower pH values compared to the HF1/IH1 profiles.

Logue was a leading figure in co-operative studies and the employee ownership movement

While the GUC is an economic and cultural engine, it is surrounded by seven low-income and racially segregated neighborhoods. The median household income in these communities is $18,500; it is $47,626 in the rest of the city. Unemployment sits at 24 percent, nearly three times the rate for the city at large. The asymmetry between the economic dynamism of the GUC and the racialized poverty of its adjacent neighborhoods points toward the unevenness of Cleveland's economic recovery. Evergreen finds one of its origin points in a larger effort undertaken by the Cleveland Foundation (CF), a local community foundation, to harness the wealth of the GUC for the economic benefit of the neighborhoods surrounding it. A product of Cleveland's wealthy past, the Cleveland Foundation is one of America's largest foundations, with an endowment of $1.8 billion. In 2005 it launched the Greater University Circle Initiative, which sought to capture some of the $3 billion spent by GUC institutions each year for the purposes of local economic development. In 2006, India Pierce Lee, a program director with CF, heard Ted Howard from the Democracy Collaborative give a presentation on his vision of community wealth building. The Democracy Collaborative is a leader in the growing "new economy movement," which seeks systemic change by challenging the imperative for constant economic growth and by promoting economic equality and democracy.

The Collaborative defines community wealth building as "improving the ability of communities and individuals to increase asset ownership, anchor jobs locally, expand the provision of public services, and ensure local economic stability". A key part of DC's community wealth-building strategy is harnessing procurement flows from anchor institutions whose deep rootedness in a community creates an incentive to prioritize local economic development. Ted Howard's strategy for community wealth building mirrored the priorities of the Cleveland Foundation's recent GUC initiative. Shortly after hearing his presentation, India Pierce Lee invited him and DC to do a feasibility study for enacting their community wealth-building strategies in Cleveland. The initial plan was to encourage pre-existing Community Development Corporations (CDCs) to incubate new social enterprises that could harness procurement flows, but no takers could be found; the plan was too risky for local CDCs, whose expertise was rooted in affordable housing development. The worker-run co-operative model was a secondary plan that developed through conversations between Howard and John Logue from the Ohio Employee Ownership Center. Logue had written on the successful models in Mondragon, Emilia Romagna, and Quebec; Evergreen was designed with these models in mind, especially Mondragon. The final plan was for a network of worker-owned co-operatives designed to capture procurement flows from anchor institutions, especially strategic opportunities in the emerging green economy. It is important to understand Evergreen in the context of the Democracy Collaborative's larger vision and work.

The DC describes its mission as the pursuit of “a new economic system where shared ownership and control creates more equitable and inclusive outcomes, fosters ecological sustainability, and promotes flourishing democratic and community life”. In 2015, DC launched the “Next System Project,” an initiative seeking to “launch a national debate on the nature of ‘the next system’ … to refine and publicize comprehensive alternative political-economic system models that are different in fundamental ways from the failed systems of the past and capable of delivering superior social, economic and ecological outcomes”. The NSP includes a statement signed by a broad assemblage of the political left. Position papers on eco-socialism, commoning, and solidarity economics have been put forward as part of the effort to debate and actualize the “next system”. Evergreen, then, should be understood as a local experiment in next system design. As we unpack below, there are limits to Evergreen’s nationwide replication. But the movement building and systemic thinking that it is part of are crucial to the growth of the co-operative economy and the transformation of neoliberal capitalism. Given Evergreen’s roots in the “next system” vision of the Democracy Collaborative, it is surprising that it would attract support from the Cleveland Foundation, a wealthy charitable foundation established by a banker and governed by members of Cleveland’s economic elite. Indeed, once the plan for Evergreen was finalized, the CF pledged $3 million of seed funding to the project. The Foundation’s support for a network of worker-owned co-operatives reveals some openness among local higher-ups to the idea of systemic reform. India Pierce Lee, for example, had previously held a post as Director of the Empowerment Zone with the City of Cleveland’s Department of Economic Development. Empowerment zones are federally designated high-distress communities eligible for a combination of tax credits, loans, grants, and other publicly funded benefits.

At this post, Pierce Lee saw millions of public dollars being spent on economic development – all of it directed to employers – with minimal to zero effect: few new businesses, few new jobs created. Brenner and Theodore have described empowerment zones as “neoliberal policy experiments”. Pierce Lee had experienced these experiments as failures and was keen to try the alternative that Evergreen represented. Similarly, Howard told us how some CF board members raised ideological concerns over worker ownership, but that a willingness to try alternatives prevailed. Cleveland’s painful history of de-industrialization, out-migration, and persistent racialized poverty likely facilitated elite openness to new forms of economic development. The Cleveland Foundation provided Evergreen with crucial seed funding and technical support. “I cannot stress enough that without the people at the Cleveland Foundation, Evergreen would not have happened,” noted Candi Clouse from Cleveland State University’s Center for Economic Development during our interview. Replicating the “Cleveland Model” is challenging when an integral piece is a supportive and wealthy community foundation. The current conjuncture of “contested neoliberalism,” however, does make the availability of this support more probable. Research by DC on foundations experimenting with funding alternative economic development strategies found examples in Atlanta, Denver, and Washington, D.C. But the authors also note how “many community foundation leaders talked about conservative boards, isolated from new ideas, who were reluctant to take up seemingly risky new ways”. Popular and elite frustration with neoliberalism means that foundation support for co-operative development is more possible than in previous eras, but this support remains contingent on local circumstance. Elite frustration with mainstream economic development mechanisms also played a key role in the city’s support of Evergreen. While the Cleveland Foundation provided seed funding and technical assistance, the city played a key role by helping to secure financing. Tracy Nichols, Director of Cleveland’s Department of Economic Development, had seen the plans for Evergreen and wanted to help finance it. Evergreen lost a bank loan in the 2008 financial crisis, and having the City’s support in securing financing was a big step towards actualizing the plan. For Nichols, the fact that the Cleveland Foundation was providing seed funding and logistical support helped legitimate Evergreen as a safe bet for municipal resources. All three Evergreen co-operatives are capital intensive. Without clear policy frameworks for funding, putting financing in place required ingenuity. Evergreen is almost entirely debt-financed. The majority of its funds have come from two federal social financing programs: Department of Housing and Urban Development (HUD) Section 108 funds and New Market Tax Credits (NMTCs). Of the nearly $24 million raised from federal sources for Evergreen’s development, approximately $11.5 million came from HUD Section 108 loans and approximately $9.5 million came from NMTCs. The HUD Section 108 funds were established to provide communities with a source of financing for “economic development, housing rehabilitation, public facilities, and large-scale physical development projects”. The HUD low-interest loans were inaccessible without the City’s sustained help, since funds flow through state and local governments.

Monies from HUD provided the core financing for the $17 million, 3.25-acre greenhouse that now houses Green City Growers. The greenhouse is located on land that included residential housing prior to Evergreen’s development. Not only did Cleveland’s Department of Economic Development play a central role in securing financing, but it also facilitated the purchase of homes that needed to be demolished before construction of the greenhouse could begin. Unlike HUD Section 108 funds, New Market Tax Credits could be accessed directly by Evergreen’s founders without the City serving as intermediary. But NMTCs are also complex and would have been very challenging to negotiate without the Cleveland Foundation’s technical assistance. “We call the New Market Tax Credits a full employment program for lawyers and accountants,” reflected Howard, “because there are hundreds of thousands of dollars of fees”. The Clinton Administration launched the NMTC program in 2000 as a for-profit community development tool. The goal of the program is to help revitalize low-income neighborhoods with private investment that is incentivized through federal income tax credits. The NMTC program is meant to be a “win-win” for investors and low-income communities, but investors win more, and at public expense. The NMTC program is arguably an example of what Peck and Tickell call “roll-out neoliberalism”; they argue that the neoliberal agenda “has gradually moved from one preoccupied with the active destruction and discreditation of Keynesian-welfarist and social-collectivist institutions to one focused on the purposeful construction and consolidation of neoliberalized state forms, modes of governance, and regulatory relations”. The NMTC program creates new profit opportunities for private investors at public expense: the privatization of gain and socialization of loss common to neoliberal economic policy. A framework that made federal loans available directly to community organizations would arguably be more efficient and less bureaucratic. Lacking this option, Evergreen’s founders pragmatically harnessed whatever resources they could access. Evergreen’s emergence would have been greatly facilitated by policy mechanisms that made financing and technical assistance more readily available. The international co-operative movement has prioritized supportive legal frameworks as a key constituent of co-op growth, but there is not a robust literature on policy support for co-operatives. Supportive legal frameworks for co-operatives are a “deeply under-researched area”. Based on our review of the existing literature, however, we found six primary forms of policy support that have been successfully deployed internationally: co-op recognition, financing, sectoral financing, preferential taxation, supportive infrastructure, and preferential procurement. The most developed examples of these policies are found in areas of dense co-operative concentration: the Basque region of Spain, Emilia Romagna in Northern Italy, and Quebec, Canada. Below is a table summarizing how these six policy forms are deployed in the co-op-dense regions. The table is included to facilitate further research in the understudied area of co-operative policy, and to clarify policy successes for organizers in the co-operative movement interested in emulating them.
In the next section we examine “private” or ad hoc versions of the policy supports that Evergreen used and explore what these improvisations reveal about the legislative needs of the US co-op movement more broadly. Of the six enabling policy forms, Evergreen has benefitted from private and ad hoc versions of recognition, financing, supportive infrastructure, and procurement. The Mayor of Cleveland, Frank Jackson, and Ohio Senator Sherrod Brown spoke at the opening ceremonies of Green City Growers. These high-level endorsements, while not written into policy, conferred legitimacy and boosted media coverage. In terms of financing, having a wealthy foundation backstopping the initiative was a crucial first step. The Foundation’s support helped bring the City on board, giving the Department of Economic Development the confidence to access HUD 108 funds to support the co-op. As Nichols reflected: “the loans have some risk for us, but I know the Foundation is backing this initiative, and I know they don’t want it to fail. They have money. I don’t have to worry”. Without technical assistance from the Cleveland Foundation and the Ohio Employee Ownership Center (OEOC), securing New Market Tax Credit funds would have been even more of a byzantine process. The Foundation and the OEOC offered legal support and co-op training, respectively. The OEOC has received both state and federal funding, but its state funding has been largely cut. Again, the technical support Evergreen received was ad hoc and private.

It is not sufficient that farmers and consumers perceive net benefits from GM crop varieties

The GM varieties that have been developed and adopted extensively to date have not experienced significant price discounts because of buyer resistance. This can probably be attributed to the nature of the crops. For feed grains, the buyers are other farmers who are comfortable with the technology, and for fiber crops such as cotton the food safety concerns do not apply. For the major food grains, wheat and rice, even if the farm-production economics potential of GM varieties is as large as for feed grains, market acceptance may differ sufficiently to limit their adoption. Rather than another farmer, the relevant buyer for these crops is a food processor, manufacturer or retailer who may be reluctant to risk negative publicity, to risk losing consumers who would prefer a biotech-free label, or who may not be confident that biotech and non-biotech grain can be segregated. The adoption of biotechnology must provide net benefits to other participants in the marketing chain, such as food processors and retailers. Pricing of the technology may be a critical factor. Even if the new technology is more cost-effective than the traditional alternative, monopolistic pricing could mean that the technology supplier retains a large share of the benefits.

The cost savings passed on to processors and consumers may be a small fraction of the total benefits, rendering the incentives for processors, retailers and consumers to accept the technology comparatively small. Processors and retailers can effectively block a new technology if it does not clearly benefit them, even if there would be net benefits to the general public. The size of the market matters. The cost to develop a new variety is essentially the same whether it is adopted on one acre or a million acres, but the benefits are directly proportional to the number of acres on which the variety is adopted. This is why biotech companies have focused on developing technologies for more widely planted agronomic crops, especially feed-grain and fiber crops for which market barriers are lower. The technology developer must also obtain regulatory approvals. It is difficult to obtain precise information on the costs of regulatory approval for biotech crops and chemical pesticides, but according to available estimates, the total cost of R&D — from “discovery” to commercial release of a single new pesticide or herbicide product — exceeds $100 million, and regulatory approval alone costs more than $10 million. A new technology must generate enough revenue for the developer over its lifetime to cover these costs, and for some crops the total acreage is simply not sufficient. Given the large fixed costs associated with regulatory approvals for specific uses, agricultural chemical companies have concluded that the potential market is too small to warrant the development of pesticides for many of California’s specialty crops, which have become technological orphans.
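To make the fixed-cost argument concrete, the short calculation below estimates the adoption area a developer would need to recoup its costs. Only the roughly $100 million R&D figure comes from the text; the per-acre benefit, the share captured by the developer, and the commercial lifetime are illustrative assumptions, and the helper function is ours.

```python
# Back-of-the-envelope break-even acreage for a new GM variety.
# The ~$100M total R&D/approval figure is cited in the text; the other
# parameters are illustrative assumptions.

def breakeven_acres(fixed_cost: float, benefit_per_acre_yr: float,
                    developer_share: float, years: float) -> float:
    """Acres that must adopt the variety each year for the developer's
    captured revenue to cover fixed R&D and regulatory costs."""
    return fixed_cost / (benefit_per_acre_yr * developer_share * years)

acres = breakeven_acres(fixed_cost=100e6,        # ~$100M R&D + approval (text)
                        benefit_per_acre_yr=20,  # assumed gross benefit, $/acre/yr
                        developer_share=0.5,     # assumed share captured in tech fees
                        years=10)                # assumed commercial lifetime
print(f"Break-even adoption: ~{acres/1e6:.1f} million acres per year")
```

Under these assumptions the variety would need roughly a million acres of adoption each year, which illustrates why small-acreage specialty crops become technological orphans.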

It does not follow that the government should invest in developing new conventional or GM pest-control technologies for these orphan crops. If the current regulatory policy and process is appropriate and efficiently implemented, then the high cost is not excessive; if a new technology cannot generate benefits sufficient to pay those costs, then it is simply not economical to develop that technology. The question for technology orphan crops is whether it is possible to reduce the costs of R&D and regulatory approval sufficiently to make it profitable for the nation and the private sector to change their orphan status. On the supply side, “horticulture” includes an enormous diversity of fruit and vegetable crops, but it also includes many nonfood species, such as ornamentals, flowers and recreational turfgrass. Collectively these horticultural crops compare well with major agronomic crops in terms of total value in the United States. However, they use much less acreage, and the market size for some biotech products depends on both acreage and production value. In 2000, the United States produced fruits, nuts and vegetables with a total value of more than $28 billion, of which California contributed about $14 billion. In addition, horticulture includes a small number of larger-scale crops as well as a large number of smaller-scale crops. At current costs for R&D and regulatory approval, it is unlikely that biotechnology products will be developed and achieve market acceptance for many of these smaller-scale crops in the near future. Further, experimentation with perennials such as grapes, nuts and fruit trees is comparatively expensive, and it is costly to bring new acreage into production or replace an existing vineyard or orchard with a new variety. On the demand side, the market for horticultural products, especially fresh fruits and vegetables, is undergoing important changes associated with the changing structure of the global food industry.

Increasingly fewer and larger supermarket chains have been taking over the global market for fruits and vegetables, especially fresh produce, and changing the way these products are marketed. Because fresh produce is perishable and subject to fluctuations in availability, quality and price, it presents special problems for supermarket managers compared with packaged goods. Supply-chain management, and the increasing use of contracts that specify production parameters as well as characteristics and price, is replacing spot markets for many fresh products. A desire for standardized products, regardless of where they are sourced around the world, could limit the development and adoption of products targeting smaller market segments, unless retailers perceive benefits and provide shelf space for diversified products — such as biotech and non-biotech varieties of particular fruits and vegetables. On the other hand, an increasingly wealthy and discriminating consuming public can be expected to continue to demand increasingly differentiated products — with an ever-evolving list of characteristics such as organic, low-fat, low-carbohydrate and farm-fresh. Hence retailers will have to balance the cost savings and convenience associated with global standardization against the benefits of providing a greater range of products, which will include GM products when retailers begin to perceive benefits from stocking them. Unlike other types of foods, fruits and vegetables are often consumed fresh and in clearly identifiable and recognizable form. This has implications for perceptions of quality and food safety that may influence consumer acceptance — perhaps favorably, for instance, if a genetically modified sweet corn could be marketed as reduced-pesticide. Other elements of GM horticulture — such as nonfood products, ornamentals or turfgrass — have advantages in terms of potential market acceptance. GM trap crops, which provide pesticide protection for other crops, and GM sentinel crops, which signal the presence of pests or provide other agronomic indicators, may be used in food production without overcoming barriers of acceptability to market middlemen or consumers. Biotechnology products designed for home gardeners may be more readily accepted because the grower is the final consumer. Market acceptance in the United States is also linked to continued access to export markets, particularly in the European Union and Japan, where restrictions have been applied to biotech foods. The relative importance of the domestic market could help to account for the success of GM feed-grain technologies in the United States, and it may also help to account for the success of these and other GM technologies in China. China is comparatively important in horticultural biotechnology — its investment in agricultural biotechnology is second only to the United States, but with a different emphasis, including significant investment in horticultural biotechnology.

The technological potential for GM horticultural crops appears great, particularly when we look beyond the “input” traits that have dominated commercial applications to date, to opportunities in “output” traits, such as pharmaceuticals and shelf-life enhancements. Because delays in socially beneficial technologies mean forgone benefits, there may be a legitimate role for the government in facilitating a faster rate of development and adoption of horticultural biotechnology products. For instance, the government could reform property-rights institutions to increase efficiency and reduce R&D costs. IPRs apply to research processes as well as products, and limited access to enabling technology, or simply the high cost of identifying all of the relevant parties and negotiating with them, may be retarding some lines of research — a type of technological gridlock. Nottenburg et al. suggest a government role in improving access to enabling technologies. Similarly, the government could revise its regulations to increase efficiency and reduce costs for regulatory approvals. Instead of requiring a completely separate approval for each genetic transformation “event,” it may be feasible to approve classes of technologies with more modest specific requirements for individual varieties. The government could also reduce some barriers to adoption, especially market acceptance of biotech food products, by providing information about their food safety and environmental implications. The biotech industry and agriculture can have an influence here, too. The general education of consumers and market intermediaries about biotech products may be facilitated by a process of learning through experience with products — such as nonfood applications or home garden applications — that have good odds of near-term success because of low barriers to market acceptance and good total benefits. If and when genetically engineered horticultural products become more widely available and adopted, they will enter an expanding marketplace that is becoming globally integrated and more consolidated. Fewer, larger firms will control access to a rising share of the world’s population, including rapidly growing middle-income consumers in the developing world. Consumers everywhere will be increasingly focused on convenient, ready-to-eat and value-added products. In order to compete on a global scale, GE produce must meet the challenges of the quickly evolving market for fruits and vegetables. In the United States alone, the estimated final value of fresh produce sold through retail and food-service channels surpassed $81 billion in 2002. Europe-wide fresh produce sales through supermarket channels alone were estimated to exceed $73 billion in 2002, and total final sales to exceed $100 billion. Worldwide, consumption and cultivation of fruits and vegetables are increasing. Between 1990 and 2002, global fruit and vegetable production grew from 0.89 billion tons to 1.3 billion tons, and per capita availability expanded from 342 pounds to 426 pounds (the implied annual growth rates are worked out in the short example below). Much of this growth has occurred in China, which is aggressively pursuing agricultural biotechnology. The global fresh fruit-and-vegetable marketing system is increasingly focused on adding value and decreasing costs by streamlining distribution and understanding customer demands. In the United States and Europe this dynamic system has evolved toward predominantly direct sales from shippers to supermarket chains, reducing the use of intermediaries.
Food-service channels are absorbing a growing share of total food volume and are also developing more direct buying practices. The year-round availability of fresh produce is now seen as a necessity by both food-service and retail buyers. Product form and packaging are also changing as more firms introduce value-added products, such as fresh-cut produce, salad greens and related products in consumer-ready packages. Estimated U.S. sales of fresh-cut produce were over $12 billion in 2002. Fresh-cut sales are even higher in Europe and beginning to develop in Latin America and Asia as well. The implications of this trend may become as important to the biotechnology industry as the changes in market structure, since fresh-cut processors are increasingly demanding specific varieties bred with attributes beneficial to processing quality. The streamlining of marketing channels poses both challenges and opportunities for horticultural biotechnology. A smaller number of larger firms, controlling more of world food volume, now act as food-safety gatekeepers for their consumers, reflecting the diversity of consumer preferences in their buying practices. Where consumers perceive products utilizing biotechnology to be beneficial, retail and food-service firms will provide them. Products with specialized input traits valued by consumers, such as unique color, flavor, size or extended shelf-life, are the most likely to succeed in today’s marketplace. While large food-service and retail buying firms and international traders may offer easy access to consumer markets, if major buyers adopt policies unfavorable to GE foods, distribution obstacles could become insurmountable. Such policies are common among European food retailers, reflecting strong consumer concern there over GE products. The challenge to supply seasonal, perishable products year-round has favored imports, and increased horizontal and vertical coordination and integration among shippers regionally, nationally and internationally.
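As flagged above, the 1990-2002 production and per-capita figures can be converted into implied compound annual growth rates; the snippet below is simple arithmetic on the published totals, nothing more.

```python
# Implied average annual growth rates for the 1990-2002 figures quoted above
# (0.89 -> 1.3 billion tons production; 342 -> 426 lb per-capita availability).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

print(f"Production growth: {cagr(0.89, 1.3, 12):.1%} per year")   # ~3.2%/yr
print(f"Per-capita growth: {cagr(342, 426, 12):.1%} per year")    # ~1.8%/yr
```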

Nutrients would diffuse and advect from the bulk soil toward the root zone

However, more work is required to determine how to better represent coupled microbe–plant nutrient competition and transport limitations. For example, we have previously shown that one can apply a homogeneous soil environment assumption and include the substrate diffusivity constraint in the ECA competition parameters. In this approach, the diffusivity constraint can be directly integrated into the substrate affinity parameter. The “effective” KM would be higher than the affinity measured, e.g., in a hydroponic chamber. We hypothesize that our calibrated KM value, which led to an excellent match with the observations, effectively accounted for this extra diffusive constraint on nutrient uptake. A second approach would be to explicitly consider fine-scale soil fertility heterogeneity, explicitly represent nutrient movement, and apply the ECA framework at high resolution throughout the rhizosphere and bulk soil. However, to test, develop, and apply such a model requires fine-scale measurements of soil nutrient concentrations, microbial activity, and rhizosphere properties and dynamics; model representation of horizontal and vertical root architecture and microbial activity; effective nutrient diffusivities; and, potentially, computational resources beyond what is practical in current ESMs.
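To illustrate the flavor of the ECA competition calculation discussed here, the sketch below implements a generic equilibrium-chemistry-approximation style uptake flux for one nutrient shared by several consumers. It is our simplified reading of the approach rather than the model code, and every parameter value, including the inflated “effective” KM for roots, is a placeholder.

```python
# Simplified, generic ECA-style uptake for one substrate S shared by several
# consumers. This is a sketch of the idea, not the ESM implementation;
# all parameter values are illustrative placeholders.

def eca_uptake(S, consumers):
    """Return uptake flux per consumer (arbitrary units).

    Each consumer dict carries:
      E    -- transporter/biomass proxy
      vmax -- maximum uptake rate per unit E
      km   -- effective half-saturation; setting it above the hydroponically
              measured affinity is one way to fold in diffusive supply limits,
              as discussed in the text.
    """
    # ECA denominator couples all consumers competing for the same substrate
    competition = sum(c["E"] / c["km"] for c in consumers)
    return {
        c["name"]: c["vmax"] * c["E"] * S / (c["km"] * (1.0 + S / c["km"] + competition))
        for c in consumers
    }

competitors = [
    {"name": "plant roots", "E": 1.0, "vmax": 0.5, "km": 5.0},  # inflated "effective" KM
    {"name": "microbes",    "E": 2.0, "vmax": 0.8, "km": 1.0},
]
print(eca_uptake(S=2.0, consumers=competitors))
```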

Yet, there is potential value in this approach if we can produce a reduced-order version of the fine-scale model that is reasonable and applicable to ESMs. A third approach, of intermediate complexity, would be to simplify the spatial heterogeneity of root architecture, soil nutrient distributions, and nutrient transport. Roots could be conceptually clustered in the center of the soil column, where nutrients would become depleted and competition between microbes, roots, and abiotic processes would occur. The “radius of influence” concept that defines a root influencing zone could be used to simplify heterogeneity, with CT5 competition applied to this root influencing zone. More model development, large-scale application, and model-data comparisons are needed to justify such an approach. As we argued above, the choice of nutrient competition theory used by ESMs faces a dilemma between necessary model simplification and accurate process representation. Our goal is to rigorously represent nutrient competition in ESMs with a simple framework that is consistent with theory and appropriate observational constraints while not unduly sacrificing accuracy. We conclude that our ECA nutrient competition approach meets this goal, because it is simple enough to apply to climate-scale prediction and is based on reasonable simplifications to the complex nutrient competition mechanisms occurring in terrestrial ecosystems. Over the past two decades, the ecological significance of anadromous Pacific salmon has been well documented in aquatic ecosystems throughout the Pacific coastal ecoregion.

The annual return of salmon to fresh waters and the associated decomposition of post-reproductive carcasses result in the transfer of marine-derived nutrients (MDN) and biomass to largely oligotrophic receiving ecosystems. Such inputs have been shown to increase primary production, invertebrate diversity and biomass, and fish growth rates. Since rates of primary production are typically low in many coastal salmon-bearing streams, even small nutrient inputs from anadromous salmon may stimulate autotrophic and heterotrophic production and produce cascading effects through the entire aquatic food web. In addition to subsidizing riverine biota, salmon-borne MDN also benefit vegetation within the riparian corridor. Marine nutrients are delivered to the terrestrial environment via deposition of carcasses during flood events, absorption and uptake of dissolved nutrients by riparian vegetation, and removal of carcasses from the stream by piscivorous predators and scavengers. Empirical studies have shown that as much as 30% of the foliar nitrogen in terrestrial plants growing adjacent to salmon streams is of marine origin and that growth rates of riparian trees may be significantly enhanced as a result of salmon-derived subsidies. Nitrogen availability, in particular, is often a growth-limiting factor in many temperate forests, and annual inputs of marine-derived nitrogen may be critical to the maintenance of riparian health and productivity in Pacific coastal watersheds. Our understanding of the ecological importance of MDN subsidies has been greatly advanced by the application of natural abundance stable isotope analyses. A biogenic consequence of feeding in the marine environment is that adult anadromous salmon are uniquely enriched with the heavier isotopic forms of many elements relative to terrestrial or freshwater sources of these same elements.
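Estimates like the 30% foliar figure above typically rest on a two-source isotope mixing model. The sketch below shows that standard calculation; the sample and terrestrial end-member values are invented for illustration, with the marine end-member chosen to roughly match the salmon signature reported further on.

```python
# Standard two-source mixing model for marine-derived nitrogen (illustrative
# values only; d15N in per mil).

def fraction_marine_n(d15n_sample: float,
                      d15n_terrestrial: float,
                      d15n_marine: float) -> float:
    """Fraction of nitrogen attributable to the marine (salmon) source."""
    return (d15n_sample - d15n_terrestrial) / (d15n_marine - d15n_terrestrial)

# Example: foliage at +2.5 per mil, terrestrial end-member -2.0, marine +15.5
f = fraction_marine_n(2.5, -2.0, 15.5)
print(f"Marine-derived N: {f:.0%}")   # ~26%
```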

When fish senesce and die after spawning, these isotopically heavy nutrients are liberated and ultimately incorporated into aquatic and terrestrial food webs. Our research has determined that the stable nitrogen isotope “fingerprint” of adult anadromous salmon returning to coastal California basins is 15.46 ± 0.27‰, a value markedly higher than most other natural N sources available to biota in coastal watersheds. This salient isotopic signal allows the application of stable isotope analyses to trace how salmon-borne nutrients are incorporated and cycled by organisms in the receiving watersheds. Researchers interested in the utilization of MDN by riparian trees have generally inferred sequestration and incorporation from foliar δ15N values. Nitrogen is a very minor constituent of wood cellulose, and natural abundance levels of δ15N in tree rings have rarely been determined. Poulson et al. were among the first to successfully analyze δ15N from trees, but the analysis required combustion of prohibitively large quantities of material per sample. Since that time, advancements in stable isotope analytical techniques have made it possible to detect δ15N in small samples of material. This permits non-destructive sampling of live trees via increment cores and provides a novel opportunity to assess the transfer of salmon-derived nitrogen into the riparian zones of salmon-bearing watersheds. Reimchen et al. recently reported that wood samples extracted from western hemlock trees in British Columbia yielded clear evidence of SD-nitrogen incorporation with reproducible δ15N values. Intuitively, if spawning salmon represent a significant source of nitrogen to riparian trees in salmon-bearing watersheds, information on salmon abundance may be recorded in the growth and chemical composition of annual tree rings. By quantifying the nitrogen stable isotope composition of tree xylem it would be possible to explore changes in SD-nitrogen over decadal or sub-decadal time increments and determine whether the nutrient capital of riparian biota has been affected by diminished salmon returns. Moreover, if δ15N can be quantified from annual growth rings, it may be possible to infer changes in salmon abundance over time and reconstruct historical salmon returns for periods and locations where no such information presently exists. Nearly all salmon recovery programs are built upon very uncertain estimates of population sizes prior to European settlement. The development of robust paleoecological methods to determine historical salmon abundance and variability would greatly assist resource managers in identifying and establishing appropriate restoration targets. For our initial pilot study we collected increment cores from 10 extant riparian Douglas-fir trees growing adjacent to WB Mill Creek. All trees were located within 10.0 m of the active stream channel. Core samples were collected on 17 January 2004 from a 250 m section of riparian zone located immediately downstream of the 2.7 km index stream reach used by Waldvogel to derive minimum annual escapement estimates. Small-diameter increment core samples were extracted from each tree and prepared for dendrological analysis using standard methods. We concurrently collected a second, large-diameter increment core from four of the trees for determination of annual nitrogen content and natural abundance stable nitrogen isotope ratios.
Diameter at breast height (DBH), distance from the active stream channel, and general site characteristics were also recorded for each tree sampled. Increment core samples from Waddell Creek were collected on 16-18 October 2005.

Collections were made from two distinct areas within the watershed: a ~750 m length of riparian zone adjacent to the creek where salmon spawning is known to occur, and a ~500 m section of riparian zone located above a natural barrier to salmon migration. We collected paired increment cores from a total of 10 Douglas-fir and 16 coast redwood trees in the Waddell Creek watershed. Distances from the stream channel, DBH, and site characteristics were also determined for each tree sampled. Two coast redwood cores from the upstream control site were later determined to be damaged or unreliable and were excluded from our analyses. Small-diameter increment core samples were air dried, mounted, and sanded for analysis of annual growth rings. Prepared cores were converted to digital images and ring widths were measured to the nearest 0.001 mm using an OPTIMAS image analysis system. Each increment core was measured in triplicate and mean ring-width values were used to generate a time series for each tree. Each time series was then detrended using the tree-ring program ARSTAN to remove trends in ring width due to non-environmental factors such as increasing age and tree size. Detrending was accomplished using a cubic smoothing spline function that preserved ~50% of the variance contained in the measurement series at a wavelength of 32 years. Individual growth index values were derived by dividing the actual ring-width value by the value predicted from the ARSTAN regression equations (a simplified sketch of this detrending step is given after the next paragraph). Chronologies using the growth index values were subsequently combined into a robust growth index series for each sample site. Cross-dating of coast redwood trees from Waddell Creek was not successful due to the presence of anomalous rings, a high degree of ring complacency in some cores, and small sample sizes. Previous dendrochemical research has found that nitrogen may be highly mobile in the xylem of some tree species. Although the degree to which coast redwood and Douglas-fir trees exhibit radial translocation of nitrogen is largely unknown, such mobility could potentially obscure interpretation of nitrogen availability at the time of ring formation. To minimize potentially confounding effects associated with the translocation of nitrogenous products across ring boundaries, increment core samples from Waddell Creek were pretreated to remove soluble forms of nitrogen following the “short-duration” protocol outlined in Sheppard and Thompson. Briefly, tree-ring samples were sequentially Soxhlet extracted for 4 h in a mixture of ethyl alcohol and toluene, 4 h in 100% ethyl alcohol, and 1 h in distilled water. Increment cores collected from Mill Creek as part of our pilot study were not treated or extracted prior to stable isotope analysis. This model suggested a less significant decline in salmon abundance over the period 1946-1979. Results indicated years of high salmon escapement in 1947, 1949 and 1953, with estimated returns of 126, 116 and 116 fish·km-1, respectively. Conversely, low escapement was predicted in 1977, 1979 and 1957. For both reconstructions, present-day escapement to WB Mill Creek is as strong as the historical maxima predicted by the models. Collectively, the initial results from WB Mill Creek suggest that salmon abundance may be reflected in riparian tree-ring variables such as indexed growth, [N], or δ15N.
However, interpretation of results in this watershed is hindered by two important issues: the lack of comparative experimental controls, and the potentially confounding effects associated with the translocation of nitrogenous compounds across annual tree-ring boundaries. In order to infer the uptake of MD-nitrogen by riparian trees it is necessary to have ecologically analogous reference sites that are uninfluenced by anadromous salmonids. Robust experimental reference sites could potentially include proximate drainages or stream reaches located above impoundments that block access to anadromous salmon. No such sites were available in the vicinity of WB Mill Creek, however, as all salmon-free locations had significantly different site characteristics, especially with respect to stream gradient, floodplain development and soil characteristics. It is especially important to note that increment cores from WB Mill Creek demonstrated enriched δ15N and elevated [N] in the last 3-5 years of growth. Since we did not pre-treat our increment cores prior to analysis, it is unclear whether this enrichment is a true environmental signal resulting from increased salmon abundance and MD-nitrogen availability, or the product of internal translocation of nitrogen by the trees. Whatever the case, this late enrichment significantly influenced the results of our reconstructions, since both high [N] and δ15N coincided with years of very high escapement to WB Mill Creek. Few paleoecological studies have successfully modeled changes in either marine-derived nutrient transfer or salmon abundance prior to European settlement. Finney et al. used stratigraphic variation in lake sediment δ15N and fossilized cladocerans and diatoms to infer changes in sockeye salmon abundance in Alaskan nursery lakes. Their ≈2200-year reconstruction revealed that large fluctuations in sockeye abundance were commonplace and that multi-decadal to century-long climatic regimes largely drove population cycles.
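The ring-width detrending and growth-index step referenced above can be sketched as follows. This is a simplified stand-in on synthetic data: ARSTAN’s cubic smoothing spline with a 50% frequency cutoff at 32 years is approximated here by scipy’s generic smoothing spline, so the snippet shows only the shape of the calculation (growth index = measured ring width divided by fitted ring width).

```python
# Simplified detrending sketch on synthetic ring widths (not the study data).

import numpy as np
from scipy.interpolate import UnivariateSpline

years = np.arange(1940, 2004)
rng = np.random.default_rng(0)
# synthetic ring widths: slow age-related decline plus interannual variability
ring_width = 2.0 * np.exp(-0.01 * (years - years[0])) + rng.normal(0.0, 0.15, years.size)

# smooth "biological trend" fit; the smoothing factor is a rough proxy for
# ARSTAN's spline stiffness, not an equivalent of the 32-year cutoff
trend = UnivariateSpline(years, ring_width, k=3, s=years.size * 0.02)
growth_index = ring_width / trend(years)   # ~1.0 in an average year

for year, gi in zip(years[-5:], growth_index[-5:]):
    print(year, round(float(gi), 3))
```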

The revision will be implemented in steps and could facilitate the field based production of PMPs

As a consequence, a larger amount of product can be delivered earlier, which can help to prevent the disease from spreading once a vaccine becomes available. In addition to conventional chromatography, several generic purification strategies have been developed to rapidly isolate products from crude plant extracts in a cost-effective manner. Due to their generic nature, these strategies typically require little optimization and can immediately be applied to products meeting the necessary requirements, which reduces the time needed to respond to a new disease. For example, purification by ultrafiltration/diafiltration is attractive for both small and large molecules because they can be separated from plant host cell proteins (HCPs), which are typically 100–450 kDa in size, under gentle conditions such as neutral pH to ensure efficient recovery. This technique can also be used for simultaneous volume reduction and optional buffer exchange, reducing the overall process time and ensuring compatibility with subsequent chromatography steps. HCP removal triggered by increasing the temperature and/or reducing the pH is mostly limited to stable proteins such as antibodies, and the former method in particular may require extended product characterization to ensure that the function of products, such as vaccine candidates, is not compromised.
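For readers unfamiliar with the buffer-exchange step mentioned above, the sketch below gives the standard constant-volume diafiltration relationship; it is a generic textbook calculation, not a parameter set taken from the cited processes.

```python
# Constant-volume diafiltration: for a solute that freely passes the membrane
# (sieving coefficient ~1), the residual fraction after N diavolumes is
# exp(-N * S). Illustrative arithmetic only.

import math

def residual_fraction(diavolumes: float, sieving_coefficient: float = 1.0) -> float:
    """Fraction of a freely permeable solute remaining after constant-volume DF."""
    return math.exp(-diavolumes * sieving_coefficient)

for n in (3, 5, 7):
    print(f"{n} diavolumes -> {residual_fraction(n):.1%} of the original buffer species remain")
```

Roughly five diavolumes already leaves under 1% of the original buffer species, which is why a UF/DF step can combine concentration and buffer exchange ahead of chromatography.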

The fusion of purification tags to a protein product can be tempting to accelerate process development when time is pressing during an ongoing pandemic. These tags can stabilize target proteins in planta while also facilitating purification by affinity chromatography or non-chromatographic methods such as aqueous two-phase systems. On the downside, such tags may trigger unwanted aggregation or immune responses that can reduce product activity or even safety. Some tags may be approved in certain circumstances, but their immunogenicity may depend on the context of the fusion protein. The substantial toolkit available for rapid plant biomass processing, and the adaptability of even large-scale plant-based production processes to new protein products, ensure that plants can be used to respond to pandemic diseases with at least an equivalent development time and, in most cases, a much shorter one than conventional cell-based platforms. Although genetic vaccines for SARS-CoV-2 have been produced quickly, they have never been manufactured at the scale needed to address a pandemic, and their stability during transport and deployment to developing-world regions remains to be shown. Regulatory oversight is a major and time-consuming component of any drug development program, and regulatory agencies have needed to revise internal and external procedures in order to adapt normal schedules for the rapid decision-making necessary during emergency situations. Just as important as rapid methods to express, prototype, optimize, produce, and scale new products is the streamlining of regulatory procedures to maximize the technical advantages offered by the speed and flexibility of plants and other high-performance manufacturing systems.

Guidelines issued by regulatory agencies for the development of new products, or the repurposing of existing products for new indications, include criteria for product manufacturing and characterization, containment and mitigation of environmental risks, stage-wise safety determination, clinical demonstration of safety and efficacy, and various mechanisms for product licensure or approval to deploy the products and achieve the desired public health benefit. Regardless of which manufacturing platform is employed, the complexity of product development requires that continuous scrutiny be applied from preclinical research to drug approval and post-market surveillance, thus ensuring that the public does not incur an undue safety risk and that products ultimately reaching the market consistently conform to their label claims. These goals are common to regulatory agencies worldwide, and higher convergence exists in regions that have adopted the harmonization of standards as defined by the International Council for Harmonisation (ICH) in key product areas including quality, safety, and efficacy. Both the United States and the EU have stringent pharmaceutical product quality and clinical development requirements, as well as regulatory mechanisms to ensure product quality and public safety. Differences and similarities between regional systems have been discussed elsewhere and are only summarized here. Stated simply, the United States, EU, and other jurisdictions generally follow a two-stage regulatory process, comprising clinical research authorization and monitoring, followed by results review and marketing approval. The first stage involves the initiation of clinical research via submission of an Investigational New Drug (IND) application in the United States or the analogous Clinical Trial Application (CTA) in Europe.

At the preclinical-to-clinical translational interface of product development, a sponsor must formally inform a regulatory agency of its intention to develop a new product and of the methods and endpoints it will use to assess clinical safety and preliminary pharmacologic activity. Because the EU is a collective of independent Member States, the CTA can be submitted to a country-specific regulatory agency that will oversee development of the new product. The regulatory systems of the EU and the United States both allow pre-submission consultation on proposed development programs via discussions with regulatory agencies or expert national bodies. These are known as pre-IND (PIND) meetings in the United States and Investigational Medicinal Product Dossier (IMPD) discussions in the EU. These meetings serve to guide the structure of the clinical programs and can substantially reduce the risk of regulatory delays as the programs begin. PIND meetings are common albeit not required, whereas IMPD discussions are often necessary prior to CTA submission. At intermediate stages of clinical development, pauses for regulatory review must be added between clinical study phases. Such End of Phase review times may range from one to several months depending on the technology and disease indication. In advanced stages of product development, after pivotal, placebo-controlled, randomized Phase III studies are complete, drug approval requests typically require extensive time for review and decision-making on the part of the regulatory agencies. In the United States, the Food and Drug Administration (FDA) controls the centralized marketing approval/authorization/licensing of a new product, a process that requires in-depth review and acceptance of a New Drug Application (NDA) for chemical entities or a Biologics License Application for biologics, the latter including PMP proteins. The EU follows both decentralized processes and centralized procedures covering all Member States. The Committee for Medicinal Products for Human Use, part of the European Medicines Agency (EMA), has responsibilities similar to those of the FDA and plays a key role in the provision of scientific advice, the evaluation of medicines at the national level for conformance with harmonized positions across the EU, and the centralized approval of new products for market entry in all Member States. The statute-conformance review procedures practiced by the regulatory agencies require considerable time because the laws were established to focus on patient safety, product quality, verification of efficacy, and truth in labeling. The median times required by the FDA, EMA, and Health Canada for full review of NDA applications were reported to be 322, 366, and 352 days, respectively. Collectively, typical interactions with regulatory agencies add more than 1 year to a drug development program. Although these regulatory timelines are the status quo during normal times, they are clearly incongruous with the need for rapid review, approval, and deployment of new products in emergency use scenarios, such as emerging pandemics.

Plant-made intermediates, including reagents for diagnostics, antigens for vaccines, and bioactive proteins for prophylactic and therapeutic medical interventions, as well as the final products containing them, are subject to the same regulatory oversight and marketing approval pathways as other pharmaceutical products. However, the manufacturing environment, as well as the peculiarities of the plant-made active pharmaceutical ingredient (API), can affect the nature and extent of requirements for compliance with various statutes, which in turn will influence the speed of development and approval. In general, the more contained the manufacturing process and the higher the quality and safety of the API, the easier it has been to move products along the development pipeline. Guidance documents on quality requirements for plant-made biomedical products exist and have provided a framework for development and marketing approval. Upstream processes that use whole plants grown indoors under controlled conditions, including plant cell culture methods, followed by controlled and contained downstream purification, have fared best under regulatory scrutiny. This is especially true for processes that use non-food plants such as Nicotiana species as expression hosts. The backlash over the ProdiGene incident of 2002 in the United States refocused subsequent development efforts on contained environments. In the United States, field-based production is possible and even practiced, but such processes require additional permits and scrutiny by the United States Department of Agriculture (USDA). In May 2020, to encourage innovation and reduce the regulatory burden on the industry, the USDA’s Animal and Plant Health Inspection Service revised the legislation covering the interstate movement or release of genetically modified organisms into the environment in an effort to regulate such practices with higher precision [SECURE Rule revision of 7 Code of Federal Regulations 340]. In contrast, the production of PMPs using GMOs or transient expression in the field comes under heavy regulatory scrutiny in the EU, and several statutes have been developed to minimize environmental, food, and public risk. Many of these regulations focus on the use of food species as hosts. The major perceived risks of open-field cultivation are contamination of the food/feed chain and gene transfer between GM and non-GM plants. This is true today even though containment and mitigation technologies have evolved substantially since those statutes were first conceived, with the advent and implementation of transient and selective expression methods; new plant breeding technologies; the use of non-food species; and physical, spatial, and temporal confinement. The United States and the EU differ in their philosophy and practice for the regulation of PMP products. In the United States, regulatory scrutiny is at the product level, with less focus on how the product is manufactured. In the EU, much more focus is placed on assessing how well a manufacturing process conforms to existing statutes. Therefore, in the United States, PMP products and reagents are regulated under pre-existing sections of the United States CFR, principally under various parts of Title 21, which also apply to conventionally sourced products. These include current good manufacturing practice (cGMP) covered by 21 CFR Parts 210 and 211, good laboratory practice toxicology, and a collection of good clinical practice requirements specified by the ICH and accepted by the FDA.
In the United States, upstream plant cultivation in containment can be practiced using qualified methods to ensure consistency of vector, raw materials, and cultivation procedures and/or, depending on the product, under good agricultural and collection practices. For PMP products, cGMP requirements do not come into play until the biomass is disrupted in a fluid vehicle to create a process stream. All process operations from that point forward, from crude hydrolysate to bulk drug substance and final drug product, are guided by 21 CFR 210/211. In Europe, biopharmaceuticals, regardless of manufacturing platform, are regulated by the EMA and, in the United Kingdom, by the Medicines and Healthcare products Regulatory Agency. Pharmaceuticals from GM plants must adhere to the same regulations as all other biotechnology-derived drugs. These guidelines are largely specified by the European Commission in Directive 2001/83/EC and Regulation No 726/2004. However, upstream production in plants must also comply with additional statutes. Cultivation of GM plants in the field constitutes an environmental release and has been regulated by the EC under Directive 2001/18/EC, and under 1829/2003/EC if the crop can be used as food/feed. The production of PMPs using whole plants in greenhouses or cell cultures in bioreactors is regulated by the “Contained Use” Directive 2009/41/EC, which is far less stringent than an environmental release and does not necessitate a fully fledged environmental risk assessment. Essentially, the manufacturing site is licensed for contained use and production proceeds in a similar manner to a conventional facility using microbial or mammalian cells as the production platform. With respect to GMP compliance, the major differentiator between the regulation of PMP products and the same or similar products manufactured using other platforms is the upstream production process. This is because many of the downstream processing (DSP) techniques are product-dependent and, therefore, similar regardless of the platform, including most of the DSP equipment, with which regulatory agencies are already familiar.

A more robust strain could have higher resistances to salts present in the effluent

The dry biomass measurement showed highly unexpected results, with the TP-Effluent cultures having more biomass accumulation than even TAP. One explanation for this result is that the residual chemicals in the effluent were not properly washed away or evaporated during the dry biomass collection process. KHCO3 is a salt that could have been retained in the dry biomass, which could explain the relatively large dry biomass measurement for the TP-Effluent cultures. Microscopic analysis showed that the 75% TP-Effluent and 50% TP-Effluent cultures had the highest cell densities. The TP-Effluent and 25% TP-Effluent cultures more closely resembled the negative control TP cultures, with poor growth relative to the other cultures. Over longer growth periods there could be a larger difference between the different acetate concentrations, but for the purposes of this experiment it was found that using smaller doses of effluent was practical and could increase the use efficiency of the costly effluent. This experiment, in conjunction with the Drop-Out experiment, also helped our collaborators at the University of Delaware understand how to optimize the chemical composition of the effluent. For the growth experiment in which the effluent produced from the electrocatalytic process was procured and incorporated into the media, heterotrophic growth of algae was demonstrated successfully. Figure 4A shows that all three cultures grown with effluent media exhibit clear growth after 4 days. This growth is comparable to TAP, as shown in Figures 4B-D. It was also found that performing cell counts through hemocytometry, although more labor intensive, significantly decreased the errors between triplicates.
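The hemocytometer counts referred to above reduce to a simple calculation; the sketch below uses the conventional Neubauer chamber geometry, and the counts and dilution factor are made-up values for illustration.

```python
# Standard hemocytometer arithmetic (illustrative values only).
# Conventional Neubauer geometry: each large square is 1 mm^2 x 0.1 mm deep,
# i.e. 1e-4 mL of suspension per square.

def cells_per_ml(counts_per_large_square: list[float], dilution_factor: float = 1.0) -> float:
    """Cell density from the mean count per large square."""
    mean_count = sum(counts_per_large_square) / len(counts_per_large_square)
    return mean_count * dilution_factor * 1e4   # 1 / 1e-4 mL per square

print(f"{cells_per_ml([52, 47, 55, 50], dilution_factor=2):.2e} cells/mL")
```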

From this final experiment, the first instance of algal growth completely decoupled from photosynthesis was achieved. For future continuation of this project, the next steps are to optimize the growing process by media treatment or to employ the use of highly controlled bioreactors. The use of other algal species or other strains of Chlamydomonas reinhardtii can be considered as well. By doing so, there is an opportunity to develop a system that exceeds the efficiency of conventional photosynthetic systems and can be applied to agriculture for food and to biotechnology industries such as bio-fuel production. This project was presented as an online presentation at the 2021 Undergraduate Research and Creative Activity Symposium at the University of California, Riverside. The Paharpur Business Centre and Software Technology Incubator Park (PBC) is a 7-story, 50,400 ft2 office building located near Nehru Place in New Delhi, India. The occupancy of the building at full normal operations is about 500 people. The building management philosophy embodies innovation in energy efficiency while providing full service and a comfortable, safe, healthy environment to the occupants. Provision of excellent indoor air quality (IAQ) is an expressed goal of the facility, and the management has gone to great lengths to achieve it. This is particularly challenging in New Delhi, where ambient urban pollution levels rank among the worst on the planet. The approach to providing good IAQ in the building includes a range of technical elements: air washing and filtration of ventilation intake air from a rooftop air handler, the use of an enclosed rooftop greenhouse with a high density of potted plants as a bio-filtration system, dedicated secondary HVAC/air handling units on each floor with re-circulating high-efficiency filtration and UVC treatment of the heat exchanger coils, additional potted plants for bio-filtration on each floor, and a final exhaust via the restrooms located at each floor.

The conditioned building exhaust air is passed through an energy recovery wheel and chemisorbent cartridge, transferring some heat to the incoming air to increase the HVAC energy efficiency. The management uses “green” cleaning products exclusively in the building. Flooring is a combination of stone, tile and “zero VOC” carpeting. Wood trim and finish appears to be primarily of solid sawn materials, with very little evidence of composite wood products. Furniture is likewise in large proportion constructed from solid wood materials. The overall impression is that of a very clean and well-kept facility. Surfaces are polished to a high sheen, probably with wax products. There was an odor of urinal cake in the restrooms. Smoking is not allowed in the building. The plants used in the rooftop greenhouse and on the floors were made up of a number of species selected for the following functions: daytime metabolic carbon dioxide absorption, nighttime metabolic CO2 absorption, and volatile organic compound and inorganic gas absorption/removal for air cleaning. The building contains a reported 910 indoor plants. Daytime metabolic species reported by the PBC include Areca Palm, Oxycardium, Rubber Plant, and Ficus alii, totaling 188 plants. The single nighttime metabolic species is the Sansevieria, with a total of 28 plants. The “air cleaning” plant species reported by the PBC include the Money Plant, Aglaonema, Dracaena Warneckii, Bamboo Palm, and Raphis Palm, with a total of 694 plants. The plants in the greenhouse, numbering 161 of those in the building, are grown hydroponically, with the room air blown by fan across the plant root zones. The plants on the building floors are grown in pots and are located on floors 1-6. We conducted a one-day monitoring session in the PBC on January 1, 2010. The date of the study was based on the availability of the measurement equipment that the researchers had shipped from Lawrence Berkeley National Lab in the U.S.A.

The study date was not optimal because a large proportion of the regular building occupants were not present, it being New Year’s Day. An estimated 40 people were present in the building throughout the day on January 1. That said, the building systems were in normal operation, including the air handlers and other HVAC components. The study focused primarily on measurements in the greenhouse and the 3rd- and 5th-floor environments, as well as outdoors on the rooftop. Measurements included a set of volatile organic compounds and aldehydes, with a more limited set of observations of indoor and outdoor particulate and carbon dioxide concentrations. Continuous measurements of temperature and relative humidity were made at selected indoor and outdoor locations. Air sampling stations were set up in the greenhouse, Room 510, Room 311, the 5th- and 3rd-floor air handler intakes, the building rooftop HVAC exhaust, and an ambient location on the roof near the HVAC intake. VOC and aldehyde samples were collected at least once at all of these locations. Both supply and return registers were sampled in Rooms 510 and 311, as were a greenhouse inlet register from the air washer and an outlet register ducted to the building’s floor level. Air samples for VOCs were collected and analyzed following U.S. Environmental Protection Agency Method TO-17. Integrated air samples with a total volume of approximately 2 L were collected at the sites at a flow rate of <70 cc/min onto preconditioned multibed sorbent tubes containing Tenax-TA backed with a section of Carbosieve. The VOCs were thermally desorbed into a cooled injection system and resolved by gas chromatography. The target chemicals, listed in Table 1, were qualitatively identified on the basis of a mass spectral library search, followed by comparison to reference standards. Target chemicals were quantified using multi-point calibrations developed with pure standards and referenced to an internal standard. Sampling was conducted using Masterflex L/S HV-07553-80 peristaltic pumps assembled with quad Masterflex L/S Standard HV-07017-20 pump heads. Concentrations of formaldehyde, acetaldehyde, and acetone were determined following U.S. Environmental Protection Agency Method TO-11a. Integrated samples were collected by drawing air through silica gel cartridges coated with 2,4-dinitrophenylhydrazine at a flow rate of 1 L/min. Samples utilized an ozone scrubber cartridge installed upstream of the sample cartridge. Sample cartridges were eluted with 2 mL of high-purity acetonitrile, analyzed by high-performance liquid chromatography with UV detection, and quantified with a multipoint calibration for each derivatized target aldehyde. Sampling was conducted using Masterflex L/S HV-07553-71 peristaltic pumps assembled with dual Masterflex L/S Standard HV-07016-20 pump heads. Continuous measurements of PM2.5 were made using TSI DustTrak model 8520 monitors in Room 510 and at the rooftop sampling site from about 13:30 to 16:30 on the sampling day. The indoor particle monitor was located on a desk in Room 510 and the outdoor monitor was located on a surface elevated above the roof deck. Carbon dioxide spot measurements of about 10-minute duration were made throughout the building during the afternoon using a portable data-logging real-time infrared monitor. Temperature and RH were monitored in the greenhouse, Room 510, and Room 311 using Onset HOBO U12-011 data loggers at one-minute recording rates.
Outdoor T and RH were not monitored.
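
As a point of reference for these integrated sampling methods, the air concentration of a trapped analyte follows from the mass recovered from the sorbent or DNPH cartridge and the integrated sample volume. The sketch below is illustrative only; the flow rate, sampling duration, and analyte mass are hypothetical placeholders rather than values from this study.

```python
# Minimal sketch (not the study's data-reduction code): converting the analyte mass
# recovered from a TO-17 sorbent tube into an air concentration. All numbers below
# are hypothetical placeholders for illustration only.

def sample_volume_liters(flow_cc_per_min: float, duration_min: float) -> float:
    """Integrated sample volume in liters for a constant pump flow."""
    return flow_cc_per_min * duration_min / 1000.0  # 1000 cc per liter

def concentration_ug_m3(analyte_ng: float, volume_l: float) -> float:
    """Concentration in ug/m3 from nanograms collected and liters sampled."""
    return analyte_ng / volume_l  # ng/L is numerically equal to ug/m3

vol = sample_volume_liters(flow_cc_per_min=65.0, duration_min=30.0)  # ~2 L target volume
print(f"sampled {vol:.2f} L")
print(f"example analyte ~ {concentration_ug_m3(analyte_ng=95.0, volume_l=vol):.1f} ug/m3")
```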

The measured VOC concentrations, as well as their limits of quantitation (LOQ) for the measurement methods, are shown in Table 2. Figures 1-6 show bar graphs of these VOCs. Unless otherwise shown, all measured compounds were above the minimum detection level, but not all measurements were above the LOQ; measurements with concentrations below the LOQ should be considered approximations. These air contaminants are organized by possible source category: carbonyl compounds that can be odorous or irritating; compounds that are often emitted by building cleaning products; those associated with bathroom products; those often emitted from office products, supplies, materials, occupants, and ambient air; those from plant and wood materials as well as some cleaning products; and finally plasticizers commonly emitted from vinyl and other flexible or resilient plastic products. The groupings in this table are for convenience; many of the listed compounds have multiple sources, so the attribution provided may be erroneous. The carbonyl compounds include formaldehyde, which can be emitted from composite wood materials, adhesives, and indoor chemical reactions; acetaldehyde from building materials and indoor chemistry; and acetone from cleaners and other solvents. Benzaldehyde sources can include plastics, dyes, fruits, and perfumes. Hexanal, nonanal, and octanal can be emitted from engineered wood products. For many of these compounds, outdoor air can also be a major source. Formaldehyde and acetone were the most abundant carbonyl compounds observed in the PBC. For context, the California 8-h and chronic non-cancer reference exposure level (REL) for formaldehyde is 9 µg m-3 and the acute REL is 55 µg m-3. The 60-minute average formaldehyde concentrations observed in the PBC exceeded the REL by up to a factor of three. Acetone has low toxicity, and the observed levels were orders of magnitude lower than concentrations of health concern. Hexanal, nonanal, and octanal are odorous compounds at low concentrations; their established odor thresholds are 0.33 ppb, 0.53 ppb, and 0.17 ppb, respectively. Average concentrations observed within the PBC building were 3.8±0.8 ppb, 3.5±0.6 ppb, and 1.4±0.2 ppb for these compounds, respectively, roughly ten times higher than the odor thresholds. Concentrations of these compounds in the supply air from the greenhouse were substantially lower, although still in excess of the odor thresholds. The concentrations of hexanal and nonanal roughly doubled relative to ambient as the outside air passed through the greenhouse, while octanal concentrations were roughly similar in the ambient air and in the air exiting the greenhouse. Concentrations of benzene, d-limonene, n-hexane, naphthalene, and toluene all increased in the greenhouse air in either the AM or PM measurements. The measured levels of these compounds were far below any health-relevant standards, although naphthalene concentrations reached close to 50% of the California REL of 9 µg m-3. The concentrations of these compounds were generally somewhat higher indoors relative to the greenhouse concentrations. The concentration of toluene in the building exhaust was 120 µg m-3, more than double the highest level measured indoors, suggesting a possible toluene source in the restrooms. The cleaning compound 2-butoxyethanol was slightly higher indoors, but at very low concentrations; the same was true for trichloroethylene, which was observed at extremely low levels indoors.
The compounds listed in this category have many sources, including outdoor air. For the most part there was little difference across the building spaces for these compounds, and little difference from the ambient air measurement. The single exception is methylene chloride, which appears to increase by about a factor of ten indoors. It is possible that this compound is in use as a cleaning solvent, or it may be present in computer equipment or other supplies. Methylene chloride is also used as a spot remover in dry cleaning processes and may be carried into the building on occupant clothing. The levels of this compound were low relative to health standards.
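
The comparisons above between the measured aldehyde levels (in ppb), their odor thresholds, and the formaldehyde RELs (in µg m-3) rest on a standard ideal-gas unit conversion. The sketch below reproduces that arithmetic; the molar volume assumes roughly 25 °C and 1 atm, and only the in-building mean concentrations quoted above are used, so it is a worked illustration rather than part of the study's analysis.

```python
# Minimal sketch: converting ppb to ug/m3 at ~25 degC, 1 atm and comparing to the
# odor thresholds quoted in the text. Molar masses are standard values.

MOLAR_VOLUME_L = 24.45  # liters per mole of ideal gas at 25 degC, 1 atm

def ppb_to_ug_m3(ppb: float, molar_mass_g: float) -> float:
    """ug/m3 = ppb * molar mass / molar volume (25 degC, 1 atm)."""
    return ppb * molar_mass_g / MOLAR_VOLUME_L

aldehydes = {
    # name: (mean ppb in building, odor threshold ppb, molar mass g/mol)
    "hexanal": (3.8, 0.33, 100.16),
    "nonanal": (3.5, 0.53, 142.24),
    "octanal": (1.4, 0.17, 128.21),
}

for name, (ppb, odor_ppb, molar_mass) in aldehydes.items():
    print(f"{name}: {ppb_to_ug_m3(ppb, molar_mass):.1f} ug/m3, "
          f"{ppb / odor_ppb:.0f}x the odor threshold")
```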

Credible and prediction intervals in the shoot at harvest were similar for both models

The viral decay rate in soil determined by Roberts et al. was adopted because the experimental temperature and soil type are more relevant to lettuce growing conditions than those of the other decay study. Decay rates in the root and shoot were taken from the hydroponic system predictions. The transport model was fitted to log10 viral concentration data from DiCaprio et al., extracted from the graphs therein using WebPlotDigitizer. In those experiments, NoV at a known concentration was spiked into the feed water of hydroponically grown lettuce and monitored in the feed water, root, and shoot over time. While fitting the model, an initial feed volume of 800 mL was adopted, and parameter sets producing final volumes of <200 mL were rejected. To fit the model while accounting for uncertainty in the data, a Bayesian approach was used to maximize the likelihood of the data given the parameters. A posterior distribution of the parameters was obtained with the differential evolution Markov chain (DE-MC) algorithm, which can be parallelized and can handle multi-modality of the posterior distribution without fine-tuning of the jumping distribution.
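
For readers unfamiliar with DE-MC, the sketch below illustrates the proposal and acceptance structure of the algorithm (ter Braak, 2006). It is not the study's fitting code: the forward model, data, priors, and likelihood here are hypothetical placeholders standing in for the full transport ODE system and the digitized DiCaprio et al. data.

```python
# Schematic DE-MC sketch: each chain proposes a move built from the difference of two
# other randomly chosen chains, then accepts or rejects with a Metropolis step.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations standing in for the digitized log10 concentration data
t_obs = np.array([0.0, 1.0, 3.0, 7.0, 14.0])     # days
y_obs = np.array([5.0, 4.6, 4.1, 3.4, 2.8])      # log10 NoV per mL

def simulate_log10_conc(theta, t):
    """Placeholder forward model standing in for the full transport ODE system."""
    c0, k = theta
    return c0 - k * t / np.log(10.0)              # simple first-order decay in log10 space

def log_posterior(theta):
    if np.any(theta <= 0):                        # flat prior on positive parameters
        return -np.inf
    resid = y_obs - simulate_log10_conc(theta, t_obs)
    return -0.5 * np.sum(resid ** 2) / 0.2 ** 2   # Gaussian likelihood, sd = 0.2 log10 units

n_chains, n_iter, d = 8, 5000, 2
gamma = 2.38 / np.sqrt(2 * d)                     # standard DE-MC jump scale
chains = rng.uniform([3.0, 0.01], [7.0, 1.0], size=(n_chains, d))
logp = np.array([log_posterior(x) for x in chains])

for _ in range(n_iter):
    for i in range(n_chains):
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
        proposal = chains[i] + gamma * (chains[r1] - chains[r2]) + rng.normal(0.0, 1e-4, d)
        lp = log_posterior(proposal)
        if np.log(rng.uniform()) < lp - logp[i]:  # Metropolis acceptance
            chains[i], logp[i] = proposal, lp

print("posterior mean of (c0, k):", chains.mean(axis=0))
```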

The rationale behind the model fitting procedure and the associated diagnostics are discussed in Supplementary section S1H. The initial viral concentration in the irrigation water was drawn from an empirical distribution reported previously by Lim et al. for NoV in activated-sludge-treated secondary effluent. As justified by Lim et al., the sum of the concentrations of the two genotypes known to cause illness was used to construct the distribution. To estimate the risk from consumption of lettuce, the daily viral dose for the kth day was computed using Eq. 10. The body weight was drawn from an empirical distribution for all ages and genders in the United States, taken from a report of body-weight percentile data. The lettuce consumption rate was drawn from an empirical distribution constructed from data reported by the Continuing Survey of Food Intakes by Individuals (CSFII). The ‘consumer only’ data for all ages and genders were used, and hence the reported risk applies only to those who consume lettuce. It is important to note that the daily viral dose was computed from the model output using the shoot density ρshoot to be consistent with the consumption rate reported in CSFII. Several different NoV dose-response models have been proposed based on limited clinical data. The validity of these models is a matter of much debate, which is beyond the scope of this study.
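
As a side note on this exposure step (the dose-response models themselves are taken up next), the per-day dose calculation can be sketched as follows. The exact expression is Eq. 10 of the study; here we assume a form dose_k = C_shoot(t_k) × (r × BW) / ρshoot, with r the consumption rate (g lettuce per kg body weight per day), BW the body weight, and C_shoot the model-predicted viral concentration per unit shoot volume. All distributions in the sketch are illustrative placeholders, not the CSFII or body-weight data used in the study.

```python
# Hedged sketch of the per-day dose Monte Carlo. Every distribution and constant below
# is a hypothetical stand-in for the empirical inputs used in the study.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

body_weight_kg = rng.normal(70.0, 15.0, n).clip(10.0, 150.0)      # placeholder body weights
consumption_g_per_kg_day = rng.lognormal(np.log(0.7), 0.6, n)     # placeholder 'consumer only' intake
c_shoot_per_ml = 10 ** rng.normal(1.0, 0.5, n)                    # placeholder model output (GC per mL shoot)
rho_shoot_g_per_ml = 0.24                                         # placeholder shoot density

grams_consumed = consumption_g_per_kg_day * body_weight_kg
daily_dose = c_shoot_per_ml * grams_consumed / rho_shoot_g_per_ml # assumed Eq. 10-style form

print(f"median daily dose: {np.median(daily_dose):.1f} genome copies")
```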

These dose-response models differ in their assumptions, resulting in large variability in predicted risk outcomes. To cover the range of potential outcomes of human exposure to NoV, we estimated and compared risk outcomes using three models: 1) approximate Beta-Poisson; 2) Hypergeometric; and 3) Fractional Poisson. In the risk estimation, we assumed that NoV in the lettuce tissue exists as individual viral particles and therefore used the disaggregated NoV models. The model equations are given by Eqs. 11–13, Table 1. Ten thousand samples of the daily infection risks were calculated from the BP and FP models using MATLAB R2016a; Wolfram Mathematica 11.1 was used for the hypergeometric (1F1) model as it was faster. Using a random set of 365 daily risk estimates, the annual infection risk was calculated according to the Gold Standard estimator using Eq. 14, Table 1. While there appears to be some dose dependence of illness resulting from infection (Pill∣inf), this has not been clearly elucidated for the different dose-response models. Hence, we adopted the procedure used in Mara and Sleigh and calculated annual illness risk with Eq. 15.

Under the assumption of first-order viral decay, NoV loads in the water at two time points did not fall within the credible region of the model predictions, indicating that first-order decay alone was unsuitable to capture the observed viral concentration data. The addition of the attachment-detachment (AD) factor to the model addressed this inadequacy and, importantly, captured the curvature observed in the experimental data.
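
For concreteness, a sketch of the risk characterization described above is given below. The functional forms are the ones commonly used for these model families; the parameter values and daily doses are placeholders rather than the fitted values of Eqs. 11–15. Eq. 14 is assumed to take the usual gold-standard form P_annual = 1 − ∏k (1 − pk), and Eq. 15 is assumed to scale the annual infection risk by Pill∣inf.

```python
# Hedged sketch of the three disaggregated dose-response forms and annual aggregation.
# All parameter values are placeholders for illustration, not the study's fitted values.
import numpy as np
from scipy.special import hyp1f1

ALPHA, BETA = 0.04, 0.055        # placeholder beta-Poisson shape parameters
P_FP = 0.72                      # placeholder fraction of infectable hosts (fractional Poisson)
P_ILL_GIVEN_INF = 0.6            # placeholder probability of illness given infection

def p_inf_approx_beta_poisson(dose):
    return 1.0 - (1.0 + dose / BETA) ** (-ALPHA)

def p_inf_hypergeometric(dose):
    # exact beta-Poisson via Kummer's confluent hypergeometric function 1F1
    return 1.0 - hyp1f1(ALPHA, ALPHA + BETA, -dose)

def p_inf_fractional_poisson(dose):
    # disaggregated form (mean aggregate size of one)
    return P_FP * (1.0 - np.exp(-dose))

def annual_infection_risk(daily_risks):
    """Gold-standard estimator: 1 - product over the year of (1 - p_k)."""
    return 1.0 - np.prod(1.0 - np.asarray(daily_risks))

d_example = 10.0
print("P_inf at a dose of 10:",
      round(p_inf_approx_beta_poisson(d_example), 3),
      round(float(p_inf_hypergeometric(d_example)), 3),
      round(p_inf_fractional_poisson(d_example), 3))

daily_doses = np.random.default_rng(2).lognormal(2.0, 1.0, 365)   # placeholder 365 daily doses
p_annual = annual_infection_risk(p_inf_fractional_poisson(daily_doses))
print(f"annual infection risk: {p_annual:.3f}, annual illness risk: {p_annual * P_ILL_GIVEN_INF:.3f}")
```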

This improvement in fit indicates that the AD of viruses to the hydroponic tank wall is an important factor to include when predicting viral concentrations in all three compartments. The adequacy of the model fit was also revealed by the credible intervals of the predicted parameters for the model with AD. Four of the predicted parameters (at, bt, kdec,s, and kp) were restricted to a smaller subset of the search bounds, indicating that they were identifiable. In contrast, the viral transfer efficiency η and the kinetic parameters spanned the entirety of their search space and were poorly identifiable. However, this does not imply that each parameter can independently take any value in its range, because the joint distributions of the parameters indicate how fixing one parameter influences the likelihood of another. Hence, despite the large range of an individual parameter, the coordination between the parameters constrained the model predictions to produce reliable outcomes. Therefore, the performance of the model with AD was considered adequate for estimating the parameters used for risk prediction.

Risk estimates for lettuce grown in the hydroponic tank or in soil are presented in Fig. 4. Across these systems, the FP model predicted the highest risk while the 1F1 model predicted the lowest risk. For a given risk model, higher risk was predicted in the hydroponic system than in soil. This is a consequence of the very low detachment rates in soil compared to the attachment rates. Comparison of results from Sc1 and Sc2 for soil-grown lettuce indicated lower risks and disease burdens under Sc1. Compared with the safety guidelines, the lowest risk predicted in the hydroponic system is higher than the U.S. EPA-defined acceptable annual drinking water risk of 10−4 for each risk model. The annual burdens are also above the 10−6 benchmark recommended by the WHO. In the case of soil-grown lettuce, neither Sc1 nor Sc2 met the U.S. EPA safety benchmark. For soil-grown lettuce under Sc1, two risk models predicted borderline disease burdens according to the WHO benchmark, but under Sc2 the risk still did not meet the safety guideline. Neither increasing the holding time of the lettuce to two days after harvest nor using bigger tanks significantly altered the predicted risk. In comparison, the risk estimates of Sales-Ortells et al. are higher than the range of soil-grown lettuce outcomes presented here for two of the three models. The SCSA sensitivity indices are presented in Fig. 5. For hydroponically grown lettuce, the top three factors influencing daily risk are the amount of lettuce consumed, the time since last irrigation, and the term involving consumption and ρshoot. The risk estimates are also robust to the fitted parameters despite the low identifiability of some model parameters. For soil-grown lettuce, kp appears to be the most influential parameter, followed by the input viral concentration in the irrigation water and the lettuce harvest time. Scorr is near zero, suggesting a lesser influence of correlation among the input parameters.

In this study, we modeled the internalization and transport of NoV from irrigation water to the lettuce using ordinary differential equations to capture the dynamic processes of viral transport in lettuce.

This first attempt is aimed at underscoring the importance of time in determining the final risk outcome. The modeling approach from this study may be customized for other scenarios in the management of water reuse practices and for developing new guidelines for food safety. Moreover, this study identifies critical gaps in the current knowledge of pathogen transport in plants and calls for further laboratory and field studies to better understand the risk of water reuse.

To develop an adequate model to predict viral transport in plant tissue, it is necessary to couple mathematical assumptions with an understanding of the underlying biogeochemical processes governing virus removal, plant growth, growth conditions, and virus-plant interactions. For example, although a simple transport model without AD could predict the viral load in the lettuce at harvest, it failed to capture the initial curvature in the viral load in the growth medium. An alternative to the AD hypothesis that could capture this curvature is the existence of two populations of viruses, as used in Petterson et al., one decaying more slowly than the other. However, a closer examination of the double-exponential model revealed that it was not time-invariant. This means that the time taken to decay from a concentration C1 to C2 is not unique and depends on the history of preceding events. Other viral models, such as the ones used in Peleg and Penchina, faced the same issue. The incorporation of AD made the model time-invariant, always yielding the same decay time between two given concentrations. This model-fitting experience showcases how mathematics can guide the understanding of biological mechanisms.
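
To make the time-invariance argument concrete, the schematic below contrasts the two formulations. It is a sketch only, not a reproduction of the study's equations: plant uptake and decay of attached virus are omitted, and the symbols katt and kdet simply mirror the kinetic parameters discussed above.

```latex
% Schematic only, not the study's exact equations.
% A two-population (double-exponential) model prescribes the free concentration as an
% explicit function of the time elapsed since spiking, so the apparent decay between two
% concentrations depends on how much of each sub-population remains (i.e., on history):
\[
  C(t) \;=\; C_0 \left[ f\, e^{-k_1 t} + (1 - f)\, e^{-k_2 t} \right].
\]
% An attachment-detachment formulation is instead autonomous in the state
% (C = free virus in water, S = wall-attached virus); time never appears explicitly
% on the right-hand side, so decay between two given states always takes the same time:
\[
  \frac{dC}{dt} = -\left(k_{\mathrm{dec}} + k_{\mathrm{att}}\right) C + k_{\mathrm{det}}\, S,
  \qquad
  \frac{dS}{dt} = k_{\mathrm{att}}\, C - k_{\mathrm{det}}\, S.
\]
```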

The hypothesis of two different NoV populations is less plausible than that of viral attachment to and detachment from the hydroponic tank. While it appears that incorporating the AD mechanism does not significantly improve the viral load prediction in the lettuce shoot at harvest, this is a consequence of force-fitting the model to data under the given conditions. Changing the conditions, for example by reducing the viral attachment rate to the tank wall, would lead the model without AD to underestimate the virus load in the lettuce shoot. Through this model-fitting exercise, we also acknowledge that the model can be significantly improved with new insights into virus-plant interactions and more data on viral transport through plants. A potential cause for concern in the model fit is the wide intervals. However, there is significant uncertainty in the data as well, suggesting that the transport process itself is noise-prone. Moreover, from the perspective of risk assessment, the variability between dose-response models is higher than the within-model variability. Since within-model variability stems from uncertainty in viral loads at harvest, among other factors, the wide intervals do not exert a larger effect than the discordance among the different dose-response models.

Some parameters are identifiable to a good degree through model fitting, but there is a large degree of uncertainty in the viral transfer efficiencies and the AD kinetic parameters. While this could be a consequence of fitting a limited number of data points with several parameters, the viral load at harvest and the risk estimates were well constrained. This combination of large variation in parameters and ‘usefully tight quantitative predictions’ is termed sloppiness of parameter sensitivities and has been observed in physics and systems biology. Well-designed experiments may simultaneously reduce uncertainty in the parameters as well as in the predictions, thereby increasing confidence in the predictions. One possible experiment to reduce parameter uncertainty is recording the transpiration and growth rates to fit Eq. 6 independently and acquire at and bt.

An interesting outcome of our analysis is the strong association of risk with plant growth conditions. The health risks from consuming lettuce irrigated with recycled wastewater are highest for hydroponically grown lettuce, followed by soil-grown lettuce under Sc2, and lowest for soil-grown lettuce under Sc1. This difference in risk estimates stems to a large degree from the difference in AD kinetic constants. Increasing katt,s will decrease risk, as more viruses become attached to the growth medium, while increasing kdet,s will have the opposite effect, as more detached viruses are available for uptake by the plant. The combined effect of the AD parameters depends on their magnitudes and is portrayed in Supplementary Fig. S5. This result indicates that a better understanding of the virus's interaction with the growth environment can lead to an improved understanding of risk. More importantly, this outcome indicates that soil plays an important role in the removal of viruses from irrigation water through adsorption of viral particles. An investigation focused on understanding the influence of soil composition on viral attachment would help refine the transport model. The risk predicted by this dynamic transport model is greater than the EPA annual infection risk benchmark as well as the WHO annual disease burden benchmark. The reasons for this outcome are manifold.
First, there is significant variability in the reported internalization of viruses in plants.