The severity of Bot. infection was calculated as the isolation frequency per site

Total canopy cover was thus the sum of percent GV and NGV, and dieback was calculated as the total percent NGV to reflect the severity of canopy-level symptoms across each site. Ten individuals within each of the 30 sites were randomly selected for sampling using the random points generator feature in ArcMap, for a total of 300 shrubs. Individuals were located in the field using a combination of a 1 m resolution NAIP imagery base map, a GPS device, a laser range finder, and transect tape. For stands not located within a polygon, individuals were selected either using a transect tape and a point intercept method, or haphazardly within the accessible confines of the stand to provide an even distribution of sampling throughout the stand. Whenever possible, individuals located more than 2 m from trails and fuel breaks were selected to avoid edge effects. Individuals with any signs of pruning or other human-caused damage were not selected. All individuals were sampled once between April and September 2019. Two branchlets, each containing necrotic lesions and adjacent asymptomatic wood tissue, were clipped per individual using sterile techniques, for a total of 600 samples. Samples were retrieved from approximately breast height and from opposite sides of the shrub, whenever possible.

All individuals had at least two necrotic lesions, even if no significant dieback was observed, allowing these methods to be carried out across all 300 individuals. Samples were placed in labeled plastic bags, stored on ice, and brought back to the lab, where they were kept in a 4 °C refrigerator. Samples were rinsed of dirt and debris and surface sterilized using 100% ethyl alcohol, 0.5% bleach, and a 70% ethyl alcohol rinse. Cross sections 1–2 mm thick were isolated from the advancing canker margin and plated onto half-strength potato dextrose agar amended with streptomycin antibiotic (PDA-strep). Cultures were incubated at room temperature until fungal colonies developed, and hyphal tips near the advancing margin were then re-plated onto half-strength PDA-strep to obtain pure cultures. From pure culture, any samples with morphological characteristics consistent with those of Bot. fungi were selected for PCR. A few isolates from cultures inconsistent with Bot. characteristics were randomly selected from each site for amplification and sequencing to verify our morphotyping method. The internal transcribed spacer region 1 and alpha-elongation factor-1 genes were amplified using PCR primer pairs ITS1F and ITS4, and EF1-728F and EF1-986R, respectively, using methods modified from White et al. and Slippers et al. Successfully amplified samples were sequenced at the UC Berkeley Sequencing Facility.

Data were square-root transformed when necessary to meet the assumptions of normality. Differences in mean Bot. infection severity between elevation categories were calculated using one-way ANOVA with Tukey's HSD for post-hoc analysis in R Statistical Software. Correlations between actual elevation and Bot. infection severity were assessed using simple linear regression and ANOVA to test for significance. Generalized linear models were developed to identify patterns of dieback, with dieback severity values as the response variable, and elevation, Bot. infection severity, and aspect as possible explanatory variables. If multiple models received substantial support, the best model was confirmed by calculating the relative importance of each term based on the sum of their Akaike weights (where the Akaike weight of model i is exp(−Δi/2) normalized across all candidate models, with Δi the difference in AIC between model i and the best model). The proportion of variance explained by the models was calculated by measuring the adjusted D2 value.

This study provides definitive support for the hypothesis that shrub dieback during a recent drought and pathogen infection are strongly related in a wild shrubland setting. This is the first known quantitative support for the hypothesis that in A. glauca, an ecologically important shrub species in the study region, dieback is related to pathogen infection occurring along an elevational gradient.

As expected, N. australe and B. dothidea were the two most frequently retrieved pathogens across all sites; however, N. australe, the introduced pathogen, had almost twice the abundance of B. dothidea. N. australe is driving the correlation between elevation and Bot. infection, as its frequency was greater at lower elevations than at upper elevations, while B. dothidea abundance did not change significantly across elevations. Level of Bot. infection was confirmed to be a significant predictor of stand-level dieback severity. The data also confirm that stand dieback severity is generally greater at lower elevations, which in this region experience higher temperatures and lower annual rainfall than the higher elevations sampled.

While the presence of Bot. species has been reported previously in Santa Barbara County, this study represents the first effort to understand the abundance and distribution of Bots occurring in natural shrublands, and the first wildland shrub survey of Bots across a climate gradient. The high frequency and wide distribution of Bots retrieved from our study sites support the hypothesis that Bot. species are widespread across a natural landscape and likely contributing to the extensive dieback resulting from the recent drought. Bot. fungi were retrieved from nearly every site in this study; we could not determine the presence of Bots at three sites due to contamination issues. The broad extent of the study area suggests that infection is widespread in the region and likely extends beyond the range of our study. While N. australe and B. dothidea together made up the most frequently retrieved pathogens, our data show that N. australe has a larger distribution and occurs in greater abundance across the study region than B. dothidea. This trend was consistent across all elevations, but was most pronounced at lower elevations. One possible explanation is that N. australe, being a recently introduced pathogen, spreads more rapidly as an exotic species in A. glauca compared to B. dothidea, which has been established in California for over 150 years. This hypothesis is consistent with previous studies that have shown variations in Bot. species abundance and virulence in Myrtaceous hosts occurring in native versus introduced ranges. However, it is difficult to evaluate the incidence of B. dothidea and N. australe in the present study in relation to historical documentation, since many species in the Bot. complex have, until recently, been mischaracterized. Only with the recent development of molecular tools have researchers begun to accurately trace the phylogenetic and geographic origins of Bot. species. Such studies are beginning to elucidate the complex existence of Bot. fungi as both endophytes and pathogens around the world, and much more research is needed to understand their pathogenicity in various hosts under different conditions. Nevertheless, it remains clear from our study that Bot. species, particularly N. australe, are both abundant and widely distributed in this region, and are important pathogens in A. glauca shrubs.

Because Bot. taxa were the most frequently retrieved pathogens and were significantly correlated with dieback, we believe that they drive A. glauca dieback. Further, stand dieback severity increased significantly with Bot. infection. This is not to say that other pathogens do not also contribute to disease symptoms, but we found no evidence of any other pathogens occurring in such high incidence as Bot. species.
While Brooks and Ferrin identified B. dothidea as a likely contributor to disease and dieback in dozens of native chaparral species during an earlier drought event in southern California, and Swiecki and Bernhardt found B. dothidea in association with a dieback event in stands of Arctostaphylos myrtifolia in northern California, our study provides the most extensive documentation of Bot. infection and related dieback in a chaparral shrub species across a landscape. Further, our study resolves species identity within the Bot. clade and highlights the role of the recently introduced pathogen, N. australe.

A significant finding in this study was the relationship of Bot. infection and dieback with elevation. Bot. abundance and dieback were both greatest at lower elevations, driven mostly by the high frequency of N. australe retrieved at these sites.

This represents the first quantitative evidence supporting that A. glauca vulnerability to fungal infection is influenced by stress levels along an elevation gradient. A similar pattern was observed in northern California by Swiecki and Bernhardt, who suggested that dieback in Ione manzanita infected with B. dothidea was greater in drier sites compared to more mesic ones, although no comparison of infection rates between sites was conducted in their study. The elevation gradient in our study was used as a proxy for stress levels because annual precipitation decreases with decreasing elevation within our study region. Higher temperatures, which are associated with lower elevations, are also known to play an important role in drought-related mortality, as water loss from evapotranspiration is increased. Furthermore, unpublished data for dry-season predawn xylem pressure potentials on a subset of sites along the same elevational gradient revealed more negative water potentials in A. glauca at lower elevations compared to upper elevations as spring and summer drought sets in. Thus, there is evidence that shrubs at low elevations indeed experienced the greatest water stress during the 2011-2018 drought, which predisposed them to higher levels of Bot. infection and enhanced dieback compared to upper elevation sites. More in-depth studies on the microbial communities and fungal loads of healthy and diseased shrubs throughout the region would help elucidate such trends.

Another possibility for the higher incidence of Bot. infection at lower elevations is that the lower ranges of A. glauca populations in Santa Barbara are often located adjacent to or in close proximity to agricultural orchards, ranches, and urban settings, which are common sources of plant pathogens, including Bots. Eucalyptus, avocado, and grapevines, which are abundant in these areas, are particularly well-known Bot. hosts and potential facilitators of Bot. introduction. Therefore, sources of inoculum from nearby populations of agricultural and horticultural hosts could be responsible for continual transmission of Bots in wildland A. glauca populations, and would likely result in greater rates of infection at lower elevations. Furthermore, many of the lower sites in the survey were located near roads and/or trails, which are often subjected to additional stress from human activity like pruning and trail clearing, activities that are known to spread and promote infection by Bot. pathogens. While we avoided sites that showed signs of such activities in our survey, we cannot rule out the potential contribution of proximity to human encroachment to the overall higher rates of Bot. infection across the lower elevation zone.

It is worth noting that while our study revealed a trend of increased dieback at lower elevations, some upper elevation sites also exhibited high levels of dieback, and Bot. fungi were retrieved from many of these sites. Upper elevations also experienced significant stress during the 2011-2018 drought, and water-related microsite variables outside the scope of this study, such as slope, solar incidence, soil composition, and summer fog patterns, likely contributed to increased stress and subsequent dieback. Additionally, N. luteum, N. parvum, and D. sarmentorum were isolated primarily from upper sites. Host plants in these sites may serve as potential reservoirs for disease because the milder climate conditions promote greater host survival and thus pathogen persistence as endophytes.
This serves as an important reminder that continued global change-type drought may eventually jeopardize susceptible species populations even at the upper boundary of their range.

Our results are consistent with well-known theoretical models describing the relationship between environmental stress and biotic infection, which generally ascribe extreme drought stress as a mechanism for plant predisposition to disease. These frameworks illustrate dynamic interactions between environmental stress, plant hydraulic functioning and carbon balance, and biotic attack, and a growing body of research has focused on understanding the roles of these factors in driving plant mortality, especially during extreme drought. While the data collected in this study do not directly address the specific mechanisms leading to Bot. infection and dieback in A. glauca, our results can be discussed in the context of how life histories and physiological adaptations elicit differential responses to drought in woody plants, particularly in chaparral shrubs. For example, shallow-rooted, obligate seeder shrubs like A. glauca have been shown to be more susceptible to drought-induced mortality during acute, high-intensity drought than deep-rooted, resprouter shrubs. This supports our observations of pronounced A. glauca decline during a historic California drought compared to nearby resprouter species like chamise and laurel sumac.

Linear regression of the time-aligned CH4 and C2H6 results in a molar enhancement ratio

Aerodyne Research, Inc. (ARI) drove ground-based transects in a mobile laboratory equipped with highly precise Aerodyne tunable infrared laser direct absorption spectrometers (TILDAS) measuring a variety of species. A LI-COR non-dispersive infrared instrument measured CO2 and H2O. Meteorological and positional data were collected at all tracer release sites and on the vehicle, using multiple AIRMAR 200WX WeatherStation® instruments and a Hemisphere V103 GPS Compass. To minimize drift and maintain accurate baseline values on the TILDAS instruments in the minAML, a valve sequence enabled overblowing of the inlet with ultra-zero air every 15 min for 45 s. Scientific Aviation (SA) equipped an aircraft with a Picarro G2301-f cavity ringdown spectrometer, a TILDAS, a Vaisala HMP60 humidity and temperature probe, and a Hemisphere VS330 GPS Compass used for positioning and calculating wind velocity. Since SA had a TILDAS on board measuring C2H6 during these times, it was possible to treat these flights as a tracer release experiment similar to that performed with the ground-based equipment. A full description of the equipment used during this project can be found in the Supplement of Arndt et al.

During this study, the aircraft flew low and close to the sites, at an average distance of ∼ 900 m and an altitude of ∼ 325 m. Each site had a combination of spread-out point source emitters and large open area sources. SA conducted 11 flights over 6 d, usually flying twice a day, in the late morning and midafternoon. Flights typically lasted 1–2 h for a given farm, flying in spirals looping around the perimeter of the animal housing and manure management areas. ARI measured for 3 d at Dairy 1 and 5 d at Dairy 2. The mobile lab drove at several different times of day for each site, trying to capture any diurnal effect, but always overlapped with the aircraft at least once a day.

Tracer gases, ethane and acetylene, were released from ground-based tripods at a variety of locations on the dairy farms with the intention of co-locating with known emission sources. Tracers were used to distinguish and quantify sources by positioning them within each respective emission area. Often, each tracer was released at a single point from each major source, typically the liquid manure management and animal housing areas. For this study, only the position and release rate of C2H6 are relevant. Release rates of C2H6 ranged from 10 to 40 slpm (standard liters per minute) throughout the project.

A schematic of tracer release being performed at a dairy farm is shown in Fig. 1. Detailed descriptions of the tracer flux ratio technique used during this work can be found in Arndt et al. or, more generally, in Roscioli et al. In summary, tracer gas released close to a source produces a plume that experiences the local wind dynamics and meteorological conditions akin to the nearby emission of interest, thereby providing a representation of those emissions. A plume is considered to be a co-located enhancement above ambient concentrations of CH4 and tracer gas. Active tracer release overlapped with on-site flight transects for approximately 11 h during this week-long project. Exact timing of the overlap between the release of C2H6 and sampling periods by the aircraft is shown in Table 1. Ethane was selected over other gases due to the lack of potential interference with nearby sources and its long atmospheric lifetime. At one of the two sites, C2H6 from a small well pad could be observed on the ground at close distances. This interference was characterized and eliminated using its measured C2H6 : CH4 ratio in combination with wind direction and farm layout.

Analysis of tracer flux data involves comparing slopes or areas of enhancements between tracer gas and site CH4 emissions. The molar enhancement ratio, scaled by the amount of tracer gas released, determines a CH4 emission rate for the specific plume encounter. Area analysis compares integrated plumes of CH4 and C2H6, and is particularly necessary during close transects when plumes do not temporally or spatially co-align. Both analysis methods were performed on this dataset and are discussed in further detail in Sect. 3.2.
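To make the slope-based calculation concrete, the following is a minimal Python sketch, not the analysis code used in the study: the enhancement units cancel in the molar ratio, "standard" liters are assumed to be at 0 °C and 1 atm (22.414 L mol−1), and the plume values are invented for illustration.

```python
import numpy as np

MOLAR_VOLUME_L = 22.414  # L per mol, assuming "standard" means 0 deg C and 1 atm
M_CH4_G = 16.04          # molar mass of CH4, g/mol

def ch4_emission_kg_per_day(dch4_ppb, dc2h6_ppb, tracer_slpm):
    """Tracer flux ratio: the least-squares slope of co-located CH4 and C2H6
    enhancements (mol CH4 per mol C2H6), scaled by the known C2H6 release
    rate, gives the CH4 emission rate for that plume encounter."""
    slope = np.polyfit(dc2h6_ppb, dch4_ppb, 1)[0]       # mol CH4 / mol C2H6
    tracer_mol_min = tracer_slpm / MOLAR_VOLUME_L       # mol C2H6 per min
    ch4_g_min = slope * tracer_mol_min * M_CH4_G        # g CH4 per min
    return ch4_g_min * 1440.0 / 1000.0                  # kg CH4 per day

# Hypothetical plume: CH4 enhanced ~10x the tracer (molar), tracer at 20 slpm
rng = np.random.default_rng(1)
dc2h6 = np.array([0.0, 5.0, 9.0, 12.0, 20.0])           # ppb above baseline
dch4 = 10.0 * dc2h6 + rng.normal(0.0, 2.0, dc2h6.size)  # ppb above baseline
print(f"~{ch4_emission_kg_per_day(dch4, dc2h6, 20.0):.0f} kg CH4/d")
```

With these invented numbers, a slope of ~10 and a 20 slpm release correspond to roughly 200 kg CH4 per day for the sampled source.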

Due to the speed of the aircraft, observations of plume emissions were brief. On average, identified plumes lasted 12 s, not including a significant amount of time collected before and after enhancements to ensure accuracy of baseline calculations during analysis. Prior to analysis, all data had appropriate calibration factors applied, correcting minor deviations in flow rate by mass flow controllers and instrument performance for specific species. Instrument calibrations occurred in the field several times during this campaign using mixed-gas standards diluted with ultra-zero air. Distance between tracer release locations and aircraft position was determined using basic trigonometry. Uncertainties for emission rate estimates are determined as 95 % confidence intervals. Plumes observed by the aircraft were included in the analysis after meeting certain criteria. Requirements included tracer gas flowing on-site for more than 10 min prior to observation, correlated plumes of CH4 and C2H6 based on a high coefficient of determination from a least-squares fit, and positive enhancements above baseline for CH4 and C2H6. After meeting these standards, each plume was viewed and additional conditions were manually considered: wind direction and speed, duration of the enhancement, validity of the linear regression fits, quality of the calculated baseline for integration purposes, location of the aircraft relative to the sources, and correlation between CH4 and other species indicating interferences or source allocation.

During each flight, identifiable plumes of CH4 were observed regularly, approximately every 1–2 min. Figure 4 depicts repeated measurements of CH4 emissions representative of the whole farm, revealing characteristics about emission sources at each site; each time trace shows methane plumes observed by the aircraft downwind to the south and the high rate of repetition in the flown transects around each site. Viewed from the south, manure and animal housing areas at Dairy 1 line up together, whereas at Dairy 2 the anaerobic lagoon and settling cells are offset from the housing areas. While these observations largely depend on wind direction and distance from the source, some features gave insight into where emissions came from on-site. Broad emissions can be readily attributed to the large collection of point source emitters milling around barns and open lots. Sharp peaks and broad plateaus indicate an encounter with outgassing by a large area source. Gaussian shapes appear to be an amalgamation of both major sources mixed downwind. Temporal and spatial differences exist between the aircraft measurements used in this dataset and the ground-based measurements collected as part of the initial study at each dairy farm. Measurements by the minAML occurred during the day and night at a variety of distances from each site. The aircraft had good coverage during the middle of the day, with flights in the late morning and early afternoon performing frequently repeated transects around each site. The ground-based tracer release experiment observed very low plume enhancements in the hot midday conditions due to low winds and strong vertical mixing, while the aircraft saw good signal; however, the ground-based system had no issue collecting nighttime measurements when the aircraft did not operate. Tracer flux ratio methodology thrives with strong winds and downwind road access perpendicular to the dominant wind direction.
Close placement of tracer gas to a point source and distant measurements by the mobile lab allow time and space for the tracer to co-disperse with emission gas and merge together in the measured plume.
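As an illustration of the screening described above, the automated portion of the plume-acceptance criteria might be expressed as the following sketch. The field names, data structure, and the specific R² threshold are assumptions (the study reports only a "high coefficient of determination"), and the manual checks are not reproduced.

```python
def passes_auto_criteria(plume, r2_min=0.8):
    """Automated acceptance checks applied before manual review of each plume.
    `plume` is assumed to be a dict with:
      minutes_since_release : float, tracer flow time before the observation
      r2                    : float, R^2 of the CH4 vs C2H6 least-squares fit
      ch4_enh, c2h6_enh     : float, peak enhancements above baseline (ppb)
    """
    return (
        plume["minutes_since_release"] > 10   # tracer flowing > 10 min on-site
        and plume["r2"] >= r2_min             # correlated CH4 and C2H6 plumes
        and plume["ch4_enh"] > 0              # positive CH4 enhancement
        and plume["c2h6_enh"] > 0             # positive tracer enhancement
    )

candidates = [
    {"minutes_since_release": 25, "r2": 0.93, "ch4_enh": 140.0, "c2h6_enh": 14.0},
    {"minutes_since_release": 6,  "r2": 0.97, "ch4_enh": 200.0, "c2h6_enh": 19.0},
]
accepted = [p for p in candidates if passes_auto_criteria(p)]
print(f"{len(accepted)} of {len(candidates)} plumes pass the automated screen")
```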

During this field campaign, the aircraft flew close to the site measuring emissions in calm wind and saw an abundance of signal due to strong surface heating. These conditions proved favorable for the aircraft and mass balance calculations but stretch the possible application of the tracer release method. Even so, the attempt to perform a tracer release experiment observed from an aircraft proved largely successful and provided direct insight into how these measurements relate to the ground-based observations. Due to the sensitivity of the C2H6 instrument on the aircraft, it was readily apparent when the tracer gas was present and intermingling with the farm emissions. Figure 5 visualizes the initiation of tracer release at Dairy 2 and the time it takes for tracer gas to disperse on-site. Prior to releasing any tracer gas, the concentration of C2H6 showed a relatively steady baseline. After initiating the release of tracer gas at 20 slpm, it took approximately 20 min before the aircraft began to detect it and another 15 min before the plume characteristics stabilized. We suspect this was due to the prevailing conditions of weak horizontal winds and strong but varying vertical mixing at the site. The aircraft ascended above the emission plume for 10–20 min after the release began, taking it out of plume detection range, which may have lengthened the time it took to first detect tracer gas. Based on the average wind direction and horizontal speed from 10:39 PDT to 11:00 PDT, we could expect to begin seeing tracer gas after ∼ 6 min at a distance of 1.6 km. Instead, we saw the first spike around 11 min after beginning release.

For the plumes reported in this dataset, there is no observed dependence of emission rate on sampling altitude. In Fig. 6, CH4 emissions are plotted versus aircraft altitude. Emissions between 0 and 6500 kg d−1 appear to be randomly distributed between 100 and 600 m at each site. Two outliers show higher emission rates at low altitudes, unmatched at higher altitudes. Above 650 m are three other points scattered across a wide range of emissions. These outliers occurred when the aircraft flew close to the site at an angle that put the lagoon between the aircraft and the tracer release point. The impact of measuring a source closer than the tracer is a potential overestimation of the emission due to differences in dispersion. Increasing emissions with decreasing height could, in some cases, be attributed to the influence of a strongly lofted lagoon signal at a site. Lower flights could then cause the aircraft to encounter a larger proportion of the manure-related emissions instead of the ideal case: a well-mixed plume representative of the entire site.

Swirling and calm winds shifted emissions around each site at various times over multiple days. When selecting valid plumes, the proximity of the aircraft during an enhancement to a single source introduces a dilemma. Varying distances between the tracer gas release point and the presumed source could affect the determined emission rate, due to imperfect co-dispersion. For example, using a tracer plume located 500 m away to represent a source 300 m away would be problematic. When measuring at greater distances with better resolution, it is often trivial to identify when the tracer inadequately represents the emission. Flying several times faster than the driven transect provided notable repeatability but made spatial understanding of the site difficult with respect to emission sources.
Direct estimates of liquid manure emissions proved unrealistic at both dairies due to the sparse number of CH4 plumes with sufficient tracer representation, despite favorable wind direction and aircraft position. A few plumes of acceptable data quality were identified as being related to liquid manure emissions at Dairy 2, but the estimates, at 4893 ± 1331 kg CH4 d−1, were significantly higher than reported in Arndt et al. Due to concerns that the tracer release location was not close enough to the liquid manure source to be representative, especially given the non-ideal transect geometry and limited horizontal wind, these data are not reported in Table 2. Relative apportionment of CH4 between sources showed manure-associated plumes leading the fractional contribution at Dairy 1 and Dairy 2. This was an expected finding based on US EPA methodology estimates for this month at Dairy 2. Given the temporal nature of manure emissions, as reported by Leytem et al., it should be reinforced that these results only represent a short period of time in a single season. Despite the difficulty of collecting or identifying many distinct manure-associated plumes via measurements taken from this aircraft, the general apportionment of source emissions appears to remain evident. Clear, hot measurement days could have stimulated anaerobic activity in manure lagoons and caused greater release of gases, while strong thermal convection lofted concentrated and unmixed plumes.

Many of these regulations focus on the use of food species as hosts

Accordingly, continuous harvesting and extraction can be carried out using appropriate equipment such as screw presses, whereas continuous filtration and chromatography can take advantage of the same equipment successfully used with microbial and mammalian cell cultures. Therefore, plant-based production platforms can benefit from the same >4-fold increase in space-time yield that can be achieved by continuous processing with conventional cell-based systems. As a consequence, a larger amount of product can be delivered earlier, which can help to prevent the disease from spreading once a vaccine becomes available. In addition to conventional chromatography, several generic purification strategies have been developed to rapidly isolate products from crude plant extracts in a cost-effective manner. Due to their generic nature, these strategies typically require little optimization and can immediately be applied to products meeting the necessary requirements, which reduces the time needed to respond to a new disease. For example, purification by ultrafiltration/diafiltration is attractive for both small and large molecules because they can be separated from plant host cell proteins, which are typically 100–450 kDa in size, under gentle conditions such as neutral pH to ensure efficient recovery.

This technique can also be used for simultaneous volume reduction and optional buffer exchange, reducing the overall process time and ensuring compatibility with subsequent chromatography steps. HCP removal triggered by increasing the temperature and/or reducing the pH is mostly limited to stable proteins such as antibodies, and the former method especially may require extended product characterization to ensure that the function of products, such as vaccine candidates, is not compromised. The fusion of purification tags to a protein product can be tempting to accelerate process development when time is pressing during an ongoing pandemic. These tags can stabilize target proteins in planta while also facilitating purification by affinity chromatography or non-chromatographic methods such as aqueous two-phase systems. On the downside, such tags may trigger unwanted aggregation or immune responses that can reduce product activity or even safety. Some tags may be approved in certain circumstances, but their immunogenicity may depend on the context of the fusion protein. The substantial toolkit available for rapid plant biomass processing and the adaptation of even large-scale plant-based production processes to new protein products ensure that plants can be used to respond to pandemic diseases with at least an equivalent development time and, in most cases, a much shorter one than conventional cell-based platforms.

Although genetic vaccines for SARS-CoV-2 have been produced quickly, they have never been manufactured at the scale needed to address a pandemic, and their stability during transport and deployment to developing world regions remains to be shown.

Regulatory oversight is a major and time-consuming component of any drug development program, and regulatory agencies have needed to revise internal and external procedures in order to adapt normal schedules for the rapid decision-making necessary during emergency situations. Just as important as rapid methods to express, prototype, optimize, produce, and scale new products is the streamlining of regulatory procedures to maximize the technical advantages offered by the speed and flexibility of plants and other high-performance manufacturing systems. Guidelines issued by regulatory agencies for the development of new products, or the repurposing of existing products for new indications, include criteria for product manufacturing and characterization, containment and mitigation of environmental risks, stage-wise safety determination, clinical demonstration of safety and efficacy, and various mechanisms for product licensure or approval to deploy the products and achieve the desired public health benefit.

Regardless of which manufacturing platform is employed, the complexity of product development requires that continuous scrutiny is applied from preclinical research to drug approval and post-market surveillance, thus ensuring that the public does not incur an undue safety risk and that products ultimately reaching the market consistently conform to their label claims. These goals are common to regulatory agencies worldwide, and higher convergence exists in regions that have adopted the harmonization of standards as defined by the International Council for Harmonization (ICH) in key product areas including quality, safety, and efficacy.

Both the United States and the EU have stringent pharmaceutical product quality and clinical development requirements, as well as regulatory mechanisms to ensure product quality and public safety. Differences and similarities between regional systems have been discussed elsewhere and are only summarized here. Stated simply, the United States, EU, and other jurisdictions generally follow a two-stage regulatory process, comprising clinical research authorization and monitoring, followed by results review and marketing approval. The first stage involves the initiation of clinical research via submission of an Investigational New Drug (IND) application in the United States or its analogous Clinical Trial Application (CTA) in Europe. At the preclinical-clinical translational interface of product development, a sponsor must formally inform a regulatory agency of its intention to develop a new product and the methods and endpoints it will use to assess clinical safety and preliminary pharmacologic activity. Because the EU is a collective of independent Member States, the CTA can be submitted to a country-specific regulatory agency that will oversee development of the new product. The regulatory systems of the EU and the United States both allow pre-submission consultation on the proposed development programs via discussions with regulatory agencies or expert national bodies. These are known as pre-IND (PIND) meetings in the United States and Investigational Medicinal Product Dossier (IMPD) discussions in the EU. These meetings serve to guide the structure of the clinical programs and can substantially reduce the risk of regulatory delays as the programs begin. PIND meetings are common albeit not required, whereas IMPD discussions are often necessary prior to CTA submission. At intermediate stages of clinical development, pauses for regulatory review must be added between clinical study phases. Such End of Phase review times may range from one to several months depending on the technology and disease indication. In advanced stages of product development, after pivotal, placebo-controlled randomized Phase III studies are complete, drug approval requests typically require extensive time for review and decision-making on the part of the regulatory agencies. In the United States, the Food and Drug Administration (FDA) controls the centralized marketing approval/authorization/licensing of a new product, a process that requires in-depth review and acceptance of a New Drug Application (NDA) for chemical entities, or a Biologics License Application (BLA) for biologics, the latter including PMP proteins. The EU follows both decentralized processes as well as centralized procedures covering all Member States.
The Committee for Medicinal Products for Human Use (CHMP), part of the European Medicines Agency (EMA), has responsibilities similar to those of the FDA and plays a key role in the provision of scientific advice, evaluation of medicines at the national level for conformance with harmonized positions across the EU, and the centralized approval of new products for market entry in all Member States.

The statute-conformance review procedures practiced by the regulatory agencies require considerable time because the laws were established to focus on patient safety, product quality, verification of efficacy, and truth in labeling. The median times required by the FDA, EMA, and Health Canada for full review of NDA submissions were reported to be 322, 366, and 352 days, respectively. Collectively, typical interactions with regulatory agencies will add more than 1 year to a drug development program. Although these regulatory timelines are the status quo during normal times, they are clearly incongruous with the need for rapid review, approval, and deployment of new products in emergency use scenarios, such as emerging pandemics.

Plant-made intermediates, including reagents for diagnostics, antigens for vaccines, and bioactive proteins for prophylactic and therapeutic medical interventions, as well as the final products containing them, are subject to the same regulatory oversight and marketing approval pathways as other pharmaceutical products. However, the manufacturing environment as well as the peculiarities of the plant-made active pharmaceutical ingredient (API) can affect the nature and extent of requirements for compliance with various statutes, which in turn will influence the speed of development and approval. In general, the more contained the manufacturing process and the higher the quality and safety of the API, the easier it has been to move products along the development pipeline. Guidance documents on quality requirements for plant-made biomedical products exist and have provided a framework for development and marketing approval. Upstream processes that use whole plants grown indoors under controlled conditions, including plant cell culture methods, followed by controlled and contained downstream purification, have fared best under regulatory scrutiny. This is especially true for processes that use non-food plants such as Nicotiana species as expression hosts. The backlash over the ProdiGene incident of 2002 in the United States refocused subsequent development efforts on contained environments. In the United States, field-based production is possible and even practiced, but such processes require additional permits and scrutiny by the United States Department of Agriculture (USDA). In May 2020, to encourage innovation and reduce the regulatory burden on the industry, the USDA's Animal and Plant Health Inspection Service (APHIS) revised its regulations covering the interstate movement or release of genetically modified organisms (GMOs) into the environment in an effort to regulate such practices with higher precision [SECURE Rule revision of 7 Code of Federal Regulations 340]. The revision will be implemented in steps and could facilitate the field-based production of PMPs. In contrast, the production of PMPs using GMOs or transient expression in the field comes under heavy regulatory scrutiny in the EU, and several statutes have been developed to minimize environmental, food, and public risk. The major perceived risks of open-field cultivation are the contamination of the food/feed chain and gene transfer between GM and non-GM plants. This is true today even though containment and mitigation technologies have evolved substantially since those statutes were first conceived, with the advent and implementation of transient and selective expression methods; new plant breeding technologies; use of non-food species; and physical, spatial, and temporal confinement. The United States and the EU differ in their philosophy and practice for the regulation of PMP products. In the United States, regulatory scrutiny is at the product level, with less focus on how the product is manufactured. In the EU, much more focus is placed on assessing how well a manufacturing process conforms to existing statutes. Therefore, in the United States, PMP products and reagents are regulated under pre-existing sections of the United States CFR, principally under various parts of Title 21, which also apply to conventionally sourced products.
These include current good manufacturing practice (cGMP) covered by 21 CFR Parts 210 and 211, good laboratory practice (GLP) toxicology, and a collection of good clinical practice (GCP) requirements specified by the ICH and accepted by the FDA. In the United States, upstream plant cultivation in containment can be practiced using qualified methods to ensure consistency of vector, raw materials, and cultivation procedures and/or, depending on the product, under good agricultural and collection practices. For PMP products, cGMP requirements do not come into play until the biomass is disrupted in a fluid vehicle to create a process stream. All process operations from that point forward, from crude hydrolysate to bulk drug substance and final drug product, are guided by 21 CFR 210/211. In Europe, biopharmaceuticals, regardless of manufacturing platform, are regulated by the EMA, and by the Medicines and Healthcare products Regulatory Agency in the United Kingdom. Pharmaceuticals from GM plants must adhere to the same regulations as all other biotechnology-derived drugs. These guidelines are largely specified by the European Commission in Directive 2001/83/EC and Regulation No 726/2004. However, upstream production in plants must also comply with additional statutes. Cultivation of GM plants in the field constitutes an environmental release and has been regulated by the EC under Directive 2001/18/EC and, if the crop can be used as food/feed, Regulation 1829/2003/EC. The production of PMPs using whole plants in greenhouses or cell cultures in bioreactors is regulated by the "Contained Use" Directive 2009/41/EC, which is far less stringent than an environmental release and does not necessitate a fully fledged environmental risk assessment. Essentially, the manufacturing site is licensed for contained use and production proceeds in a similar manner as at a conventional facility using microbial or mammalian cells as the production platform. With respect to GMP compliance, the major differentiator between the regulation of PMP products and the same or similar products manufactured using other platforms is the upstream production process.

Each bird was presented with all three depths below the perch in a randomized order

The tables consisted of a piece of plexiglass atop a wooden frame, covered by a white tarp canopy which was open at the two narrow ends of the table. The canopy enclosed the table so that there were only two open ends, and also prevented glare on the glass from the overhead lights. Half of the table was considered the “shallow side” and had a fixed board covered in checkerboard material just beneath the glass. The opposite side of the table was considered the “deep side” and had a movable checkerboard shelf that could be adjusted to be just below, 15 cm below, or 75 cm below the glass. When the shelf was adjusted to be 15 or 75 cm below the glass, the deep side of the table provided the illusion of depth without the risk of the bird falling. A strip of LED lights was placed just below and perpendicular to the glass on the deep side, in order to illuminate the glass and reduce reflection. A perch was secured in the center of the table at the division between the deep and shallow sides, 15 cm above the glass. The placement of the perch allowed the birds to sit in the center of the table and prevented the bird from receiving any tactile information about the presence of glass on the deep side; the bird could not extend its toes over the edge of the cliff or bend forward enough to peck the glass.

A platform made from poultry flooring was suspended over the deep side of the cliff; it was 30 cm wide and 20 cm from the perch for the small table, and 37.5 cm wide and 25 cm from the perch for the large table. The bird had the option of escaping through the open ends of the canopy by either jumping from the perch to the platform or by walking across the shallow side. The depth of the cliff was adjusted prior to the start of the trial. Before each trial involving perceived depth, both sides of the plexiglass were wiped down with a microfiber cloth to remove all dust and debris. The experimenter stood by the opening of the shallow side and placed the bird on the perch facing the deep side of the table. The research assistant started the timer and the experimenter released their hands as soon as the bird had grasped the perch with both feet simultaneously. The experimenter remained standing at the shallow side of the table until one minute had elapsed or the bird had exited through one of the ends of the table. If after one minute the bird had not exited the apparatus, the experimenter extended their right arm towards the perch with fingers outstretched to encourage the bird to exit through the deep side of the table.

The trial ended after the bird had exited or after 1.5 minutes had elapsed. This procedure was repeated until the bird had completed three trials and had been presented with all three depths. Latency to jump to the platform, type of jump, and frequency of looking down, or orienting the head down, were recorded.

Reliability: In order to verify that behaviors were coded consistently, inter- and intra-coder agreement was calculated using Cohen’s κ. Observations were considered to be in agreement if the behaviors recorded were identical and within 2 s of each other. There was very strong agreement for intra-coder reliability, with each research assistant re-coding 10 videos. For inter-coder reliability, all observers coded 5% of the videos and again showed very strong agreement.

Statistical analysis: Each bird was considered the experimental unit, and data were assessed for normality prior to analysis. All statistical analyses were conducted in IBM® SPSS® Statistics 27.0 with a significance level set to P < 0.05. Data are presented as the mean. A binomial logistic regression and a binomial test were run on the choice to exit through the short or long arm for the unequal Y-maze configurations, as well as the choice to exit left or right for the equal Y-maze configurations. The main and interaction effects of age and rearing treatment on the latency to exit the Y-maze were analyzed using a factorial ANOVA with pen as a random effect. Birds that did not exit the maze were assigned the maximum amount of time for the trial.
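The analyses were run in SPSS; purely as an illustration, the binomial test on exit choice and a Cohen's κ agreement check could be reproduced in Python roughly as follows (the counts and codings are invented, not the study's data).

```python
from scipy.stats import binomtest
from sklearn.metrics import cohen_kappa_score

# Hypothetical Y-maze exit counts: 44 of 60 exits through the short arm,
# tested against the 50 % chance expectation.
res = binomtest(k=44, n=60, p=0.5, alternative="two-sided")
print(f"P(short-arm exit) = {44 / 60:.2f}, p = {res.pvalue:.4f}")

# Hypothetical event-by-event codings from two coders (1 = behavior recorded),
# already aligned using the 2 s agreement window described above.
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(f"Cohen's kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```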

A generalized estimating equation was performed to investigate the main and interaction effects of age, depth, and rearing treatment, as well as the random effect of pen, on crossing the visual cliff and the type of movement used to cross the cliff. A factorial ANOVA was used to assess the main and interaction effects of age, depth, and rearing treatment, as well as the random effect of pen, on the frequency of looking down over the cliff edge while on the perch and the latency to cross the visual cliff. Birds that did not cross the cliff were assigned the maximum amount of time for the trial.

In order to better understand how access to vertical structures during the pullet phase impacts the development of spatial cognition in hens, this study utilized two novel depth perception tasks: a Y-maze and a visual cliff test. SINGLE and MULTI birds, as well as their FLOOR-reared counterparts, were shown to exit the Y-maze through the short arm significantly more frequently than would be expected by chance. These results demonstrate that the birds were able to choose the shorter escape route when presented with the choice between 30 and 90 cm. This suggests that, regardless of rearing system, birds were able to differentiate between these two distances on a floor-based task. FLOOR birds also crossed the visual cliff at the same frequency as SINGLE and MULTI birds at each cliff depth. If FLOOR birds did not adequately perceive depth, it could be expected that they would cross the cliff at the same frequency regardless of the height. However, it was found that all birds, regardless of rearing treatment, were more likely to cross at the 15 cm depth than the 30 cm and 90 cm depths. Additionally, all birds had a significantly shorter latency when crossing at the 15 cm depth compared to the greater depths. These results demonstrate that the birds were responding differently to variations in depth, suggesting that FLOOR birds do not have a deficit in the ability to perceive depth when compared to SINGLE and MULTI birds. This is an intriguing addition to the current evidence that access to vertical space at a young age improves spatial memory and navigation in hens. Although these aspects of spatial cognition are affected by access to perches and platforms during development, depth perception appears to be unaffected.

This study provides evidence that the development of accurate depth perception does not require specific visual experience, such as exposure to depth and height, during rearing. This is supported by studies with one-day-old chicks and four-day-old chicks without exposure to vertical structures, which found these animals already possessed the ability to perceive depth. Gibson and Walk performed a cross-species comparison of depth perception abilities by testing infant rat pups, kittens, chicks, lambs, goat kids, and piglets on a visual cliff. They found that all animals tested, including animals that were less than one day of age, demonstrated avoidance of the deep side of the visual cliff, akin to that of their adult counterparts. Gibson and Walk concluded that depth perception abilities are most likely innate; however, they did not exclude the chance that learning may play a role to some degree. The theory that depth perception is an innate skill was challenged by Tallarico and Ferrell when they raised chicks on either the deep side or the shallow side of a visual cliff for the first 30-40 hours of life.
When placed in the center of a visual cliff with both the deep and shallow sides present, chicks reared in the deep side environment were significantly more likely to cross to the deep side of the cliff than those reared on the shallow side.

Tallarico and Ferrell used this as evidence that there is no innate avoidance of a drop off or cliff. However, chicks reared in the deep side environment may have learned that the cliff was not a real threat and that there was a barrier preventing them from falling. Instead, evidence from previous research as well as the current study suggests that depth perception is an innate ability and that rearing chicks on the floor does not prevent the proper development of depth perception. Despite the same ability to perceive depth between the three treatments, differences in the frequency of crossing the visual cliff were observed between the FLOOR, SINGLE, and MULTI birds. The FLOOR birds were significantly less likely to cross the visual cliff than the other two treatments at 8 and 16 weeks of age. This difference dissipated at week 30, when all birds, regardless of rearing treatment, crossed the visual cliff at the same rate. This suggests that something inhibited the FLOOR birds from crossing the visual cliff at a younger age when housed in the treatment pen and that this reversed after the FLOOR birds were exposed to their vertically complex adult housing after 16 weeks. One possibility is that MULTI and SINGLE birds may have been more comfortable with the platform as a landing surface due to their experience with poultry flooring in their home pens. However, the poultry flooring in the visual cliff and in the rearing pen systems differed in appearance, in both color and size. The poultry flooring in the MULTI and SINGLE rearing treatment pens was black, as opposed to the white poultry flooring used for the visual cliff. Also, the platform was a much smaller landing surface than the large platforms used in the MULTI and SINGLE rearing systems as well as the adult multi-tier aviary system. Another possibility is that FLOOR birds were less likely to cross due to lack of experience with tall vertical structures such as perches and platforms above 30 cm in height. This lack of experience with vertical structures may prevent hens reared on the floor from readily crossing between perches and platforms when first introduced to a more vertically complex environment. FLOOR birds at 8 weeks of age also showed significantly longer latencies to cross the visual cliff than SINGLE and MULTI birds at 8 weeks of age. This further supports our suggestion that FLOOR-reared birds were tentative when using vertical structures prior to being transitioned into their adult housing. However, it is interesting that FLOOR birds at 16 weeks did not have significantly longer latencies than the other rearing treatments, meaning that FLOOR pullets at 16 weeks of age were less hesitant than their 8-week-old counterparts to utilize vertical structures. Across all treatments, birds at 16 weeks of age were less likely to cross the visual cliff than at 8 or 30 weeks old. Additionally, all birds exhibited longer latencies to exit the Y-maze and cross the visual cliff at this intermediate time point when compared to the other time points. The visual cliff was smaller at the 8 week time point and was increased in size by 25% for the 16 and 30 week time points. The size of the visual cliff was increased after the first time point due to the increased size of the birds. It is important to note that the frequency of crossing and latencies to cross were not significantly different between the 8 week and 30 week old birds despite the table being the same size at 30 weeks as it was at 16 weeks.
Also, the manner in which the birds crossed the cliff at 16 weeks of age did not differ significantly from the other time points, demonstrating that they did not need to jump or fly to reach the platform. Therefore, it is unlikely that the size difference of the visual cliff contributed to the avoidance of crossing and the longer latencies observed at the 16 week time point. If the increase in size of the visual cliff had increased the physical difficulty of the task, the birds would be expected to no longer be able to consistently step from the perch to the platform.

Consumers used to associate organic agriculture with small farms and diverse crop production

The pesticide products used in organic agriculture are generally less toxic, but their reduced efficacy could drive higher application rates, which makes the overall environmental impact of organic agriculture less obvious. In fact, certain pesticides used in organic agriculture have been found to be more toxic than conventional pesticides targeting the same pest. Läpple and Van Rensburg found that farmers who entered organic production after the supporting policy was launched are more likely to be profit-driven and less environmentally concerned than farmers who began organic production before any supporting policy was in place. Therefore, studying pesticide use in organic agriculture and how it changes can expand the understanding of organic agriculture and its future. The consolidation into larger operations is another important issue for organic agriculture because it could undermine the perception of organic agriculture as environmentally friendly. Although both the number of organic farms and total organic acreage have increased, consolidation still exists if large farms grow faster than small farms.

Meanwhile, the consolidation process has been clearly documented for the organic food processing sector and U.S. agriculture in general. Farm size, measured in acreage, was found to be positively correlated with pesticide use for staple crop production in the previous literature on conventional agriculture. If this relationship also applies to organic agriculture, then cropland consolidation could have a negative impact on the environment, meaning that organic agriculture could become less environmentally friendly than it used to be as consolidation proceeds. I find that farm size is positively associated with the use of sulfur and fixed copper pesticides in organic crop production. Organic agriculture in California has a diverse crop portfolio, which affects farm size and pesticide use simultaneously. Certain crops are produced at a large scale, measured in acreage, and require intensive pesticide use. How changes in the crop mix interact with the consolidation process is another issue investigated in this essay. The objective of this essay is threefold: to identify organic fields in the PUR database using historical pesticide use records; to characterize the patterns and trends of production and pesticide use for those identified organic fields collectively by crop, crop acreage, year, farm size, and other attributes; and to assess the environmental impacts of pesticide use in organic agriculture and the consolidation of organic cropland.
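As a sketch of how the farm size-pesticide use association mentioned above could be tested, the following Python example runs a simple regression on synthetic data; the essay's actual models and the PUR variables are not reproduced here, and the variable names and coefficients are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic field-level data for illustration only: sulfur use rate
# constructed to rise with log farm size, mimicking the reported association.
rng = np.random.default_rng(0)
df = pd.DataFrame({"farm_acres": rng.lognormal(mean=4.0, sigma=1.0, size=500)})
df["sulfur_lb_per_acre"] = (
    2.0 + 0.8 * np.log(df["farm_acres"]) + rng.normal(0.0, 1.0, size=500)
)

# OLS of use rate on log farm size; a positive, significant coefficient
# corresponds to the positive size-use association described above.
fit = smf.ols("sulfur_lb_per_acre ~ np.log(farm_acres)", data=df).fit()
print(fit.summary().tables[1])
```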

For each agricultural pesticide application, the PUR database specifies the location of application in the variable “COMTRS”, which stands for the county, meridian, township, range, and section as defined by the Public Land Survey System (PLSS). This information allows us to identify the section to which each field belongs and to aggregate pesticide usage at the 1×1 mile PLSS section level, which is the finest spatial scale reported in the PUR database. This detailed section-level analysis of the spatial distribution of organic fields in California, and how it has changed over time, is only possible using my method for identifying organic production fields in the PUR database. In the PUR database, acreage information is recorded as both treated acreage and planted acreage. The former represents the acres physically treated in a pesticide application, while the latter remains constant for the field within a year. However, researchers have demonstrated that planted acreage in the PUR database is not consistently reliable for annual crops. So, in this essay, we use the maximum treated acreage in a given year as the acreage of each field for annual crops. This approach assumes that the entire field is treated with pesticide at least once per year. If this assumption is invalid, then the planted acreage will be undercounted. As presented below, the validity of this approach is supported by the consistency of the state-scale crop acreages generated from the PUR database with those from other data sources. One caveat of the PUR database for organic production is that since 2000, pesticide products deemed as having “minimum impacts” are no longer required to be registered with CDPR, which exempts them from the pesticide use reporting requirement. A detailed list of these pesticide ingredients can be found in the California Code of Regulations section 6147.
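A minimal pandas sketch of the maximum-treated-acreage rule described above follows; the column names are assumptions for illustration, not the actual PUR schema.

```python
import pandas as pd

# Hypothetical PUR-like application records; column names are assumptions.
pur = pd.DataFrame({
    "comtrs":        ["M10S21E05"] * 3 + ["M10S21E06"] * 2,
    "site_loc_id":   [111, 111, 111, 222, 222],
    "year":          [2005] * 5,
    "acres_treated": [20.0, 35.0, 12.0, 60.0, 60.0],
})

# Annual-crop field acreage = maximum acres treated in any single application
# that year (assumes the whole field is treated at least once per year).
field_acres = (pur.groupby(["comtrs", "site_loc_id", "year"])["acres_treated"]
                  .max()
                  .rename("field_acres"))
print(field_acres)
```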

Most ingredients exempted from registration are natural or naturally derived products, which could presumably be used in organic agriculture and have impacts on the surrounding environment. However, these exempted ingredients are not widely applied, based on their minimal amounts of usage in the PUR database prior to 2000, when they were still required to be reported. Therefore, this issue is not likely to invalidate the results, especially because the number of fields where only such ingredients were applied before 2000 is small. For convenience, some chemically related individual active ingredients were grouped together, such as combining the many different strains of Bacillus thuringiensis, which target different insects and are each treated as a distinct AI in the PUR database, into a single “microbial” group. A detailed list of microbials is available in the appendix. The “Copper, fixed” group is the sum of copper, copper oxychloride, copper octanoate, copper oxide, and copper hydroxide, plus the two forms of copper sulfate.

Organic growers are required to comply with a set of crop management standards regarding seed and planting stock practices, soil fertility and crop nutrient management, pest, weed, and disease management, and crop rotation, among others. The most relevant requirement for this essay is the 36-month transition period between the last application of any substance prohibited under organic regulations and officially recognized organic production. The field identification method relies on this requirement. First, we constructed a list of allowed and prohibited substances based on various sources. Second, we checked each field in the PUR database to see which AIs were applied over the previous three years. If there were no applications of any prohibited ingredients, then the field was considered organic as of that year. Organic growers who do not use any chemical tools at all to manage pests and weeds are missing from the PUR database entirely, and therefore are not identified in this essay. However, based on acreage comparisons between the PUR database and other data sources, those growers appear to operate a very limited number of acres. A field could comply with the pest, weed, and disease management standards of the NOP while violating other standards, and thus not qualify for organic production. Because the PUR database only contains pesticide use information, my method cannot distinguish such fields from actual certified organic fields. On the other hand, growers could follow organic farming practices but choose not to certify their fields for various reasons. However, as mentioned above, the acreage in these categories cannot be very substantial, because the PUR-derived organic crop acreages agree with those from CAC-compiled sources, suggesting that my method is valid. One caveat of this method is the consistency of field information in the PUR database from year to year. As mentioned previously, the “SITE_LOCATION_ID” on pesticide permits, a number chosen by growers or assigned by the county, indicates a physical field location, but the ID number may change from year to year. When that ID does change, a new “field” appears in the PUR database for which we have no information on its historical pesticide applications. In this situation, we assume for annual crops that the land was fallow before a new “SITE_LOCATION_ID” was assigned.
This assumption could cause us to overestimate the total organic acreage somewhat, by including fields with a new “SITE_LOCATION_ID” which may have had prohibited substance applications in the past three years. Pasture and rangeland have unique pest management practices and enormous acreage, but they are not covered in this essay as they do not suit my primary purpose of evaluating the environmental impacts of pesticide use in organic crop fields.
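A minimal sketch of this look-back rule, assuming a hypothetical mapping from (field, year) to the set of AIs applied and a set of prohibited substances; the real implementation would read both from the PUR database and the constructed substance lists.

```python
# Sketch of the 36-month look-back used to flag organic fields.
# `apps` maps (field_id, year) -> set of AI names applied that year;
# `prohibited` is the constructed set of prohibited substances.
def is_organic(field_id: str, year: int, apps: dict, prohibited: set) -> bool:
    """Organic as of `year` if no prohibited AI was applied in the
    previous three years; a missing year is treated as fallow, per
    the assumption for new SITE_LOCATION_IDs."""
    for y in range(year - 3, year):
        if apps.get((field_id, y), set()) & prohibited:
            return False
    return True
```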

For pesticide applications on the identified organic fields in the PUR database, the PURE index was used to assess the potential environmental impacts along five dimensions: surface water, groundwater, soil, air, and pollinators. For dimensions other than air, index values are calculated based on predicted environmental concentrations and standard toxicity values for the relevant organisms. The algorithm used to calculate the predicted environmental concentrations incorporates site-specific environmental conditions, which is a major advantage over other indices for assessing the environmental impacts of pesticide use, such as the Environmental Impact Quotient. The predicted environmental concentrations have been shown to align with monitoring data in a previous study. The index value for air is calculated using the predicted volatile organic compound emissions of each pesticide product. Individual index values are normalized to range from 0 to 100. The PURE index values are calculated for each AI in each pesticide application on each field. These disaggregated index values are then summed at the field level to provide a general index value for each field. To evaluate the overall impact for each crop, an acreage-weighted average across all relevant fields can be computed. This aggregation can be taken one step further, across all AIs, to show the potential impact of all pesticide use, using the same aggregation process for organic and conventional fields.
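As a minimal sketch of this aggregation, assuming per-application PURE values are already computed and stored with hypothetical column names:

```python
# Sketch: sum PURE values per field, then take an acreage-weighted
# average per crop. Columns (CROP, FIELD_ID, FIELD_ACRES, PURE_VALUE)
# are hypothetical placeholders.
import pandas as pd

def crop_pure_index(df: pd.DataFrame) -> pd.Series:
    per_field = (df.groupby(["CROP", "FIELD_ID", "FIELD_ACRES"])["PURE_VALUE"]
                   .sum()
                   .reset_index())
    # Acreage-weighted average across all fields of each crop.
    weighted = per_field.groupby("CROP").apply(
        lambda g: (g["PURE_VALUE"] * g["FIELD_ACRES"]).sum()
                  / g["FIELD_ACRES"].sum())
    return weighted  # one acreage-weighted index value per crop
```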
State-specific total organic acreage from different data sources is compared for all seven crops/crop groups available in the USDA organic certification and survey data, plus strawberries from the CSC survey data. Among these eight crops/crop groups, four are annual crops and the others are perennial. Their organic acreage is plotted in Figure 2.2 and Figure 2.3 below. As mentioned previously, the data sources have different reporting requirements, and discrepancies can arise for a variety of reasons that apply to all crops. In the CDFA registration data, new organic growers report their expected acreage for the next year. If growers decide not to engage in that expected organic production, their registration records remain in the system, which could inflate the acreage data, especially for crops that went through a rapid growth of organic production. Growers with less than $5,000 in annual organic sales are required to register their production with CDFA but do not have to apply for organic certification, so their acreage might not be counted in the USDA data. Both the USDA and CDFA data rely on a set of well-defined organic standards and restrictive regulations, which did not exist before 2001. For perennial crops, new registrations with CDFA or new certifications with USDA must include documentation of orchards before they are fully established. Therefore, we observe new orchards and vineyards in the USDA and CDFA data before they enter the PUR database. If growers adopt an organic pest management program but do not market their products as organic, their acreage is only covered in the PUR database, not the others. Meanwhile, some discrepancies in acreage are caused by crop-specific reasons. For lettuce, USDA has consistently reported higher acreage values since 2004. In 2016, the lettuce acreage from the USDA organic survey was more than double the acreage from the CDFA registration data. One reason could be double or multiple cropping of lettuce in one calendar year. When growers harvested lettuce multiple times from the same field, USDA reported the sum of acres for each harvest, while CDFA asked for the size of the field. My method accounts for this phenomenon by counting the days between the first and last pesticide applications. Normally, both leaf and head lettuce require at most 130 days from planting to harvest in California. So if two pesticide applications occur more than 130 days apart on the same field, we assume that the lettuce was harvested twice and the acreage is doubled. After this adjustment, the PUR database still falls short of the acreage documented in the USDA dataset but has been between the other two sources since 2003. Before 2003, CDFA showed more acreage than the other data series because the crop category “lettuce, salad mix”, which contains arugula, red/green mustard, and other crops, used to be reported as lettuce. For strawberries, the CSC data always show somewhat less acreage because their data are derived from surveys and the survey response rate is not reported. For apples, the organic acreage is small compared with other perennial crops, which amplifies potential measurement errors.
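A minimal sketch of this double-cropping adjustment, assuming a list of application dates for a single lettuce field-year; the 130-day cutoff comes from the planting-to-harvest time cited above.

```python
# Sketch: if the first and last applications are more than one lettuce
# cycle (130 days) apart, count the field as harvested twice.
from datetime import date

def lettuce_harvests(app_dates: list, max_cycle_days: int = 130) -> int:
    if not app_dates:
        return 1
    span = (max(app_dates) - min(app_dates)).days
    return 2 if span > max_cycle_days else 1

# Example: acreage contribution = lettuce_harvests(dates) * field_acres
dates = [date(2016, 3, 1), date(2016, 4, 2), date(2016, 8, 20)]
assert lettuce_harvests(dates) == 2  # 172 days apart -> two plantings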

The signal and noise event classification score distributions from the networks are distinct

The paper concludes with a short summary and plans for the future.

The ARIANNA experiment is an array of autonomous radio stations located in Antarctica. Stations have operated at sea level on the Ross Ice Shelf in Moore's Bay, about 110 km from McMurdo Station, the largest research base on the continent. In addition, two stations have operated at the South Pole, which is colder and higher in elevation than the environment at Moore's Bay. Several architectures were implemented in the prototype array at Moore's Bay. Most stations consisted of four downward-facing log-periodic dipole antennas (LPDAs) to look specifically for neutrino events, as shown in figure 1. Two other stations at Moore's Bay and two at the South Pole were configured with eight antennas, which included a mixture of LPDAs and dipoles. These stations were simultaneously sensitive to cosmic rays that interact in the atmosphere and to neutrinos. The radio signals are digitized and captured using a custom-made chip design known as the SST. The analog trigger system of ARIANNA imposes requirements on individual waveforms; a high and a low threshold must be crossed within 5 ns, and multiple antenna channels must meet the high-low threshold within a 30 ns coincidence window.

These criteria are based on the expectation that thermal noise fluctuations are approximately independent, whereas neutrino signals produce correlated high-low fluctuations in a given antenna and comparable signals in multiple antenna channels. These requirements reduce the rate of thermal noise triggers for a given trigger threshold while maintaining the sensitivity to Askaryan pulses from high-energy neutrinos. Once a station has triggered, the digitized waveforms of every antenna channel contain 256 samples with a voltage accuracy of 12 bits. The event size in an eight-channel station is 132 kbits. The waveform data from all channels are piped into a Xilinx Spartan 4 FPGA, and then further processed and stored to an internal 32 GB memory card by an MBED LPC 1768 microcontroller. There are up to eight channels on each board, each processing the radio signal from one antenna. Once a triggered event is saved to local storage, it is transferred to UC Irvine through a long-range WiFi link during a specified communication window. The ARIANNA stations also use the Iridium satellite network as a backup system. Satellite communication is relatively slow, with a typical transfer rate of one event every 2–3 minutes. With either communication method, the current hardware can either communicate or collect data, but not both at once. Therefore, neutrino search operations are disabled during data communication.
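A minimal sketch of the trigger requirements described above, under stated assumptions: digitized traces arrive as a (n_channels, n_samples) array sampled at rate fs, with thresholds in the same units as the trace; all names are illustrative.

```python
# Sketch of the high/low trigger: each channel fires if a high and a
# low threshold crossing occur within 5 ns; the station triggers if
# at least `n_required` channels fire within a 30 ns window.
import numpy as np

def station_trigger(traces: np.ndarray, fs: float, high: float, low: float,
                    hl_window: float = 5e-9, coinc: float = 30e-9,
                    n_required: int = 2) -> bool:
    hl_n = max(1, int(round(hl_window * fs)))
    fire_times = []
    for ch in traces:
        hi_idx = np.flatnonzero(ch > high)
        lo_idx = np.flatnonzero(ch < low)
        if hi_idx.size and lo_idx.size:
            # does any high crossing sit within 5 ns of a low crossing?
            near = np.abs(hi_idx[:, None] - lo_idx[None, :]).min(axis=1) <= hl_n
            if near.any():
                fire_times.append(hi_idx[near].min() / fs)
    if len(fire_times) < n_required:
        return False
    t = np.sort(np.array(fire_times))
    # sliding check: n_required firing channels within the 30 ns window
    return bool(np.any(t[n_required - 1:] - t[:t.size - n_required + 1] <= coinc))
```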

As radio neutrino technologies move beyond the prototype stage, the relatively expensive and power-hungry AFAR system will be eliminated. Perhaps it will be replaced by a better wireless system, such as LTE, for sites relatively close to scientific research bases, but for more remote locations only satellite communications such as Iridium are feasible. Given the current limitation of 0.3 events/min imposed by Iridium communication, and the fact that neutrino operations cease during data transfer, which generates unwanted deadtime, stations that rely solely on Iridium communication are expected to operate at trigger rates of at most ∼0.3 mHz to keep losses due to data transfer below 3%. The trigger thresholds of ARIANNA are adjusted to a certain multiple of the signal-to-noise ratio (SNR), defined here as the ratio of the maximum absolute value of the amplitude of the waveform to the RMS noise. Currently, the pilot stations are set to trigger above an SNR of 4.4 to reach the constrained trigger rate of order 1 mHz. In the next section, the expected gain in sensitivity is studied for a lower threshold of 3.6 SNR, which corresponds to 100 Hz, the maximum operation rate of the stations. The real-time rejection of thermal noise presented in this article would enable the trigger threshold to be lowered significantly, thus increasing the detection rate of UHE neutrinos, while keeping a low event rate of a few mHz.
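For reference, the SNR definition used above can be written compactly, with V(t) the recorded waveform and V_RMS the RMS noise amplitude:

```latex
\mathrm{SNR} = \frac{\max_t \lvert V(t) \rvert}{V_{\mathrm{RMS}}}
```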

To estimate the increase in sensitivity, the effective volume of an ARIANNA station is simulated for the two trigger thresholds corresponding to a thermal noise trigger rate of 10 mHz, and to a four orders-of-magnitude higher trigger rate. We use the previously published relationship between trigger threshold and trigger rate to calculate the thresholds. NuRadioMC is used to simulate the sensitivity of the ARIANNA detector at Moore's Bay. The expected radio signals are simulated in the ARIANNA detector on the Ross Ice Shelf, i.e., an ice shelf with a thickness of 576 m and an average attenuation length of approx. 500 m, where the ice-water interface at the bottom of the shelf reflects radio signals back up with high efficiency. The generated neutrino interactions are distributed uniformly in the ice around the detector with random incoming directions. The simulation is performed for discrete neutrino energies and includes a simulation of the full detector response and the trigger algorithm as described above. The resulting gain in sensitivity is shown in figure 2 and reaches almost a factor of two at energies of 10^17 eV. The improvement decreases towards higher energies because fewer of the recorded events are close to the trigger threshold, but at 10^18 eV there is still an increase in sensitivity of 40%.

To implement a deep learning filter, the general network structure needs to be optimized for fast and accurate classification. For accuracy, the two metrics are the neutrino signal efficiency and the noise rejection factor, defined as the ratio of correctly identified noise events to the total number of noise events. The goal is to reject several orders of magnitude of thermal noise fluctuations while retaining most of the neutrino signals. In the following, the target is five orders of magnitude of thermal noise rejection while providing a high signal efficiency at or above 95%. Typically, a more complex network structure yields more accurate results, but it is also slower to evaluate. These two constraints need to be balanced as the deep learning architecture is developed. In the following two sections, deep learning techniques are used to train models, and then their efficiency and processing time are studied. In section 5.1, the commonly used method of template matching is investigated for comparison with the deep learning approach.

NuRadioMC is used to simulate a representative set of the expected neutrino events for the ARIANNA detector, following the same setup as described in section 2.2 but for randomly distributed neutrino energies that follow the energy spectrum expected for an astrophysical and cosmogenic neutrino flux; the astrophysical flux measurement by IceCube with a spectral index of 2.19 is combined with a model for a GZK neutrino flux based on Auger data for a 10% proton fraction. The resulting radio signals are simulated in the four LPDA antennas of the ARIANNA station by convolving the electric-field pulses with the antenna response, and the rest of the signal chain is approximated with an 80 MHz to 800 MHz band-pass filter. An event is recorded if the signal pulse crosses a high and a low threshold of 3.6 times the RMS noise within 5 ns in at least two LPDAs within 30 ns. At such a low trigger threshold, noise fluctuations can fulfill the trigger condition at a non-negligible rate. Therefore, the signal amplitude is required to be at least 2.8 times the RMS noise before adding noise, to avoid spurious triggers on thermal noise fluctuations.
In total, 121,597 events that trigger the detector are generated; this is called the signal data set in the following. The training data set for thermal noise fluctuations is obtained by simulating thermal noise in the four LPDA antennas and saving only those events where a thermal noise fluctuation fulfills the trigger condition described above. In total, 1.1 million events are generated; this is called the noise data set in the following. The limitations of the simulations and their impact on the obtained results are discussed at the end of this article.

All of the networks are created with Keras, a high-level interface to the machine-learning library TensorFlow. Our primary motivation is to develop a thermal noise rejection method that operates on the existing ARIANNA hardware with an evaluation rate of at least 50 Hz, a factor of 10^4 larger than our current trigger rate. To increase the execution rate of the neural network, the hardware is one option to optimize; however, any alteration to the hardware is constrained by two main factors: the power consumption of the component and its reliability in the cold climate.

Thus, this study focuses primarily on optimizing the execution rate by identifying the smallest network that reaches our objective. While the number of trainable parameters can give an indication of network size, the number of floating point operations (FLOPs) is the chosen metric for network size in this paper. The number of FLOPs can be approximated by multiplying the number of floating-point operations performed per loop iteration by the number of nested-loop iterations required to classify the incoming data. Besides making the network smaller, another way to improve the network speed is to reduce the input data size. Instead of feeding the signal traces from all four antennas into the network, one way to cut down the input is to use only the two antennas that caused the trigger. As each signal trace consists of 256 samples, the total input size to the network is then 512 samples. In addition, a further reduced input data set is studied for various sizes by selecting the antenna with the highest signal amplitude and using only a window of values around the maximum absolute value. The window size was not fully optimized, but a good balance between input data size and efficiency is 100 samples around the maximum value. The reasoning is that the dominant neutrino signal does not span the whole record length and typically spans fewer than 50 samples. The two network architectures studied in the following are a fully connected neural network (FCNN) and a convolutional neural network (CNN), depicted in figure 3. The FCNN used in this baseline test is a fully connected single-hidden-layer network with a node size of 64 for the 100 input samples and 128 for the 512 input samples, a ReLU activation, and a sigmoid activation in the output layer. The CNN structure consists of 5 filters with 10×1 kernels each, a ReLU activation, a dropout of 0.5, a max pooling with size 10×1, a flattening step to reshape the data, and a sigmoid activation in the output layer. Both the CNN and FCNN are trained using the Adam optimizer with learning rates varying from 0.0005 to 0.001, depending on which value works best for each individual model. The training data set contains a total of 100,000 signal events and 600,000 noise events, of which 80% are used for training and 20% to validate the model during training. Once the network is trained, the test data are used, which contain 21,597 signal events and 500,000 noise events. With the sigmoid activation in the output layer, the classification output falls between 0 and 1, where values close to 0 indicate noise-like data and values close to 1 indicate signal-like data. Once trained, the 100-input-sample CNN mentioned above yields the distribution shown in figure 4. From this distribution, the trade-off between signal efficiency and noise rejection can be varied by choosing different network output cut values. Training and testing these networks with each input data size yields the signal efficiency vs. noise rejection plot in figure 5. Each data point corresponds to a different network output cut value, and the final cut value is chosen by optimizing the noise rejection for the desired signal acceptance. All of these input data sizes produce efficiencies above the required threshold of 95% for signal, and all were able to reach at least five orders of magnitude of noise rejection.
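A minimal Keras sketch of the CNN variant described above, for the 100-sample input; the hyperparameters follow the text, but the exact layer arrangement is our reading of it rather than the authors' published code.

```python
# Sketch of the small CNN: 5 filters with 10-sample kernels, ReLU,
# dropout 0.5, 10-sample max pooling, flatten, sigmoid output.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_samples: int = 100) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(input_samples, 1)),
        layers.Conv1D(filters=5, kernel_size=10, activation="relu"),
        layers.Dropout(0.5),
        layers.MaxPooling1D(pool_size=10),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),  # ~0 = noise-like, ~1 = signal-like
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```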
Since all of the networks reach efficiencies above our target of 95% for signal at 10^5 noise rejection, the main consideration is the number of FLOPs required by each network, because this directly impacts the processing time. Typically, CNNs have fewer parameters overall due to their convolutional nature, which focuses on smaller features within a waveform; by comparison, the FCNN considers the whole waveform to make its prediction, so it requires more node connections.

The small RNAs mapping in this region were mainly 21-nt and produced from both strands

Epigenetics has been proposed as crucial in shaping plant phenotypic plasticity, putatively explaining the rapid and reversible alterations in gene expression in response to environmental changes. This fine-tuning of gene expression can be achieved through DNA methylation, histone modifications, and chromatin remodeling. Small non-coding RNAs are ubiquitous and adjustable repressors of gene expression across a broad group of eukaryotic species; they are directly involved in controlling, in a sequence-specific manner, multiple epigenetic phenomena such as RNA-directed DNA methylation and chromatin remodeling, and might play a role in genotype-by-environment interactions. In plants, small ncRNAs are typically 20–24 nt long RNA molecules and participate in a wide series of biological processes, controlling gene expression via transcriptional and post-transcriptional regulation. Moreover, small RNAs have recently been shown to play an important role in the environmental plasticity of plants. Fruit maturation, the process that starts with fruit set and ends with fruit ripening, has been investigated extensively in fleshy fruits such as tomato and grapevine. These studies highlighted, among others, the vast transcriptomic reprogramming underlying the berry ripening process, the extensive plasticity of berry maturation in the context of a changing environment, and the epigenetic regulatory network that contributes to adjusting gene expression to internal and external stimuli.

In particular, small RNAs, and especially microRNAs (miRNAs), are involved, among others, in the biological processes governing fruit ripening. In this work, we assessed the role of small ncRNAs in the plasticity of grapevine berry development by employing next-generation sequencing. We focused on two cultivars of Vitis vinifera, Cabernet Sauvignon and Sangiovese, collecting berries at four different developmental stages in three differently located Italian vineyards. First, we described the general landscape of small RNAs originating from hotspots along the genome, examining their accumulation according to cultivar, environment, and developmental stage. Subsequently, we analyzed miRNAs, identifying known and novel miRNA candidates and their distribution profiles in the various samples. Based on the in silico prediction of their targets, we suggest the potential involvement of this class of small RNAs in GxE interactions. The results obtained provide insights into the complex molecular machinery that connects the genotype and the environment.

RNA extraction was performed as described in Kullan et al. Briefly, total RNA was extracted from 200 mg of ground berry pericarp tissue using 1 ml of Plant RNA Isolation Reagent following the manufacturer's recommendations. The small RNA fraction was isolated from the total RNA using the mirPremier microRNA Isolation Kit and dissolved in DEPC water. All the steps suggested in the technical bulletin for small RNA isolation from plant tissues were followed except the "Filter Lysate" step, which was omitted.

The quality and quantity of small RNAs were evaluated by a NanoDrop 1000 spectrophotometer, and their integrity was assessed by an Agilent 2100 Bioanalyzer using a small RNA chip according to the manufacturer's instructions. Small RNA libraries were prepared using the TruSeq Small RNA Sample Preparation Kit, following all of the manufacturer's instructions. Forty-eight bar-coded small RNA libraries were constructed starting from 50 ng of small RNAs. The quality of each library was assessed using an Agilent DNA 1000 chip on the Agilent 2100 Bioanalyzer. Libraries were grouped in pools of six libraries each. The pools of libraries were sequenced on an Illumina HiSeq 2000 at IGA Technology Services. The sequencing data were submitted to GEO–NCBI under the accession number GSE85611.

In order to investigate whether the overall distribution and accumulation of small RNAs is affected by the interaction between different V. vinifera genotypes [Cabernet Sauvignon (CS) and Sangiovese (SG)] and environments [Bolgheri, Montalcino, and Riccione], we investigated the regions of the grapevine genome from which a high number of small RNAs were produced, applying a proximity-based pipeline to group and quantify clusters of small RNAs as described by Lee et al. The nuclear grapevine genome was divided into 972,413 adjacent, non-overlapping, fixed-size windows or clusters. To determine the small RNA cluster abundance, we summed the hits-normalized-abundance values of all the small RNAs mapping to each of the 500 bp clusters, for each library.
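A minimal sketch of this windowing step, assuming the mappings for one library are available as (chromosome, position, HNA) tuples; all names are illustrative.

```python
# Sketch: cut the genome into fixed, non-overlapping 500 bp windows and
# sum the hits-normalized abundance (HNA) of the small RNAs per window.
from collections import defaultdict

def cluster_abundance(mappings, window: int = 500) -> dict:
    """Return {(chrom, window_index): summed HNA} for one library."""
    clusters = defaultdict(float)
    for chrom, pos, hna in mappings:
        clusters[(chrom, pos // window)] += hna
    return clusters
```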

To reduce the number of false positives, we considered a cluster as expressed when the cluster abundance was greater than the threshold for a given library, eliminating regions where few small RNAs were generated, possibly by chance. Libraries from bunch closure, representing green berries, and from 19 °Brix, representing ripened berries, were used in this analysis. Of the 972,413 clusters covering the whole grapevine genome, 4408 were identified as expressed in at least one sample. As shown in Figure 1, CS-derived libraries have a higher number of expressed clusters than SG-derived libraries of the same developmental stage and from the same vineyard. The exceptions were the Sangiovese green berries collected in Riccione and the Sangiovese ripened berries collected in Montalcino, which have a higher number of expressed clusters than the respective CS ones. The two cultivars show a completely different small RNA profile across environments. When Cabernet berries were green, a higher number of sRNA-generating regions were active in Bolgheri than in Montalcino and Riccione. By contrast, ripened berries had the highest number of sRNA-producing regions expressed in Riccione, while Bolgheri and Montalcino show a similar level of expressed clusters. Sangiovese green berries instead show the highest number of active sRNA-generating regions in Riccione, twice the number found in Bolgheri and Montalcino, which are similar to each other. Ripened berries collected in Montalcino and Riccione show almost the same high level of sRNA-generating clusters, whereas those collected in Bolgheri present a lower number. We also noted that when cultivated in Bolgheri, neither Cabernet Sauvignon nor Sangiovese changes dramatically the number of expressed clusters during ripening, while in Riccione Cabernet Sauvignon shows a 2-fold increase of sRNA-producing clusters, which is not observed in Sangiovese. Next, the small RNA-generating clusters were characterized on the basis of the genomic regions where they map, i.e., genic regions, intergenic regions, and transposable elements. In general, when the berries were green, the numbers of sRNA-generating loci located in genic and intergenic regions were roughly equal in all environments and for both cultivars, except for Sangiovese berries collected in Riccione, which show a slight intergenic bias of sRNA-producing regions. By contrast, in ripened berries on average 65% of the sRNA-generating loci were in genic regions, indicating a strong genic bias of the sRNA-producing clusters. The shift of sRNA-producing clusters from intergenic to mostly genic is more pronounced in Cabernet Sauvignon berries collected in Riccione, with an increase of approximately 20% of expressed clusters in genic regions when berries pass from the green to the ripened stage. When comparing cluster abundance among libraries, we found that 462 clusters were expressed in all libraries. The remaining 3946 expressed clusters were either shared among groups of libraries or specific to unique libraries. Interestingly, 1335 of the 4408 expressed clusters were specific to Riccione-derived libraries. The other two environments showed a much lower number of specific clusters: 263 in Bolgheri and 140 in Montalcino.
Comparing the expressed clusters between cultivars or developmental stages, we did not observe a similar skew of specific clusters toward one cultivar or developmental stage; roughly the same proportion of specific clusters was found for each cultivar and for each developmental stage.

Among the 1335 clusters specific to Riccione, 605 were specific to ripened Cabernet Sauvignon berries and 499 to green Sangiovese berries. Other, smaller groups of expressed clusters were identified as specific to one cultivar, one developmental stage, or one cultivar in a specific developmental stage. When comparing the expressed clusters with the transposable elements annotated in the grapevine genome, we noticed that approximately 23% of the sRNA-generating regions were TE-associated. Sangiovese green berries from Riccione have the highest proportion of TE-associated expressed clusters, while Cabernet Sauvignon ripened berries, also from Riccione, show the lowest proportion. Sangiovese berries have the highest percentage of expressed clusters located in TEs when cultivated in Riccione, compared to the other two vineyards. Interestingly, Cabernet Sauvignon berries show the lowest proportion of TE-associated clusters when growing in Riccione, independently of the berry stage. In all the libraries, long terminal repeat (LTR) retrotransposons were the most represented TEs. More specifically, the gypsy family was the LTR class associated with the highest number of sRNA hotspots. The other classes of TEs associated with the sRNA-generating regions can be visualized in Figure 3B.

To determine the global relationship of small RNA-producing loci in the different environments, cultivars, and developmental stages, we performed a hierarchical clustering analysis. As shown in Figure 4, the libraries clustered clearly according to developmental stage and cultivar, not according to environment. Ripened and green berries had clearly distinct profiles of sRNA-generating loci. Inside each branch of green and ripened samples, Cabernet Sauvignon and Sangiovese were also well separated, indicating that the cultivar and the stage of development at which the berries were sampled modulate the profile of sRNA-producing loci more than the environment does. Notwithstanding the evidence that developmental stage and variety have the strongest effect in separating the samples, we were interested in verifying the environmental influence on small RNA locus expression in the two cultivars. Thus, for each sRNA-generating cluster we calculated the ratio between cluster abundance in Cabernet Sauvignon and Sangiovese in each environment and developmental stage, thereby revealing the genomic regions with regulated clusters, considering a 2-fold change threshold, a minimum abundance of 5 HNA in each library, and a minimum summed abundance of 30 HNA. Figure 5 shows how different environments affect the production of small RNAs. In Bolgheri, regardless of the developmental stage, there were many clusters with a very high abundance level in Cabernet Sauvignon. In Montalcino, and even more in Riccione, we also observed differences between the expression of clusters in the two cultivars, with ripened and green berries showing an almost opposite profile in terms of the number of clusters more expressed in Cabernet Sauvignon or Sangiovese. When the berries were green, Cabernet Sauvignon showed the highest number of up-regulated clusters in Montalcino, while in Riccione, Sangiovese had the highest number of up-regulated clusters. The opposite behavior was noticed in ripened berries, with Sangiovese having the highest number of up-regulated clusters in Montalcino and Cabernet Sauvignon in Riccione.
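A minimal sketch of the regulated-cluster criterion just described (2-fold change, at least 5 HNA in each library, summed abundance of at least 30 HNA); the function name and inputs are illustrative.

```python
# Sketch: flag a cluster as regulated between the two cultivars.
def is_regulated(cs_hna: float, sg_hna: float) -> bool:
    if min(cs_hna, sg_hna) < 5:       # minimum abundance in each library
        return False
    if cs_hna + sg_hna < 30:          # minimum summed abundance
        return False
    fold = max(cs_hna, sg_hna) / min(cs_hna, sg_hna)
    return fold >= 2                  # 2-fold change threshold
```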
Notably, we observed a small percentage of regulated clusters exhibiting at least a 10-fold higher abundance of small RNAs in Cabernet Sauvignon or Sangiovese when compared to each other. An examination of those clusters showed that a substantial difference could exist between the cultivars, depending on the vineyard and the developmental stage. For example, in Riccione, a cluster matching a locus encoding a BURP domain-containing protein showed a fold change of 390 when comparing green berries of Sangiovese with Cabernet Sauvignon. The majority of the highly differentially expressed clusters showed a similar profile: a strong bias toward 21-nt sRNAs and a low strand bias. These findings suggest that these small RNAs might be the product of RNA-dependent RNA polymerase (RDR) activity rather than degradation products of mRNAs.

We applied a pipeline adapted from Jeong et al. and Zhai et al. to identify annotated vvi-miRNAs, their variants, novel species-specific candidates and, when possible, the complementary 3p or 5p sequences. Starting from 25,437,525 distinct sequences from all 48 libraries, the first filter of the pipeline removed sequences matching t/r/sn/snoRNAs, as well as those that did not meet the threshold of 30 TP4M in at least one library or, conversely, that mapped to more than 20 loci of the grapevine genome. Only sequences 18–26 nt in length were retained. Overall, 27,332 sequences, including 56 known vvi-miRNAs, passed this first filter and were subsequently analyzed by a modified version of miREAP as described by Jeong et al. miREAP identified 1819 miRNA precursors producing 1108 unique miRNA candidates, including 47 known vvi-miRNAs. Next, the sequences were submitted to the third filter to evaluate the single-strand and abundance bias, retaining only the one or two most abundant miRNA sequences for each precursor previously identified. A total of 150 unique miRNAs corresponding to 209 precursors were identified as candidate miRNAs.
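A minimal sketch of the first filter of this pipeline, with illustrative inputs (per-sequence maximum TP4M across libraries, number of genomic loci, and a structural-RNA flag):

```python
# Sketch: first filter of the miRNA identification pipeline.
def passes_first_filter(seq: str, max_tp4m: float, n_loci: int,
                        matches_structural_rna: bool) -> bool:
    """Keep 18-26 nt sequences that are not t/r/sn/snoRNAs, reach
    30 TP4M in at least one library, and map to at most 20 loci."""
    return (not matches_structural_rna
            and max_tp4m >= 30
            and 1 <= n_loci <= 20
            and 18 <= len(seq) <= 26)
```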

Wines made from Cabernet Sauvignon are dark red with flavors of dark fruit and berries

These are ACC oxidase, which is involved in ethylene biosynthesis; a lipoxygenase, part of a fatty acid degradation pathway giving rise to flavor alcohols such as hexenol; α-expansin 1, a cell wall loosening enzyme involved in fruit softening; and two terpene synthases, which produce important terpenes that contribute to Cabernet Sauvignon flavor and aroma. The high similarity of these transcript profiles indicates that ethylene biosynthesis and signaling may be involved in the production of grape aroma. Supporting this argument, two recent studies have shown that a tomato ERF TF falling in the same ERF IX subfamily has a strong effect on ethylene signaling and fruit ripening. The transcript abundance of AtERF6 in Arabidopsis is strongly increased by ethylene, which is triggered by the MKK9/MPK3/MPK6 pathway. The transcript abundance of VviMKK9 in the Cabernet Sauvignon berries was higher in the skin than in the pulp, but there were no significant differences for VviMPK3 or VviMPK6. This is not too surprising since AtMKK9 activates AtMPK3 and AtMPK6 by phosphorylation. In addition, the transcript abundance of AtERF6 in Arabidopsis increases with ROS, SA, cold, pathogens, and water deficit.

There were no visible signs of pathogen infection in these berries. Additional circumstantial evidence for ethylene signaling in the late stages of berry ripening was that the transcript abundance of many VviERF TFs was significantly affected by berry ripening and/or tissue. The transcript abundance of 129 members from the berries was determined to be above background noise levels on the microarray. The expression profiles of the 92 significantly affected AP2/ERF superfamily members were separated into six distinct clusters by hierarchical clustering, indicating that this superfamily had a complex response during berry ripening. The 12 members of Cluster 1 responded similarly in both the skin and pulp, gradually decreasing with increasing °Brix, with a large decrease in transcript abundance at the 36.7 °Brix level. Cluster 2, with 14 members including 8 members of the VviERF6 clade, had much higher transcript abundance in the skin, with a sharp peak at 23.2 °Brix. Cluster 3 had similar profiles in both the skin and pulp, with a peak abundance at 25 °Brix. Cluster 4, with 7 members, was a near mirror image of Cluster 2, with a sharp valley of transcript abundance in the skin between 23 and 25 °Brix.

Cluster 5 had 36 members with a steady increase in transcript abundance in the pulp but no substantial increase in the skin until 36.7 °Brix. Finally, Cluster 6 contained 13 members with a higher transcript abundance in skins compared to pulp; their transcript abundance increased with increasing °Brix level but decreased in the skin. The transcript abundance of important components of the ethylene signaling pathway characterized in Arabidopsis, and presumed to be functional in grape, was also affected by °Brix level and tissue. Three different ethylene receptors, VviETR1, VviETR2, and VviEIN4, decreased with °Brix level in the skin; however, there was very little or no change in the pulp. Likewise, VviCTR1, another negative regulator of ethylene signaling that interacts with the ethylene receptors, decreased between 22.6 and 23.2 °Brix in both the skin and the pulp. The transcript abundance of the positive regulator, VviEIN2, peaked at 25 °Brix in both the skin and the pulp. AtEIN2 is negatively regulated by AtCTR1 and, when released from repression, turns on AtEIN3 and the downstream ethylene signaling pathway.

The transcript abundance of VviEIN3 increased with °Brix level, peaking at 25 °Brix in the skin, and was much higher than in the pulp. Although more subtle, its profile was very similar to that of VviERF6L1. Derepression of the negative regulators and the increase in positive regulators indicated that ethylene signaling was stimulated during this late stage of berry ripening.

The transcript abundance of many of the genes involved in the isoprenoid biosynthesis pathway peaked between the 23 and 25 °Brix levels, particularly in the skin; this stimulation of transcript abundance continued in both the carotenoid and terpenoid biosynthesis pathways. DXP synthase is a key regulatory step in isoprenoid biosynthesis, and its profile was similar to that of VviERF6L1; its transcript abundance was correlated with the transcript abundance of several terpene synthases in the terpenoid biosynthesis pathway. About 50% of the 69 putative functional terpene synthases in the Pinot Noir reference genome have been functionally characterized. Another 20 genes may be functional but need further functional validation or checking for sequencing and assembly errors. On the NimbleGen Grape Whole-Genome array there are 110 probe sets representing transcripts of functional, partial, and pseudo terpene synthases in Pinot Noir. It is uncertain how many are functional in Cabernet Sauvignon.

There were 34 probe sets that changed significantly with °Brix or with the °Brix-by-tissue interaction effect; 20 of these are considered functional genes in Pinot Noir. Terpene synthases are separated into four subfamilies in the Pinot Noir reference genome; they use a variety of substrates and produce a variety of terpenes, and many of these enzymes produce more than one terpene. The top 8 transcripts that peaked in the skin at the 23.2 to 25 °Brix stages were also much higher in the skin relative to the pulp. Five of the eight probe sets match four functionally classified genes in Pinot Noir; these terpene synthases clustered very closely with VviTPS54, a functionally annotated linalool/nerolidol synthase. VviTPS58, a geranyl linalool synthase, was also in the cluster. The other two probe sets match partial terpene synthase sequences in the Pinot Noir reference genome. The transcript abundance of genes involved in carotenoid metabolism also changed at different °Brix levels and with tissue type. CCDs are carotenoid cleavage dioxygenases and are involved in norisoprenoid biosynthesis. The transcript abundance of VviCCD1 changed significantly with °Brix level and was higher in skin than pulp, except at 36.7 °Brix. Likewise, the transcript abundance of VviCCD4a and VviCCD4b changed significantly with °Brix level, but was higher in the pulp than the skin. The transcript abundance of VviCCD4c significantly increased with °Brix level, but there were no significant differences between tissues. VviCCD1 and VviCCD4 produce β- and α-ionone, geranylacetone, and 6-methyl-5-hepten-2-one in grapes. There were no significant effects on the transcript abundance of VviCCD7. The transcript abundance of VviCCD8 significantly increased with °Brix level and was higher in pulp than skin. Phytoene synthase, which was also increased in the skin compared to the pulp, and VviCCD1 have been associated with β-ionone and β-damascenone biosynthesis. Other important grape flavors are derived from the fatty acid metabolism pathway, leading to the production of aromatic alcohols and esters. The transcript abundance of many genes associated with fatty acid biosynthesis and catabolism changed with °Brix level. In particular, the transcript abundance of a number of genes was correlated with that of VviERF6L1, including VviACCase (acetyl-CoA carboxylase), KAS III, VviOAT, VviFAD8, VviLOX2, and VviHPL. The transcript abundance of alcohol dehydrogenases was affected by tissue and °Brix level. Some ADHs are associated with the production of hexenol and benzyl alcohol. Methoxypyrazines give herbaceous/bell pepper aromas. They are synthesized early in berry development and gradually diminish to very low levels at maturity; nevertheless, humans can detect very low concentrations of these aroma compounds. Four enzymes, VviOMT1, VviOMT2, VviOMT3, and VviOMT4, synthesize methoxypyrazines. The transcript abundance of VviOMT1 was higher in the pulp than the skin. In addition, the transcript abundance of VviOMT1 decreased significantly with °Brix level in the pulp. There were no significant differences in the transcript abundance in the skin or pulp for VviOMT2, VviOMT3, or VviOMT4. There was a high correlation between the transcript abundance of VviOMT1 in the pulp and 2-isobutyl-3-methoxypyrazine (IBMP) concentrations in the berries. The transcript abundance of VviOMT2, VviOMT3, or VviOMT4 in either skin or pulp was not correlated with IBMP concentrations.
This is consistent with the suggestion that the pulp is the main contributor of IBMP in the berry. Our data indicated that VviOMT1 in the pulp may contribute to the IBMP concentration in these berries.

Orthologs of the RIN and SPL tomato transcription factors, which are known to be very important fruit ripening transcription factors, were at much higher transcript levels in the skin and declined with °Brix level.

The transcript abundance of the VviNOR ortholog in grape was higher in the pulp and increased slightly to peak at 25 °Brix. In addition, the transcript abundance of VviRAP2.3, an inhibitor of ripening in tomato, decreased in the skin with a valley at 23.2 °Brix; it belongs to Cluster 4 of the AP2/ERF superfamily. Of particular interest was VviWRKY53 [UniProt: F6I6B1], which had a transcript profile very similar to that of VviERF6L1. AtWRKY53 is a TF that promotes leaf senescence and is induced by hydrogen peroxide. This is the first report we know of implicating WRKY53 in fruit ripening. AtERF4 induces AtWRKY53 and leaf senescence, so the interactions between WRKY and ERF TFs are complex. WRKY TFs bind to the W-box elements in promoters, and VviERF6L1 has a number of W-box elements in its promoter. In addition, AtMEKK1 regulates AtWRKY53, and the transcript abundance of VviMEKK1 also peaked at 23.2 °Brix in the skin. Interestingly, the transcript abundance of both VviERF4 and VviERF8, whose orthologs in Arabidopsis promote leaf senescence, was at its highest level at the lowest °Brix levels examined in this study.

This study focused on the very late stages of the mature Cabernet Sauvignon berry, when fruit flavors are known to develop. Cabernet Sauvignon is an important red wine cultivar, originating from the Bordeaux region of France, and is now grown in many countries. Cabernet Sauvignon wines can also contain herbaceous characters, such as green bell pepper flavor, that are particularly prevalent in underripe grapes. Grape flavor is complex, consisting not only of many different fruit descriptors, but of descriptors that are frequently made up of a complex mixture of aromatic compounds. For example, black currant flavor can be attributed in part to 1,8-cineole, 3-methyl-1-butanol, ethyl hexanoate, 2-methoxy-3-isopropylpyrazine, linalool, 4-terpineol, and β-damascenone, and major components of raspberry flavor can be attributed to α- and β-ionone, α- and β-phellandrene, linalool, β-damascenone, geraniol, nerol, and raspberry ketone. Some common volatile compounds found in the aroma profiles of these dark fruits and berries include benzaldehyde, 1-hexanol, 2-heptanol, hexyl acetate, β-ionone, β-damascenone, linalool, and α-pinene. In a study of Cabernet Sauvignon grapes and wines in Australia, Cabernet Sauvignon berry aromas were associated with trans-geraniol and 2-pentylfuran, and Cabernet Sauvignon flavor was associated with 3-hexenol, 2-heptanol, heptadienol, and octanal. In another comprehensive study of 350 volatiles of Cabernet Sauvignon wines from all over Australia, the factors influencing sensory attributes were found to be complex; in part, norisoprenoids and δ- and γ-lactones were associated with sweet and fruity characteristics, and red berry and dried fruit aromas were correlated with ethyl and acetate esters. In Cabernet Sauvignon wines from the USA, sensory attributes were also complex and were significantly affected by the alcohol level of the wine. Linalool and hexyl acetate were positively associated with berry aroma, and IBMP was positively correlated with green bell pepper aroma. In France, β-damascenone was found to contribute to Cabernet Sauvignon wine aroma. Thus, flavor development in berries and wines is very complex, being affected by a large number of factors including genetics, chemistry, time, and environment. In this paper we begin to examine the changes in transcript abundance that may contribute to flavor development.
We show that the transcript abundance of many genes involved in fatty acid, carotenoid, isoprenoid, and terpenoid metabolism was increased in the skin and peaked at the °Brix levels known to have the highest fruit flavors. Many of these genes, such as linalool synthases, carotenoid dioxygenases, and lipoxygenases, are involved in the production of dark fruit flavors. These genes serve as good candidate markers of berry development and flavor during ripening. A broader range of studies from different cultivars, locations, and environments is needed to determine a common set of genes involved in berry and flavor development. A similar study was conducted on the production of volatile aromas in Cabernet Sauvignon berries across many developmental stages, including a detailed analysis of the °Brix levels surveyed in this study. The authors found that the production of alcohol volatiles from the lipoxygenase pathway dominated in the later stages of berry ripening and suggested that the activity of alcohol dehydrogenases could also play an important role.

A black dashed line outlines the region we will be imaging using the nanoSQUID microscope

In certain cases, moiré heterostructures host superlattice minibands with narrow bandwidth, placing them in a strongly interacting regime where Coulomb repulsion may lead to one or more broken symmetries. In several such systems, the underlying bands have finite Chern numbers, setting the stage for the appearance of anomalous Hall effects when combined with time-reversal symmetry breaking. Notably, in twisted bilayer graphene, low-current magnetic switching has been observed, though no consensus exists on the underlying mechanism.

The δBI dips may be understood as a consequence of current-driven domain wall motion. As established above, applied current drives nucleation of minority magnetization domains. Once these domains are nucleated, increasing the current magnitude is expected to enlarge them through domain wall motion. Where domain walls are weakly pinned, a small increase in the current δI drives a correspondingly small motion δx of the domain wall, producing a change in the local magnetic field δBI characterized by a sharp negative peak at the domain wall position. We may then use this mechanism to map out the microscopic evolution of domains with current. Fig. 6.5h shows a spatial map of δBI, measured at three different values of ISD corresponding to distinct features in the transport data.

Evidently, the domain wall moves from its nucleation site on the device boundary towards the device bulk. Local measurements of δBI as a function of ISD show that this motion is itself characterized by threshold behavior, corresponding to the domain wall rapidly moving between stable pinning sites. A full correspondence of transport features and local domain dynamics is presented in the associated publication. The symmetry of the observed magnetic switching is suggestive of a spin or valley Hall effect driven mechanism. To investigate this hypothesis experimentally, we use local magnetic imaging to directly probe the current-driven accumulation of magnetic moments throughout the density- and displacement-field-tuned phase space. Figs. 6.6c-e show δBI maps measured at three different points, away from the regime where the ground state is ferromagnetic. These fillings correspond to the Hubbard band edges, where the Berry curvature is expected to be enhanced by the appearance of correlation-driven gaps. Notably, large spin Hall effects are observed near ν = 1 even far from band inversion, with possible implications for the nature of the strong insulating state observed there. We have shown here that the combination of an intrinsic spin Hall effect with intrinsic magnetism provides a mechanism for a current-actuated magnetic switch in a single two-dimensional electron system. The physical properties we invoke to explain this phenomenon are generic to all intrinsic Chern magnets.

We emphasize that in both twisted bilayer graphene and our current MoTe2/WSe2 heterostructure, magnetic switching arises in regimes for which doping, elevated temperature, or disorder ensure that electrical current flows in the sample bulk. Ultra-low current switching of magnetic order has also been observed in twisted bilayer graphene. In that system, where spin-orbit coupling is negligible, nearly identical mechanisms may arise mediated by orbital, rather than spin, Hall effects. The bulk nature of the spin Hall torque mechanism means that similar phenomena should manifest not only in the growing class of intrinsic Chern magnets, but in all metals combining strong Berry curvature and broken time-reversal symmetry, including crystalline graphite multilayers. Research into charge-to-spin current transduction has identified a set of specific issues restricting the efficiency of spin torque switching of magnetic order. Spin current is not necessarily conserved, and as a result a wide variety of spin current sinks exist within typical spin torque devices. Extensive evidence indicates that in many spin torque systems a significant fraction of the spin current is destroyed or reflected at the boundary between the spin-orbit material and the magnet. In addition, the transition metals used as magnetic bits in traditional spin-orbit torque devices are electrically quite conductive, and can thus shunt current around the spin-orbit material, preventing it from generating spin current. These issues are entirely circumvented here through the use of a material that combines a spin Hall effect with magnetism; as a result, this spin Hall torque device has better current-switching efficiency than any known spin torque device.

We started this discussion with a favorable comparison of the impact of disorder on the AB-MoTe2/WSe2 Chern magnet relative to graphene-based Chern magnets. I'm sure the reader was just as disappointed as we were to see the dramatic disorder landscape on display in Fig. 6.4E, which presents a map of the magnetization in the AB-MoTe2/WSe2 Chern magnet. This is not a refutation of our original claims; it remains true that the repeatability of the fabrication protocol of the AB-MoTe2/WSe2 Chern magnet is unambiguously much better than that of tBLG/hBN, or even tMBG. It is also easy to lose track of the scale of these images: the tBLG/hBN Chern magnet was only a few square microns, whereas this sample supports a Chern magnet that is almost a hundred square microns in area. The presence of these 'holes' in the magnetization of this Chern magnet is not a result of strong twist angle disorder.

We do not know the precise origin of these holes, but there are a few possibilities that we can discuss. Bubbles are some of the most common defects in stacks of two-dimensional crystals, and they can form between any two layers of a stack. As presented and described in Fig. 6.7A-C, AFM imaging reveals topographic defects precisely aligned with the regions of the Chern magnet in which magnetism has been destroyed. There are two clearly distinct distributions of defects, with thicknesses that differ by about an order of magnitude. It is possible that these correspond to bubbles between two distinct pairs of layers of the stack. Another possibility is that partial oxidation or deliquescence of the MoTe2 crystal has occurred. This crystal is indeed air- and moisture-sensitive, and degradation can happen even inside a glovebox, as illustrated for a CrI3 crystal in Fig. 6.7D-F. Whatever issue is generating this disorder, it will likely be necessary to resolve it in order to fabricate more sophisticated devices based on this Chern magnet.

We have so far discussed a variety of phenomena realized in gate-tunable exfoliated heterostructures. In all cases, these phenomena were accessible experimentally because of the presence of a moiré superlattice, which gave us access to electronic bands that could be completely filled or depleted at will using an electrostatic gate. We will next be discussing an atomic crystal without a moiré superlattice. This material does not have flat bands, and we will have no hope of completely filling or depleting any of the bands in the system. Instead, it has features in its band structure that lend themselves to interaction-driven phenomena, specifically flat-bottomed bands satisfying the Stoner criterion. The material we will be studying is an allotrope of three-layer graphene called ABC trilayer graphene. In addition to a variety of other interesting phases, this material supports both spin and orbital magnetism. We will discuss why this is the case, and we will study the ABC trilayer magnets using the nanoSQUID microscope.

As in three-dimensional crystals, many two-dimensional crystals have multiple allotropes that are stable under different conditions. Trilayer graphene is such a material. We label multilayer graphene allotropes using letters that refer to the relative positions of atoms in different layers, projected onto the two-dimensional plane. We have already encountered ABA trilayer graphene in the introduction; this material has atoms in the third layer aligned to atoms in the first layer.
At room temperature and pressure the ABA stacking order is preferred, but trilayer graphene has a metastable allotrope, ABC trilayer graphene, that can either be prepared deliberately or found occurring naturally. In ABC trilayer graphene, atoms in the third layer are aligned with neither the first nor the second layer.
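As an illustration of this labelling convention, the short sketch below (schematic, with positions in units of the lattice constant; `delta` is the interlayer bond-vector shift, and the helper function is our own invention) computes the projected in-plane offset of each layer for the two stacking orders.

```python
import numpy as np

# The in-plane shift between successive Bernal-stacked graphene layers is
# one carbon-carbon bond vector. In units of the lattice constant a, the
# three possible registries A, B, C correspond to offsets 0, delta, and
# 2*delta (a shift of 3*delta is equivalent to a lattice vector).
delta = np.array([0.0, 1.0 / np.sqrt(3.0)])  # bond vector, units of a

def layer_offsets(stacking):
    """Projected in-plane offset of each layer for a stacking string."""
    return {layer: np.round("ABC".index(s) * delta, 3)
            for layer, s in enumerate(stacking, start=1)}

print("ABA:", layer_offsets("ABA"))  # third layer aligned with the first
print("ABC:", layer_offsets("ABC"))  # third layer aligned with neither
```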

ABC trilayer graphene has a band structure that differs significantly from that of ABA trilayer graphene, and these differences have important consequences for its properties. The band structure of ABC trilayer graphene at two different displacement fields is illustrated in Fig. 7.1. In the absence of a displacement field, the system is metallic at all electron densities. When a large displacement field is applied, the system becomes a band insulator when the Fermi level is tuned between the two resulting bands. This is the regime of displacement field we will be discussing. ABC trilayer graphene has extremely weak spin-orbit coupling, so the spin degree of freedom is present and more or less completely decoupled from the other electronic degrees of freedom, contributing only a twofold degeneracy to the band structure. Like most other allotropes of graphene, ABC trilayer graphene has valley degeneracy, and together these produce an overall fourfold degeneracy of its band structure. This is illustrated in Fig. 7.2. As is abundantly clear from these plots, the bands present in ABC trilayer graphene are not flat; they have extremely large bandwidths. However, the bands do satisfy the flat-bottomed band condition, and as a result we can expect these systems to be able to spin- and valley-polarize without paying significant kinetic energy costs.

A schematic of the device we will discuss is presented in Fig. 7.3A. This device allows us to perform several different experiments: we can tune the electron density and displacement field in the ABC trilayer graphene layer, we can measure in-plane electronic transport, and we can measure the out-of-plane capacitive conductivity as a function of electron density and displacement field. Data extracted from this contrast mechanism is presented in Fig. 7.3B. This dataset is restricted to the hole band, i.e., the bottom band in all of the plots we have so far encountered. Sharp features in this dataset correspond to spontaneous symmetry breaking; these features are marked with numbers in the figure. The right side of this plot, labelled with an electron density of zero, corresponds to charge neutrality in this system and lies in the gap of the band insulator. Both marked features therefore correspond to situations in which the hole band is very slightly filled. The valley and spin subbands of ABC trilayer graphene are presented in schematic form in Fig. 7.4A in the absence of electronic interactions. When we tune the Fermi level into these bands and turn on interactions, we cannot produce a gap (the bandwidths of these bands are far too large), but we can produce full spin or valley polarization, as illustrated in Fig. 7.4B. The precise situations in which we find this system at the two marked features are presented in Fig. 7.4C and D; these correspond, respectively, to full spin polarization without valley polarization, and to full spin and valley polarization. Valley polarization couples strongly to transport, generating a large anomalous Hall effect and ferromagnetic hysteresis, as presented in Fig. 7.4E.

Although these magnets occur in an atomic crystal, they are composed entirely of electrons we have forced into the system with an electrostatic gate, and as a result we can expect their magnetizations to be considerably smaller than those of fully spin-polarized atomic crystals. We will use the nanoSQUID microscope to image these magnetic phases.
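Before turning to the imaging data, it is worth making the flat-bottomed band argument quantitative. The sketch below is a schematic illustration, not a calculation for ABC trilayer graphene specifically: it uses a toy isotropic dispersion E = c k^p in arbitrary units with a made-up interaction strength U, and shows that the band-edge density of states, and hence the Stoner product U g(EF), diverges for any p > 2.

```python
import numpy as np

# 2D density of states for an isotropic dispersion E = c * k**p:
# g(E) = (k / 2pi) * dk/dE = E**((2-p)/p) / (2*pi*p*c**(2/p)).
# Schematic units, per band flavor.
def dos(E, p, c=1.0):
    return E ** ((2.0 - p) / p) / (2.0 * np.pi * p * c ** (2.0 / p))

U = 2.0  # made-up short-range interaction strength, illustrative only
for p in (2, 3, 4):
    g_edge = dos(1e-4, p)  # density of states just above the band bottom
    verdict = "Stoner unstable" if U * g_edge > 1 else "stable"
    print(f"p={p}: U*g near band edge = {U * g_edge:.2f} ({verdict})")

# p=2 (parabolic) gives a constant g(E); any p>2 (flat-bottomed) gives a
# g(E) that diverges as E -> 0, so U*g(E_F) > 1 is always satisfied
# sufficiently close to the band edge, at no kinetic energy cost.
```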
An optical image of the ABC trilayer graphene device used to produce the data for the associated publications is presented in Fig. 7.5A. A nanoSQUID image of this region using AC bottom gate contrast is presented in Fig. 7.5B. This magnetic image was taken in the same phase in which we observe magnetic hysteresis, as presented in Fig. 7.4E. Clearly the system is quite strongly magnetized; we also see evidence of internal disorder, likely corresponding to bubbles between layers of the heterostructure. We can park the SQUID over a corner of the device and extract a density- and displacement field-tuned phase diagram of the magnetic field generated by the magnetization of the device; this is presented in Fig. 7.5C. Electronic transport data from the same region is presented in Fig. 7.5D. The spin magnet has only a weak impact on electronic transport, but the valley ferromagnet couples extremely strongly to electrical resistance. The system also supports a pair of superconductors, including a spin-polarized one; these phases are the subjects of continued study. Capacitance data over the same region of phase space is presented in Fig. 7.5E.

ABC trilayer graphene is the first atomic crystal known to support purely orbital magnetism. Other related systems have since been discovered to host similar phenomena, including bilayer graphene.

We will also discuss a considerable amount of electronic transport and capacitance data.

The properties of crystals differ from the properties of atoms floating in free space because the atomic orbitals of the atoms in a crystal are close enough to those of adjacent atoms for electrons to hop between atoms. The resulting hybridization of atomic orbitals produces quantum states delocalized over the entire crystal with the capacity to carry momentum. This situation is shown in schematic form in Fig. 1.9A. For quantum states delocalized over the entire crystal, position ceases to be a useful basis. Instead, under these conditions we label electronic wave functions by their momenta, kx and ky. The atomic orbitals that prior to hybridization had discrete energy spectra now have energy spectra given by discrete functions of momentum, f(kx, ky). We call these functions electronic bands.

Electrons loaded into the electronic bands of a two dimensional crystal will occupy the quantum states with the lowest available energies, so we can specify a maximum energy at which we expect to find electrons for any given electron density. We call that energy EF, the Fermi level. We can raise the Fermi level by adding additional electrons to the crystal, as shown in Fig. 1.9B. We have already discussed how two dimensional crystals naturally allow for manipulation of the electron density, and thus the Fermi level.
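As a hedged worked example (the parabolic band, effective mass, and fourfold degeneracy below are illustrative placeholders, not parameters of any material in this thesis), the relationship between electron density and Fermi level in a 2D band can be computed directly:

```python
import math

hbar = 1.054571817e-34  # J*s
m_e = 9.1093837015e-31  # kg

def fermi_level(n_cm2, m_eff=0.05 * m_e, degeneracy=4):
    """Fermi energy (eV) of a parabolic 2D band at density n (cm^-2).
    For a 2D parabolic band, n = g * m * E_F / (2*pi*hbar^2), so
    E_F = 2*pi*hbar^2 * n / (g * m). Mass and degeneracy are placeholders."""
    n_m2 = n_cm2 * 1e4  # convert cm^-2 to m^-2
    E_F = 2.0 * math.pi * hbar**2 * n_m2 / (degeneracy * m_eff)
    return E_F / 1.602176634e-19

# Gate-accessible densities (~1e12 cm^-2) move E_F by only tens of meV:
print(f"{fermi_level(1e12) * 1e3:.1f} meV")
```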

We have also already discussed how the application of an out-of-plane electric field to a two dimensional crystal will change the structure of the atomic orbitals supported by that crystal. It naturally follows that atomic orbitals so modified will produce different electronic bands, as shown in Fig. 1.9C. It is relatively straightforward to compute how electronic bands will respond to the application of a displacement field. We will be using the momentum and energy basis for the rest of this document; this basis is known as momentum space. The simplest experiment we can perform to probe the electronic properties of a two dimensional crystal in this geometry is an electronic transport experiment, in which a voltage is applied to a region of the crystal with another region grounded, so that electrical current flows through the crystal. We can check whether the crystal supports any electrical transport at all, and if it does we can measure the electrical resistance of the crystal this way, in close analogy to how this is done for three dimensional crystals. Crystals will only accept, and thus conduct, electrons if there are available quantum states at the Fermi level; we call these crystals metals, and they can be identified in band structure diagrams by the intersection of the Fermi level with an electronic band. Crystals without empty quantum states at the Fermi level will not accept or conduct electrons, and they can be identified in band structure diagrams as crystals for which the Fermi level does not intersect an electronic band.
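That rule can be phrased as a one-line check. The band edges in the sketch below are invented numbers used only to exercise the function:

```python
def is_metal(bands, E_F):
    """True if the Fermi level lies inside any band.
    `bands` is a list of (band_bottom, band_top) energies in eV."""
    return any(bottom < E_F < top for bottom, top in bands)

# Hypothetical two-band structure with a gap between -0.1 and 0.2 eV:
bands = [(-1.0, -0.1), (0.2, 1.0)]
print(is_metal(bands, E_F=0.0))  # False: Fermi level in the gap (insulator)
print(is_metal(bands, E_F=0.5))  # True: Fermi level crosses a band (metal)
```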

There exists a variety of other experiments we can perform on two dimensional crystals in order to understand their properties. Two dimensional crystals can support electronic transport in the in-plane direction if they are metals, as shown in Fig. 1.10. Capacitors can also support electronic transport in the out-of-plane direction, as long as that electronic transport occurs at finite frequency. The same structure that we use to modify the electron density and ambient out-of-plane electric field of a two dimensional crystal can also be used as a capacitive AC conductor, as illustrated in Fig. 1.11A. The conductance will depend only on the frequency at which an AC voltage is applied and on the geometry of the parallel plate capacitor. However, if a two dimensional crystal is added in series, the capacitance of the top gate to the bottom gate may be substantially modified. If the two dimensional crystal is an insulator, electric fields will penetrate it and the capacitance between the two gates will not change. However, if the two dimensional crystal is a metal, it will accept electrons and cancel the applied electric field, dramatically reducing the capacitance between the top and bottom gates and suppressing the AC current through the capacitor. This technique can be used to measure the electronic properties of specifically the bulk of a two dimensional crystal; this is the quantity that was both calculated and measured in Fig. 1.2C and D.
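A minimal circuit-model sketch of this measurement, under the standard three-plate assumption (the geometric capacitances below are placeholders, and this is not the specific analysis behind Fig. 1.2): the sample layer drains charge to ground through its quantum capacitance c_q = e^2 dn/dmu, so the gate-to-gate transfer capacitance is c_t c_b / (c_t + c_b + c_q).

```python
def penetration_capacitance(c_t, c_b, c_q):
    """Top-gate-to-bottom-gate capacitance per area for a grounded 2D
    layer between the gates. Three-plate model: the middle layer drains
    charge to ground through its quantum capacitance c_q = e^2 dn/dmu.
    All capacitances in the same (arbitrary) per-area units."""
    return c_t * c_b / (c_t + c_b + c_q)

c_t = c_b = 1.0  # placeholder geometric gate capacitances
print(penetration_capacitance(c_t, c_b, c_q=0.0))  # 0.5: insulator, full series value
print(penetration_capacitance(c_t, c_b, c_q=1e3))  # ~0.001: metal screens the field
```

The two limits reproduce the statement in the text: an insulating layer lets the electric field through unchanged, while a metallic layer collapses the gate-to-gate capacitance.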

These two techniques are the bread and butter of the experimental study of two dimensional crystals, because they require only the ability to create stacks of two dimensional crystals and access to tools common to the study of all other microelectronic systems. However, the primary focus of this thesis will be on systems for which the nanoSQUID microscope can provide important information that is inaccessible to these techniques, and so we will discuss a few such systems next.

Consider the following procedure: we obtain a pair of identical two dimensional atomic crystals, slightly rotate one relative to the other, and then place the rotated crystal on top of the other. The resulting pattern brings the top layer atoms into alignment with the bottom layer atoms periodically, but with a lattice constant that is different from, and in practice often much larger than, the lattice constant of the original two atomic lattices. We call the resulting lattice a 'moiré superlattice.' The idea of doing this with two dimensional materials is relatively new, but the notion of a moiré pattern is much older, and it applies to many situations outside of condensed matter physics. Pairs of incommensurate lattices will always produce moiré patterns, and there are many situations in daily life in which we are exposed to pairs of incommensurate lattices, like when we look out a window through two slightly misaligned screens, or try to take pictures of televisions or computer screens with our camera phones. Of course these 'crystals' differ pretty significantly from the vast majority of crystals with which we have practical experience, so we'll have to tread carefully while working to understand their properties. To start with, if we attempt to proceed as we normally would, by assigning atomic orbitals to all of the atoms in the unit cell, computing overlap integrals, and then diagonalizing the resulting matrix to extract the hybridized eigenstates of the system, we would immediately run into problems, because the unit cell has far too many atoms for this calculation to be feasible. Some moiré superlattices that have been studied in experiment have thousands of atoms per unit cell. There exist clever approximations that allow us to sidestep this issue, and these have been developed into very powerful tools over the past few years, but they are mostly beyond the scope of this document. I'd like to instead focus on conclusions we can draw about these systems using much simpler arguments. The physical arguments justifying the existence of electronic bands apply wherever and whenever an electron is exposed to an electric potential that is periodic, and thus has a set of discrete translation symmetries. For this reason, even though the moiré superlattice is not an atomic crystal, we can always expect it to support electronic band structure, for the same reason that we can always expect atomic crystals to support band structure. Two crystals with identical crystal symmetries will always produce moiré superlattices with the same crystal symmetry, so we don't need to worry about putting two triangular lattices together and ending up with something else.

Another property we can immediately notice is that the electron density required to fill a moiré superlattice band is not very large; a sketch quantifying this follows below.
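To put rough numbers on this, here is a back-of-the-envelope sketch: the graphene lattice constant is standard, but the 1.1 degree twist angle is merely a representative choice, and the fourfold spin-valley degeneracy is assumed. For a small twist angle θ the moiré period is L ≈ a / (2 sin(θ/2)).

```python
import math

a = 0.246e-9               # graphene lattice constant, m
theta = math.radians(1.1)  # representative small twist angle

# Small-angle moire period and triangular-lattice unit-cell areas
L = a / (2.0 * math.sin(theta / 2.0))
cell_atomic = (math.sqrt(3.0) / 2.0) * a**2
cell_moire = (math.sqrt(3.0) / 2.0) * L**2

g = 4  # assumed spin x valley degeneracy, inherited from the atomic lattice
n_atomic = 1.0 / cell_atomic * 1e-4   # cm^-2 for one electron per atomic cell
n_moire_full = g / cell_moire * 1e-4  # cm^-2 to completely fill the moire band

print(f"moire period: {L * 1e9:.1f} nm")               # ~12.8 nm
print(f"one e- per atomic cell: {n_atomic:.1e} cm^-2") # ~1.9e15, far beyond gating
print(f"full moire band: {n_moire_full:.1e} cm^-2")    # ~2.8e12, gate-accessible
```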

The point can be made clear by simply comparing the original atomic lattice to a moiré superlattice in real space. Full depletion of a band in an atomic crystal requires removing an electron for every unit cell, and full filling of the band occurs when we have added an electron for every unit cell. We have already discussed how this is not possible for the vast majority of materials using only electrostatic gating, because the resulting charge densities are immense. Full depletion of the moiré band, on the other hand, requires removing one electron per moiré unit cell, and the moiré unit cell contains many atoms. So the difference in charge density between full filling and full depletion of an electronic band in a moiré superlattice is actually not so great, and indeed this is easily achievable with available technology.

Before we go on, I want to make a few of the limitations of this argument clear. There are two things this argument does not necessarily imply: the moiré bands we produce might not be near the Fermi level of the system at charge neutrality, and the bandwidth of the moiré superlattice bands need not be small. In the first case, we won't be able to modify the electron density enough to reach the moiré band, and in the latter, we won't be able to fill the moiré band's highest energy levels using our electrostatic gate. We know of examples of real systems with moiré superlattice bands that fail each of these criteria. But if the moiré superlattice bands are near charge neutrality, and if their bandwidths are small, then we should be able to easily fill and deplete them with an electrostatic gate. This makes them desirable targets for the types of experiments we've discussed above. Finally, moiré superlattice bands inherit any electronic degeneracies, like electron spin, that came with the original lattice. We haven't discussed electronic degeneracies yet, but we will shortly. So if a moiré superlattice satisfies all of these criteria, then it will provide a set of electronic bands that can be completely filled or depleted with an electrostatic gate. I'm sure this seems to the reader like a pretty niche system, and that's more or less because it is. There aren't too many material systems that need their atomic bonds aligned with a mechanical goniometer, and it's hard to imagine ever integrating such a procedure into an industrial fabrication line. However, it's tough to adequately express how hard it would be to replicate the properties of a moiré superlattice band in an atomic crystal. I made an attempt to do so in the introduction to this thesis; suffice to say the control we have over the properties of these systems is more or less unprecedented within experimental condensed matter physics, and this means that we can perform experiments on electronic phases in these systems that would be difficult or impossible in atomic crystals.

A variety of scanning probe microscopy techniques have been developed for examining condensed matter systems. It's easy to justify why magnetic imaging might be interesting in gate-tuned two dimensional crystals, but magnetic properties of materials form only a small subset of the properties in which we are interested. Scanning tunneling microscopy is capable of probing the atomic-scale topography of a crystal as well as its local density of states, and a variety of scanning probe electrometry techniques exist as well, mostly based on single electron transistors.
It's worth pointing out that if you're interested specifically in performing a scanning probe microscopy experiment on a dual-gated device, then both of these techniques struggle, because the top gate both blocks tunnel current and screens out the electric fields to which a single electron transistor would be sensitive. Magnetic fields have an important advantage over electric fields: most materials have very low magnetic susceptibility, and thus magnetic fields pass unmodified through the vast majority of materials. This means that magnetic imaging is more than just one of many interesting things one can do with a dual-gated device; in these systems, magnetic imaging is a member of a very short list of usable scanning probe microscopy techniques.

The simplest way in which we can use our nanoSQUID magnetometry microscope is as a DC magnetometer, probing the static magnetic field at a particular position in space. There are situations in which this is a valuable tool, and we will look at some DC magnetometry data shortly, but in practice our nanoSQUID sensors often suffer from 1/f noise, spoiling our sensitivity for signals at low or zero frequency. One of the primary advantages of the technique is its sensitivity, and to make the most of the sensor's sensitivity we must measure magnetic fields at finite frequencies. We have already discussed how we can use electrostatic gates to change the electron density and band structure of two dimensional crystals.
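To close, a small numerical illustration of that last point (the noise amplitude and frequencies are illustrative, not calibrated nanoSQUID specifications): for a 1/f power spectral density S(f) = A/f, the noise power inside a fixed measurement bandwidth centred on the modulation frequency falls off as the modulation frequency rises, which is why working at finite frequency recovers the sensor's sensitivity.

```python
import math

def band_noise_power(f_mod, bandwidth, A=1.0):
    """Noise power of a 1/f spectrum S(f) = A/f integrated over a fixed
    bandwidth centred on f_mod: A * ln((f_mod + bw/2) / (f_mod - bw/2)).
    A is an illustrative amplitude, not a calibrated sensor parameter."""
    lo, hi = f_mod - bandwidth / 2.0, f_mod + bandwidth / 2.0
    return A * math.log(hi / lo)

for f_mod in (1.0, 10.0, 100.0, 1000.0):  # Hz
    print(f"f_mod = {f_mod:6.0f} Hz: in-band 1/f noise power = "
          f"{band_noise_power(f_mod, bandwidth=1.0):.4f}")

# The same 1 Hz measurement band contains roughly 1000x less 1/f noise
# power at 1 kHz than at 1 Hz, while a white-noise floor would contribute
# the same amount at every modulation frequency.
```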