Carbon is one of the most abundantly available elements on Earth.

Natalia Molina traces the relationship between public health and race in Los Angeles, comparing the experiences of Chinese residents to those of Japanese and Mexican residents in the city between 1879 and 1939. Taken together, these works remind us that Chinatown was only one of the many districts that composed Los Angeles’s core in the first half of the twentieth century, and that the actions of Chinese people must be understood in relationship to the other people and ethnic groups that lived in this section of the city. Perhaps the most interesting work done by a historian on Los Angeles Chinatown is that of Isabella Quintana, who highlights interactions between Mexican Americans and Chinese Americans in the Plaza area between 1871 and 1938, focusing on the racialized and gendered nature of space. In her essay, “Making Do, Making Home: Borders and the Worlds of Chinatown and Sonoratown in Early Twentieth Century Los Angeles,” Quintana explores the architecture of Chinese and Mexican homes in the Plaza area to “imagine social worlds created by women that presented alternative ways of living to those dictated by colonialism, industrialization, exclusion, and segregation in Los Angeles.” Given the paucity of first-hand accounts by Chinese and Mexican women during the period, Quintana shows how it is possible to understand the interactions between women from these two groups by looking closely at architectural records.

Old Chinatown, New Chinatown, and China City were located adjacent to the Los Angeles Plaza in one of the most diverse sections of Los Angeles. Nonetheless, all three districts came to be seen as Chinatowns by those outside the Chinese American community. The process by which these three communities were marked as Chinese in the popular imagination, despite the demographic reality of this part of the city, can only be understood when we begin to explore Chinatown’s place in the popular imagination. Over the last century, representations of Chinatown have become an important site through which whites as well as other non-Asian Americans envision Asia and Asian people. In many ways, Elaine Kim’s observation about depictions of Chinese people in Anglo-American literature as a whole holds especially true for Chinatown in particular. Kim writes that “many depictions of Chinese have been generalized to Asians, particularly since Westerners have found it difficult to distinguish among East Asian nationalities.” Certainly, the racialization of Chinatown has had a profound influence not only on the way whites perceive Chinese American communities but also on the way whites perceive many Asian American communities beyond Chinatown. While there exists a substantial amount of scholarly work on the representation of Chinatowns in various works of literary fiction, scholarship that explores the place of Chinatown in the popular imagination more generally is much less common. Two of the most important studies to engage the relationship between Orientalism and Chinatown are Anthony Lee’s Picturing Chinatown and Kay Anderson’s Vancouver’s Chinatown. Anthony Lee’s Picturing Chinatown: Art and Orientalism in San Francisco explores the “history of imaginings” of San Francisco’s Chinatown in the period between the 1850s and the 1950s, as represented through photography, paintings, and performance.

While the last two chapters do deal with ways in which Chinese Americans represented themselves through art, representations made by Chinese Americans are not the primary focus of Lee’s study. As Lee states in his introduction, outside of these last two chapters his book “has precious little to say about the representations of Chinatown by its actual inhabitants.” What Lee is more interested in is recovering “something of the pressure exerted on the art by the daily lives and experiences of Chinatown’s inhabitants.” Lee sees these artistic representations of Chinatown as being generated by “unequal social and political relations between Chinese and non-Chinese” and thus believes these works ultimately tell us more about larger white society than about Chinatown itself. Equally important for understanding the relationship between Chinatown as place and Chinatown as an idea is geographer Kay Anderson’s Vancouver’s Chinatown: Racial Discourse in Canada, 1875–1980. Drawing on the concept of hegemony advanced by Antonio Gramsci, Anderson states that Chinatown “has been a historically specific idea, a cultural concept rooted in the symbolic system of those with the power to define…” Anderson posits that Chinatown is at its heart related to a set of racial and ethnic ideas held by whites about a particular place. She writes that Chinatown “was not a neutral term, referring somehow unproblematically to the physical presence of China in Vancouver. Rather it was an evaluative term ascribed by Europeans no matter how the residents of the territory might have defined themselves.”

The works of both Anderson and Lee have been deeply influential on the present study, and yet my goal here is to complicate the idea that Chinatown is a product of the white imagination and that writing about Orientalism in Chinatown should focus primarily on the representations and actions of white leaders and cultural producers. The focus that scholars of American Orientalism have placed on the actions and cultural productions of white Americans is no doubt a legacy of Said’s original work. Said’s 1978 text Orientalism focuses entirely on the writings of Europeans, and yet in his theoretical elaboration of Orientalism, Said lays the groundwork for understanding the ways in which this discourse could be contested. In fashioning his theory of Orientalism, Said draws on two theorists with significantly different understandings of power: Michel Foucault and Antonio Gramsci. Drawing from Foucault, Said defines Orientalism as a discourse. Said writes that “no one writing, thinking, or acting on the Orient could do so without taking account of the limitations on thought and action imposed by Orientalism.” He elaborates that Orientalism did not determine what could be said about the Orient but rather that Orientalist “interests” were always involved in discussions of the Orient. But Said breaks with Foucault in one important way. Said writes, “Yet unlike Michel Foucault, to whose work I am greatly indebted, I do believe in the determining imprint of individual writers upon the otherwise anonymous collective body of texts constituting a discursive formation like Orientalism.” This is an important ontological distinction that allows him to theorize the way the actions of individual authors created this discourse. In advancing my theory of Chinese American Orientalism, I argue that the Chinese American merchant class utilized North American Chinatowns to articulate their own distinct cultural representation of China.
This should not simply be seen as an act of “self-Orientalism.” Chinese American merchants and others in the Chinese American community did not simply reproduce dominant ideas about China as presented in European and American literature and culture. Rather, Chinese American Orientalism was a distinct cultural formation that functioned, for a moment, counter-hegemonically. Because Chinese American Orientalism functioned within the larger framework of Orientalism, Chinese Americans were not free to present Chinatown to tourists any way they wished. But what they could do was subvert dominant expectations of the community in subtle ways, while still representing the district as a site of Otherness. While Chinese American Orientalism was deeply linked to visual culture, it manifested itself materially in Chinatowns across North America. Examples of Chinese American Orientalism include the architecture that came to define so many North American Chinatowns, Chinese American cuisine such as Egg Foo Yung and fortune cookies, and the embodied performances of race, gender, and nation enacted by Chinese American merchants and others in Chinatown. Chinese American Orientalism drew on all of the senses of the visiting tourist. Tourists did not simply watch Chinese Americans perform ethnicity from a distance.

In Chinatown, tourists could taste, smell, touch, hear, and see the version of the Orient presented to them by the merchant class. Tactile, visual, and edible, Chinese American Orientalism was also in a way political. Chinese American Orientalism presented a unified, non-threatening image of China as a commodity that appealed to white sensibilities enough to make a profit, but did so in a way that did not replicate the worst aspects of Yellow Peril iconography that had left so many in the community disenfranchised.

Carbon is one of the building blocks of all forms of life. Commercially, carbon is essential to modern civilization through its multitude of applications: as a power source in the form of coal and hydrocarbons, and as a raw material for essential components such as steel and polymers, to name a few. Carbon exists in the form of different allotropes, such as graphite and diamond, each of which differs from the others in its physical properties. Depending on the application, the appropriate allotrope of carbon is utilized. In the field of research, the recently discovered allotropes such as fullerenes, graphene, and nanotubes are being studied extensively due to their interesting physical properties. The discovery of carbon nanotubes by S. Iijima in 1991 initiated intense research interest in them. Over a decade of research indicates that CNTs exhibit a rare combination of mechanical strength, stiffness, and low density with exceptional transport properties. These attributes have made CNTs an ideal choice for multifunctional materials that combine the best of CNTs with other materials such as polymers, ceramics, and metals. Thus, CNTs are blended with other materials to obtain tailored materials for various applications: as transistors, hydrogen-storage materials, probes, and sensors; in lightweight bicycles, antifouling coatings for ship hulls, electrostatic discharge shields on satellites, and artificial muscles and actuators.
Actuators are utilized in a wide variety of actuation applications in fields such as robotics, artificial muscles, micro-electro-mechanical systems, and micro-opto-mechanical systems. Actuation is achieved by converting different types of energy into mechanical energy to move or control a mechanism or a system. On a macroscopic scale, actuators can be powered using materials such as piezoelectric materials, conducting polymers, and shape memory alloys, and the process is quite efficient. However, on the micro- and nanoscale, such materials are not suitable: critical issues such as scalability to the nanoscale, high performance, and ease of operation severely impede their implementation in nanotechnology and biomedical applications. Under such circumstances, substitutes such as metallic nanoparticles, nanowires, and CNTs can be used instead. Nanotubes and nanowires have been used to configure dynamic systems such as cantilever beams and linear and torsional actuators at the nanoscale under laboratory conditions. Nanostructures, particularly those made of CNTs, exhibit actuation in response to applied external stimuli such as heat, electric voltage, and light. The aim of this dissertation is to investigate the photomechanical properties of a CNT-based composite. To this end, the dissertation is structured as follows: Chapter 2 discusses the general background of carbon nanotubes and Reactive Ethylene Terpolymer, the polymer matrix. Chapter 3 explores the fabrication process of the composite in detail. Chapter 4 discusses the experiments conducted to investigate photomechanical actuation and presents their results. Chapter 5 suggests a mechanism describing the photomechanical actuation process. Chapter 6 concludes the dissertation with a summary of the study.

The MWCNT structure is slightly different from that of the SWCNT. Instead of just one nanotube, the structure of an MWCNT consists of multiple tubules nested within each other.
The outer diameter is usually in the range of 20 to 30 nm, depending on the number of walls within a nanotube. The electronic properties of an MWCNT depend on the outermost tubule, resulting in either metallic or semiconductor-like behavior depending on the chirality of that tubule. Since its discovery, the CNT was expected to have superlative mechanical properties comparable to those of graphene. Computational simulations conducted by Overney et al. predicted a Young’s modulus of approximately 1500 GPa. The first mechanical tests on MWCNTs were conducted in 1997 by Wong et al. using an atomic force microscope, which yielded a Young’s modulus of 1.28 TPa and an average bending strength of 14 GPa. However, the generally accepted value of Young’s modulus was measured by Yu et al. in 2000 through stress-strain measurements of an MWCNT pinned at one end inside an electron microscope. Young’s modulus was measured to be between 270 and 950 GPa, and the outermost layer of the MWCNT failed at applied tensile loads ranging between 11 and 63 GPa at strains of up to 12%. Using the same method, Yu et al. measured Young’s modulus for SWNTs to be in the range of 370–1470 GPa and tensile strength to be between 10 and 52 GPa.
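To make the tensile numbers above concrete, a Young’s modulus is extracted from such tests as the slope of the linear (small-strain) region of the stress-strain curve, E = σ/ε. The sketch below performs that fit on hypothetical placeholder data, not on the actual measurements of Yu et al.

```python
# Illustrative only: estimate Young's modulus as the least-squares slope
# of the linear region of a stress-strain curve (E = stress / strain).
# The data points below are made-up placeholders, not measured values.

def youngs_modulus(strains, stresses):
    """Return the least-squares slope of stress vs. strain, in Pa."""
    n = len(strains)
    mean_x = sum(strains) / n
    mean_y = sum(stresses) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(strains, stresses))
    den = sum((x - mean_x) ** 2 for x in strains)
    return num / den

# Hypothetical small-strain data for a nanotube with E ~ 1 TPa:
strains  = [0.000, 0.002, 0.004, 0.006, 0.008]      # dimensionless
stresses = [0.0, 2.0e9, 4.0e9, 6.0e9, 8.0e9]        # Pa

E = youngs_modulus(strains, stresses)
print(f"E = {E / 1e9:.0f} GPa")   # → E = 1000 GPa
```

In a real measurement the fit would be restricted to the initial elastic region, since the stress-strain response of a nanotube becomes nonlinear as it approaches failure.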

A fundamental shift in receivership focus occurred over time in response to the environment

Prior to the receivership, the highest position within the division was a director of nursing (DON). When the CPHCS structure was later formalized by clinical functional area, the statewide chief nurse executive developed a mirror position at the prison level with the same title to replace the DON role. As an entirely new state classified position, it carried new aspects of responsibility. This process was replicated within the other clinical departments, with the receivership level having a statewide head to which a prison-level clinical administrator indirectly reported in matrix style. The prison-level clinical head reported directly to the prison-level head of health care, who in turn reported directly to the federal receiver. This position was a structural anomaly: it was the only clinical-area lead role that did not have a CPHCS-mirrored lead position, which is to say that no statewide chief executive officer of health care was created. The healthcare CEOs reported directly to the receiver, with no indirect supporting relationship within the CDCR structure. This complex reporting structure is shown in Figure 4. Among the greater challenges created by this structure was the negative impact on managerial capacity.

Due to the variance in scope between the CDCR and CPHCS managers, the concentration of resources for important tasks differed based on what each manager determined to be important. While CPHCS executives were interested primarily in output related to projects, CDCR managers cared about the goals: measures of patient outcomes. Organizational structure dictated the managerial capacity to move the organizational-level mark of success. As discussed in previous chapters, it is necessary to understand management behavior at the program or project level in order to analyze and match motivations to actions. The disengagement from daily operational issues on the part of the receiver was due to the necessary bifurcation of duties between the political and the operational. This situation is akin to the division of labor observed in the private sector between a chief executive officer and his or her chief operating officer or president. The practice, however, of having two rather disjointed administrative layers across three organizations reporting to the same individual did lead to operational issues. The reason was the often-differing requirements of the individual statewide executives as compared with those of the local healthcare CEOs. The CEOs tended to embrace a more short-term, day-to-day focus on operations and the allocation of scarce resources therein. Statewide leadership was more project-oriented because their mutual supervisor, the receiver, required them to develop and implement the turnaround plan of action that was defined by the receivership mission.

The healthcare CEOs, in turn, were expected to assist in the smooth implementation of the various and often simultaneous projects related to the mission, while also serving as the operational heads of the prison health care units. The consequences of this clash of perception, objectives, and decision timing were unfortunately only briefly studied and captured in post-implementation interviews, due to the timing of the establishment of the CEO position, which occurred toward the end of the CCM implementation. It was previously noted that within the CPHCS structure two distinct groups of administrators existed. The first consisted of individuals hired to serve within the receivership entity at the end of their civil-service careers, bringing decades’ worth of invaluable experience of CDCR operations and institutional knowledge. The other group consisted of those bringing a fresh perspective, having no significant correctional-environment experience within the state but providing decades of private-sector health care operational expertise to the new organization. Due to the state’s budget crisis and a renewed focus on eliminating high-dollar programs, the state legislature focused on the prison system and its proposed $8 billion budget, seeking to cut the program in part or in whole.

While the federal courts underlying the receivership made many of the proposed legislative reductions impossible, the strong political emphasis placed on spending decisions within the receivership created the need for change. Only managers with strong fiscal-management experience were recruited for higher-ranking headquarters vacancies and for prison-level healthcare CEO roles. The group of administrators reporting to the receiver was initially diverse, with many at the statewide executive layer selected from the private sector. These individuals brought immense experience from private health care sector work, such as running departments or systems in leading health care organizations around the nation. Much of this labor pool was secured on a contract basis, and their reimbursement arrangements were commensurate with their vast experience. During this time, which coincided with the start of the pilot phase of the CCM program, there were only a handful of high-ranking administrators on the clinical side who were drawn from existing posts within CDCR. During the observations made under this study, a cost-cutting change was made in which many of the early-phase administrators were dropped in favor of lower-cost, full-time employee executives with years of experience within the state’s correctional system. Much of the vision of the private-sector program leadership had already been put in motion prior to the shift, and it fell to the new cohort of managers to complete the implementation started by managers who had different thought processes and objectives. The focus of the new group of executives was guided largely by habits learned over a career’s tenure within CDCR, which differed from the modus operandi of the previous, more innovative set of managers.
It was assumed that continuity of work in progress would be natural because the incoming managers had years of agency experience.

The new administrative group enjoyed years of relationship-building experience, which enabled some prison-level staff administration and worker-level personnel to work more in sync with the vision set forth by the headquarters personnel. There is an unresolved debate among researchers about whether public-sector managers differ from their private-sector counterparts. At the core are the differences in managerial capacity between sectors, which may describe some of the underlying factors that constrain or enable managers to act and react. While this study is not designed to resolve that debate, this section will describe a tool used in this program implementation that may be useful to researchers in capturing the elements of managerial capacity. A management-assessment model was developed in conjunction with a survey of the same name to assess how close managers were to the work under their control. Proximity to work is a concept used by this author in previous managerial work experience and is best described as a manager’s knowledge of all relevant information needed to make sound administrative decisions. It is measured by a manager’s involvement in the work carried out by his or her staff. Arguably, traits such as intelligence or even charisma could be attributed to the perception of how close a manager is to his or her work, if an outside observer asserts such a perception. When measured directly by the manager, however, on a self-reported basis, an estimate of the person’s level of engagement or involvement with his or her scope of responsibility can be determined. The knowledge derived from this self-assessment is useful both in rating a manager’s ability to rate staff and in understanding how involved he or she is in the work performed within the department. If a manager is disengaged from the work performed, then it is unlikely that preventative processes are in motion to avoid fatal project delays.
Should the manager be disengaged from day-to-day issues when a project-related task is at risk or slips a deadline, it will already be too late to prevent the resulting consequences. In order to keep projects on target, plans are put together and managers are made aware of issues before they become problems. An informed manager is an engaged manager, and this improves a project’s likelihood of success. It has been asserted that successful program implementation requires managers to be informed about the work for which they are accountable. This is based on the assumption that managers are competent and capable of making decisions.

Unpacking this assumption illuminates an important and fundamental organizational characteristic that must be present. Managers must be empowered to act, and if they are, it is here asserted that their competence in judgment will enable them to guide an organization to successful performance. It is not argued here that managers are the sole motivators or even arbiters of organizational or program-level performance. As Barclay states, “buy-in and following orders are necessary from staff to make any change work . . . since it is often up to the employees to make the change work, it should be the employees that are focused upon.” Vlachoutsicos agrees, noting that successful managers rely on skilled subordinates, and additionally that the teams the managers oversee must also be empowered or else the unit is slated for failure. In his work on techniques for success in the clinical environment, Edgar Staren argues that an effective manager empowers his or her staff differentially according to individual abilities. This relates back to the original point of this section: managers must be close to the work they oversee. The ability to recognize the positive motivators that are effective for each individual staff member requires knowledge of the staff on an individual basis. Familiarity with the work performed on an ongoing basis enables a manager to understand staff issues as well as potential barriers to success. For this reason, the manager is studied in this chapter with the hope of identifying aspects of management behavior that can be improved upon to enable program-level success. Managerial capacity is the ability of administrators to understand the work requirements and to make situation-specific adjustments accordingly. This definition is more granular than that provided by Meier and O’Toole in their evidence-based analysis of managerial performance.
These researchers treat managerial capacity in a manner similar to this study’s approach. They look at public-sector management as their setting of choice, and they define managerial capacity as “the managerial talent and effort that could be mobilized in an organization when needed.” To assess managerial capacity with the goal of linking results to program-level performance, a managerial model of assessment was used. The model selected was previously created for assessing management behavior during a program implementation in the for-profit sector. The assessment model was developed by the author of this paper and successfully used to gauge and intervene in the behavior of leaders in a $500 million health care program implementation. The statewide chief nurse executive who was responsible for the CCM program’s implementation approved the model and survey tool for use. The model used is shown in Figure 5. It relates and maps to a 10-question survey called the Leadership Level Assessment Survey. It was designed to look beyond the overarching strategic management process and assess how in touch managers are with the work being performed under their responsibility. Previous attempts to define strategic management processes have not focused sufficiently on the process of management and how to improve performance over time. The concept of management levers has been proposed, and information has been provided about how to plan strategically. However, detail is lacking about the steps an administrator should take to improve performance. Figure 5 shows the constructs of the managerial assessment behavioral model and is best read from right to left. The model is based on the theory that administrators who are more involved in the work they are responsible for, and who have confidence in the work as it is performed, will improve organizational performance. This theory can be broken into two distinct parts: confidence and engagement.
Engagement relates to managers’ involvement in the work for which they are accountable. When managers at any level of an organization increase their personal involvement in the work at hand, the result can be an improvement in the oversight of the work product. This is true only if the workers perceive this involvement as genuine interest and feedback is solicited. Receipt of feedback on the management vision should be included in order to avoid the perception of micromanagement.
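As a concrete illustration of how such a self-assessment might be scored, the sketch below aggregates ten Likert-scale responses into the model’s two constructs. The item split and the averaging rule are hypothetical assumptions for illustration; the actual items and scoring of the Leadership Level Assessment Survey are not described in this section.

```python
# Hypothetical sketch: score a 10-item, 1-5 Likert self-assessment into
# the two constructs of the behavioral model (engagement and confidence).
# Which items map to which construct is an assumption, not the real survey.

def score_survey(responses):
    """Average ten 1-5 responses into two construct scores."""
    if len(responses) != 10:
        raise ValueError("expected ten responses")
    engagement = responses[:5]   # assumed: items 1-5 probe involvement in the work
    confidence = responses[5:]   # assumed: items 6-10 probe confidence in the work
    return {
        "engagement": sum(engagement) / len(engagement),
        "confidence": sum(confidence) / len(confidence),
    }

scores = score_survey([4, 5, 3, 4, 4, 2, 3, 3, 2, 3])
print(scores)  # {'engagement': 4.0, 'confidence': 2.6}
```

A self-reported profile like this one (high engagement, lower confidence) would flag a manager who is close to the work but doubts its quality, which is the kind of signal the model is meant to surface for intervention.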

The program under review in this study was called the chronic care model program

Hierarchical organizations are generally resistant to change that threatens their power. Despite numerous federal class-action lawsuits brought against CDCR throughout the 1980s, 1990s, and 2000s, a receivership that severed control of health care from the agency was required to fix the problems. In essence, this meant the bifurcation of the organization and the dismantling of its structure, splitting health care and custodial functions, in order to achieve change within the agency. Health care reform was the goal, but developing the proper structure and program around the new receivership governance structure was both a key obstacle and the research focus of this paper. Programmatic implementations are often complex, requiring the coordination of resources, personnel, and new processes or technologies. Managing implementations from the perspective of change management alone is a major topic in both research and practice. The addition of a highly bureaucratic environment on top of the other complexities makes for challenging implementations. Public-sector implementations of large-scale programs face this additional environmental challenge. Pollitt and Bouckaert define public management reform as “deliberate changes to the structures and processes of public-sector organizations with the objective of getting them to run better.”

Reform here is a synonym for improvement efforts. Generally speaking, organizational changes for survival are associated with improving dated or simply ineffective methods. The ways in which these changes are made are based on the implementation of new programs, which hold the DNA for the new processes and structures thought to be required to make the improvement. As the reform effort progresses, the implementation process encompasses larger issues than just the program itself. For the public sector, the judgment of successful reform is based on the final impact on the community. The program implementation process itself may be successful in terms of new structures and processes adopted by the internal staff; however, if the impact on the community is not perceived as positive, then the program itself may be deemed a failure. The Drug Abuse Resistance Education program is a classic example. As the nation’s most widely employed school-based drug prevention program, its processes are deemed efficient; however, its final impact has fallen short of its mission to reduce the number of young people on drugs. Achieving successful program implementation within a large public-sector organization is then a function of being technically proficient in the details of implementation, as well as producing tangibly visible outcomes for public view. This study investigates one possible path to achieving success and the challenges that the agency faced.

Over the past two decades, reform efforts in the public sector have been characterized by the use of management practices and techniques originally developed within the private sector. These reforms have ranged from budgeting methods to performance management. There are significant differences between public- and private-sector organizations as they relate to organizational change, and models and processes transferred from one sector to the other can lead to contradictory results. In comparison to private organizations, public organizations tend to be characterized by a multitude of decision-makers, a larger diversity of stakeholders, more intensive organizational dynamics, and a more bureaucratic structure. This concept will be explored in greater depth and detail in Chapter 2. Reform efforts in the public sector’s human health services industry over the past twenty years have been beset with numerous challenges in the quest to adopt models from other sectors. Programs in the human health services industry are generally considered to be more complex than programs in other areas, because human service technologies are delivered through the actions of individuals and organizations that exist within multilayered social contexts. The setting of a department of corrections further adds to the complexity of the multilayered social context in that the prison system is highly segmented and institutionalized. The adoption of private-sector models by public correctional organizations has thus rarely been undertaken.

The literature offers no framework for successful implementation within the very complex correctional health care system, although reform efforts are unquestionably required for this sub-sector. To improve the delivery of health care services and advance overall treatment outcomes, the chronic care model was selected for implementation in the CDCR because the private, not-for-profit sector had demonstrated evidence of success in improving care using this model. The model, which originated in the not-for-profit sector, was selected based on its coordination of care, attention to patients with multiple comorbid chronic conditions, and emphasis on adherence to practitioner guidelines. It was a particularly complex effort that required integrating the demands of six disciplines that generally work separately, and of three federal courts, each represented by a panel of court monitors with its own set of demands. Due to the organizational and environmental complexities, the challenges to implementation appeared insurmountable. Organizational structure, size, and the requirement to integrate the disparate objectives of various entities were all issues facing the receivership in its attempts to improve health care and meet federal requirements. These issues will be reviewed in greater detail in Chapters 2 and 3.

Implementations of public-sector health care programs originating in the not-for-profit sector are rare, and there are therefore few such examples in the literature. Consequently, there is a lack of guidelines available to facilitate implementation efforts. A public-sector administrator cannot readily look to the literature for answers on how to effectively adopt a successful private-sector model to solve a current organizational problem. The distinct variables one must take into account in order to make a successful implementation are not well explicated in the few examples available in the literature.
Broader generalizations, however, may be derived from existing academic and practitioner publications. Planned implementation can be viewed as an expression of rational organizational behavior, with the manager functioning as a technician whose primary task is to take the appropriate actions, with respect to established knowledge, to achieve efficiency and effectiveness.

Within the health care literature, a recent study suggested that planned implementations in the mental health services field can be successful if attention is given to the needs of staff, which can be facilitated by proper management. Australian researchers Rooney et al. support the claims made by McCrae and Banerjee in their review of large-scale change at a public hospital. They concluded that planned organizational change can be effectively managed when employees have established a strong sense of connection to the workplace. These studies reinforce the central objective of this study, which is to provide an example of planned organizational change that was staffed with capable management, properly engaged around the implementation activities. Readiness for change is also noted as an important aspect of the planned organizational change cycle. Readiness is defined by Holt et al. as the degree to which the individuals involved are primed, motivated, and capable of dealing with the change. It is best when combined with well-constructed and directed communication efforts, as suggested by Jordan et al., to build social interaction competence during the change effort. At a broader level, Kotter provides an eight-step guide for managing change that starts with developing a sense of urgency about the change and culminates in institutionalizing the change culture. His work tends to lack references to the extant literature; however, a compilation of works from the various organizational literatures is given by Burke. Burke provides insight into the broader theoretical issues related to organizational change and serves as an excellent basis for understanding the more complex model of evidence-based implementation given by Aarons, Hurlburt, and Horwitz. In their advancement of an implementation science conceptual model, Aarons, Hurlburt, and Horwitz suggest a four-stage implementation process for public sector service systems.
They assert that program implementation starts with exploration, advances to adoption, proceeds to actual implementation, and ends with sustainment of the changes. The research undertaken in this paper follows the Aarons, Hurlburt, and Horwitz model, and while it does not delve deeply into the final stage of sustainment, it fully endorses this stage as a requirement for implementation. This study seeks to provide an example of successful program implementation while offering some guidelines for the administrator. Success in this study is measured by improvements in health care outcomes, as outlined in Chapter 4. Achieving this success will be argued to be a function of proper program development and middle manager involvement. To begin, a distinction must be made between leadership and management in order to frame the discussion around the role of the public-sector administrator in program implementation.

An administrator could be a leader, a manager, both, or neither, depending upon the time and scenario under review. As noted by Bass and Avolio, “Leaders manage and managers lead, but the two activities are not synonymous . . . management functions can potentially provide leadership; leadership activities can contribute to managing.” The research undertaken in this dissertation looks at managers in their ability and capacity to manage and direct program activities and tasks. Management that is engaged and committed to the work under its area of responsibility is seen as critical to implementation and organizational success. A framework for understanding management behavior in relation to program implementation performance is provided in Chapter 3, using techniques from the organizational-development field. This study seeks to add to the health care literature by providing an example of program implementation that emphasizes managerial behavior as central to its success. At the core of the managerial behavior leading to program success were interventions performed on the executive and mid-level management layers—interventions designed to improve managerial capacity within this public-sector environment. Managerial capacity in this study is defined as the ability of a manager to understand the work and react appropriately to achieve objectives. It is theorized that the better a manager is able to understand the work, the better the outcome of the work, assuming that the manager is additionally empowered to alter the work as the need arises. By altering work, it is meant that resources may be differentially applied to get the work completed. Within this setting, the managers were empowered to alter the work performed under the receivership. Greater clarity and treatment of this important topic is provided in Chapter 3.
It is important to understand how managers engage with their staff within a program implementation setting to achieve successful outcomes. This knowledge is fundamental to replicating success in other, similar implementation efforts. Two concepts, or types of behavior, are developed in this study as central to program implementation success in the public sector: managerial confidence and engagement in work. These aspects of managerial behavior relate directly to the previously provided definition of managerial capacity. Confidence in the work underway and engagement in the details of how that work is carried out are functions of understanding the work at hand. These behaviors are viewed as skills that can be taught to managers, and it is theorized that improvement of these skills leads directly to program performance. These skills are considered fundamental to the management of organizational change. Fernandez and Rainey's treatment of organizational change within the public sector identified eight factors pertaining to the development and progression of organizations. Their propositions are summarized in Table I, as taken from Packard et al. Fernandez and Rainey's factors provide an understanding of the management processes involved in change management. It was noted at the start of this chapter that change management is a primary obstacle to success in program implementation. The guidelines that Fernandez and Rainey provide are intended to help promote an environment of trust and confidence for an organization moving forward with a planned implementation. Ultimately it is the trust and confidence of the employee that leads to organizational success when managing change. Gaining and maintaining that trust is the responsibility of managers.
Chapter 3 will explain in greater depth how managers can be taught to manage change in this manner. This research attempts to assess and quantify the successful aspects of public-sector administrative behavior in implementing complex programs in a generally hostile and bureaucratic environment. In support of this, a tool to assess management behavior is developed and, based on its results, interventions to improve organizational performance are designed.


Meta-analytic studies provide more fine-grained details about the cognitive benefits of exercise for the elderly and offer four kinds of good news. First, the effects can be large, reducing the risk of Alzheimer's disease by 45% and increasing cognitive performance by 0.5 SD. Second, though women may gain more than men, everyone seems to benefit, including both clinical and nonclinical populations. Third, improvements extend over several kinds of psychological functions, ranging from processing speed to executive functions. Fourth, executive functions, such as coordination and planning, appear to benefit most, a welcome finding given that executive functions are so important, and that both they and the brain areas that underlie them are particularly age sensitive. Finally, meta-analyses reveal the specific elements of exercise that benefit cognition. Relatively short programs of one to three months in length offer significant benefits, though programs of six months or longer are more beneficial. There seems to be a threshold effect for session duration, because sessions shorter than 30 minutes—while valuable for physical health—yield minimal cognitive gains. Cognitive benefits are enhanced by more strenuous activity and by combining strength training with aerobics.

In short, research validates the words of the second U.S. president, John Adams, who wrote, “Old minds are like old horses; you must exercise them if you wish to keep them in working order.” Fortunately, even brief counseling can motivate many patients to exercise, and the risks are minimal, although an initial medical exam may be warranted. Yet despite the many mental and medical benefits of exercise, only some 10% of mental health professionals recommend it. And who are these 10%? Not surprisingly, they are likely to exercise themselves.

Growing evidence suggests that food supplements offer valuable prophylactic and therapeutic benefits for mental health. Research is particularly being directed to Vitamin D, folic acid, SAMe, and—most of all—fish oil. Fish and fish oil are especially important for mental health. They supply essential omega-3 fatty acids, especially EPA and DHA, which are essential to neural function. Systemically, omega-3s are anti-inflammatory, counteract the pro-inflammatory effects of omega-6 fatty acids, and are protective of multiple body systems. Unfortunately, modern diets are often high in omega-6s and deficient in omega-3s. Is this dietary deficiency associated with psychopathology? Both epidemiological and clinical evidence suggest that it is. Affective disorders have been the ones most closely studied, and epidemiological studies, both within and between countries, suggest that lower fish consumption is associated with significantly, sometimes dramatically, higher prevalence rates of these disorders.

Likewise, lower omega-3 levels in tissue are correlated with greater symptom severity in both affective and schizophrenic disorders, a finding consistent with emerging evidence that inflammation may play a role in these disorders. However, epidemiological studies of dementia and omega-3 fatty acid intake are as yet inconclusive. Epidemiological, cross-sectional, and clinical studies suggest that omega-3 fatty acid supplementation may be therapeutic for several disorders. Again, depression has been the disorder most closely studied. Several meta-analyses suggest that supplementation may be effective for unipolar, bipolar, and perinatal depressive disorders as an adjunctive, and perhaps even as a stand-alone, treatment, although at this stage, supplementation is probably best used adjunctively. Questions remain about optimal DHA and EPA doses and ratios, although one meta-analysis found a significant correlation between dose and treatment effect, and a dose of 1,000 mg of EPA daily is often mentioned, which requires several fish oil capsules. There are also cognitive benefits of supplementation. In infants, both maternal intake and feeding formula supplementation enhance children's subsequent cognitive performance. In older adults, fish and fish oil supplements appear to reduce cognitive decline but do not seem effective in treating Alzheimer's disease. The evidence on omega-3s for the treatment of other disorders is promising but less conclusive. Supplementation may benefit those with schizophrenia and Huntington's disease as well as those exhibiting aggression in both normal and prison populations. In children, omega-3s may reduce aggression and symptoms of attention-deficit/hyperactivity disorder.

A particularly important finding is that fish oils may prevent progression to first-episode psychosis in high-risk youth. A randomized, double-blind, placebo-controlled study was conducted of 81 youths between 13 and 25 years of age who had subthreshold psychosis. Administering fish oil with 1.2 g of omega-3s once per day for 12 weeks reduced both positive and negative symptoms as well as the risk of progression to full psychosis. This risk was 27.5% in controls but fell to only 4.9% in treated subjects. Particularly important was the finding that benefits persisted during the nine months of follow-up after treatment cessation. Such persistence has not occurred with antipsychotic medications, which also have significantly more side effects. Although coming from only a single study, these findings suggest another important prophylactic benefit of fish oils. With one exception, risks of fish oil supplementation at recommended doses are minimal and usually limited to mild gastrointestinal symptoms. The exception occurs in patients on anticoagulants or with bleeding disorders, because fish oils can slow blood clotting. Such patients should therefore be monitored by a physician. Omega-3s modify genetic expression and as such are early exemplars of a possible new field of “psychonutrigenomics.” Because genetic expression is proving more modifiable, and nutrients more psychologically important than previously thought, psychonutrigenomics could become an important field. Space limitations allow only brief mention of another significant supplement, Vitamin D. Vitamin D is a multipurpose hormone with multiple neural functions, including neurotrophic, antioxidant, and anti-inflammatory effects. Vitamin D deficiency is widespread throughout the population, especially in the elderly, and exacts a significant medical toll; several studies suggest associations with cognitive impairment, depression, bipolar disorder, and schizophrenia.
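The risk figures quoted above can be translated into the standard effect measures clinicians use. As a quick illustration, using only the two percentages given in the text (27.5% progression in controls versus 4.9% with treatment):

```python
# Effect-size arithmetic for the fish-oil psychosis-prevention trial described
# above. Illustrative only: the inputs are the two percentages quoted in the
# text, not raw trial data.
import math

p_control = 0.275  # risk of progression to full psychosis, placebo arm
p_treated = 0.049  # risk of progression, fish-oil arm

arr = p_control - p_treated   # absolute risk reduction
rr = p_treated / p_control    # relative risk
nnt = math.ceil(1 / arr)      # number needed to treat, rounded up

print(f"Absolute risk reduction: {arr:.1%}")  # 22.6%
print(f"Relative risk:           {rr:.2f}")   # 0.18
print(f"Number needed to treat:  {nnt}")      # 5
```

On these figures, treating roughly five high-risk youths would be expected to prevent one progression to full psychosis, which helps explain why the finding is described as particularly important.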

Mental health professionals are therefore beginning to join physicians in recommending routine supplementation and, where indicated, testing patients' Vitamin D blood levels and modifying supplement levels accordingly. There are further benefits to supplementation and pescovegetarian diets. First, they have multiple general health benefits and low side effects. Second, they may ameliorate certain comorbid disorders—such as obesity, diabetes, and cardiovascular complications—that can accompany some mental illnesses and medications. A diet that is good for the brain is good for the body. As such, dietary assessment and recommendations are appropriate and important elements of mental health care.

We have barely begun to research the many implications of artificial environments, new media, hyperreality, and our divorce from nature. However, the problems they may pose can be viewed in multiple ways. Biologically, we may be adapted to natural living systems and to seek them out. This is the biophilia hypothesis, and multiple new fields—such as diverse schools of ecology, as well as evolutionary, environmental, and eco-psychologies—argue for an intimate and inescapable link between mental health and the natural environment. In existential terms, the concern is that “modern man—by cutting himself off from nature—has cut himself off from the roots of his own Being,” thereby producing an existential and clinical condition generically described as nature-deficit disorder. Clinicians harbor multiple concerns. Evolutionary and developmental perspectives suggest that children in environments far different from the natural settings in which we evolved, and to which we adapted, may suffer developmental disorders, with ADHD being one possible example.
Likewise, evolutionary theory and cross-cultural research suggest that for adults, artificial environments and lifestyles may impair mental well-being and also foster or exacerbate psychopathologies such as depression. Fortunately, natural settings can enhance both physical and mental health. In normal populations, these enhancements include greater cognitive, attentional, emotional, spiritual, and subjective well-being. Benefits also occur in special populations such as office workers, immigrants, hospital patients, and prisoners. Nature also offers the gift of silence.

Modern cities abound in strident sounds and noise pollution, and the days when Henry Thoreau could write of silence as a “universal refuge . . . a balm to our every chagrin” are long gone. Unfortunately, urban noise can exact significant cognitive, emotional, and psychosomatic tolls. These range, for example, from mere annoyance to attentional difficulties, sleep disturbances, and cardiovascular disease in adults and impaired language acquisition in children. By contrast, natural settings offer silence as well as natural sounds and stimuli that attention restoration theory and research suggest are restorative. As yet, studies of specific psychotherapeutic benefits are limited, and the benefits are sometimes conflated with those of other therapeutic lifestyle factors. Though further research is clearly needed, immersion in nature does appear to reduce symptoms of stress, depression, and ADHD and to foster community benefits. In hospital rooms that offer views of natural settings, patients experience less pain and stress, have better mood and postsurgical outcomes, and are able to leave the hospital sooner. Consequently, nature may be “one of our most vital health resources.” Given the global rush of urbanization and technology, the need for mental health professionals to advocate for time in, and preservation of, natural settings will likely become increasingly important.

Not surprisingly, good relationships are crucial to psychotherapy. Multiple meta-analyses show that they account for approximately one third of outcome variance, significantly more than does the specific type of therapy, and that “the therapeutic relationship is the cornerstone” of effective therapy. As Irvin Yalom put it, the “paramount task is to build a relationship together that will itself become the agent of change.” Ideally, therapeutic relationships then serve as bridges that enable patients to enhance life relationships with family, friends, and community.
The need may be greater than ever, because social isolation may be increasing and exacting significant individual and social costs. For example, considerable evidence suggests that, compared with Americans in previous decades, Americans today are spending less time with family and friends, have fewer intimate friends and confidants, and are less socially involved in civic groups and communities. However, there is debate over, for example, whether Internet social networking exacerbates or compensates for reduced direct interpersonal contact and over the methodology of some social surveys. Yet there is also widespread agreement that “the health risk of social isolation is comparable to the risks of smoking, high blood pressure and obesity…. [while] participation in group life can be like an inoculation against threats to mental and physical health”. Beyond the individual physical and mental health costs of greater social isolation are public health costs. In “perhaps the most discussed social science article of the twentieth century”, and in a subsequent widely read book, Bowling Alone: The Collapse and Revival of American Community, the political scientist Robert Putnam focused on the importance of social capital. Social capital is the sum benefit of the community connections and networks that link people and foster, for example, beneficial social engagement, support, trust, and reciprocity. Social capital seems positively and partly causally related to a wide range of social health measures—such as reduced poverty, crime, and drug abuse—as well as increased physical and mental health in individuals. Yet considerable evidence suggests that social capital in the United States and other societies may have declined significantly in recent decades. In short, relationships are of paramount importance to individual and collective well-being, yet the number and intimacy of relationships seem to be declining.
Moreover, “the great majority of individuals seeking therapy have fundamental problems in their relationships”. Clients’ relationships are a major focus of, for example, interpersonal and some psychodynamic psychotherapies. Yet clients’ interpersonal relationships often receive insufficient attention in clinical and training settings compared with intrapersonal and pharmacological factors. Focusing on enhancing the number and quality of clients’ relationships clearly warrants a central place in mental health care.

Involvement in enjoyable activities is central to healthy lifestyles, and the word recreation summarizes some of the many benefits. In behavioral terms, many people in psychological distress suffer from low reinforcement rates, and recreation increases reinforcement. Recreation may overlap with, and therefore confer the benefits of, other therapeutic lifestyle changes (TLCs) such as exercise, time in nature, and social interaction. Recreation can involve play and playfulness, which appear to reduce defensiveness, enhance well-being, and foster social skills and maturation in children and perhaps also in adults. Recreation can also involve humor, which appears to mitigate stress, enhance mood, support immune function and healing, and serve as a mature defense mechanism.

Successful degraders will be tested in other cell lines known to co-express A2AR and RNF43.

We demonstrate proof-of-concept that conjugation of an adenosine 2A receptor agonist to an RNF43-targeting antibody leads to efficient degradation of A2AR in vitro. Furthermore, we find that A2AR degradation is dependent on the site of conjugation on the antibody scaffold and the linker length between the small molecule and antibody. Overall, this strategy has the potential to convert any GPCR-directed small molecule into an effective degradation-based antagonist. Importantly, ADC-TACs can be generally applied to the targeted degradation of other multi-pass membrane protein classes. To determine whether it is possible to degrade GPCRs using ADC-TACs, A2AR was first targeted as a proof-of-concept. A2AR is expressed on the surface of natural killer and CD8+ T cells and is primarily involved in responding to levels of adenosine in the extracellular environment. Binding of adenosine agonizes A2AR to activate a downstream signaling cascade, resulting in an increase in intracellular cyclic AMP levels and overall immunosuppression. This is especially evident in the tumor microenvironment, where high levels of extracellular ATP decompose to generate excess adenosine. Along with this immunosuppressive role, A2AR agonism is thought to contribute to cell proliferation via the MAPK/ERK/JNK signaling pathways.

Given this role in cell proliferation, there has been an effort to develop antagonists to counteract agonism of the receptor in both the immune and cancer cell contexts. As such, A2AR is an appealing first target for applying our ADC-TAC approach toward the targeted degradation of GPCRs. To this end, the A2AR agonist CGS21680 was chosen for conjugation, as it had previously been conjugated onto an Fc domain, and the crystal structure indicated that our proposed conjugation site off the solvent-exposed carboxylic acid was unlikely to interfere with small molecule binding. Three analogs of CGS21680 with DBCO-n-NH2 linkers of varying lengths were synthesized. We hypothesized that conjugation of CGS21680 onto the variable domain of an antibody with the indicated linker lengths would be capable of spanning the interface between A2AR and RNF43. Final compounds were confirmed by LC/MS and purified by HPLC before conjugation onto previously described anti-RNF43 Fabs. To enable site-specific labeling, methionine conjugation chemistry involving oxaziridine labeling of methionines was chosen for its high selectivity. Sites were chosen that have previously been demonstrated to have both a high labeling percentage and stability to hydrolysis of the sulfimide for methionine mutations. To prevent off-target labeling, a methionine present in the H1 complementarity-determining region of the anti-RNF43 Fab was mutated to leucine. Two additional methionines that are buried in the Fab scaffold were also removed to limit unwanted labeling.

As seen by multi-point biolayer interferometry (BLI), removing the endogenous methionine had little effect on binding to the RNF43 Fc fusion. Five sites of methionine mutation were then each incorporated into the scaffold. Sites were chosen in the variable domain under the assumption that conjugation closer to the CDRs would require shorter linker lengths to the small molecule to allow simultaneous binding of the E3 ligase and target protein. These mutants were expressed as Fabs in the periplasm of the C43 E. coli strain. Multi-point BLI confirmed that introduction of methionine at the five sites had little to no effect on binding to the RNF43 Fc fusion. Next, we sought to determine whether the anti-RNF43 Fab mutants could be selectively labeled with the azide derivative of the piperidine-derived oxaziridine 8 and the CGS21680-DBCO conjugates. All mutants showed robust and rapid labeling with the oxaziridine reagent after a 30 min incubation, with little unlabeled antibody remaining in solution. Furthermore, all mutants showed robust click chemistry labeling after overnight incubation with CGS21680-DBCO, with near complete conversion to agonist-labeled product. The ability of oxaziridine- and agonist-labeled Fabs to bind the RNF43 Fc fusion was confirmed by multi-point BLI as compared to unlabeled Fabs. Furthermore, we find that the conjugation sites are highly stable at both room temperature and 37 °C after 3 days. These conjugates were used for testing A2AR degradation in vitro. To determine whether an ADC-TAC could degrade endogenous A2AR, a MOLT-4-derived line that co-expresses A2AR and RNF43 was used for degradation experiments.

Cells were dosed with either PBS control, 100 nM CGS21680 alone, or varying concentrations of ADC-TAC, and A2AR levels were analyzed by western blotting. Excitingly, dose-dependent degradation of endogenous A2AR was observed for multiple antibody-small molecule conjugates tested, with maximal degradation of 50-60% observed. Interestingly, a “hook effect,” characteristic of oversaturation by bispecific molecules, was observed, in which higher concentrations of conjugate led to decreased degradation of A2AR. We also observed a dependence on the site of antibody conjugation as well as on linker length. Specifically, the LC S7M and R66M conjugation sites show dose-dependent degradation while other labeling sites do not. Furthermore, the PEG4 linker length conjugates at these sites show successful degradation while the slightly longer PEG6 and PEG9 conjugates do not. Going forward, conjugates that show efficient degradation in preliminary experiments will be triaged for further validation and understanding of the degradation mechanism using pathway inhibitors. Future studies will also focus on expanding the ADC-TAC target scope to other therapeutically relevant GPCRs, such as the chemokine receptors CXCR4 and CCR5.

For conjugation with oxaziridine, 50 µM Fab was incubated with 5 molar equivalents of oxaziridine azide for 30 min at room temperature in phosphate-buffered saline (PBS). The reaction was quenched with 10 molar equivalents of methionine. The antibody was buffer exchanged into PBS and desalted using a 0.5-mL Zeba 7-kDa desalting column. Then, 10 molar equivalents of DBCO-CGS21680 was added and incubated at room temperature overnight. The agonist-labeled conjugate was desalted using the 0.5-mL Zeba 7-kDa desalting column to remove excess DBCO-CGS21680. Full conjugation at each step was monitored by intact mass spectrometry using a Xevo G2-XS Mass Spectrometer.
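The “hook effect” described above has a simple equilibrium explanation: at high conjugate concentrations, target and ligase are each saturated by separate conjugate molecules, so the productive ternary complex (target : conjugate : E3 ligase) declines. A minimal sketch, using the standard non-cooperative approximation and entirely hypothetical concentrations and affinities (none of these numbers come from the experiments above):

```python
# Minimal equilibrium model of the bell-shaped ("hook") dose-response of a
# bispecific degrader. Uses the common non-cooperative approximation
#   [TC] ~ T * L * D / ((Kd_T + D) * (Kd_L + D))
# where D is conjugate dose, T and L are target and ligase abundances, and
# Kd_T, Kd_L are the two binary affinities. All values are illustrative.

def ternary_complex(dose, target=1.0, ligase=1.0, kd_target=10.0, kd_ligase=10.0):
    """Approximate ternary complex abundance (arbitrary units) at a dose in nM."""
    return target * ligase * dose / ((kd_target + dose) * (kd_ligase + dose))

doses = [0.1, 1, 10, 100, 1000]  # nM, hypothetical dilution series
for d in doses:
    print(f"{d:>7.1f} nM -> ternary complex {ternary_complex(d):.4f}")
```

With these parameters the curve peaks near sqrt(Kd_T * Kd_L) = 10 nM and falls off on either side, reproducing the qualitative shape seen when higher conjugate concentrations lead to less A2AR degradation.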

Cells at 1 million cells/mL were treated with antibody-drug conjugate, agonist, or antagonist in complete growth medium. After 24 hrs, cells were pelleted by centrifugation. Cell pellets were lysed with RIPA buffer containing complete mini protease inhibitor cocktail on ice for 40 min. Lysates were spun at 16,000 × g for 10 min at 4 °C, and protein concentrations were normalized using a BCA assay. 4x NuPAGE LDS sample buffer and 2-mercaptoethanol were added to the lysates. Equal amounts of lysate were loaded onto a 4-12% Bis-Tris gel and run at 200 V for 37 min. The gel was incubated in 20% ethanol for 10 min and then transferred onto a polyvinylidene difluoride membrane. The membrane was blocked in PBS with 0.1% Tween20 + 5% bovine serum albumin (BSA) for 30 min at room temperature with gentle shaking. Membranes were co-incubated overnight with rabbit-anti-A2AR and mouse-anti-tubulin at 4 °C with gentle shaking in PBS + 0.2% Tween20 + 5% BSA. Membranes were washed four times with tris-buffered saline (TBS) + 0.1% Tween20 and then co-incubated with HRP-anti-rabbit IgG and 680RD goat anti-mouse IgG in PBS + 0.2% Tween20 + 5% BSA for 1 hr at room temperature. Membranes were washed four times with TBS + 0.1% Tween20, then washed with PBS. Membranes were first imaged using an Odyssey CLx Imager. SuperSignal West Pico PLUS Chemiluminescent Substrate was then added and the membranes imaged using a ChemiDoc Imager. Band intensities were quantified using Image Studio Software.

SARS-CoV-2 has emerged as a global health concern, and effective therapeutics are necessary to curb the COVID-19 pandemic. Many potential therapeutic options for treating COVID-19 have been explored, from small molecules, convalescent patient sera, decoy receptors, and neutralizing antibodies, to other protein scaffolds. In particular, antibodies are advantageous due to their specific and potent binding, demonstrated pharmacokinetics, and ability to be recombinantly produced and manufactured at scale.
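The band quantification step above boils down to two normalizations: each A2AR band is divided by its lane's tubulin loading control, and that ratio is then expressed relative to the PBS control lane. A quick sketch of that arithmetic, with invented band intensities standing in for an Image Studio export:

```python
# Hypothetical sketch of western blot quantification as described above:
# normalize A2AR signal to the tubulin loading control per lane, then compare
# against the PBS control lane to get percent degradation.
# All intensity values below are invented for illustration.

def percent_degradation(a2ar, tubulin, a2ar_ctrl, tubulin_ctrl):
    """Percent A2AR degraded in a treated lane relative to the control lane."""
    normalized = (a2ar / tubulin) / (a2ar_ctrl / tubulin_ctrl)
    return (1.0 - normalized) * 100.0

# PBS control lane (hypothetical intensities)
ctrl_a2ar, ctrl_tub = 12000.0, 30000.0

# ADC-TAC-treated lane (hypothetical intensities)
treated_a2ar, treated_tub = 5400.0, 27000.0

pct = percent_degradation(treated_a2ar, treated_tub, ctrl_a2ar, ctrl_tub)
print(f"{pct:.0f}% of A2AR degraded")  # (0.2 / 0.4) remaining -> 50% degraded
```

Normalizing to tubulin first guards against unequal loading between lanes, which is why the protocol co-incubates both antibodies on the same membrane.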
SARS-CoV-2 antibodies have been derived from several sources, including B-cells of convalescent patients, animal immunization, prior coronavirus infections, and synthetic libraries or de novo design. Most of the antibodies reported to date potently target the receptor binding domain (RBD) in the trimeric Spike protein on the surface of SARS-CoV-2, which is highly immunogenic and is the key protein that mediates cellular entry via interaction with the host angiotensin-converting enzyme II (ACE2) receptor. However, given the widespread global impact of this pandemic and limitations in biologic manufacturing capacities, means to further increase the potency of these antibodies, and thereby decrease the dose required, will be critical in meeting the global demand for therapeutics.

Additionally, testing different scaffolds and targeting mechanisms against coronavirus could lead to a better understanding of the most effective modalities and ultimately to a more resilient therapeutic arsenal against viral infections. Following identification of an initial candidate antibody, various methods for improving antibody affinity and potency are typically employed, each with its advantages and drawbacks. Affinity maturation using mutagenesis or library display is a powerful tool to improve candidate antibodies and can screen large sequence spaces. However, this process is labor intensive and may result in an antibody sequence with altered biophysical or pharmacokinetic properties that requires additional optimization. A parallel strategy to improve potency is to target multiple epitopes, either by engineering bi-specific or multi-specific molecules or by combining multiple antibodies into a cocktail. Targeting multiple epitopes has the added benefit of decreasing the likelihood of viral escape and resistance, and has shown promise as a powerful viral immunotherapy against viruses such as influenza and HIV. Indeed, several cocktails and engineered multi-specific binders have been shown to be effective against SARS-CoV-2. Recently, our lab demonstrated the benefits of linking multiple neutralizing epitopes on the SARS-CoV-2 Spike using bi-paratopic binders derived from variable heavy (VH) domains. By linking multiple neutralizing VH together in tandem, we were able to improve antibody potency through avidity. Here we explored whether linking non-neutralizing binders to neutralizing binders in a bi-specific scaffold could be used as a means to rapidly improve neutralization potency. Using phage display, we identified Fabs that bind RBD but do not block ACE2 binding, and then assembled them in a knob-in-hole bi-specific IgG scaffold with VH binders that block ACE2.
These VH/Fab bi-specifics have the additional advantage of avoiding the light-chain mispairing problem common to bi-specific IgGs that utilize Fabs on both arms. Remarkably, the resulting VH/Fab bi-specifics are ~20 to 25-fold more potent in neutralizing both pseudotyped and authentic SARS-CoV-2 virus than the mono-specific bi-valent VH-Fc or IgG alone or as a cocktail. This effect is epitope dependent, illustrating the unique geometry that bi-specific VH/Fab IgGs could capture on the trimeric Spike protein. Our findings highlight how targeting multiple epitopes within a single therapeutic molecule, both neutralizing and non-neutralizing, can confer significant gains in efficacy, and could potentially be generalized to other therapeutic targets to rapidly enhance antibody potency. Recently, we reported the identification and engineering of human variable heavy chain binders against SARS-CoV-2 Spike from an in-house VH-phage library, using a masked phage selection strategy to enrich for binders to Spike-RBD that compete with ACE2. From this process, we identified VH domains against two epitopes that bind within the ACE2 binding site of SARS-CoV-2 Spike. In the bi-valent VH-Fc format, both site A and site B binders block binding of ACE2 to Spike and neutralize pseudotyped and authentic SARS-CoV-2. VH domains that bind outside of the ACE2 binding site were not identified with this selection campaign. Here, we utilized an in-house Fab-phage library to identify unbiased Fab binders that recognize Spike-RBD. Briefly, for each round of selection, the Fab-phage pool was pre-cleared with biotinylated Fc immobilized on streptavidin (SA)-coated magnetic beads before incubating with SA-beads conjugated with biotinylated Spike-RBD-Fc. After 3-4 rounds of selection, significant enrichment was observed for Fab-phage that bound Spike-RBD-Fc over Fc alone.
Individual phage clones were isolated and phage ELISA was used to characterize binding to Spike-RBD-Fc alone and in complex with ACE2-Fc. We hypothesized that Fab-phage that bind similarly to Spike-RBD-Fc alone or when masked with ACE2-Fc would bind an epitope outside of the ACE2 binding site and would therefore occupy a unique epitope from the VH. From here, we identified over 200 unique Fab-phage sequences that bound Spike-RBD-Fc, a majority of which did not bind at the Spike-ACE2 interface. We characterized a subset of these and identified two lead Fabs, C01 and D01, which bound Spike-RBD-Fc and the trimeric Spike full ectodomain (Secto) with high affinity. Conversion of these Fabs into a traditional bi-valent IgG scaffold further improved affinity to Secto to single-digit nanomolar KD. The increased affinity of the IgG compared to the Fab is driven by the avidity of the two binding arms. Due to the challenges of modeling the interaction between a bi-valent binder and a conformationally dynamic, trimeric Spike, we have reported affinities as apparent KDs derived from a 1:1 binding model of the data.
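As a back-of-the-envelope illustration of the 1:1 binding model used to report apparent KDs, the equilibrium fraction of target bound at a given binder concentration is [L]/(KD + [L]). The sketch below is a hypothetical helper, not code or data from the study; the concentrations and KD values are invented purely to show how an avidity-driven drop in apparent KD translates into higher fractional occupancy:

```python
def fraction_bound(ligand_nM: float, kd_nM: float) -> float:
    """Equilibrium fractional occupancy under a simple 1:1 binding model."""
    return ligand_nM / (kd_nM + ligand_nM)

# Hypothetical numbers for illustration only: a Fab with a 50 nM apparent KD
# versus an IgG whose avidity lowers the apparent KD to 5 nM, both at 10 nM.
fab_occupancy = fraction_bound(ligand_nM=10.0, kd_nM=50.0)   # ~0.17
igg_occupancy = fraction_bound(ligand_nM=10.0, kd_nM=5.0)    # ~0.67
print(f"Fab: {fab_occupancy:.2f}, IgG: {igg_occupancy:.2f}")
```

The same hyperbolic form underlies the apparent-KD fits mentioned above; only the interpretation changes when avidity is present.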

Sleep hours significantly decreased for both groups compared to before the pandemic

The JD-R theory has been developed to broadly involve multiple and various job demands, job resources, and worker health outcomes, applicable to any occupation. Among nurses, evidence suggests that high psychological and physical job demands are related to the intention to quit nursing, emphasizing the relevance of the JD-R theory to this population. Additional significant findings from research applications of the JD-R theory in the COVID-19 pandemic context indicate a negative association between the perception of work organization support and post-traumatic stress disorder symptoms in a sample of U.S. nurses working in COVID-19 hospital units. As an extension of the JD-R theory to our study of prison nurses, the domain of job demands includes efforts, work hours, and the types of shift assignment and patient care. The job resources domain encompasses PPE supply and work-related rewards. The well-being outcome measures are the psychological characteristics of anxiety, depression, and post-traumatic stress symptoms. Thus, research applying the JD-R theory to correctional nurses working during the COVID-19 pandemic will further characterize the job demands, resources, and outcomes unique to this population and compounded by the pandemic.

Prior to the COVID-19 pandemic, published reviews related to nurses and other health professionals working in correctional facilities identified occupational stressors including security prioritization, conflicts, fear, job demands, burnout, stress, and secondary trauma. Within North America, older U.S. studies have demonstrated moderate and high work-related mean stress levels among correctional nurses. Other North American pre-pandemic studies of correctional worker mental health in Canada have included nurses, but within healthcare worker subgroups. A recent study of U.S. correctional workers during the COVID-19 pandemic found that correctional healthcare workers reporting any degree of depression, anxiety, burnout, and post-traumatic stress symptoms ranged from 37% to 50%. However, this study was concentrated on correctional facilities located in eastern U.S. states. To the best of our knowledge, there are no available scientific reports focused solely on prison nurses and their working conditions and well-being during the COVID-19 pandemic. The prevalence of COVID-19 cases in U.S. correctional facilities has been significantly higher than that of the general population. Based on the facilities’ available reports, which vary in data quality, there were 42,107 COVID-19 cases among incarcerated individuals in U.S. federal and state prisons between March and June 2020, a rate five-and-a-half-fold greater than that of the U.S. population.

There were 13,781 documented or reported COVID-19 cases specifically in California correctional institutions between September and November 2020, a rate on average over eight-and-a-half-fold greater than that of the aggregated Californian population. Research on healthcare workers and nurses working through the COVID-19 pandemic has demonstrated elevated levels of occupational and psychological concerns. Studies conducted in Europe and Asia have identified elevated levels of occupational stress, insomnia, workload, anxiety, and depression. Additionally, effort and over-commitment have been associated with anxiety and depression. Literature reviews and meta-analyses of international healthcare workers have reinforced these individual study findings, with pooled prevalence rates ranging from 43% to 56.5% for stress, 40% to 44% for sleep issues, and 18.75% to 48% for post-traumatic stress. Qualitative and quantitative research on U.S. nurses during the COVID-19 pandemic has recognized occupational challenges regarding patient care, increased workload, and inadequate personal protective equipment (PPE), as well as psychological outcomes including post-traumatic stress, depression, and anxiety. However, these studies heavily focus on hospital settings and are mostly concentrated in the U.S. Northeast, South, and Midwest. A December 2020 national survey that provided state-specific data reported that the majority of California nurses felt exhausted, overwhelmed, and anxious, with 52% expressing neutrality or disagreement with the statement that their workplace valued employee safety and health. Yet, there was minimal representation of correctional nurses in California.

In California correctional settings, the Legislative Analyst’s Office 2019 report acknowledged the California Department of Corrections and Rehabilitation’s use of mandatory overtime for nursing staff, despite previous state agreement to decrease this practice. The underrepresentation of correctional nurses in California and the combined challenges intrinsic to the correctional work setting and to the COVID-19 pandemic warrant further investigation. To the best of our knowledge, this is the first study in the western U.S. region to exclusively target prison nurses and their working conditions and well-being in the context of the COVID-19 pandemic, with comparison to a non-correctional worker group. This study aims to evaluate a group of California prison nurses and compare their work characteristics and well-being outcomes with those of a community nurse group. Recruitment for the community group occurred through nursing organization websites during an approximately 1.5-month survey window between late May and early July 2020. Most of the community nurse participants were in California. The prison nurse group was subsequently enrolled through collaboration with healthcare administrators at a California state prison. The survey window for the prison nurse group was about two months, from early September to late November 2020. For both groups, eligible nurses had to have current paid employment in a healthcare setting since the start of the COVID-19 pandemic. Informed consent was obtained from participants at the initiation of the online survey. Each participant received a USD 10 gift card incentive.

This study was reviewed and approved by the University of California, Los Angeles Institutional Review Board, and followed the Declaration of Helsinki guidelines, as well as the Strengthening the Reporting of Observational Studies in Epidemiology reporting guideline. The “Survey of Nurses’ Work and Well-being during the COVID-19 Outbreak” was administered online to both groups of participants. The survey included validated instruments, Likert-type scales, visual analog scales, and numeric and free-text responses to measure the working conditions and well-being of nurses before and during the pandemic. Specifically, the survey focused on three domains of working conditions: psychosocial characteristics, organizational work characteristics, and COVID-19 working characteristics. The survey also focused on three domains of psychological well-being: sleep characteristics, psychological characteristics, and post-traumatic stress disorder. Pre-pandemic recall and current reports were requested for the variables of weekly work and sleep hours and night shift assignment. All other variables were one-time measurements. The Effort–Reward Imbalance (ERI) scale was used to measure psychosocial factors at work, consisting of 10 items: 3 for effort and 7 for reward. The effort score ranges from 3 to 12 and the reward score ranges from 7 to 28, with high scores corresponding to high magnitudes of effort and reward. The E–R ratio score ranges between 0.25 and 4.00, with scores above one suggesting high work stress. The Cronbach’s alpha coefficients were 0.80 for the effort sub-scale and 0.78 for the reward sub-scale. The ERI measure has been widely used among nurses in Europe, as well as healthcare workers, including nurses, in the United States. During the COVID-19 pandemic, several studies used the ERI for measuring work stress in front-line healthcare workers.
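The E–R ratio described above follows from the unequal number of items on the two sub-scales: with 3 effort items and 7 reward items, the standard correction factor is 3/7, which reproduces the stated 0.25–4.00 range at the score extremes. A minimal sketch of this arithmetic (an illustrative helper, not the study's analysis code):

```python
def er_ratio(effort: int, reward: int) -> float:
    """Effort-reward imbalance ratio with a correction factor for the
    unequal item counts (3 effort items vs. 7 reward items)."""
    if not 3 <= effort <= 12:
        raise ValueError("effort score must be within 3-12")
    if not 7 <= reward <= 28:
        raise ValueError("reward score must be within 7-28")
    correction = 3 / 7
    return effort / (reward * correction)

# The extremes reproduce the 0.25-4.00 range stated for the scale,
# and values above 1 suggest high work stress.
print(er_ratio(3, 28))   # 0.25 (lowest imbalance)
print(er_ratio(12, 7))   # 4.0  (highest imbalance)
```

A nurse scoring 9 on effort and 14 on reward, for instance, would have a ratio of 1.5, above the high-stress threshold of one.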
The organizational work characteristics included work years, as well as pre-pandemic and current weekly work hours and night shift assignment. The Patient Health Questionnaire-4 (PHQ-4) measured depression and anxiety. The PHQ-4 features two two-item sub-scales to measure depression and anxiety symptoms over the past month. Each sub-scale’s score ranges from 0 to 6, with higher numbers relating to higher levels of depression and anxiety. For both conditions, scores of 3 and above represent positive cases of depression and anxiety. Both sub-scales were reliable, with a Cronbach’s alpha of 0.82 for depression and 0.90 for anxiety. Previous studies utilized this brief instrument during the COVID-19 pandemic among a hospital nurse sample in Romania, and a hospital nurse and nurse assistant sample in the United States. Post-traumatic stress disorder (PTSD) symptoms over the past month were measured with a six-item screening instrument. Scores range from 6 to 30, with elevated scores reflecting elevated PTSD symptoms. A score of 14 or above indicates PTSD. The Cronbach’s alpha for this scale was 0.88. This instrument has been used in a United Kingdom healthcare worker sample during the COVID-19 pandemic. Participants with partial responses were included using pairwise deletion, with the omission of non-responses per variable rather than the implementation of missing value replacements.
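The scoring rules above are simple enough to sketch directly. The helper below is illustrative only (the instruments' item wording is not reproduced, and the item lists are hypothetical inputs): PHQ-4 sub-scale sums of 3 or more flag probable depression or anxiety, and a six-item PTSD screen total of 14 or more flags probable PTSD:

```python
def phq4_flags(anxiety_items, depression_items):
    """Score PHQ-4: two 2-item sub-scales (each item 0-3), cutoff >= 3."""
    return {
        "anxiety_positive": sum(anxiety_items) >= 3,
        "depression_positive": sum(depression_items) >= 3,
    }

def ptsd6_flag(items):
    """Score the 6-item PTSD screen: totals range 6-30, cutoff >= 14."""
    return sum(items) >= 14

# Hypothetical responses for illustration.
print(phq4_flags([2, 2], [1, 0]))       # anxiety positive, depression not
print(ptsd6_flag([3, 3, 2, 2, 2, 2]))   # total 14 -> True
```

In the study's terms, a participant counted as a "case" on a measure exactly when the corresponding flag here would be true.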

Data were analyzed with Mann–Whitney U and t-tests for continuous data, and Fisher’s exact and Chi-Square tests for categorical data. The Shapiro–Wilk test checked for normal distributions. Wilcoxon signed-rank and McNemar’s tests compared pre-pandemic and current data. Means, standard deviations, and ranges were calculated. Calculations and analyses were conducted using SAS 9.4. Among the 114 participants who originally submitted the survey, 5 participant entries were removed due to lack of consent or non-response on all items, resulting in a total sample of 109 with at least partial responses. Of this total sample, 79.82% completed the entire survey, with similar completion rates between the prison and community groups. The analysis incorporated the remaining participants’ partial responses. Table 1 shows the demographic characteristics of the study participants, with no significant differences between the two groups for gender, race, marital status, and age. The majority of participants in both groups were female and married or partnered, with a mean age in the 40s. The largest racial subgroup was non-Hispanic White for both groups. Table 2 indicates the organizational characteristics before and during the pandemic. Both prison and community nurses had mean work years of about 15 years, with a minimum of 2 years, without significant differences. The pre-pandemic and current weekly mean hours of work were significantly higher for prison nurses compared to community nurses. For prison nurses, work hours significantly increased during the pandemic. Although there were no significant differences in night shift assignment between the groups before the pandemic and currently, the number of prison nurses working any night shifts significantly increased after the start of the pandemic.
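McNemar's test, used above to compare paired pre-pandemic and current categorical responses (e.g., any night-shift assignment), reduces to arithmetic on the discordant pairs. One common (uncorrected) chi-square form of the statistic is sketched below with hypothetical counts, not the study's data:

```python
def mcnemar_statistic(b: int, c: int) -> float:
    """Classic (uncorrected) McNemar chi-square statistic.

    b = pairs that changed no -> yes, c = pairs that changed yes -> no.
    Compared against the chi-square critical value with 1 df
    (3.84 at alpha = 0.05)."""
    if b + c == 0:
        raise ValueError("no discordant pairs; test is undefined")
    return (b - c) ** 2 / (b + c)

# Hypothetical counts: 12 nurses picked up night shifts during the
# pandemic while 2 dropped them.
stat = mcnemar_statistic(b=12, c=2)
print(f"chi-square = {stat:.2f}, significant = {stat > 3.84}")
```

Statistical packages (including SAS, which the study used) apply the same discordant-pair logic, often with a continuity correction or an exact binomial variant for small counts.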
There were no significant differences in psychosocial work stress experiences in terms of effort-reward imbalance between the groups, but stress levels in both groups were relatively high. Table 3 reports working conditions during COVID-19. Prison nurses reported significantly more direct COVID-19 patient contact, and had more requests to work, or had worked, in other departments. However, significantly more prison nurses perceived adequate PPE supply and had COVID-19 testing compared to community nurses. Significantly more community nurses expressed fear of contracting COVID-19 at work, and had a higher level of general fear towards the COVID-19 outbreak. Table 4 focuses on the psychological well-being of the study participants. The prison nurses’ mean daily sleep hours were significantly lower than those of community nurses both before the pandemic and currently. Mean scores for total insomnia and the sleep-related items indicating “trouble falling asleep” and “waking up at night” were elevated in the prison group compared to the community group, but these differences were not statistically significant. The two groups did not significantly differ in their mean PTSD scores, but both mean scores were above the cutoff score of 14. The percentage of nurses with a PTSD score equal to or above 14 was 49.02% in the prison group and 69.05% in the community group. Although depression and anxiety mean scores were more elevated in the community group, they did not significantly differ from those of the prison group. For both groups, the depression and anxiety mean scores were below the cutoff, and the prevalence of depression and anxiety cases was low. Significant findings from this study provide insight into prison nurses’ intensified challenges, including longer work hours, fewer sleep hours, greater COVID-19 patient care demand, higher perceived adequacy of PPE supply, and lower pandemic-related fear levels compared to community nurses.
Although not statistically different, the occupational stress and mental distress results of prison nurses and community nurses are concerning, and reflect the pandemic context. The weekly work hours of the prison nurse study participants contrasted with those of the U.S. nurse population. Among the estimated U.S. population, 58.7% of nurses worked 32 to 40 h weekly between February and June 2020. While the community group’s mean pre-pandemic and current weekly hours were within this range, those of the prison group exceeded the national population estimate. Additionally, this finding of long working hours among the prison nurse study participants may be related to the previously mentioned issue of mandated overtime among some California state institutions.

Results of these evaluations show high baseline vegetable preferences among participating children

UC ANR internal program coordination. Additional information gathered from the questionnaire included a more in-depth description of UC ANR’s internal programs and activities. Thirteen of the 17 respondents indicated that their counties have an active Master Gardener Program, and 10 indicated that their master gardeners work with school or after-school garden programs or Farm to School programs. This internal program coordination was cited as an important factor for implementing successful school and after-school garden programs and Farm to School programs. These results suggest that the multidisciplinary and highly collaborative UC Cooperative Extension network has the potential to provide an important framework for successful school gardens, after-school gardens and Farm to School programs. Highlights from UCCE-evaluated programs are provided below. A unique UCCE program in Contra Costa County brings many young school-aged children, especially those in grades 1 and 2, to an edible garden at the county fairgrounds. The site is also home to an agriculture museum. Approximately 1,700 students, 67 teachers and many parents visited the site during the 2010-2011 school year. Process evaluation, which documents and evaluates the development of a program from its inception, has demonstrated positive attitudes toward the program among teachers, and results support the concept that teachers value the emphasis on local agriculture in the education process. However, these evaluations lack control groups of children who did not visit the edible garden, making it difficult to draw authoritative conclusions about the program’s success.

UCCE Contra Costa and Nevada counties collaborated to initiate the UC Sustainable Community Project, a federally funded Children, Youth and Families at Risk Sustainable Community Project that will begin participant enrollment in February 2012. A key element of the project is place-based learning, including at least one field trip to a farm. Both counties are partnering with master gardeners, and all intervention sites have gardens. The program will use the 4-H Teens as Teachers model to deliver the majority of the education to the younger participants. The short-term goals of the program include improvement in youth knowledge about nutrition, gardening, agriculture, cooking and health; improvement in the ability to act on this knowledge; and improvement in physical fitness. The program leaders expect to provide participants with the skills to grow and cook their own food to support their personal health goals. As this is a nationally funded project, evaluation tools have already been developed, and a research team at Arizona State University will analyze pre- and post-intervention data. An exciting aspect of this project is that it supports the recent Institute of Medicine call for innovative techniques, integrating gardening and Farm to School programs with new technologies. For example, teens will use iPad 2 applications to identify and map safe routes to school and will share their findings by teaching children about walking and biking paths in their communities. Several education lessons will be delivered using accredited applications, and all assessment data will be collected with “clicker” technology, which uses wireless student response pads that allow instructors to instantly assess how well students understand the material presented.

In San Bernardino County, a team consisting of UCCE staff and academic personnel from the Fielding Graduate University Department of Psychology used a multidisciplinary approach to evaluate the impact of school gardens on nutrition knowledge and psychological parameters including attention and mood. Students in first- and second-grade classrooms were assessed pre- and post-intervention for nutrition knowledge using the Eating Healthy from Farm to Fork: Promoting School Wellness assessment tool. Teachers were trained to deliver this curriculum in its entirety and to use the 4-H gardening curriculum See Them Sprout. In addition, students spent 30 minutes in the garden each Friday. At the end of the 14-week semester, the post-test results showed a statistically significant increase in fresh fruit and vegetable knowledge. A unique aspect of this project was the attention given to the psychological impact of the school garden. Children worked in the garden for only one semester, allowing investigators to use a cross-over design to compare gardening and non-gardening children both within and between groups. Assessments of mood and attention were conducted before and after the 30-minute garden session and before and after the matched control non-gardening activity sessions each Friday over two semesters. The following semester, this procedure was repeated with the group assignments reversed. Assessments of self-efficacy and well-being were conducted with individual students, using longer measures at the beginning and end of each semester.

Results of this study are pending analysis. While randomized controlled interventions are needed, studies using an observational pre- and post-test design can still be highly informative, especially with respect to process evaluation. UCCE in Stanislaus and Merced counties has taken a leadership role in implementing Farm to School programs that reach over 3,000 children per year. Taste tests, teacher evaluations and teacher interviews were conducted to determine taste preferences and nutrition-related behavior changes in children participating in Farm to School programs. These evaluations showed high baseline taste preferences for fruits and vegetables, likely the result of prior exposure to school garden and Farm to School programs, as these have been operational for several years. Given these high baseline preferences, no improvements in children’s taste preferences were observed. While the finding that children participating in Farm to School programs prefer fruits and vegetables is encouraging, the information we gain is limited, reinforcing the need for randomized controlled studies. Without controls, it is impossible to conclude that the program being evaluated actually resulted in the measured outcomes. With controls, however, researchers can sort out outcomes that might have happened by chance or simply as a result of other factors in the environment. Similarly, randomization helps researchers rule out the possibility that outcomes were the result of one study site simply being more determined to make changes. The Shaping Healthy Choices Program uses a randomized controlled design to determine the outcomes of a multi-component nutrition education program on student health-related outcomes.
Findings will help ascertain the impact of a coordinated, comprehensive nutrition education program on students’ dietary behavior and health status. While UCCE has implemented and partially evaluated Farm to School and garden-enhanced nutrition education programs, it is important to integrate these strengths into a research and education program that incorporates the constructs of the socio-ecological model. Consistent with this, the ANR Healthy Families and Communities strategic plan addresses childhood obesity prevention with a multidisciplinary approach that involves a statewide network of researchers and educators creating, developing and applying knowledge in agricultural, natural and human resources. Funded by the ANR Competitive Grants Program, the research and extension project A Multi-Component, School-Based Approach to Supporting Regional Agriculture, Promoting Healthy Behaviors, and Reducing Childhood Obesity builds upon the multidisciplinary, comprehensive approach to investigate dietary and lifestyle habits with the greatest potential for sustainable childhood obesity prevention. This 4-year study will use the socio-ecological model to implement and measure the effectiveness of an integrated, school-based, multi-component intervention. The long-term goal of the Shaping Healthy Choices Program is to prevent childhood obesity by improving students’ diets and increasing physical activity. A collaborative research team will work with four schools in two counties to develop a system-wide, sustainable program to achieve the following objectives: increase availability, consumption and enjoyment of fruits and vegetables; improve dietary patterns and increase physical activity consistent with the 2010 U.S. Dietary Guidelines for Americans; improve science-processing skills to sustain patterns learned and adopted through participating in the program; promote positive changes in the school environment to support dietary and exercise patterns and student health; and facilitate the development of an infrastructure to sustain the program beyond the funding period.

To document student outcomes and environmental changes resulting from this multi-component, multidisciplinary approach to obesity prevention, a randomized, controlled, double-blind intervention will be implemented for one academic year through collaboration among faculty and staff from UC Davis, UC ANR, the Agricultural Sustainability Institute at UC Davis and the UC Davis Betty Irene Moore School of Nursing. The factors contributing to obesity are numerous and interrelated. Meeting the complex challenges of obesity prevention will require extensive and diverse collaboration with shared responsibility and common goals. The study will explore and document the effectiveness of an interdisciplinary team in developing comprehensive nutrition and lifestyle education programs that can be delivered throughout the state. In the future, these teams will include UC faculty; UCCE nutrition and youth development specialists and advisors, and Agricultural Sustainability Institute staff; food and agriculture industry representatives; public school educators, administrators, after school providers and families; community members; health practitioners; farmers; and state/county agency nutrition, food science, agriculture and health-care representatives — all developing coordinated programs that can be delivered throughout the state.

Introduced in 2004 at the USENIX Symposium on Operating Systems Design and Implementation, the MapReduce model is inspired by the functional programming construct "map". As such, MapReduce consists of the application of uniform "map" and "reduce" functions to a set of input elements sub-divided into multiple chunks to be processed by machines that are part of a distributed computing system. The appeal of the model stems from the fact that it absolves the programmer from the burden of input management, parallelism, and synchronization constraints. MapReduce programs are written as single-node programs by the user, and are subsequently parallelized by the framework.
The details of input distribution, synchronization of necessary data structures, as well as handling machine failures, are all abstracted away from the user, and hidden in the paradigm itself. The MapReduce model in its most popular form, Hadoop, uses the Hadoop Distributed File System (HDFS) to serve as the Input/Output manager and fault-tolerance support system for the framework. The use of Hadoop, and of the HDFS, is however not directly compatible with HPC environments such as NERSC, the NY state Grid, the Open Science Grid, and TeraGrid. This is so because Hadoop implicitly assumes dedicated resources with non-shared disks attached. At NERSC’s Magellan cluster, the system administrator has isolated a part of the large cluster for use with Hadoop. This isolation not only limits the resources available to MapReduce programs, but also produces performance penalties, as HDFS on top of an underlying file system introduces performance-hampering layers of indirection. In contrast, applications using the rest of the cluster interact directly with GPFS. Furthermore, even though Hadoop has successfully worked at large scale for a myriad of applications, it is not suitable for scientific applications that rely on POSIX-compliant file systems in the grid/cloud setting, as the HDFS is not POSIX-compliant. In this paper, we investigate the use of the General Parallel File System (GPFS) and the Network File System (NFS) for large-scale data support in a MapReduce context. For this purpose we picked three application groups: two of them, UrlRank and "Distributed Grep", from the Hadoop repository, and a third of our own, XML parsing of arrays of doubles, tested here under induced node failures for fault-tolerance testing purposes. The first two, being provided with the Hadoop application package, have been shown to provide scalable performance with Hadoop.
Even as we limit our evaluation to NFS and GPFS, our proposed design is compatible with a wide set of parallel and shared-block file systems, such as Lustre, pNFS, GFS2, and Oracle Cluster FS. We present the diverse design implications for a successful MapReduce framework in HPC contexts, and show performance data collected from the evaluation of this approach to MapReduce alongside Apache Hadoop at the National Energy Research Scientific Computing Center (NERSC) Magellan cluster. Hadoop uses the HDFS, inspired by the GFS, for various background tasks such as input management, distribution, locality, output collection, and performance, but also fault tolerance. As a data manager, the HDFS is tasked with dividing the input among participating nodes in the cluster, keeping a myriad of accounting tallies, including chunk size, location, and duplication counts. The HDFS manages input distribution and collection, providing the user with an interface whose role is to serve chunks of given data files to cluster nodes. Among its chief advantages, the Hadoop Distributed File System provides input locality by enabling nodes hosting input shards to apply their processing on those chunks, rather than on remotely stored data. This design provides significant performance benefits as the computation is brought to the data, rather than the data to the computation. In line with its data management role, the HDFS collects output data processed by nodes, and “shuffles” them for the reducer to operate on.
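The map/shuffle/reduce contract described above — the user supplies single-node `map` and `reduce` functions, and the framework handles chunking, grouping by key, and aggregation — can be sketched as a tiny single-process simulation. This is illustrative plain Python, not Hadoop API code; the word-count functions are the classic example, and the `mapreduce` driver plays the role of the framework:

```python
from collections import defaultdict
from itertools import chain

def map_fn(chunk: str):
    """User-supplied map: emit (word, 1) pairs for one input chunk."""
    for word in chunk.split():
        yield word.lower(), 1

def reduce_fn(key: str, values):
    """User-supplied reduce: aggregate all counts emitted for one key."""
    return key, sum(values)

def mapreduce(chunks, map_fn, reduce_fn):
    # "Shuffle" phase: group intermediate pairs by key, as the framework
    # would after collecting mapper output from all nodes.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map_fn(c) for c in chunks):
        groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

chunks = ["the quick brown fox", "the lazy dog", "the fox"]
print(mapreduce(chunks, map_fn, reduce_fn))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

In a real deployment, the chunk list comes from the file system (HDFS, or GPFS/NFS in the design evaluated here), each `map_fn` invocation runs on the node holding its chunk, and the shuffle moves intermediate pairs across the network to the reducers.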

Traditional multi-spectral indices have limitations with assessing water status

Most ET estimation using UAVs is based on satellite remote sensing methods. One-source energy balance, high-resolution mapping of evapotranspiration, machine learning (ML), artificial neural networks (ANNs), two-source energy balance (TSEB), dual-temperature-difference, the Surface Energy Balance Algorithm for Land (SEBAL), and Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) are introduced in this section. The discussed ET estimation methods with UAVs and their advantages and disadvantages are summarized in Table 3. As mentioned earlier, this article is not intended to provide an exhaustive review of all direct or indirect methods that have been developed for ET estimation, but rather to provide an overview of ET estimation with UAV applications. Therefore, only those methods which have already been used with the UAV platform are discussed. Machine learning techniques and ANN models have already been used for estimating hydrological parameters and ecological variables. Due to ML’s ability to capture non-linear characteristics, many research results suggest that machine learning methods can provide better ET estimates than empirical equations based on different meteorological parameters.
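The data-driven approach described above amounts to learning a mapping from meteorological inputs to ET. The toy sketch below fits a linear model by stochastic gradient descent on entirely synthetic numbers (hypothetical air temperature and net radiation paired with hypothetical reference-ET values); real studies use far richer features and non-linear learners such as random forests or ANNs:

```python
# Toy illustration: fit ET ~ w1*temp + w2*radiation by stochastic
# gradient descent. All values are synthetic, for illustration only.
data = [  # ((air temp degC, net radiation MJ/m2/day), ET mm/day)
    ((20.0, 10.0), 3.5),
    ((25.0, 12.0), 4.3),
    ((30.0, 18.0), 5.7),
    ((22.0, 16.0), 4.6),
]

w1 = w2 = 0.0
lr = 0.0005
for _ in range(20000):
    for (t, r), et in data:
        err = w1 * t + w2 * r - et  # prediction error for one sample
        w1 -= lr * err * t
        w2 -= lr * err * r

pred = w1 * 28.0 + w2 * 16.0
print(f"predicted ET: {pred:.2f} mm/day")
```

The synthetic data were generated from ET = 0.1·temp + 0.15·radiation, so the fit converges toward those weights; the point of ML methods in the literature is precisely that they also capture non-linear relationships this linear sketch cannot.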

Therefore, artificial neural networks were used to improve the estimation of spatial variability of vine water status. The resulting emissivities were incorporated into the TSEB model to analyze their effects on the estimation of instantaneous energy balance components against ground measurements. Soil salinization causes a significant reduction in the growth and productivity of glycophytes, including major crops. In general, soil salinity is widespread in arid and semi-arid regions, particularly on irrigated land in such areas. However, saline soil is also a serious problem in humid regions such as South and Southeast Asia, where encroachment of sea water occurs through estuaries and groundwater, especially in coastal regions. Approximately 7% of the total land surface suffers soil salinity to a greater or lesser extent. More than 650 million hectares of land in Asia and Australia are estimated to be salt-affected, which is a serious threat to stable crop production in these densely populated areas. Excessive salt accumulation triggers various detrimental effects due to two major problems: osmotic stress and ion toxicity. Increases in osmotic pressure, caused by salt over-accumulation in the root zone, lead to a reduction in water uptake, which in turn slows down cell expansion and growth, thereby reducing cellular activity. Na+ is a major toxic cation in salt-affected soil environments.

Over-accumulated Na+ outside and inside of plants disturbs K+ homeostasis and vital metabolic reactions, such as photosynthesis, and causes the accumulation of reactive oxygen species. The high-affinity K+ transporter (HKT) family in plants has been extensively studied since the discovery of the TaHKT2;1 gene from bread wheat, which encodes a Na+-K+ co-transporter. Analysis of the structure and transport properties of HKT transporters from various plant species has classified these transporter proteins into at least two subfamilies. Class I HKT transporters were found to form a major subfamily that in general exhibits Na+-selective transport with poor K+ permeability. The single HKT1 gene in Arabidopsis thaliana, AtHKT1;1, was found to be essential for coping with salinity stress. Na+ channel activity mediated by AtHKT1;1 was proposed to function predominantly in xylem unloading of Na+ in vascular tissues, particularly in roots, which prevents Na+ over-accumulation in leaf blades under salt stress conditions. In monocot crops such as rice, wheat and barley, HKT genes were found to form a gene family composed of genes encoding class I and class II transporters. QTL analyses for salt tolerance in rice plants detected a strong locus controlling K+ and Na+ contents in shoots, which was subsequently found to encode the OsHKT1;5 transporter. In bread wheat, the Kna1 locus contributing to enhanced K+-Na+ discrimination in shoots of salt-stressed plants has long been known [25, 26]. In addition, two important independent loci for salt tolerance were also identified in durum wheat.

These were shown to be responsible for maintaining low Na+ concentrations in leaf blades by restricting Na+ transport from roots to shoots. It seems that the Nax2 and Kna1 loci are orthologs, which turned out to encode HKT1;5 transporters. HKT1;5 transporters from rice and wheat plants were demonstrated to mediate Na+-selective transport and maintain a high K+/Na+ ratio in leaf blades during salinity stress by preventing Na+ loading into xylem vessels in the roots, similar to AtHKT1;1. The Nax1 locus has been shown to function in the exclusion of Na+ from leaf sheaths to blades in addition to restricting the movement of Na+ from roots to shoots. Sequencing analysis of the approximate mapping region of the Nax1 locus has suggested that the effect is attributable to the HKT1;4 gene, TmHKT1;4-A2. In rice, a copy of the OsHKT1;4 gene was found in the genome. Recent analysis of the OsHKT1;4 gene of a japonica cultivar and salt-tolerant varieties of indica rice suggested that the level of OsHKT1;4 transcript correctly spliced in leaf sheaths is closely related to the efficiency of Na+ exclusion from leaf blades upon salinity stress. Furthermore, recent electrophysiological analyses of two TdHKT1;4 transporters from a salt-tolerant durum wheat cultivar reported Na+-selective transport mechanisms with distinct functional features for each transporter. However, the ion transport features and physiological role of OsHKT1;4 in rice remain largely unknown. In this study, we investigated the features of ion transport mediated by OsHKT1;4 using heterologous expression systems. We also characterized the physiological function of OsHKT1;4 under salt stress by analyzing RNAi transgenic rice lines.
We found that OsHKT1;4 is a plasma membrane (PM)-localized transporter mediating selective Na+ transport, and that it plays an important role in restricting Na+ accumulation in aerial parts, in particular in leaf blades, during salinity stress at the reproductive growth stage. To investigate the Na+ transport properties of OsHKT1;4, the full-length OsHKT1;4 cDNA was isolated from seedlings of the japonica rice cultivar Nipponbare using a specific primer set. The isolated cDNA was 1545 bp long and deduced to encode 500 amino acids, completely identical to sequences registered in GenBank. Heterologous expression analysis was performed using a salt-hypersensitive mutant of S. cerevisiae. Transgenic G19 cells harboring an OsHKT1;4 expression construct grew with no serious inhibition on arginine phosphate (AP) medium in the absence of excess Na+, although the overall growth of OsHKT1;4-expressing cells was slightly weaker than that of cells harboring the empty vector.

The addition of 50 mM NaCl triggered severe growth inhibition of OsHKT1;4-expressing cells, in contrast to control cells, on AP medium. OsHKT1;4-expressing cells accumulated significantly higher levels of Na+ than control cells when cultured in synthetic complete (SC) medium containing approximately 2 mM Na+. Incubation in liquid SC medium supplemented with 25 mM NaCl further stimulated the phenotype, and a significant increase in Na+ accumulation occurred in OsHKT1;4-expressing cells. To determine the localization of the OsHKT1;4 protein in plant cells, we fused EGFP to the N-terminal end of OsHKT1;4 and placed the fusion under the control of the CaMV35S promoter. Rice protoplasts transformed with EGFP-OsHKT1;4 showed EGFP fluorescence at the periphery of the cell. Red fluorescence from co-expressed CBL1n-OFP, a PM marker [36], overlapped well with the green fluorescence from EGFP-OsHKT1;4. In comparison, rice protoplasts co-transformed with free EGFP and PM-marked CBL1n-OFP showed typical cytoplasmic localization of EGFP, which did not overlap with CBL1n-OFP fluorescence. These results strongly indicated that EGFP-OsHKT1;4 localizes to the PM of rice protoplasts. However, in repeated transformation experiments, we often observed that EGFP-OsHKT1;4 was also present inside the cells, clustered in punctate-like structures. To understand whether the internal EGFP signal was due to accumulation of OsHKT1;4 in the secretory pathway, we co-transformed rice protoplasts with EGFP-OsHKT1;4 together with an endoplasmic reticulum (ER) marker, ER-mCherry. As shown in Fig. 4i-k, EGFP-OsHKT1;4 was present in the ER, but was also detectable at the PM, which was not labeled with mCherry. This latter result indicated that EGFP-OsHKT1;4 was partially retained in the ER, but was also able to properly reach the PM.
Moreover, co-expression of EGFP-OsHKT1;4 with the ER marker revealed that the observed EGFP punctate-like structures were not made of ER membranes, because they did not exhibit mCherry fluorescence. We further investigated whether such punctate-like structures could be part of the Golgi apparatus (GA) by co-expressing EGFP-OsHKT1;4 with a Golgi marker, Golgi-mCherry, and analyzing optical sections of transformed protoplasts in which the GA was clearly detectable. As shown in Fig. 4o, EGFP and mCherry fluorescence only partially overlapped, with some punctate-like structures labeled with EGFP alone. This latter result indicated that EGFP-OsHKT1;4 was also present in the GA as well as in still unidentified structures. We investigated the tissue-specific expression pattern of OsHKT1;4 at various growth stages of rice plants using the same samples reported previously. Higher expression of OsHKT1;4 in leaf sheaths was found throughout the growth periods. At the flowering stage, the highest expression level was found in the peduncle and internode II.

Note that lower levels of OsHKT1;4 expression were also detected in other organs. We further investigated the response of OsHKT1;4 to stress at two different growth stages. At the vegetative growth stage, exposure to 50 mM NaCl resulted in significant reductions in the accumulation of OsHKT1;4 transcripts in all organs except the youngest leaf sheath. A stepwise 25 mM increase in the NaCl concentration every 3 days, from 75 mM to 100 mM, was subsequently applied to 50 mM NaCl-treated plants, and the same organs were harvested at each NaCl concentration. In general, prolonged and increased NaCl stress maintained severe reductions of OsHKT1;4 expression in young leaf blades, leaf sheaths, basal nodes and roots compared with control plants. One characteristic difference from 50 mM NaCl-treated plants was the expression profile in the youngest leaf sheath, in which OsHKT1;4 expression showed significant reductions as in other tissues, and this decreasing trend became more severe as the strength of the NaCl stress increased. At the reproductive stage, OsHKT1;4 transcript levels were significantly increased in peduncles in response to salt stress. In addition, a significant increase in OsHKT1;4 expression was also found in the uppermost node of salt-stressed rice plants compared with control plants, although the basal level of OsHKT1;4 expression in this tissue was relatively low. The node is an essential tissue for distributing minerals, as well as toxic elements, transported from the roots. The node includes different types of vascular bundles, such as enlarged vascular bundles (EVBs) and diffuse vascular bundles (DVBs), each of which has distinct functions in the distribution of elements. Given that the level of expression of OsHKT1;4 was elevated in node I in response to salinity stress, we examined the expression pattern of OsHKT1;4 in EVBs and DVBs by combinational analysis of laser micro-dissection and real-time PCR. As shown in Fig.
6c, OsHKT1;4 expression was predominantly detected in DVBs but not EVBs in node I, and was approximately 28-times higher than the expression in the basal stem. To investigate whether OsHKT1;4-mediated Na+ transport contributes to salt tolerance in rice plants, we generated OsHKT1;4 RNAi plants. Two independent transgenic lines, which showed reductions in OsHKT1;4 expression in leaf sheaths during the reproductive growth phase, were selected and used for phenotypic analysis. Growth with 50 mM NaCl in hydroponic culture for more than 2 weeks did not cause any difference in visual characteristics between Nipponbare and the RNAi lines. The Na+ concentration of different organs was compared between WT and RNAi plants after the plants were treated with 50 mM NaCl for 3 days. No difference was found in the Na+ concentration of any organ between WT and RNAi lines. Given that OsHKT1;4 expression in the tissues of rice at the vegetative growth stage was down-regulated, but was up-regulated in some tissues at the reproductive growth stage in response to NaCl stress, we then examined the phenotypes of RNAi lines at the reproductive growth stage in high-salinity conditions. Wild-type Nipponbare plants and each OsHKT1;4 RNAi line were planted in the same pot filled with soil from paddy fields and grown in two independent greenhouse facilities at two different institutes. Nipponbare and OsHKT1;4 RNAi plants were watered with tap water containing 25 mM NaCl when they started heading, and the NaCl concentration was gradually elevated in 25 mM increments to a maximum concentration of 100 mM for more than a month. Flag leaves, peduncles, nodes and internode IIs were harvested and ion contents were determined.

There is an exogenous SSP-specific component for livestock density

To represent this healthy U.S. diet in GLOBIOM, we performed a series of additional conversions. First, we determined the allocation of GLOBIOM items across Calculator product groups based on how commodities are currently allocated across each product group. For example, the majority of barley is used for making alcohols and the remaining 9% is consumed as cereals, and about 21% of corn that is consumed by humans is consumed as a cereal, whereas the remaining 79% is used for making corn-based sugars. We then calculated healthy diet "shifters" for each Calculator product group by dividing the healthy diet kcal by the baseline diet kcal. A "shifter", as we define it here, is a constant multiplier that allows conversion between scenario values. Food product group shifters allow for the creation of a healthy U.S. diet scenario from any baseline diet kcal values. We then combined the healthy diet shifters with the GLOBIOM-to-Calculator product group allocations to calculate diet shifters for GLOBIOM items. These shifters were used directly in GLOBIOM to create the Healthy US, Healthy World, and Sustainability scenarios. Unlike in the Calculator, shifters were applied to the demand curve in GLOBIOM, since final human consumption is determined endogenously.
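The shifter arithmetic described above can be sketched as follows. The kcal values and the cereal-group shifter here are hypothetical placeholders, not the study's actual numbers; only the 21%/79% corn allocation is taken from the text.

```python
# Illustrative sketch of the diet "shifter" calculation described above.
# All kcal values below are hypothetical, not the study's actual data.

# Healthy-diet shifter for a Calculator product group:
#   shifter = healthy diet kcal / baseline diet kcal
group_shifters = {
    "cereals": 1250.0 / 1000.0,  # hypothetical: 25% more cereals
    "sugars":   300.0 /  600.0,  # hypothetical: sugar intake halved
}

# Allocation of a GLOBIOM item (corn) across Calculator product groups,
# mirroring the ~21% cereal / ~79% corn-sugar split quoted in the text.
corn_allocation = {"cereals": 0.21, "sugars": 0.79}

# GLOBIOM item shifter: allocation-weighted average of the group shifters.
corn_shifter = sum(share * group_shifters[group]
                   for group, share in corn_allocation.items())
print(round(corn_shifter, 4))  # 0.21*1.25 + 0.79*0.5 = 0.6575
```

Because the shifter is a constant multiplier, it can be applied to any baseline kcal series, which is what allows the same set of shifters to be reused in both the Calculator and the GLOBIOM demand curves.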

This means that dietary changes between the Calculator and GLOBIOM may not be identical, though they are highly similar. Using the US FABLE Calculator, we developed two sets of yield shifters. The "BAU yields" shifters apply 2000–2015 yield growth trends in the U.S. to simulation years 2000–2050. The higher "U.S. Yields" shifters increase the growth rate by 200% between 2015 and 2050 if the annual positive rate is lower than 1%/year, by 80% if the annual rate is higher than 1%/year, and turn negative historic growth rates into positive growth rates, applying these adjusted growth rates only to yields after 2020. For example, if crop yields were declining at a rate of 0.5% per year historically, and the yield in 2020 was 4.5 tons/ha, we changed this in the higher "U.S. Yields" scenario to increase at a rate of 0.5%/year starting in 2020, so the yield would be 4.6 tons/ha in 2025. GLOBIOM endogenously adjusts yields based on cropping mix and management. Thus, GLOBIOM yields are a combination of our exogenously applied yield increases and GLOBIOM's endogenous adjustments. As a result, the yields between GLOBIOM and the Calculator may not match exactly. We used the Calculator to explore the exogenous effects of changes in livestock productivity parameters. These livestock changes may represent technological innovations or management system shifts.
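The growth-rate adjustment rules for the higher "U.S. Yields" shifters can be written out directly; this sketch follows the thresholds stated above and reproduces the text's worked example, but the function name and structure are illustrative, not the study's code.

```python
# Sketch of the higher "U.S. Yields" growth-rate adjustment described above.
# Thresholds follow the text; the function itself is a hypothetical rendering.

def adjusted_growth_rate(historic_rate):
    """Return the higher-yields scenario annual growth rate (fraction/year)."""
    if historic_rate < 0:
        return -historic_rate       # flip negative historic trends to positive
    if historic_rate < 0.01:
        return historic_rate * 3.0  # increase by 200% if below 1%/year
    return historic_rate * 1.8      # increase by 80% if above 1%/year

# Worked example from the text: yields declining 0.5%/year, 4.5 t/ha in 2020.
rate = adjusted_growth_rate(-0.005)       # becomes +0.5%/year
yield_2025 = 4.5 * (1 + rate) ** 5        # adjusted rate applied only after 2020
print(round(yield_2025, 2))  # 4.61 t/ha, matching the text's ~4.6
```

Note that these are only the exogenous shifters; as the paragraph explains, GLOBIOM layers its own endogenous cropping-mix and management adjustments on top of them.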

The Calculator uses the historic USDA growth rate from 2010 to 2020, linearly extrapolated out to 2050, in the "BAU productivity" livestock productivity scenario, and we increased this growth rate by 20% in the "High productivity" livestock productivity scenario. For ruminant grazing density, the "Constant density" scenario uses the same ruminant density from 2010 to 2050, and the "Declining density" scenario reduces this by 6% by 2050. Though these changes were applied to all grazing livestock in the U.S., the vast majority of grazing livestock is cattle; thus, these changes effectively alter beef productivity and grazing density. We conducted these sensitivity analyses in the Calculator because livestock productivity in GLOBIOM is a more complex combination of endogenous and exogenous factors than crop yields are. The amount of livestock product per unit of land area depends on the average feed conversion ratio and the grass yield. The grass yield is exogenous and can change over time under different climate scenarios, whereas the average feed conversion ratio is endogenous, as the production system composition is endogenous. We constructed two main scenarios, Baseline and Sustainability, in both GLOBIOM and the US FABLE Calculator.

The values of all variables chosen for the Sustainability scenario are expected to favor sustainable outcomes. For the Baseline scenarios, we assume no change from the current average U.S. diet, SSP2 Middle-of-the-Road diets for the ROW, and SSP2 baseline yields. GLOBIOM model runs generated scenario-specific values that were used as inputs in the Calculator for yields, livestock productivity, ruminant density, imports, and exports. In GLOBIOM, we ran five additional scenarios that isolate the roles of U.S. diets, ROW diets, and crop productivity assumptions to examine every combination of input assumptions. We could not replicate these scenarios in the Calculator because it cannot represent global demand and production. We simulated alternative crop productivity futures in GLOBIOM. As described above, yields are a function of both endogenous decisions and exogenous productivity growth rates that vary between business-as-usual and high-yield scenarios. In this analysis we vary the latter to represent a range of expected technological change, from business-as-usual to optimistic growth. The remaining five scenarios are as follows: 2: healthy U.S. diets and healthy ROW diets; 3: healthy U.S. diets and high U.S. yields; 4: healthy U.S. diets only; 5: healthy ROW diets only; and 6: high U.S. yields only. For the Calculator sensitivity analysis, we used the sustainability scenario (A) assumptions, but changed either the livestock productivity to be higher than in GLOBIOM or the ruminant density to be lower than in GLOBIOM. Because of inherent differences in underlying data, model infrastructure, and system boundaries between the FABLE Calculator and GLOBIOM approaches, we report only the difference and percentage change from each model's BAU scenario.

Simulated diets and crop yields reflect scenario adjustments. Scenario adjustments to commodity demand curves and crop yields in GLOBIOM resulted in both production and consumption changes. That is, instead of perfect alignment with the assumed diet and productivity assumptions of a domestic-only LCA or mass balance approach, simulated diets and yields in GLOBIOM reflect endogenous prices and supply-side adjustments that cause variation in crop yields. GLOBIOM uses commodity-specific demand curves to represent human demand. Thus, applying the healthy U.S. diet shifters essentially shifts the entire demand curve, as opposed to the final demand for each commodity, which is determined by both the demand and supply curves and market dynamics. As a result, applying the same set of shifters to both the Calculator and GLOBIOM does not necessarily ensure the same percentage change in final per capita consumption across all items. However, the demand curve adjustments to reflect healthy diets led to the expected changes in consumption across all food product groups. Results indicate that the final per capita consumption in GLOBIOM very closely resembles that of the Calculator. Similarly, simulated yields from GLOBIOM vary across scenarios due to market adjustments in the U.S. and the rest of the world, illustrating the sensitivity of the U.S. production system to global market forces. Pastureland declines significantly while cropland contracts slightly in response to healthier U.S. diets. Healthier diets in the rest of the world and increases in U.S. crop yields only modestly reduce cropland in the U.S., but significantly reduce cropland in the rest of the world. In the Sustainability scenarios, domestic land used for livestock forage and grazing declines by 37 mil ha in the US FABLE Calculator and 28 mil ha in the GLOBIOM scenarios if the average American diet resembles the Healthy-style DGA diet by 2050.
These declines in pastureland are far more dramatic than for cropland, which declines by only 3.9 mil ha in the Calculator scenarios and 2–3.3 mil ha in the GLOBIOM scenarios due to healthier U.S. diets. Percentage reductions in both cropland and pastureland are similar across the two models, with the reductions in the Calculator about 1 and 10 percentage points greater, respectively. Both models assume that reductions of pastureland and cropland result in a commensurate increase in natural lands. Across GLOBIOM scenarios, healthy U.S. diets have the greatest single impact on pastureland changes. Pastureland use is more sensitive to dietary changes than cropland due to the low relative land use efficiency of beef production. Increasing crop productivity has no discernible impact on pastureland use. Simultaneously shifting to healthy diets in the rest of the world and the U.S. changes pastureland requirements in the U.S. only negligibly, by 1–1.8 mil ha relative to shifting to healthy diets in the U.S. alone, because most beef produced in the U.S. is consumed domestically rather than exported.

Thus, U.S. pastureland should be most responsive to changes in domestic beef consumption. Correspondingly, shifting just the ROW to healthier diets actually slightly increases pastureland over the baseline, by 1.8 mil ha, despite a slight reduction in U.S. beef production, likely a result of decreased beef land use efficiency. As the ROW demand for U.S. beef declines, these small decreases in U.S. production reduce beef land use efficiency. For cropland, there is a similar spread of about 1 mil ha across the GLOBIOM scenarios that adopt healthier diets in the U.S., the ROW, or both. The greatest declines in cropland are observed with healthier diets. Increased crop productivity in the U.S. alone reduces domestic cropland by less than 200,000 ha, due to increased production and exports from the greater global comparative advantage of U.S. crop commodities. As a result, higher U.S. yields alone cause a 7.5 mil ha decrease in cropland globally by 2050 relative to the baseline, which is partially offset by a 1.9 mil ha increase in grassland globally. Annual domestic GHG emissions decrease due to shifts to healthier diets in the U.S., and the declines are primarily driven by reductions in livestock methane emissions and by land sequestration. As a result of a healthier U.S. diet, annual CO2e emissions from the agriculture, forestry, and other land use (AFOLU) sectors fall by 176–197 MT in the GLOBIOM scenarios and 187 MT in the Calculator Sustainability scenario compared to the baseline by 2050. Livestock methane emissions drive the majority of total reductions in the GLOBIOM scenarios, whereas land use change emissions drive the majority of reductions in the Calculator. Most of the land use change emissions reductions are due to increases in forest sequestration from natural regeneration on former cropland and pastureland. As with land use changes, domestic emissions reductions by 2050 show minor differences in U.S.
emissions between GLOBIOM scenarios that vary yields and diets in the ROW. These differences are only apparent for cropland-related emissions, namely crop and soil non-CO2 emissions. Increasing yields alone has little to no effect on total emissions, since any additional sequestration from land use change is negated by increased crop and soil non-CO2 emissions due to more intensive farming practices. In particular, N2O emissions from fertilizer use increase as fertilizer intensity expands with higher yields; higher fertilizer application and the associated input costs are exogenously required to increase to achieve the higher exogenous yield growth rates. Healthy ROW diets result in near-term total emissions reductions of 50–60 MT CO2e/year, but these diminish to less than 10 MT CO2e/year by 2050. We do not find evidence of international leakage when the U.S. shifts to healthier diets. In fact, we find slight declines in global emissions in the Healthy U.S. and U.S. Yields scenarios. We do find that Healthy ROW alone reduces global AFOLU emissions by 26% despite having minor impacts on U.S. emissions. The future trajectory of beef productivity and ruminant density provides bounds for the range of possible land use and emissions impacts from healthy U.S. diets. To explore the role of technology improvements in the cattle industry and changes in production systems or intensification in response to changes in demand, we used the Calculator to run a range of sensitivity scenarios that exogenously alter the beef productivity and ruminant density of cattle. We find that beef productivity decreases significantly in response to lower domestic beef consumption and production after 2020 in the healthier U.S. diet GLOBIOM scenarios. Productivity increases and then decreases after 2030 or 2040 in the scenarios that maintain the current average U.S. diet.
The business-as-usual beef productivity trajectory in the U.S., based on USDA data from the last 20 years, is comparable with the GLOBIOM baseline until 2040, when the productivity growth in GLOBIOM starts to level off due to market conditions. The sensitivity run that increases this BAU livestock productivity growth rate causes beef production to exceed that of all GLOBIOM scenarios. These productivity increases result in the greatest pastureland reduction by 2050 across all scenarios. Emissions follow similar trends, with increases in beef productivity resulting in significantly greater total emissions reductions compared to the baseline, or 90–127 MT/year greater reductions than the Calculator Sustainability scenario that uses GLOBIOM productivity assumptions.

The type of rooting medium does not have much influence on rooting success and speed

When plant tissues cannot grow due to nitrogen limitation, they cannot incorporate or store additional carbohydrates. This lower physiological limit of tissue nitrogen content, at which no more cell division or incorporation of carbon is possible, is called the critical nitrogen content (CNC) of the tissue. The CNC is expressed on a carbon basis. The CNC can be determined for whole plants, and for the different functional tissues of the plant. Different plant parts perform different functions, and therefore have different minimum nitrogen requirements for metabolism maintenance. In previous research with the dicotyledonous storage root perennial Ipomoea batatas, it was determined that the most photosynthetically active tissues, the leaves, and the fibrous roots, which are involved in nutrient uptake, have the highest CNC of all vegetative plant tissues. The storage roots of I. batatas had a significantly lower CNC than any of the other Ipomoea tissues. The difference between the actual tissue nitrogen content and the CNC determines the capacity of these different plant tissues to incorporate or store carbohydrates. Tissues with nitrogen contents that are above the CNC can still incorporate or store carbohydrates. These tissues have a positive carbon sink strength.
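The sink-strength criterion described above reduces to a simple comparison between a tissue's actual nitrogen content and its CNC. The sketch below makes that comparison explicit; the numeric nitrogen contents and CNC values are hypothetical placeholders, not measured data.

```python
# Illustrative sketch of the carbon sink strength criterion described above.
# All numeric values are hypothetical, not measured tissue data.

def has_positive_sink_strength(tissue_n_content, cnc):
    """A tissue retains positive carbon sink strength (can still incorporate
    or store carbohydrates) while its nitrogen content, expressed on a carbon
    basis, remains above its critical nitrogen content (CNC)."""
    return tissue_n_content > cnc

# A high-CNC leaf that has dropped to its CNC: its sink is closed.
leaf_n, leaf_cnc = 0.040, 0.040
# A low-CNC storage root still well above its CNC: its sink stays open.
root_n, root_cnc = 0.015, 0.008

print(has_positive_sink_strength(leaf_n, leaf_cnc))   # False
print(has_positive_sink_strength(root_n, root_cnc))   # True
```

This asymmetry is why, in the text's account, high-CNC leaves stop incorporating photosynthates early while low-CNC storage roots keep accepting them.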

Photosynthetically active tissues that have reached their CNC will not incorporate the produced carbohydrates, because that would dilute the nitrogen content of these tissues below the CNC and metabolism would be impaired. Instead, the photosynthetically active tissues deposit the produced carbohydrates in the phloem, which transports them to those tissues that still have the ability to incorporate or store carbohydrates. This is how leaves, which because of their high CNC lose the ability to incorporate photosynthates in their own tissues relatively early in the development of the plant, can still produce photosynthates and translocate them down to reserve storage organs such as I. batatas storage roots, which, due to their low CNC, maintain their positive carbon sink strength and tissue growth the longest of all plant tissues. The critical nitrogen content of Arundo leaf tissue was determined in a hydroponics experiment. One hundred Arundo stem fragments were collected in June 1998 from the Santa Ana River near River Road in Riverside County. In the greenhouse, the stem fragments were placed in water for 2 weeks to allow for root and shoot growth. After two weeks, 48 young plants that sprouted from the meristems on the stem fragments were randomly selected for use in the experiment.

Four stems were placed in each of eleven 120-liter plastic containers filled with 100 L of aerated, half-strength Hoagland nutrient solution. The sprouted stem fragments rested on a floating plastic mesh, supported by a ring of plastic pipe, on the surface of each container's nutrient solution. A sheet of opaque white plastic was wrapped around and over each container to block out sunlight, preventing algae growth and high temperatures in the nutrient solution. The nutrient solutions in the containers were monitored twice per week during a 48-week growth period. Each check consisted of the following: addition of enough deionized water to bring the container's nutrient solution level up to the 100-L mark, and determination of the electrical conductivity of the full volume. A concentrated Hoagland solution was added to re-establish the conductivity of the nutrient solution to its original value. The pH was adjusted to 5.7. The harvest dates were partially determined by the growth of the plants as the experiment progressed. Harvested plants were separated into apical meristems, green leaf blades, brown leaf blades, green leaf sheaths, brown leaf sheaths, stems, rhizomes, and roots. Plant parts were dried to a constant weight at 60 °C. The biomass of each tissue was determined and sub-samples were ground in a Wiley mill to pass a 0.5 mm mesh screen. The nitrogen and carbon contents of the tissues were determined using an organic elemental analyzer. Stem fragments with meristems can root and regenerate new Arundo plants, as has been reported previously.

There were significant patterns in the rooting success of meristems on Arundo stems throughout the growing season. In the winter months of November through January, rooting is low, and the success percentage lies below 20%, with the exception of 28 ± 12% rooting for meristems from hanging stems in January. Nearly all meristems rooted from March through September. The speed with which meristems rooted showed a related pattern through the growing season. In the period with the lowest rooting success, t50 had the highest values, indicating the slowest rooting. Rooting was most rapid in the months of May through July, a time window that was narrower than the period in which rooting is most successful. There were no significant differences between the results in plain water and half-strength Hoagland nutrient solution. The rooting success and speed pattern is similar in soil, but the single replicate meristems do not allow for inclusion in the two-way ANOVA. When compared within each sampling, the rooting success of meristems from hanging stems was significantly higher than that of meristems from upright stems. When split by rooting medium, there was no significant difference in rooting success between hanging and upright meristems over time in plain water, but the differences remain significant in the nutrient solution and in soil. As with rooting success, the rooting speed of hanging meristems was significantly faster than that of the upright meristems from the same sampling date. When separated among rooting media, the difference in the speed with which rooting occurs was most pronounced in plain water, but still exists in the nutrient solution and soil. Though these differences in rooting success and speed may be statistically significant, they generally are too small to be ecologically significant. Stem diameter at the node where the meristem is placed is an indicator of relative height on the stem and the age of the meristem.
Within stems, there was no relation of rooting success or speed with the diameter of the stem at the point of the meristem, so the older meristems on an Arundo stem do not root better or faster than the younger meristems. When the temperature of the rooting environment was controlled at 28/15 °C for 14/10 h during the entire growing season, the seasonal patterns of rooting success and speed remained, and differences between the seasonal rooting patterns of the fragments from hanging and upright stems emerged. The overall patterns differed slightly from those in the greenhouse experiment, with the lowest rooting by fragments of both stem types in February through March. The rooting percentages increased in April, and the highest rooting rates occurred from July through September, at 80–92% for both stem types. In October, the rooting of the upright stem fragments decreased more than that of those from hanging stems. The lowest rooting rates of the stem fragments from upright stems were 0–10% in February and March, while the rooting rates of the hanging stem fragments only decreased to 30%. The positive influence of the seasonal effect in the months of July through September on the rooting rates of both stem types masked the difference that emerged in the winter and spring months. The rooting of meristems from upright stems benefits more from this seasonal effect than that of meristems from hanging stems. The seasonal effect on the rooting rates of the stem types could be related to a number of environmental factors that change during the growing season, such as temperature, light intensity, and daylength. Boose and Holt determined that stem fragments sampled at the same time, but stored at different temperatures, displayed different sprouting percentages when potted and regenerated at the same temperature in a single greenhouse.

We hypothesize that the different ambient temperatures prior to sampling in our experiment were an ecophysiological equivalent of the experimental factor "storage temperature" in the Boose and Holt study, and one of the factors involved in the seasonal pattern of A. donax stem fragment rooting.

The seasonal differences in rooting percentage and speed between meristems from upright and hanging stems that were striking under controlled temperature conditions were much less pronounced in the greenhouse rooting experiment. The results of the greenhouse experiment show that the temperature at the time of rooting modulates the seasonal effect. Environmental factors such as temperature and inundation are known to affect the success of invasive plants either negatively or positively. In the greenhouse experiment, the temperature of the rooting medium varied with ambient temperature and solar irradiation, while the temperature of the rooting environment in the growth chamber experiment was the same throughout the growing season. In the greenhouse, the temperature of the rooting media in the winter ranged from 0.5–2 °C at night to 19–21 °C during the day. In the spring and summer, solar irradiation increased these temperatures to 16–18 and 28–34 °C, respectively.

To test the effect of temperature at the time of rooting, we tested the rooting of fragments of hanging A. donax stems at different constant temperatures in April and May, a period in which, in the year-round temperature-controlled experiment, success rates were 45 ± 10% in 1998 and 45 ± 21% in 1999. In this constant-temperature test, no rooting occurred at 10 °C during the 40 days of the experiment. At 15 °C, rooting was better than at 10 °C, but significantly less than at 17.5, 20, and 22.5 °C.
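The constant-temperature results (no rooting at 10 °C, intermediate rooting at 15 °C, and full response at 17.5–22.5 °C) describe a threshold-type temperature response. A toy sketch of locating the temperature at which a given success level is reached, using hypothetical success percentages at the tested temperatures (only the temperatures come from the experiment):

```python
# Hypothetical rooting-success values at the tested constant
# temperatures (the percentages are illustrative, not the study's data).
temps   = [10.0, 15.0, 17.5, 20.0, 22.5]   # °C
success = [0.0, 25.0, 70.0, 75.0, 72.0]    # % of fragments rooted

def temp_at_success(target_pct):
    """Linearly interpolate the lowest temperature at which rooting
    success reaches target_pct, scanning segments left to right."""
    for t0, s0, t1, s1 in zip(temps, success, temps[1:], success[1:]):
        if s0 <= target_pct <= s1:
            return t0 + (target_pct - s0) * (t1 - t0) / (s1 - s0)
    return None

print(round(temp_at_success(50.0), 2))     # 16.39
```

Under these assumed values, the steep segment between 15 and 17.5 °C is where most of the temperature sensitivity lies, consistent with the qualitative pattern reported above.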
In the greenhouse experiment, the seasonal pattern of rooting success was present, but the inherent advantage of fragments of the hanging stems in the winter months was masked by the negative effect of the lower temperatures of the rooting media. The temperatures chosen for the year-round temperature-controlled experiment were selected to reflect the temperature conditions in the habitats invaded by A. donax in Southern California in the months of April and May. The results of the April constant-temperature experiment suggest that the lower night temperature in the 28/15 °C for 14/10 h experiment reduced rooting success below the maximum possible in that month. This re-emphasizes the effect of in situ temperature on the success of stem fragment meristem rooting, and the ecological danger posed by floating stem fragments in shallow waters.

The inherent seasonal pattern observed in the year-round temperature-controlled experiment may have resulted from cycles in the concentrations of the plant growth regulators that play a role in the growth of side shoots and in the apical dominance of the top of the main stem. One of the growth regulators that plays a major role in the regulation of apical dominance is indole-3-acetic acid (IAA). The effect of IAA on the rooting of axillary buds on A. donax stem fragments throughout the growing season was tested through the use of exogenous IAA in the rooting medium, and through the determination of endogenous IAA levels in the shoots that grew from the axillary buds. When the stem fragments and their axillary buds were exposed to 5 and 10 µM IAA in the rooting medium, the difference between the hanging and the upright stems disappeared. The main effect of the exogenous IAA was a significant improvement of the rooting percentage and speed of the upright stem fragments in the winter and spring periods, so that the difference between the two stem types was minimized.
The exogenous IAA had little effect on the rooting success and speed of the hanging stem fragments. At 20 µM exogenous IAA, the highest concentration applied, the success rate and speed of upright stem fragment rooting decreased from the optimum observed at 5 and 10 µM, almost down to the percentages and t50 observed in the absence of the hormone. The IAA in the rooting medium may have reached the axillary bud through the vascular bundles of the main stem piece, directly through the cuticle of the bud itself, which was positioned immediately below the surface of the rooting medium, or both. In early studies of the effect of IAA on plant growth, the growth regulator was sometimes applied to leaf tissues, and the position of the axillary bud in the rooting medium could have resulted in a similar exposure.
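The reported dose-response, improvement at 5 and 10 µM followed by a decline at 20 µM, is the classic unimodal auxin response. A trivial sketch of picking the optimum dose from such a table (the concentrations match the experiment; the percentages are illustrative placeholders, not measured values):

```python
# Hypothetical dose-response for upright stem fragments; the IAA
# concentrations come from the experiment, the percentages do not.
iaa_uM = [0, 5, 10, 20]                # exogenous IAA, µM
rooted = [20.0, 85.0, 88.0, 25.0]      # % rooted, illustrative

# The optimum lies at an intermediate dose, not at the extremes.
best_dose, best_pct = max(zip(iaa_uM, rooted), key=lambda dr: dr[1])
print(best_dose)                       # 10
```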