12.13: Development of Joints - Biology



Learning Objectives

  • Describe the two processes by which mesenchyme can give rise to bone
  • Discuss the process by which joints of the limbs are formed

Joints form during embryonic development in conjunction with the formation and growth of the associated bones. After birth, as the skull bones grow and enlarge, the gaps between them decrease in width and the fontanelles are reduced to suture joints in which the bones are united by a narrow layer of fibrous connective tissue.

The bones that form the base and facial regions of the skull develop through the process of endochondral ossification. In this process, mesenchyme accumulates and differentiates into hyaline cartilage, which forms a model of the future bone. The hyaline cartilage model is then gradually, over a period of many years, displaced by bone. The mesenchyme between these developing bones becomes the fibrous connective tissue of the suture joints between the bones in these regions of the skull.

A similar process of endochondral ossification gives rise to the bones and joints of the limbs. The limbs initially develop as small limb buds that appear on the sides of the embryo around the end of the fourth week of development. Starting during the sixth week, as each limb bud continues to grow and elongate, areas of mesenchyme within the bud begin to differentiate into the hyaline cartilage that will form models for each of the future bones. The synovial joints will form between the adjacent cartilage models, in an area called the joint interzone. Cells at the center of this interzone region undergo cell death to form the joint cavity, while surrounding mesenchyme cells will form the articular capsule and supporting ligaments. The process of endochondral ossification, which converts the cartilage models into bone, begins by the twelfth week of embryonic development. At birth, ossification of much of the bone has occurred, but the hyaline cartilage of the epiphyseal plate will remain throughout childhood and adolescence to allow for bone lengthening. Hyaline cartilage is also retained as the articular cartilage that covers the surfaces of the bones at synovial joints.

Harnessing the biology of IL-7 for therapeutic application

Interleukin-7 (IL-7) is required for T cell development in mice and humans and is produced by stromal tissues rather than activated lymphocytes. Under normal conditions, IL-7 is a limiting resource for T cells, but it accumulates during lymphopenic conditions. IL-7 signals through a heterodimeric receptor consisting of the IL-7 receptor α-chain (IL-7Rα) and the common cytokine receptor γ-chain (γc).

IL-7 is not required for human B cell development in fetal life, but it affects early B cell progenitors and contributes to B cell development under normal conditions.

IL-7 has also been recently demonstrated to regulate lymphoid tissue inducer (LTi) cells, which induce the development of secondary lymphoid organs and can induce tertiary lymphoid tissue postnatally in settings of chronic inflammation.

In animals, IL-7 therapy enhances the effectiveness of adoptive immunotherapy for cancer, enhances vaccine responses and enhances viral clearance in the setting of acute and chronic infections.

In mature T cells, IL-7Rα is most highly expressed on recent thymic emigrants, maintained on naive T cells, downregulated upon T cell activation, and re-expressed on memory T cell subsets. As a result, treatment with recombinant human IL-7 (rhIL-7) preferentially expands recent thymic emigrants and naive T cells, as well as central memory T cells, but largely spares senescent T cells and regulatory T cells. This results in increased repertoire diversity following rhIL-7 therapy in humans.

Clinical results with rhIL-7 thus far have shown it to be well tolerated with dose-dependent increases in T cell numbers that persist long after the cytokine is cleared. Based on the pharmacological and biological properties demonstrated thus far, IL-7 is particularly well-suited as a therapy for conditions associated with lymphocyte immunodeficiency.

Multiple trials are ongoing or planned in HIV infection, other chronic infections (including hepatitis B and C), cancer (including as an adjuvant to immune-based therapies), post-haematopoietic stem cell transplantation and ageing.



Time is not on our side

The incongruity between acute mechanotransduction and chronic disease progression

How mechanotransduction within cells and the matrix drives lasting changes in tissue remodeling is incompletely understood.

Countless hours, poor pay, and an absent-minded Principal Investigator all qualify these guys as heroes & lab rats!

Mary helps keep us pointed in the right direction and holds down the fort as our Lab Manager. Her research focuses on protein production and animal models of disease.

Sue brings 20 years of animal and human research experience to the group. Her research focuses on developing animal models of fibrosis and analysis of novel therapeutics.

Jagath brings extensive experience in antibody development based on his time working in the Antibody Engineering and Development Core facility. His research focuses on antibody production as potential therapeutics.

Vincent's research focuses on endogenous heterogeneity in fibroblast mechanotransduction and new biotechnology development hijacking normal mechanotransduction.

Postdoctoral Fellow (Joint with Matt Torres @ GA Tech)

Wei's research focused on fibronectin mechanics in the past and currently explores mechanisms of ECM memory.

Ping is focused on further exploring Thy-1's role in directing integrin signaling and mechanotransduction.

NRSA (F32) Postdoctoral Fellow

Dan is investigating the roles innate immunity and cytokine signaling play in contributing to the onset of fibrosis.

Biomedical Engineering Program

Leandro is using directed evolution and antibody engineering to improve a potential fibrosis theranostic we developed in the lab.

Molecular and Cellular Basis of Disease (MCBD) PhD Program

Riley is studying the potential for mechanical cues and cytokine signaling to impact pericyte participation in tissue regeneration and fibrosis.

3rd Year PhD Candidate, Biotechnology Training Grant Fellow

Drew's PhD research is focused on identifying mechanosensors for application in stiffness-driven diseases like fibrosis.

Cancer Training Grant Fellow

Grace's PhD research is to be decided. Currently she is studying a new family of integrin adaptor proteins that set the mechanotransductive phenotypes of cells.

Dr. Sarah Stabenfeldt (PD '07-'10)- Arizona State University

Dr. Ashley Brown (PhD & PD '06-'15) - North Carolina State University

Dr. Allyson Soon (PhD '06-'12) - Lee Kong Chian School of Medicine

Dr. Vincent Fiore (PhD '08-'14) - Rockefeller University

Dr. Anton Bryksin (PD '12-'13) - Georgia Institute of Technology

Dr. Vicky Stefanelli (PhD '12-'18) - Integra LifeSciences

Dr. John Nicosia (PhD '13-'19) - Emory University

The Cautiously optimistic leader

We're all just trying to find happiness

The folks in my lab remind me that we are each on a journey. Sometimes we have detours and sometimes we find ourselves unintentionally in the fast lane. They help me on my journey every day and I hope, in return, I help them find their way too.


DeepCpG model

DeepCpG consists of a DNA module to extract features from the DNA sequence, a CpG module to extract features from the CpG neighbourhood of all cells and a multi-task Joint module that integrates the evidence from both modules to predict the methylation state of target CpG sites for multiple cells.

DNA module

The DNA module is a convolutional neural network (CNN) with multiple convolutional and pooling layers and one fully connected hidden layer. CNNs are designed to extract features from high-dimensional inputs while keeping the number of model parameters tractable by applying a series of convolutional and pooling operations. Unless stated otherwise, the DNA module takes as input a 1001 bp long DNA sequence centred on a target CpG site n, which is represented as a binary matrix s n by one-hot encoding the D = 4 nucleotides as binary vectors A = [1, 0, 0, 0], T = [0, 1, 0, 0], G = [0, 0, 1, 0] and C = [0, 0, 0, 1]. The input matrix s n is first transformed by a 1d-convolutional layer, which computes the activations a nfi of multiple convolutional filters f at every position i:

$$a_{nfi} = \mathrm{ReLU}\left( \sum_{l=1}^{L} \sum_{d=1}^{D} w_{fld}\, s_{n,\,i+l,\,d} \right)$$

Here, w f are the parameters or weights of convolutional filter f of length L. These can be interpreted similarly to position weight matrices, which are matched against the input sequence s n at every position i to recognise distinct motifs. The ReLU(x) = max(0, x) activation function sets negative values to zero, such that a nfi corresponds to the evidence that the motif represented by w f occurs at position i.
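The one-hot encoding and filter matching described above can be sketched with NumPy. This is a simplified illustration, not the DeepCpG code: it uses a single random filter, scans only positions where the filter fits entirely, and omits batching and padding:

```python
import numpy as np

# One-hot encode DNA using the ordering given in the text:
# A = [1,0,0,0], T = [0,1,0,0], G = [0,0,1,0], C = [0,0,0,1].
NUC = {"A": 0, "T": 1, "G": 2, "C": 3}

def one_hot(seq):
    s = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        s[i, NUC[base]] = 1.0
    return s

def conv_activations(s, w):
    """ReLU activations of one convolutional filter w (shape L x 4)
    matched against the one-hot sequence s at every valid position."""
    L = w.shape[0]
    n_pos = s.shape[0] - L + 1
    a = np.array([np.sum(w * s[i:i + L]) for i in range(n_pos)])
    return np.maximum(a, 0.0)  # negative evidence is set to zero

s = one_hot("ATGCGCTA")
w = np.random.randn(4, 4)      # one random filter of length L = 4
a = conv_activations(s, w)
print(a.shape)                 # 5 valid positions for an 8-bp input
```

In practice the filter weights are learnt by training, and a real implementation would vectorise the scan rather than loop over positions.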

A pooling layer is used to summarise the activations of P adjacent neurons by their maximum value:

$$p_{nfi} = \max\big( a_{nf,\,(i-1)P+1}, \dots, a_{nf,\,iP} \big)$$

Non-overlapping pooling is applied with step size P to decrease the dimension of the input sequence and hence the number of model parameters. The DNA module has multiple pairs of convolutional-pooling layers to learn higher-level interactions between sequence motifs, which are followed by one final fully connected layer with a ReLU activation function. The number of convolutional-pooling layers was optimised on the validation set. For example, two layers were selected for models trained on serum, HCCs and mESCs and three layers for the 2i and HepG2 cells (Additional file 4).
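The non-overlapping pooling step can be sketched in a few lines (illustrative only; trailing positions that do not fill a complete block of size P are dropped here):

```python
import numpy as np

def max_pool(a, P):
    """Summarise every non-overlapping block of P adjacent activations
    by its maximum value, reducing the length by a factor of P."""
    n = (len(a) // P) * P          # drop an incomplete trailing block
    return a[:n].reshape(-1, P).max(axis=1)

a = np.array([0.1, 0.9, 0.0, 0.3, 0.7, 0.2, 0.5])
print(max_pool(a, 2))              # [0.9 0.3 0.7]
```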

CpG module

The CpG module consists of a non-linear embedding layer to model dependencies between CpG sites within cells, which is followed by a bidirectional gated recurrent network (GRU) [36] to model dependencies between cells. Inputs are 100d vectors x 1, …, x T, where x t represents the methylation state and distance of K = 25 CpG sites to the left and to the right of a target CpG site in cell t. Distances were transformed to relative ranges by dividing by the maximum genome-wide distance. The embedding layer is fully connected and transforms x t into a 256d vector x̃ t, which allows learning possible interactions between methylation states and distances within cell t.
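The construction of one input vector x t, interleaving methylation states with normalised distances of the K neighbours on each side, can be sketched as follows (a simplified illustration with K = 2 instead of 25; `MAX_DIST` stands in for the maximum genome-wide distance, and boundary padding is omitted):

```python
# Illustrative construction of the CpG-module input for one cell
# (simplified: K = 2 neighbours per side instead of 25; no edge padding).
K = 2
MAX_DIST = 10_000  # stand-in for the maximum genome-wide distance

def neighbour_features(positions, states, target_idx):
    """Interleave the methylation state and relative distance of the K
    CpG sites to the left and right of the target site."""
    feats = []
    idx = list(range(target_idx - K, target_idx)) + \
          list(range(target_idx + 1, target_idx + 1 + K))
    for j in idx:
        dist = abs(positions[j] - positions[target_idx]) / MAX_DIST
        feats.extend([states[j], dist])
    return feats  # length 4 * K (100 features for K = 25)

positions = [100, 250, 400, 900, 1300]
states = [1, 0, 1, 1, 0]
x = neighbour_features(positions, states, target_idx=2)
print(len(x))  # 8
```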

The sequence of vectors x̃ 1, …, x̃ T is then fed into a bidirectional GRU [36], which is a variant of a recurrent neural network (RNN). RNNs have been successfully used for modelling long-range dependencies in natural language [58, 59], acoustic signals [60] and, more recently, genomic sequences [61, 62]. A GRU scans input sequence vectors x̃ 1, …, x̃ T from left to right and encodes them into fixed-size hidden state vectors h 1, …, h T:

$$r_t = \sigma(W_r \tilde{x}_t + U_r h_{t-1} + b_r)$$
$$u_t = \sigma(W_u \tilde{x}_t + U_u h_{t-1} + b_u)$$
$$\tilde{h}_t = \tanh\big(W_h \tilde{x}_t + U_h (r_t \odot h_{t-1}) + b_h\big)$$
$$h_t = u_t \odot h_{t-1} + (1 - u_t) \odot \tilde{h}_t$$

The reset gate r t and update gate u t determine the relative weight of the previous hidden state h t−1 and the current input x̃ t for updating the current hidden state h t. The last hidden state h T summarises the sequence as a fixed-size vector. Importantly, the set of parameters W and b are independent of the sequence length T, which allows summarising the methylation neighbourhood independent of the number of cells in the training dataset.

To encode cell-to-cell dependencies independently of the order of cells, the CpG module is based on a bidirectional GRU. It consists of a forward and backward GRU with 256d hidden state vectors h t, which scan the input sequence from the left and right, respectively. The last hidden state vector of the forward and backward GRU are concatenated into a 512d vector, which forms the output of the CpG module.
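The gated update and the bidirectional concatenation can be sketched with a toy NumPy GRU. This is a minimal sketch of the standard GRU recurrence with the reset gate r and update gate u named in the text, not the trained CpG module: dimensions are tiny and the parameters are random placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_scan(xs, params):
    """Scan input vectors left to right; return the last hidden state."""
    Wr, Ur, br, Wu, Uu, bu, Wh, Uh, bh = params
    h = np.zeros(Ur.shape[0])
    for x in xs:
        r = sigmoid(Wr @ x + Ur @ h + br)           # reset gate
        u = sigmoid(Wu @ x + Uu @ h + bu)           # update gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
        h = u * h + (1.0 - u) * h_tilde             # convex combination
    return h

def make_params(d_in, d_h, rng):
    # Random placeholder weights Wr, Ur, br, Wu, Uu, bu, Wh, Uh, bh.
    return [rng.standard_normal(s) for s in
            [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]

rng = np.random.default_rng(0)
xs = [rng.standard_normal(3) for _ in range(5)]
fwd = gru_scan(xs, make_params(3, 4, rng))          # forward scan
bwd = gru_scan(xs[::-1], make_params(3, 4, rng))    # backward scan
out = np.concatenate([fwd, bwd])  # bidirectional summary, 2 * d_h values
print(out.shape)                  # (8,)
```

Because the final state is a convex combination of tanh outputs, the summary vector is independent of the number of input steps T, mirroring the property noted above.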

Joint module

The Joint module takes as input the concatenated last hidden vectors of the DNA and CpG module and models interactions between the extracted DNA sequence and CpG neighbourhood features via two fully connected hidden layers with 512 neurons and ReLU activation function. The output layer contains T sigmoid neurons to predict the methylation rate ŷ nt ∈ [0, 1] of CpG site n in cell t.

Model training

Model parameters were learnt on the training set by minimizing the following loss function:

$$\mathcal{L}(\hat{y}, y) = \mathrm{NLL}_w(\hat{y}, y) + \lambda_2 \lVert w \rVert_2^2$$

Here, the weight-decay hyper-parameter λ 2 penalises large model weights quantified by the L2 norm, and NLL w(ŷ, y) denotes the negative log-likelihood, which measures how well the predicted methylation rates ŷ nt fit to observed binary methylation states y nt ∈ {0, 1}:

$$\mathrm{NLL}_w(\hat{y}, y) = -\sum_{n=1}^{N} \sum_{t=1}^{T} o_{nt} \left[ y_{nt} \log \hat{y}_{nt} + (1 - y_{nt}) \log(1 - \hat{y}_{nt}) \right]$$

The binary indicator o nt is set to one if the methylation state y nt is observed for CpG site n in cell t, and zero otherwise. Dropout [63] with different dropout rates for the DNA, CpG and Joint module was used for additional regularization. Model parameters were initialised randomly following the approach in Glorot et al. [64]. The loss function was optimised by mini-batch stochastic gradient descent with a batch size of 128 and a global learning rate of 0.0001. The learning rate was adapted by Adam [65] and decayed by a factor of 0.95 after each epoch. Learning was terminated if the validation loss did not improve over ten consecutive epochs (early stopping). The DNA and CpG module were pre-trained independently to predict methylation from the DNA sequence (DeepCpG DNA) or the CpG neighbourhood (DeepCpG CpG). For training the Joint module, only the parameters of the hidden layers and the output layers were optimised, while keeping the parameters of the pre-trained DNA and CpG module fixed. Training DeepCpG on 18 serum mESCs using a single NVIDIA Tesla K20 GPU took approximately 24 h for the DNA module, 12 h for the CpG module and 4 h for the Joint module. Model hyper-parameters were optimised on the validation set by random sampling [66] (Additional file 4). DeepCpG is implemented in Python using Theano [67] 0.8.2 and Keras [68] 1.1.2.
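The masked negative log-likelihood with the observation indicator o nt can be sketched as follows. This is a simplified illustration, not the Theano training code: the weight-decay term is omitted and a small epsilon guards the logarithms:

```python
import numpy as np

def masked_nll(y_pred, y_true, observed):
    """Negative log-likelihood over observed CpG sites only:
    entries with observed == 0 contribute nothing to the loss."""
    eps = 1e-8
    ll = y_true * np.log(y_pred + eps) + (1 - y_true) * np.log(1 - y_pred + eps)
    return -np.sum(observed * ll)

y_pred = np.array([[0.9, 0.2], [0.6, 0.5]])
y_true = np.array([[1.0, 0.0], [1.0, 0.0]])
observed = np.array([[1.0, 1.0], [1.0, 0.0]])  # last state unobserved
loss = masked_nll(y_pred, y_true, observed)
print(round(loss, 3))  # 0.839
```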

Prediction performance evaluation

Data pre-processing

We evaluated DeepCpG on different cell types profiled with scBS-seq [5] and scRRBS-seq [8].

scBS-seq-profiled cells contained 18 serum and 12 2i mESCs, which were pre-processed as described in Smallwood et al. [5], with reads mapped to the GRCm38 mouse genome. We excluded two serum cells (RSC27_4, RSC27_7) since their methylation pattern deviated strongly from the remaining serum cells.

scRRBS-seq-profiled cells were downloaded from the Gene Expression Omnibus (GEO GSE65364) and contained 25 human HCCs, six human hepatoblastoma-derived cells (HepG2) and six mESCs. Following Hou et al. [8], one HCC was excluded (Ca26) and we restricted the analysis to CpG sites that were covered by at least four reads. For HCCs and HepG2 cells, the position of CpG sites was lifted from GRCh37 to GRCh38, and for mESC cells from NCBIM37 to GRCm38, using the liftOver tool from the UCSC Genome Browser.

Binary CpG methylation states for both scBS-seq- and scRRBS-seq-profiled cells were obtained for CpG sites with mapped reads by defining sites with more methylated than un-methylated read counts as methylated, and un-methylated otherwise.
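This read-count rule is a one-liner; a sketch, with ties resolved to un-methylated as the text implies ("more methylated than un-methylated", otherwise un-methylated):

```python
def call_state(met_reads, unmet_reads):
    """Binarise one CpG site from read counts: methylated (1) if strictly
    more methylated than un-methylated reads, un-methylated (0) otherwise."""
    return 1 if met_reads > unmet_reads else 0

print(call_state(3, 1), call_state(1, 3), call_state(2, 2))  # 1 0 0
```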

Holdout validation

For all prediction experiments and evaluations, we used chromosomes 1, 3, 5, 7, 9 and 11 as the training set, chromosomes 2, 4, 6, 8, 10 and 12 as the test set and the remaining chromosomes as the validation set (Additional file 5). For each cell type, models were fitted on the training set, hyper-parameters were optimised on the validation set and the final model performance and interpretations were exclusively reported on the test set. For computing binary evaluation metrics, such as accuracy, F1 score or MCC score, predicted methylation probabilities greater than 0.5 were rounded to one and set to zero otherwise. Genomic context annotations as shown in Fig. 2d are described in Additional file 6.

The prediction performance of DeepCpG was compared with random forest classifiers trained on each cell separately, using either features similar to DeepCpG (RF) or genome annotation marks as described in Zhang et al. [12] (RF Zhang). Additionally, we considered two baseline models, which estimate missing methylation states by averaging observed methylation states, either across consecutive 3-kbp regions within individual cells (WinAvg) or across cells at a single CpG site (CpGAvg).

Window averaging (WinAvg)

For window averaging, the methylation rate ŷ nt of CpG site n and cell t was estimated as the mean of all observed CpG neighbours y n+k,t in a window of length W = 3001 bp centred on the target CpG site n.

ŷ nt was set to the mean genome-wide methylation rate of cell t if no CpG neighbours were observed in the window.
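The WinAvg baseline, including its fallback to the genome-wide mean, can be sketched as follows (a minimal sketch; `sites` and the argument names are hypothetical, mapping genomic position to the observed binary state for one cell):

```python
def win_avg(sites, target_pos, cell_mean, W=3001):
    """Mean observed methylation state within a window of length W centred
    on the target site; falls back to the cell's genome-wide mean if no
    neighbour is observed. `sites`: {genomic position: binary state}."""
    half = W // 2
    vals = [state for pos, state in sites.items()
            if pos != target_pos and abs(pos - target_pos) <= half]
    return sum(vals) / len(vals) if vals else cell_mean

sites = {100: 1, 600: 1, 1400: 0, 90_000: 1}
print(win_avg(sites, target_pos=700, cell_mean=0.8))  # mean of 1, 1, 0
```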

CpG averaging (CpGAvg)

For CpG averaging, the methylation rate ŷ nt of CpG site n in cell t was estimated as the average of the observed methylation states y nt′ across all remaining cells t′ ≠ t:

$$\hat{y}_{nt} = \frac{\sum_{t' \ne t} o_{nt'}\, y_{nt'}}{\sum_{t' \ne t} o_{nt'}}$$

ŷ nt was set to the genome-wide average methylation rate of cell t if no methylation states were observed in any of the other cells.

Random forest models (RF, RF Zhang)

Features of the RF model were i) the methylation state and distance of 25 CpG sites to the left and right of the target site (100 features) and ii) k-mer frequencies in the 1001-bp genomic sequence centred on the target site (256 features). The optimal parameter value for k (k = 4) was found using holdout validation (Additional file 1: Figure S21a).
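The k-mer frequency features of the RF model can be sketched in plain Python. The feature count is 4^k (256 for k = 4); for brevity this illustration uses k = 2:

```python
from itertools import product

def kmer_freqs(seq, k=4):
    """Frequencies of all 4**k DNA k-mers in seq, in a fixed order,
    normalised by the number of k-mer windows."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[m] / total for m in kmers]

feats = kmer_freqs("ACGTACGTAC", k=2)
print(len(feats))  # 16 features for k = 2
```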

The features for the RF Zhang model (Additional file 7) included i) the methylation state and distance of two CpG neighbours to the left and right of the target site (eight features), ii) annotated genomic contexts (23 features), iii) transcription factor binding sites (24 features), iv) histone modification marks (28 features) and v) DNaseI hypersensitivity sites (one feature). These features were obtained from the ChipBase database and UCSC Genome Browser for the NCBIM37 mouse genome and mapped to the GRCm38 mouse genome using the liftOver tool from the UCSC Genome Browser.

We trained a separate random forest model for each individual cell, as a pooled multi-cell model performed worse (Additional file 1: Figure S21b). Hyper-parameters, including the number of trees and the tree depth, were optimised for each cell separately on the validation set by random sampling. Random forest models were implemented using the RandomForestClassifier class of the scikit-learn v0.17 Python package.

Motif analysis

The motif analysis as presented in the main text was performed using the DNA module trained on serum mESCs. Motifs discovered for 2i cells, HCCs, HepG2 cells and mESCs are provided in Additional file 3. In the following, "motifs" refers to the filters of the first convolutional layer of the DNA module.

Visualization, motif comparison, Gene Ontology analysis

Filters of the convolutional layer of the DNA module were visualised by aligning sequence fragments that maximally activated them. Specifically, the activations of all filter neurons were computed for a set of sequences. For each sequence s n and filter f of length L, sequence windows s n,i−L/2, …, s n,i+L/2 were selected if the activation a nfi of filter f at position i (Eq. 1) was greater than 0.5 of the maximum activation of f over all sequences n and positions i, i.e. a nfi > 0.5 max n,i(a nfi). Selected sequence windows were aligned and visualised as sequence motifs using WebLogo [69] version 3.4.
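The window-selection rule can be sketched as follows. This is a simplified illustration, not the DeepCpG analysis code: for clarity it indexes windows by their start position rather than centring them on the activation position, and it skips the alignment step:

```python
import numpy as np

def select_windows(seqs, acts, L, tau=0.5):
    """Collect subsequences whose filter activation exceeds tau times
    the global maximum. acts[n][i] is the activation for sequence n at
    window start i (window-start indexing, a simplification)."""
    thresh = tau * max(a.max() for a in acts)
    selected = []
    for seq, a in zip(seqs, acts):
        for i, v in enumerate(a):
            if v > thresh:
                selected.append(seq[i:i + L])
    return selected

seqs = ["ATGCGCTA", "GGCGCGTT"]
acts = [np.array([0.1, 0.9, 0.2, 0.0, 0.4]),
        np.array([0.8, 0.1, 0.6, 0.2, 0.0])]
print(select_windows(seqs, acts, L=4))  # ['TGCG', 'GGCG', 'CGCG']
```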

Motifs discovered by DeepCpG were matched to annotated motifs in the Mus musculus CIS-BP [42] and UniPROBE [43] database (version 12.12, updated 14 Mar 2016), using Tomtom 4.11.1 from the MEME-Suite [70]. Matches at FDR <0.05 were considered as significant.

For Gene Ontology enrichment analysis, the web interface of the GOMo tool of MEME-Suite was used.

Quantification of motif importance

Two metrics were used to quantify the importance of filters: their activity (occurrence frequency) and their influence on model predictions.

Specifically, the activity of filter f for a set of sequences, e.g. within a certain genomic context, was computed as the average of mean sequence activities ā nf, where ā nf denotes the weighted mean of activities a nfi across all window positions i (Eq. 1). A linear weighting function that assigns the highest relative weight to the centre position was used to compute ā nf.

The influence of filter f on the predicted methylation states ŷ nt of cell t was computed as the Pearson correlation r ft = cor n(ā nf, ŷ nt) over CpG sites n, and the mean influence r f over all cells by averaging r ft.
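The influence metric reduces to per-cell Pearson correlations followed by an average over cells; a minimal sketch with synthetic data (the variable names and the toy predictions are hypothetical):

```python
import numpy as np

def influence(a_bar, y_hat):
    """Pearson correlation between mean sequence activities of one filter
    and predicted methylation rates, per cell, then averaged over cells.
    a_bar: shape (N,) activities; y_hat: shape (N, T) predictions."""
    r_per_cell = [np.corrcoef(a_bar, y_hat[:, t])[0, 1]
                  for t in range(y_hat.shape[1])]
    return float(np.mean(r_per_cell))

# Synthetic example: predictions built to track the activities closely.
rng = np.random.default_rng(1)
a_bar = rng.random(50)
y_hat = np.clip(a_bar[:, None] + 0.1 * rng.standard_normal((50, 3)), 0, 1)
print(round(influence(a_bar, y_hat), 2))  # strongly positive by construction
```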

Motif co-occurrence

The co-occurrence of filters was quantified using principal component analysis on the mean sequence activations ā nf (Fig. 3) and pairwise correlations between mean sequence activations (Additional file 1: Figure S10).

Conservation analysis

The association between filter activities ā nf and sequence conservation was assessed using Pearson correlation. PhastCons [71] conservation scores for the Glire subset (phastCons60wayGlire) were downloaded from the UCSC Genome Browser and used to quantify sequence conservation.

Effect of sequence and methylation state changes

We used gradient-based optimization as described in Simonyan et al. [55] to quantify the effect of changes in the input sequence s n on predicted methylation rates ŷ nt(s n). Specifically, let ŷ n(s n) = mean t(ŷ nt(s n)) be the mean predicted methylation rate across cells t. Then the effect $e^s_{nid}$ of changing nucleotide d at position i was quantified as:

$$e^s_{nid} = \frac{\partial \hat{y}_n}{\partial s_{nid}} \left( 1 - s_{nid} \right)$$

Here, the first term is the first-order gradient of ŷ n with respect to s nid and the second term sets the effect of wild-type nucleotides (s nid = 1) to zero. The overall effect score $e^s_{ni}$ at position i was computed as the maximum absolute effect over all nucleotide changes, i.e. $e^s_{ni} = \max_d |e^s_{nid}|$. The overall effect of changes at position i as shown in Fig. 3b was computed as the mean effect $e^s_i = \mathrm{mean}_n(e^s_{ni})$ across all sequences n. For the mutation analysis shown in Additional file 1: Figure S13, $e^s_{ni}$ was correlated with PhastCons (phastCons60wayGlire) conservation scores. For quantifying the effect of methylation QTLs (mQTLs) as shown in Additional file 1: Figure S14, we obtained mQTLs from the supplementary table of Kaplow et al. [56] and used the DeepCpG DNA module trained on HepG2 cells to compute effect scores for true mQTL variants. Non-mQTL variants were randomly sampled within the same sequence windows, distance-matched to real mQTL variants.
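Once the gradient is available, the effect computation reduces to elementwise operations; a minimal sketch with hypothetical toy inputs (in practice the gradient comes from backpropagation through the trained DNA module):

```python
import numpy as np

def effect_scores(grad, s):
    """Per-position effect of nucleotide changes.
    grad[i, d] approximates d y_hat / d s[i, d]; entries for wild-type
    nucleotides (s == 1) are zeroed, then the max absolute effect over
    the four nucleotides is taken at each position."""
    e = grad * (1.0 - s)           # zero out wild-type nucleotides
    return np.abs(e).max(axis=1)   # overall per-position score

s = np.array([[1, 0, 0, 0],        # one-hot wild-type sequence (2 positions)
              [0, 0, 1, 0]], dtype=float)
grad = np.array([[0.5, -0.8, 0.1, 0.0],
                 [0.2, 0.3, 0.9, -0.4]])
print(effect_scores(grad, s))      # [0.8 0.4]
```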

Predicting cell-to-cell variability

For predicting cell-to-cell variability (variance) and mean methylation levels, we trained a second neural network with the same architecture as the DNA module, except for the output layer. Specifically, output neurons were replaced by neurons with a sigmoid activation function to predict for a single CpG site n both the mean methylation rate m̂ ns and cell-to-cell variance v̂ ns within a window of size s ∈ {1000, 2000, 3000, 4000, 5000} bp. Multiple window sizes were used to obtain predictions at different scales, using a multi-task architecture, thereby mitigating the uncertainty of mean and variance estimates in low-coverage regions. For training the resulting model, parameters were initialised with the corresponding parameters of the DNA module and fine-tuned, except for motif parameters of the convolutional layer. The training objective was:

$$\mathcal{L} = \mathrm{MSE}(\hat{m}, m) + \mathrm{MSE}(\hat{v}, v),$$

where MSE is the mean squared error between model predictions and training labels:

$$\mathrm{MSE}(\hat{u}, u) = \mathrm{mean}_{n,s}\big( (\hat{u}_{ns} - u_{ns})^2 \big).$$

m ns is the estimated mean methylation level for a window centred on target site n of a certain size indexed by s:

$$m_{ns} = \mathrm{mean}_t(m_{nst}).$$

Here, m nst denotes the estimated mean methylation rate of cell t computed by averaging the binary methylation state y it of all observed CpG sites Y nst in window s:

$$m_{nst} = \frac{1}{|Y_{nst}|} \sum_{i \in Y_{nst}} y_{it},$$

and v ns denotes the estimated cell-to-cell variance:

$$v_{ns} = \mathrm{mean}_t\big( (m_{nst} - m_{ns})^2 \big).$$

Identifying motifs associated with cell-to-cell variability

The influence $r^v_{fs}$ of filter f on cell-to-cell variability in windows of size s was computed as the Pearson correlation between mean sequence filter activities ā nf and predicted variance levels v̂ ns of sites n:

$$r^v_{fs} = \mathrm{cor}_n(\bar{a}_{nf}, \hat{v}_{ns})$$

The influence $r^m_{fs}$ on predicted mean methylation levels m̂ ns was computed analogously. The difference $r^d_{fs} = |r^v_{fs}| - |r^m_{fs}|$ between the absolute value of the influence on variance and mean methylation levels was used to identify motifs that were primarily associated with cell-to-cell variance ($r^d_{fs}$ > 0.25) or with changes in mean methylation levels ($r^d_{fs}$ < −0.25).

Functional validation of predicted variability

For functional validation, methylation–transcriptome linkages as reported in Angermueller et al. [10] were correlated with the predicted cell-to-cell variability. Specifically, let $r^e_{ij}$ be the linkage between expression levels of gene i and the mean methylation levels of an adjacent region j [10]. We then correlated the average predicted variability over all CpG sites within context j with the FDR-adjusted P values of these linkages, over genes i and contexts j.

Trillion-Dollar Jet Has Thirteen Expensive New Flaws


The most expensive weapons program in U.S. history is about to get a lot pricier.

The F-35 Joint Strike Fighter, meant to replace nearly every tactical warplane in the Air Force, Navy and Marine Corps, was already expected to cost $1 trillion for development, production and maintenance over the next 50 years. Now that cost is expected to grow, owing to 13 different design flaws uncovered in the last two months by a hush-hush panel of five Pentagon experts. It could cost up to a billion dollars to fix the flaws on copies of the jet already in production, to say nothing of those yet to come.

In addition to costing more, the stealthy F-35 could take longer to complete testing. That could delay the jet's combat debut to sometime after 2018 -- seven years later than originally planned. And all this comes as the Pentagon braces for big cuts to its budget while trying to save cherished but costly programs like the Joint Strike Fighter.

Frank Kendall, the Pentagon's top weapons-buyer, convened the so-called "Quick Look Review" panel in October. Its report -- 55 pages of dense technical jargon and intricate charts -- was leaked this weekend. Kendall and company found a laundry list of flaws with the F-35, including a poorly placed tail hook, lagging sensors, a buggy electrical system and structural cracks.

Some of the problems -- the electrical bugs, for instance -- were becoming clear before the Quick Look Review; others are brand-new. The panelists describe them all in detail and, for the first time, connect them to the program's underlying management problems. Most ominously, the report mentions -- but does not describe -- a "classified" deficiency. "Dollars to doughnuts it has something to do with stealth," aviation guru Bill Sweetman wrote. In other words, the F-35 might not be as invisible to radar as prime contractor Lockheed Martin said it would be.

The JSF's problems are exacerbated by a production plan that Vice Adm. David Venlet, the government program manager, admitted two weeks ago represents "a miscalculation." Known as "concurrency," the plan allows Lockheed to mass-produce jets -- potentially hundreds of them -- while testing is still underway. It's a way of ensuring the military gets combat-ready jets as soon as possible, while also helping Lockheed to maximize its profits. That's the theory, at least.

"Concurrency is present to some degree in virtually all DoD programs, though not to the extent that it is on the F-35," the Quick Look panelists wrote. The Pentagon assumed it could get away with a high degree of concurrency owing to new computer simulations meant to take the guesswork out of testing. "The Department had a reasonable basis to be optimistic," the panelists wrote.

But that optimism proved unfounded. "This assessment shows that the F-35 program has discovered and is continuing to discover issues at a rate more typical of early design experience on previous aircraft development programs," the panelists explained. Testing uncovered problems the computers did not predict, resulting in 725 design changes while new jets were rolling off the factory floor in Fort Worth, Texas.

And every change takes time and costs money. To pay for the fixes, this year the Pentagon cut its F-35 order from 42 to 30. Next year's order dropped from 35 to 30. "It's basically sucked the wind out of our lungs with the burden, the financial burden," Venlet said.

News of more costs and delays could not have come at a worse time for the Joint Strike Fighter. The program has already been restructured twice since 2010, each time getting stretched out and more expensive. In January, then-Secretary of Defense Robert Gates put the Marines' overweight F-35B variant, which is designed to take off and land vertically, on probation. If Lockheed couldn't fix the jump jet within two years, "it should be cancelled," Gates advised.


What do you think when you look at your skin in the mirror? Do you think about covering it with makeup, adding a tattoo, or maybe a body piercing? Or do you think about the fact that the skin belongs to one of the body’s most essential and dynamic systems: the integumentary system? The integumentary system refers to the skin and its accessory structures, and it is responsible for much more than simply lending to your outward appearance. In the adult human body, the skin makes up about 16 percent of body weight and covers an area of 1.5 to 2 m². In fact, the skin and accessory structures are the largest organ system in the human body. As such, the skin protects your inner organs and it is in need of daily care and protection to maintain its health. This chapter will introduce the structure and functions of the integumentary system, as well as some of the diseases, disorders, and injuries that can affect this system.


Want to cite, share, or modify this book? This book is licensed under the Creative Commons Attribution License 4.0, and you must attribute OpenStax.

    If you are redistributing all or part of this book in a print format, then you must include on every physical page the following attribution:

  • Use the information below to generate a citation.
    • Authors: J. Gordon Betts, Kelly A. Young, James A. Wise, Eddie Johnson, Brandon Poe, Dean H. Kruse, Oksana Korol, Jody E. Johnson, Mark Womble, Peter DeSaix
    • Publisher/website: OpenStax
    • Book title: Anatomy and Physiology
    • Publication date: Apr 25, 2013
    • Location: Houston, Texas
    • Book URL:
    • Section URL:

    © Sep 11, 2020 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License 4.0 license. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


    These guidelines were originally developed in February 2020 by a consortium of 20 UK-based organisations, as a letter to the President of CoP26, Alok Sharma, to encourage adoption of the guidelines by other Parties to the UN Framework Convention on Climate Change (UNFCCC).

    In May 2020 the guidelines were adopted by the Together With Nature campaign, a call to corporate leaders to commit to four principles for investing in nature-based solutions.

    For a detailed explanation of why these guidelines are needed, with full references, see the open-access peer-reviewed article “Getting the message right on nature-based solutions for climate change”. The guidelines are designed to inform the planning, implementation and evaluation of NbS projects. In order to meet the guidelines, practitioners should set goals and quantitative targets relating to each guideline, monitor progress towards these targets using comprehensive metrics, and use adaptive management to improve outcomes. The guidelines are intended to complement the more detailed IUCN Global Standard for Nature-based Solutions.

    The wording of the guidelines was improved in February 2021. As public and policy interest in NbS is growing rapidly, we are promoting these guidelines to encourage their broad adoption by businesses and governments. The goal is to ensure investment in NbS is channeled to the best biodiversity-based and community-led NbS and does not distract from or delay urgent action to decarbonise the economy. To build momentum around this in the run-up to the UNFCCC’s CoP26, we are now inviting additional signatories from research, conservation, and development organisations across the globe.

    Become a signatory

    If you would like to add your organisation as a signatory to this letter, please send your logo to:

    Translating the new genetics of temperament for research and practice

    The first and major implication of the new genetic findings is a precise definition of temperament, which is a fundamental need for good communication and incremental research progress within any scientific field. Based on the findings reviewed here, we propose the following definition: Temperament is the disposition of a person to learn how to behave, react emotionally, and form attachments automatically by associative conditioning (that is, rapidly and spontaneously, without conscious attention or reflection in response to changing internal and external conditions). Each part of the definition outside the explanation in parentheses is essential: (1) temperament is the organization within an individual (i.e., a disposition, or set of distinguishing features) of how a person learns, not what, when, where, or why they learn; it involves the form and style of how a person learns; (2) the characteristic features involve what can be learned by associative conditioning, which includes habitual patterns of behavior, emotional reactions, and attachments; (3) learning by associative conditioning in response to changing conditions is automatic and spontaneous (that is, without delay for conscious attention or reflection).

    We propose that these criteria are necessary and sufficient to define temperament precisely. Our proposed definition is sufficient because it implies all the traditional criteria proposed for temperament, and it is necessary because the other criteria are non-specific when used individually or in combination. From this basic definition, it follows that the predisposition to temperament is innate and heritable, but its expression may change in response to associative conditioning, which can be modified by brain development or injury and by its integration with other systems of learning and memory related to other aspects of personality involving self-regulatory processes for intentional self-control and creative self-awareness. Associative conditioning is highly conserved in all animals, whereas intentional self-control emerged only in higher primates and self-awareness in human beings 25,26,27 . The integration of these systems is manifest in the complex and dynamic patterns of development that are observed for personality, language, art, and science across the life span of a person in response to changing conditions 27 .
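The form of learning that this definition rests on, associative conditioning, has a standard formal model: the Rescorla-Wagner rule, in which associative strength is updated automatically on every trial. The following sketch is illustrative only; the learning rate, asymptote, and trial setup are assumptions for the example, not values from the article.

```python
# Minimal sketch of associative conditioning via the Rescorla-Wagner rule.
# alpha here stands in for the combined learning-rate parameters; lam is the
# asymptote of learning. Both values are illustrative assumptions.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Return associative strength V after each conditioning trial.
    trials: list of booleans, True if the unconditioned stimulus is present."""
    v, history = 0.0, []
    for us_present in trials:
        target = lam if us_present else 0.0
        v += alpha * (target - v)   # automatic, trial-by-trial update
        history.append(v)
    return history

# Ten reinforced trials: strength rises spontaneously toward the asymptote,
# with no step corresponding to conscious reflection -- the "automatic"
# character the definition emphasizes.
strengths = rescorla_wagner([True] * 10)
print(round(strengths[0], 2), round(strengths[-1], 2))
```

The update happens on every trial as a fixed function of prediction error, which is one way to make concrete the claim that temperament-style learning proceeds "without delay for conscious attention or reflection."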

    We suggest that the proposed definition of temperament captures all the traditional concepts with specificity and precision, distinguishing it from other aspects of personality with which it becomes integrated during development. For example, a temperament can be unambiguously distinguished by heritable differences in behavioral conditioning: what is inherited as temperament is limited to the habit learning system, the component of procedural learning that is evolutionarily conserved in all animals. Cognitive systems for intentional self-control that emerged in higher primates may begin to interact with temperament from an early age 19 , but they involve fundamentally distinct molecular processes and brain structures from those of temperament 42,43 . This definition yields the expected features of appearance in early childhood, prominence of basic emotions and automatic behaviors, and moderate stability over time, which also distinguish temperament from other aspects of personality, as summarized in Table 1.

    An alternative definition is that “temperament refers to neurochemically based individual differences in the regulation of formal dynamical aspects of behavior 22 .” Reference to the formal dynamical aspects of behavior, as Strelau did (see Table 2), is useful to exclude character, but it does not capture the rhythmicity and responsiveness to physiological stimuli (e.g., hot/cold, wet/dry, light/dark) that is prominent in classical concepts of temperament (Supplementary Table 1), the prominence of social attachments (sociable/aloof) (Tables 2–4), or the molecular processes for regulation of diurnal and seasonal rhythms that we identified as fundamental features of the molecular pathways underlying temperament (Tables 2 and 6). We propose that only the form of learning (i.e., associative conditioning) and its evolutionary conservation are necessary and sufficient criteria for temperament because of the non-specificity of other criteria.

    Several traditions that have approached temperament in different ways 17 can now be recognized as converging and providing complementary information about how temperament and other aspects of personality develop across the life span. Defining temperament in terms of a specific and heritable form of learning makes it clear that distinctions between nature and nurture, biology and learning, genes and environment are inadequate. Temperament is the manifestation of a specific form of learning and memory, which is a non-linear dynamical process associated with complex patterns of inheritance and development. Individual differences in these adaptive processes are being investigated in terms of specific human brain functions using brain-imaging techniques 96,97,126,183 .

    The temperament and character domains of personality do not function independently, so it is not surprising that investigators interested in temperament or personality often address similar questions. At times the overlap and interaction of temperament and character have led to confusion about what belongs to which domain because people function as whole organisms embedded in the world. We have identified the networks that integrate these domains and described their architecture, but there remains a need for further research to understand the integrative processes that bring the emotional reactivity of temperament together in a balanced way with the emotional regulation of character.

    Personality research has closely aligned itself with temperament research through its emphasis on stability and its use of similar methods based on assumptions of linear structure. However, it is crucial to recognize that personality has a complex biopsychosocial structure that is a product of interactions among multiple systems of learning and memory that are dissociable functionally and developmentally.

    Our findings about the complex genetics of temperament and character can best be understood from an evolutionary-developmental perspective, which helps to explain the adaptive functions of the molecular processes that distinguish temperament from other aspects of personality. The functions of the Ras-MEK-ERK and PI3K-AKT-mTOR pathways serve to maintain cellular homeostasis, healthy functioning, and repair of injury and degeneration despite diurnal, seasonal, and climatic changes in a person’s internal and external environment. Diverse stimuli can activate the molecular systems underlying temperament in coordinated ways that provide opportunities for effective interventions. However, there is great need for clinical trials to clarify how to use these natural stimuli effectively. As we begin to recognize that the psychobiological and genetic networks that regulate health and well-being correspond to systems of learning and memory, we have the opportunity and responsibility to develop and advocate an evidence-based approach to psychiatry that integrates knowledge about molecular, neurobiological, and psychosocial processes. The molecular aspects of psychiatry are only one level of organization that helps to open our eyes to the full multi-level organization of human functioning.

    We have found that combining genotypic and phenotypic information does provide more information about health than does phenotypic information alone 25,26 . Consequently, genotypic panels for assessing the health propensities of people based on their personality are likely to be developed and offered commercially, as is being done for complex medical disorders. However, what has not been acknowledged by such commercial ventures is that the development of common disorders is highly complex and depends on the interaction of many sets of genotypic and environmental variables. Polygenic risk scores are not adequate for precise assessment of temperament because they rely on the average effects of genes acting independently, which can provide only weak and inconsistent information about personal health or risks of complex phenotypes in a specific individual (Supplementary Table 6) 107 . Even when complex phenomena (i.e., pleiotropy, epistasis, and gene-environment interaction) are taken into account, it turns out that the same genotypic profiles can be expressed in ways that are either healthy or unhealthy because of differences in the coherence of processes that regulate expression of genes and co-expression of sets of genes, often involving long non-coding RNA genes or a few “switch genes” that distinguish healthy and unhealthy character profiles 25,27 . For example, every possible TCI temperament profile can be either healthy or unhealthy, depending on a person’s character profile; there are average differences in risk between profiles, but nothing can be said about how healthy a particular individual is from their temperament alone 14 .
Until we learn more about the processes that regulate the expression of protein-coding genes 27 , the additional costs and worries introduced by genetic testing of personality and/or common diseases may be unjustified when most information of practical value for personalized treatment planning is provided by improved phenotypic assessment at a lower cost. In addition, there are serious ethical issues concerning germline editing of the human genome to modify heritable human traits 184 . Our current reservations about the merits and dangers of introducing genotypic panels for enhanced personality assessment will need to be revisited once we gain more knowledge about the regulation of co-expression of sets of genes that lead to well-being and ill-being.
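The additive-independence assumption that the critique of polygenic risk scores targets can be made concrete with a minimal sketch: a PRS is conventionally a weighted sum of risk-allele counts, with each variant contributing independently. The variant IDs and effect sizes below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Minimal sketch of a conventional polygenic risk score: a weighted sum of
# per-variant allele dosages, assuming purely additive, independent effects.
# Variant IDs and effect sizes are hypothetical examples, not real GWAS values.

def polygenic_risk_score(dosages, weights):
    """Sum of allele dosage (0, 1, or 2) times per-variant effect size."""
    return sum(dosages[variant] * w for variant, w in weights.items())

# Hypothetical per-variant effect sizes (e.g., log odds ratios)
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
# One individual's allele counts at those variants
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

score = polygenic_risk_score(dosages, weights)
print(round(score, 2))  # 2*0.12 + 1*(-0.05) + 0*0.30 = 0.19
```

Because the score is a plain linear sum, it has no terms for epistasis, gene-environment interaction, or coordinated regulation of gene co-expression, which is exactly the limitation the paragraph above describes.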

    Psychopharmacology has already made substantial advances in developing treatments designed to target specific receptors, which can be an effective strategy when a small number of receptors cause a disorder consistently. However, when heterogeneous disorders depend on complex interactions among many genes and environmental variables, it is difficult or impossible to design interventions that are broadly effective and well tolerated.

    Fortunately, we already know that the molecular mechanisms underlying temperaments evolved to help organisms adapt to naturally occurring physiological, psychosocial, and energetic stimuli, as was observed in antiquity. What is most important now is to consider how our molecular and clinical observations can be translated into useful interventions for disease reduction and health promotion. Use of cold (e.g., cryotherapy) 185,186 , heat (e.g., infrared light therapy) 187 , light exposure (e.g., bright light therapy) 188,189 , patterned EMF (e.g., transcranial magnetic stimulation) 167 , and lifestyle adjustments to optimize hydration, nutrition, exercise, and sleep 190,191 have been widely advocated, but often produce weak and inconsistent results, particularly when there is inadequate motivation for change 192 or limited understanding of the underlying mechanisms and the parameters critical for efficacy 193,194 .

    Furthermore, there is extensive evidence that treatments of temperament are most effective when treatment addresses all three systems of learning and memory in a coordinated manner: behavioral conditioning, intentional self-control, and self-aware evaluation need to be integrated in order to be strongly and consistently effective in promoting health and well-being 27,190,195,196,197,198 . Put another way, relating a person’s current well-being to both their temperament and character provides powerful motivation for a person to change 199 . Fortunately, such thorough phenotypic assessments can also be expected to improve clinical trials by increasing study power in moderate-sized samples with stronger and more consistent results than have been obtained in poorly characterized and heterogeneous groups of subjects 107 .
