December 27, 2022

The 500k Apex

I am pleased to report that as of the beginning of December 2022, the Synthetic Daisies blog has reached 500,000 reads. The blog started over the 2008 Holiday season (14 years ago) and took about a year to garner significant readership. Over the years, Synthetic Daisies has participated in the Carnival of Evolution and several SciComm (Science Communication) initiatives, and has charted the research progress of a number of projects (including the DevoWorm group).

Number of posts by year. Click to enlarge.

Although my time investment has declined in recent years, the posts have continued to accumulate reads. Synthetic Daisies has a GitHub repository and overlay site where some of the more popular posts are profiled. The GitHub repo also features associated code and other items from these posts.

Number of readers over the course of 14 years. Click to enlarge.

More to report in another 200,000 to 300,000 reads!

December 23, 2022

Learning on Graphs (LoG) conference recap

 


The Learning on Graphs (LoG) conference took place from December 9-12 and featured a broad diversity of research on Graph Neural Networks (GNNs). GNNs [1] are a relatively new area of machine learning research with a number of interesting connections to applied math and network science. The daily sessions (keynote talks and oral presentations), in addition to the seven workshop sessions, are available from the conference YouTube channel.

GNNs are a way to take data that exhibit graphical relationships in the real world and analyze them using the power of neural networks. GNNs are specialized for problems that can be represented as a graph (discrete, interconnected systems), but any problem with a set of complex geometric relationships is a candidate for them. While the outputs of GNNs are typically embeddings (graph topologies embedded in a feature space), some problems require different outputs, such as functions or more formal representations.

It is the analysis of these graphical relationships that makes GNNs such a useful analytical approach. In all their forms, GNNs yield useful representations of graph data partly because they take into consideration the intrinsic symmetries of graphs, such as invariance and equivariance of graph topology with respect to a relabeling of the nodes [2]. Based on what was featured at LoG, GNNs have many potential applications in the biological arena, including precision medicine, drug discovery, and characterizing molecular systems (see Stefan Gunnemann's (Technical University of Munich) talk in the Friday session).
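To make the symmetry point concrete, here is a minimal NumPy sketch (a toy graph and random weights of my own devising, not drawn from any LoG talk): a sum-aggregation layer commutes with any relabeling of the nodes (equivariance), while a summed readout ignores the relabeling entirely (invariance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: adjacency matrix A and node features X (4 nodes, 3 features).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 3))  # shared weight matrix

def gnn_layer(A, X, W):
    """One sum-aggregation layer: each node pools its neighbors' features."""
    return np.tanh(A @ X @ W)

# Relabel the nodes with a permutation matrix P.
P = np.eye(4)[[2, 0, 3, 1]]

out = gnn_layer(A, X, W)
out_perm = gnn_layer(P @ A @ P.T, P @ X, W)

# Equivariance: permuting the input permutes the output the same way.
assert np.allclose(out_perm, P @ out)
# Invariance: a summed readout over nodes ignores the relabeling entirely.
assert np.allclose(out_perm.sum(axis=0), out.sum(axis=0))
```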


GNNs can be evaluated using the graph isomorphism (or k-WL) test. Given that a graph can be drawn from the source data, the source data graph should be isomorphic with the output graph. The Weisfeiler-Lehman heuristic for graph isomorphism can be summarized in the one-dimensional (1-WL) case as the color refinement algorithm. A related issue in GNN research is algorithmic expressiveness: expressivity is the breadth of ideas that can be represented and communicated using a particular type of representation. One current challenge for GNNs, as they are applied to various problem domains, is their ability to be functionally robust. One solution is to use GNNs as a generative model. Generating alternate graph representations allows us to use graphons [3], limit functions that capture families of graphs of the same type. The collection of graphs associated with a graphon can then be evaluated. Soledad Villar's (Johns Hopkins) presentation during the Sunday session featured an in-depth discussion of expressiveness and graphons as they relate to GNN performance.
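For intuition about color refinement and its expressiveness limits, here is a compact, simplified sketch of my own (it compares color histograms only, so it is a necessary-but-not-sufficient check for isomorphism). It fails on a classic pair of non-isomorphic graphs, which is exactly the kind of limit the expressiveness literature addresses:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement: repeatedly hash each node's color together
    with the sorted multiset of its neighbors' colors."""
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(rounds):
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return Counter(colors.values())  # histogram of final colors

# Two graphs 1-WL cannot distinguish: a 6-cycle versus two disjoint
# triangles (both are 2-regular on six nodes).
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_colors(cycle6) == wl_colors(triangles))  # True, yet not isomorphic
```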



GNNs can be combined with various analytical techniques traditionally used in complex network analysis. These include the use of random graphs and stochastic block models to uncover the presence of topological structure and community formation, respectively. GNNs have ties to category theory as well: the cats.for.ai workshop (October 2022) featured applications of category theory to GNNs, and in the Saturday session, Taco Cohen (Qualcomm AI) discussed how the techniques of category theory, monads in particular, can be applied to GNNs. GNNs can also form directed acyclic graphs (DAGs), which are amenable to causal models.
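As a quick illustration of the network science side (the parameter values below are arbitrary choices of mine), networkx can generate a stochastic block model with planted communities and compare its modularity against a density-matched random graph:

```python
import networkx as nx

# Planted two-community structure: dense within blocks, sparse between.
sizes = [30, 30]
probs = [[0.25, 0.02],
         [0.02, 0.25]]
G = nx.stochastic_block_model(sizes, probs, seed=42)

# An Erdos-Renyi graph with matched edge density serves as a null model.
R = nx.erdos_renyi_graph(sum(sizes), nx.density(G), seed=42)

comm_G = nx.algorithms.community.greedy_modularity_communities(G)
comm_R = nx.algorithms.community.greedy_modularity_communities(R)

# The planted graph should score much higher modularity than the null model.
print(nx.algorithms.community.modularity(G, comm_G))
print(nx.algorithms.community.modularity(R, comm_R))
```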



GNNs are constructed using a series of inferential techniques. One technique discussed at LoG is the message passing neural network (MPNN). Discrete forward passes from node to node (along edges) allow an approximation of the true, original network topology to be reconstructed. Message passing is a standard technique that lends itself to a wide variety of problem domains. The MPNN approach [4] can be extended to directed multigraphs and other types of graphs that capture complex systems, but it can suffer from shortcomings such as over-smoothing, over-squashing, and under-reaching. While message passing has been the standard in the GNN field, continuous methods inspired by differential geometry and algebraic topology might serve as powerful alternatives [5]. Aside from approximations of real-world networks and graph-like structures, we can also think of GNN outputs in terms of time (capturing delays) and space (capturing translations). GNNs are also well-suited to mapping problems from algorithmic domains, in particular dynamic programming [6].
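Returning to the message-passing skeleton itself, here is a minimal sketch of one round in plain NumPy, with toy linear message and update functions (real MPNNs typically use learned MLPs or GRUs; this only shows the shape of the computation):

```python
import numpy as np

rng = np.random.default_rng(1)

def mpnn_layer(edges, h, W_msg, W_upd):
    """One message-passing round over a directed edge list:
    message -> aggregate (sum) -> update, as in the MPNN framework."""
    agg = np.zeros_like(h)
    for src, dst in edges:                 # messages flow along edges
        agg[dst] += h[src] @ W_msg         # linear message function
    return np.tanh(h @ W_upd + agg)        # toy state update

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a small directed graph
h = rng.normal(size=(4, 8))                # initial node states
W_msg, W_upd = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

for _ in range(3):                         # k rounds reach k-hop neighbors
    h = mpnn_layer(edges, h, W_msg, W_upd)
```

Stacking k rounds lets information travel k hops, which is also where over-smoothing and under-reaching enter the picture.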


GNNs lend themselves to task-specific architectures. The DevoWorm group's D-GNN work (DevoGraph) is an example of this, being specialized for embryogenetic image processing and for capturing biological growth and differentiation processes. But GNNs can also engage in transfer learning, which is the transfer of learned information from one context to another. Successful graph transfer learning is characterized by reproducing a graph of similar structure but different size, or by handling problems that require changes in network size over time.


From "Do we need deep graph neural networks?" by Michael Bronstein, Towards Data Science, July 20, 2020.


Workshops

Several of the workshops were particularly interesting with respect to some of the points mentioned above. There were also a number of outstanding oral presentations and posters not discussed here, which are worth checking out in the daily session recordings or on OpenReview.


Neural Algorithmic Reasoning (video). GNNs serve as excellent processors (neural networks in latent space) that can be aligned with more traditional algorithms [7]. This recasts many optimization problems as neural representation learning, particularly in cases where optimization algorithms do not represent the system being analyzed in a realistic manner.
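The textbook alignment example is shortest paths: one round of message passing has the same shape as one relaxation round of Bellman-Ford. A plain-Python sketch of the classical algorithm, phrased in message-passing terms (this is the target algorithm itself, not a learned model):

```python
import math

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0)]  # (u, v, weight)
dist = {0: 0.0, 1: math.inf, 2: math.inf, 3: math.inf}

def relax_round(dist, edges):
    """One Bellman-Ford round as message passing: each node takes the
    min over messages dist[u] + w arriving along its incoming edges."""
    new = dict(dist)
    for u, v, w in edges:
        new[v] = min(new[v], dist[u] + w)
    return new

for _ in range(len(dist) - 1):   # n-1 rounds guarantee convergence
    dist = relax_round(dist, edges)
print(dist)  # {0: 0.0, 1: 3.0, 2: 1.0, 3: 8.0}
```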



Expressive GNNs (video). This tutorial covers a range of techniques that can be used to increase the expressivity of GNNs. Borrowing from areas such as topological data analysis and group theory, it points to a variety of highly effective strategies for improving GNN architectures for a host of problems.


Graph Rewiring (video, web). Graph rewiring is presented as a way to overcome the limitations of the MPNN approach. Rewiring is based on the reconstruction of graph edges from iterative adaptive sampling of the input data. A number of different methods, such as diffusion and spectral approaches, allow us to evaluate edge relevance.
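A sketch of the diffusion flavor (my own illustration in the style of personalized-PageRank kernels, not the workshop's exact method): compute a diffusion kernel over the graph, then keep each node's top-k most relevant partners as the new edge set.

```python
import numpy as np

def ppr_rewire(A, alpha=0.15, k=2):
    """Diffusion-based rewiring sketch: compute a personalized-PageRank
    kernel S = alpha * (I - (1 - alpha) * A_hat)^-1, then keep each
    node's top-k most relevant partners as its new out-edges."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    A_hat = A / deg[:, None]                 # row-normalized transition matrix
    S = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A_hat)
    np.fill_diagonal(S, 0.0)                 # ignore self-relevance
    new_A = np.zeros_like(A)
    for i in range(n):
        for j in np.argsort(S[i])[-k:]:      # top-k by diffusion weight
            new_A[i, j] = 1.0
    return new_A

# A 5-node path graph: rewiring can add shortcuts between distant nodes.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(ppr_rewire(A))
```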


GNNs on TensorFlow (video). This tutorial introduces nascent modelers to implementing their own GNN models in the open-source TF-GNN framework. The tutorial uses heterogeneous input data to show how to implement the GNN and deal with missing label and edge information.


References

[1] Sanchez-Lengeling, B., Reif, E., Pearce, A., and Wiltschko, A.B. (2021). A Gentle Introduction to Graph Neural Networks. Distill, doi:10.23915/distill.00033.


[2] Chen, Z., Villar, S., Chen, L., and Bruna, J. (2019). On the equivalence between graph isomorphism testing and function approximation with GNNs. Proceedings of Neural Information Processing Systems, 32.

[3] Ruiz, L., Chamon, L.F.O., and Ribeiro, A. (2020). Graphon Neural Networks and the Transferability of Graph Neural Networks. arXiv, 2006.03548.

[4] Heydari, S. and Livi, L. (2022). Message Passing Neural Networks for Hypergraphs. arXiv, 2203.16995.

[5] Bronstein, M. (2022). Beyond Message Passing: a Physics-Inspired Paradigm for Graph Neural Networks. The Gradient, May 7.

[6] Dudzik, A. and Velickovic, P. (2022). Graph Neural Networks are Dynamic Programmers. arXiv, 2203.15544.

[7] Velickovic, P. and Blundell, C. (2021). Neural Algorithmic Reasoning. arXiv, 2105.02761.

October 24, 2022

OAWeek 2022: Managing Virtual and Hybrid Meetings


Welcome to International Open Access Week, 2022 edition! Last year, we discussed the vision of a distributed research organization. This year, we will explore this theme a bit further. One aspect of distributed organizations is the need to work both synchronously and asynchronously. This brings the real-world experience closer to the collaborator without the travel, carbon emissions, or expense of being at a centralized institute. As our collaborators live in many time zones and have different lifestyles, it is important to capture their full attention in different ways. 

One way this is done is through the live attendance and replay of group meetings. The Orthogonal Research and Education Lab (OREL) offers a number of regular topical meetings, in addition to a general meeting on Saturdays, that engage collaborators from all over the world using a number of different pedagogical and technological techniques.


An example of a virtual distributed meeting with collaborators dropping in from different parts of the globe.

An open meeting has a number of moving parts that need to be thoughtfully considered to ensure success. The first of these is choosing a meeting platform. OREL has found success with Jitsi, as it is lightweight and free to use (open source). While Jitsi can be used as a service, installing it on your own server opens up its many customizable features. Jitsi even works with Virtual Reality: interactions between the 2-D meeting world and immersive 3-D are available through the Wolvic browser and the Meta Quest casting option.





Sample scenes from screensharing within Meta Quest and the casting option.

Secondly, programming the meeting is a non-trivial detail that can make the most of your time. For our Saturday Morning NeuroSim meetings, we have settled on the following format: updates, light features, discussion, open collaboration, and finally, papers of the week. Agenda-setting should be flexible with respect to your attendees' availability. Not everyone can make an entire meeting, so allowing them to "drop in" to participate is encouraged.

Notetaking and live feeds are also good for augmenting our meetings. Our OREL Lab Manager (Jesse Parent) coordinates notetaking tools such as Obsidian and Notion with allied feeds (Slack and Discord) to organize the various fragments of ideas and themes that emerge during meeting time. Feed technology is also good for sharing papers, and the vision of a stream feed is key to realizing the multimedia aspect of real-time meeting immersion, even when attendees are asynchronous.



Different types of notetaking and stream feeding within a meeting (from the Cognition Futures Reading Group).

Livestreaming and recorded broadcasts are another tool for participatory engagement, and they can be approached in a number of different ways. Led by Daniel Ari Friedman and Bleu Knight, the Active Inference Institute has taken the route of invited livestreams and summary podcasts. These materials introduce collaborators to difficult academic concepts while making them more accessible. While YouTube has options for live streaming, it is not always the best choice. I use OBS Studio (free and open source) to compose a desktop recording and edit it before making it public.

Recorded meetings are also good for coding demos, particularly when they do not go as planned. One can either prepare a recording in advance to include in the meeting recording or strip the demo down to a minimal approach using a CoLab notebook. This reduces the friction of failed screenshares and execution errors, while also easing the burden of performing in front of a group.



Coding demos from a recent Saturday Morning NeuroSim meeting.

But completely virtual experiences are not the only option for bringing people together from around the world. OREL has been experimenting with hybrid meetings, which bring the ethos of virtual meetings to more traditional in-person gatherings and enable more inclusive participation from distant geographical points. Last Spring, we ran our own hybrid session at the New York Celebration of Women in Computing (NYCWiC), hosted live at Fort William Henry, NY. The session, "Frontiers in Data Privacy and Tech Ethics", featured a buffet of topics on AI and technology ethics. Some of the participants were live, while others were virtual (recorded or located in different parts of the globe). For this type of meeting, experimenting with ways to optimize live/virtual synchronization and media capture quality is essential. We plan to experiment with this more in the near future.



Virtual (top) and in-person (bottom) components of the session.

August 9, 2022

New Paper on Developmental Braitenberg Vehicles now live!

 

The special issue of Artificial Life on Embodied Intelligence is now live! Inside you will find our paper "Braitenberg Vehicles as Developmental Neurosimulation", which has lived on the arXiv since 2020. This paper lays out an approach to Developmental Neurosimulation, involving three alternate approaches to the agent-based development of embodied brains and embodied cognition. Here is the abstract:

Connecting brain and behavior is a longstanding issue in the areas of behavioral science, artificial intelligence, and neurobiology. As is standard among models of artificial and biological neural networks, an analogue of the fully mature brain is presented as a blank slate. However, this does not consider the realities of biological development and developmental learning. Our purpose is to model the development of an artificial organism that exhibits complex behaviors. We introduce three alternate approaches to demonstrate how developmental embodied agents can be implemented. The resulting developmental Braitenberg vehicles (dBVs) will generate behaviors ranging from stimulus responses to group behavior that resembles collective motion. We will situate this work in the domain of artificial brain networks along with broader themes such as embodied cognition, feedback, and emergence. Our perspective is exemplified by three software instantiations that demonstrate how a BV-genetic algorithm hybrid model, a multisensory Hebbian learning model, and multi-agent approaches can be used to approach BV development. We introduce use cases such as optimized spatial cognition (vehicle-genetic algorithm hybrid model), hinges connecting behavioral and neural models (multisensory Hebbian learning model), and cumulative classification (multi-agent approaches). In conclusion, we consider future applications of the developmental neurosimulation approach.

There are many themes to follow up on in this paper. Just a few examples include:

* brain/body scaling in an embodied agent.

* the role of multisensory integration in the development of cognition.

* ways to classify shapes and motifs in the emergence of multi-agent collectives. 

* spatial cognition and transfer learning in developmental embodied systems.

Congratulations to Stefan Dvoretskii, Ziyi Gong, Ankit Gupta, Jesse Parent, and Bradly Alicea for their hard work.

June 15, 2022

Google Summer of Code 2022 in the OpenWorm Community (DevoWorm)



Welcome to Google Summer of Code 2022! I am pleased to announce that this year, we have two funded projects: D-GNNs and Digital Microspheres! These projects will both take place in conjunction with the DevoWorm part of the OpenWorm community. DevoWorm is an interdisciplinary group engaged in both computational and biological data analysis. We have weekly meetings on Jitsi, and are a part of the OpenWorm Foundation.

This year, we were able to fund two students per project. They will be working on complementary solutions to each problem, and we will see how far they get by the end of the Summer. 

D-GNNs (Developmental Graph Neural Networks)

The description for this project is as follows:

Biological development features many different types of networks: neural connectomes, gene regulatory networks, interactome networks, and anatomical networks. Using cell tracking and high-resolution microscopy, we can reconstruct the origins of these networks in the early embryo. Building on our group's past work in deep learning and pre-trained models, we look to apply graph neural networks (GNNs) to developmental biological analysis.

The contributor will create graph embeddings that resemble actual biological networks found throughout development. Potential activities include growing graph embeddings using biological rules, differentiation of nodes in the network, and GNNs that generate different types of movement output based on movement seen in microscopy movies. The goal is to create a library of GNNs that can simulate developmental processes by analyzing time-series microscopy data.
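To give a flavor of what growing a graph by biological rules could look like (a toy sketch with hypothetical division and differentiation rules, not the actual D-GNN pipeline), consider a lineage tree grown by repeated binary division:

```python
import random
import networkx as nx

random.seed(0)

def grow_lineage(n_divisions=10, p_differentiate=0.3):
    """Toy rule-based graph growth: start from a single 'cell' node and
    repeatedly divide a progenitor into two daughters, mimicking a
    lineage tree from early embryogenesis."""
    G = nx.DiGraph()
    G.add_node(0, state="progenitor")
    next_id = 1
    for _ in range(n_divisions):
        progenitors = [v for v, d in G.nodes(data=True) if d["state"] == "progenitor"]
        if not progenitors:
            break                            # all cells have differentiated
        parent = random.choice(progenitors)
        for _ in range(2):                   # binary division rule
            state = "differentiated" if random.random() < p_differentiate else "progenitor"
            G.add_node(next_id, state=state)
            G.add_edge(parent, next_id)
            next_id += 1
        G.nodes[parent]["state"] = "divided"
    return G

G = grow_lineage()
print(G.number_of_nodes(), nx.is_directed_acyclic_graph(G))
```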

When completed, D-GNNs will become part of the DevoWorm AI library. Ultimately, we will be integrating the GNN work with the DevoLearn (open-source pre-trained deep learning) software. 


Jiahang Li

Jiahang Li is a first-year MPhil candidate in the Computing Department at Hong Kong Polytechnic University. His research interests cover graph representation learning and its applications. Jiahang's approach to the project is to provide a pipeline that converts microscopic video data of C. elegans and other organisms into graph structures, on which advanced network analysis techniques and graph neural networks will be employed to obtain high-level representations of embryogenesis and to solve applied problems.




Wataru Kawakami

Wataru is a student at Kyoto University with interests in Machine Learning (in particular Graph Neural Networks) and Neuroimaging.

Digital Microspheres

The description for this problem is as follows: 

This project will build upon specialized microscopy techniques to develop a shell composed of projected microscopy images, arranged to represent the full external surface of a sphere. This will allow us to create an atlas of the embryo’s outer surface, which in some species (e.g. Axolotl) enables us to have a novel perspective on neural development.

The contributor will build a computational tool that allows us to visualize 4D data derived from the surface of an Axolotl embryo. The spatial model and animation (4th dimension) of microscopy image data can be created in a 3-D modeling software of your choice.
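As a minimal starting point (a matplotlib sketch of my own; the contributors may well choose different tooling, and the random array below merely stands in for real stitched microscopy data), a 2-D surface map can be wrapped onto a sphere:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for a stitched microscopy map of the embryo surface
# (rows = latitude, columns = longitude); real data would replace this.
img = np.random.default_rng(3).random((60, 120))

theta = np.linspace(0, np.pi, img.shape[0])      # latitude
phi = np.linspace(0, 2 * np.pi, img.shape[1])    # longitude
phi, theta = np.meshgrid(phi, theta)

# Spherical-to-Cartesian coordinates for the unit sphere.
x = np.sin(theta) * np.cos(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(theta)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, z, rstride=1, cstride=1,
                facecolors=plt.cm.gray(img), shade=False)
ax.set_axis_off()
plt.show()
```

Animating the fourth (time) dimension would then amount to updating the facecolors array frame by frame.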

This project is based on previous research by DevoWorm contributors Richard Gordon and Susan Crawford-Young. The flipping and ball microscopy research involves the design and fabrication of specialized microscopes to image embryos in a 4-D context (3 dimensions of space plus time).

Spherical Embryo Maps: Gordon, R. (2009). Google Embryo for Building Quantitative Understanding of an Embryo As It Builds Itself. II. Progress Toward an Embryo Surface Microscope. Biological Theory, 4, 396–412.

Flipping Microscopy: Crawford-Young, S., Dittapongpitch, S., Gordon, R., and Harrington, K. (2018). Acquisition and reconstruction of 4D surfaces of axolotl embryos with the flipping stage robotic microscope. Biosystems, 173, 214-220.

Ball Microscopy: Crawford-Young, S.J. and Young Williment, J.L. (2021). A ball microscope for viewing the entire surface of amphibian embryos. Biosystems, 208, 104498.

Karan Lohaan

Karan is a student at Amrita Vishwa Vidyapeetham University, and is a member of the AMFoss program there. He is interested in Machine Learning and Image Processing. 

Harikrishna Pillai

I am Harikrishna, pursuing my B.Tech in Computer Science and Artificial Intelligence at Amrita Vishwa Vidyapeetham University. I completed my schooling in Mumbai. I started with Python as my first language and eventually developed an interest in AI. Due to my interest in Android apps, I have done Android development in Kotlin. I have also been interested in open source for some time now, and therefore wanted to start my open source journey with GSoC.

We also have two GSoC mentors for these projects: Bradly Alicea is a mentor for D-GNNs and Digital Microspheres, and Jesse Parent is a mentor for D-GNNs. Richard Gordon and Susan Crawford-Young are serving as collaborators for the Digital Microspheres project.

If you would like to check on their progress, please check out our weekly meetings available on our YouTube channel.

March 31, 2022

Updates on Open Source Community Tools

All hail the Guidance (and Attribution) Tree!

The guidance tree is a concept that grew out of my days on the OpenWorm Community Committee. I have implemented a guidance tree for the Rokwire Community, and it is available in a beta version for other communities to implement (HTML, Markdown) under a CC-BY license. Check out this 19-minute tour of a guidance tree based on the Rokwire Community.


A guidance tree allows new community members to easily find a starting point in your community while optimally leveraging their skills. This is something I am calling Community Wayfinding. Community managers who want to adopt the guidance tree have to analyze their own set of community resources (GitHub repositories, web resources, and documentation) to see where members might fit in. Adopting a guidance tree of your own also requires a definition of community roles, which will be unique to each community. In the Beta version, a user encounters a set of binary choices that bifurcate towards a specific contribution path, as in the sketch below. Future versions might map this possibility space to a VR (virtual) world where the options are presented as 3-D objects or as activity rooms.
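A minimal sketch of the binary-choice idea (the questions and contribution paths below are hypothetical stand-ins, not the actual Rokwire tree):

```python
# A hypothetical guidance (wayfinding) tree: each internal node is a
# yes/no question, each leaf is a suggested contribution path.
TREE = {
    "question": "Do you want to write code?",
    "yes": {
        "question": "Are you comfortable with our main language?",
        "yes": "Start with a 'good first issue' on the GitHub repo.",
        "no": "Begin with code review and documenting examples.",
    },
    "no": {
        "question": "Do you enjoy writing and organizing?",
        "yes": "Join the documentation and tutorial effort.",
        "no": "Help with community outreach and meeting facilitation.",
    },
}

def wayfind(node):
    """Walk the binary tree interactively until a leaf (a path) is reached."""
    while isinstance(node, dict):
        answer = input(node["question"] + " [y/n] ").strip().lower()
        node = node["yes" if answer.startswith("y") else "no"]
    return node

if __name__ == "__main__":
    print("Suggested starting point:", wayfind(TREE))
```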

Would you be interested in a system for tracing attribution or authorship on an open-source (or open science) project? When the formal version of the guidance tree is released, it will be accompanied by the Authorship Tree, an idea I worked on circa 2017 in the Orthogonal Research and Education Lab. The Authorship (or Attribution) Tree solves the problem of authorship order by showing the relative contributions of each individual in the form of a tree structure. This not only allows primary contributions to be visualized, but also allows deeper contributions (informal conversations, data sources, stakeholders) to be recognized. This is particularly good for publicly recognizing different types of contributors and those who have been active at various levels of contribution.


Recombining your Technical Stack
During the month of March 2021, we considered how we might strive for a Full Stack community. The basic idea is that, depending on the task one wants to carry out, there exists a range of possible technologies to achieve your goals. The challenge is to pick the best set of tools for all aspects of your open source project. These tools should be compatible with one another, easy to learn and use, and accessible to future contributors.

But tools are constantly changing. Sometimes you outgrow your current set of tools. Some platforms are fine for small numbers of users, but become unmanageable as your community scales up. This is the nature of the constant tradeoffs one makes in managing a dynamic community. In other cases, tools simply cease to exist, forcing you to migrate to another solution. And sometimes a tool becomes unaffordable as your user needs change. So the question becomes: how do you go about changing out your stack?


My personal preference is to stick to open source tools whenever possible. There are two reasons for this: it eliminates the cost constraint, and it allows open source solutions to emerge from the user community. Your open source community might also be able to develop customized tools for such platforms, thus helping you keep your technology stack consistent. The Jitsi platform is a good example of this: Jitsi instances can be started through a web browser or mobile app, and Jitsi servers can be customized by specific organizations.


When you do need to change out elements in your stack, the first step is to make sure that the new solution works well with your other tools. For example, if you change your video conferencing tool, check that your whiteboard (Jamboard) and file sharing (Ignite RealTime) tools also work well with this shift. Doing a series of contingency tests based on common use cases may help; adopting OBS for screen recordings and streaming is an example of this. Once you know your tool runs stably and provides the desired output, you will have fewer glitches down the road.

Secondly, make sure that your community leaders (the people most likely to run a community meeting) can use the new tool. You might offer a primer or training session to get people up to speed on the new tool, in addition to how it connects with other tools in the stack. This primer should then be made available to the community as video and written documentation, as your meeting and discussion leaders may change over time. Tools that visualize GitHub tasks and milestones (e.g., ZenHub) are one example of tools with multiple dependencies.

A third step is to do an audit of how the tool is actually being used, to ensure that there is a match between desired functionality and the functionality that is available to everyday users. Perhaps your community is really interested in sharing files during their video meetings. This may require the addition of a new tool, or an addition to an existing tool. Only a post-implementation audit (or a quarterly solicitation of use cases) will reveal an actionable path.

February 15, 2022

Gyrification of the Tree of Mammals

For this year's Darwin Day post, I will be reviewing the evolutionary origins and developmental emergence of gyrification in the Mammalian brain. Gyrification occurs when the neocortex, or six-layered cortex on the dorsal surface of Mammalian brains, exhibits wrinkles and folds rather than a smooth surface (lissencephaly). Gyrification is measured using the gyrification index (GI), which can range from 5.6 in Pilot whales (Globicephala) to 3.8 in Elephants (Loxodonta) and 2.6 in Humans (Homo) [1]. A more extensive phylogenetic analysis (Figure 1) shows the evolutionary trajectory of this trait in Hominids; a highly gyrified brain is associated with other traits that emerge as early as the divergence of Primates.
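For reference, GI is conventionally computed (following Zilles and colleagues) as the ratio of the complete, folded cortical contour to the smooth outer contour in coronal sections. A one-line sketch with hypothetical contour lengths:

```python
def gyrification_index(inner_contour_mm, outer_contour_mm):
    """GI = length of the complete (folded) cortical contour divided by
    the length of the smooth outer contour, averaged over sections."""
    return inner_contour_mm / outer_contour_mm

# Hypothetical single-section contours: a heavily folded cortex packs
# roughly 2.5x more surface than its exposed envelope, as in humans.
print(gyrification_index(inner_contour_mm=450.0, outer_contour_mm=180.0))  # 2.5
```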


Figure 1. A phylogeny of primate brain evolution (with Mammalian outgroups), with a focus on the origin of traits found in the human brain. COURTESY [2].

The evolutionary origins of gyrification may either be mono- or polyphyletic, as different genes have been identified as potential associated factors. Gyrification might also be a product of convergent evolution, as this trait may simply be a by-product of larger neocortical sheets. Striedter [3] points out that gyrification may simply be due to physical constraints related to fitting a vastly enlarged cortical sheet into a skull scaled to an organism's body size.

Figure 2. Allometric scaling across select Mammalian brains, showing an increase in gyrification for larger brains. COURTESY [4].

In Figure 2, we see that, in general, larger brains also have a larger GI value. The curvilinear relationship shown in the figure is known as allometric scaling. Allometry [5] is a convenient way to quantitatively assess relative growth across different species, and the resulting regression parameters are suggestive of underlying mechanisms that control and predict growth across evolution.
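Allometric relationships take the power-law form y = ax^b, which is linear in log-log space, so the scaling exponent b can be estimated by ordinary regression. A small sketch with made-up numbers (illustration only, not data from [4]):

```python
import numpy as np

# Allometry: y = a * x^b, i.e. log(y) = log(a) + b * log(x).
# The values below are hypothetical, chosen only to show the fit.
brain_volume = np.array([10., 50., 200., 800., 3000.])   # cm^3 (made up)
gi = np.array([1.1, 1.5, 2.0, 2.7, 3.6])                 # GI (made up)

b, log_a = np.polyfit(np.log(brain_volume), np.log(gi), deg=1)
print(f"scaling exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.2f}")
```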

In this case, the allometric relationship is brain size versus tangential expansion. Tangential expansion is the growth of gray matter relative to the constraints of white matter, or a gray-to-white matter proportion [4]. As the amount of gray matter increases, brain size also tends to increase, and so does the GI value. However, the proportion of gray to white matter saturates, while brain sizes continue to expand along with increasing GI values.




Figure 3. Simulating gyrification as a by-product of physical processes. 3-D printed models based on MRI data for brains from different stages of development. COURTESY [6].

Genetic analyses implicate specific genes in controlling brain volume, which then sets the stage for gyrification [7]. Developmental mutations in the human genetic loci collectively known as MCPH 1-18 [8] lead to a condition called microcephaly, in which the mature microcephalic brain remains small and lacks gyrification. In a study of 34 primate species [9], the largest source of variance between species is accounted for by random Brownian motion. Furthermore, the data within the order Primates show that fold wavelength is stable (~12mm) despite a 20-fold difference in volume [9].

As an alternative to hypotheses about evolutionary origins, gyrification can result from various physical processes in developmental morphogenesis (Figure 3). The gyrification process consists of gyral (ridge-like) and sulcal (groove-like) convolutions. In the earliest stages of development, no gyrification is expressed in the phenotype. However, as the neocortex grows faster relative to the rest of the brain, a mechanical instability results that leads to buckling [6]. Buckling thus creates gyrification, although the consistency of fold localization and timing in development suggests underlying cellular and molecular mechanisms. Demonstration of biophysical mechanisms does not preclude a phylogenetic explanation, however. As we will see later on, surface physics relies upon the presence of certain cell types and growth conditions.


Figure 4. An overview of the evolution of development (Evo-Devo) of gyrification. Gyrification and lissencephaly occur through mechanisms that affect changes in brain size and GI relative to the last common ancestor (in this figure, transitional form). COURTESY [10].

There are also several cellular and molecular factors that contribute to neocortical growth, and thus towards gyrification. In Figure 4, we see four archetypes that result from increases and decreases of brain size coupled with increases and decreases of GI. For example, increases in basal radial glia (bRG) precursor cells and transit-amplifying progenitor cells (TAPs) contribute to increases in both brain size and GI [10]. Decreases in brain size and GI are controlled by changes in cell cycle timing and associated heterochronic changes. Heterochrony has to do with the timing of the rate and termination of growth in development, and is but one factor suggesting that lissencephaly is actually the derived condition. Thus, smooth brains would be an evolutionary reversal from the ancestral gyrified state that occurred multiple times across the tree of Mammals.

Once again, an evolutionary conundrum. Happy evolutioning!

NOTES:

[1] Johnson, S. Number and Complexity of Cortical Gyrii. Center for Academic Research and Training in Anthropogeny. La Jolla, CA. Accessed: February 13, 2022. 

[2] Franchini, L.F. (2021). Genetic Mechanisms Underlying Cortical Evolution in Mammals. Frontiers in Cell and Developmental Biology, 9, 591017.

[3] Striedter, G. (2005). Principles of brain evolution. Sinauer, Sunderland, MA.

[4] Tallinen, T., Chung, J.Y. , Biggins, J.S., and Mahadevan, L. (2014). Gyrification from constrained cortical expansion. PNAS, 111(35), 12667-12672.

[5] Shingleton, A. (2010) Allometry: The Study of Biological Scaling. Nature Education Knowledge, 3(10), 2.

[6] Tallinen, T., Chung, J.Y., Rousseau, F., Girard, N., Lefevre, J., and Mahadevan, L. (2016). On the growth and form of cortical convolutions. Nature Physics, 12, 588–593.

[7] Zilles, K., Palomero-Gallagher, N., and Amunts, K. (2013). Development of cortical folding during evolution and ontogeny. Trends in Neurosciences, 36(5), 275-284. 

[8] Jayaraman, D., Bae, B-I., and Walsh, C.A. (2018). The Genetics of Primary Microcephaly. Annual Review of Genomics and Human Genetics, 19, 177-200.

[9] Heuer, K., Gulban, O.F., Bazin, P-L., Osoianu, A., Valabregue, R., Santin, M., Herbin, M., and Toro, R. (2019). Evolution of neocortical folding: A phylogenetic comparative analysis of MRI from 34 primate species. Cortex, 118, 275-291.

[10] Kelava, I., Lewitus, E., and Huttner, W.B. (2013). The secondary loss of gyrencephaly as an example of evolutionary phenotypical reversal. Frontiers in Neuroanatomy, 7, 16.

