December 16, 2018

TVoT (Television of Things)

Here is a feature on television-related items; unlike the Internet of Things (IoT), these things have little hype behind them.

A few weeks ago (November 22nd) was the 30th anniversary of the Max Headroom intrusion on two Chicago television stations. The first intrusion was on WGN (Channel 9). A second intrusion occurred later that evening on WTTW (Channel 11). The original pirate broadcaster was never identified, and no subsequent intrusions occurred.

Shortwave radio can carry a television signal! The high-pitched bleeps and buzzes in the background of a shortwave broadcast are actually data streams. Some of these sub-channels carry a series of images rendered at a low frame rate, which produces a television broadcast of sequentially-scanned still images. This protocol is called slow-scan television (or SSTV) [1], and has its roots in the ham radio community. Modern uses include sending images to Earth from the International Space Station [2].
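
The core trick behind SSTV is mapping image brightness to audio pitch, so a picture can ride on an ordinary voice channel. Here is a minimal sketch of that idea (the frequency range is roughly what common SSTV modes use, but the sample rate and per-pixel timing below are simplified assumptions, not a faithful implementation of any real mode):

```python
import math

SAMPLE_RATE = 8000       # audio samples per second (assumed)
PIXEL_DURATION = 0.0005  # seconds of tone per pixel (assumed)

def brightness_to_freq(brightness):
    """Map a 0-255 brightness value onto a 1500-2300 Hz tone."""
    return 1500.0 + (brightness / 255.0) * 800.0

def encode_scan_line(pixels):
    """Turn one row of grayscale pixels into a list of audio samples."""
    samples = []
    phase = 0.0
    for p in pixels:
        freq = brightness_to_freq(p)
        n = int(SAMPLE_RATE * PIXEL_DURATION)
        for _ in range(n):
            samples.append(math.sin(phase))
            phase += 2 * math.pi * freq / SAMPLE_RATE
    return samples

line = [0, 128, 255, 64]   # four pixels: black, gray, white, dark gray
audio = encode_scan_line(line)
print(len(audio))          # 4 pixels, a few samples of tone each
```

A receiver reverses the mapping, measuring the instantaneous frequency of the incoming audio and painting the corresponding brightness, one scan line at a time.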

If you don't know about mechanical television technology, now is your time to learn. Precursors to the much better-known electronic television, a number of early technologies reproduced images by scanning horizontal lines across a given image at a certain rate. The higher the scan rate, the faster individual images could be rendered, and the faster new images could replace the completed ones already rendered. A higher scan rate also meant that the images were more recognizable as a "moving" sequence.

By and large, mechanical generation was not commercially successful. Scan rates that would enable high-resolution images were never achieved. The Nipkow disc (invented in 1884) enabled animations based on a sequence of images on a rotating disc, visualized using a light source and a selenium element. The Baird televisor (developed during the 1920s) improved upon the Nipkow disc by transmitting the sequential images using a carrier signal. Like SSTV, these technologies have limited uses in the 21st century, which include Steampunk-themed maker projects, digital light processing, and laser lighting displays (the latter two applications have relied upon significant technical advancements).
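
The scan-rate limitation can be made concrete with a back-of-the-envelope calculation. The line counts and frame rates below are commonly cited historical figures; the pixels-per-line values are assumed round numbers for illustration:

```python
# A Nipkow disc scans one full frame per revolution, one line per hole,
# so the pixel throughput it must sustain is simply the product below.

def pixel_rate(lines, pixels_per_line, frames_per_sec):
    """Picture elements that must be sampled per second for a scan format."""
    return lines * pixels_per_line * frames_per_sec

mechanical = pixel_rate(30, 30, 12.5)    # Baird-style ~30-line system
electronic = pixel_rate(525, 440, 30)    # mid-century electronic broadcast
print(mechanical, electronic)
```

The roughly three-orders-of-magnitude gap is why spinning discs could never compete with electron beams: no mechanical assembly could be spun fast and precisely enough to close it.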

Lastly, there was an excellent recent episode of the Function podcast (hosted by Anil Dash) on the history of animated GIFs. While it is now a 30-year-old technology, animated GIFs are still an efficient way to present sequential movement and simple animation on the web. While animated GIFs have been discussed on this blog in the past, the podcast did mention that the first GIF was created in 1987. So here is an image of the first GIF (a flying plane), created by Steve Wilhite [3].

[1] A collection of readings can be found at the National Association for Amateur Radio website.

[2] Trapp C. (2017). Space Technology and Audio Tape to Store Art. Hackaday, December 14. SSTV signals commonly relay images to earth from the space station.

[3] Buck, S. (2012). History of GIFs. Mashable blog, October 19.

October 31, 2018

October: Geppetto month

Here is a recap of Geppetto Month at the OpenWorm Foundation. This content has been cross-posted from the OpenWorm Foundation blog (h/t Giovanni Idili).


OpenWorm is made up of many sub-projects, and "Project of the Month" is an effort to highlight a different OpenWorm sub-project every month. This month is Geppetto's turn!

What is Geppetto?
Geppetto is a visualisation and simulation web-based platform for building neuroscience applications. The first use case ever of Geppetto was OpenWorm itself (some lore: the virtual Worm being Pinocchio, a Geppetto was needed to “make it”), but since then many groups have adopted it as their platform of choice. It is basically a set of reusable components for simulation, visualisation and data aggregation that make it easier to develop your neuroscience application, be it a data portal or an entry point to external simulation engines.

Projects that currently make use of Geppetto as a platform:
OpenWorm uses Geppetto as an integration platform for the outputs of its various subprojects, from connectome browsing to replaying of integrated electrophysiology and fluid dynamics simulations.

Open Source Brain uses Geppetto to share, visualize and simulate neuronal models, both for individual neurons and networks.

Virtual Fly Brain is an ontology and 3D/2D morphology browser for Drosophila resources built using Geppetto.

NetPyNE-ui is a user friendly UI to create and run neuronal models using the NetPyNE library.

Open Development
Geppetto development is entirely open source, like everything else that happens under the OpenWorm umbrella. There are open sprint meetings every two weeks that anybody can join, and we keep a public development board showing development activities and progress. You can browse the issues and see if there is anything you might want to try your hand at!

Here are some links if you want to learn more about Geppetto:

          Open access paper (Philosophical Transactions of the Royal Society B, 2018)

          Geppetto docs

          Geppetto live demo

          Development board

          Geppetto source code (Github)

          Geppetto Blog

          Geppetto on Twitter

Get involved!
Getting involved is easy: simply fill out the OpenWorm volunteer application form and we will invite you to the OpenWorm Foundation Slack. From there, you can interact with the community and join the #geppetto channel if you are interested in learning more about Geppetto or getting involved as a contributor.

October 26, 2018

OAWeek 2018: Barriers to Practice

In our final OAWeek post, I will present the current barriers to "open" practice. While there are many potential barriers to living up to the principles of complete openness, there are four major factors that shape whether people or institutions decide to be open. These include (but are not limited to): technology, finances, formal conventions, and the learning curve.

Technological. The past few years have seen a boom in innovations and digital tools that enable open access, open science, and open source. Based on the above figure, we can see that all areas of the conventional scientific process have been touched by this revolution. Distribution, publishing, notetaking, bibliographies, and engaging the broader community have all been impacted by new tools and (more importantly) their adoption by a critical mass of scientists. The development of formal pipelines for organizing this proliferation of tools into actionable steps [1] has also been a technological advance. Despite this convergence, there is no single "killer app" that will solve the open problem. Nor should there be, as killer apps are often concentrated in the hands of single entities that are vulnerable to profiteering. Importantly, open-enabling technologies must be available to smaller research groups, particularly generators of smaller datasets [2], to get the most out of the scientific community's efforts.

101 Innovations in Scholarly Communication. License: CC-BY.

Financial. While many tools are relatively cheap to use, other aspects of open science can be quite costly to individual scientists or even laboratories. In Wednesday's post on the three "opens", the various models of open access were discussed. Depending on which route to open access and/or open science is chosen, there are costs associated with manuscript processing, data archiving, curation, and annotation. A successful "open" strategy should include a consideration of these costs to ensure sustainability over the long term. There are also issues with the cost and public funding of large-scale community resources (such as open access journals, preprint servers, and data repositories) that must be solved without making their use unaffordable or (by extension) unavailable. One open question is the incentive structure for sharing resources and making them accessible. This is particularly true for datasets, which require incentives related to research efficiency, social prestige, and intellectual growth [3]. Such incentives can also help to reinforce higher reproducibility standards and overall levels of scientific integrity [4].

An example of a set of formal conventions chosen from a large number of potential tools. COURTESY: Nate Angell, Joint Roadmap for Open Science Tools. License: CC-0.

Formal Conventions. Another barrier to "open" is cultural practice. In moving from concept to finished product, we do so by following a set of internalized practices. While science requires much formal training, many scientific practices are taught implicitly during the course of laboratory and scholarly research. Several recent studies characterize openness as a matter of evolving norms [5, 6], which define openness in terms of collegiality and do not punish non-open endeavors. One critical aspect of encouraging open practices is education. However, there does seem to be a generational shift in attitudes and educational opportunities surrounding open practices. This has occurred at the same time information and computational technologies have emerged that encourage sharing and transparency. Whether this will change standards and expectations in a decade is unclear -- although governments and funding agencies are now embracing open access and open science in ways they previously have not.

Learning curve as compared to the diffusion of innovations [7]. COURTESY: Wikimedia.

Learning Curve. With all of the potential tools and steps in making research open, there is a learning curve for both individual scientists and small organizations (e.g. laboratories). While the learning curve for some practices (e.g. preprint posting) is trivial, other "open" practices (e.g. transparent protocols and methods) require more commitment and formal training. The learning curve is one major factor in the difference between merely "making things open" and making things accessible. In the domain of open datasets, accessibility can be hampered by the fragmentation of resources across many obscure locations rather than a highly-discoverable set of repositories with fixed identifiers [8]. There are two additional barriers to accessibility and/or practice adoption: difficulty of learning and cultural learning. Difficulty in learning a specific tool or programming language does make a difference in how open practices are, and the harder or more time-consuming a certain task is, the less likely the associated practice will be adopted. Cultural learning involves being exposed to a specific practice and then adopting that practice. This generally has little relation to difficulty, and depends more on personal and institutional preference. It is important to keep both of these in mind, both for adopting an "open" strategy and for the expectations of members of the broader community.

[1] Toelch, U. and Ostwald, D. (2018). Digital open science: Teaching digital tools for reproducible and transparent research. PLoS Biology, 16(7), e2006022. doi:10.1371/journal.pbio.2006022.

[2] Ferguson, A.R., Nielson, J.L., Cragin, M.H., Bandrowski, A.E., and Martone, M.E. (2014). Big Data from Small Data: Data-sharing in the ‘long tail’ of neuroscience. Nature Neuroscience, 17(11), 1442-1448. doi:10.1038/nn.3838.

[3] Gardner, D. (2003). Towards Effective and Rewarding Data Sharing. Neuroinformatics, 1(3), 289-285. AND Piwowar, H.A., Becich, M.J., Bilofsky, H., Crowley, R.S. (2008). Towards a Data Sharing Culture: Recommendations for Leadership from Academic Health Centers. PLoS Medicine, 5(9), e183. doi:10.1371/journal.pmed.0050183.

[4] Gall, T., Ioannidis, J.P.A., Maniadis, Z. (2017). The credibility crisis in research: Can economics tools help? PLoS Biology, 15(4), e2001846. doi:10.1371/journal.pbio.2001846.

[5] Pham-Kanter, G., Zinner, D.E., and Campbell, E.G. (2014). Codifying Collegiality: recent developments in data sharing policy in the life sciences. PLoS One, 9(9), e108451. doi:10.1371/journal.pone.0108451.

[6] Fecher, B., Friesike, S., and Hebing, M. (2015). What Drives Academic Data Sharing? PLoS One, 10(2), e0118053. doi:10.1371/journal.pone.0118053.

[7] Rogers, E. (1962). Diffusion of Innovations. Free Press of Glencoe, New York.

[8] Culina, A., Woutersen-Windhouwer, S., Manghi, P., Baglioni, M., Crowther, T.W., Visser, M.E.  (2018). Navigating the unfolding open data landscape in ecology and evolution. Nature Ecology and Evolution, 2, 420–426. doi:10.1038/s41559-017-0458-2

October 24, 2018

OAWeek 2018: Open Access, Open Science, Open Source

For this OAWeek post, we will discuss the connections between open access, open science, and open source. As an organizing principle, I will introduce each concept with a working definition, and then discuss relationships with other "open" concepts.

Open Access: availability to the general public; research output can be distributed freely and without restriction.

A typology of different forms of Open Access publishing.

As a publishing phenomenon, open access can take a number of forms [1, 2]. Aside from a distinction between peer-reviewed and non peer-reviewed materials, Open Access publishing is color-coded as green (self-archiving) or golden (archival at the publisher's site for a fee) [3]. There is also a version of golden open access called diamond open access, the difference being that diamond open access does not require the author to pay a fee to the publisher [4]. Self-archival can be done through a personal server (website), a preprint site such as bioRxiv, or a site that allows for public hosting of documents (ResearchGate, Figshare). Golden open access usually requires an APC fee, the funds for which go to the publisher. While cheaper, self-archival requires adherence to a set of practices that ensure ease of access.

In a narrow sense then, open access is a publishing issue seemingly unconnected to open science and particularly open source. Yet in fact, open access is both critical to and an enabling factor in open science and open source. Materials must not only be made open (free or affordable), but also accessible. There are many other benefits to open access [5], the most important of which is enabling access to the many different components of a set of scientific results.

Open Science: make research and data (scholarly outputs) publicly accessible. This requires efforts to make scholarly outputs transparent and accessible, which should enable reproducibility.

Open Science is an extension of open access in that not only is the manuscript made public, but the research products are made public as well [6, 7]. An open pipeline (or system) might include any number of the following: version-controlled manuscript editing, preprints, preregistration of study design, open datasets, demonstrable analyses, open source code, social media engagement, post-publication review, and open manuscript review. While it is up to the scientist or scientific organization what components to utilize, each component has value to both the scientist [8] and the scientific audience.

One way to make the benefits of being open explicit without violating the rights of scientists to their original work is to adopt an open license. While there are a number of options for both open science and open source, one popular type of license is Creative Commons (CC) [9]. There are many types of CC license, but one commonly used in open science is CC-BY (or alternatively CC-BY-NC). The BY license allows others to distribute and/or recombine your work with acknowledgement of the original author (you). BY-NC licenses explicitly disallow commercial derivatives.

A successful open science strategy is more than simply the production of science and the least publishable unit. Open science also includes access to educational materials, such as screencasts, lecture notes, and even course development [10]. As a suitable example, Open Science MOOC provides all of their course modules at the level of a consumable lesson and a Github repository of sharable lesson plans.

Open Source: make source code publicly available and editable. Software architecture is licensed so that it can be modified in a collaborative fashion.

In many ways, open source (OS) can be considered a crucial component of open science, as the ability to collaboratively and transparently solve problems is a key part of the ethos. Yet open source has its own set of concerns surrounding project-building and the management of contributors. The development of open source software is not simply the production of free software, as there are significant version control and human resource issues that go into OS [11]. Open source projects (such as the Wikimedia Foundation or the Linux Foundation) tend to operate at a much larger scale than open science collaborations. In the case of hybrid open science/open source organizations (such as the OpenWorm Foundation), there are a number of management concerns that also draw from making research methods and data transparent.

Open Source serves not only as an avenue to transparency, but also as a tool for collaboration. An open source infrastructure that provides version control [12] and source code annotation in the public domain can serve to enable public discussion and encourage future development outside of a specific project or set of experiments. The ability to open up code used in analysis and simulation aids in the peer review process. For published methods, open source provides a means for people to improve upon and use the code base. Open source efforts such as the open hardware movement allow labs to share standardized plans for DIY lab equipment, lowering the costs of science.

[1] Jeffrey, K.G. (2006). Open Access: an introduction. ERCIM News.

[2] Suber, P. (2012). Open Access. MIT Press, Cambridge, MA

[3] Kienc, W. (2015). Green OA vs. Gold OA. Which one to choose? Open Science blog, June 3.

[4] Kelly, J.M. (2013). Green, Gold, and Diamond?: A Short Primer on Open Access. Jason M. Kelly blog, January 27.

[5] PLoS. Why Open Access?

[6] Guide to Open Science Publishing. F1000Research.

[7] McKiernan, E.C., Bourne, P.E., Brown, C.T., Buck, S., Kenall, A., Lin, J., McDougall, D., Nosek, B.A., Ram, K., Soderberg, C.K., Spies, J.R., Thaney, K., Updegrove, A., Woo, K.H., and Yarkoni, T. (2016). How open science helps researchers succeed. eLife, 5, e16800. doi:10.7554/eLife.16800.001.

[8] Ali-Khan, S.E., Jean, A., MacDonald, E., Gold, E.R. (2018). Defining Success in Open Science. MNI Open Research, 2, 2. doi:10.12688/mniopenres.12780.

[9] Creative Commons. About the licenses.

[10] Jhangiani, R. and Biswas-Diener, R. (2017). Open: the philosophy and practices that are revolutionizing education and science. Ubiquity Press. doi:10.5334/bbc.

[11] Fogel, K. (2017). Producing Open Source Software: how to run a successful free software project. Version 2.3088

[12] Blischak, J.D., Davenport, E.R, and Wilson, G. (2016). A Quick Introduction to Version Control with Git and GitHub. PLoS Computational Biology, 12(1), e1004668. doi:10.1371/journal.pcbi.1004668.

October 22, 2018

Welcome to Open Access Week 2018!

Welcome to Open Access Week! Orthogonal Research and Education Laboratory is contributing to the week's activities through three blog posts: in this post, we will briefly discuss Open Annotation, while Wednesday will feature "Open Access, Open Science, and Open Source" and Friday will feature "Barriers to Practice".

Synthetic Daisies blog celebrated Open Access Week in 2016 (Working with Secondary Datasets, How Am I Doing, Altmetrics?) and 2017 (Version-Controlled Papers, Open Project Management). All posts will be tagged with #OAweek for easy retrieval.

To kick off the discussion, we will now quickly discuss Open Annotation and the role it can play in enabling literature searches, peer-review, and collaboration. Two of the most well-known open annotation tools are and Fermat's Library. A few posts from the blog serve to establish the benefits and potential of open annotation and how it is currently being implemented on the web.

According to [1], open annotation can serve as a framework for new practices such as collective document review. This is a common function of collaborative document systems such as Overleaf and Authorea. However, the vision for seems to be building a so-called "ecosystem" for commenting that can be used for peer review, reader notes, or links to relevant additional readings [1, 2]. In such a system, comments can be transferred across versions of a document, from draft to preprint to published manuscript [1].

Under the hood, open annotation relies upon standards such as the W3C Open Annotation data model. Once implemented, this allows for a separation of the discussion (annotations) from the main page [2]. This provides opportunities for meta-browsing [3] and distributed discussion threads that can be centralized in a common repository. There are also many opportunities for novel uses of open annotation, ranging from collaborative note-taking to adding references and data to an existing paper.
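
To make the separation of annotation from page concrete, here is a minimal annotation in the spirit of the W3C model (its successor spec, the Web Annotation Data Model, uses the JSON-LD structure below; the URLs and comment text here are placeholders, not a real annotation):

```python
import json

# An annotation is a standalone object: a body (the comment) linked to a
# target (the annotated document), so it can live in a separate repository.
annotation = {
    "@context": "",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "This claim needs a citation.",
        "format": "text/plain",
    },
    "target": {
        "source": "",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "comments can be transferred across versions",
        },
    },
}

# Because the selector anchors to quoted text rather than page position,
# the same annotation can be re-attached to later drafts of the document.
print(json.dumps(annotation, indent=2))
```

Centralizing such records in a common store is what enables the distributed discussion threads and meta-browsing described above.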

[1] Staines, H. (2017). Making Peer Review Transparent with Open Annotation. blog.

[2] Gerben (2014). Supporting Open Annotation. blog, supporting-open-annotation/.

[3] Wiesman, F., van den Herik, H.J., and Hasman, A. (2004). Information retrieval by metabrowsing. Journal of the American Society for Information Science and Technology, 55(7), 565-578.

September 30, 2018

Finding Your Inner Modeler (Part II)

Last year, I attended a workshop at the University of Illinois at Chicago called "Finding Your Inner Modeler". Sponsored by the NSF, FYIM is meant to bring together biologists and modelers and to foster collaborations between the two. There were many interesting talks over the course of two days, including plant biology, biochemical kinetics, and (of course) various types of computational and statistical models [1].

This year was the second installment of FYIM, and this time I was chosen for a platform presentation. The platform presentation (Process as Connectivity: models of interaction in cellular systems) involves a 40-minute discussion between the principal investigator and an expert modeler. For my talk, this expert modeler was Dr. Eric Deeds from the University of Kansas.

The talk features work with several collaborators from the DevoWorm group. In the talk, I described the DevoWorm group as an example of data science biology [2]. As an affiliate of the OpenWorm Foundation [3], the DevoWorm group works with primary and secondary data, and produces secondary and tertiary open datasets that serve as material for publications, student projects, and the wider development/computational biology communities.

The core innovation introduced in this talk is the use of graph theory and complex networks to analyze the organizational structure of the embryonic phenotype. This work is now showcased in a new paper [4] and Github repository.
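
The basic move is to represent cells as nodes and their relations (lineage or spatial adjacency) as edges, then summarize the network's structure. Here is a minimal sketch using only the standard library; the cell names follow standard C. elegans lineage nomenclature, but this toy edge list is illustrative, not data from the paper:

```python
from collections import defaultdict

# Toy lineage/adjacency edges for the early C. elegans embryo.
edges = [
    ("AB", "P1"),                  # products of the first division
    ("AB", "ABa"), ("AB", "ABp"),  # AB daughters
    ("P1", "EMS"), ("P1", "P2"),   # P1 daughters
]

# Build an undirected adjacency structure.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# A simple degree summary: which cells are most connected?
degrees = {cell: len(nbrs) for cell, nbrs in adjacency.items()}
print(sorted(degrees.items(), key=lambda kv: -kv[1]))
```

From a scaffold like this, one can go on to compute centrality measures or track how the network's structure changes across developmental time.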

[1] One example is the Virtual Cell software project, which allows one to model and analyze representations of kinematics, kinetics, geometry, and network interactions at the cellular level.

[2] Alicea, B., Gordon, R., and Portegys, T.E. (2018). DevoWorm: data-theoretical synthesis of C. elegans development. bioRxiv, doi:10.1101/282004.
[3] Sarma, G.P., Lee, C-W., Portegys, T., Ghayoomie, V., Jacobs, T., Alicea, B., Cantarelli, M., Currie, M., Gerkin, R.C., Gingell, S., Gleeson, P., Gordon, R., Hasani, R.M., Idili, G., Khayrulin, S., Lung, D., Palyanov, A., Watts, M., Larson, S.D. (2018). OpenWorm: overview and recent advances in integrative biological simulation of Caenorhabditis elegans. Philosophical Transactions of the Royal Society B, 373, 20170382. doi:10.1098/rstb.2017.0382.

[4] Alicea, B. and Gordon R. (2018). Cell Differentiation Processes as Spatial Networks: identifying four-dimensional structure in embryogenesis. BioSystems, doi:10.1016/j.biosystems.2018.09.009.

September 10, 2018

OpenWorm: Royal Society B special issue now live!

Worm images courtesy of the OpenWorm browser (Drs. Christian Grove and Padraig Gleeson)

Regular readers of this blog might recall that the OpenWorm Foundation spearheaded a workshop (From Connectome to Behavior) at the Royal Society (London) in January 2018. This workshop generated a lot of social media content and internal (to OpenWorm Slack) discussion of the presented topics.

Since then, the participants have been hard at work putting together a special issue (now live in Royal Society B) that presents in more detail what was discussed at the meeting. Lots of great papers on interesting topics ranging from network theory to movement models, and from foraging behavior to quantitative phenotyping.

Many of these papers feature ongoing work related to the OpenWorm Foundation, including the Geppetto, c302, and Sibernetic projects. There is also an OpenWorm overview paper that describes the current state of the initiative. Enjoy!

September 4, 2018

September: Movement Validation Month @ OpenWorm

Welcome to Movement Validation month! This content has been cross-posted from the OpenWorm Foundation blog and Chee-Wai Lee's blog.

Movement Motivations
The behavior of C. elegans is partially captured by its movement characteristics. We develop and work with a number of tools for recording, characterizing, storing, analyzing, and sharing worm movement data.

Our goal is to eventually be able to objectively and quantitatively compare the movement behavior of simulated worms against that of real C. elegans.

The month of September 2018 will see some focus on this activity. We hope you will participate and do stuff that’s interesting to you!

Our Tools
The Database - We work with many others who record the movement activities of real live specimens of C. elegans. That extended community has generously shared their data via our movement database tool (Github repo here).

A Common Data Format - To facilitate the sharing and exchange of data, we have developed the WCON format and accompanying libraries for manipulating data. We are working to support as many languages as we can. Our code repository can be found here.
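
The WCON (Worm tracker Commons Object Notation) format is JSON with explicit units plus per-worm time series of coordinates. Here is a minimal record in that spirit; the numbers are toy values and the field set is abbreviated, so consult the spec in the repository for the full schema:

```python
import json

# A minimal WCON-style record: declared units, then one worm's track as
# parallel time and coordinate arrays (each time point has a short
# skeleton of x/y points).
wcon = {
    "units": {"t": "s", "x": "mm", "y": "mm"},
    "data": [
        {
            "id": "worm_1",
            "t": [0.0, 0.5, 1.0],
            "x": [[1.1, 1.2], [1.3, 1.4], [1.5, 1.6]],
            "y": [[0.4, 0.5], [0.6, 0.7], [0.8, 0.9]],
        }
    ],
}

with open("minimal.wcon", "w") as f:
    json.dump(wcon, f)

# Because the container is plain JSON, any language with a JSON parser
# can read the track back without a special-purpose library:
with open("minimal.wcon") as f:
    track = json.load(f)["data"][0]
print(track["id"], len(track["t"]))
```

This is the design rationale behind WCON: a lowest-common-denominator container so that labs using different trackers and languages can still exchange data.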

Tools for Movement Analysis - The code base for our collection of tools to analyze movement is found here.

Activities for the Month
Planning - We have a range of active tool development issues we would like to review and create tasks for. We expect to discuss this over the course of the first and second weeks. This should happen in our OpenWorm Slack channel #Movement-Analysis. We expect to also discuss new ideas/projects people might like to do, and lay out goals (e.g. integration of production code in OpenWorm's Docker image). We will kick this off with an Office Hours presentation in the OpenWorm Slack channel #office-hours on September 5th at 4pm UTC.

Doing Interesting Stuff - Over the rest of the month, we will break off to work on individual tasks, have discussions, and get questions answered. Activities can take many forms over a wide range of skill sets, technical or creative:

1. Create Blog/Art content.
2. Improve code/tool design.
3. Develop new features.
4. Explore new science/analysis techniques.

This will culminate in one or more hackathons over the last week of the month to round off the work. We also expect to create a number of open badges for tasks/educational activities that can be built around our results, for wider community engagement.

Participate and Share

We encourage both new and experienced volunteers to participate! For newcomers, the best way to do so is filling out our volunteer application form.

After we have had a chance to look at your application, an invite will be sent for you to join us in Slack. From there, we can work with you to find projects that you may be interested in working on, and people in the community you can get in touch with.

We look forward to hearing from you!

August 27, 2018

Final Google Summer of Code presentations are live!

The final presentations for Google Summer of Code students in the Representational Brains and Phenotypes (Sam and Jim) and DevoWorm (Arnab) groups are now available! The summer went quite well: skills were taught, lessons were imparted, and computing projects were advanced. Here are the links to each project submission and summary video.

Sam's final talk (YouTube) and project repository (Github).

Arnab's final talk (YouTube) and project repository (Github).

Cheng-Hsun (Jim)'s final talk (YouTube) and project repo (Github).

Sam and Jim worked to advance the Contextual Geometric Structures framework (link 1, link 2) using different approaches. Both projects involved a mix of evolutionary algorithms, computational linguistics, hybrid modeling, and representational AI. The Google n-gram database was used to provide a source of training data.

Arnab worked on an XML framework for organizing embryo data at the cellular level. This not only provides the DevoWorm group with a link to more specific XML and simulation frameworks, but also moves us towards network-based representational models of the embryo.

August 20, 2018

Google Summer of Code Experience: Advice for Future Participants

As part of the Google Summer of Code program final project evaluations, I was asked to provide advice to future students and mentors in subsequent years. Here is the advice I gave:

Advice to Students:
I have three pieces of advice: do your research, reach outside of your comfort zone, and keep in communication with your mentor. Doing your research means that you continually reevaluate the big picture of your project. Start with a schematic of the workflow or project vision as outlined in your proposal. If you do not fully understand a set of issues (new algorithm, unfamiliar topical area), look them up or ask around.

Reaching outside of your comfort zone means you need to consider contingency plans for each step of your project in case something does not go as planned. If that requires a deep dive into a new method, then be willing to accommodate this into your project schedule. Do not spend too much time on learning new things, however, as the Summer moves pretty fast.

Finally, regular communication with your mentor on multiple channels helps with scheduling, project reevaluation, and keeping within expectations. I have used a combination of Slack, e-mail, and Google Meet with my mentees. Develop your own rhythm -- weekly Meet and e-mail updates can complement daily Slack communication. Don't be afraid to ask questions or take the initiative, but do be sure to coordinate this with your mentor's needs and expectations.

Advice to Mentors:
This year (2018), I mentored three students in two organizations. How did I do it? Weekly schedules, content management, and flexibility. Let's walk through this past summer to provide examples. The first thing I suggest is to mentor your students through the application process, so that their proposals align with your expectations and programmatic time constraints. During the community period, be sure to plan structured activities such as presentations to the broader community or access to background information on the organization and project.

As the coding period ramps up, be sure to set your weekly plan for communication. In my case, I scheduled a weekly meeting time (0.5 to 1 hour) for each student in Google Meet, set each student up in the appropriate Slack team, and prepared a weekly email newsletter (1-2 paragraphs) covering upcoming milestones. Also, be sure to utilize the link between Github issues and, as the latter serves to make outstanding tasks visually salient.

Our weekly meetings focused on four things:
1) What did you do this week, and what will you do next week?
2) Any outstanding issues/barriers to discuss in detail (or live demos).
3) Upcoming milestones, planning for several weeks out.
4) Reviewing Waffle board issues and creating new issues.
Finally, be flexible with respect to work style and meeting times. I had students from three different countries on two continents, and we often had to reschedule times. It is also important to let them adopt their own working rhythm (provided it is organized and within the bounds of the organization's needs). If they want to interact more or less on Slack, that is up to them. Just encourage them towards your community standards, and work from there. These things may seem like a lot to ask of a mentor, but I have found that it is worth it.

August 17, 2018

Google Summer of Code Experience: Afonso Pinto

The next Google Summer of Code project I want to highlight involves someone from one of my organizations (OpenWorm Foundation), but whom I did not mentor. Afonso Pinto (mentored by Giovanni Idili and Matteo Cantarelli) spent this summer working on the NWB (Neurodata without Borders): Explorer. Here is Afonso's final project presentation (YouTube) and work product (Github), in which he highlights GSoC-related contributions (a series of work sprints) to the project.

August 13, 2018

Google Summer of Code Experience: Arnab Banerjee

The "Road to GSoC" is paved with learning and fun!

As organizations are finishing up Google Summer of Code for 2018, I would like to spend a few posts highlighting this Summer's projects at the OpenWorm Foundation and Orthogonal Research and Education Laboratory. In this post, I am reposting Arnab Banerjee's blog post on his experience with open-source development. He worked on a project called "Physics-based XML Model building for mosaic embryogenesis of Caenorhabditis elegans" (project repository, proposal presentation).

          I have recently completed a Google Summer of Code (GSoC) 2018 student project as part of the DevoWorm group and OpenWorm Foundation. As a newcomer to the world of open source, it might seem a little presumptuous of me to write an article about the stages of becoming an experienced open source developer. But this article resulted from personal experiences in the open-source community, and so it should be useful to open-source project newcomers and future Summer of Code students alike.

           I would like to begin by sharing a metaphor. Imagine a group of mountain climbers trying to get to the summit of a mountain.

           There are some who are doing it for the first time - the BEGINNERS. They find it difficult to get used to this activity and it is all new to them - from maintaining rations of water and oxygen cylinders to getting used to the physical exhaustion. Then there is the psychological factor - the willpower and ability to keep going forward despite setbacks. The psychological and physical factors combine to make this period fraught with unexpected challenges.

          Then there’s another group who have already done this a few times before. They are definitely more confident in their abilities, but not so good that they can lead an expedition to the top. They know how to maintain rations like water and oxygen at the correct levels because they have experienced this a few times before. Their willpower is also a little stronger than that of the beginners. These are the INTERMEDIATE climbers.

           Finally, the group leading the expedition - the EXPERIENCED - have done mountain climbing dozens of times. They lead the entire movement - from allocating resources to other climbers to deciding the course of action in case of disaster. The remaining group looks up to them for directions when they are stuck, and draws inspiration from them, because they are the leaders of the movement. The EXPERIENCED have already been successful at mountain climbing and learned from their failures - but they still keep doing it, training the others to reach higher and better positions.

          I’m sure by now you might have guessed that these grades of expertise parallel the world of open source development. As with mountain-climbing, I believe there are 3 distinct classes of developers in open source:

1. Beginner - A beginner struggles with basic concepts like version control systems. Diving deep into the code base of an established open source organization also seems difficult at first. As with mountain-climbing, the learning curve is steep. But once the initial effort is made, steady progress follows.

2. Intermediate - Intermediates are more comfortable with basic issues like using Git, or version control systems more generally. Intermediates can become familiar with the code base in a shorter amount of time, either due to organizational experience or experience with interpreting code more generally. In working at the intermediate level, time management becomes an important skill to build. Along with contributions to the code base in a limited amount of time, other work like collaborating with teams and community bonding become important skills to master.

3. Experienced - The experienced are leaders of their organizations, and decide the overall strategy for these organizations. Experienced open-source contributors assign newcomers to issues that fit beginners and intermediates well. Other roles include conflict resolution and making sure that the organization functions with efficiency. Technical expertise (strong coding abilities) and a firm grip on domain-specific knowledge make them natural leaders.

          I believe after my GSoC journey, I have almost reached the end of the first 'BEGINNERS' phase. It was both an educational and rewarding experience. The initial weeks were tough to get used to because coding was one of many things I was simultaneously juggling - blog progress, mentor calls, community bonding, documentation, and domain-specific knowledge (learning about embryogenesis and neuroscience). Here is a visual aid in cartoon form (slideshow).

           Prior to my experience with GSoC, I couldn’t have pictured completing such a project with a neuroscience domain organization, but it was made possible by the massive support that I received from the larger OpenWorm Foundation community, Dr. Stephen Larson, and especially my mentor Dr. Bradly Alicea. Bradly introduced me to the DevoWorm Group, and was instrumental in whatever I have been able to achieve this summer. I learned from them that dozens of contributors sitting in 10 different time zones can collaborate on a project that has the potential to impact an entire community of researchers and developers.

CONCLUSION - Aiming for a moonshot project

          I hope to keep dedicating myself at least part-time to the cause of OpenWorm. The OpenWorm mantra is “building the world’s first worm-matrix, one line of code at a time”. The project that I worked on has given me a good foothold and introduction to the world of open source - especially the OpenWorm Foundation and DevoWorm Group. Now, there’s no turning back. I have decided that I will contribute to the OpenWorm Foundation by integrating my GSoC project with the Geppetto modeling environment.

          Technically, it's a moonshot project and will not be easy to achieve. But HEY! I just participated in GOOGLE SUMMER OF CODE. As they say at Google: "aim for moonshots; it's easier to make something 10X better than 10% better". That’s what I’ll aim for - a moonshot. Let’s see where it takes us - me and the OpenWorm Foundation. At worst, I end up contributing some useful code and learning something interesting. At best - well, we integrate DevoWorm with Geppetto!

Until we meet again!
Arnab Banerjee
Student Developer
Google Summer of Code 2018 - INCF (OpenWorm Foundation and DevoWorm Group)

August 1, 2018

OpenWorm Blog: August is ChannelWorm Month!

Content is being cross-posted from OpenWorm Foundation blog

August is ChannelWorm Month at OpenWorm!

As we make steady progress on our goal of building the world’s first detailed simulation of C. elegans, we’ve been brainstorming ways to systematically draw attention to each of the components of OpenWorm's platform.

Our scientific and outreach committees have decided that each month will be dedicated to a specific repository that will be designated “Project of the Month.”

This coming month (August 2018) will be dedicated to ChannelWorm, a repository aimed at constructing quantitative models of ion channel behavior.

Visit our Github repo and take a look for yourself:
See also the Heroku App for extensive curated data on ion channels:

WHAT ARE ION CHANNELS? Ion channels are pore-forming proteins which facilitate the movement of charged particles (ions) between the extracellular space and the cytoplasm.  All cells contain ion channels, which play a central role in normal cellular function.  However, their role in the nervous system is much more significant.  In particular, ion channels in the nervous system mediate the generation of action potentials, the fundamental mechanism by which a network of neurons processes information.

HOW THIS CONTRIBUTES TO THE OPENWORM VISION: Ion channels are the most granular level of detail in OpenWorm.  In other words, we abstract away the many complexities of intra-cellular function and treat the organism as an information processing system which uses electrical signals to mediate body movement.  Incorporating additional levels of biological detail is certainly a possibility but is not currently on our scientific roadmap.

BRIEF OVERVIEW OF CHANNELWORM: The aim of ChannelWorm is to build quantitative models of ion channel function.  To that end, we use the following workflow:
  • Create a database of information about C. elegans ion channels specifying channel type and location in the nervous system (i.e. name of neuron) where the channel is expressed. 
  • Identify relevant scientific papers which contain experimental recordings of ion channel behavior.
  • Extract plots from the corresponding papers and add them to the database of ion channels. (This data is currently available at
  • Digitize plots to extract data points.
  • Use parameter fitting techniques to determine numerical values of parameters for quantitative models.
  • Export ion channel models in the NeuroML (NML) format to be stored in PyOpenWorm.
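The digitize-and-fit steps above can be sketched in a few lines of Python. This is only a minimal illustration under stated assumptions: the synthetic "digitized" points and the Boltzmann steady-state activation model are stand-ins, not ChannelWorm's actual code or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    # Steady-state activation: fraction of channels open at membrane voltage v (mV).
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

# Stand-in for data points digitized from a published activation plot.
volts = np.linspace(-80.0, 40.0, 25)
true_v_half, true_k = -20.0, 8.0
points = (boltzmann(volts, true_v_half, true_k)
          + np.random.default_rng(1).normal(0.0, 0.02, volts.size))

# Least-squares parameter fitting recovers the model parameters from the
# digitized points; the fitted values would then be exported downstream
# (e.g., into a NeuroML channel description).
(v_half_fit, k_fit), _ = curve_fit(boltzmann, volts, points, p0=[-30.0, 10.0])
```

The fitted `v_half_fit` and `k_fit` should land close to the values underlying the digitized plot, which is exactly the quantity a stored channel model needs.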
SHARE IN THE COMMUNITY! We very much encourage both new and experienced volunteers to turn their attention to ChannelWorm and find areas that might be of interest to them.  The best way to get involved is to first fill out our volunteer application form.

In the meantime, we recommend taking a look at the Github repository for ChannelWorm for open issues the project is currently tackling.  Some of the issues are specifically marked as being appropriate for beginners.  We will add to this list over the next few weeks.

WHAT HAPPENS AFTER THE APPLICATION: After we look at your application, you’ll receive an invitation to join our workspace on Slack.  From there, you can contact other volunteers who can help find a project to match your skills and interests.

WHAT IF I’M NOT A PROGRAMMER? Even if you are not a computer programmer, you can still contribute! Here are some of the ways:
  • Author a blog post: we’d love to have informative scientific blog posts on anything relevant to ion channel function and modeling. Submit a draft and we will host it on the OpenWorm blog.
  • Literature search and data re-evaluation: we’d also like to support basic science in this area. If you have experience with conducting research, please get in touch!
  • Create educational content: we can support educational initiatives such as the creation of digital badges and video tutorials. Pitch an idea and become part of the conversation!
  • Something original: if you have an idea for something that no one has thought of, we’d love to hear your proposal!  Artwork, animations, popular scientific articles, etc. are all welcome.  
At the end of August, we’ll be organizing a distributed hackathon for the community to get together and work on tackling open items on our to-do list. We’ll announce more details towards the second week of August.

The OW Science Committee meeting is held at 3:00pm UTC; the link is provided in the Slack #science channel. Hackathon Time/Date: TBA. Digital badges are located at

We hope you can join us!
Please fill out our volunteer application form and contact us on Slack for more information!

ChannelWorm is currently building upon the work featured in this publication:

Gurkiewicz, M. and Korngreen, A. (2007). A Numerical Approach to Ion Channel Modelling Using Whole-Cell Voltage-Clamp Recordings and a Genetic Algorithm. PLoS Computational Biology, 3(8), e169.
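As a rough illustration of the genetic-algorithm fitting approach in that paper, here is a toy sketch. Everything in it is an illustrative assumption (the Boltzmann target curve, the parameter bounds, the mutation settings); the paper fits full voltage-clamp recordings, which this does not attempt.

```python
import numpy as np

def boltzmann(v, v_half, k):
    # Steady-state activation: fraction of channels open at membrane voltage v (mV).
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

def ga_fit(volts, targets, pop_size=60, generations=200, seed=0):
    """Evolve (v_half, k) pairs toward a target activation curve."""
    rng = np.random.default_rng(seed)
    # Initial population drawn from broad (illustrative) physiological bounds.
    pop = rng.uniform([-60.0, 1.0], [20.0, 20.0], size=(pop_size, 2))
    n_elite = pop_size // 4
    for _ in range(generations):
        # Fitness = mean squared error against the target data points.
        mse = np.array([np.mean((boltzmann(volts, *p) - targets) ** 2) for p in pop])
        elite = pop[np.argsort(mse)[:n_elite]]
        # Offspring: copies of the elite; mutate all but the unaltered elites.
        pop = np.tile(elite, (pop_size // n_elite, 1))
        pop[n_elite:] += rng.normal(0.0, 0.5, size=(pop_size - n_elite, 2))
    mse = np.array([np.mean((boltzmann(volts, *p) - targets) ** 2) for p in pop])
    return pop[np.argmin(mse)]
```

The core idea carries over even in this simplified form: candidate parameter sets are scored against experimental curves and iteratively refined by selection and mutation.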

May 21, 2018

Rise of the Alt-Research Program

It is time for a new paradigm! In the past 5 years or so, a new type of research institute has arisen [1]: one that is flexible and open, without the constraints typical of University or corporate labs. In a time of institutional change and funding uncertainty, such institutes provide a means for many non-conventional types of research to flourish. We can think of such facilities as an "Alt-research Program" (after the "alt-academic" movement) [2], although stressing the open science and collaborative aspects is also important. So let's discuss some recent developments for one such organization, Orthogonal Research and Education Laboratory.

We have three recent developments: a new paper collaboration, a preprint mention, and a set of Google Summer of Code presentations. First up is a paper that was recently published with three co-authors. Orthogonal Research is a nexus for open science-enabled collaborations with University-based academics [3]. The paper “Network Dynamics of Attention During a Naturalistic Behavioral Paradigm” is now live at Frontiers in Human Neuroscience [4]. Learn about what happens in the attentional networks of the human brain during naturalistic behavior – in this case, high-resolution video game play with neural activity captured via “free-viewing” neuroimaging.

Here is how you build institutional credibility (or so I've heard). Notice the second affiliation.

From Figure 3 in the paper (drawings courtesy of Dr. Richard Huskey).

Screenshot of the first-person video game stimulus "Tactical Ops: assault on terror". Screenshot courtesy of Top Full Games and Software.

Orthogonal Lab was also recently mentioned in a preprint on scientific ecosystems [5] from members of the Ronin Institute. In the paper, Orthogonal Lab was described as a lab focusing on more specific research questions than a larger institute focused on enabling basic science initiatives (such as Neurolinx). The new scientific ecosystem paradigm proposed in the paper focuses on how to enable collaboration and open science outside of the formal University structure.

Thirdly, we have community period [6] presentations by three students in this year’s Google Summer of Code program (sponsored by INCF). Each student gives a presentation now and a final presentation at the completion of the Summer. The idea is to keep students thinking about the project's progress, to develop their public speaking/presentation skills, and to build up the foundations for a paper or future research. Cheng-Hsun (Jim) Hsueh and Sam Felder are working on the Contextual Geometric Structures project (Representational Brains and Phenotypes Group), while Arnab Banerjee is working with the OpenWorm Foundation (DevoWorm Group).

One way to enable Alt-research Programs is to embrace low carbon and location-free modes of doing and disseminating research. One such proposal (by Dr. Angel Goni-Moreno) has been made to provide low-carbon and researcher-accessible conferencing options to the annual ALife conference. In particular, distributed sessions would enable participation and collaboration across continents and research groups that would otherwise not interact.

Is the new research ecosystem paradigm a path to field-specific and interdisciplinary community-building?

[1] See a previous Synthetic Daisies post on hosting theory hackathons through such an organization.

[2] While I find the prefix "alt-" to be a shallow marketing term (sometimes nefariously so), it does fit into existing descriptions of academic activity outside of or in parallel with Universities.

[3] The two main collaborators were the Media Neuroscience Laboratory at UCSB and the Cognitive Communication Science Laboratory at OSU.

[4] Here are the essential materials: Paper, Supplemental Materials, Open Dataset, Video Game Stimulus.

[5] Lancaster, A.K., Thessen, A.E., and Virapongse, A. (2018). A new paradigm for science: nurturing the ecosystem. PeerJ Preprints. doi: 10.7287/peerj.preprints.26885v2.

[6] Typically, the community period is an opportunity for students to get acquainted with the community resources (open datasets, open codebase, community members) of their chosen open source/science organization. For more information on the Google Summer of Code community period, here are a few blog posts (1, 2, 3, 4, 5).