June 8, 2020

Virtuality: a new view on virtual experience

The first images that appear when you Google "virtuality": products from the Virtuality Group. LEFT: Virtual Reality Society; RIGHT: Wikimedia (Dr. Jonathan D. Waldern/Virtuality Group).

At Neuromatch 2.0, I gave a talk on the concept of virtuality titled "Computational Virtuality as a Form of Artificial Intelligence" (slides available on Figshare). Virtuality is a means to summarize all brain and body systems involved in a virtual experience. This talk features two big ideas, the second of which the Saturday Morning NeuroSim group will be building on in the coming months.  

1) "Being Virtual" as Human Performance (click to enlarge).



2) Differentiating Allostasis Machine (click to enlarge). Allostatic load is induced collectively by stimuli [1], leading to dysregulation of the system and ultimately to a new homeostatic setpoint.


Overall, virtuality can be characterized as a constructive sensory experience (h/t to Anson Lim). The first big idea stems from the observation that current theories are too limiting to properly characterize virtual experiences at the systems level. Indeed, the very notion of objective reality is based in part on the synchronization of perception and action. When this synchronization is disrupted or perturbed in any way, we observe virtuality. This does not even require a virtual environment -- virtuality can be experienced in our interactions with the physical world [for an example from fly-human interaction, see 2]. But virtual environments are the most common way to elicit the allostatic load required to observe virtuality.


There are two aspects of embodied performance essential to understanding the virtuality effect. The first is the idea of cognitive gaps, or disruptions to spatiotemporal representations caused by the perturbation of reality. The second involves inducing allostatic load in the internal model that represents perception-action coupling. The latter has wide-ranging effects in the homeostat (the nervous system), and can produce a variety of effects across varying scales.
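To make the second idea more concrete, here is a minimal sketch in Python of how a differentiating allostasis machine might be simulated: a transient stimulus accumulates allostatic load, and the accumulated load slowly re-tunes the homeostatic setpoint the system regulates toward. The parameter names and values are illustrative assumptions, not the model from the talk.

```python
# Minimal sketch of a "differentiating allostasis machine" (illustrative only; the
# parameter names and values are assumptions, not the model from the talk).
# A transient stimulus builds allostatic load; accumulated load slowly re-tunes the
# homeostatic setpoint that the system regulates toward.

def simulate(steps=500, gain=0.1, load_rate=0.01, load_decay=0.005, drift=0.001):
    state, setpoint, load = 0.0, 0.0, 0.0
    history = []
    for t in range(steps):
        stimulus = 1.0 if 100 <= t < 300 else 0.0         # transient perturbation
        load += load_rate * stimulus - load_decay * load  # allostatic load accumulates
        setpoint += drift * load                          # load re-tunes the setpoint
        state += gain * (setpoint - state) + 0.2 * stimulus
        history.append((state, setpoint, load))
    return history

trace = simulate()
print("final (state, setpoint, load):", trace[-1])
```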



But where does the artificial intelligence part come in? In the second part of the talk, I conclude that effect sizes for human experiments are too small, and formal experimental design too limiting, to demonstrate virtuality. This is particularly true for observing virtuality over long periods of time. Therefore, we can use computational agents! To propose a general approach, we identify two attentional and three sensorimotor features essential for any agent to exhibit virtuality.



Naturally, we want our agents to be embodied in some manner. We propose working with two types of agent: Morphogenetic Agents, a novel agent type that can exhibit both pattern recognition and morphogenesis (perception-action), and the well-known Braitenberg Vehicle, in this case experiencing incongruous environmental physics that lead to gaps in the perception-action loop.
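As a rough illustration of the second agent type (not the actual implementation), here is a Braitenberg-style vehicle sketch in Python in which the effective sensor-to-motor mapping is inverted halfway through the run, standing in for the kind of incongruous physics described above. The wiring scheme, light source, and parameters are assumptions made for the example.

```python
import math

# Minimal Braitenberg-style vehicle sketch (illustrative; not the agents from the talk).
# Two light sensors drive two wheels. Halfway through the run, the effective
# sensor-to-motor mapping is inverted, standing in for incongruous environmental
# physics that open a gap in the perception-action loop.

def sensor_reading(x, y, heading, offset, light=(5.0, 5.0)):
    """Light intensity at a sensor mounted at an angular offset from the heading."""
    sx = x + 0.5 * math.cos(heading + offset)
    sy = y + 0.5 * math.sin(heading + offset)
    d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
    return 1.0 / (1.0 + d2)

x, y, heading = 0.0, 0.0, 0.0
for t in range(200):
    left = sensor_reading(x, y, heading, +0.3)
    right = sensor_reading(x, y, heading, -0.3)
    v_left, v_right = right, left          # crossed wiring: vehicle turns toward the light
    if t >= 100:                           # "physics change": the same wiring now behaves
        v_left, v_right = left, right      # as if uncrossed, steering away from the light
    speed = 0.05 * (v_left + v_right)
    heading += 0.5 * (v_right - v_left)    # differential-drive turning
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)

print(f"final position: ({x:.2f}, {y:.2f})")
```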





NOTES:
[1] this requires an ecological view of perception. For a short introduction, please see: Lobo, L., Heras-Escribano, M., and Travieso, D. (2018). The History and Philosophy of Ecological Psychology. Frontiers in Psychology, doi:10.3389/fpsyg.2018.02228.

[2] Alicea, B. (2013). Perceptual Time and the Evolution of Informational Investment. Synthetic Daisies blog, September 24.

May 4, 2020

Welcome, Summer of Coders (2020)!


The Google Summer of Code selections for 2020 have been made! Congratulations to Ujjwal Singh and Mayukh Deb, who will be joining the OpenWorm Foundation and the DevoWorm group for a summer of community engagement and code. Ujjwal's project is called OpenDevoCell Integration, and here is the description:

This project will focus on improving the data science and machine learning infrastructure of the DevoWorm group. The work will focus on an extension of the Summer of Code projects completed in 2017 and 2019. The first two aims are to improve upon the OpenDevoCell web interface and to improve segmentation techniques overall. While the OpenDevoCell interface has been implemented as a Heroku app, we would like to develop a dashboard for interpretation as well as tighter integration with DevoZoo's collection of open-source microscopy data. The third aim is to deploy the code package as a unified Python library, which would be done in concert with the improvement of segmentation techniques.

The priority for this Summer is to improve the web interface both in terms of interactivity and functionality. Ideally, we would like to provide users with multiple options for analysis. This includes the ability to incorporate new forms of analysis as well as algorithms for new types of data. Currently, our web app is optimized for microscopy images acquired using the SPIM technique. However, we would also like to segment microscopy images acquired using a wide range of technologies. Feeding into this is the ability to segment and obtain features for the data in our DevoZoo. The ability to extract quantitative data from these movie images is key to conducting comparative and time-series analysis. The development of a dashboard would ideally enable users to employ various machine learning and simulation techniques in one place.

These improvements are meant to increase participation in our open science initiative and make sophisticated analytical techniques more accessible to students and potential collaborators alike.
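For readers unfamiliar with the segmentation step that the web interface wraps, here is a minimal sketch of the general idea using scikit-image: threshold a grayscale microscopy frame, remove small artifacts, and label connected regions so per-cell features can be extracted. This is illustrative only; the function name and parameters are hypothetical and not taken from the OpenDevoCell codebase.

```python
# Illustrative sketch of a basic microscopy segmentation step (not the actual
# OpenDevoCell pipeline): threshold a grayscale image, clean it up, and label
# connected regions so that per-cell features can be extracted.
import numpy as np
from skimage import filters, measure, morphology

def segment_frame(image: np.ndarray, min_size: int = 50):
    """Return a labeled mask and region properties for one microscopy frame."""
    threshold = filters.threshold_otsu(image)           # global Otsu threshold
    mask = morphology.remove_small_objects(image > threshold, min_size=min_size)
    labels = measure.label(mask)                        # connected components
    props = measure.regionprops_table(labels, properties=("label", "area", "centroid"))
    return labels, props
```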


Mayukh's project is called Pre-trained Models for Developmental Neuroscience, and is based on previous work done in the group during the DevoWormML course [1]. This project is described thusly:
This project will center around building a pre-trained model for shapes and processes related to Developmental Biology and Neurobiology, extracted from image data. Our organization's Machine Learning interest group (DevoWormML) has published a blog post [1] on the advantages of and need for pre-trained models in this area. In short, biological development is characterized by distinctive shapes, movements, changes in shape, and temporal processes that define important features. Pre-trained models are used in NLP and Deep Learning for the domains of sequence discovery in language processing (GPT-2) and bounding box methods for segmenting complex images (DeepLabv3). Models specialized for biology, however, do not exist. A suitable pre-trained model would greatly reduce the need for input data without sacrificing the ability to generalize to different contexts.

Our main interest is in extracting spatiotemporal features from image data. We will focus on microscopy data such as that found in the DevoZoo or from more specialized sources [2]. For a typical pre-trained model, the network is pre-trained with non-random weights that approximate generalized versions of the features we would like to discover. However, we are also interested in a semantic component, particularly the ability to incorporate elements such as meaning assigned to static knowledge (semantics) and multiple meanings for a single feature (polysemy). This will enable relational modeling and the mapping of segmented image data to lineage trees and taxonomies. Our model, tentatively called DevLearningv1, should be applicable to a wide range of neural network and deep learning techniques.
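As a point of reference, here is a minimal transfer-learning sketch in PyTorch showing the general pattern a DevLearningv1-style model would follow: load a backbone with pre-trained (non-random) weights, freeze the generic feature extractor, and attach a new task-specific head. The backbone choice, class count, and optimizer are illustrative assumptions, not the project's actual design.

```python
# Illustrative transfer-learning sketch in PyTorch; a stand-in for how a
# DevLearningv1-style pre-trained model might be reused, not the project's code.
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(num_classes: int = 4):
    """Load a backbone with pre-trained (non-random) weights and attach a new head."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in backbone.parameters():
        param.requires_grad = False                                 # freeze generic features
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)   # task-specific head
    return backbone

model = build_finetune_model()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)        # train only the head
```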

Thanks to INCF for sponsoring our activities once again this year. Thanks also go to Vinay Varma, who will be providing support on all things mentorship this summer. Vinay was a GSoC student last summer, and will be sharing his wisdom with this year's students. I would also like to invite all those who applied for these projects to contribute to the Organization in some other way. Often, interaction with the community now can lead to additional opportunities down the road. 
As for the Orthogonal Research and Education Lab project (Contextual Neurodevelopmental Dynamics), we unfortunately did not get any slots this year. Thanks to Ankit Gupta and Jesse Parent for their excellent proposals. But I would like to continue pursuing the initiative as an open-source effort, hopefully leading to other avenues for development and funding. The same community interaction advice given for OpenWorm applies to Orthogonal Lab as well. 
We are going to be developing in the Meta-Brain repository on GitHub. Be sure to check out our Saturday Morning NeuroSim meetings for more information on this project (join our mailing list!). And register for the Neuromatch Summer School if you have not already, as it will be quite relevant to what we will be doing.
A sample of Saturday Morning NeuroSim (with a recap of the ICLR conference). Click to enlarge.

UPDATE (5/5): One of our regular meeting attendees (Devansh Batra) has also received a Google Summer of Code position with the OpenCV organization. Congrats!
NOTES:
[1] Alicea, B., Gordon, R., Kohrmann, A., Parent, J., and Varma, V. (2019). Pre-trained Machine Learning Models for Developmental Biology. The Node blog, October 29.

April 10, 2020

fQXi essay on the Undecidable, Uncomputable, and Unpredictable


It's that time of year again: the fQXi essay contest for 2020 is going strong! Every 12-24 months, fQXi (the Foundational Questions Institute) sponsors an essay contest on a different topic. The fQXi community [1] then responds to the essays using a ratings and comment system. This year's topic was "Undecidability, Uncomputability, and Unpredictability", a topic not only applicable to physics, but to fields ranging from Computer Science to Sociology and even Biology. Check out the collection of submissions for some incredibly creative takes on the topic.


Along with Orthogonal Research and Education Lab members Jesse Parent (@JesParent on Twitter) and Ankit Gupta (@ankiitgupta7 on Twitter), I submitted an essay called "The illusion of structure or insufficiency of approach? the un(3) of unruly problems".

I have also posted several essays from years past as part of a ResearchGate project. These include "Establishing the Phenomenological Conditions of Intention-like Goal-oriented Behavior" from 2016 and "Towards the meta-fundamental: introducing intercontextual invariants" from 2018.

A few weeks after submitting this year's essay, I discovered the work of Nicolas Gisin, who has published a series of papers [2] on alternative forms of mathematics (such as intuitionism) for describing complex systems. While his examples are limited to physics, they are a complement to this year's essay.

NOTES:
[1] for some stimulating internet discussion, check out the Alternative Models of Reality section of the fQXi community.

[2] Gisin, N. (2020). Mathematical languages shape our understanding of time in physics. Nature Physics, 16, 114–116.

April 3, 2020

NeuroConferences in Twos


Welcome to April and the era of the virtual conference! Aside from enforcing social distancing, virtual conferences have a number of other social benefits. I have recently presented at two virtual conferences, one on Twitter and one via teleconference software. The Twitter conference (OHBMx) was previously held as the brain.tc conference. This year, I presented our new paper "Braitenberg Vehicles as Developmental Neurosimulation" on behalf of four co-authors.


Our talk (#44) at OHBMx (click to enlarge).

In this talk and paper, we explore the use of Braitenberg Vehicles (BVs) to study processes related to brain development and the ontogenetic emergence of behavior. These so-called developmental BVs (dBVs) are flexible systems that allow for naturalistic explorations of embodied behavior. Our approach utilizes four allied topical areas: evolutionary simulations, multisensory Hebbian learning, simulations of collective behavior, and explorations of embodied cognition.
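To give a flavor of the multisensory Hebbian learning component, here is a minimal sketch of a Hebbian weight update in Python: a connection strengthens when pre- and post-synaptic activity co-occur, with decay and clipping to keep the weights bounded. The learning rate, decay, and network size are assumptions made for illustration and do not come from the paper.

```python
import numpy as np

# Minimal multisensory Hebbian sketch (illustrative; not the paper's implementation).
# A weight from each of two sensory channels to one motor unit grows when pre- and
# post-synaptic activity co-occur; a decay term and a clip keep the weights bounded.
rng = np.random.default_rng(0)
w = np.zeros(2)                  # weights: [sensory channel 1 -> motor, channel 2 -> motor]
eta, decay = 0.05, 0.02

for step in range(1000):
    sensory = rng.random(2)                               # two co-occurring sensory inputs
    motor = sensory @ w + 0.1 * rng.standard_normal()     # motor activity (plus noise)
    w = np.clip(w + eta * sensory * motor - decay * w, 0.0, 1.0)

print("learned weights:", w)
```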

A little more than a week later, one of my co-authors from the Twitter conference (Jesse Parent) and I presented at the Neuromatch conference. Neuromatch was put together in only a few weeks by Konrad Kording, Dan Goodman, and Titipat Achakulvisut (congrats to them). The organizers used Crowdcast and Zoom as the presentation media. Two days of talks and over 2,500 attendees! While there were various technical challenges to overcome (such as Zoombombing), the conference was a great success. The organizers said that this conference was meant as a template for future all-virtual conferences.


The title of my presentation was "Process as Connectivity: towards biology-specific complex networks". This is an update on presentations given at the Find Your Inner Modeler II workshop and the NetSci 2017 conference. Jesse's presentation was "Embodied Cognition: Using Developmental Braitenberg Vehicles To Model Levels of Representation", and, while delayed due to technical difficulties, was a follow-up to our dBVs paper.

Two great talks at Neuromatch! Click to enlarge.

The other part of Neuromatch is matching researchers based on interest area. One interesting outcome of this matching process is the new BrainWeb community. BrainWeb is a new collaborative platform aimed at bringing together neuroscientists and neuro-adjacent people with expertise to share. They have even visualized an adjacency matrix of the community so far, based on expertise. Check out their website for information on hackathon times and virtual locations (URLs).


Now it is I who is caught up in the hairball! Click to enlarge.

UPDATE (4/6): a new paper from several eLife Ambassadors is now out on bioRxiv with recommendations on how to improve the academic conference experience. Many of their concerns and recommendations for change dovetail with the virtual conference experience.

Citation: Sarabipour et al. (2020). Evaluating features of scientific conferences: A call for improvements. bioRxiv, doi:10.1101/2020.04.02.022079.

March 11, 2020

Silver Linings of COVID-19

PLoS headquarters when most of its staff is working remotely (click to enlarge).

A Brady Bunch pun on remote work from our friends at Numenta (click to enlarge).

This potentially tasteless title brings to mind the positive elements of canceling classes, academic conferences, and workplace meetings: the ability to do these activities virtually. Among my current projects, I am involved in a number of working groups that are entirely virtual. These groups utilize Zoom and Google Meet to give talks and hold meetings, with GitHub, Google Docs, and a host of other tools to manage contributions and research products (papers, talks, social media posts). This might be called the "Zoom/Slack" paradigm. Below is a Twitter thread from a Sloan Foundation program officer that asks for thoughts on alternatives to this standard.


In 2014, I posted on a concept called a theory hackathon, which was held as a hybrid physical and virtual event. The idea is to define and work on problems that are best solved in teams where not all members can meet live. But online meetings are evolving beyond awkward encounters and technical glitches. Often, live physical meetings are meant to cement social ties. Indeed, below is an informal survey that asks this very question.


In general, live physical conferences seem to be most useful for social connection. Revisiting the tweet from Josh Greenberg, perhaps what is needed, aside from virtual meeting spaces and file exchange/chat functionality, is a frictionless social platform. This could be conventional social media or, more likely, a virtual reality platform integrated with live video, version-controlled file exchange, and chat capabilities.

Virtual meetings and virtual work are not without their own rhythms and customs. Aside from the potential for social disconnection, they also pose a challenge for personal habituation and, ultimately, productivity. Below is a link to a Twitter thread that gives tips on meeting virtually for people who are unaccustomed to doing so.

Online meeting tips from Mozilla Open Leaders (click to enlarge).

Carpentries-style tips for synchronous online meetings (click to enlarge).

There are also tips for working from home more generally. As with virtual meetings, the capacity to work remotely has accelerated in the age of social media [1]. The link below gives tips on adjusting to working from home after working in a large office or public place. Generally, virtual work does require a change in expectations, from dealing with technical glitches to dealing with gaps in social presence [2].

Tips on adjusting to working at home (click to enlarge).

Well-being while working from home (download) (click to enlarge).

Draft workbook on how to host an online conference (click to enlarge).

Online conferences are more than simply scaled-up virtual meetings. There is a method to conducting and organizing online conferences [3], and there are a number of options regarding the medium. Returning to the issue of greater social connectivity in virtual meetings, one solution is to hold the conference in a virtual world such as Second Life. In this type of meeting, you are able to meet other people as avatars and even interact with the venue itself. Below are two examples of my past experiences with Second Life academic events: one a continuing lecture series called Embryo Physics, and the other a conference called Simulation and Second Life.


Tour of the Embryo Physics Course @ Silver Bog, Second Life (click to enlarge).


My avatar at the Simulation and Second Life conference, 2007 (click to enlarge).

As a bonus, there is a new agent-based model of COVID-19 transmission created by Paul Smaldino and implemented in NetLogo. This model demonstrates the efficacy of social distancing (hence the resurgent interest in working virtually).
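For intuition about what such a model shows, here is a minimal agent-based sketch in Python (not Smaldino's NetLogo model): agents wander on a plane, infection spreads within a contact radius, and a fraction of agents stay put to "distance". All parameters are illustrative assumptions, but increasing the distancing fraction reliably lowers the infection peak.

```python
import random

# Minimal agent-based sketch of social distancing (illustrative; not Smaldino's
# NetLogo model). Agents move on a plane; infection spreads within a contact
# radius; a fraction of agents "distance" by staying put, which slows the spread.

def run(n=200, steps=200, distancing=0.5, radius=1.0, p_infect=0.5, recover=30):
    agents = [{"x": random.uniform(0, 25), "y": random.uniform(0, 25),
               "state": "S", "timer": 0,
               "mobile": random.random() > distancing} for _ in range(n)]
    agents[0]["state"], agents[0]["timer"] = "I", recover   # seed one infection
    peak = 0
    for _ in range(steps):
        for a in agents:                                    # mobile agents wander
            if a["mobile"]:
                a["x"] += random.uniform(-1, 1)
                a["y"] += random.uniform(-1, 1)
        infected = [a for a in agents if a["state"] == "I"]
        for a in agents:                                    # proximity-based infection
            if a["state"] == "S" and any((a["x"] - i["x"]) ** 2 + (a["y"] - i["y"]) ** 2
                                         < radius ** 2 for i in infected):
                if random.random() < p_infect:
                    a["state"], a["timer"] = "I", recover
        for a in infected:                                  # recovery countdown
            a["timer"] -= 1
            if a["timer"] <= 0:
                a["state"] = "R"
        peak = max(peak, sum(a["state"] == "I" for a in agents))
    return peak

print("peak infections, no distancing:", run(distancing=0.0))
print("peak infections, 75% distancing:", run(distancing=0.75))
```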

Discussion of COVID-19 transmission model as a Twitter thread (click to enlarge).

Be sure to also check out the Living Computation Foundation's "Pandemic in a Box"! Click to enlarge.


NOTES:
[1] Williams, A. (2017). How the Rise of Social Media Fostered a Culture of Remote Working. Social Media Week, April 14.

[2] Oh, C.S., Bailenson, J.N., and Welch, G.F. (2018). A Systematic Review of Social Presence: Definition, Antecedents, and Implications. Frontiers in Robotics and AI, doi:10.3389/frobt.2018.00114.

[3] Reshef, O., Aharonovich, I., Armani, A., Gigan, S., Grange, R., Kats, M.A., and Sapienza, R. (2020). How to organize an online conference. arXiv:2003.03219.
