November 27, 2020

Open-source Community-building Discussions

As part of my role as Community Manager at the Rokwire Initiative, I have been maintaining content on a personal blog. I am highlighting some of the systems-oriented posts here on Synthetic Daisies. The first re-post summarizes the functional aspects of open-source communities, while in the second re-post I propose that open-source communities are actually a form of collective intelligence. The third re-post, a revisitation of wicked problems, should be of particular interest to this blog's audience.


Striving for Issue 0 (why do we have a community)

Why do we want to build a community in the first place? This question may seem self-evident on its surface, even in the open-source context. But it is good to state the reasons why we might want to create a community. More importantly, it is good to think about communities as having explicitly stated benefits. Communities allow us to tap into a heterogeneous pool of expertise, build upon a network of contributors, recruit collaborators for future projects, and serve to facilitate education.

What direction does the action flow? A countdown to the essential issue, or enumeration of the issues?

The first benefit of an open-source community is to allow for a single contributor to interact quickly with many other contributors with a wide range of skills and expertise. For example, if you are a technical writer and have a question about a new programming language, you can post a message to the Slack channel with your inquiry and receive a helpful answer much more quickly than making a blanket inquiry or searching for the answer yourself. This actually sounds like the guiding principle of a University, but with a much more decentralized and informal structure.

Unlike a University, open-source communities are inherently interconnected [1], and as such serve to form a social network of contributors. Our hypothetical Slack channel is part of a Slack team (with many channels), and the Slack team is in turn a part of the entire community. As with a traditional workplace or school, this involves the interactions of many people, often with different roles and interests. A community allows for people who might not otherwise interact to cross paths, and mutually benefit from this serendipity [2].

Example of network structure in an open source community. Map of openrpg bug report interactions. From [3].

By their nature, open-source community networks are broad and loosely integrated. This is necessary by design; many open-source contributors have other commitments and contribute erratically to the project. But this is a good thing! One benefit of this structure is to enable contributions from people who could otherwise not make the time commitment. Related to this is the existence of a talent pool that is poised to build out an existing initiative, or engage in a new initiative. This alleviates the need to recruit and incentivize people using conventional hiring mechanisms.

But these relationships should not be designed to be exploitative. In fact, one ingredient of a healthy and stable open-source community is the feeling that contributors are able to benefit from their participation. Short of financial compensation, there are many other ways that contributors can benefit. One of these is becoming educated about the software platform itself. Educational incentives such as badging systems (microcredentials) serve as small-scale incentives. Being a contributor also allows one to learn about the latest features, and even have a hand in developing them. Another way open-source communities can reinforce education is through outreach, particularly in the form of an Ambassadors program [4]. A focus on learning opportunities centered on a specific software platform, in addition to learning and promoting associated skills, builds participation incentives into the community.

In these ways, we can take a step back from issues and builds and think about how we can leverage the community as a benefit of the platform. Then we can work towards addressing Issue 0: building and maintaining a community.

NOTES:

[1] Teixeira, J., Robles, G., and Gonzalez-Barahona, J.M. (2015). Lessons learned from applying social network analysis on an industrial Free/Libre/Open Source Software ecosystem. Journal of Internet Services and Applications, 6, 14.

[2] Achakulvisut, T., Ruangrong, T., Acuna, D.E., Wyble, B., Goodman, D., and Kording, K. (2020). neuromatch: Algorithms to match scientists. eLife Labs, May 18.

[3] Crowston, K. and Howison, J. (2005). The social structure of free and open source software development. First Monday, 10(2).

[4] Starke, L. (2016). Building an Online Community Ambassador Program. Higher Logic blog, July 28.

The Role of Collective Intelligence in Open-source

What do ants have to do with open-source?

Ants (and social insects more generally) are capable of building structures that feature great complexity and labor specialization. These complicated structures result from the small contributions of individual ants, each of which has a specialized job in their community [1]. In this sense, ant colonies exhibit parallels with open-source collaboration. Both types of organization (ant societies and open-source communities) rely upon collective intelligence, or the wisdom and power of crowds, to create an artifact of great complexity shaped by design principles that emerge from these interactions.

Collective intelligence can be defined as intentional behaviors based on a coordinated set of goals among multiple individuals, and emerges from various forms of collaboration, competition, and group effort. For systems ranging from insect colonies to human societies [2], collective intelligence is a prime enabler of coordinated social behaviors and movements [3]. Being aware of how this process works is important for making the most of an open-source effort.

Traditional crowdsourcing can be understood as cooperation between two groups of people: requesters and contributors [4]. Requesters can be thought of as people who want a functional artifact, but may not be able to implement the solution. Contributors are people with technical know-how who realize the initial specification. I have discussed the open-source ethos and related contributor motivations in other posts on this blog. In this post, the focus will be on how requesters and contributors work together to produce a coherent outcome.

The find-fix-verify (FFV) workflow [5] has been identified as a facilitator of collective intelligence in open-source projects. FFV involves three steps: 1) finding room for improvement, such as a new feature or an error to correct, 2) proposing specific solutions to said improvements, and 3) verifying that such changes are acceptable solutions. FFV allows for a division of labor in open-source communities based on expertise and technical skill. Yet each step is not executed by a dedicated employee reporting to a supervisor. Rather, each step is performed by groups of contributors who make their own contributions based on personal experiences and context.
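To make this division of labor concrete, here is a minimal sketch of the FFV pipeline in Python. All names, notes, and the review policy are hypothetical, invented purely for illustration, not drawn from any real project:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Issue:
    """A unit of work moving through the find-fix-verify pipeline."""
    description: str
    proposed_fixes: list = field(default_factory=list)
    accepted_fix: Optional[str] = None

def find(codebase_notes):
    """Step 1: contributors flag room for improvement."""
    return [Issue(note) for note in codebase_notes if "TODO" in note or "bug" in note]

def fix(issue, contributors):
    """Step 2: a different pool of contributors proposes candidate solutions."""
    issue.proposed_fixes = [f"{name}: patch for '{issue.description}'" for name in contributors]

def verify(issue, approve):
    """Step 3: reviewers accept the first proposal that passes review."""
    for candidate in issue.proposed_fixes:
        if approve(candidate):
            issue.accepted_fix = candidate
            break

# Hypothetical walkthrough: notes and contributor names are illustrative only.
notes = ["bug: crash on empty input", "refactor config loader", "TODO: document API"]
issues = find(notes)
for issue in issues:
    fix(issue, contributors=["ana", "ben"])
    verify(issue, approve=lambda candidate: True)  # stand-in review policy

print([i.accepted_fix is not None for i in issues])  # → [True, True]
```

The point of the sketch is that each stage draws on a different pool of participants: the people who find an issue, the people who propose fixes, and the people who verify them need not overlap.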

In terms of organization, there are three ways in which open-source communities can be organized [4]. These are demonstrated graphically using Paul Baran’s networking diagram. The first is through direct leadership (centralized, A), which typically involves a single coordinator. This is most typically found in traditional corporate or academic organizations, and is often the least effective at harnessing the power of open-source.

Open-source communities can also be organized through collaboratives (decentralized, B), which involves the coming-together of people with a common interest. This form of organization is usually maintained over the long-term through expedient subgroups that form and dissolve given the immediate imperative. Finally, the passive mode of organization (distributed, C) is perhaps the most effective mode for facilitating open-source. In this type of organization, members of the crowd work independently, and in fact may never collaborate directly. This mode resembles so-called leaderless movements [6]. While collaborative and passive organizational modes have their advantages for facilitating open-source, there is no one-size-fits-all solution. The optimal organizational structure is often project- and goal-dependent.

Paul Baran’s Networking Diagram
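The three organizational modes can also be sketched as adjacency lists; the node counts below are chosen purely for illustration (this is a toy sketch, not a reproduction of Baran's original diagram):

```python
def centralized(n):
    # (A) Direct leadership: a single hub (node 0) links to every other node.
    edges = {i: [0] for i in range(1, n)}
    edges[0] = list(range(1, n))
    return edges

def decentralized(cluster_sizes):
    # (B) Collaboratives: several hubs, each with its own cluster of members,
    # with the hubs themselves linked in a ring.
    edges, hubs, next_id = {}, [], 0
    for size in cluster_sizes:
        hub = next_id
        members = list(range(next_id + 1, next_id + size))
        edges[hub] = members[:]
        for m in members:
            edges[m] = [hub]
        hubs.append(hub)
        next_id += size
    for a, b in zip(hubs, hubs[1:] + hubs[:1]):
        edges[a].append(b)
        edges[b].append(a)
    return edges

def distributed(n):
    # (C) Passive mode: a lattice in which each node links only to nearby
    # neighbors, and no node dominates.
    return {i: [j for j in (i - 1, i + 1, i - 4, i + 4) if 0 <= j < n]
            for i in range(n)}

def max_degree(edges):
    # How concentrated are the links in the busiest node?
    return max(len(neighbors) for neighbors in edges.values())

print(max_degree(centralized(12)))           # → 11 (the hub carries every link)
print(max_degree(decentralized([4, 4, 4])))  # → 5 (load spread over a few hubs)
print(max_degree(distributed(12)))           # → 4 (no node dominates)
```

The maximum degree makes the contrast visible: as organization shifts from centralized to distributed, no single coordinator remains a bottleneck, which mirrors why the passive mode can be so effective for open-source work.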

A great majority of open-source contributors are transient [7]. Generally, these people contribute to a single project, and within that project only contribute to a small portion of the codebase. Most open-source projects have a few key leaders (occasionally not dissimilar to queen ants) who coordinate and facilitate project management [8]. But this is only a coordination tactic; across hundreds or even thousands of contributors, great things can happen.

NOTES:

[1] Bonabeau, E., Dorigo, M., and Theraulaz, G. (1999). Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York.

[2] O’Bryan, L., Beier, M., and Salas, E. (2020). How Approaches to Animal Swarm Intelligence Can Improve the Study of Collective Intelligence in Human Teams. Journal of Intelligence, 8(1), 9.

[3] Sasaki, T. and Biro, D. (2017). Cumulative culture can emerge from collective intelligence in animal groups. Nature Communications, 15049.

[4] Bigham, J.P., Bernstein, M.S., and Adar, E. (2014). Human-Computer Interaction and Collective Intelligence. Chapter 2, Collective Intelligence Handbook.

[5] Bernstein, M.S., Little, G., Miller, R.C., Hartmann, B., Ackerman, M.S., Karger, D.R., Crowell, D., and Panovich, K. (2015). Soylent: A Word Processor with a Crowd Inside. Communications of the ACM, 58(8), 85-94. doi:10.1145/2791285.

[6] Alicea, B. (2012). Leaderless control: understanding unguided order. Synthetic Daisies, April 9.

[7] Cui, X. and Stiles, E. (2010). Workings of Collective Intelligence within Open Source Communities. Third International Conference on Social Computing, Behavioral Modeling, and Prediction, Bethesda, MD.

[8] Alicea, B. (2020). Building a Distributed Virtual Laboratory Adjacent to Academia. MetaArXiv, doi:10.31222/osf.io/4k3z6.


Infinite Issues (issue infinity): how to break down a wicked problem

In this blog post, I want to discuss how to break down a big problem into a set of smaller issues that are addressable in a short-term timescale. Previously, I discussed how to break a complex problem down into addressable action items. Now I want to do this in the context of wicked problems, or problems that are highly complex, hard to predict, and have multiple unintended or unforeseen outcomes.

Solving a basic problem requires you to take its most salient and/or interesting features, and then establish the outlines of a solution. This might involve breaking the problem up into more easily solved parts, or asking additional questions about the nature and context of the problem. Next is a consideration of what resources you might need. In an open-source project, many of these resources revolve around the time constraints of contributors. Therefore, one key to creating issues is to break down the problem into small pieces that are both relatively easy to solve and require a low time commitment. Individually, these might seem too small to matter. Taken together, however, they allow you to build large-scale applications.
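This decomposition step can be sketched as a simple filter over candidate subtasks. The task names and effort estimates below are invented for illustration; real backlogs will of course be messier:

```python
def issue_backlog(subtasks, max_hours=4):
    """Keep only pieces small enough for a volunteer's spare time.

    Oversized pieces are returned separately, to be decomposed again
    until each resulting issue fits the time budget.
    """
    small = [task for task, hours in subtasks if hours <= max_hours]
    large = [task for task, hours in subtasks if hours > max_hours]
    return small, large

# Hypothetical subtasks for the "build a smart watch" example below.
subtasks = [
    ("write the watch-face renderer", 40),  # too big: split further
    ("add a unit test for the timer", 2),
    ("fix typo in the README", 1),
]
ready, needs_split = issue_backlog(subtasks)
print(ready)        # → ['add a unit test for the timer', 'fix typo in the README']
print(needs_split)  # → ['write the watch-face renderer']
```

The recursion implied here (split, filter, split again) is what turns one large problem into a backlog of contributor-sized issues.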

Of course, this procedure works well for normal complex problems, such as “let’s build a smart watch”! But suppose we want to work on a problem that interacts strongly with social systems, such as how to mitigate a pandemic. Then we have to think about the problem in terms of so-called wicked problems.

Wicked problems have a very high degree of computational complexity, which translates into a system with so many moving parts that it cannot be analyzed in a way that provides an exact answer. Wicked problems exhibit four defining features:

1) ill-defined problems reign supreme: it is hard to define the problem, or even a set of issues, at least initially.

2) all solutions to a given problem are at best a guess. This includes the ability to break down a problem into salient issues. Most issues in wicked problems will require the approximation of the salient issues, as well as forms of rapid prototyping to refine issues as the problem domain becomes more familiar.


Wicked Problems from a design perspective. Image is from Figure 1 in [1].

3) no natural end-point, where a system does not have a clearly defined stopping point.

4) so-called messes: interactions between subdomains, which cannot be easily broken up into discrete parts [for more, see 2]. This might be because your problem domain has porous boundaries/categories, or because the problem itself is highly variable over time.

One example of a wicked problem is an institutional response to COVID. The University of Illinois has responded fairly effectively, but this has involved innovations on multiple fronts. Part of this has been Safer Illinois, which is a normal complex problem. But Safer is necessary, not sufficient, for solving this problem. The real success has come from multiple institutional components such as testing regimens and building ambassadors. While an app designed to manage one part of the pandemic response is a basic problem, the response as a whole is a wicked problem. This is something to consider when we think about how our contributions fit into larger issues.

NOTES:

[1] Jobst, B. and Meinel, C. (2013). How Prototyping Helps to Solve Wicked Problems. Design Thinking Research, 105-113.

[2] Yeh, R.T. (1991). System Development as a Wicked Problem. International Journal of Software Engineering and Knowledge Engineering, 1(2), 117-130.

November 18, 2020

Presentations at Neuromatch (NM)3


Neuromatch 3 (NM3) happened a few weeks ago in virtual space, and it was great! There were hundreds of presentations over five days (October 26-30), many of them already archived on YouTube. This version of Neuromatch took the place of Society for Neuroscience (SfN), which was cancelled due to COVID. In this sense, Neuromatch is proving itself to be an improvement on the legacy conference. Between my two research groups (Representational Brains and Phenotypes and DevoWorm), we had five presentations submitted to NM3. Let's go through them one by one.



This presentation was by Krishna Katyal and myself. Krishna is a regular contributor to the DevoWorm group. This presentation demonstrates several contrasts between Biological and Artificial Neural Networks, and how level of abstraction, network structure, and energetics all play a role in distinguishing the information-processing marvels known as biological brains.



Thinking more along the lines of biological development, the next presentation features work done in the Representational Brains and Phenotypes Group. Taking a developmental approach to the classic Braitenberg Vehicle, we demonstrate how embodied developmental principles can be used to shape and guide networks for learning. Topics such as developmental contingency and the difference between morphogenesis and learning during the development of an artificial nervous system were also discussed.



Are the advantages of cognitive information processing limited to organisms with a brain? This seems like a strange question, but can actually help us understand what a brain does and why it is important for coordinating the behavior of complex multicellular systems. In this presentation, which includes contributors to both the Representational Brains and Phenotypes and DevoWorm groups, we reconsider a model of Diatom movement as cognitive information processing. We also propose a series of potential scenarios in which this information processing occurs as well as Psychophysical-like measures to quantify these phenomena.



This abstract was submitted by Jesse Parent and Anson Lim, two regular contributors to the Orthogonal Research and Education Lab. Jesse is also a group leader and community manager in the lab. While they were not able to present during the scheduled time, they continue to work on this topic under the new and emerging Cognition Futures project.



The final talk was by Akshara Gopi (a regular contributor to the Orthogonal Research and Education Lab), along with myself and Ashwin Irungovel. This presentation was on convergence insufficiency in human vision, a topical specialty of Akshara and Ashwin. My contribution was to propose an agent-based model for this phenomenon. 

Check out all of the great talks at NM3, including one by Rishabh Chakrabarty (regular research contributor to the Orthogonal Lab) and his co-author called "Seeing through the Mind’s Eye: reconstruction of the visual stimuli using 3D Generative-Adversarial Modeling". This intriguingly titled talk features research that combines Neuroimaging data with Deep Learning.

If you are interested in these topics and want to be involved in this research, join us at our weekly research meetings (Saturday Morning NeuroSim and DevoWorm group), or watch them on YouTube! I also invite you to join the Orthogonal Research and Education Lab Slack or OpenWorm Slack (DevoWorm) for continued discussion. Hope to see everyone at Neuromatch 4!
