Learners often struggle to grasp the central principles of complex systems, which describe how interactions between individual agents can produce complex, aggregate-level patterns. Learners have even more difficulty transferring their understanding of these principles across superficially dissimilar instantiations of the principles. Here, we provide evidence that teaching high school students an agent-based modeling language can enable students to apply complex systems principles across superficially different domains. We measured student performance on a complex systems assessment before and after 1 week of training in how to program models using NetLogo (Wilensky, 1999a). Instruction in NetLogo helped two classes of high school students apply complex systems principles to a broad array of phenomena not previously encountered. We argue that teaching an agent-based computational modeling language effectively combines the benefits of explicitly defining the abstract principles underlying agent-level interactions with the advantages of concretely grounding knowledge through interactions with agent-based models.
Hansen, M. E., Lumsdaine, A., & Goldstone, R. L. (2013). An experiment on the cognitive complexity of code. Proceedings of the Thirty-Fifth Annual Conference of the Cognitive Science Society. Berlin, Germany: Cognitive Science Society.
What simple factors impact the cognitive complexity of code? We present an experiment in which participants predict the output of ten small Python programs. Even with such simple programs, we find a complex relationship between code, expertise, and correctness. We use subtle differences between program versions to demonstrate that small notational changes can have profound effects on comprehension. We catalog common errors for each program, and perform an in-depth data analysis to uncover effects on response correctness and speed.
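The kind of subtle notational difference the abstract refers to can be illustrated with a small, hypothetical Python pair (not drawn from the study's ten programs): two visually similar comparison expressions whose meanings diverge because of Python's comparison chaining.

```python
# Hypothetical illustration: two near-identical expressions with different
# semantics, the sort of notational subtlety that can trip up readers
# predicting a program's output.

x = 10

# Chained comparison: evaluates as (1 < x) and (x < 5).
a = 1 < x < 5      # False, since 10 < 5 is False

# Parenthesized version: (1 < x) yields True, and True compares as 1,
# so the expression becomes 1 < 5.
b = (1 < x) < 5    # True

print(a, b)        # False True
```

A reader skimming both lines might predict identical results; the parentheses quietly change which comparison is performed, echoing the paper's finding that small notational changes can have outsized effects on comprehension.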
Programming language and library designers often debate the usability of particular design choices. These choices may impact many developers, yet scientific evidence for them is rarely provided. Cognitive models of program comprehension have existed for over thirty years, but lack quantitative definitions of their internal components and processes. To ease the burden of quantifying existing models, we recommend using the ACT-R cognitive architecture: a simulation framework for psychological models. In this paper, we provide a high-level overview of modern cognitive architectures while concentrating on the details of ACT-R. We review an existing quantitative program comprehension model, and consider how it could be simplified and implemented within the ACT-R framework. Lastly, we discuss the challenges and potential benefits associated with building a comprehensive cognitive model on top of a cognitive architecture.