The Bus Video This isn't an essay or a blog post, but it's my favorite introduction to what we're trying to do at the School of Computing and Information at Pitt.
Push Science If machines can master vast, fragmented literatures, assemble models from what they read, and push those models out for all to see, then humans can study the models and read individual papers only as needed.
Modeling and Managing Complicated, Interacting Systems. At DARPA, I realized that it is very difficult to build mechanistic, explanatory models of complicated systems, much less interacting systems. It can take years to model, say, cell signaling pathways in cancer processes, or land use policies, or the economic effects of climate change. I wanted to focus on
explanatory models, not just associative machine learning. At first, my Big Mechanism program seemed no more than an opportunity to develop new kinds of modeling technology. But it soon became clear that helping humans to build complicated models is a killer app for AI.
Communicating with Computers This is a podcast from DARPA. We are very, very far from being able to communicate with computers in natural modes such as language, music and gesture. I was fascinated by a simple problem: Suppose a three-year-old has just stacked two wooden blocks, one on the other, and you say, "Add another one." The kid will put another block on top of the others because she understands "Add another one" in context. Generally, computers have impoverished notions of context, so they can't understand language that depends on it. And nearly all language depends on context!
Accelerating Science. DARPA celebrated its 60th birthday in 2018. This is a short article about efforts to automate science dating back to the earliest days of AI. These days, the "heroic" model of a lone genius seems quaint to me, and the heroic lone AI system seems quaint as well.
A Return to Polymathy. The systemic problems of our age -- climate change, food and water security, aging, energy -- are not neatly confined to academic disciplines, yet universities continue to organize themselves into narrow units that don't talk to each other. Academic researchers generally are ill-equipped to study systemic problems that cross disciplines. Yet it seems plausible to develop a curriculum of near-universal abstractions that, once mastered, encourage polymathy. I worry that if we don't start to train polymaths (and provide them with technologies to integrate huge numbers of narrow results, see, e.g.,
Big Mechanism) then we won't be able to understand and manage the systems on which civilization depends.
Contradictions. Harold Cohen, my father, was a great artist and a brilliant man. He died in 2016. He was the creator of
AARON, one of the first and, to me, still the most sophisticated programs to make computer-based art. Harold worked incessantly on AARON. The program went through many distinct generations and hundreds of subtly different versions. Not surprisingly, Harold's stories about AARON changed over the years. I suppose they were all true in the sense that they all represented what Harold was thinking about the program at any given time. To me, a scientist, this continuous reframing of the AARON story felt like a missed opportunity to frame and test hypotheses about art-making. To which Harold would say, "So what? I'm not pretending to be a scientist."
Focal Problems. I am lucky to work in Artificial Intelligence, which has always been driven by applied problems. Some problems are better than others at accelerating fundamental research (see also
"If not Turing's Test, then What?" and
accompanying AAAI 2004 invited talk). Good "focal" problems seem to satisfy half a dozen criteria, which I describe and illustrate in this short note.
Evaluation and Methods. As an undergraduate in psychology I learned about experimental methods and research statistics. By the early 1990s, I had the sense that AI researchers rarely made and tested empirical claims, but I had no data to back this up. So I surveyed all the papers in our major, annual conference, and
here are the results. Later on, I wrote a book called
Empirical Methods for Artificial Intelligence. I also teach a course on research methods for computer scientists and I give
tutorials.
What Professors Do. Way back in 1995, the Massachusetts State Legislature declined to increase faculty salaries. The Secretary of Finance and Administration, Charles Baker, said faculty were "underworked and overpaid." My wife and I, both associate professors at the time, wrote an editorial in which we documented our work in the course of a week. Few people understand what professors do; many think we "have summers off" and "don't understand the real world." This is our own fault: We need to communicate what we do, what universities have become, and how higher education actually works. Reviewing this editorial at a distance of twenty years, it seems antiquated: Things soon became busier and less idealistic.