Alumni Profiles
Andrew McCallum PhD '96
Interview from 2017 Multicast Newsletter
Andrew McCallum is a Professor and Director of the Information Extraction and Synthesis Laboratory, as well as Director of the Center for Data Science, in the College of Information and Computer Sciences at the University of Massachusetts Amherst. He has published over 250 papers in many areas of AI, including natural language processing, machine learning, and reinforcement learning; his work has received over 50,000 citations. He obtained his PhD from the University of Rochester in 1995 with Dana Ballard. He is an AAAI Fellow and the recipient of the UMass Chancellor's Award for Research and Creative Activity, the UMass NSM Distinguished Research Award, the UMass Lilly Teaching Fellowship, and research awards from Google, IBM, Microsoft, and Yahoo.
You have an unattributed quote on your website: “A master can tell you what he expects of you. A teacher, though, awakens your own expectations”. Is this your advising style? How do you bring out the best in your students?
AM: Dana Ballard, my PhD advisor at Rochester, was a big influence on my advising style. He led by inspiration. I remember seeing him give a talk about his latest ideas for active vision, and walking away completely jazzed, ready to change the world.
I try to lead my own students by inspiration. I certainly guide with a light touch. I make suggestions for broad directions or certain technical approaches, but the students make their own choices. Sometimes it can turn into sink or swim, but hopefully with a lot of calm, loving advice from the sidelines. I really see my advisees as additional children––my academic offspring. There are a lot of parallels: they need both nurturing and freedom in careful measures on just the right schedules, which can often be hard to discern.
Your PhD thesis was in reinforcement learning. Since machine learning was in its infancy, how significant was your early work in establishing machine learning as a new direction in computer science?
AM: Before I finished my thesis I was already becoming disenchanted with reinforcement learning, and I really wanted to start a side-project on machine learning for natural language. Dana very wisely said, "No. Finish your thesis first." I was disenchanted because at the time I felt that RL was a solution in search of a problem. The RL community was mostly stuck solving maze problems. (I certainly contributed to that problematic trend.)
Of course now we are seeing how wrong I was, given the fantastic success of reinforcement learning with deep neural networks, not just in DeepMind's AlphaGo, but in education, medicine, energy, and many other areas. I think it is one of the most exciting research directions to emerge in a decade.
My later work on machine learning for text perhaps had more influence (for example, conditional random fields), but I certainly wouldn't have been able to do that without the mental exercise and confidence-building of first doing my Rochester reinforcement learning thesis.
You have created some machine learning tools that are well known and widely used by technologists. Could you describe MALLET and FACTORIE, how they are being used, and who would be most likely to use them?
AM: I have always tremendously enjoyed engineering large systems and writing code. In fact, during a low-confidence moment in my PhD I considered leaving the UR CS PhD program to work as a coder on a Free Software Foundation project. (That probably would have been fun, but I'm very glad I stuck with research.) I did, however, waste a huge amount of time in the final months of my PhD reading through much of the source code of the GNU Hurd project.
Anyway, after WhizBang Labs (the startup company that Tom Mitchell and I helped found in 1999) went under, I realized I really missed working with students and focusing on research. So I decided to apply for an academic position. I feel very fortunate that UMass Amherst said yes. In the early days of WhizBang Labs I did a lot of coding, helping design a system that eventually turned into over 1 million lines of machine learning and data integration infrastructure. But during the final six months of WhizBang I hardly had any time to code. When I arrived at UMass I knew my students and I would need some software infrastructure to do our research. And I had a lot of pent-up coding energy. So during my first three months at UMass (before I had many students or too many responsibilities) I wrote something like 30,000 lines of Java, which became MALLET.
It was used by nearly all my students for almost ten years. I'm happy that many others seem to have found it useful too. I estimate that about 500 companies (large and small) have used it, and it has been cited almost 2,000 times. It mostly does efficient classification, clustering, topic modeling, and sequence labeling (the latter with conditional random fields).
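For readers curious what using MALLET looks like in practice, here is a minimal sketch of training a topic model with its Java API, following the pattern in MALLET's documentation; the file name, regular expressions, and parameter values below are illustrative assumptions rather than anything from the interview.

import cc.mallet.pipe.*;
import cc.mallet.pipe.iterator.CsvIterator;
import cc.mallet.topics.ParallelTopicModel;
import cc.mallet.types.InstanceList;

import java.io.FileReader;
import java.util.ArrayList;
import java.util.regex.Pattern;

public class MalletTopicsSketch {
    public static void main(String[] args) throws Exception {
        // Preprocessing pipeline: lowercase, tokenize, map tokens to feature indices.
        ArrayList<Pipe> pipes = new ArrayList<>();
        pipes.add(new CharSequenceLowercase());
        pipes.add(new CharSequence2TokenSequence(Pattern.compile("\\p{L}+")));
        pipes.add(new TokenSequence2FeatureSequence());

        // Load documents from a tab-separated file ("docs.tsv" is a placeholder path):
        // each line is "name<TAB>label<TAB>text".
        InstanceList instances = new InstanceList(new SerialPipes(pipes));
        instances.addThruPipe(new CsvIterator(new FileReader("docs.tsv"),
                Pattern.compile("^(\\S*)\\t(\\S*)\\t(.*)$"), 3, 2, 1));

        // Train an LDA topic model with 20 topics using MALLET's parallel Gibbs sampler.
        ParallelTopicModel lda = new ParallelTopicModel(20, 1.0, 0.01);
        lda.addInstances(instances);
        lda.setNumThreads(2);
        lda.setNumIterations(500);
        lda.estimate();
    }
}

The sequence-labeling side that McCallum mentions lives in a separate part of the toolkit (the cc.mallet.fst package, which provides the conditional random field implementation), but the import-a-pipeline-then-train shape is much the same.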
By 2009 my research was requiring more general graphical models than MALLET could provide. I was also getting tired of the verbosity of Java. So on the advice of my UMass colleague Emery Berger, I took a look at the programming language Scala, and I started work on FACTORIE. I wrote most of it during my sabbatical in Grenoble, France. It is a general-purpose graphical model toolkit designed to be fast and scalable (not a teaching toy). It also has a complete NLP pipeline that is in many cases faster, more accurate, and more compact than the Stanford CoreNLP toolkit. It is also fairly widely used. For example, it is part of some shipping products from Oracle.
But FACTORIE doesn't have any deep learning. So I'm now trying to decide whether to collaborate with some Oracle JVM engineers on some JVM-GPU integration and extend FACTORIE, or whether to build something new.
The day you defended your PhD thesis was a cold, snowy Rochester day. As you were racing around preparing for your talk, you slipped on melted snow in the CSB hallway, and cut open your head on a corner wall, requiring a trip to the ER and stitches. Later that day, you successfully defended your thesis with a bruised and bandaged forehead. Has this experience helped you prepare your students through all the inevitable ups and downs a grad student might experience along the path to a PhD?
AM: Ah! I remember slipping in the hallway and getting stitches at the hospital. And I remember giving Leslie Kaelbling a tour of our virtual reality lab before my thesis oral defense. But I had forgotten that they happened the same day!
The CS staff was so kind when I walked into Marty Guenther's office, blood streaming down my forehead. I think Jill Forster drove me to the ER, stayed, and drove me back.
Finishing a PhD is a surprisingly emotional journey, I find. From the intellectual search of the last year or so, to the final few months' push to finish the writing, to the oral presentation––and, throughout, the self-doubts and the accomplishments. I find myself thinking a lot about this as I guide my own students through to the end. I admit I shed a few tears myself along the way, including a few in Dana's office. The majority of my own PhD students can attest to the same thing.
You are well-known for your work in conditional random fields. Your 2001 paper on this topic received the ICML “Test of Time Award” in 2011 because of its applicability for more than a decade. Could you have imagined your work would be cited more than 10K times? What makes this paper so important after so many years?
AM: When John Lafferty, Fernando Pereira, and I were working on these ideas, I knew they were a great solution to the information extraction problem I was working on at the time, but we had no idea they would grow to be so influential.
In the beginning the experimental results weren't even very good. I had tremendous fun working on improving and extending the idea with my students at UMass. I was thrilled when a few people from other universities began to build on the ideas.
In some ways I see the inspiration for conditional random fields as related to my Rochester thesis work on "utile distinctions." In both, the algorithm is striving to focus on what is important for the problem at hand, and not waste modeling effort on capturing parts of the data that are not relevant.
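For context, the linear-chain conditional random field from that 2001 paper makes this concrete: it models only the conditional distribution of the label sequence y given the observed sequence x, using weighted feature functions over adjacent labels and the input (a standard statement of the model; notation varies across presentations):

p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})} \exp\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, \mathbf{x}, t) \Big),
\qquad
Z(\mathbf{x}) = \sum_{\mathbf{y}'} \exp\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y'_{t-1}, y'_t, \mathbf{x}, t) \Big)

Because the normalizer Z(x) sums only over label sequences, none of the model's capacity is spent describing the distribution of the inputs themselves, which is one way to read the "don't waste modeling effort" idea above.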
It has been exciting to see interest in CRFs morph as the deep neural network revolution has exploded. The combination of deep learning and structured prediction is still largely unexplored. In ICML papers in 2016 and 2017, my student David Belanger and I proposed a new approach we call "structured prediction energy networks," in which the factor graph is replaced by a new energy function based on a neural network. I'll be interested to see where this idea ends up in 10 years.
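In simplified notation (the exact parameterization is spelled out in the ICML papers), a structured prediction energy network scores a candidate labeling with a learned neural energy E_x and predicts by minimizing that energy over a continuous relaxation of the discrete label space:

\hat{\mathbf{y}} \;=\; \arg\min_{\bar{\mathbf{y}} \,\in\, [0,1]^{L}} \; E_{\mathbf{x}}(\bar{\mathbf{y}})

Inference thus becomes gradient-based optimization over the relaxed labels rather than message passing over a factor graph, and the learned energy can capture global interactions among all L output variables.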
As president of the International Machine Learning Society, you have other UR alumni on the board with you: Jeff Schneider (PhD ’95) and Corinna Cortes (PhD ’94) serve with you as secretary and board member. Was URCS at the forefront of machine learning in the mid-’90s? How have our alumni contributed to this field?
AM: So many UR alumni are now in such great leadership positions. Corinna is heading research in Google NYC. Jeff is leading various machine learning projects at Uber. I feel fortunate to have shared our grad school years together. I remember studying for quals with Corinna. With Jeff I remember lots of studying for Lane's theory class and Michael Scott's systems class, as well as big Friday night dinners and game nights, in which I did most of the cooking (which I loved), and Jeff did most of the winning at poker (which I assume he enjoyed also).
UR CS has always been a gem––small, but sparkling with creativity and warmth. When I applied to PhD programs I was waitlisted at MIT. I'm so glad I didn't go to MIT; I think it would have eaten me alive. UR CS was the perfect place to grow, learn and be inspired.
What are the top three biggest machine learning successes? What do you see as the next big problem that machine learning will solve?
AM: Deep neural networks for computer vision in self-driving cars. Structured prediction by various methods for natural language understanding. Reinforcement learning with function approximation for the game of Go.
(By the way, I think Dana and I spent more time playing Go together than we did talking about research. I believe I learned just as much about life, strategic choices, and resilience from those games as I would have by conversations about research or any other topic.)
Next big problems? I'll give two related ones: (1) Unsupervised discovery of task structure, e.g. reinforcement learning subroutines, exploration, and curiosity. (2) Conversational dialog.
Outside of work, how do you spend your recreation time? Do you still contra dance like you did when you were a graduate student?
AM: My wife Donna and I met at the regular weekly contra dance in Rochester. I started contra dancing there because George Ferguson invited me to play Ultimate, and after the frisbee game someone there invited me to stay in the field where there was a rare outdoor contra dance.
Donna and I still occasionally dance here in western Massachusetts (where some of the hottest contra dance folk bands in the country call home), but we spend more time doing things with our kids. My youngest loves mountain climbing. We have nearly finished the highest peaks in each of the New England states, and soon we'll work on all forty-eight 4,000-footers in the New Hampshire White Mountains. My oldest son heads to university this fall to study Industrial Design at Rochester Institute of Technology. So I'll have additional reasons to come back to Rochester and visit. I'd love to drop in on the UR CS department again and thank you all in person for the kind-hearted and inspirational help you gave me and so many others.