Michael L. Scott, Chair
Arthur Gould Yates Professor of Engineering, Professor of Computer Science

Dear friends,
As I write this in mid October, I am a quarter of the way through the fourth and final year of my term as department chair. I am delighted to share that Dean Heinzelman has selected Chen Ding to succeed me next July. In the intervening months he is serving as Associate Chair.
As the world continues to heal from the COVID-19 pandemic, life in the department this year has felt quite a bit more “normal.” The building is buzzing with people, classes and seminars are well attended, and our staff, after some major reshuffling last year, is settling well into what we hope will be another long stretch of stability.
Commencement last May saw us award 125 BA and BS degrees, 28 MS degrees, and 20 PhDs. Among the undergraduates, almost a third were double majors; four were triple majors, and over 40% completed a minor in another field. Twenty studied abroad. Five completed senior honors theses, and dozens of others were actively engaged in research. Ten were inducted into Phi Beta Kappa. At the other end of the pipeline, 27 new MS students and 22 new PhD students entered the department this fall. The PhD cohort is the largest in department history.
Perhaps the most exciting news this year has been the hiring of four new faculty members—the most we’ve ever brought on at one time. Jian Kang strengthens our core expertise in machine learning, with a particular emphasis on fairness. Jiaming Liang, who has a joint appointment with Data Science, brings new expertise in optimization. Yukang Yan, who will be joining us in January, augments our HCI group, with a particular emphasis on mixed-reality systems. And Monika Polak, who joins our instructional faculty, is an active researcher in post-quantum cryptography. Bios of these wonderful new colleagues can be found elsewhere in this issue. (And we hope to hire two more this year!)
Among existing faculty, Jiebo Luo was elected a Fellow of the US National Academy of Inventors (NAI). Zhen Bai received an NSF CAREER Award. Work by Kaave Hosseini and others received the Best Paper Award at the International Colloquium on Automata, Languages and Programming (ICALP). Within the University, George Ferguson received the Edward Peck Curtis Award for Excellence in Undergraduate Teaching. Undergraduate Coordinator Sara Klinkbeil received the Edmund A. Hajim Outstanding Staff Award and PhD Coordinator Robin Clark received an ACE Staff Recognition Award.
Our students have also received a host of recognitions. Raiyan Baten (PhD ’22) won the Outstanding PhD Dissertation Award of the Association for the Advancement of Affective Computing (AAAC). Zhengyuan Yang (PhD ’20) received the 2022 ACM SIGMM Award for Outstanding PhD Thesis in Multimedia Computing. Jingyuan Chen ’25 and his advisor, Jiebo Luo, took the Best Student Paper Award at the IEEE International Conference on Digital Health (ICDH). Saiful Islam (PhD ’27) received a Google Research Fellowship. Hanjia Lyu (PhD ’28) received a Donald M. and Janet C. Barnard Fellowship.
Among our bachelor’s students, Sizhe Li ’23 was a finalist in the Computing Research Association’s Outstanding Undergraduate Researcher competition; Adira Blumenthal ’24 and Draco Xu ’23 received honorable mention. Sidhant Bendre ’23 and Neil Yeung ’23 took third place in the Forbes Entrepreneurial Competition. Our lead team for the International Collegiate Programming Contest, consisting of Tran Bao ’23, Loc Bui ’22, and Vladimir Maksimovski ’22 (coached by Daniel Štefankovič and Adam Purtee), took ninth place in the Northeast North America regional competition. Seven of the eight teams that placed ahead of them were from MIT; the eighth was from Harvard. Their performance earned them a spot at the North American finals. Within the University, CS students received a long list of awards—too many to list here. I will, however, call out the CS Undergraduate Council (CSUG) for winning the Dean of Students’ Award for Excellence in Creative Co-Sponsorship and, in a competition that spanned the Engineering School, Erin Gibson ’23 for winning the Hook Prize, Henry Welles ’23 the Newton Prize, Ashley Wilson ’24 the Block Prize, and Qiyuan Feng ’24 the Wells Prize. CS rocks!
And of course our alumni continue to excel. Srinivasan Parthasarathy (PhD ’99) was named an IEEE Fellow. Bob Wisniewski (PhD ’96) was named Vice President and Chief Architect for High Performance Computing at Samsung Corp. Corinna Cortes (PhD ’94) was named an ACM Fellow. Bill Bolosky (PhD ’93) received the Test of Time Award from the 2023 USENIX Conference on File and Storage Technologies.
As we enter our 50th year as a department, there is much to be grateful for and to be excited about. Department research is thriving and increasingly interdisciplinary: we have collaborations underway not only with a host of departments in Arts, Science, and Engineering, but with the Medical Center, the Eastman School of Music, the Laboratory for Laser Energetics, and the Warner School of Education as well—not to mention with colleagues at labs around the world. AI and AR/VR are pillars of the University’s newly adopted strategic plan, and department faculty—Chris Kanan and Adam Purtee in particular—are playing a key role in the University’s response to the rise of large language models.
I hope you enjoy this issue of Multicast. I encourage you to drop by any time life brings you to Rochester. Last but not least, please mark your calendars for Sept. 27, 2024—the first day of Meliora Weekend—when we will be celebrating the 50th anniversary of the founding of the department. A crack committee, chaired by Chen Ding and working closely with University Advancement, is planning an all-day event, with scintillating speakers, a gala reception, and lots of time to socialize. You won’t want to miss it!
Yours, Michael

Featured Undergraduate Alumnus: Joshua Pawlicki
By Ted Pawlicki

Ted: We have with us Joshua Pawlicki, who graduated from CS@UR in 2012 with a double degree in Computer Science and Geology. While he is currently a staff software engineer at Google, we must state that he is speaking only for himself and not in any way on behalf of his employer.
So, what do you do at Google, Josh?
JP: I work on Chrome. Most of Chrome is written as Chromium in an open source repository. A little is added on top in closed-source for licensing and other reasons. Probably all of what I have done for the past few years has gone into the Chromium repository, or was on the server-side of my team (which is closed source). Specifically, I’m one of several team leads on the Chrome Auto Update Team, but we have a broad scope.
Ted: Can you describe your personal and professional journey since your U of R graduation?
JP: Sure. I applied to Google and several other companies while still at the U of R and I accepted an offer from Google starting at what we call “L3.” I moved out to Seattle to contribute there. I’ve been with the Chrome team for the past 11 or so years. I worked through promotion up to “L6” where I am now.
Ted: Have those promotions come with shifts in responsibility or technical focus?
JP: Yes, the way I think of it is: L3s are expected to be able to solve a technical problem, given an approximate solution. L4s are also expected to be able to define the solutions. L5s are also expected to be able to define the problem. L6s are generally shifting to more strategic roles. At each level, your scope increases and you do more cross-team collaboration.
Ted: Can you talk a little bit about your personal journey, outside of your professional life?
JP: As I mentioned, I had moved out to Seattle; that was a lot of fun. I love the culture there, but I miss Rochester’s four seasons, to be honest. Seattle has a beautiful summer and a dark rainy winter. For about a year, I shared an apartment with some of my UR classmates who also moved out West. After a while Edie Hanson - another one of my UR classmates - moved to Seattle as well. We got married, and after we had twins, we moved back East to be closer to both of our families. Since then I've been working remotely from Connecticut.
Ted: So, you were working remotely before it was “cool?”
JP: Yes, that’s what I like to tell people! It was interesting to see everybody go through the experience of transitioning from an in-person office-centric culture to a decentralized, more self-managed, remote working setup. I had just done that three years prior and experienced some of the challenges myself. I think it was kind of fortunate in that I had already had time to figure out things that worked and things that didn’t. I was able to share and help a little bit.
Ted: What sort of mathematics/algorithmic tools do you use in your day-to-day activities?
JP: I’ve discovered that, at least for the problems that we’re trying to solve, simpler is much, much better. The solutions to our problems lean on very common data structures and algorithms. Software engineering and design patterns - how you scale these projects and keep them manageable - also play a big role. We have code bases that have been contributed to by hundreds or thousands of people over decades. It is a lot different than a self-contained homework assignment that I wrote by myself. I have had to learn a lot about working with software and teams at scale.
It’s surprising how often random knowledge from my Computer Science education comes into play. Taking an example from just this past week - one day I am working on a mini SAT solver for a rules engine, the next day I’m looking at crash dumps and reading assembly, trying to understand how the instruction pointer became misaligned with the instructions and whether the stack is corrupt. So really, the full stack of computer science plays into the work, but maybe not in a way that most people imagine.
Ted: Has there been much shifting around with your technical experience? Have you had to adopt different roles?
JP: My team is very fortunate in that we are very full stack, and I’ve moved around a lot within that stack. On the server side, we develop highly scalable infrastructure that puts out billions of updates each day; we maintain a logical protocol that defines update behaviors; and on the client side, we do low-level C++ systems programming. In addition, there are things like analytics, with statistical and data science elements, used to understand how things are behaving at scale. The little bit of experience I had with SQL came in unexpectedly useful as I progressed through my career. When you’re trying to understand the highly complex interaction of all these systems, having some ability to break that down, analyze, and problem-solve by digging into large volumes of just absolutely bonkers data is quite useful. On top of that, we get some wild traffic from people exploring or tinkering with the system, which can violate assumptions about how the client is working.
Ted: Those can be both legitimate and adversarial traffic, correct?
JP: Yes. There’s definitely a lot of adversarial traffic. There’s a lot of attention industry-wide on supply chain attacks in particular. We have put a lot of thought and effort into understanding and answering those threat models.
Ted: Are there other life skills that you’ve had to learn along the way?
JP: Yes. I’ve learned a lot of skills from my peers, both technical and “soft.” I would say that the biggest area right now where I feel like I have so much to understand is just overall statistics - not necessarily the big data machine learning stuff - just good old fashioned statistics. For example, when a new version of Chrome is released, you’ll notice that there are actually two versions. During the early stages of the rollout, some users get the new version and others get a control version that behaves identically to the older version. Because the act of updating can alter user behavior, giving a “placebo” update is necessary to get valid statistical comparisons and understand if the new version is crashing more or less. But you can’t just compare those populations directly, because there are also groups of users to whom we tried to serve the update, but it didn’t take for some reason. If you don’t count those into your partitions of the data, you wind up with a bunch of bias between the groups. Structuring experiments, hypothesis testing, and things like that are what I would like to understand better.
Ted: Can you describe the team-level interactions at Google and the necessary soft skills that go into them?
JP: The projects at Google are large enough that you can’t do everything yourself. Understanding the different skills on your team, the different communication styles, the habits, the strengths or weaknesses of people, and how to leverage them has been something that I’ve definitely developed. Group work in college is all at a scale where, if they really had to, a single contributor could probably do it themself and carry the team. That just does not scale to the level of problems that we’re trying to work with.
Ted: Does your job involve interaction with organizations outside of Google?
JP: Yes, Chromium is an open source project. A lot of browsers are based on it and are very interested in reusing or improving parts of it. On my team we’ve talked with Microsoft, Brave, and many other folks that are in the browser space. We’ve talked quite a bit with the Electronic Frontier Foundation around topics related to identity, security and signing, and so forth. Especially when it comes to security, everybody I’ve spoken to wants to do what’s right for the user.
Ted: You often conduct the legendary technical code interviews at Google. Can you describe the mindset or the culture of someone conducting that type of interview?
JP: Speaking only for myself, when I conduct the interviews, I usually start by telling the candidate that we’re going to be focused on coding, data structures, algorithms, and problem solving related to them. I’m not here to trick you or lead you into any kind of trap or anything. We’ve got an interesting coding problem. I’m going to pose it. We’re going to discuss it. We’re going to work through it together. I hope it’s going to be a fun and collaborative experience. Throughout the question, I’m looking to assess how they problem-solve. How do they break this down into pieces that are manageable? How do they implement those pieces? Can they translate their ideas that they have in their mind into code? Can they explain their ideas? Can they analyze the code that they wrote? The interview problems that I give are similar to homework assignments. Obviously, the pressure is much higher - you’re timed, and being observed. It’s a pretty stressful environment, but I try to de-stress it as much as possible. I understand that the tools aren’t perfect: I’m not going to be picking on syntax or style or anything like that. I’m really just focused on, can you show me how you solve this problem? If you get stuck, I’m going to help you and try to guide you and give you ladders to climb. Often it goes well, sometimes it doesn’t, but no matter what, it’s a learning experience.
Ted: Looking back, what was your most exciting moment at Google?
JP: Yes. There are a few I don’t really want to talk about. There have been some highs and lows for sure. The lows are a bit easier to remember just because the scale of what we do is so immense. If we do ship a bug, it can seriously affect how people use their computers. I have had a couple humbling experiences related to that idea. The flip side is that it’s really an amazing opportunity when you think about the kind of stuff that we do, and the reach that it has. Once you boil it down to a statistic, it’s boring, right? Imagine that you improve this thing by 5%. Your metric, some little line, jumped up 5%. But that’s multiplied by a billion devices. And it adds up: maybe your 5% improvement in efficiency just saved humanity the cost of a school. Or you also think - how many times have you been at an airport and the gate agent is saying “I’m sorry, the computer is so slow” with a line of people standing there. Everyone’s thinking: “It’s 2023. How can this still be a problem? Haven’t they figured this out?” Maybe that 5% improvement is the difference between somebody standing at the gate or getting home. Or maybe somebody is trying to access medical information urgently in a hospital, and the 5% makes a difference. It’s an amazing and a scary opportunity - the sort of scale that Google and Chrome are used at.
Ted: Can you describe your greatest technical challenge that you’ve had so far? Does anything stand out?
JP: There’s one thing in particular, and it’s not actually that exciting, and some of your readers are going to laugh at me and say: “Oh, why didn’t you think of that?” But, I don’t know, I really enjoyed this one.
We just had a very bizarre crash and were trying to figure out what was going on. Nobody was really familiar with the crashing code. It was written over a decade ago; it’s got all this conditionally compiled in-line assembly and C, and it’s sort of a third-party library that we pulled in from this other thing, which is no longer maintained. So good luck, right? It’s total wilderness as far as code is concerned, but it crashes here - sometimes. Why does it do that, and why doesn’t it do it every time? I spent three days just trying to narrow it down and hack at it. In the end it turned out the assembly made bad assumptions that break only when compiled as position-independent code. It was a two-character fix after three days of investigation. But it felt so good to go from “what is this and how can this even happen?” to understanding exactly what was going on. That Eureka moment - or I should say Eureka process - of going from total chaos back into order and predictability was a memorable and fun experience.
Ted: What advice would you have for undergraduate computer science majors currently at UR?
JP: The biggest advice I would have is to find a way to use computer science in a way that you are passionate about outside of class. If you find something that you love and you’re having fun doing, and it’s building your skills, that’s going to come through on a resume. That’s going to come through in an interview. Classwork is important. You want to take advantage of all the opportunities to learn. But going a little bit outside the program to explore and to do something on your own is a really valuable skill to develop.
Ted: Can you describe your most heartwarming event at Google?
JP: A few years ago my manager of many years retired and as in most places there’s a party. The thing that warms my heart is that even though this person has retired and they’ve left the team, we’re all still in touch. If we ship a bug, we get a mail from him saying: “hey, you know, this isn’t working.” To be clear, he’s not saying things like: “What did you do? You incompetent fools!” It’s that there’s still this connection and there’s this trust. For me, the most heartwarming thing is that this network of relationships and trust, at least on my team at Google, is very strong. It’s exciting to see people bringing their full self to work and interacting with people in that way.

Abigail Burton
Abigail Burton joined the Department of Computer Science in September 2022 in the role of Department Coordinator. She has lived in Rochester and the surrounding area for her whole life, and worked in various retail and restaurant positions prior to joining the University. Abigail graduated from Greece Odyssey Academy with honors in June of 2021.

Jian Kang
Jian Kang joins the Department of Computer Science as an assistant professor. Jian’s research aims to model and learn our inter-connected world in a fair and reliable way. His interests include data mining and machine learning, trustworthy artificial intelligence, and computational social science. He has been recognized as a Rising Star in Data Science by The University of Chicago based on his research contributions to the field. He earned his PhD in Computer Science from the University of Illinois Urbana-Champaign in 2023.

Jiaming Liang
Jiaming Liang joins the Department of Computer Science and the Goergen Institute for Data Science as an assistant professor. His primary research goal is to design, analyze, and implement fast algorithms for solving a general class of problems in data science. His research interests broadly include topics in optimization and sampling algorithms. He obtained his PhD in Operations Research from Georgia Tech and was a postdoctoral associate at Yale University prior to joining the University of Rochester.

Monika Polak
Monika Polak joins the Department of Computer Science as an assistant professor (Instruction). Her expertise includes cryptography, quantum-resistant cryptography, algebraic graph theory, extremal graph theory, pseudorandomness, and coding theory. She received her PhD from Maria Curie-Skłodowska University.

Yukang Yan
Yukang Yan joins the Department of Computer Science as an assistant professor. His research focuses on human-computer interaction and mixed reality. Yukang received his PhD from Tsinghua University and was a postdoctoral fellow at Carnegie Mellon University before joining the University of Rochester. Yukang received two Best Paper Honorable Mention Awards at ACM CHI and one Best Paper Nominee Award at IEEE VR.

Featured Graduate Alumnus: Mayur Thakur
By Lane A. Hemaspaandra

Mayur Thakur received his B.Tech. from IIT Delhi in 1999 and his PhD from URCS in 2004. As a grad student, he spent his summers interning at Los Alamos National Laboratory, Microsoft Research, and even a startup. After a stint as a tenure-track CS faculty member, he worked on Google Search, was a Managing Director and global head of surveillance engineering at Goldman Sachs, and was the Chief Data Officer for the healthcare data technology company H1. He is currently a Managing Director at Bank of America. Mayur has published in a wide range of areas, including complexity theory, cryptography, data mining, discrete dynamical systems, graphs and networks, quantum computing, and recommender systems.
Mayur was interviewed for Multicast by URCS professor Lane A. Hemaspaandra, who was Mayur’s PhD advisor.

Lane: It is a real treat to be interviewing you, Mayur; and let us get right to it! You had engineering leadership roles at Google, Goldman Sachs, and H1, which differ a lot from each other. Can you tell us a bit about your experiences, and how you were flexible enough to be successful in companies with such divergent focuses?
Mayur: It’s a pleasure to be interviewed by you, Lane. Hopefully, the questions will be easier than the area exams!
On the surface my experiences (professor, Google, Goldman, H1) have all been very different. For example, at Google you could assume everyone (engineers, product managers, designers, and even patent lawyers) had a computer science background. At Goldman, I worked with lawyers, traders, software engineers, management consultants, and ex-FBI agents. At H1, I worked with serial entrepreneurs, pharma sales reps, doctors, and marketing folks.
In my experience, though, there are a lot of similarities as well. For example, modeling relationships among core entities is a data and algorithmic challenge in each domain (hence knowledge graphs and algorithmic approaches such as entity matching). A simple problem like “joining tables” at scale is a challenge in each of these. A single, integrated search engine is required in each domain. Model explainability is important in each domain. People underestimate the difficulty in collecting clean labeled data at scale. I could go on.
As to my approach to these different fields, there are a couple of things. First, I did not set out to find these divergent experiences, but when they came I was curious and open-minded. I tried to understand what the core problems were in each area by talking to folks before I took the job. Second, I had to be honest with myself about whether I could actually bring a unique perspective. I happen to have training that is pretty domain agnostic, so that helps. (BTW, this last part applies to many of your readers as well.) I have found that a dry topic like “pretrial discovery” becomes interesting and unique when you look at it with a different lens. For example, this is one of the only domains I know where close-to-100% recall (in the precision/recall sense) is not just desirable, but it’s necessary. Finally, I really enjoy learning from, explaining to, and working with people who are from different disciplines: I have had to explain distributed computing to lawyers and regulators, and it was both fun and challenging.
Lane: You’ve seen a national lab, been a university faculty member, and worked for three companies. Can you give our readers, who may have to choose between those domains in their own careers, a sense of the difference in flavor between those three types of employment? Which setting did you most enjoy?
Mayur: I have enjoyed each of these settings and found them challenging and enriching. When I was in grad school, I was not sure if I would fit in any of these settings, but luckily I was wrong then and got a few lucky breaks after grad school. So here is my perspective on each of these settings:
University professor: A big advantage is academic freedom (maybe too much freedom if you like more structure). You have no boss, so you really have to be entrepreneurial. But you have very few true peers as well (even the best PhD students are not your peers). Thus a university job can be isolating. Getting funding for grad students is the one thing I did not like and I cannot say I understood very well. On the bright side, there are breaks: semester ends, paper deadlines, etc. This means you start fresh frequently. And in the long run, this helps keep you fresh.
Research lab: There are very few pure computer-science research labs now. But the true advantage of being in a research lab is focus: much of your time is expected to be spent in doing research and maybe putting your research into a product (as opposed to teaching, writing grant applications, etc.). So if you would love to spend all your time thinking about problems, go to research labs. You will have a boss who will tell you to focus on area X and as you become more senior, you may have management responsibilities. If you want to get your ideas into products, you might be frustrated if the product and research teams don’t get along.
Industry: The true advantage of being in a high-performing industrial organization is that (a) you will get to experience true teamwork and (b) you will have to push yourself constantly and that makes you better. While projects are never-ending, you get the satisfaction of getting products in front of users and/or making clients happy. You will be working with a diverse set of people and learning from them. Most people will treat you as a peer. Big disadvantages are: too many meetings and constant context-switching, typically more politics (people issues) and less autonomy (e.g., you may be put into a project you don’t like). You get more autonomy as you get more senior, but business demands mean that you might not be able to devote large chunks of time to projects that you want to.
So there are pros and cons to each. If you already know which of the above-mentioned features suit your personality and which you value more, then the choice is easier. I didn’t know this in grad school. For example, I did not realize how much I value being part of a team and having peers to bounce ideas off of. So I am completely fine trading off a little bit of autonomy for being part of a larger team effort.
Lane: In your work, you built a 250-person team, did algorithms and systems work leading to many patents, and led the building of a variety of platforms. Could you pick one or two highlights from among those contributions, and describe the core of what was accomplished and how you and your teams brought it together?
Mayur: Let me talk about the work at Goldman Sachs. Finance is a highly regulated industry and surveillances are a key control mechanism to detect and deter financial crimes. My team and I were responsible for building, from scratch, a surveillance system that could detect and prevent financial crimes such as money laundering, insider trading, market manipulation, etc.
Let me highlight three key challenges we faced:
A first challenge was how to build a team that would both understand the nuances of finance well and would be able to build highly scalable distributed software that ran complex models. We decided to focus on hiring high-quality software engineers and computer scientists even if they didn’t have a finance background.
Another challenge was the sheer size of the problem: there are hundreds of types of financial products and many different countries in which they can be traded; each country has its own laws, regulations, and languages; and there are the issues of structured and unstructured data and the sheer volume of data arriving each day (billions of data points). At a high level we solved this by picking a few use cases that had the highest complexity (size, product complexity, diversity of datasets, regulatory sensitivity, etc.) and then building our surveillance system to solve those first. The subsequent use cases were faster to solve since we had done the heavy lifting upfront.
Finally, a unique challenge in building this platform was the diversity of users and stakeholders of this platform (engineers, product managers, compliance officers, regulators, etc.). This was the hardest challenge; and there is no magic bullet in these scenarios: we went person by person, group by group, use case by use case.
For the more technically inclined, the platform contained one of the largest datasets in finance (measured in petabytes), which was constantly updated, and ran all the surveillances from simple if-then logic to complex machine-learned models. One of the consequences of having all this data was that we could use it for use cases that we had not thought of originally. In fact, one of the tools we created came to be called the “Google of Wall Street” in the press.
Lane: It has been a while, but do you have any (no pressure!) fond memories to share of your time at URCS?
Mayur: I have a lot of them; I will mention two.
I was on the admissions committee along with faculty members and other grad students. Even as a first-year grad student, I had a say in which other grad students were admitted. That was awesome. As a bonus, I learned to quickly read resumes, which has saved me hundreds of hours in my career.
In my first year, George Ferguson taught the CS 400 course and I remember that the final project was a multiplayer poker game, where the players were coded up by us grad students. On the final day of class, there was a poker party, where we had our poker players play against each other and we projected the game onto a big screen in the computer lab. That was very cool for those days. (BTW, I was recently telling someone that in those days people didn’t say things like “I built an AI for poker,” just like you don’t say things like “I used math to build a bridge.” But things are different now.)
Lane: Work-life balance is much discussed currently. What do you enjoy spending time on outside of work?
Mayur: Anu (my wife) and I have two sons: Darsh (11 years) and Avi (9 years). Outside of work, I try to spend as much time with them as possible. All four of us like to travel to new places; cities and beaches seem to be our sweet spot. We are foodies. I read---mostly nonfiction, though these days I am into suspense novels as well. And I try to play golf, which is something I had picked up while I was at Rochester.
Lane: Thank you so very much for keeping up your contacts with URCS over the years---from visiting us to recruit employees, to giving seminar talks, to serving on panels; we deeply appreciate that. And, as a final question, do you have any career advice for current URCS students?
Mayur: A degree from URCS will give you depth and it’s generally good for your career to specialize. But a solid computer-science education is like a Swiss Army knife; you can use it in many different settings. And many of the things you can do with it are yet to be invented. So keep an open mind and ask a lot of questions.
Lane: Wonderful! Warmest thanks, and all the very best!

Featured Article: The Era of Large Language Models
By Hangfeng He

Introduction:
With the advent of ChatGPT, a cutting-edge model unveiled by OpenAI in November 2022, discussions in the natural language processing (NLP) community have been dominated by the rise of Large Language Models (LLMs). ChatGPT has not just transformed the dynamics within the field of AI; it has seamlessly integrated itself into our daily lives. From refining pieces of writing to providing answers to a myriad of questions, its applications are profound. This article will delve into the genesis of LLMs, highlighting the great opportunities they present, as well as the challenges and concerns they introduce.
The origin of LLMs:
The rudimentary concept of a language model has been in existence for several decades. However, the evolution of LLMs is a relatively recent phenomenon. The cornerstone architecture behind LLMs is the Transformer, introduced by Google Brain in 2017. The Transformer leaned heavily on the parallel multi-head attention mechanism. In terms of training efficiency it outpaced recurrent neural networks (RNNs), the dominant architecture at that time. Therefore, it quickly became the foundation for successive LLMs.
OpenAI’s Generative Pre-trained Transformer (later called GPT-1) marked the debut of Transformer-based LLMs. GPT-1 introduced a novel fine-tuning technique to apply pre-trained language models to various tasks. In the same year, Google introduced BERT (Bidirectional Encoder Representations from Transformers), which used a similar approach. The breakthrough was that both models moved away from building task-specific architectures from the ground up. Instead, the same pre-trained structure is fine-tuned for multiple tasks, drastically reducing training time.
Post-BERT, the research community avidly explored BERT’s variants and their potential. OpenAI continued to advance their GPT series, launching GPT-2 in 2019. Despite its sizable increment in parameters and training dataset, it did not eclipse BERT in key NLP benchmarks. Yet, OpenAI persisted, releasing GPT-3 the next year. This behemoth, with a staggering 175 billion parameters, heralded the era of zero-shot and few-shot learning in NLP. Although still lagging behind fine-tuned BERT in certain tasks, GPT-3 eliminated the requirement of training or fine-tuning on downstream tasks.
Riding on the wave of success and the incorporation of high-quality human feedback, OpenAI released ChatGPT using the GPT-3.5 model. Evidently, ChatGPT’s prowess exceeded BERT’s, ushering NLP into an epoch where pre-trained LLMs took center stage, guided only by thoughtfully-crafted prompts. OpenAI’s next innovation was a multimodal iteration, GPT-4, which merged text and visual inputs to generate text-based outputs. It’s imperative to note that ChatGPT’s triumph was a confluence of scaling both the model and the training data (as seen in GPT-1, GPT-2, and GPT-3) and harvesting rich, real-world human feedback (in the vein of GPT-3.5 and GPT-4).
In sum, LLMs have catalyzed a paradigm shift in NLP’s landscape. We’ve transitioned from crafting and training dedicated models on thousands of annotated examples to employing a singular pre-trained LLM across diverse tasks without necessitating any task-specific training or annotations.
[Figure: The shift of the NLP paradigm]

Opportunities:
The remarkable accomplishments of LLMs bring a multitude of new opportunities. In this section, we spotlight three avenues ripe with potential within the realm of LLMs.
Tool-Augmented LLMs: Tools emerge as instrumental in magnifying LLM capabilities. Echoing this trend, platforms like ChatGPT have rolled out support for plugins, morphing into a novel app store of sorts. Some plugins harness LLMs for practical applications, like flight searches, while others aim to hone specific LLM abilities, such as arithmetic calculations. We envision ample room for improvement in the synergy between tools and LLMs.
Multimodal Learning: LLMs have underscored the potential of unsupervised pre-training of foundation models across diverse modalities, including images and audio. However, representing equivalent information in such modalities tends to be more bit-intensive than in text, limiting the amount of data available for pre-training within the same computational budget. Given the complexities associated with multi-modal pre-training, LLMs might hold an edge over models specifically tailored for other modalities. Consequently, a promising direction is how to better leverage LLMs and specialized models in other modalities for multimodal tasks. While there have been preliminary ventures into this territory, further exploration is needed, especially considering the unique data attributes of each modality to craft more robust hybrid systems.
LLMs for Science: LLMs can also be applied to other disciplines. For example, lawyers can use LLMs to automate legal document writing. Administrative staff would find LLMs helpful in sifting through voluminous documents, and educators could utilize them as instructional aids. Moreover, given that LLMs have gleaned vast swaths of online information, they possess a knowledge reservoir broader than any individual. They could be seen as collaborators, working alongside humans, to augment and expedite scientific research. A crucial endeavor in this direction involves harmonizing domain-specific human expertise with LLM capabilities. This approach demands innovative interaction methodologies between humans and LLMs.
Concerns:
While LLMs boast significant advancements, they bring along a slew of concerns. This section discusses some of the most pressing concerns.
1. Social and Ethical Concerns:
Privacy Issues: One major concern about LLMs from OpenAI is privacy. Since they are not open-source, access is primarily through their APIs. Although OpenAI asserts that users retain data control when using APIs, apprehensions persist, especially since users must share data with OpenAI to harness the LLMs. There is also the risk of LLMs inadvertently divulging sensitive data or producing content that infringes on copyrights. These concerns emphasize the necessity for strategies and regulations to safeguard privacy and copyright.
Bias Issues: Like other machine learning models, LLMs can propagate and even amplify biases in their training data. Past solutions have tried to utilize alignment or refined prompts to align LLM outputs with human values. However, these measures often fall short of human expectations. Addressing bias might require more proactive interventions during data collection and unsupervised pre-training phases.
2. Superintelligence Concerns:
LLMs spur concerns about human roles becoming redundant. But the primary intent behind LLMs is to augment human capabilities, not to supplant them. In addition, some researchers are worried about LLMs potentially surpassing human intelligence in the foreseeable future. This scenario demands approaches to guide and regulate AI systems, especially if they evolve beyond our cognitive capabilities. Recognizing the gravity of these concerns, OpenAI has initiated a specialized superalignment team, dedicating a significant portion of computational resources to address the challenges linked to super-intelligent systems.
Conclusion:
Epitomized by models like ChatGPT, LLMs have profoundly reshaped the landscape of the field of NLP, the broader AI community and even people’s everyday routines. The ascendancy of LLMs is not solely attributed to breakthroughs like the Transformer architecture. It is also a testament to OpenAI’s persistence in the evolution of the GPT series.
The advent of LLMs unlocks a plethora of opportunities, enabling tasks previously deemed impossible. However, as with any monumental progress, there comes a duty to navigate the emergent challenges with prudence and foresight.
Acknowledgement: The writing of this article was polished with the assistance of ChatGPT.

There are a few things I’d like to share with the URCS community about research and outreach activities in my group.
We are thankful for the generous support of the NSF CAREER Award, UR REU program (Computational Methods for Music, Media, and Minds), Thurgood Marshall College Fund, Kearns Center, Studio X, Dr. April Luehmann, Ms. Danielle Daniels, and all the teachers and students of the summer camps.

Award for Outstanding Teaching Assistant: Henry Welles
Entrepreneurship Award: Chukwubuikem (Chem) Chikweze
Excellence in Undergraduate Research: Adira Blumenthal, Sizhe Li, Yurong Liu, Dillanie Sumanthiran, Henry Welles, Yunlong Xu, Yufei Zhao
Most Valuable Programmer: Quan Luu and Tran Bao
Outstanding Senior: Erin Gibson and Peirong Hao
Donald M. Barnard Prize: Yufei Zhao
G. Harold Hook Prize: Erin Gibson
Charles L. Newton Prize: Henry Welles
Lisa Norwood Student Endowment Fund Prize: Justin Pimentel

Undergraduate and Graduate Highlights

2022
Zhengyuan Yang, PhD ’21, receives the ACM SIGMM Award for Outstanding PhD Thesis.
Boyang Wang ’25 places second in the Student Research Competition at PACT 2022.
Valerie Battista ’23 receives Hajim School Wells Award.
Vladimir Maksimovski ’22, Tran Bao ’23, and Loc Bui Dung Le ’22, coached by Daniel Štefankovič and Adam Purtee, advance to the North American Championship of the International Collegiate Programming Contest (ICPC) 2022.
Mandar Juvekar ’22 receives Honorable Mention in the CRA Outstanding Undergraduate Researcher competition.
Maged Michael, PhD ’97, shares the 2022 Edsger W. Dijkstra Prize in Distributed Computing.
Mohammed Zaki, PhD ’98, is named an ACM Fellow.
Sidhant Bendre ’23 and Neil Yeung ’23 take third place in the Forbes Entrepreneurial Competition.

2023
Jingyuan Chen ’25 and his advisor Jiebo Luo receive Best Student Paper award at IEEE ICDH 2023.
Hana Genana ’24 and Quynh Anh Pham ’24 receive Susan B. Anthony Legacy Awards.
Computer Science Undergraduate Council is recognized with Award for Excellence in Creative Co-Sponsorship.
Qingjian Shi ’26 wins People’s Choice Award in Art of Science Competition.
Sizhe Li ’23 is a finalist in the CRA Outstanding Undergraduate Researcher competition. Adira Blumenthal ’24 and Draco Xu ’23 receive Honorable Mention.
Ashley Wilson ’24 receives Block Prize.
Qiyuan Feng ’24 receives Robert L. Wells Prize.
Hanjia Lyu, PhD ’28, receives a Donald M. and Janet C. Barnard Fellowship.
Saiful Islam, PhD ’27, receives a Google Research Fellowship.
Raiyan Baten, PhD ’22, wins Association for the Advancement of Affective Computing (AAAC) Outstanding PhD Dissertation Award.
Steven Oufan Hai ’24 and Alexander Martin ’24 receive Data Set Grants from River Campus Libraries.

Faculty and Staff Highlights

2022
Jiebo Luo is elected a member of Academia Europaea.
Ehsan Hoque is named a distinguished member of the ACM.
Jiebo Luo is named the Albert Arendt Hopeman Professor of Engineering.
Sreepathi Pai receives NSF CAREER award.
Michael Scott is elected a Fellow of the AAAS.
Yuhao Zhu and UR alumni Sifan Ye (BS ’20) and Ting Wu (MS ’20) win the Kostas Pantazos Memorial Award for Outstanding Paper in Visualization and Data Analysis.
Test of Time Award at HPCA 2022 goes to PhD alumni Greg Semeraro, Grigorios Magklis, and Rajeev Balasubramonian, with advisors David Albonesi, Sandhya Dwarkadas, and Michael Scott.

2023
Jiebo Luo is elected a Fellow of the US National Academy of Inventors (NAI).
Kaave Hosseini and co-authors receive Best Paper Award at ICALP.
George Ferguson receives Edward Peck Curtis Award for Excellence in Undergraduate Teaching.
Sara Klinkbeil receives Edmund A. Hajim Outstanding Staff Award.
Zhen Bai receives NSF CAREER Award.
Department faculty secure over $5M in new external grants.

Wentao Cai, “Nonblocking Data Structures on Persistent Memory.” Software Engineer, Google. Faculty Advisor: Michael Scott.
Sayak Chakraborti, “Opportunistic Resource Management: Balancing Quality-of-Service and Resource Utilization in Datacenters.” Research Engineer, Meta. Faculty Advisor: Sandhya Dwarkadas.
Komail Dharsee, “Critical Hardware Towards Software Security Enforcement.” Draper Labs. Faculty Advisor: John Criswell.
Mohammad Hossein Faghihi Sereshgi, “Enhancing Secure Computations Via Input Certification.” Faculty Advisor: Muthuramakrishnan Venkitasubramaniam.
Yiming Gan, “Reliable Computing Systems for Autonomous Machines.” Assistant Professor, Chinese Academy of Sciences. Faculty Advisor: Yuhao Zhu.
Lane Lawley, “Episodic Logic Schema Learning.” Post-Doc, Georgia Tech. Faculty Advisor: Len Schubert.
Zhiheng Li, “Discover and Mitigate Biases in Discriminative and Generative Image Models.” Applied Scientist, AWS. Faculty Advisor: Chenliang Xu.
Fangzhou Liu, “Capturing the Statistical Representation of Data Locality in Parallel Programs using Reuse Interval.” Software Engineer, Qualcomm. Faculty Advisor: Chen Ding.
Georgiy Platonov, “On Cognitively Informed Models for Spatial Relations.” ML Software Engineer, Google Cloud. Faculty Advisor: Len Schubert.
Zhuojia Shen, “Enforcing Low-Cost Security for ARM Virtualization.” Software Engineer, Apple. Faculty Advisor: John Criswell.
Jing Shi, “Vision Based Language to Action Mapping.” Research Scientist, Adobe. Faculty Advisor: Chenliang Xu.
Wesley Smith, “Data Movement Complexity.” Startup. Faculty Advisor: Chen Ding.
Haosen Wen, “Memory Management and Persistency of Multithreaded Applications.” Sr. Software Engineer, Huawei. Faculty Advisor: Michael Scott.
Chen Yu, “Improving Natural Language Processing: From Adding Graph Structure to Distributed Learning.” Machine Learning Engineer, TikTok. Faculty Advisor: Dan Gildea.
Feng Yu, “Systematic Optimizations for Efficient Mobile Visual Computing.” Post-Doc, University of Rochester. Faculty Advisor: Yuhao Zhu.
Songyang Zhang, “Temporal Representation Learning for Video-Language Understanding and Generation.” Applied Scientist, AWS. Faculty Advisor: Jiebo Luo.
Haitian Zheng, “Visual Content Manipulation with Correspondence Learning and Representation Learning.” Research Scientist, Adobe Research. Faculty Advisor: Jiebo Luo.
Jie Zhou, “Retrofit Memory Safety to Low-level Software Incrementally.” Post-Doc, University of Rochester. Faculty Advisor: John Criswell.
Wei Zhu, “Advances in Learning to Generalize to Out-Of-Distribution Data.” Applied Scientist, AWS. Faculty Advisor: Jiebo Luo.

We had five honors students in the Undergraduate Class of 2023.
Aayush Poudel: Compressing Ray Trajectory Mapping using Bezier Curves (Honors)
Henry Lin: Tracking Words (Honors)
Yurong Liu: Sampling Over Union of Joins (Highest Honors)
Draco Xu: Network Construction on Historical and Real-Time Data (Highest Honors)
Enting Zhou: Unsupervised Arousal Valence Estimation from Speech and Corresponding Discrete Emotion (High Honors)

Where is the Class of 2023?
Job Placement
Microsoft (5), Amazon (4), Bank of America (3), M&T Bank (3), Audible (2), InterSystems (2), 3M, Capital One, JPMorgan Chase & Co., T. Rowe Price, Palantir Technologies, Raytheon Technologies, American Express, LinkedIn, L3Harris Technologies, Tesla, Slack, Morgan Stanley, HubSpot, Lockheed Martin, Google, Hansa, VISTECH, Citizens Bank, Northrop Grumman, IBM, Stanford University, NVIDIA, Millennium, Epic Systems Inc., PayPal
Master’s Program
Columbia University (2), Northwestern University (2), Carnegie Mellon University (2), Cornell University (2), University of Rochester, University of California at Berkeley, Johns Hopkins University, University of Cambridge, Georgia Institute of Technology, University of Pennsylvania, University of Virginia, University of Washington
PhD Program
Cornell University, New York University, Rensselaer Polytechnic Institute, Harvard University, University of Rochester, University of Chicago
E5
Kevin Tusiime, Yueyue (Nina) Long, Bohan Cui, Chukwubuikem (Chem) Chikweze
Take5
Qianqian Wei, Adira Blumenthal

Job Postings:
Theresa Kettleberger ’19 shares that 3Play Media is seeking a Senior Platform/DevOps Engineer.
NVIDIA is hiring! If you are interested in working on CUDA C++ (front end, middle end, backend), AI Compilers (XLA, MLIR, JAX, Internal Projects), Compiler Research, Compiler Testing & Verification, or Graphics Compilers, feel free to reach out to Justin Fargnoli ’22 or apply here.

A mini UR-in-Boston reunion took place last December when URCS PhD alums Polly Pook, Mark Crovella, Marc Light, and Leonidas Kontothanassis got together and toured Boston University’s new Center for Computing and Data Sciences. Photo from Mark Crovella, PhD ’94

“U of R people at the ACL conference” (July 2023). Photo from Diane J. Litman, PhD ’86

Prof. Evangelos Markatos (on the right) receives the “Stelios Pichorides” award from the rector of the University, Prof. George Kontakis (on the left). Photo from Prof. Evangelos Markatos, PhD ’93

Zhuojia Shen, PhD ’23, and Xiaowan Dong, PhD ’19, were married in October 2022 in Maui. Photo from Zhuojia Shen

Seattle Dinner with Alumni (July 14, 2023). Front left 1: Yongkang Zhu, Software Engineer at Meta, PhD ’05. Front left 2: Chengliang Zhang, Senior Engineering Manager at Google, PhD ’07. Front left 3: Chen Ding, Professor of Computer Science, PhD (Rice) ’00. Front left 4: Pin Lu, Senior Engineering Manager at Meta, MS ’07. Front left 5: Xiaoya Xiang, Software Engineer at Meta, PhD ’13. Back left 3: Xiong Zhang, Research Scientist at Meta, PhD ’20. Back left 4: Shupeng Gui, Research Scientist at Meta, PhD ’21. Back left 5: Wentao Cai, Software Engineer at Google, PhD ’22. Back left 6: Shengbo Ge, Software Engineer at Google, BSCS ’18. Photo from Professor Chen Ding

Alumni Updates:
Ricardo Bianchini, PhD ’95 I am no longer in Microsoft Research. I’m now a Technical Fellow and Corporate VP at Microsoft Azure.
David Coombs, PhD ’92 I left the lab at Maryland to join 2 Circle, where I support DARPA as a technical SETA contractor.
Myroslava Dzikovska, PhD ’04 I have a new employer, Blis, a company specializing in programmatic advertising.
Girts Folkmanis, MS ’07 Now employed by Global Health Labs.
Neal Gafter, PhD ’90 I began working for a startup, Relational.AI, in February 2023.
Boris Goldowsky, PhD ’95 I am now at the Concord Consortium, working on educational technology.
Xiaoming Gu, PhD ’14 I just changed my job to AMD.
Ashwin Lall, PhD ’08 Promoted to Professor of Computer Science at Denison University.
Kate Lang, MS ’10 I am now a QE Manager at Sony Interactive Entertainment.
Grigorios Magklis, PhD ’05 New employer: Esperanto Technologies.
Evangelos Markatos, PhD ’93 Professor Evangelos Markatos received the “Stelios Pichorides” award for 2023. Every year the University of Crete gives the award to one or two professors for their Excellence in Academic Teaching.
Ross Messing, PhD ’12 I now work at Squarespace.
April Mitchell, MS ’02 I’m currently: VP of Engineering & Operations, Dasera (data security startup).
Zhuojia Shen, PhD ’23 and Xiaowan Dong, PhD ’19 This is Zhuojia Shen, a recent URCS PhD graduate. I’m writing to inform that Xiaowan Dong and I got married last October in the beautiful Maui island in Hawaii.
Neil Smithline, PhD ’89 I still work for Poloniex but my title is now “VP of Infrastructure, GM West.”
Phyo Thiha, PhD ’19 I have been working for Amazon as a Data Engineer III since last November.
Tiancheng Xu, MS ’20 I am currently a PhD student at Rice CS.
Robert W. Wisniewski, PhD ’96 I have switched jobs. I am now at Samsung: Senior Vice President and Chief Architect of HPC, Head of Samsung’s SAIT Systems Architecture Lab.
Weilie Yi, PhD ’06 My current employer is Hayden AI.

Alumni Updates (continued)
Liudvikas Bukys, MS ’86 Liudvikas Bukys is now working at a startup, Reframe Technologies, building a product to transform how we get our work done with computers. He splits his time between beautiful Keuka Lake and warm sunny Clearwater, Florida.
Jim Heliotis, PhD ’84 I am now retired from RIT with the “Professor Emeritus” title. I hope to still stay somewhat active in the CS education area. I ran a workshop at a regional conference in April and I hope to do the same thing at a larger conference next winter. Other than that I’m just spending a bunch of time around the homestead doing projects that I literally put off for decades!
Kailash Joshi, MS ’17 I am delighted to share that last year (29th Aug) I embarked on a journey with my dream company, Microsoft! The journey began with the initial aspiration of “One day,” followed by meticulous planning, thorough preparation, a series of interviews, experiencing rejections, finally receiving an acceptance, going through the onboarding process, and culminating in the much-awaited “Day 1.” It has been quite an exhilarating ride!
Ronald P. Loui, PhD ’88 Probably teaching a seminar at Case in the Fall tentatively titled AI GOOD AND EVIL. Got the green light to write an article on proximal cause and machine learning based product liability for LAW OF AI 2nd ed. If you’re really curious, awkscripts.com/oldweb/loui.html is a recent summary of my 62 years (lots of photos, but no runaway js, so the browser can handle it). I’d rather be working on defense tech, but I dislike commercial airlines and I like living near Cleveland. I guess the big news is a new timeline for the Torah based on Amorite history and biannual shanah counting, but you’ll have to click on the link or read my Facebook friend posts to see it.
Jim Muller, PhD ’94 I’m dividing my time between three companies as co-founder or co-owner. Didero Games, launched this year, is a subscription rental club for physical Nintendo games. We’ve been running a similar club, the Hoefnagel Wooden Jigsaw Puzzle Club, since 2020. Both use AI-driven peer-to-peer shipping. I’m also co-owner of Artifact Puzzles, designing and manufacturing wooden jigsaw puzzles since 2009.
Danny Sabbah, PhD ’82 Along with co-authors, I have written a book which is now going through the publishing process and will be available in the fall (early Nov). It is available for pre-order on Amazon. The name is: The Heart of Innovation. I am in the process of also setting up a venture fund based on the principles in the book. We introduce the equivalent of behavioral thinking (much like Behavioral Economics) into the early evaluation of proposed innovations. This filters out cognitive biases and introduces a concept called “authentic demand” into the conversation around innovation. We develop a method for extracting and understanding “authentic demand.” The book is in 2 parts; One part is examples from our collective history of accidental innovations. The next is an introduction to the method for “deliberate innovation.” The preface is written by Arvind Krishna who is the current CEO of IBM.
Robert Schudy, PhD ’82 Much has happened these last few years: I’m now Emeritus from Boston University. I learned of my advisor Dana Ballard’s death. Dana did so much to help me, including providing detailed corrections for many drafts of my thesis. I remember his guidance well, and wonder if I will ever live up to what he prepared me for. I’ve written two books on online education, with my Boston University colleagues Anatoly Temkin and Dan Hillman. Our first book is titled “Best Practices for Administering Online Programs.” It’s the academic administration title in the Routledge Best Practices in Online Teaching and Learning series. When we finished that book Routledge asked us to write a book on teaching online, so we wrote “Winning Online Instruction: A Q&A for Higher Education Faculty.” The section titles of this book are questions that faculty frequently ask, and the text answers those questions. Both books have been well received. My wife Liz Watson and I spend our summers in Lincoln Mass, but we purchased a small condo on an intracoastal-connected lake in Hallandale Beach Florida, and we spend our winters there. We drive back and forth, and stopped to see my classmate Bryan Lyles in North Carolina on the way north this year. We have a Crealock 34 cruising sailboat, which we keep at the dock at our condo. About five years ago we joined the Gulfstream Sailing Club. I’m now on the Board of the Club and of the affiliated Gulfstream Sailing Foundation. Our main mission is teaching children how to sail, and we’ve taught thousands. I’m also co-captain of the ocean racing committee and dockmaster at our condo. We lead modest yacht races many Saturdays, and enjoy cruises in the Florida Keys. One of my genuinely enriching experiences is joining a Bahamian Anglican church in Hallandale, and singing in their choir. Bahamians are renowned amongst sailors as the friendliest people on earth, and it’s true. They’ve welcomed me warmly, even though I’m very different and usually the only pale person in the church. I want to really understand and feel what it’s like to be black in America. After six months I’m beginning to understand that it’s much more difficult than most people think. I also sing in the Episcopal Church in Lincoln, which is very different. Many Lincoln parishioners are wealthy, and few are poor. In Hallandale there is a free breakfast after church, prepared by parishioners in the parish hall, so that everyone has at least one good meal, and there are tables with donated food to help parishioners make ends meet. In Lincoln we provided everyone N95 masks, used technology to sing together safely, and didn’t miss a stride during the pandemic. This didn’t happen in Hallandale, and many people, including most of the children, no longer come to church. This is a huge loss for the kids, because a lot of what church is about is teaching kids about ethics, history, communications, and getting along with others.
Chunqiang Tang, PhD ’04 After leaving IBM Research in 2013, I joined Facebook, which has since changed its name to Meta Platforms. I have remained with the company and was promoted to Senior Director. During the past few years, I have been working in the broad area of cloud computing in Meta’s massive private cloud. Although my work at Meta is primarily centered on production systems, managing millions of servers and serving billions of users, I have continued to publish our production work as research papers in conferences such as SOSP, OSDI, ISCA, and ASPLOS. Notably, we have received multiple accolades for this work, including the ISCA ’23 Best Paper Award for “Contiguitas: The Pursuit of Physical Memory Contiguity in Datacenters,” the ASPLOS ’22 Best Paper Award for “TMO: Transparent Memory Offloading in Datacenters,” and selection as an IEEE Micro Top Pick 2023 for our ASPLOS ’22 paper “IOCost: Block IO Control for Containers in Datacenters.” Overall, I find great satisfaction in contributing to industry work that not only impacts billions of people but also advances the state of the art in research.
Mohammed J. Zaki, PhD ’98 Honored to be elected a Fellow of the American Association for the Advancement of Science “for distinguished contributions to the fields of data mining and machine learning, and for service to the academic community.”
Share your outcomes and updates with the department!
ugalumni@cs.rochester.edu, gradalumni@cs.rochester.edu
And connect with us in the URCS Alumni Group
https://www.linkedin.com/groups/12655649/

50th Anniversary Celebration!
Please mark your calendar:
The University of Rochester Department of Computer Science will celebrate the 50th anniversary of its founding on Friday, September 27, 2024. We invite all our alums, colleagues, and friends to join the celebration.
Hylan Building: 1974-1987
Computer Studies Building: 1987-2017
Wegmans Hall: 2017-Present
Alternate link for mailing list: https://forms.office.com/r/45sPAyLKxh

Hackathons Pushing Students’ Creativity and Their Global Involvement
Sidhant Bendre, Sara Klinkbeil

Sidhant Bendre ’23 won one of the Grand Prizes at Stanford TreeHacks 2023: “The Moonshot Prize,” awarded to “the craziest, most out-of-this-world project.” TreeHacks is one of the largest hackathons in the nation, attracting more than 1,700 hackers who fly in from all over the globe. In teams of four or fewer, they hack for 36 hours straight, each trying to build the future and create the next big thing. TreeHacks is a notoriously selective program, with a 7.5% acceptance rate.
“The project my team and I built allows people to control their drone by just giving it an objective to accomplish in plain English! Using LLMs, I created a tool that writes its own drone programs to perform a variety of complex tasks, including long-running and multi-modal tasks, without users needing to write a line of code themselves. For example, you could tell the drone to ‘find the bottle’ or ‘find the person in a red shirt,’ and it will take off, survey the room for the target, and, once found, fly to the target.”

The idea for this project came to Sid after reading the Toolformer paper, published by Meta AI Research just eight days before the hackathon, which demonstrated the ability of large language models like GPT-3 to learn how to use a particular tool or system of tools.
Link to post: https://devpost.com/software/droneformer
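To give a flavor of the approach Sid describes, here is a minimal sketch, not the Droneformer team’s actual code: a language model is asked to write a short drone program from a plain-English objective, and the generated program is then executed against a drone API. The djitellopy SDK, the OpenAI chat-completions call, and the model name are assumptions chosen purely for illustration.

    # Hypothetical sketch (not the Droneformer code): ask an LLM to write a
    # small drone program from a plain-English objective, then run it.
    from openai import OpenAI          # assumed LLM client
    from djitellopy import Tello       # consumer-drone SDK, used as an example

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You write Python for a connected Tello drone object named `drone`. "
        "Use only takeoff(), rotate_clockwise(deg), move_forward(cm), "
        "get_frame_read(), and land(). Return only code, no prose."
    )

    def run_objective(objective: str) -> None:
        """Turn a natural-language objective into drone code and execute it."""
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": objective},
            ],
        )
        program = reply.choices[0].message.content
        drone = Tello()
        drone.connect()
        # Executing model-generated code is risky; a real system would first
        # validate the program against a whitelist of allowed calls.
        exec(program, {"drone": drone})

    # Example: run_objective("Take off, look for the person in a red shirt, then land.")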
The Untapped Potential of Computing in Tackling Climate Change
“Hajim School researchers explore the potential for using computing to help promote eco-friendly lifestyles. Computer Science PhD student Adiba Proma and Associate Professor Ehsan Hoque authored an invited paper for NAE Perspectives along with Robert Wachter, the Holly Smith Distinguished Professor and Chair of the Department of Medicine at University of California, San Francisco.”

Patrick Chen ’25 Wins Best Student Paper Award at 2023 IEEE International Conference on Digital Health
“Patrick (Jingyuan) Chen, a rising CS junior working in Professor Jiebo Luo’s research group, has won the Best Student Paper Award after giving a presentation in person in Chicago last week at the IEEE International Conference on Digital Health (ICDH). He was the only undergraduate student in the invitation-only Student Research Competition. Yuan Yao, a first-year PhD student in Professor Luo’s research group, is the second author of the paper, with collaborators from URMC (Maiken Nedergaard) and Copenhagen University in Denmark.”

Jiebo Luo Selected as a Fellow of the National Academy of Inventors
“Professor Jiebo Luo, a leading expert in artificial intelligence, has been selected as a fellow of the National Academy of Inventors (NAI). The academy is recognizing Luo and the other inductees for ‘a prolific spirit of innovation in creating or facilitating outstanding inventions that have made a tangible impact on the quality of life, economic development, and welfare of society’.”
Alumnus Caleb Wohn ’22 Receives CSGrad4US Graduate Fellowship
“Caleb joined the ROC HCI lab as a sophomore in Fall 2019 and had been actively involved in multiple research projects including the SOPHIE Project, which is a virtual agent designed to prepare doctors for end-of-life conversations. He has also contributed to building a knowledge graph for climate change and a platform to nudge people to select eco-friendly products.”

Spring 2023 Data Set Grants
Steven Oufan Hai ’24 and Alexander Martin ’24 receive Data Set Grants from River Campus Libraries.

Qingjian Shi Wins People’s Choice Award in Annual Art of Science Competition
This year’s People’s Choice Award went to Computer Science student Qingjian Shi ’26 for “Robot’s Expression of Individuality.” Shi describes the work as a “retro-futuristic robot expressing itself and what it feels while contrasting mechanical and fluidity of nature in Monet style.”