2022 Year in Review
2022 has been a fairly interesting year! October 2022 marked exactly one year as a Research Software Engineer @MSR. It has been a period of intense learning and growth, and I've been fortunate to work with amazing, generous colleagues and learn from them. A technical highlight of the year has been getting to make some contributions to GitHub Copilot - a tool that has changed how developers write code today. I got to work on studying evaluation metrics for code generation models and on new metrics for evaluating the quality of generated code completions.
My goal for 2022 (see my 2021 year in review post) was to do more research writing, blog writing, and community work, learn new skills, and improve work-life balance.
Progress was made.
I am excited to see what the next year brings!
I got to write papers on some of the research I conducted over the last year, as well as on independent research. Thankfully, my current role as a Research Software Engineer has been beneficial to this goal: I contributed to 3 papers this year.
I wrote 16 blog posts this year. These included posts on web topics (e.g., rendering Jupyter notebooks on the web, rendering Blender models on the web), machine learning (implementing gradient explanations for TensorFlow BERT models, extractive summarization, research paper reviews, etc.), and general topics (e.g., trends in ML).
I started writing a book (learn more here). Something that has been on my mind for a while now is the growing importance of user experience as a driver of competitive advantage in the ML space. Tools and services like Hugging Face and OpenAI have democratized many ML tasks, making it possible for any team to address a wide set of tasks. However, weaving the raw technical capabilities of these tools into an experience that creates value for the user is a skill set that is not immediately obvious to teams. I am writing a book that explores this idea, and I hope to get it done by December 2023 (yeah .. that's ambitious, and I intend to be accepting of whatever progress I can make!).
I made some progress on reconnecting with the ML community: 6 talks, co-organized 1 conference workshop, and contributed 1 open source library.
- Conferences: I helped co-organize the Machine Learning Efficiency workshop at the 2022 Indaba conference. I didn't get to attend in person this year, but I'm hoping I'll get to do that next year. I also got to attend and present at the 2022 Google Developer ML Summit (this was such a great experience, thanks to the organisers!!).
- Talks: I also got to give a few talks this year: Trends in ML at the Chinese University of Hong Kong, Code Generation Models and Developer Productivity at Deakin University, ML on Android at the Google Developers Machine Learning Bootcamp, and Intro to ML at the University of Technology, Jamaica.
Peacasso is definitely the most exciting OSS project I worked on this year. Diffusion models have shown impressive performance on text-to-image generation and continue to get better - higher image quality, smaller model sizes, faster generation. All of this makes these models ready for consumer applications. However, getting the user experience right (e.g., how do we reduce trial and error for users and improve user efficiency?) is still a challenge. Peacasso is an attempt to address this challenge. Along this journey, I learned a lot about how diffusion models work, the usability challenges associated with them, and serving these models at scale.
Peacasso is both a test bed for experimenting with UI ideas and a research tool that others can build with. It achieves this by providing a user interface and a backend Python API. Learn more about Peacasso and its goals here. It is an ongoing project, and I hope to keep improving it in the coming year.
I also worked on a few other "things" that are not yet ready for prime time - mostly on tools for evaluating code generation models. I hope to get to them in the coming year.
This year, I spent some time getting familiar with VR (I learned to build basic VR experiences on the Meta Quest 2 headset and wrote about it here), revisited Android development with Jetpack Compose (building Android apps with Compose, running local ML models on Android, and writing about it here), learned to use Figma (e.g., the diagrams in the Peacasso paper draft are created with Figma), and spent time learning to write software that is more usable and more robust (tests, typing in Python, Python packaging, designing authentication for web APIs, etc.).
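As a small, hypothetical sketch of the kind of typed, testable Python practice mentioned above (the `Session` and `is_authenticated` names are invented for illustration and don't come from any of the projects discussed here):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical example: type hints make the contract explicit and
# easy to check with a tool like mypy, and easy to unit test.
@dataclass
class Session:
    user: str
    token: Optional[str] = None  # None means the session is unauthenticated

def is_authenticated(session: Session) -> bool:
    """Return True only when the session carries a non-empty token."""
    return bool(session.token)

print(is_authenticated(Session("alice", "abc123")))  # True
print(is_authenticated(Session("bob")))              # False
```

Small, explicitly typed units like this are straightforward to cover with tests, which was the main habit I was trying to build.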
There has also been an explosion in two aspects of ML in 2022 - LLMs (the GPT series, ChatGPT) and image generation models (Stable Diffusion et al.). Building with these models, both as part of my personal experimentation and official work, has also been a great learning experience.
There are many different ways to define work-life balance or measure improvements in it. For me, this year, I invested in spending quality time with my family (especially experiences with our son, who is 3.5 years old already! Time flies!). One way I measured progress (quality) is by ensuring I took real breaks where I truly did unplug (no phones, no side projects) from all types of work (something I did not do previously). I traveled more (3 international trips compared to 0 last year, plus interstate trips). There was also some progress in the fitness and health domain (I can now hold a 5-second front lever pose for the first time, and do a half-decent bridge). I am fortunate to be at a point in my career where I am able to do this. I hope to continue to do this in the coming year.
For 2023, I'd like to continue improving on the same things - research, writing, community contributions, learning new things, and personal health/fitness.
Cheers to everyone who made it to the end of the year. You did great, you are awesome, you are appreciated, you rock! Happy New Year!