Impressions from Machine Learning Conference (MLCon) – Munich, June 2023
I attended the Machine Learning Conference (MLCon) in Munich in June 2023, and here are my impressions.
The intended audience is regular programmers who do not have much contact with the AI/ML world and want a summary of the latest news.
2 Conference one-liner overview
The conference really aimed to give an “overview of the state of science, technology, and practices of ML at this point in time”. Topics were mostly presented at a “technically popular” level, without going into too much technical detail. Speakers were researchers, academics, philosophers/ethics-legal experts, and ML practitioners from software companies all over the world.
3 Most interesting topics – Personal choice
Most interesting topics (a subjective choice, based on my personal interests):
3.1 ChatGPT in production
ChatGPT in production. There was one person (only one out of ~120 attendees) who said that his company already has ChatGPT in production. He was from somewhere in Germany, and he briefly described the application. Roughly speaking, what they did is index all documents in the company and use ChatGPT as an expert system for asking for information from company documents or for searching them. He didn’t go into much detail, but to me it looks similar to approaches described in several recent articles. The big news is that ChatGPT is already making money in production.
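From the description, this sounds like the retrieval-plus-LLM pattern. The sketch below is only my guess at the general shape, not the actual system: rank company documents against a question with naive word overlap, then feed the best match into the chat prompt as context. The document names and the scoring are purely illustrative.

```python
# Hypothetical sketch of "ask questions about company documents":
# retrieve the most relevant document, then build an LLM prompt around it.
from collections import Counter

DOCS = {
    "vacation-policy.txt": "employees receive 30 vacation days per year",
    "expense-policy.txt": "travel expenses must be approved by a manager",
}

def score(question, text):
    """Naive relevance: count question words that appear in the document."""
    q_words = Counter(question.lower().split())
    return sum(q_words[w] for w in text.lower().split() if w in q_words)

def retrieve(question):
    """Return the name of the document best matching the question."""
    return max(DOCS, key=lambda name: score(question, DOCS[name]))

def build_prompt(question):
    """The prompt the LLM would receive: retrieved context plus the question."""
    doc = retrieve(question)
    return f"Answer using this document ({doc}):\n{DOCS[doc]}\n\nQ: {question}"

best = retrieve("how many vacation days do employees get")
```

A real system would use embeddings and a vector store instead of word overlap, but the flow (retrieve, then prompt) is the same.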
There was a lot of talk about the abilities and limitations of ChatGPT. A lot of criticism was directed at the fact that ChatGPT sometimes gives incorrect or partially incorrect answers. The system sometimes invents or claims facts that are not factually correct; the popular term on the internet is that ChatGPT “hallucinates”. That is very much true for the popular free version based on GPT-3.5. There are claims that the latest version, GPT-4 (from March 2023), hallucinates less and is believed to provide more truthful answers.
Interestingly, during the whole conference, no one said a word about Google’s Bard AI system.
3.2 Open Source AI
Open Source AI. Very interesting was the presentation by Christoph Schuhmann, founder of LAION. LAION, a non-profit organization, provides datasets, tools, and models to liberate machine learning research. What they did was scrape a good part of the Internet, analyze all HTML picture/alt-text pairs, and create an open database containing 400 million English image-text pairs. The idea is to provide an open database so ML engineers can train their models.
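The core extraction step behind such an image-text dataset can be sketched with the Python standard library alone; the real LAION pipeline (Common Crawl scale, CLIP-based filtering) is of course far more involved than this toy version.

```python
# Sketch: collect (image URL, alt text) pairs from an HTML page,
# the raw material for an image-text dataset like LAION-400M.
from html.parser import HTMLParser

class ImgAltCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pairs = []  # list of (src, alt) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            # Keep only images that actually carry a caption-like alt text.
            if a.get("src") and a.get("alt"):
                self.pairs.append((a["src"], a["alt"]))

page = ('<html><body><img src="cat.jpg" alt="a cat on a sofa">'
        '<img src="spacer.gif"></body></html>')
collector = ImgAltCollector()
collector.feed(page)
```

Run at web scale over crawled pages, this is how alt text becomes the "label" for the image next to it.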
The idea and goal is for the Open-Source community to compete with big companies like Microsoft, Meta, and Google in AI/ML development. This organization has made a lot of noise worldwide, and its founder (a high school teacher) got job offers from the industry, offers from venture capitalists, etc. But he decided to stay independent and is receiving a lot of donations, including free access to some supercomputers in the EU and around the world.
If you are interested in contributing to Open Source AI/ML development, and thereby obtaining some practical ML skills, contact LAION and ask how you can help.
3.3 Computer vision
Computer vision. I had in the past done some “object tracking” using C# and classical algorithms, so it was interesting for me to see what ML offers now. One expert, Oliver Zeigermann, compared the classical approach to computer vision with the ML approach. He talked about ML/neural-network technology, the training process, available tools, the processing power needed, etc. A typical use case would be using ML computer vision for quality control on a factory production line. If you are interested, you can search the internet for this author’s name and you will find an abundance of articles, slides, and even books on computer vision and ML in general.
Computer vision is a big topic in ML. There were discussions about its applicability in medicine and the possibility of replacing radiologists in their work. They argue that one radiologist is trained during education on maybe 1,000 images, while computer systems can easily be trained on tens of thousands of images. Definitely a hot topic.
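To make the classical-vs-ML contrast from the talk concrete, here is a minimal sketch of the "classical" side: a hand-written 3x3 Sobel-style filter that detects vertical edges with fixed weights and no learning at all. A neural network would instead learn thousands of such filters from labeled data.

```python
# Classical computer vision in miniature: a fixed, hand-designed edge filter.
def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a 2D grid (list of lists), ignoring borders."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# Sobel horizontal-gradient kernel: responds strongly to vertical edges.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# A tiny image with a vertical edge: dark left half, bright right half.
img = [[0, 0, 10, 10] for _ in range(4)]
edges = convolve3x3(img, SOBEL_X)
```

The strengths and limits the speaker described follow directly: such filters are fast and fully explainable, but someone has to design them, whereas a trained network discovers its own.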
3.4 Using ChatGPT for code generation
Using ChatGPT for code generation. The big fuss was about the ability of ChatGPT to create (snippets of) code. There are two camps: A) we really like it and use it; B) it is not good/mature enough.
Arguments from camp A) are that they like the way one can iteratively learn by playing with the chat prompt, and that it is easy to work that way. They also claim it is a very good approach when you need to use an unfamiliar library: it generates your interface/access code fast.
Camp B) argues that the generated code is not good enough, so the technology is not mature enough yet. One guy said: before, we spent 80% of our time coding and 20% debugging; with ChatGPT, we spend 20% of our time coding and 80% debugging.
But the audience at the conference was definitely excited about it. At the present moment, they claim, there are no copyright problems with AI-generated content, that is, it can be freely used, which was a concern for some people. Many were concerned about whether such code can be trusted. There were the usual questions, like how many people/developers will lose their jobs due to AI’s ability to create code. Can AI do a code review for you?
There is already an extension for Visual Studio Code that uses ChatGPT to help you with coding.
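For readers curious what such tools do under the hood, here is a sketch of the request payload the OpenAI chat API expected in mid-2023. The payload is only built, not sent, so no API key is involved; the `previous` parameter illustrates camp A)'s iterative-refinement style.

```python
# Sketch: building a chat-completions payload for code generation.
# In real use you would POST this JSON to the OpenAI API with your key.
import json

def code_request(task, previous=None):
    """Build a chat payload; `previous` lets you refine a prior answer."""
    messages = [{"role": "system",
                 "content": "You are a coding assistant. Reply with code only."}]
    if previous:
        # Feed the model's last answer back so the next turn refines it.
        messages.append({"role": "assistant", "content": previous})
    messages.append({"role": "user", "content": task})
    return {"model": "gpt-3.5-turbo", "messages": messages}

payload = code_request("Write a C# method that reverses a string")
body = json.dumps(payload)  # this is what would go over the wire
```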
3.5 Low-Code, No-Code development
Low-Code, No-Code development. These are new buzzwords in the industry, and such jobs are appearing. The idea is that the developer does not need to know the programming language in detail, but uses visual tools (or similar, maybe a chat prompt) to create an application, while in the background code generation is supported by AI/ML.
The interesting story came from a younger German guy who has a PhD and works somewhere as an AI engineer. He said he got an offer for a “low-code” job but turned it down. It seems that classical developers are not sure whether such positions would be good for their career path and are concerned about losing real coding skills. So many new things, and no one is sure what the future will bring.
3.6 MLOps
MLOps. Another new buzzword in the industry. In short, it is DevOps for ML applications. Data for training, models, applications, iterative learning artifacts… all need to be tracked, versioned, integrated, deployed, and monitored.
The conference ran presentations in parallel in three different rooms, so I was not able to follow all sessions, and I am more on the developer side than on the DevOps side. But there were definitely a number of sessions dedicated to MLOps, and it is emerging as a separate discipline/skill within the AI world. It has its own methods, tools, best practices, etc., ranging from how to support the deployment of ChatGPT in a corporate environment to how to track all artifacts of the ML model training process.
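As a toy illustration of one MLOps concern mentioned above, tracking and versioning model artifacts, here is a standard-library-only sketch. The registry layout is made up for illustration; real teams would use a tool like MLflow for this.

```python
# Sketch: a content-addressed model registry, the core idea behind
# "every deployed model must be traceable to its exact artifact".
import hashlib
import time

registry = []  # in a real system: a database or a tool like MLflow

def register_model(name, artifact_bytes, metrics):
    """Record an immutable, versioned entry for a trained model artifact."""
    entry = {
        "name": name,
        # Version numbers increase per model name.
        "version": sum(1 for e in registry if e["name"] == name) + 1,
        # Hash of the artifact: detects silent changes to deployed weights.
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "metrics": metrics,
        "registered_at": time.time(),
    }
    registry.append(entry)
    return entry

m1 = register_model("ride-duration", b"model-weights-v1", {"rmse": 6.2})
m2 = register_model("ride-duration", b"model-weights-v2", {"rmse": 5.8})
```

The point is the discipline, not the code: every retraining produces a new immutable version with its metrics attached, so "which model is in production?" always has an answer.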
3.7 Technology Stack for ML
Technology stack for ML. As a predominantly .NET/C# developer, I was interested in the available technology stack for building ML applications, for example, how to include an ML module in my ASP.NET application. Currently, Python seems to be the dominant platform for ML development, where all the tools/libraries are available. Some people say a bridge between .NET and Python is possible, but complain that it is error-prone.
I see there is an open-source ML.NET library available, if needed, for native C#/.NET development. The concern is whether it is as good as the Python environment for ML development.
Some guys I talked to said they are pure Python developers and openly admitted they have no clue about the C#/.NET world. Another guy said he does his work in Python, and there are other guys who interface with his code via C#.
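One pragmatic way such mixed teams bridge the two worlds is to avoid embedding Python in C# entirely and instead wrap the Python model in a small HTTP service that any ASP.NET client can call. A standard-library-only sketch, with a stub formula standing in for a real model:

```python
# Sketch: exposing a Python "model" over HTTP so a .NET client can use it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in for a real ML model: a fixed linear formula."""
    return 2.0 * features["distance_km"] + 5.0

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON feature payload sent by the client.
        length = int(self.headers["Content-Length"])
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To actually serve:
#   HTTPServer(("", 8000), PredictHandler).serve_forever()
```

The C# side then just POSTs JSON with HttpClient; the "bridge" is plain HTTP, which sidesteps the error-prone in-process interop people complained about.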
3.8 Generative AI
Generative AI. Another buzzword in the industry. In short, generative AI refers to computer systems that can generate (create) content. Content can be text (a poem, essay, letter, etc.) or multimedia (images, video, audio). A typical example is ChatGPT generating business letters from prompt requests, or “Stable Diffusion” generating images from a textual description via a prompt.
You can try a “Stable Diffusion” demo online and type, for example, “typical Boston family”. Pictures are a bit distorted, people sometimes have more than five fingers, etc. But it looks promising.
There was a lot of criticism at the conference that AI models are biased; for example, pictures generated for the “typical Boston family” prompt mostly show people with light skin color and rarely show a person of color. Then, they say that is because AI systems are trained on the Internet, and the Internet is biased because the world is biased. So, they argue, AI systems should be trained on “filtered”, de-biased data. But then you get “politically correct” AI systems that do not represent the real world. Etc.
From the legal side, they say that at this moment pictures from generative AI do not have copyright, so anyone can use them at will. Some argue that the person who created the prompt should hold the copyright: that generative AI is just a tool, and legally a machine cannot hold copyright. Etc.
4 Some distinguished presentations – excerpts
All excerpts are taken from the official conference program.
4.1 MLOPS HANDS-ON GUIDE: FROM TRAINING TO DEPLOYMENT AND MONITORING
By Alexey Grigorev (Alexey runs DataTalks.Club, a community of 25,000+ data enthusiasts)
MLOps plays a crucial role in the machine learning project lifecycle that enables organizations to streamline and automate the development, training, and deployment of machine learning models. In this workshop, we will demonstrate how to track experiments, create a training pipeline, register a model in a registry, serve the model, and monitor the performance using open-source tools. We will start by discussing the basics of MLOps and its role in the ML project lifecycle. We will then prepare the environment and train our model using a case study of ride duration prediction. Next, we will install and run MLFlow to track experiments and manage the model training process. We will then use Scikit-Learn pipelines to make model management simpler and convert a notebook for training a model into a Python script. After that, we will save and load the model using the MLFlow model registry and serve the model as a web service. Finally, we will demonstrate how to monitor the performance of the model. Attendees will come away with a solid understanding of how to use MLOps to streamline and automate the machine learning project lifecycle, as well as how to use open-source tools such as MLFlow and Scikit-Learn to achieve this.
4.2 SO, WHAT EXACTLY IS MLOPS?
By Mihailo Joksimovic, Microsoft
We’re all clear on DevOps role, sure! But what about MLOps? What does it mean and how does it differ? Can a regular DevOps practitioner move to MLOps? And how hard is it? I was determined to answer all of these questions, all the while giving you a bit more context on inner-workings on ML. We’ll talk Math, Linear Algebra, Building Models and, obviously, some MLOps!
4.3 COMPUTER VISION: PAST, PRESENT, AND FUTURE
By Oliver Zeigermann – Freelancer
Computer vision is the parade discipline of machine learning. Artificial neural networks can achieve recognition rates and robustness that were unthinkable with classical methods. However, traditional approaches are still useful in some areas as an alternative or in combination with neural networks. In this talk, I take you through the following topics:
1. Traditional approaches: Why are these approaches non-ML approaches? What is their strength, and what are their limitations? When should you still use them?
2. Neural networks: When are they useful, and in what architecture? What does it take to train them? How can we know if their training is successful?
3. What’s next: Newer approaches that have not yet been tested in practical applications, but have the potential to play a larger role in the future.
4.4 How To Use ChatGPT for Conversational Developers
By Dominik Meißner, 169 Labs GmbH
ChatGPT is one of the most well-known artificial intelligence language models of today. To solve development-related problems, users interact in a multi-turn conversation to refine the problem and the solution. So as a conversational developer: Why not use it as a development buddy? A mentor that will help you solve everyday tasks as a software developer or systems engineer? As promising as this sounds, there are ups and downs in this technology. In this session, we will look at the dos and don’ts of AI-supported software development and help you use ChatGPT right.
4.5 MLOps Journey – Machine Learning As An Engineering Discipline
By Vinay Narayana, Levi Strauss & Co.
The current situation at most companies could be summarized as below:
· Every team has their own unique way of testing and productionizing a model
· Lack of a centralized feature store
· Severe data quality issues
· Limited to no data or model monitoring in production (or test)
· Limited to no operational readiness
· Fragmented collaboration with partner teams
This presentation takes the use case of a typical data science org that can apply software engineering principles to improve and solve all the above typical scenarios.
A vision that all data science teams could aspire for, involves the following:
· access to reliable data (with SLOs),
· automate data processing, model, training, evaluation and validation,
· productionize the model either for batch or online serving,
· continuously monitor data and model in production,
· use a trigger based mechanism to auto train, deliver and deploy in production
For achieving the vision, multiple goals need to be put in place. Some of them are below:
· Transform and standardize on how we do MLOps across all teams
· Leverage a centralized feature store and remove any training or serving skew
· All data produced must be treated as a product
· Enable comprehensive data and model monitoring capabilities
· Follow standard tiered approach model for implementing operations readiness
· Lastly, nurture relationships and collaborate with data engineering, central infra teams, etc
The rest of the presentation will go into details on how to implement each of the above goals along with a few high level architectural patterns.
4.6 Unleashing The Power Of Generative AI: From Creative to Practical Applications
By Vinay Narayana, Levi Strauss & Co.
Generative AI (GAI), or generative models and generative adversarial networks (GANs), is a powerful and rapidly evolving field of artificial intelligence that has the potential to provide value to a broad range of business applications across industries. Generative AI has the ability to generate interactive images, art, video, text, code and also has the ability to enrich data sets (think rare cancer data set enrichment) for an ultimately better AI performance. This technology will revolutionize the way we create and consume art, design and optimize products, and deliver healthcare. In this presentation, you will learn the main concepts behind GAI, business use cases for GAI, ChatGPT & DALL-E 2 models, potential benefits & challenges. Lastly, we will consider the future of GAI and how it may continue to shape and transform our world.
4.7 Democratizing AI
By Christoph Schuhmann, LAION
Christoph Schuhmann is an educator and computer scientist who co-founded the German non-profit organization LAION e.V., which strives to democratize state-of-the-art AI research and models. He studied computer science, physics, and psychology at the University of Vienna. Before actively working on AI, he produced the documentary “Schools of Trust” about schools where kids can learn what they are curious about, without mandatory curricula, grades, or other extrinsic rewards. In the past 10 years, Schuhmann has advised over 50 start-up groups for such schools on education and business matters, completely for free (as his “hobby”), to accelerate the growth of this movement. Nowadays, Schuhmann works as a tenured high school teacher and spends his free time organizing LAION’s community with thousands of scientists, developers, and engineers, who are united by one common goal: making state-of-the-art AI models openly accessible for everyone in the world, as a humanitarian right.
4.8 Learning Machine Learning: Opportunities and Pitfalls
By Pieter Buteneers, Transfo.energy
You can do great things with machine learning and right now everyone wants to do it. But getting Machine Learning to work, let alone turning it into a sustainable business, is a real pain in the ass. As a Machine Learning Consultant and Engineering Director at Sinch, I train a lot of people to build their own Machine Learning algorithms and turn them into customer value. And while it is not that hard in and of itself, it is really easy to make mistakes, even for the best of us. Setting up a whole system, using the right tools and frameworks, and getting the best training data is a challenge, even for experienced machine learning experts. In this talk, I’ll explain what machine learning is and what it isn’t. I will highlight some of the most common mistakes and how to avoid them. But if you think you can stop making mistakes by doing everything right, think again!
5 ML Journey continues
There is one thing all attendees of the conference agree on: ahead of us are exciting years of ML development. One should just imagine where ML science/engineering will be in 5 years. Today’s AI systems look very promising, although not all are production-ready, because of still-significant error rates. But everyone believes that over time the results will get much, much better.
For those who are interested in following ML development, I will just mention that the next Machine Learning conference is MLCon in Berlin, in November 2023.
Recent developments in the AI/ML discipline of computer science are very exciting. I think every person in the IT industry should follow these developments to some degree, because they have the potential to impact our branch of technology significantly. However, I still think that a professional orientation toward only ML applications is a bit risky, since for every ML job position there are still 20+ classical programming positions available. I believe that in the near future, AI/ML technology will remain just one of the tools at the disposal of the average software engineer, who will still, most of the time, solve problems using classical algorithms, languages, technologies, and tools.
 Conference link: https://mlconference.ai/munich/