

The workshops will take place on June 22, 2023 at JED Zurich (Zürcherstrasse 39, 8952 Schlieren).

In order to attend a workshop, make sure you buy a 2-day pass and select the corresponding workshop. Workshops are limited in capacity and served on a first-come, first-served basis.

Room – Event Hall

Morning Session (09:00 – 12:00)

Generative AI in Practice

Join us at the Generative AI Workshop for an interactive exploration of this exciting new field. We’ll present how Generative AI models, which have made rapid progress in generating images, text, and more on demand, actually work. You’ll have the chance to try out natural language and computer vision models for yourself and collaborate with others to brainstorm practical applications for these technologies. By the end of the workshop, you’ll have a solid understanding of what generative AI can do, how it does it, and how you can use it to your advantage.

Generative AI has the potential to disrupt markets and create new business opportunities. Don’t miss out on this chance to see what practical applications are possible today and get inspired for how you can use it in your company tomorrow.

Afternoon Session (13:00 – 16:00)

Successful Adoption of Knowledge Graphs, based on three examples from the watch, energy and insurance industries

Google has the search graph, Amazon the product graph, Facebook the social graph, LinkedIn the professional graph. Closer to home, Swatch has an IT graph, BKW a smart meter graph and Mobiliar a data catalogue graph. In this workshop, we discuss how they used a coherent data network to radically change data visibility, detect inconsistencies and learn new information. Find out what criteria the companies had at the beginning of their journey, what tools they used, what roles and skills they needed, and what partners they worked with to deliver. Inspired by the three different journeys, you will identify similar pain points and use cases in your own organization and discuss your business case with the other participants for immediate feedback, mutually inspiring everybody to look for new use cases.

Room – Zühlke

Morning Session (09:00 – 12:00)

AI-Powered Medical Image Analysis – from Imaging to Decision Support

Medical imaging constitutes a key source of information for answering clinical and scientific questions targeting human health. A wide variety of imaging methods exists, from radiography and tomography to magnetic resonance imaging, ultrasound imaging and light-sheet microscopy. These technologies enable the study of the human body and the diagnosis, monitoring, or treatment of medical conditions. Combining these imaging methods with Artificial Intelligence (AI) can greatly improve image acquisition, diagnostic precision or the processing of the growing amount of imaging data. AI for biomedical imaging has found a wide range of applications and is beginning to transform medical diagnostics.

Afternoon Session (13:00 – 16:00)

An Introduction to Neural Networks using Longitudinal Health Data

In this tutorial, Daniel and Michael – members of the Data Science working group of the Swiss Association of Actuaries – will introduce neural networks on a synthetic, longitudinal health dataset considering risk factors like BMI, blood pressure, age, etc. to predict various health outcomes of individuals over time. The tutorial consists of four parts: creation of the synthetic dataset; application of generalized linear models; transition to shallow and deep neural networks; and model explainability and risk-factor importance. All four parts will provide ready-to-use Python scripts for the audience to explore during and after the tutorial.
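The tutorial's own scripts are not reproduced here, but the step from a generalized linear model to a shallow neural network can be sketched in plain NumPy. The synthetic cohort, risk-factor names and all hyperparameters below are illustrative assumptions, not the tutorial's actual material:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic cohort: three standardized risk factors
# (think BMI, systolic blood pressure, age).
n = 1000
X = rng.normal(size=(n, 3))
# Binary health outcome driven by one linear and one interaction effect.
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1] * X[:, 2]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Generalized linear model (logistic regression), fitted by plain
# gradient descent on the log-loss.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * np.mean(p - y)

# A shallow neural network is the same GLM with one hidden tanh layer
# in front, which can additionally pick up the interaction term.
W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=8), 0.0
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    p = sigmoid(H @ W2 + b2)
    g = (p - y) / n                        # dLoss/dz at the output unit
    gH = np.outer(g, W2) * (1.0 - H**2)    # backprop through tanh
    W2 -= 0.2 * H.T @ g; b2 -= 0.2 * g.sum()
    W1 -= 0.2 * X.T @ gH; b1 -= 0.2 * gH.sum(axis=0)

acc_glm = np.mean((sigmoid(X @ w + b) > 0.5) == y)
acc_nn = np.mean((sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5) == y)
```

The point of the transition is visible in the two accuracy numbers: the GLM can only exploit the linear effect, while the hidden layer gives the network a chance to model the interaction as well.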

Room – Sequoia

Morning Session (09:00 – 12:00)

Responsible AI – Transparency and Fairness of data-based applications in practice

The usage of AI will soon be regulated in the European Union with the upcoming AI Act. This will impact the data science activities of companies dramatically: the use of many data-based algorithms and applications will have to be re-thought, and often adapted. In this workshop, we focus on two relevant requirements: (a) Explainability and Transparency, and (b) Fairness and Non-discrimination.

Distinguished speakers will explain these requirements and comment on the state of the art in implementing such requirements technically in a concrete data-science application. In addition, participants will have the opportunity to discuss their specific challenges, open questions, etc. with the experts as well as with the other workshop participants, in a moderated exchange format.
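As a small taste of what implementing a fairness requirement technically can mean, one widely used check is the demographic parity gap: the difference in positive-prediction rates between two groups. The data below is synthetic and the 0.1 tolerance is a hypothetical policy choice, not something prescribed by the AI Act:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic audit data: a binary protected attribute and model
# predictions whose positive rate differs between the two groups.
group = rng.integers(0, 2, size=1000)
pred = (rng.random(1000) < np.where(group == 1, 0.7, 0.5)).astype(int)

# Demographic parity gap: |P(pred=1 | group=0) - P(pred=1 | group=1)|.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
dp_gap = abs(rate_0 - rate_1)

# Under a hypothetical 0.1 tolerance, this model would fail the check
# and would need mitigation (reweighting, threshold adjustment, etc.).
passes_check = dp_gap <= 0.1
```

Demographic parity is only one of several competing fairness metrics; which one applies, and at what threshold, is exactly the kind of question the moderated exchange is meant to surface.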

Afternoon Session (13:00 – 16:00)

Scaling Analytical Data Platforms: From Data to Data Products to Data Mesh

Scaling analytical data platforms is one of the challenges of this decade. By now, we know very well how to get the most out of data in a bounded context. But with the increasing adoption of data-driven solutions, the rising complexity of platforms forces us to think beyond technology. This workshop is designed to address these challenges. We will start with a short introduction to the concept of data mesh and then provide a structured approach to thinking of data as products. We will dive into an example of architecting an enterprise-scale data landscape, including organizational and governance aspects. By the end, participants will be ready to apply the techniques and tools in their very own real-life scenarios. Join us to learn how to create a data mesh organization.

Room – Hyperion

Morning Session (09:00 – 12:00)

Geospatial Data Science: The Power of Knowing Where

Between 60% and 80% of all information is geospatially referenced, yet few companies exploit the full potential of geospatial data. Companies that understand why something happens where it does can boost the effectiveness of marketing campaigns, optimize supply chains and improve customer experience, among other benefits. In this workshop, we introduce participants to the basics of geospatial data science, including an overview of geographic coordinate systems, data types and commonly used tools for storing, manipulating, and visualizing geodata. Using open-source data to explore natural hazards in Switzerland, participants learn how to prepare, visualize, and manipulate location-based data, how to use it in predictive modeling, and the state-of-the-art tools to do so in Python.
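The workshop relies on dedicated geospatial tooling, but one reason coordinate systems matter can be shown with the standard library alone: distances between latitude/longitude points must be computed on the sphere, not with Euclidean arithmetic on degrees. A minimal sketch (coordinates are approximate city centers):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two WGS84 lat/lon points."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Zurich to Bern, roughly 95 km as the crow flies.
d = haversine_km(47.3769, 8.5417, 46.9480, 7.4474)
```

In practice this is exactly what projected coordinate reference systems hide from you: once data is reprojected to a metric CRS, plain geometry works again.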

Afternoon Session (13:00 – 16:00)

Geospatial Business – Innovation & Business Cases

The Power of Where – this frequently used statement underscores the importance of geospatial data and spatial data analytics in a world that is becoming increasingly data-driven. It can help with global challenges such as climate change adaptation or famine, can give companies a competitive advantage, and can streamline processes and systems that society and governments have built. But even though data quality and availability are increasing, the transition to commercialisation of data products and services remains a challenge. This is due on the one hand to the requirements regarding the operationalisation of value-added services, and on the other hand to the market’s expectations regarding the reliability and accuracy of these services.

Room – Eve’s Kitchen

Morning Session (09:00 – 12:00)

Data Stories: making your insights truly stand out

Data professionals spend a huge amount of time exploring, processing, and analyzing their data. Managers and executives rely on these analyses to make data-based decisions. However, even with highly impactful insights, it is challenging to make sure the message clearly gets across, all the more so when communicating with busy decision makers. In this workshop we will see how data storytelling can be leveraged to make sure the right message is efficiently conveyed, such that the results of the hard work stand out.

Afternoon Session (13:00 – 16:00)

Data Science Techniques for Data sets on Mental and Neurodegenerative Disorders

About two percent of the world’s population suffers from various types of mental and neurodegenerative disorders, making up 13% of the global burden of disease. The burden of mental health disorders, in terms of reduced health and productivity, was estimated at approximately $2.5 trillion globally in 2010 and projected to grow to $6 trillion by 2030 ($8.2 trillion in 2022 dollars). Artificial intelligence techniques have been used to detect and help treat mental and neurodegenerative disorders. However, developing and integrating trustworthy AI models into healthcare settings, especially with such vulnerable populations, requires robust methods, careful assessment of model performance, and clean, unbiased data sets. Thus, data science techniques are critical for advancing our understanding and treatment of mental and neurodegenerative disorders.

Room – Offset

Morning Session (09:00 – 12:00)

Forecasting & Meta-learning

Time series forecasting is a crucial tool in a variety of fields, but applying deep learning models in practice can be challenging. In this workshop, we will present techniques for data cleaning and modeling to train and apply (deep learning) forecasting models, using our open-source Python library Darts. We will also introduce the innovative concept of meta-learning, which can discover generic patterns from diverse time series data and provide zero-shot predictions on unrelated time series. Participants will have the chance to apply these techniques through hands-on exercises, including a time series forecasting competition.
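Darts itself is not needed to see the fit/predict pattern the workshop builds on. Below is a library-free sketch of a seasonal-naive baseline in that style; the class name and data are illustrative, and Darts models expose the same shape with far more capable internals:

```python
import numpy as np

class SeasonalNaive:
    """Forecast by repeating the last observed seasonal cycle.
    Mimics the fit/predict interface of libraries such as Darts."""

    def __init__(self, season_length):
        self.m = season_length

    def fit(self, series):
        # Remember the most recent full cycle of the training series.
        self.last_cycle = np.asarray(series)[-self.m:]
        return self

    def predict(self, n):
        # Tile the remembered cycle until n future values are covered.
        reps = -(-n // self.m)  # ceiling division
        return np.tile(self.last_cycle, reps)[:n]

# Three noiseless seasonal years of monthly data.
t = np.arange(36)
series = 10 + 5 * np.sin(2 * np.pi * t / 12)

model = SeasonalNaive(season_length=12).fit(series)
forecast = model.predict(6)
```

Baselines like this are also the yardstick in forecasting competitions: a deep learning or meta-learning model only earns its complexity if it beats the seasonal-naive forecast on held-out data.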

Afternoon Session (13:00 – 16:00)

Deep Learning for Predictive Maintenance: Scalable Implementation in Operational Setups

Developing deep learning (DL) algorithms for predictive maintenance is a growing trend in various industrial fields. Whereas research methods have been rapidly advancing, implementations in commercial systems are still lagging behind. One reason for the delay is the common focus on the choice of algorithm, ignoring crucial aspects of scaling the algorithms to heterogeneous fleets of multi-component machines.

In this tutorial, we discuss approaches to address these challenges. We provide background on techniques for scalable deployment of DL in commercial machine fleets, focusing on anomaly detection, transfer learning, and uncertainty quantification. We illustrate the generic concepts with use cases from commercial fleets, including a code implementation on a publicly available data set.
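The tutorial's actual implementation targets a public data set not reproduced here. As a minimal stand-in for the anomaly-detection building block, the sketch below applies the simplest possible detector, a z-score threshold against a reference window assumed healthy, to a hypothetical sensor signal with an injected fault:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical vibration readings from one machine in a fleet; a fault
# injects a step change in the last 20 samples of the record.
healthy = rng.normal(loc=1.0, scale=0.1, size=500)
faulty = rng.normal(loc=1.6, scale=0.1, size=20)
signal = np.concatenate([healthy, faulty])

# Calibrate on a reference window assumed healthy, then score the
# whole record with the standardized residual (z-score).
mu, sigma = healthy[:200].mean(), healthy[:200].std()
z = np.abs(signal - mu) / sigma
anomalies = np.flatnonzero(z > 4.0)
```

Scaling this idea to heterogeneous fleets is precisely where the tutorial's other two topics come in: transfer learning to reuse a detector across machine variants, and uncertainty quantification to know when a score should not be trusted.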

Room – Bridge

Morning Session (09:00 – 12:00)

DataMesh in Action – When and how to implement a DataMesh

DataMesh is a socio-technical paradigm on how to shape an organization to leverage the value of data even in the presence of frequent change and massive growth. Even though the theoretical concepts are well covered in literature, implementation details and answers to practical questions are scarce. In this workshop, we will leverage the wide experience of data practitioners to put more flesh on the bones of DataMesh in an interactive manner. After a short catch-up on DataMesh, we will identify the most common challenges in today’s data ecosystem based on our collective experience. In a second step, we will work in groups to identify how the DataMesh principles might help to overcome the prevalent data challenges, but also what new questions might arise in a practical implementation.

Afternoon Session (13:00 – 16:00)

Creating a Modern Data Lakehouse

Data Lakehouse architectures are on the rise and are supporting modern data needs through flexible and fluid structures, while ensuring performance for big data analytics. In this workshop, you will learn the concepts, differences and purpose of Data Lakehouses, as well as key technological advancements, such as Delta Lake, which have enabled this solution. Additionally, you will gain firsthand experience setting up your own Data Lakehouse in the public cloud and using it as a source for a BI report, putting theory into practice. At the same time, you will learn how the cloud can help you gain access to data science, machine learning, and business analytics capabilities. In the end, we want you to leave the workshop with all the tools needed to create your very own big data solutions.

Room – Mahogany

Morning Session (09:00 – 12:00)

Databricks brick-by-brick: Data, Analytics and ML on one platform

Databricks is an industry-leading, cloud-based lakehouse platform used for processing and transforming massive quantities of data, deriving new insights using SQL endpoints, and enabling modern ML lifecycles. In this workshop, we present how Databricks tools assist and enable fast development in all aspects of the current data product lifecycle, from ELT pipelines, workflow orchestration and data governance to Machine Learning experimentation and Model Serving (MLOps). In the practical part of the workshop, we will discuss each of these steps in detail and guide the participants through the whole development lifecycle in Databricks.