The field of Artificial Intelligence has made significant progress in recent years by
+making use of Deep Neural Networks. The ability of Deep Neural Networks to carry out complex mappings
+in high-dimensional spaces makes them extremely interesting for biological applications, e.g. for mapping genotypes to (molecular) phenotypes.
+
+This talk gives an introduction to recent Deep Learning concepts that are interesting for the Life Sciences, with
+special focus on the ‘Attention Mechanism’, a key concept of Neural Machine Translation.
+State-of-the-art results are presented using Deep Neural Networks for predicting DNA binding motifs, RNA secondary structure for a given RNA sequence, and RNA sequence for a given RNA secondary structure.
+
+
+Automation and applications of robots in various fields bear the promise of reducing expenses as well as time requirements for production, logistics, transportation, and other areas. The first step towards automation consisted of writing down our own rules and intuitions about how machines should solve tasks: programming. Machine learning enables us to generate rules which are too complex to be manually formulated, by training highly flexible models on large datasets. Our efforts have thus shifted from rule design to the collection, cleaning, and annotation of data. To overcome the increasing time demands of larger and larger datasets, we rely on methods such as modularity, transfer learning, domain adaptation, learning from demonstration, and reinforcement learning. In this talk, I will summarise some of our recent work from the Oxford Robotics Institute (University of Oxford) and the Berkeley AI Research lab (UC Berkeley), aiming to conceptualise the current challenges as well as the potential for increasing the efficiency of humans in order to increase the efficiency of robotic automation.
+
+Markus is a postdoctoral research scientist at the Oxford Robotics Institute as well as a member of Oxford University’s New College. From September 2018 on, he will be joining Google DeepMind’s robotics efforts as a research scientist. In 2017, he was a visiting scholar at the UC Berkeley Artificial Intelligence Research lab. The principal focus of his research is the development of approaches for increasing the efficiency of processes for providing supervision to guide autonomous systems, with particular emphasis on modularity, transfer learning and learning from demonstration, work which was awarded Best Student Paper at IROS 2016.
+
+In early 2016, he additionally led ORI's path planning software development for the presentation of a self-driving prototype at the Shell Eco-marathon (SEM). This work has paved the way for the introduction of a new autonomous challenge category at the SEM scheduled for 2018. Having worked in robotics since 2010, he has been part of research efforts on space exploration robots, GPU-based simulations and robotic platforms for first responders, as well as mobile autonomy at various research institutions including MIT, ETHZ, UC Berkeley and the University of Oxford.
+
+
+
+
+Markus' publications are indexed on his Google Scholar page.
+
diff --git a/_posts/2018-09-25-jeff-dean.html b/_posts/2018-09-25-jeff-dean.html
new file mode 100644
index 0000000000..ecdc4aa8f5
--- /dev/null
+++ b/_posts/2018-09-25-jeff-dean.html
@@ -0,0 +1,29 @@
+---
+layout: post
+title: "Deep Learning to Solve Challenging Problems"
+subtitle: "Jeff Dean, Head of Google AI"
+date: 2018-09-25 13:00:00 +0000
+background: https://media.giphy.com/media/3og0IwzZ9cryoKNKsE/giphy.gif
+future_date: False
+layer_shift: True
+---
+
+Jeff Dean, head of Google's AI division and recipient of the 2012 ACM Prize in Computing, will be speaking at the DKFZ in addition to his participation in the 6th Heidelberg Laureate Forum (HLF).
+
+Abstract
+
+For the past seven years, the Google Brain team has conducted research on difficult problems in artificial intelligence, on building large-scale computer systems for machine learning research, and, in collaboration with many teams at Google, on applying our research and systems to many Google products. Our group has open-sourced the TensorFlow system, a widely popular system designed to easily express machine learning ideas, and to quickly train, evaluate and deploy machine learning systems. We have also collaborated closely with Google's platforms team to design and deploy new computational hardware called Tensor Processing Units, specialized for accelerating machine learning computations. In this talk, I'll highlight some of our research accomplishments, and will relate them to the National Academy of Engineering's Grand Engineering Challenges for the 21st Century, including the use of machine learning for healthcare, robotics, and engineering the tools of scientific discovery. I'll also cover how machine learning is transforming many aspects of our computing hardware and software systems.
+
+This talk describes joint work with many people at Google.
+Bio:
+Jeff Dean joined Google in 1999 and is currently a Google Senior Fellow in Google's Research Group, where he co-founded and leads the Google Brain team, Google's deep learning and artificial intelligence research team, and also leads Google AI's overall research efforts. He and his collaborators are working on systems for speech recognition, computer vision, language understanding, and various other machine learning tasks. He has co-designed/implemented many generations of Google's crawling, indexing, and query serving systems, and co-designed/implemented major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementor of Google's distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems, protocol buffers, the open-source TensorFlow system for machine learning, and a variety of internal and external libraries and developer tools.
+
+Jeff received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on whole-program optimization techniques for object-oriented languages. He received a B.S. in computer science & economics from the University of Minnesota in 1990. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences, a Fellow of the Association for Computing Machinery (ACM), a Fellow of the American Association for the Advancement of Science (AAAS), and a winner of the ACM Prize in Computing and the Mark Weiser Award.
+
+
+Event's recording
+
+VIDEO
+
+
+
diff --git a/_posts/2018-10-11-part-segmentation.html b/_posts/2018-10-11-part-segmentation.html
new file mode 100644
index 0000000000..ef991e58af
--- /dev/null
+++ b/_posts/2018-10-11-part-segmentation.html
@@ -0,0 +1,21 @@
+---
+layout: post
+title: "Learning Semantic Part Segmentation of Model Organisms from Little to No Training Data"
+subtitle: "Dagmar Kainmüller, Berlin Institute of Health"
+date: 2018-10-11 14:00:00 +0000
+background: '/img/posts/part_segmentation_banner.jpg'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+ Annotating cells in 3D light microscopic images of the nematode worm C. elegans is an elementary task for cell-level studies of gene expression. Manually annotating individual cells with their unique biological names is hard even for trained anatomists. There exist, to date, thirty 3D volumes of L1 larvae in which most cells have been expert-annotated, and this effort took 5 years to complete. Such annotations do not exist for other stages of development of the worm. For an exhaustive study of gene expression at the cell level, thousands of worms will have to be annotated, which will only be possible if the annotation task can be automated.
+
+Our work explores how to best leverage a small set of annotated worm images for supervised training of automated annotation models. In particular, we compare (1) the current state of the art for automated annotation, Active Graph Matching, which matches a global point distribution model of cell locations via combinatorial optimization, to (2) a location-sensitive U-Net trained to predict hundreds of different cell names on the few available training volumes. To our surprise, we find that the U-Net gets close to the performance of the model-based approach.
+
+Furthermore, we explore possibilities to alleviate the need for even a small training set, via unsupervised training of a model for nuclei annotation.
+Event
+
+The event will take place on Thursday, 11 October, 2018 at 2pm in Room B.128 (Mathematikon B, 3rd floor, Universität Heidelberg, Berliner Str. 43). Please ring the bell at the entry door (towards Berliner Strasse) which says: Universität Heidelberg; HCI am IWR. Kindly help us plan ahead by registering for the event on our meetup page.
diff --git a/_posts/2018-10-29-med-gan.html b/_posts/2018-10-29-med-gan.html
new file mode 100644
index 0000000000..3911e838af
--- /dev/null
+++ b/_posts/2018-10-29-med-gan.html
@@ -0,0 +1,19 @@
+---
+layout: post
+title: "Modelling Probability Distributions using Neural Networks: Applications to Medical Imaging"
+subtitle: "Christian Baumgartner, ETH Zurich"
+date: 2018-10-29 19:00:00 +0000
+background: '/img/posts/christian_baumgartner_background.png'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+Attributing the pixels of an input image to a certain category is an important and well-studied problem in computer vision, with applications ranging from weakly supervised localisation to understanding hidden effects in the data. In recent years, approaches based on interpreting a previously trained neural network classifier have become the de facto state-of-the-art and are commonly used on medical as well as natural image datasets. On medical data they have been widely used for discovering disease effects. Unfortunately, such approaches have a significant shortcoming which may lead to only a subset of the category specific features being detected. To address this problem we developed a novel feature attribution technique based on Generative Adversarial Networks, which does not suffer from this limitation. The proposed method performs substantially better than the state-of-the-art for identifying disease effects on 3D neuroimaging data from patients with mild cognitive impairment (MCI) and Alzheimer’s disease (AD). Moreover, the proposed framework allows easy incorporation of prior knowledge about the disease formation. For AD patients the method produces compellingly realistic disease effect maps which closely resemble the observed effects. We believe that in the future GAN-based frameworks may offer an alternative to classical population-wide disease effect analyses such as voxel-based morphometry.
+
+
+Event's recording
+
+VIDEO
diff --git a/_posts/2018-11-08-post-binary.html b/_posts/2018-11-08-post-binary.html
new file mode 100644
index 0000000000..fa773daf71
--- /dev/null
+++ b/_posts/2018-11-08-post-binary.html
@@ -0,0 +1,45 @@
+---
+layout: post
+title: "The Post-Binary"
+subtitle: "A Conference on Artificial Intelligence in Art and Design"
+date: 2018-11-08 19:00:00 +0000
+background: '/img/posts/post-binary_notext.gif'
+future_date: False
+layer_shift: True
+---
+
+
+
+
+
+We are partnering with The Post-Binary, a 3-day conference held at the Museum Angewandte Kunst, Frankfurt am Main.
+
+The Post-Binary asks about a future which is emerging right now. With software becoming smarter and robots ever more versatile, innovation and automation become seemingly omnipresent in many aspects of our lives. Although many applications such as self-driving cars, AI systems for clinical diagnostics or seamless augmented reality are still under development, there is evidently immense potential to shake up the status quo – for better or for worse. The question is: How do we use our new toolboxes? And what will human life be like when AI and its applications permeate society?
+
+The Post-Binary aims at illuminating the role that AI plays in both art and society today and attempts to afford a glimpse of how this role might evolve. To this end we will hear from key figures of the scene, who will offer us their perspectives and elaborate upon their contributions to the field(s).
+
+As space is limited, registration on post-binary.com is required.
+
+
Itinerary
+
+November 8: Workshop I @ HFG Frankfurt on 'Unity3D Machine Learning Agents' by Manuel Rossner.
+
+November 9: Workshop II @ HFG Frankfurt on 'Evolutionary AI' by Christoph Martens.
+
+November 10: Conference Day @ Museum Angewandte Kunst (MAK):
+
+Morning:
+Luba Elliot (AI curator),
+Anna Ridler (AI artist),
+Sascha Pohflepp (tech artist)
+
+Afternoon:
+Franziska Nori (director Frankfurter Kunstverein),
+Ali Eslami (DeepMind),
+Mario Klingemann (AI artist)
+
+
+
Event
+
+The workshops will be held at the HFG Offenbach. The conference day will take place at the Museum Angewandte Kunst Frankfurt. Note that for this event, registration on post-binary.com is required and open until November 1. Because of limited capacity, we might have to hold a raffle for tickets. Tickets are 5 EUR (HFG students are exempt).
+
diff --git a/_posts/2018-11-12-supply_chain.html b/_posts/2018-11-12-supply_chain.html
new file mode 100644
index 0000000000..9608e348d3
--- /dev/null
+++ b/_posts/2018-11-12-supply_chain.html
@@ -0,0 +1,33 @@
+---
+layout: post
+title: "Towards the Autonomous Supply Chain"
+subtitle: "Christian Scherrer, Principal Data Science Consultant, Blue Yonder"
+date: 2018-11-12 19:00:00 +0000
+background: '/img/posts/feindt_banner.jpg'
+future_date: False
+layer_shift: True
+---
+
+
Michael Feindt was unfortunately prevented from speaking and was represented by Christian Scherrer.
+
+Abstract
+
+
+
+
+Blue Yonder stands for robust probabilistic machine learning and data-driven decision optimization and automation,
+packaged into consumable, scalable narrow-AI products for retailers running in the cloud, made in Germany and
+developed from long and deep experience in High Energy Physics.
+Fully profiting from these products requires a massive change in the culture of decision making in companies.
+After the recent acquisition by JDA, the Blue Yonder technology will be distributed worldwide and widened along the
+complete supply chain, both up to the manufacturer and down to the end customer. The underlying moonshot vision
+is the autonomous supply chain, leading to much improved efficiency, less waste, and independence from human biases.
+
+The talk will give an overview of the techniques applied, the history of the company, and some observations
+on the applicability of AI in industrial settings.
+
+
+
+Event's recording
+
+VIDEO
diff --git a/_posts/2018-11-26-generative-few-shot.html b/_posts/2018-11-26-generative-few-shot.html
new file mode 100644
index 0000000000..79067b92f8
--- /dev/null
+++ b/_posts/2018-11-26-generative-few-shot.html
@@ -0,0 +1,21 @@
+---
+layout: post
+title: "Generative Models for Few-Shot Prediction Tasks"
+subtitle: "Marta Garnelo, Research Scientist, DeepMind"
+date: 2018-11-26 19:00:00 +0000
+background: '/img/posts/gqn.png'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+Few-shot density estimation lies at the core of current meta-learning (or ‘learning to learn’) research and is crucial for intelligent systems to be able to adapt quickly to unseen tasks. In this talk we will introduce generative query networks (GQN, published in Science this year), a generative model for few-shot scene understanding that learns to capture the main features of synthetic 3D scenes. In the second half of the talk we will cover neural processes (NPs), a generalisation of the GQN training regime to a wider range of tasks such as regression and classification. NPs are inspired by the flexibility of stochastic processes such as Gaussian processes, but are structured as neural networks and trained via gradient descent. We show how NPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large data sets.
+
+
+Event's recording
+
+VIDEO
+
+
diff --git a/_posts/2019-01-15-motor-skill-learning.html b/_posts/2019-01-15-motor-skill-learning.html
new file mode 100644
index 0000000000..7dcd44c75d
--- /dev/null
+++ b/_posts/2019-01-15-motor-skill-learning.html
@@ -0,0 +1,20 @@
+---
+layout: post
+title: "Towards Motor Skill Learning"
+subtitle: "Jan Peters, Technische Universität Darmstadt & Max Planck Institute for Intelligent Systems"
+date: 2019-01-15 18:00:00 +0000
+background: '/img/posts/motor_skills.png'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction, i.e., learning for accurate control is needed to execute, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulation of various objects.
+
+Bio
+Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universität Darmstadt and at the same time a senior research scientist and group leader at the Max Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group. Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the Robotics: Science & Systems Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society's Early Career Award, as well as numerous best paper awards. In 2015, he received an ERC Starting Grant and in 2019 he was appointed an IEEE Fellow.
+
+Event
+The event will take place on 15 January at 6pm in seminar room B128 in the Mathematikon. For a more detailed description, please see here.
diff --git a/_posts/2019-01-22-self-supervision.html b/_posts/2019-01-22-self-supervision.html
new file mode 100644
index 0000000000..98c63341c5
--- /dev/null
+++ b/_posts/2019-01-22-self-supervision.html
@@ -0,0 +1,33 @@
+---
+layout: post
+title: "Self-Supervision: Learning to Learn"
+subtitle: "Prof. Bjoern Ommer, Head of the Computer Vision Group, Heidelberg University"
+date: 2019-01-22 19:00:00 +0000
+background: '/img/posts/ommer_banner.jpg'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+
+A major challenge of artificial intelligence is to learn models that generalize to novel data. While training images and videos are easily available, labels are not, thus motivating self-supervised learning. Prof. Ommer will present a widely applicable strategy based on deep reinforcement learning to improve self-supervision. As a challenging application, we will discuss estimating human pose. Time permitting, Prof. Ommer will present a variety of applications of this research, ranging from behavior analysis in neuroscience to data analysis in the digital humanities.
+
+ Bio
+
+Björn Ommer is a full professor for Scientific Computing and leads the Computer Vision Group at Heidelberg University.
+He studied computer science, with physics as a minor subject, at the University of Bonn, Germany. His diploma (~M.Sc.) thesis focused on visual grouping based on perceptual organization and compositionality.
+
+After that he pursued his doctoral studies at ETH Zurich, Switzerland, in the Pattern Analysis and Machine Learning Group headed by Joachim M. Buhmann. He received his Ph.D. degree from ETH Zurich in 2007 for his dissertation "Learning the Compositional Nature of Objects for Visual Recognition", which was awarded the ETH Medal.
+
+Thereafter, Björn held a post-doc position in the Computer Vision Group of Jitendra Malik at UC Berkeley.
+
+He serves as an associate editor for the journal IEEE T-PAMI and previously for Pattern Recognition Letters. Björn is one of the directors of the HCI and of the IWR, principal investigator in the research training group 1653 ("Spatio/Temporal Graphical Models and Applications in Image Analysis"), and a member of the executive board and scientific committee of the Heidelberg Graduate School HGS MathComp. He has received the Outstanding Reviewer Award at ICCV'15, CVPR'14, ICCV'13, CVPR'11, and CVPR'10 and has served as Area Chair for ECCV'18. Björn has organized the 2011 DAGM Workshop on Unsolved Problems in Pattern Recognition.
+
+
+
+
+ Event Info
+
+The event will take place on Tuesday, 22 January, 2019 at 7pm at the Gästehaus Uni Heidelberg (guest house Uni Heidelberg), Im Neuenheimer Feld 370. Drinks and snacks will be provided, courtesy of the Division of Medical Image Computing at DKFZ. Kindly help us plan ahead by registering for the event on our meetup page.
diff --git a/_posts/2019-02-12-quantum-computing-ai-x.html b/_posts/2019-02-12-quantum-computing-ai-x.html
new file mode 100644
index 0000000000..47d05ffd27
--- /dev/null
+++ b/_posts/2019-02-12-quantum-computing-ai-x.html
@@ -0,0 +1,51 @@
+---
+layout: post
+title: "Quantum Computing & AI at Alphabet/Google X"
+subtitle: "Jack Hidary, Quantum@X and AI@X"
+date: 2019-02-12 18:00:00 +0000
+background: '/img/posts/hidary_banner.jpg'
+future_date: False
+layer_shift: True
+---
+
+++++++++++++ CANCELLED +++++++++++
+Unfortunately Jack cannot make it to Germany next week. We are working on a possible alternative date later this year. Sorry for any inconvenience.
+
+Abstract
+
+
+Recent advances in the realization of quantum computing devices have now brought us to the NISQ regime.
+We will discuss what kinds of algorithms may be possible within this regime and the pathways to a fault-tolerant quantum computer.
+We will then provide an overview of the kinds of AI work we do at X.
+
+
+ Bio
+
+Jack Hidary focuses on Quantum Computing and AI at Alphabet X (formerly Google X).
+
+
+Jack studied philosophy and neuroscience at Columbia University and was awarded a Stanley Fellowship in Clinical Neuroscience at the
+National Institutes of Health (NIH). Under the fellowship, he conducted research on the use of neural networks
+to model and analyze the data in functional neuroimaging using techniques such as positron emission tomography (PET)
+and functional Magnetic Resonance Imaging (fMRI) to study brain states.
+
+
+
+After doing research in neuroscience and AI at the NIH, Jack followed his fascination with technology and established EarthWeb, a company dedicated to the needs of tech professionals. Jack co-founded the company with his brother Murray Hidary and friend Nova Spivack. Jack led the company from its inception through three rounds of investment and then its IPO on NASDAQ. Under Jack's leadership, EarthWeb acquired Dice.com, a website that connects users with jobs, and other sites dedicated to the needs of IT professionals (NYSE: DHX).
+As Chairman and CEO of the public company, Jack continued to grow the company and engage with shareholders, customers and analysts. After running the public company for more than three years, Jack handed off to a new CEO.
+
+
+
+Jack became active in public service. He has been a board member of Trickle Up, which helps thousands of entrepreneurs start small businesses each year. Jack established the Hidary Foundation to focus on medical research.
+Jack is also the co-founder and Chairman of Samba Energy, a technology company that integrates and implements clean tech solutions for enterprise customers.
+
+
+
+Jack is now the Senior Advisor to X Labs, the advanced innovation lab of Alphabet/Google.
+
+
+
+
+ Event Info
+
+The event will take place on Tuesday, 12 February, 2019 at 6pm at the Kirchhoff-Institute for Physics, Im Neuenheimer Feld 227. Kindly help us plan ahead by registering for the event on our meetup page. As usual, we will offer free snacks and beer after the talk, kindly provided by DKFZ's Medical Image Computing Division.
diff --git a/_posts/2019-03-21-bayesian_uncertainty.html b/_posts/2019-03-21-bayesian_uncertainty.html
new file mode 100644
index 0000000000..356a0afe4a
--- /dev/null
+++ b/_posts/2019-03-21-bayesian_uncertainty.html
@@ -0,0 +1,29 @@
+---
+layout: post
+title: "Leveraging (Bayesian) Uncertainty Information: Opportunities and Failure Modes"
+subtitle: "Christian Leibig, MX Healthcare"
+date: 2019-03-21 19:00:00 +0000
+background: '/img/posts/leibig_banner_2.png'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+
+Deep learning (DL) has revolutionized the field of computer vision and image processing. In medical
+imaging, algorithmic solutions based on DL have been shown to achieve high performance on tasks
+that previously required medical experts. However, DL-based solutions for disease detection have
+been proposed without methods to quantify and control their uncertainty in a decision. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images and show that they capture uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty-informed decision referral can improve diagnostic performance. Experiments across different networks, tasks and datasets show robust generalization. We analyse causes of uncertainty by relating intuitions from 2D visualizations to the high-dimensional image space. While uncertainty is sensitive to clinically relevant cases, sensitivity to unfamiliar data samples is task-dependent, but can be rendered more robust. The opportunities and failure modes identified here will be put in a broader context by relating them to recent developments.
+
+ Bio
+
+After finishing his PhD at the University of Tuebingen, Christian Leibig joined Merantix, Germany's leading AI healthcare startup based in Berlin. His work on Uncertainty in Deep Learning was published in Nature Scientific Reports. His talk will be based on a keynote he held at the "Bayesian Deep Learning Workshop" at NeurIPS 2018.
+
+
+
+
+ Event Info
+
+The event will take place on Thursday, 21 March, 2019 at 7pm at the DKFZ Communication Center (K1+K2), Im Neuenheimer Feld 280. Drinks and snacks will be provided, courtesy of the Division of Medical Image Computing at DKFZ. Kindly help us plan ahead by registering for the event on our meetup page.
diff --git a/_posts/2019-07-09-graph-neural-networks.html b/_posts/2019-07-09-graph-neural-networks.html
new file mode 100644
index 0000000000..4b1169c277
--- /dev/null
+++ b/_posts/2019-07-09-graph-neural-networks.html
@@ -0,0 +1,30 @@
+---
+layout: post
+title: "Learning the Structure of Graph Neural Networks"
+subtitle: "Mathias Niepert, NEC Labs Europe"
+date: 2019-07-09 18:00:00 +0000
+background: '/img/posts/mathias_niepert_background.png'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+
+Graph-structured data is ubiquitous and occurs in several application domains. Graph representation learning approaches, however, have been limited to applications where a graph structure is given, or have used heuristics to construct affinity graphs before learning commences. One of the long-standing goals of machine learning is to infer and leverage relational dependencies, even if they are not available a priori. We propose an end-to-end differentiable framework that jointly learns the graph and the weights of a graph convolutional network by approximately solving a bilevel program whose inner and outer objectives aim at optimizing, respectively, the parameters of a GCN and its graph structure. This makes graph neural networks applicable to a much wider range of learning problems. We show that the proposed method outperforms related approaches by a significant margin on datasets where the dependency structure is either incomplete or completely missing. I'll also describe some applications of graph neural networks in the (bio-)medical domain.
+
+
+ Bio
+
+
+Mathias Niepert is a chief research scientist of the Systems and Machine Learning (SysML) group at NEC Labs Heidelberg.
+From 2012 to 2015 he was a postdoctoral research associate at the Allen School of Computer Science, University of Washington.
+He was also a member of the Data and Web Science Research Group at the University of Mannheim and co-founded several open-source digital humanities projects such as the Indiana Philosophy Ontology Project and the Linked Humanities Project.
+
+
+
+
+ Event Info
+
+The event will take place on Tuesday, 9 July, 2019 at 6pm at the DKFZ Communication Center (K1+K2), Im Neuenheimer Feld 280. Drinks and snacks will be provided, courtesy of the Division of Medical Image Computing at DKFZ. Kindly help us plan ahead by registering for the event on our meetup page.
diff --git a/_posts/2019-09-23-iwr-school.html b/_posts/2019-09-23-iwr-school.html
new file mode 100644
index 0000000000..e8562fa270
--- /dev/null
+++ b/_posts/2019-09-23-iwr-school.html
@@ -0,0 +1,25 @@
+---
+layout: post
+title: "A Crash Course in Machine Learning with Applications in Natural- and Life Sciences"
+subtitle: "IWR School"
+date: 2019-09-23 18:00:00 +0000
+background: 'https://typo.iwr.uni-heidelberg.de/fileadmin/user_upload/Header_IWR_S1_732x250.png'
+future_date: False
+layer_shift: True
+---
+
+
+Description
+
+Our partner, the Interdisciplinary Center for Scientific Computing (IWR), is hosting a school for MSc and PhD students as well as postdocs.
+ A Crash Course in Machine Learning with Applications in Natural- and Life Sciences (ML4Nature) targets young researchers from Natural Sciences and Life Sciences who want to learn more about machine learning.
+ As the organizers state, a background in machine learning is not required. Besides introducing the basic concepts of machine learning, the school teaches selected topics in more depth, such as deep learning, metric learning, transfer learning, Bayesian inverse problems, and causality. Experts from machine learning, the Natural Sciences and the Life Sciences explain how these machine learning approaches are utilized to solve problems in their respective fields of research.
+
+
+
+ The event is co-organized by our program committee member Carsten Rother and includes lectures by our board member Klaus Maier-Hein and past heidelberg.ai speaker Björn Ommer.
+
+
+ Event Info
+
+The event will take place September 23-27, 2019 in Heidelberg, but the application deadline is July 7. Please check the event homepage for details.
diff --git a/_posts/2019-10-21-grantcharov.html b/_posts/2019-10-21-grantcharov.html
new file mode 100644
index 0000000000..2cfcd6ee28
--- /dev/null
+++ b/_posts/2019-10-21-grantcharov.html
@@ -0,0 +1,50 @@
+---
+layout: post
+title: "Using data to enhance teamwork, team performance, and patient safety in the OR"
+subtitle: "Teodor Grantcharov, University of Toronto"
+date: 2019-10-21 12:30:00 +0000
+background: '/img/posts/grantcharov.jpg'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+
+Despite significant advances in technology and research, the Operating Room remains one of the most secretive environments in modern society.
+This presentation will highlight the impact of transparency and data-driven education & quality improvement on team performance and patient safety.
+It will review best practices from other high-risk, high-performance industries, and highlight the cultural barriers and the opportunities for transferring similar approaches to the peri-operative setting. The speaker will review the introduction of the OR Black Box platform and the use of Artificial Intelligence to analyze performance and surgical safety. Finally, the presenter will highlight the importance of changing the safety culture in the Operating Room in order to transform surgery into an ultra-safe industry.
+
+
+ Bio
+
+
+ Dr. Teodor Grantcharov completed his surgical training at the University of Copenhagen,
+ and a doctoral degree in Medical Sciences at the University of Aarhus in Denmark.
+ Dr. Grantcharov is a Professor of Surgery at the University of Toronto. He holds the
+ Keenan Chair in Surgery at St. Michael’s Hospital in Toronto and Canada Research Chair
+ in Simulation and Surgical Safety.
+ Dr. Grantcharov is the Director of the International Centre for Surgical Safety – a
+ multidisciplinary group of visionary scientists with expertise in design, human factors,
+ computer and data science, and healthcare.
+
+
+ Dr. Grantcharov’s clinical interest is in the area of minimally invasive surgery, while his
+ academic focus is in the field of surgical innovation and patient safety. He has become
+ internationally recognized as a leader in this area with his work on curriculum design,
+ assessment of competence and impact of surgical performance on clinical outcomes. Dr.
+ Grantcharov developed the surgical black box concept, which aims to transform the
+ safety culture in medicine and introduce modern safety management systems in the high-risk operating room environment.
+ Dr. Grantcharov has more than 170 peer-reviewed publications and more than 180
+ invited presentations in Europe and North and South America. He sits on the Board of
+ Governors of the American College of Surgeons (ACS) and on numerous committees
+ with Surgical Professional Societies in North America and Europe.
+ He sits on the Editorial Boards of the British Journal of Surgery and Surgical Endoscopy.
+
+
+
+
+ Event Info
+
+The event will take place on Monday, 21 October, 2019 at 12:30pm in the Großer Hörsaal, 3.OG, Chirurgische Klinik, Im Neuenheimer Feld 110. Kindly help us plan ahead by registering for the event on our meetup page.
diff --git a/_posts/2019-11-20-koethe.html b/_posts/2019-11-20-koethe.html
new file mode 100644
index 0000000000..0dfe741fc1
--- /dev/null
+++ b/_posts/2019-11-20-koethe.html
@@ -0,0 +1,37 @@
+---
+layout: post
+title: "Analyzing Inverse Problems in Natural Science using Invertible Neural Networks"
+subtitle: "Ullrich Koethe, HCI"
+date: 2019-11-20 19:00:00 +0000
+background: '/img/posts/koethe_background.jpg'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+
+ Uncertainty quantification is a hot topic in neural network research.
+ This talk will focus on inverse problems, where high uncertainty arises from the inherent ambiguities of ill-posed inverse processes.
+ This type of problem is ubiquitous in natural sciences, and existing approaches are either very expensive or suffer from drastic approximations.
+ The talk presents a new class of invertible neural networks that generalize established Bayesian approaches from the linear to the non-linear setting.
+ These networks work equally well in the forward as well as the inverse direction and thus enable new training and approximation methods,
+ which become asymptotically exact in the limit of perfect convergence. A variety of promising results from medical imaging, computer vision,
+ and environmental physics demonstrate the practical utility of the new method.
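
As a toy illustration of the invertibility at the heart of such networks (our own sketch, not the speaker's architecture), an affine coupling layer transforms half of the input conditioned on the other half, so the mapping can be undone exactly:

```python
import numpy as np

# Toy affine coupling layer: split the input, let one half parametrize
# an affine transform of the other half. Because the conditioning half
# passes through unchanged, the layer is exactly invertible.
def forward(x, w):
    x1, x2 = np.split(x, 2)
    s = np.tanh(w @ x1)                 # scale (bounded for stability)
    t = w @ x1                          # shift
    return np.concatenate([x1, x2 * np.exp(s) + t])

def inverse(y, w):
    y1, y2 = np.split(y, 2)
    s = np.tanh(w @ y1)                 # recompute scale/shift from y1 == x1
    t = w @ y1
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

rng = np.random.default_rng(1)
w = rng.normal(size=(3, 3))             # parameters of the conditioner
x = rng.normal(size=6)
y = forward(x, w)
assert np.allclose(inverse(y, w), x)    # reconstruction is exact
```

Stacking such layers (with the roles of the halves alternating) yields a deep network that runs in both the forward and the inverse direction.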
+
+
+ Bio
+
+
+ Prof. Ullrich Koethe is heading the newly founded group on “Explainable Machine Learning” at the Visual Learning Lab Heidelberg.
+ He is interested in the connection between machine learning
+ and its applications in image analysis, medicine, and natural sciences. Explainable learning aims to open up the black box of successful learning algorithms,
+ in particular neural networks, to provide insight rather than mere numbers.
+ In addition, he is interested in generic software that brings state-of-the-art algorithms to the end user, and he maintains the VIGRA image analysis library.
+
+
+
+ Event Info
+
+The event will take place on Wednesday, 20 November, 2019 at 7:00pm at the DKFZ Communication Center (K1+K2), Im Neuenheimer Feld 280. Drinks and snacks will be provided, courtesy of the Division of Medical Image Computing at DKFZ. Kindly help us plan ahead by registering for the event on our meetup page.
diff --git a/_posts/2020-01-26-dinosaurs.html b/_posts/2020-01-26-dinosaurs.html
deleted file mode 100644
index fc3c9f0b6a..0000000000
--- a/_posts/2020-01-26-dinosaurs.html
+++ /dev/null
@@ -1,40 +0,0 @@
----
-layout: post
-title: "Dinosaurs are extinct today"
-subtitle: "because they lacked opposable thumbs and the brainpower to build a space program."
-date: 2020-01-26 23:45:13 -0400
-background: '/img/posts/01.jpg'
----
-
-Never in all their history have men been able truly to conceive of the world as one: a single sphere, a globe, having the qualities of a globe, a round earth in which all the directions eventually meet, in which there is no center because every point, or none, is center — an equal earth which all men occupy as equals. The airman's earth, if free men make it, will be truly round: a globe in practice, not in theory.
-
-Science cuts two ways, of course; its products can be used for both good and evil. But there's no turning back from science. The early warnings about technological dangers also come from science.
-
-What was most significant about the lunar voyage was not that man set foot on the Moon but that they set eye on the earth.
-
-A Chinese tale tells of some men sent to harm a young girl who, upon seeing her beauty, become her protectors rather than her violators. That's how I felt seeing the Earth for the first time. I could not help but love and cherish her.
-
-For those who have seen the Earth from space, and for the hundreds and perhaps thousands more who will, the experience most certainly changes your perspective. The things that we share in our world are far more valuable than those which divide us.
-
-The Final Frontier
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-The dreams of yesterday are the hopes of today and the reality of tomorrow. Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.
-
-Spaceflights cannot be stopped. This is not the work of any one man or even a group of men. It is a historical process which mankind is carrying out in accordance with the natural laws of human development.
-
-Reaching for the Stars
-
-As we got further and further away, it [the Earth] diminished in size. Finally it shrank to the size of a marble, the most beautiful you can imagine. That beautiful, warm, living object looked so fragile, so delicate, that if you touched it with a finger it would crumble and fall apart. Seeing this has to change a man.
-
-
-To go places and do things that have never been done before – that’s what living is all about.
-
-Space, the final frontier. These are the voyages of the Starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.
-
-As I stand out here in the wonders of the unknown at Hadley, I sort of realize there’s a fundamental truth to our nature, Man must explore, and this is exploration at its greatest.
-
-Placeholder text by Space Ipsum . Photographs by Unsplash .
diff --git a/_posts/2020-01-27-dreams.html b/_posts/2020-01-27-dreams.html
deleted file mode 100644
index f687623ea2..0000000000
--- a/_posts/2020-01-27-dreams.html
+++ /dev/null
@@ -1,39 +0,0 @@
----
-layout: post
-title: "The dreams of yesterday are the hopes of today and the reality of tomorrow."
-date: 2020-01-27 23:45:13 -0400
-background: '/img/posts/02.jpg'
----
-
-Never in all their history have men been able truly to conceive of the world as one: a single sphere, a globe, having the qualities of a globe, a round earth in which all the directions eventually meet, in which there is no center because every point, or none, is center — an equal earth which all men occupy as equals. The airman's earth, if free men make it, will be truly round: a globe in practice, not in theory.
-
-Science cuts two ways, of course; its products can be used for both good and evil. But there's no turning back from science. The early warnings about technological dangers also come from science.
-
-What was most significant about the lunar voyage was not that man set foot on the Moon but that they set eye on the earth.
-
-A Chinese tale tells of some men sent to harm a young girl who, upon seeing her beauty, become her protectors rather than her violators. That's how I felt seeing the Earth for the first time. I could not help but love and cherish her.
-
-For those who have seen the Earth from space, and for the hundreds and perhaps thousands more who will, the experience most certainly changes your perspective. The things that we share in our world are far more valuable than those which divide us.
-
-The Final Frontier
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-The dreams of yesterday are the hopes of today and the reality of tomorrow. Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.
-
-Spaceflights cannot be stopped. This is not the work of any one man or even a group of men. It is a historical process which mankind is carrying out in accordance with the natural laws of human development.
-
-Reaching for the Stars
-
-As we got further and further away, it [the Earth] diminished in size. Finally it shrank to the size of a marble, the most beautiful you can imagine. That beautiful, warm, living object looked so fragile, so delicate, that if you touched it with a finger it would crumble and fall apart. Seeing this has to change a man.
-
-
-To go places and do things that have never been done before – that’s what living is all about.
-
-Space, the final frontier. These are the voyages of the Starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.
-
-As I stand out here in the wonders of the unknown at Hadley, I sort of realize there’s a fundamental truth to our nature, Man must explore, and this is exploration at its greatest.
-
-Placeholder text by Space Ipsum . Photographs by Unsplash .
diff --git a/_posts/2020-01-28-exploration.html b/_posts/2020-01-28-exploration.html
deleted file mode 100644
index ec94119725..0000000000
--- a/_posts/2020-01-28-exploration.html
+++ /dev/null
@@ -1,40 +0,0 @@
----
-layout: post
-title: "Failure is not an option"
-subtitle: "Many say exploration is part of our destiny, but it’s actually our duty to future generations."
-date: 2020-01-28 23:45:13 -0400
-background: '/img/posts/03.jpg'
----
-
-Never in all their history have men been able truly to conceive of the world as one: a single sphere, a globe, having the qualities of a globe, a round earth in which all the directions eventually meet, in which there is no center because every point, or none, is center — an equal earth which all men occupy as equals. The airman's earth, if free men make it, will be truly round: a globe in practice, not in theory.
-
-Science cuts two ways, of course; its products can be used for both good and evil. But there's no turning back from science. The early warnings about technological dangers also come from science.
-
-What was most significant about the lunar voyage was not that man set foot on the Moon but that they set eye on the earth.
-
-A Chinese tale tells of some men sent to harm a young girl who, upon seeing her beauty, become her protectors rather than her violators. That's how I felt seeing the Earth for the first time. I could not help but love and cherish her.
-
-For those who have seen the Earth from space, and for the hundreds and perhaps thousands more who will, the experience most certainly changes your perspective. The things that we share in our world are far more valuable than those which divide us.
-
-The Final Frontier
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-The dreams of yesterday are the hopes of today and the reality of tomorrow. Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.
-
-Spaceflights cannot be stopped. This is not the work of any one man or even a group of men. It is a historical process which mankind is carrying out in accordance with the natural laws of human development.
-
-Reaching for the Stars
-
-As we got further and further away, it [the Earth] diminished in size. Finally it shrank to the size of a marble, the most beautiful you can imagine. That beautiful, warm, living object looked so fragile, so delicate, that if you touched it with a finger it would crumble and fall apart. Seeing this has to change a man.
-
-
-To go places and do things that have never been done before – that’s what living is all about.
-
-Space, the final frontier. These are the voyages of the Starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.
-
-As I stand out here in the wonders of the unknown at Hadley, I sort of realize there’s a fundamental truth to our nature, Man must explore, and this is exploration at its greatest.
-
-Placeholder text by Space Ipsum . Photographs by Unsplash .
diff --git a/_posts/2020-01-29-prophecy.html b/_posts/2020-01-29-prophecy.html
deleted file mode 100644
index 65704ea439..0000000000
--- a/_posts/2020-01-29-prophecy.html
+++ /dev/null
@@ -1,40 +0,0 @@
----
-layout: post
-title: "Science has not yet mastered prophecy"
-subtitle: "We predict too much for the next year and yet far too little for the next ten."
-date: 2020-01-29 23:45:13 -0400
-background: '/img/posts/04.jpg'
----
-
-Never in all their history have men been able truly to conceive of the world as one: a single sphere, a globe, having the qualities of a globe, a round earth in which all the directions eventually meet, in which there is no center because every point, or none, is center — an equal earth which all men occupy as equals. The airman's earth, if free men make it, will be truly round: a globe in practice, not in theory.
-
-Science cuts two ways, of course; its products can be used for both good and evil. But there's no turning back from science. The early warnings about technological dangers also come from science.
-
-What was most significant about the lunar voyage was not that man set foot on the Moon but that they set eye on the earth.
-
-A Chinese tale tells of some men sent to harm a young girl who, upon seeing her beauty, become her protectors rather than her violators. That's how I felt seeing the Earth for the first time. I could not help but love and cherish her.
-
-For those who have seen the Earth from space, and for the hundreds and perhaps thousands more who will, the experience most certainly changes your perspective. The things that we share in our world are far more valuable than those which divide us.
-
-The Final Frontier
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-The dreams of yesterday are the hopes of today and the reality of tomorrow. Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.
-
-Spaceflights cannot be stopped. This is not the work of any one man or even a group of men. It is a historical process which mankind is carrying out in accordance with the natural laws of human development.
-
-Reaching for the Stars
-
-As we got further and further away, it [the Earth] diminished in size. Finally it shrank to the size of a marble, the most beautiful you can imagine. That beautiful, warm, living object looked so fragile, so delicate, that if you touched it with a finger it would crumble and fall apart. Seeing this has to change a man.
-
-
-To go places and do things that have never been done before – that’s what living is all about.
-
-Space, the final frontier. These are the voyages of the Starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.
-
-As I stand out here in the wonders of the unknown at Hadley, I sort of realize there’s a fundamental truth to our nature, Man must explore, and this is exploration at its greatest.
-
-Placeholder text by Space Ipsum . Photographs by Unsplash .
diff --git a/_posts/2020-01-30-heartbeats.html b/_posts/2020-01-30-heartbeats.html
deleted file mode 100644
index a444e6cd18..0000000000
--- a/_posts/2020-01-30-heartbeats.html
+++ /dev/null
@@ -1,39 +0,0 @@
----
-layout: post
-title: "I believe every human has a finite number of heartbeats. I don't intend to waste any of mine."
-date: 2020-01-30 23:45:13 -0400
-background: '/img/posts/05.jpg'
----
-
-Never in all their history have men been able truly to conceive of the world as one: a single sphere, a globe, having the qualities of a globe, a round earth in which all the directions eventually meet, in which there is no center because every point, or none, is center — an equal earth which all men occupy as equals. The airman's earth, if free men make it, will be truly round: a globe in practice, not in theory.
-
-Science cuts two ways, of course; its products can be used for both good and evil. But there's no turning back from science. The early warnings about technological dangers also come from science.
-
-What was most significant about the lunar voyage was not that man set foot on the Moon but that they set eye on the earth.
-
-A Chinese tale tells of some men sent to harm a young girl who, upon seeing her beauty, become her protectors rather than her violators. That's how I felt seeing the Earth for the first time. I could not help but love and cherish her.
-
-For those who have seen the Earth from space, and for the hundreds and perhaps thousands more who will, the experience most certainly changes your perspective. The things that we share in our world are far more valuable than those which divide us.
-
-The Final Frontier
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-The dreams of yesterday are the hopes of today and the reality of tomorrow. Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.
-
-Spaceflights cannot be stopped. This is not the work of any one man or even a group of men. It is a historical process which mankind is carrying out in accordance with the natural laws of human development.
-
-Reaching for the Stars
-
-As we got further and further away, it [the Earth] diminished in size. Finally it shrank to the size of a marble, the most beautiful you can imagine. That beautiful, warm, living object looked so fragile, so delicate, that if you touched it with a finger it would crumble and fall apart. Seeing this has to change a man.
-
-
-To go places and do things that have never been done before – that’s what living is all about.
-
-Space, the final frontier. These are the voyages of the Starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.
-
-As I stand out here in the wonders of the unknown at Hadley, I sort of realize there’s a fundamental truth to our nature, Man must explore, and this is exploration at its greatest.
-
-Placeholder text by Space Ipsum . Photographs by Unsplash .
diff --git a/_posts/2020-01-31-man-must-explore.html b/_posts/2020-01-31-man-must-explore.html
deleted file mode 100644
index 4dd948b4d7..0000000000
--- a/_posts/2020-01-31-man-must-explore.html
+++ /dev/null
@@ -1,40 +0,0 @@
----
-layout: post
-title: "Man must explore, and this is exploration at its greatest"
-subtitle: "Problems look mighty small from 150 miles up"
-date: 2020-01-31 10:45:13 -0400
-background: '/img/posts/06.jpg'
----
-
-Never in all their history have men been able truly to conceive of the world as one: a single sphere, a globe, having the qualities of a globe, a round earth in which all the directions eventually meet, in which there is no center because every point, or none, is center — an equal earth which all men occupy as equals. The airman's earth, if free men make it, will be truly round: a globe in practice, not in theory.
-
-Science cuts two ways, of course; its products can be used for both good and evil. But there's no turning back from science. The early warnings about technological dangers also come from science.
-
-What was most significant about the lunar voyage was not that man set foot on the Moon but that they set eye on the earth.
-
-A Chinese tale tells of some men sent to harm a young girl who, upon seeing her beauty, become her protectors rather than her violators. That's how I felt seeing the Earth for the first time. I could not help but love and cherish her.
-
-For those who have seen the Earth from space, and for the hundreds and perhaps thousands more who will, the experience most certainly changes your perspective. The things that we share in our world are far more valuable than those which divide us.
-
-The Final Frontier
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-There can be no thought of finishing for ‘aiming for the stars.’ Both figuratively and literally, it is a task to occupy the generations. And no matter how much progress one makes, there is always the thrill of just beginning.
-
-The dreams of yesterday are the hopes of today and the reality of tomorrow. Science has not yet mastered prophecy. We predict too much for the next year and yet far too little for the next ten.
-
-Spaceflights cannot be stopped. This is not the work of any one man or even a group of men. It is a historical process which mankind is carrying out in accordance with the natural laws of human development.
-
-Reaching for the Stars
-
-As we got further and further away, it [the Earth] diminished in size. Finally it shrank to the size of a marble, the most beautiful you can imagine. That beautiful, warm, living object looked so fragile, so delicate, that if you touched it with a finger it would crumble and fall apart. Seeing this has to change a man.
-
-
-To go places and do things that have never been done before – that’s what living is all about.
-
-Space, the final frontier. These are the voyages of the Starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before.
-
-As I stand out here in the wonders of the unknown at Hadley, I sort of realize there’s a fundamental truth to our nature, Man must explore, and this is exploration at its greatest.
-
-Placeholder text by Space Ipsum . Photographs by Unsplash .
diff --git a/_posts/2020-02-06-ruder.html b/_posts/2020-02-06-ruder.html
new file mode 100644
index 0000000000..697f15bab6
--- /dev/null
+++ b/_posts/2020-02-06-ruder.html
@@ -0,0 +1,27 @@
+---
+layout: post
+title: "Cross-lingual Transfer Learning"
+subtitle: "Sebastian Ruder, DeepMind"
+date: 2020-02-06 18:15:00 +0000
+background: '/img/posts/ruder_background.png'
+future_date: False
+layer_shift: True
+---
+
+
+Abstract
+
+
+ Research in natural language processing (NLP) has seen many advances over the recent years, from word embeddings to pretrained language models. However, most of these approaches rely on large labelled datasets, which has constrained their success to languages where such data is plentiful. In this talk, I will give an overview of approaches that transfer knowledge across languages and enable us to scale NLP models to more of the world's 7,000 languages. I will cover the spectrum of recent cross-lingual transfer approaches, from word embeddings to deep pretrained models. The talk will conclude with a discussion of the cutting-edge of learning such representations, their limitations, and future directions.
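
A classic building block of such cross-lingual transfer is mapping one language's word-embedding space onto another's with an orthogonal transformation. The numpy sketch below is our own illustration on synthetic data (not code from the talk), using the closed-form orthogonal Procrustes solution:

```python
import numpy as np

# Hypothetical toy data: pretend Y holds the target-language embeddings of
# the translations of the words embedded in X (here simply an exact rotation).
rng = np.random.default_rng(0)
d = 50
R, _ = np.linalg.qr(rng.normal(size=(d, d)))   # ground-truth rotation
X = rng.normal(size=(200, d))                  # source-language embeddings
Y = X @ R                                      # target-language embeddings

# Orthogonal Procrustes: W = argmin ||X W - Y||_F subject to W orthogonal,
# solved in closed form from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

assert np.allclose(W @ W.T, np.eye(d))   # learned mapping stays orthogonal
assert np.allclose(X @ W, Y)             # toy translation pairs align exactly
```

Real bilingual dictionaries are noisy, so the alignment is only approximate there, but the same closed-form step applies.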
+
+
+ Bio
+
+
+ On his webpage ruder.io , which is also a well-known Machine Learning blog, Sebastian writes: I'm a research scientist at DeepMind, London. I completed my PhD in Natural Language Processing and Deep Learning at the Insight Research Centre for Data Analytics, while working as a research scientist at Dublin-based text analytics startup AYLIEN. Previously, I've studied Computational Linguistics at the University of Heidelberg, Germany and at Trinity College, Dublin. During my studies, I've worked with Microsoft, IBM's Extreme Blue, Google Summer of Code, and SAP, among others. I'm interested in transfer learning for NLP and making ML and NLP more accessible.
+
+
+
+ Event Info
+
+The event will take place on Thursday, 6 February, 2020 at 6:15pm at the Seminarraum in the Gästehaus of Heidelberg University, Im Neuenheimer Feld 370. Drinks and pizza will be provided, courtesy of the Division of Medical Image Computing at DKFZ. Kindly help us plan ahead by registering for the event on our meetup page.
diff --git a/_posts/2020-03-24-vincent.html b/_posts/2020-03-24-vincent.html
new file mode 100644
index 0000000000..a917872e87
--- /dev/null
+++ b/_posts/2020-03-24-vincent.html
@@ -0,0 +1,38 @@
+---
+layout: post
+title: "[CANCELLED] Immersions - Visualizing and sonifying how an Artificial Ear hears music"
+subtitle: "Vincent Herrmann, University of Music Karlsruhe"
+date: 2020-03-24 19:00:00 +0000
+background: '/img/posts/vincent_background.gif'
+future_date: False
+layer_shift: True
+---
+
+We will reschedule this event for the summer!
+
+Abstract
+
+
+ Immersions is a system that lets us interact with and explore an audio processing neural network,
+ or what Vincent calls an “artificial ear”. This network was trained in an unsupervised way on various music datasets
+ using a contrastive predictive coding method. There are two modes of showing its inner workings: one is visual,
+ the other is sonic. For the visualization, the neurons of the network are first laid out in two-dimensional space,
+ and their activations are then shown in real time as the input changes. To make audible how music sounds
+ to the artificial ear, an optimization procedure generates sounds that specifically activate certain neurons in the network.
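
The sound-generation idea can be sketched as activation maximization: gradient ascent on the input until a chosen neuron fires strongly. The tiny one-layer "ear" below is a hypothetical stand-in for the real audio network, purely for illustration:

```python
import numpy as np

# Toy "artificial ear": a single tanh layer mapping 64 input samples to
# 16 units. We ascend the gradient of one unit's activation with respect
# to the input, the core trick behind generating neuron-specific sounds.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))

def activation(x, unit):
    return np.tanh(W @ x)[unit]

x = rng.normal(size=64) * 0.01          # start from near-silence
unit, lr = 3, 0.1
for _ in range(200):
    pre = W @ x
    grad = (1 - np.tanh(pre[unit]) ** 2) * W[unit]   # d tanh(w.x)/dx
    x += lr * grad                                   # gradient ascent step

assert activation(x, unit) > 0.99       # the chosen unit now fires strongly
```

For a deep network the gradient comes from automatic differentiation rather than this hand-derived formula, but the optimization loop is the same.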
+
+
+ Bio
+
+
+ Vincent Herrmann caused quite a stir at last year's NeurIPS in Vancouver
+ when demonstrating his Artificial Ear, which also earned him a well-deserved "Best Demonstration" award. His background is in classical music,
+ but in recent years he has become more and more interested in the field of machine learning and artificial
+ intelligence. At the moment he is finishing his Master's thesis at the Bosch Center for Artificial Intelligence.
+ Right before that he got a Master’s degree in piano from the University of Music Stuttgart.
+
+
+
+ Event Info
+
+The event will take place on Tuesday, 24 March, 2020 at 7:00pm at the DKFZ Communication Center (K1+K2), Im Neuenheimer Feld 280. Drinks and snacks will be provided, courtesy of the Division of Medical Image Computing at DKFZ.
+ Kindly help us plan ahead by registering for the event on our meetup page.
diff --git a/_posts/2020-05-06-welling.html b/_posts/2020-05-06-welling.html
new file mode 100644
index 0000000000..adf958db9c
--- /dev/null
+++ b/_posts/2020-05-06-welling.html
@@ -0,0 +1,51 @@
+---
+layout: post
+title: "Learning Equivariant and Hybrid Message Passing on Graphs"
+subtitle: "Max Welling, University of Amsterdam"
+date: 2020-05-06 11:00:00 +0000
+background: '/img/posts/welling_banner.png'
+future_date: False
+layer_shift: True
+---
+
+Abstract
+
+
+ In this talk I will extend graph neural nets in two directions. First, we will ask whether we can formulate a GNN on
+ meshes of two-dimensional manifolds. Previous approaches mostly used standard GNNs, which are invariant to
+ permutations of the input nodes. However, we show this is unnecessarily restrictive. Instead, we define mesh-CNNs
+ which are equivariant and allow more general kernels. Second, we will study how to incorporate information about the
+ data generating process into GNNs. Belief propagation is a form of GNN with no learnable parameters that performs
+ inference in a generative graphical model. We subsequently augment BP by a trainable GNN to correct the mistakes of
+ BP, in order to improve predictive performance. Experiments show the increased power of both methods.
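
For readers unfamiliar with belief propagation, here is a minimal sum-product example on a three-node chain of binary variables (our own sketch, unrelated to the talk's implementation). On a tree, BP is exact, which the code verifies against brute-force enumeration:

```python
import numpy as np

# Chain x0 - x1 - x2 with random node (unary) and edge (pairwise) potentials.
rng = np.random.default_rng(0)
unary = rng.random((3, 2))                     # node potentials
pair = [rng.random((2, 2)) for _ in range(2)]  # edge potentials (0-1, 1-2)

# Sum-product messages: forward along the chain, then backward.
m_fwd = [np.ones(2)]                           # message arriving at node 0 from the left
for i in range(2):
    m_fwd.append(pair[i].T @ (unary[i] * m_fwd[i]))
m_bwd = [np.ones(2)]                           # message arriving at node 2 from the right
for i in range(1, -1, -1):
    m_bwd.insert(0, pair[i] @ (unary[i + 1] * m_bwd[0]))

def marginal(i):
    b = unary[i] * m_fwd[i] * m_bwd[i]         # belief = potential * incoming messages
    return b / b.sum()

# Brute-force ground truth: enumerate the full joint and sum it out.
joint = np.einsum('a,b,c,ab,bc->abc', unary[0], unary[1], unary[2], pair[0], pair[1])
joint /= joint.sum()
assert np.allclose(marginal(1), joint.sum(axis=(0, 2)))
```

The hybrid approach described above keeps these fixed, parameter-free updates and adds a trainable GNN on top to correct BP's errors on loopy graphs, where BP is no longer exact.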
+
+
+ Bio
+
+
+ Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a VP Technologies
+ at Qualcomm. He has a secondary appointment as a senior fellow at the Canadian Institute for Advanced Research
+ (CIFAR). He is co-founder of “Scyfer BV”, a university spin-off in deep learning which was acquired by Qualcomm in
+ summer 2017. In the past he held postdoctoral positions at Caltech (’98-’00), UCL (’00-’01) and the U. Toronto
+ (’01-’03). He received his PhD in ’98 under supervision of Nobel laureate Prof. G. ‘t Hooft. Max Welling has served
+ as associate editor in chief of IEEE TPAMI from 2011-2015 (impact factor 4.8). He serves on the board of the NIPS
+ foundation since 2015 (the largest conference in machine learning) and has been program chair and general chair of
+ NIPS in 2013 and 2014 respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016 and general chair
+ of MIDL 2018. He has served on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing,
+ JCGS and TPAMI. He received multiple grants from Google, Facebook, Yahoo, NSF, NIH, NWO and ONR-MURI among which an
+ NSF career grant in 2005. He is recipient of the ECCV Koenderink Prize in 2010. Welling is in the board of the Data
+ Science Research Center in Amsterdam, he directs the Amsterdam Machine Learning Lab (AMLAB), and co-directs the
+ Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA). Max Welling has over 250
+ scientific publications in machine learning, computer vision, statistics and physics and an h-index of 62.
+
+
+
+ Event Info
+
+The event will take place on Wednesday, 06 May, 2020 at 11:00am . It will be a joint event with the DKFZ Data Science Seminar and will take place
+ online . You can join the live stream at this URL
+ and ask questions via chat!
+ Kindly help us plan ahead by registering for the event on our meetup page .
+
\ No newline at end of file
diff --git a/_posts/2020-07-08-glocker.html b/_posts/2020-07-08-glocker.html
new file mode 100644
index 0000000000..0915bb4f4f
--- /dev/null
+++ b/_posts/2020-07-08-glocker.html
@@ -0,0 +1,47 @@
+---
+layout: post
+title: "Uncertainty, causality and generalization: Attempts to improve image-based predictive modelling"
+subtitle: "Ben Glocker, Imperial College London"
+date: 2020-07-08 12:00:00 +0000
+background: '/img/posts/glocker.jpg'
+future_date: False
+layer_shift: True
+---
+
+Abstract
+
+
+ This talk will give an overview of some recent work by our team on various aspects of predictive modelling in
+ imaging.
+ We will discuss how the language of causality can shed some new light on key challenges in machine learning for
+ medical imaging,
+ namely data scarcity and mismatch. For the latter, we present a meta-learning algorithm for domain generalization.
+ We also look at some very recent results of our attempt to generate counterfactual images using deep structural
+ causal models.
+ Finally, we introduce a simple yet effective component for modelling spatially correlated aleatoric uncertainty in
+ image segmentation
+ which can be plugged into any existing network architecture.
+ The resulting stochastic segmentation networks predict multiple plausible segmentation maps with a single forward
+ pass.
+
+
+ Bio
+
+
+ Ben Glocker is Reader (eq. Associate Professor) in Machine Learning for Imaging at Imperial College London.
+ He holds a PhD from TU Munich and was a post-doc at Microsoft and a Research Fellow at the University of Cambridge.
+ His research is at the intersection of medical image analysis and artificial intelligence aiming to build
+ computational tools
+ for improving image-based detection and diagnosis of disease.
+
+
+
+ Event Info
+
+The event will take place on Wednesday, 08 July, 2020 at 12:00pm . It will be a joint event with the DKFZ Data Science Seminar and will take place
+ online . You can join the live stream at this URL
+ and ask questions via chat!
+ Kindly help us plan ahead by registering for the event on our meetup page .
+
\ No newline at end of file
diff --git a/_posts/2020-10-07-bronstein.html b/_posts/2020-10-07-bronstein.html
new file mode 100644
index 0000000000..0fe2c81e5a
--- /dev/null
+++ b/_posts/2020-10-07-bronstein.html
@@ -0,0 +1,45 @@
+---
+layout: post
+title: "Deep Learning on Graphs: Successes, Challenges, and Next Steps"
+subtitle: "Michael Bronstein, Imperial College London"
+date: 2020-10-07 11:00:00
+background: '/img/posts/bronstein.png'
+future_date: False
+layer_shift: True
+---
+
+Abstract
+
+
+ Deep learning on graphs and network-structured data has recently become one of the hottest topics in machine
+ learning. Graphs are powerful mathematical abstractions that can describe complex systems of relations and
+ interactions in fields ranging from biology and high-energy physics to social science and economics. In this talk,
+ Michael Bronstein will outline the basic methods, applications, challenges and possible future directions in the
+ field.
+
+
+ Bio
+
+
+ Michael Bronstein is a professor at Imperial College London, where he holds the Chair in Machine Learning and
+ Pattern Recognition, and is Head of Graph Learning Research at Twitter. Michael received his PhD from the Technion in
+ 2007. He has held visiting appointments at Stanford, MIT, Harvard, and Tel Aviv University, and has also been
+ affiliated with three Institutes for Advanced Study (at TU Munich as a Rudolf Diesel Fellow (2017-2019), at Harvard
+ as a Radcliffe fellow (2017-2018), and at Princeton (2020)). Michael is the recipient of five ERC grants, Member of
+ Academia Europaea, Fellow of IEEE, BCS, IAPR, and ELLIS, ACM Distinguished Speaker, and World Economic Forum Young
+ Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup
+ companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter
+ in 2019). He has previously served as Principal Engineer at Intel Perceptual Computing and was one of the key
+ developers of the Intel RealSense technology.
+
+
+ Event Info
+
+The event will take place on Wednesday, 07 October, 2020 at 11:00am . It will be a joint event with the DKFZ Data Science Seminar and will take
+ place
+ online . You can join the live stream at this URL
+ and ask questions via chat!
+ Kindly help us plan ahead by registering for the event on our meetup page .
+
\ No newline at end of file
diff --git a/_posts/2020-10-21-mann.html b/_posts/2020-10-21-mann.html
new file mode 100644
index 0000000000..0260ef96c9
--- /dev/null
+++ b/_posts/2020-10-21-mann.html
@@ -0,0 +1,42 @@
+---
+layout: post
+title: "From Development to a Certified Medical Product: Bringing AI solutions to the Patient"
+subtitle: "Philipp Mann, Mediaire"
+date: 2020-10-21 11:00:00
+background: '/img/posts/mann.jpg'
+future_date: False
+layer_shift: True
+---
+
+Abstract
+
+
+ Although numerous AI applications are already part of our daily lives, the effective use of AI is still quite new
+ in the field of personalized medicine. Recently, various research projects have shown very promising results on
+ helping medical doctors automate image analysis and patient diagnosis, as well as predict patient outcomes.
+ However, such algorithms are mostly used for research purposes; translating them into a sustainable commercial
+ product requires many different skill sets. In this presentation, Philipp Mann will outline how this was done and which
+ hurdles must be cleared to successfully launch an AI-based analysis tool for the quantitative evaluation of brain
+ MRI.
+
+
+ Bio
+
+
+ Philipp Mann works part-time as a post-doc at the German Cancer Research Center and for the small start-up
+ "mediaire" in Berlin in product development and sales. Philipp received his PhD from the Physics Faculty Heidelberg
+ in 2017 in the field of MR-guided radiotherapy. Since then, he has been leading a small research team and acquired a
+ BMBF grant in close cooperation with the University Clinic Heidelberg. For mediaire, he was responsible for the
+ market launch of an AI-based image analysis tool, management of scientific cooperations and customer-specific
+ product development.
+
+
+ Event Info
+
+The event will take place on Wednesday, October 21, 2020 at 11:00am . It will be a joint event with the DKFZ Data Science Seminar and will take place
+ online . You can join the live stream at this URL
+ and ask questions via chat!
+ Kindly help us plan ahead by registering for the event on our meetup page .
+
\ No newline at end of file
diff --git a/_posts/2020-12-07-neurips.html b/_posts/2020-12-07-neurips.html
new file mode 100644
index 0000000000..5a2c187d6a
--- /dev/null
+++ b/_posts/2020-12-07-neurips.html
@@ -0,0 +1,38 @@
+---
+layout: post
+title: "Official NeurIPS Meetup 2020"
+subtitle: "by Freiburg AI and Heidelberg AI"
+custom_date: "December 7-11, 2020"
+date: 2020-12-07
+background: '/img/posts/neurips_logo.png'
+future_date: False
+layer_shift: True
+---
+
+
+
+ Dear fellow AI enthusiasts,
+
+ We are excited to announce that we have been selected as one of the Official NeurIPS Meetups
+ and we are organizing a series of 5 events jointly hosted by Freiburg AI and Heidelberg AI.
+ NeurIPS is the largest AI conference worldwide, and “NeurIPS Meetups” are described on the conference
+ webpage as follows:
+
+
+ “The goal of NeurIPS meetups is to open up access to communities worldwide and to connect
+ participants by geographic area and language. With this, we hope to support the growth of AI expertise
+ around the world, including in underrepresented communities in tech, and to fuel innovation responsibly.”
+
+
+ What this means is that we get access to all conference content in real time, from which we will
+ select at least one talk per day to stream for you. For a complete week (Monday, 7. Dec to Friday, 11. Dec),
+ we will host digital meetups starting at 19:00 every day. To encourage insightful after-stream discussions,
+ domain experts will put the talks into perspective with their own research and they will answer questions
+ after the talks. During the week, there will also be room for personal exchange in digital meet and greet
+ events.
+
+
+ A detailed day-to-day schedule can be found on our meetup page .
+
+
\ No newline at end of file
diff --git a/_posts/2021-02-24-butter.html b/_posts/2021-02-24-butter.html
new file mode 100644
index 0000000000..8ff0e78c0e
--- /dev/null
+++ b/_posts/2021-02-24-butter.html
@@ -0,0 +1,50 @@
+---
+layout: post
+title: "Boosting high energy physics with generative networks"
+subtitle: "Anja Butter, Heidelberg University"
+date: 2021-02-24 11:00:00
+background: '/img/posts/butter.png'
+future_date: False
+layer_shift: True
+---
+
+Abstract
+
+
+ Physicists at the Large Hadron Collider (LHC) are searching for signs of new physics to answer fundamental questions
+ like the nature of dark matter.
+ In many scenarios these signals are expected to be as rare as 1 in 10^10 events.
+ We therefore require simulations which can model complex event structures with high precision.
+ LHC physics is unique in the sense that we can rely on first-principles based predictions,
+ which means that simulations rely on a small number of fundamental parameters to simulate observables over many
+ orders of magnitude.
+ I will show how generative neural networks can be used to supplement these simulations in order to match precision
+ requirements of future collider experiments.
+ In addition, one can use flow-based invertible networks to invert the simulation chain and unfold detector-level
+ events to understand the mechanisms at the heart of proton collisions.
+
+
+ Bio
+
+
+ Anja Butter is a postdoc at the ITP in Heidelberg, working on particle physics.
+ Her research interests include Higgs physics, any signs of physics beyond the Standard Model, and the development of
+ machine learning techniques to learn more about both.
+ As the LHC is collecting large amounts of data in the search for new physics, machine learning has become an
+ exciting technique to analyze and learn from these data.
+ While it is crucial to push the limits of particle physics with new developments and techniques,
+ the obtained results have to be understood in a global context.
+ She is therefore also interested in global analyses of models such as effective field theory and supersymmetry, in
+ which different measurements can be combined.
+
+
+ Event Info
+
+The event will take place on Wednesday, February 24, 2021 at 11:00am . It will be a joint event with the DKFZ Data Science Seminar and will take place
+ online . You can join the live stream at this URL !
+ Kindly help us plan ahead by registering for the event on our meetup page .
+
+
\ No newline at end of file
diff --git a/_posts/2021-03-24-hamprecht.html b/_posts/2021-03-24-hamprecht.html
new file mode 100644
index 0000000000..4dd9471af7
--- /dev/null
+++ b/_posts/2021-03-24-hamprecht.html
@@ -0,0 +1,55 @@
+---
+layout: post
+title: "Signed Graph Partitioning: An Important Computer Vision Primitive"
+subtitle: "Fred Hamprecht, Heidelberg University"
+date: 2021-03-24 11:00:00
+background: '/img/posts/hamprecht.png'
+future_date: False
+layer_shift: True
+---
+
+Abstract
+
+Perennial computer vision problems such as image partitioning, instance
+segmentation or tracking can be reduced to combinatorial graph
+partitioning problems.
+
+The majority of models developed in this context have relied on purely
+attractive interactions between graph nodes. To obtain more than a
+single cluster, it is then necessary to pre-specify a desired number of
+clusters, or set thresholds.
+
+A notable exception to the above is multicut partitioning / correlation
+clustering, which accommodates repulsive in addition to attractive
+interactions, and which automatically determines an optimal number of
+clusters. Unfortunately, the multicut problem is NP-hard.
+
+In this talk, I will characterize the combinatorial problem and discuss
+its representations in terms of node or edge labelings. I will discuss
+greedy algorithms that find approximate solutions and do well in real
+applications, in particular the mutex watershed and greedy agglomerative
+signed graph partitioning.
+
+Joint work with Steffen Wolf, Constantin Pape, Nasim Rahaman, Alberto
+Bailoni, Ullrich Koethe, Anna Kreshuk.
+
+ Bio
+
+Fred Hamprecht develops machine learning algorithms for image analysis. He applies these methods to challenging problems
+mainly from bioimage analysis, and is particularly interested in making "structured" predictions. His favorite methods
+have a sound mathematical background, such as combinatorial optimization or algebraic graph theory,
+while being widely applicable and useful in practice.
+
+Fred is a Professor at Heidelberg University and still thinks of science as the greatest profession on earth.
+
+
+ Event Info
+
+The event will take place on Wednesday, March 24, 2021 at 11:00am . It will be a joint event with the DKFZ Data Science Seminar and will take place
+ online . You can join the live stream at this URL !
+ Kindly help us plan ahead by registering for the event on our meetup page .
+
+
\ No newline at end of file
diff --git a/_posts/2021-04-22-lena-maier-hein.html b/_posts/2021-04-22-lena-maier-hein.html
new file mode 100644
index 0000000000..7206ab1b72
--- /dev/null
+++ b/_posts/2021-04-22-lena-maier-hein.html
@@ -0,0 +1,46 @@
+---
+layout: post
+title: "Why Domain Knowledge is Crucial for Machine Learning-based Medical Image Analysis"
+subtitle: "Lena Maier-Hein, DKFZ"
+date: 2021-04-22 19:00:00
+background: '/img/posts/lena_maier_hein.jpg'
+future_date: False
+layer_shift: True
+---
+
+Abstract
+
+The breakthrough successes of deep learning-based solutions in various fields of research and practice have
+ attracted a growing number of researchers to work in the field of medical image analysis. However,
+ due to the increasing number of publicly available data sets, domain knowledge and/or close collaboration with
+ domain experts may appear to no longer be an essential prerequisite.
+ To demonstrate the potentially severe consequences of this recent development,
+ this talk will highlight the importance of domain knowledge for various steps within the development process:
+ from the selection of training/test data in the presence of possible confounders, to the choice of appropriate
+ validation metrics and the interpretation of algorithm results.
+
+ Bio
+
+Lena Maier-Hein received a Diploma (2005) and PhD degree (2009) with distinction from Karlsruhe
+ Institute of Technology and conducted her postdoctoral research at the German Cancer Research Center (DKFZ) and
+ at the Hamlyn Centre for Robotic Surgery at Imperial College London. During her time as junior group leader
+ at the DKFZ (2011-2016), she finished her Habilitation (2013) at the University of Heidelberg.
+ As DKFZ division head she is now working in the field of biomedical image analysis with a specific focus on
+ surgical data science, computational biophotonics and validation of machine learning algorithms.
+ Lena Maier-Hein is a fellow of the Medical Image Computing and Computer Assisted Intervention (MICCAI) society
+ and of the European Laboratory for Learning and Intelligent Systems (ELLIS) and has been distinguished with
+ several science awards including the 2013 Heinz Maier Leibnitz Award of the German Research Foundation (DFG) and
+ the 2017/18 Berlin-Brandenburg Academy Prize. She has received a European Research Council (ERC) starting grant
+ (2015-2020) and consolidator grant (2021-2026).
+
+
+ Event Info
+
+The event will take place on Thursday, April 22, 2021 at 7:00pm . It will be a joint event with Freiburg AI and will take place
+ online . You can join the live stream at this URL !
+ Kindly help us plan ahead by registering for the event on our meetup page .
+
+
diff --git a/_posts/2021-05-20-timo-denk.html b/_posts/2021-05-20-timo-denk.html
new file mode 100644
index 0000000000..daff1ae686
--- /dev/null
+++ b/_posts/2021-05-20-timo-denk.html
@@ -0,0 +1,47 @@
+---
+layout: post
+title: "A Brief Overview of the Success Story of Large Language Models"
+subtitle: "Timo Denk, Zalando"
+date: 2021-05-20 20:00:00
+background: '/img/posts/timo_denk.jpg'
+future_date: False
+layer_shift: True
+---
+
+Abstract
+
+ BERT here, GPT there, [you name it] is all you need... Self-supervised, large-scale language models have been vastly
+ successful in recent years.
+ Today's top scorers on NLP benchmarks are almost exclusively Transformer-based architectures.
+ This DSC talk will give a brief overview of the success story of large language models.
+ We will look into the most prominent papers that are built on top of the Transformer, discuss current research
+ directions,
+ and develop an intuition for why ultra-large language models work so exceptionally well.
+
+
+ Bio
+
+ Timo Denk received his Bachelor's degree in Computer Science from DHBW Karlsruhe in 2019, working on Wordgrid,
+ an approach to understanding documents with 2-dimensional structure using word-level information.
+ With his wide range of interests, he worked on a variety of projects during his studies,
+ ranging from 3D-printing a violin to building a smart poker table that detects players' cards and calculates winning
+ probabilities
+ for each player. Since 2016, he has also operated a hardware and software development company focusing on
+ microcontroller programming.
+ In 2019, he joined Zalando's outfits team as an Applied Scientist, where he works on outfit generation with methods
+ from
+ natural language processing (NLP).
+
+ For more information about Timo please check out his website timodenk.com,
+ and for more information about this Meetup have a look at freiburg.ai .
+
+
+ Event Info
+
+The event will take place on Thursday, May 20th, 2021 at 8:00pm . It will be a joint event with Freiburg AI and will take place
+ online . You can join the live stream at this URL !
+ Kindly help us plan ahead by registering for the event on our meetup page .
+
+
\ No newline at end of file
diff --git a/_posts/2021-12-07-neurips.html b/_posts/2021-12-07-neurips.html
new file mode 100644
index 0000000000..87857565e3
--- /dev/null
+++ b/_posts/2021-12-07-neurips.html
@@ -0,0 +1,145 @@
+---
+layout: post
+title: "Official NeurIPS Meetups"
+subtitle: "by Freiburg AI and Heidelberg AI"
+custom_date: "December 7-10, 2021"
+time: 7pm
+date: 2021-12-07
+background: '/img/posts/neurips_logo.png'
+future_date: False
+layer_shift: True
+---
+
+
+
+ Dear fellow AI enthusiasts,
+
+ We are excited to announce that we will again be hosting a series of 4 NeurIPS events together with Freiburg AI.
+ NeurIPS is the largest AI conference worldwide, and “NeurIPS Meetups” are described on the conference
+ webpage as follows:
+
+ “The goal of NeurIPS meetups is to open up access to communities worldwide and to connect
+ participants by geographic area and language. With this, we hope to support the growth of AI expertise
+ around the world, including in underrepresented communities in tech, and to fuel innovation responsibly.”
+
+
+ What this means is that we get access to all conference content in real-time, from which we will
+ select at least one talk per day to stream for you. For 4 days (Tuesday, 7. Dec to Friday, 10. Dec),
+ we will host digital meetups starting at 19:00 every day. Afterwards, we will discuss the talks in a joint
+ Q&A session.
+
+ The meetups will be hosted on Zoom:
+ https://us02web.zoom.us/j/3303598315
+
+
+ We also want to point to the NeurIPS code of conduct .
+
+
+
+
+
+
Schedule
+
+
+
+
+
+ Date
+ Topic
+ Speaker
+ More information
+
+
+
+
+ Tuesday, 7.12.2021 from 19:00
+ Tutorial: "Machine Learning With Quantum Computers"
+
+ Maria Schuld (Xanadu, a startup in Toronto) and Juan Carrasquilla
+ (Vector Institute for AI, Toronto)
+
+ The tutorial targets a broad audience, and no prior knowledge of physics is
+ required.
+ Meetup Event on Tuesday
+
+
+
+ Wednesday, 8.12.2021 from 19:00
+ Keynote: "The Banality of Scale: A Theory on the Limits
+ of Modeling Bias and Fairness Frameworks for Social Justice"
+ Mary Gray (Senior Principal Researcher at Microsoft
+ Research)
+ Meetup Event on Wednesday
+
+
+
+ Thursday, 9.12.2021 from 19:00
+ Keynote: "How Duolingo Uses AI to Assess, Engage and
+ Teach Better"
+ Luis von Ahn (CEO of Duolingo)
+ Meetup Event on Thursday
+
+
+
+ Friday, 10.12.2021 from 19:00
+ Keynote: The Collective Intelligence of Army Ants, and
+ the Robots They Inspire
+ Radhika Nagpal (Professor at Harvard)
+ Meetup Event on Friday
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2022-05-05-alpha-fold.html b/_posts/2022-05-05-alpha-fold.html
new file mode 100644
index 0000000000..581202dcb7
--- /dev/null
+++ b/_posts/2022-05-05-alpha-fold.html
@@ -0,0 +1,135 @@
+---
+layout: post
+title: "Highly Accurate Protein Structure Prediction with AlphaFold"
+subtitle: "Simon Kohl, Senior Research Scientist at DeepMind"
+date: 2022-05-05
+background: '/img/posts/2022-05-05-alpha-fold-3.gif'
+future_date: False
+layer_shift: True
+---
+We are excited to have Simon Kohl, an alumnus of the Medical Imaging Division of the DKFZ and co-founder of heidelberg.ai, speaking in person at the DKFZ
+about his work conducted at DeepMind on the task of protein structure prediction and AlphaFold .
+
+
+Event Recording
+
+VIDEO
+
+Abstract
+
+
+ Proteins are essential to life, and understanding their structure can facilitate a mechanistic understanding of their function.
+ Through an enormous experimental effort, the structures of around 100,000 unique proteins have been determined,
+ but this represents a small fraction of the billions of known protein sequences. Structural coverage is bottlenecked by the months
+ to years of painstaking effort required to determine a single protein structure.
+ Accurate computational approaches are needed to address this gap and to enable large-scale structural bioinformatics.
+ Predicting the three-dimensional structure that a protein will adopt based solely on its amino acid sequence—the structure
+ prediction component of the ‘protein folding problem’—has been an important open research problem for more than 50 years.
+ Despite recent progress, existing methods fall far short of atomic accuracy,
+ especially when no homologous structure is available. Here we provide the first computational method that can regularly predict protein
+ structures with atomic accuracy even in cases in which no similar structure is known.
+ We validated an entirely redesigned version of our neural network-based model, AlphaFold,
+ in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14),
+ demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods.
+ Underpinning the latest version of AlphaFold is a novel machine learning approach that incorporates physical and biological knowledge
+ about protein structure, leveraging multi-sequence alignments into the design of the deep learning algorithm.
+
+
+
+
+Biography
+
+ Simon is a Senior Research Scientist at DeepMind, where he works on AlphaFold2 and problems in structural biology.
+ Before joining DeepMind he was a PhD student at the German Cancer Research Center in Heidelberg, Germany, where he developed generative models for biomedical image segmentation.
+ Broadly, Simon is interested in developing deep, often generative, neural networks for natural science and real world applications.
+ His research is often concerned with uncertainty quantification and the generation of multi-modal outputs in problems that allow multiple solutions.
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event at our meetup event-site .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ DKFZ Communication Center (H1), Im Neuenheimer Feld 280
+
+
+
+ Registration
+ meetup event-site
+
+
+
+
+ After the event, we will walk over to ‘Park an der Uferstraße’ to discuss and share ideas with one another over a few beverages.
+
+ Corona rules
+
+ We are happy to see that so many of you are interested in this event!
+ To allow as many people as possible to attend it, the following rules apply:
+ Attendance requires proof of 3G corona status, and wearing an FFP2 mask is mandatory throughout the whole event.
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2022-05-18-alan-karthikesalingam.html b/_posts/2022-05-18-alan-karthikesalingam.html
new file mode 100644
index 0000000000..d25295a25f
--- /dev/null
+++ b/_posts/2022-05-18-alan-karthikesalingam.html
@@ -0,0 +1,126 @@
+---
+layout: post
+title: "AI from Code to Clinic"
+subtitle: "Alan Karthikesalingam, Research Lead at Google Health UK"
+custom_date: "May 18th 2022 @ 4pm"
+date: 2022-05-18
+background: '/img/posts/2022-05-18-code-to-clinic_2.png'
+future_date: False
+layer_shift: True
+---
+We are excited to host Alan Karthikesalingam, who is the research lead at Google Health UK.
+Alan has worked on several large-scale studies with the goal of integrating Artificial Intelligence
+into the clinical workflow and has gained profound experience regarding the
+
+key challenges and the possible impact AI can provide .
+We are big admirers of his work, for instance
+
+ diagnosis of retinal diseases based on OCT Retina Scans
+
+and
+
+ breast cancer screenings with AI systems
+
+.
+The focus on rigorous evaluation throughout Alan’s work is of great importance for the whole field of applied AI.
+
+
+
+
+Abstract
+
+
+ Alan will share lessons learned across Google on a range of projects about the nuances and requirements for success for AI to move from research settings to clinical impact.
+
+Biography
+
+ Alan is a senior staff clinical research scientist in Google Health who leads the clinical team's translational research efforts, with a particular focus on bridging new ML developments and health-oriented research. He focuses on machine learning for medical imaging across multiple fields: radiology, ophthalmology and dermatology, with interests in AI safety, robustness and data-efficiency for realising real-world clinical impact. Alan is an honorary Lecturer in Vascular Surgery at Imperial College in London where he continues to see patients and supervise PhD students. He did his MA in Neuroscience and Medical Degree (MBBChir) at the University of Cambridge, followed by specialist training in surgery in the London Deanery, where he completed his Membership of the Royal College of Surgeons (MRCS), PhD in Vascular Surgery and was appointed as a NIHR Clinical Lecturer. In 2017 he joined DeepMind's health research team and in 2019 joined Google Health. He has published over 150 peer-reviewed articles, including first-author studies in the New England Journal of Medicine and The Lancet prior to DeepMind/Google, where he subsequently co-authored landmark papers in Nature and Nature Medicine on deep learning systems for mammography, ophthalmology and electronic health records.
+
+
+
+
+
+Event Info
+
+This event will be hosted on Zoom:
+
+ https://us02web.zoom.us/j/5824790176
+
+
+
+Please help us plan ahead by registering for the event at our meetup event-site .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Zoom
+
+
+
+ Registration
+
+ meetup event-site
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2022-09-27-andrulis.html b/_posts/2022-09-27-andrulis.html
new file mode 100644
index 0000000000..89ab282e0a
--- /dev/null
+++ b/_posts/2022-09-27-andrulis.html
@@ -0,0 +1,162 @@
+---
+layout: post
+title: "Human compatible world models across sizes, languages and modalities"
+subtitle: "Jonas Andrulis, Constantin Eichenberg, Robert Baldock; Aleph Alpha"
+custom_date: "September 27th 2022 @ 6pm"
+date: 2022-09-27
+background: '/img/posts/2022-09-27-andrulis-background.jpg'
+future_date: False
+layer_shift: True
+---
+
+Did you know there was an AI startup working on "General Artificial Intelligence" right here in Heidelberg?
+
+Yes, there is! And we will have them with us at heidelberg.ai!
+We are excited to have Jonas Andrulis, Constantin Eichenberg and Robert Baldock,
+
+the CEO and research scientists of the startup
+
+ Aleph Alpha
+
+from Heidelberg,
+present their work and insights on multimodal learners and world models in person at the DKFZ.
+As an example of their work and their goal of making AI technologies accessible to a wide audience, this year they
+
+open-sourced
+
+the MAGMA multimodal model, which can process images and text in any combination.
+
+
+
+
+Event Recording
+
+VIDEO
+
+
+Abstract
+
+
+ The generalizability and few-shot capabilities of large language models (LLM) like GPT-3 have opened up new possibilities for countless innovative apps.
+ LLMs demonstrate an impressive context and language understanding which enables them to solve problems that were previously intractable with deep learning.
+ This level of proficiency is based on the structure and knowledge extracted from a huge array of diverse and complex texts.
+ Up to this point, the application of large-scale language (pre-)training to the construction of multimodal models has been mostly limited to specialized tasks,
+ like visual QA or captioning, or it required expensive data annotation.
+ These attempts thus failed to make convincing use of the potential and flexibility of large language models with their hundreds of billions of parameters.
+ This is where we succeeded in building a
+
+ fully self-supervised trained multimodal model
+ by combining an existing (self-built) multi-language LLM with a pre-trained image encoder.
+ We will discuss our approach of augmenting generative language models with additional modalities using adapter-based finetuning.
+ The language model weights remain unchanged during training, allowing for the transfer of encyclopedic knowledge and in-context learning abilities from language pretraining.
+ This approach makes multimodal enhancement efficient even for very large language models and adds world knowledge and context understanding previously only seen for language models.
+ We further discuss current work using the multimodal embedding for search, explainability and classification.
+
+Presenters
+
+
+ Jonas Andrulis, CEO and founder of Aleph Alpha
+ Constantin Eichenberg, Researcher at Aleph Alpha
+ Robert Baldock, Senior researcher at Aleph Alpha
+
+
+
+
+ Aleph Alpha
+
+ is an artificial intelligence research & development company from Heidelberg, Germany. Aleph Alpha aims to revolutionize the accessibility and usability of AI towards an era of Transformative Artificial Intelligence in Europe.
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event at our
+
+
+ meetup event-site
+ .
+After the event, there will be a social get-together with food and drinks courtesy of the Division of Medical Image Computing and Interactive Machine Learning Group at the DKFZ.
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ DKFZ Communication Center (K1+K2), Im Neuenheimer Feld 280
+
+
+
+ Registration
+
+
+ meetup event-site
+
+
+
+
+
+ Corona rules
+
+ We are happy to see that so many of you are interested in this event!
+ To allow as many people as possible to attend it, the following rules apply:
+ Attendance requires proof of 3G corona status, and wearing an FFP2 mask is mandatory throughout the whole event.
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2023-02-09-stable-diffusion.html b/_posts/2023-02-09-stable-diffusion.html
new file mode 100644
index 0000000000..a4b5efb171
--- /dev/null
+++ b/_posts/2023-02-09-stable-diffusion.html
@@ -0,0 +1,133 @@
+---
+layout: post
+title: "Stable Diffusion and Friends - Generative Modeling in Latent Space"
+subtitle: "Robin Rombach, LMU Munich"
+custom_date: "February 9th 2023 @ 4pm"
+date: 2023-02-09
+background: '/img/posts/2023-02-09-stable-diffusion_2.png'
+future_date: False
+layer_shift: True
+---
+
+Are you ready to see the future of Computer Vision? Latent Diffusion Models such as Stable Diffusion and OpenAI's DALL·E are revolutionizing the way we process images, achieving unprecedented visual fidelity without the need for excessive computational power.
+This technology is already being used in countless apps and has captured the attention of the AI community and beyond.
+We are excited to have Robin Rombach, the first author of this groundbreaking work which originated here in Heidelberg, speaking at the DKFZ. Don't miss out on this opportunity to learn about cutting-edge Computer Vision.
+
+
+Event Recording
+
+VIDEO
+
+
+
+
+To prime yourself for potential use cases of Stable Diffusion, visit the
+
+ Stable Diffusion subreddit .
+
+Fun Fact: the image in the preview was generated using Stable Diffusion.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event at our
+
+
+ meetup event-site
+ .
+After the event, there will be a social get-together with food and drinks courtesy of the Division of Medical Image Computing and Interactive Machine Learning Group at the DKFZ.
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ DKFZ Communication Center (H1), Im Neuenheimer Feld 280
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+ Corona rules
+
+ We are happy to see that so many of you are interested in this event!
+ To allow as many people as possible to attend it, the following rules apply:
+ Attendance requires proof of 3G corona status, and wearing an FFP2 mask is mandatory throughout the whole event.
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2023-06-28-varoquaux.html b/_posts/2023-06-28-varoquaux.html
new file mode 100644
index 0000000000..c390412143
--- /dev/null
+++ b/_posts/2023-06-28-varoquaux.html
@@ -0,0 +1,131 @@
+---
+layout: post
+title: "Medical AI: Addressing the Validation Gap"
+subtitle: "Gaël Varoquaux, INRIA France"
+custom_date: "June 28th 2023 @ 11am"
+date: 2023-06-28
+background: '/img/posts/2023-06-28-gael.jpeg'
+future_date: False
+layer_shift: True
+---
+
+Do you know scikit-learn, the most widely used Python library for machine learning? It was the starting ground for multiple groundbreaking research works and business applications that have a profound impact on our everyday lives.
+We are excited to have Gaël Varoquaux, a co-founder of one of our favorite Python libraries (and many more), speaking at the DKFZ about his work to bring AI models into production in the medical domain by improving their evaluation before they are put to the test in the real world.
+Proper evaluation of AI is a hot topic far beyond the medical domain, so we encourage you to take this opportunity to learn about adequate validation (and, even better, to put it into action in our research).
+
+
+This event is held jointly with the NCT Data Science Seminar and ELLIS Life .
+
+
+
+
+
+
+
+Event Recording
+
+VIDEO
+
+
+Abstract
+
+
+Machine learning, which can learn to predict given labeled data, holds much promise for medical applications. And yet, experience shows that predictors that looked promising most often fail to bring the expected medical benefits. One reason is that they are evaluated detached from actual usage and medical outcomes.
+And yet, test running predictive models on actual medical decisions can be costly and dangerous. How do we bridge the gap? By improving machine-learning model evaluation. First, the metrics used to measure prediction error must capture as well as possible the cost-benefit tradeoffs of the final usage. Second, the evaluation procedure must really put models to the test: on a representative data sample, and accounting for uncertainty in model evaluation.
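+As a small illustration of the last point, accounting for uncertainty in model evaluation can be as simple as reporting a bootstrap confidence interval instead of a single score. The data and the roughly 80%-accurate classifier below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic test set and a stand-in classifier that is right ~80% of the time.
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.8, y_true, 1 - y_true)

def bootstrap_ci(y_true, y_pred, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for accuracy."""
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample cases with replacement
        scores.append(np.mean(y_true[idx] == y_pred[idx]))
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = bootstrap_ci(y_true, y_pred)
print(f"accuracy: {np.mean(y_true == y_pred):.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

+The width of the interval makes explicit how much the score could move on another representative sample; swapping accuracy for a metric that reflects the clinical cost-benefit tradeoff is the other half of the argument.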
+
+Biography
+
+
+Gaël Varoquaux is a research director working on data science at Inria (French Computer Science National research) where he leads the Soda team on computational and statistical methods to understand health and society with data. Varoquaux is an expert in machine learning, with an eye on applications in health and social science. He develops tools to make machine learning easier, suited for real-life, messy data. He co-founded scikit-learn, one of the reference machine-learning toolboxes, and helped build various central tools for data analysis in Python. He currently develops data-intensive approaches for epidemiology and public health, and worked for 10 years on machine learning for brain function and mental health. Varoquaux has a PhD in quantum physics supervised by Alain Aspect and is a graduate from Ecole Normale Superieure, Paris.
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event at our
+
+
+ meetup event-site
+ .
+
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Bioquant (lecture hall SR41), Im Neuenheimer Feld 267
+
+
+ Livestream
+ Zoom
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2023-07-26-agi-summertalks.html b/_posts/2023-07-26-agi-summertalks.html
new file mode 100644
index 0000000000..7b53a2fcc1
--- /dev/null
+++ b/_posts/2023-07-26-agi-summertalks.html
@@ -0,0 +1,89 @@
+---
+layout: post
+title: "AGI Summer Talks"
+subtitle: "Aleph Alpha, Heidelberg, Germany"
+custom_date: "July 26th 2023 @ 2pm"
+date: 2023-07-26
+background: '/img/posts/2023-07-26-agi-summertalks.jpg'
+future_date: False
+layer_shift: True
+---
+
+Are you ready to dive into the cutting-edge world of Artificial General Intelligence (AGI)? Look no further! We are thrilled to present with Aleph Alpha the AGI Summer Talks, an event that will bring together the brightest minds in AI for an unforgettable experience.
+
+
+Prepare to be captivated as we bring you three exclusive talks and riveting Q&A sessions with esteemed AI professors, including the renowned Matthias Bethge and Kristian Kersting. But that's not all! Brace yourselves for an awe-inspiring keynote address titled "Are Large Language Models the last invention we need to make?" delivered by none other than the influential AGI visionary Joscha Bach.
+
+
+Whether you're an AI enthusiast, researcher, or just curious about the future of AGI, this event promises to be an extraordinary journey into the realm of artificial intelligence. Gain invaluable insights from the best in the field and discover the endless possibilities that AGI holds for humanity.
+
+
+
+
+
+
+
+
+
+Event Info
+
+Please reserve your ticket at Eventbrite . There you can also access the agenda of the event.
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}} - 7:30pm
+
+
+ Where?
+ DAI Heidelberg - Deutsch - Amerikanisches Institut, Sofienstraße 12 69115 Heidelberg
+
+
+ Livestream
+ Youtube
+
+
+ Registration
+
+
+
+ Eventbrite
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2023-09-20-niki-kilbertus.html b/_posts/2023-09-20-niki-kilbertus.html
new file mode 100644
index 0000000000..021a9b93c6
--- /dev/null
+++ b/_posts/2023-09-20-niki-kilbertus.html
@@ -0,0 +1,123 @@
+---
+layout: post
+title: "Learning Dynamical Laws from Data"
+subtitle: "Niki Kilbertus, TU Munich"
+custom_date: "September 20th 2023 @ 4pm"
+date: 2023-09-20
+background: '/img/posts/2023-09-20-democritus.png'
+future_date: False
+layer_shift: True
+---
+
+Recent advances in machine learning have spurred hopes of displacing or enhancing the process of scientific discovery by inferring natural laws directly from observational data.
+
+We are excited to have Niki Kilbertus, professor for Ethics in Systems Design and Machine Learning at TU Munich, speaking at the DKFZ.
+During his talk, he will illuminate the current methodology, which allows learning dynamical laws (such as the law of gravity) from observational data.
+
+We hope to see you there, learning with us about the latest methodologies to unravel the mysteries of nature.
+
+
+
+
+
+
+Event Recording
+
+VIDEO
+
+
+Abstract
+
+
+This talk delves into the rules that dictate how systems evolve over time, using what's known as ordinary differential equations (ODEs). These equations provide a blueprint for understanding changes in systems. To kick things off, we'll see an introduction to how these "rules" are deduced from observed behaviors. The talk will encompass various methods, from the more abstract black-box models to transparent symbolic techniques. A core aspect of this discussion is understanding the reliability and accuracy of these methods, particularly when faced with both familiar and new scenarios. Finally, recent attempts to extract these equations from imperfect real-world data are described.
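+As a toy illustration of the symbolic end of this spectrum, a dynamical law such as dx/dt = -2x can be recovered from observations by sparse regression over a library of candidate terms. This is a minimal SINDy-style sketch; the system and the pruning threshold are our own illustrative choices, not examples from the talk:

```python
import numpy as np

# Observations of a hypothetical system obeying dx/dt = -2x.
t = np.linspace(0, 2, 1000)
x = 3.0 * np.exp(-2.0 * t)
dxdt = np.gradient(x, t, edge_order=2)  # derivative estimated from the data

# Library of candidate terms: the "vocabulary" the symbolic search runs over.
library = np.column_stack([x, x**2, x**3])

# Least squares followed by hard thresholding: a minimal sparse-regression step.
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.1] = 0.0  # prune small terms to obtain a sparse law

print(coef)  # ≈ [-2, 0, 0], i.e. the method recovers dx/dt = -2x
```

+The thresholding step is what turns a black-box fit into a readable symbolic law; robustness of exactly this step under noisy, imperfect data is one of the questions the talk addresses.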
+
+
+Biography
+
+
+Niki Kilbertus investigates the interactions between machine learning algorithms and humans with a focus on ethical consequences and trustworthiness. He currently studies identification and estimation of causal effects from observational data in automated decision-making and dynamic environments.
+
+After studying mathematics and physics at the University of Regensburg, Niki Kilbertus obtained his PhD in machine learning in 2020 from the University of Cambridge (UK) in a joint program with the Max Planck Institute for Intelligent Systems. Since 2020, he has been a Young Investigator Group Leader with Helmholtz AI at the Helmholtz Center Munich. Since 2021, he has been an Assistant Professor at TUM and continues to lead his group at Helmholtz AI.
+
+
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event at our
+
+
+ meetup event-site
+ .
+After the event, there will be a social get-together with food and drinks courtesy of the Division of Medical Image Computing and Interactive Machine Learning Group at the DKFZ.
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ DKFZ Communication Center (seminar rooms K1+K2), Im Neuenheimer Feld 280
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2023-10-04-qi-dou.html b/_posts/2023-10-04-qi-dou.html
new file mode 100644
index 0000000000..c0855c01da
--- /dev/null
+++ b/_posts/2023-10-04-qi-dou.html
@@ -0,0 +1,125 @@
+---
+layout: post
+title: "Image-based Robotic Surgery Intelligence"
+subtitle: "Qi Dou, Chinese University of Hong Kong & T Stone Robotics"
+custom_date: "October 4th 2023 @ 11am"
+date: 2023-10-04
+background: '/img/posts/2023-10-04-robotic-surgery.png'
+future_date: False
+layer_shift: True
+---
+
+We would like to welcome you all to the next online talk in our joint Heidelberg.ai / NCT Data Science Seminar series.
+
+Online-only Event
+
+We are excited to have Qi Dou. She is an Assistant Professor with the Department of Computer Science & Engineering at The Chinese University of Hong Kong. Her research interest lies in the interdisciplinary area of AI for healthcare with expertise in medical image analysis and robot-assisted surgery towards the goal of advancing disease diagnosis and intervention via machine intelligence.
+
+
+
+
+
+Wednesday, Oct 4th, 11 am
+
+Zoom Registration
+
+
+
+
+
+
+Abstract
+
+
+With rapid advancements in medicine and engineering technologies, the operating room has evolved into a highly complex environment, where surgeons take advantage of computers, endoscopes, and robots to perform procedures with more precision and smaller incisions. Intelligence, together with the cognitive assistance, smart data analytics, and automation it enables, is envisaged to be a core fuel transforming next-generation robotic surgery in numerous known or unknown exciting ways. In this talk, I will present ideas, methodologies, and applications of image-based robotic surgery intelligence from three perspectives: AI-enabled surgical situation awareness to improve surgical procedures, AI-powered large-scale data analysis to enhance surgical education, and AI-driven multi-sensory perception to achieve surgical subtask automation.
+
+
+
+Biography
+
+
+
+ Dr. Qi Dou is an Assistant Professor with the Department of Computer Science & Engineering at The Chinese University of Hong Kong. Her research interest lies in the interdisciplinary area of AI for healthcare, with expertise in medical image analysis and robot-assisted surgery, towards the goal of advancing disease diagnosis and intervention via machine intelligence. In this area, Dr. Dou has published over a hundred publications, with 19,000+ Google Scholar citations and an h-index of 55. Her research outputs have won a number of distinguished best paper awards, including the MedIA-MICCAI'17 Best Paper Award and the IEEE ICRA 2021 Best Paper Award in Medical Robotics. Dr. Dou received the IEEE Engineering in Medicine & Biology Society Early Career Award 2023. She has served as Program Co-Chair of the major conferences MICCAI, IPCAI, and MIDL, and as Associate Editor of the top journals Medical Image Analysis and IEEE Transactions on Medical Imaging.
+
+
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Zoom:
+
+
+
+ Zoom Registration
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ DKFZ Communication Center (seminar rooms K1+K2), Im Neuenheimer Feld 280
+
+
+
+ Registration
+
+
+
+ Zoom Registration
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2023-10-16-krull.html b/_posts/2023-10-16-krull.html
new file mode 100644
index 0000000000..9c46cb2d82
--- /dev/null
+++ b/_posts/2023-10-16-krull.html
@@ -0,0 +1,131 @@
+---
+layout: post
+title: "Image denoising and the generative accumulation of photons"
+subtitle: "Alexander Krull, School of Computer Science, University of Birmingham"
+custom_date: "October 16th 2023 @ 11am"
+date: 2023-10-16
+background: '/img/posts/microscopy_banner_pexels-chokniti-khongchum-3938022.jpg'
+future_date: False
+layer_shift: True
+---
+
+We would like to welcome you all to the next online talk in our joint Heidelberg.ai / NCT Data Science Seminar series.
+
+Online-only Event
+
+We are excited to have Alexander Krull.
+He is a lecturer at the School of Computer Science of the University of Birmingham in the UK.
+His research focuses on denoising of microscopy images, especially in the context of generative models.
+
+
+
+
+Monday, Oct 16th, 11 am
+
+Zoom Registration
+
+
+
+
+
+
+Abstract
+
+
+Shot noise is a fundamental property of many imaging applications, especially in fluorescence microscopy. Removing this noise is an ill-defined problem since many clean solutions exist for a single noisy image. Traditional approaches aiming to map a noisy image to its clean counterpart usually find the minimum mean square error (MMSE) solution, i.e., they predict the expected value of the posterior distribution of possible clean images. We present a fresh perspective on shot-noise-corrupted images and noise removal. By viewing image formation as the sequential accumulation of photons on a detector grid, we show that a network trained to predict where the next photon will arrive is in fact solving the traditional MMSE denoising task. This new perspective allows us to make three contributions:
+(i) We present a new strategy for self-supervised denoising. (ii) We present a new method for sampling from the posterior of possible solutions by iteratively sampling and adding small numbers of photons to the image. (iii) We derive a full generative model by starting this process from an empty canvas.
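+A toy sketch of this accumulation view, with the trained network replaced by the true photon-arrival distribution (our own illustrative stand-in, not the actual method), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean" signal: relative photon-arrival probabilities over 8 pixels.
clean = np.array([1, 2, 4, 8, 8, 4, 2, 1], dtype=float)
p = clean / clean.sum()

def predict_next_photon_probs(accumulated):
    # Stand-in for the trained network; the real method predicts this
    # distribution from the photons accumulated so far.
    return p

image = np.zeros_like(p)  # (iii) start from an empty canvas
for _ in range(10_000):   # sequentially accumulate photons on the grid
    pixel = rng.choice(len(p), p=predict_next_photon_probs(image))
    image[pixel] += 1.0

# The normalised accumulated image approaches the clean signal.
print(np.round(image / image.sum(), 2))
```

+Normalising the accumulated counts recovers the clean signal (the MMSE estimate); sampling only a small number of photons instead yields diverse draws from the posterior over clean images.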
+
+
+
+Biography
+
+
+
+ Dr. Alexander Krull is a lecturer at the University of Birmingham in the United Kingdom. His research focuses on the denoising of microscopy images, especially the application of generative models in this context.
+After his PhD in computer vision in Carsten Rother's lab in Dresden, Alexander took a postdoc position at the Max Planck Institute of Molecular Cell Biology and Genetics, in Florian Jug's lab.
+He developed readily applicable self-supervised denoising methods (including Noise2Void and DivNoising) that are used in research labs around the world.
+In 2020 Alexander began his position as lecturer at the University of Birmingham.
+He has published papers at computer vision and machine learning conferences (ECCV, ICCV, CVPR, ICLR), as well as in various scientific journals, focusing on life-science and imaging techniques.
+
+
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Zoom:
+
+
+
+ Zoom Registration
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Zoom
+
+
+
+ Registration
+
+
+
+ Zoom Registration
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2023-11-28-parcalabescu.html b/_posts/2023-11-28-parcalabescu.html
new file mode 100644
index 0000000000..69904edaad
--- /dev/null
+++ b/_posts/2023-11-28-parcalabescu.html
@@ -0,0 +1,121 @@
+---
+layout: post
+title: "About Vision and Language models: What grounded linguistic phenomena do they understand? How much do they use the image and text modality?"
+subtitle: "Letitia Parcalabescu, Department of Computational Linguistics, Heidelberg University"
+custom_date: "November 28th 2023 @ 4pm"
+date: 2023-11-28
+background: '/img/posts/2023-11-28-parcalabescu-background.jpg'
+future_date: False
+layer_shift: True
+---
+
+Multimodal models are making headlines, with models like ChatGPT now being able to interpret images.
+
+
+We are excited to have Letitia Parcalabescu, a PhD student at Heidelberg University who has already worked on projects with Aleph Alpha and is also a machine learning YouTuber, speaking at the DKFZ.
+During her talk, she will illuminate methodologies to evaluate vision and language models on fine-grained linguistic tasks, and also how to explain their outputs to make them safe for human interaction.
+
+
+We hope to see you there, learning with us about future multimodal models.
+
+
+
+
+
+
+Abstract
+
+
+In this talk, we will introduce Vision and Language (VL) models, which can say very well whether an image and a text are related and can answer questions about images. While performance on these tasks is important, task-centered evaluation does not tell us why the models are so good at these tasks, e.g., which fine-grained linguistic capabilities VL models use when solving them. Therefore, we present our work on the VALSE💃 benchmark, which tests six specific linguistic phenomena grounded in images. Our zero-shot experiments with five widely used pretrained VL models suggest that current VL models have considerable difficulty addressing most phenomena.
+In the second part, we ask how much a VL model uses the image and text modality in each sample or dataset. To measure the contribution of each modality in a VL model, we developed MM-SHAP, which we applied in two ways: (1) to compare VL models in terms of their average degree of multimodality, and (2) to measure, for individual models, the contribution of individual modalities for different tasks and datasets. Experiments with six VL models on four VL tasks highlight that unimodal collapse can occur to different degrees and in different directions, contradicting the widespread assumption that unimodal collapse is one-sided.
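+Illustratively, a modality score in the spirit of MM-SHAP can be computed as each modality's share of the total absolute Shapley mass; the per-token values below are made up for the sake of the example:

```python
import numpy as np

# Made-up per-token Shapley values of a VL model's prediction, by modality.
phi_text = np.array([0.30, -0.10, 0.05])         # text tokens
phi_image = np.array([-0.20, 0.15, 0.05, 0.10])  # image patches

def mm_shap(phi_text, phi_image):
    """Each modality's share of the total absolute Shapley mass."""
    t = np.abs(phi_text).sum()
    v = np.abs(phi_image).sum()
    return t / (t + v), v / (t + v)

t_shap, v_shap = mm_shap(phi_text, phi_image)
print(f"T-SHAP: {t_shap:.2f}, V-SHAP: {v_shap:.2f}")  # T-SHAP: 0.47, V-SHAP: 0.53
```

+A score near 0.5 for each modality indicates balanced use; a score near 0 or 1 signals unimodal collapse toward one input.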
+
+
+
+Biography
+
+
+
+ After having studied Physics and Computer Science, Letitia is a PhD candidate at Heidelberg University in the Heidelberg Natural Language Processing Group. Her research focuses on vision and language integration in multimodal deep learning. Her side-project revolves around the "AI Coffee Break with Letitia" YouTube channel, where the animated Ms. Coffee Bean explains and visualizes concepts from the latest research in Artificial Intelligence.
+
+
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event at our
+
+
+ meetup event-site
+ .
+After the event, there will be a social get-together with food and drinks courtesy of the Division of Medical Image Computing and Interactive Machine Learning Group at the DKFZ.
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ DKFZ Communication Center (seminar rooms K1+K2), Im Neuenheimer Feld 280
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2024-01-31-joacim_roecklov.html b/_posts/2024-01-31-joacim_roecklov.html
new file mode 100644
index 0000000000..b4d6cf5551
--- /dev/null
+++ b/_posts/2024-01-31-joacim_roecklov.html
@@ -0,0 +1,124 @@
+---
+layout: post
+title: "Integrated surveillance and novel data streams for infectious disease outbreak prediction"
+subtitle: "Joacim Rocklöv, Heidelberg Institute of Global Health (HIGH) & Interdisciplinary Centre of Scientific Computing (IWR)"
+custom_date: "January 31st 2024 @ 4pm"
+date: 2024-01-31
+background: '/img/posts/2024-01-31_rockloev_background.png'
+future_date: False
+layer_shift: True
+---
+
+The COVID-19 pandemic has been a defining global health crisis, impacting every aspect of our lives. It underscored the urgent need for more sophisticated tools in predicting and managing infectious disease outbreaks.
+
+
+We are excited to have Prof. Joacim Rocklöv, leading the first AI lab in Germany addressing global infectious diseases and climate change issues.
+During his talk, he will go into detail on how to leverage AI for the surveillance and prediction of infectious disease outbreaks.
+
+
+We hope to see you there, learning with us about the latest on AI and infectious disease outbreak prediction.
+
+
+
+
+
+
+
+
+
+Abstract
+
+
+Pandemic preparedness and prevention focus on increasing the capacity to detect, manage, and prevent infectious disease outbreaks from spreading between the environment, animals, and humans. Novel surveillance data streams and data integration across sectors are required to improve the preparedness systems, as well as the development of decision-guiding predictive models coupled with effective response mechanisms.
+This talk describes the state of the art at the intersection of pandemic preparedness and climate-sensitive infectious disease, and showcases a few interesting developments and applications of machine learning for sensors and surveillance, for example in relation to smart mosquito traps, tick citizen science, and IoT sensors for the bioacoustics of animals. It will further discuss the predictive modeling of emerging infectious diseases and provide an example of which model requirements and features are important to consider within this area.
+
+
+
+Biography
+
+
+Joacim Rocklöv is a distinguished figure in the field of public health and epidemiology, with a significant focus on the intersection of epidemiology, climate, and data science.
+Moreover, Rocklöv is at the forefront of developing the first AI lab in Germany addressing global infectious diseases and climate change issues.
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event at our
+
+
+ meetup event-site
+ .
+After the event, there will be a social get-together with food and drinks courtesy of the Division of Medical Image Computing and Interactive Machine Learning Group at the DKFZ.
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Seminarraum im Gästehaus der Universität Heidelberg
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2024-02-06-pearse_keane.html b/_posts/2024-02-06-pearse_keane.html
new file mode 100644
index 0000000000..2faf85f659
--- /dev/null
+++ b/_posts/2024-02-06-pearse_keane.html
@@ -0,0 +1,135 @@
+---
+layout: post
+title: "Building Foundation Models in Ophthalmology - a Clinician’s Perspective"
+subtitle: "Pearse Keane, Institute of Ophthalmology, University College London (UCL), London, UK"
+custom_date: "February 6th 2024 @ 5pm"
+date: 2024-02-06
+background: '/img/posts/2024-02-06_Pearse_Keane_Ophtamology.png'
+future_date: False
+layer_shift: True
+---
+
+We are happy to announce our next online talk, with Prof. Pearse Keane, in our joint Heidelberg.ai / NCT Data Science Seminar series.
+Prof. Keane will talk about RETFound, a foundation model for ophthalmology that was recently published in Nature and performs well on a diverse range of downstream clinical tasks.
+
+
+We hope to see you there, learning with us about how to build Foundation Models in Ophthalmology.
+
+
+
+
+
+
+
+
+
+Abstract
+
+
+Ophthalmology is among the most technology-driven of all the medical specialties, with treatments utilizing high-spec medical lasers and advanced microsurgical techniques, and diagnostics involving ultra-high-resolution imaging.
+Ophthalmology is also at the forefront of many trailblazing research areas in healthcare, such as stem cell therapy, gene therapy, and - most recently - artificial intelligence.
+In my presentation I will describe the development of RETFound, a foundation model for ophthalmology.
+RETFound was trained on 1.6 million retinal images using self-supervised learning and utilising a vision transformer architecture.
+In a recent publication in Nature, we demonstrate that RETFound performs better than other approaches on a diverse range of downstream clinical tasks, from diabetic retinopathy screening to predicting progression of age-related macular degeneration, to using the eye as a window to systemic disease (“oculomics”).
+We also show that RETFound is more robust on external validation, fairer across ethnicities, and more label efficient, opening the possibility of its use in less common retinal disease.
+We will describe the process to create this foundation model as well as our plans to scale and validate the model, going from nearly 2 million to 20 million images, and making it both 3D and truly multi-modal.
+
+
+
+Biography
+
+
+Pearse Keane is Professor of Artificial Medical Intelligence at UCL Institute of Ophthalmology, and a consultant ophthalmologist at Moorfields Eye Hospital, London.
+Since 2020, he has been funded by UK Research & Innovation (UKRI) as a Future Leaders Fellow, and in 2023 he became a National Institute for Health Research (NIHR) Senior Investigator.
+He is originally from Ireland and received his medical degree from University College Dublin (UCD), graduating in 2002.
+
+In 2016, he initiated a collaboration between Moorfields Eye Hospital and Google DeepMind, with the aim of developing artificial intelligence (AI) algorithms for the earlier detection and treatment of retinal disease.
+In August 2018, the first results of this collaboration were published in the journal, Nature Medicine.
+In May 2020, he jointly led work, again published in Nature Medicine, to develop an early warning system for age-related macular degeneration (AMD), by far the commonest cause of blindness in many countries.
+In 2023, he led the development of RETFound, the first foundation model in ophthalmology, published in Nature and made available open source.
+
+In October 2019, he was included on the Evening Standard Progress1000 list of most influential Londoners and in June 2020, he was profiled in The Economist .
+In 2022, he was listed in the “Top 10” of the “The Power List” by The Ophthalmologist magazine ,
+a ranking of the Top 100 most influential people in the world of ophthalmology.
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Zoom:
+
+
+
+ Zoom
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Zoom
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2024-05-14-zeynep_akata.html b/_posts/2024-05-14-zeynep_akata.html
new file mode 100644
index 0000000000..0dac79ecae
--- /dev/null
+++ b/_posts/2024-05-14-zeynep_akata.html
@@ -0,0 +1,132 @@
+---
+layout: post
+title: "Interpretable Vision and Language Models"
+subtitle: "Zeynep Akata, Director of Institute for Explainable Machine Learning, Professor of Computer Science at Technical University of Munich"
+custom_date: "May 14th 2024 @ 11:15am"
+date: 2024-05-14
+background: '/img/posts/2024-05-14-dallE_hd_ai_zeynep_akata.jpg'
+future_date: False
+online_event: False
+layer_shift: True
+---
+This event is part of the Helmholtz Imaging Annual Conference 2024 in Heidelberg.
+This conference is tailored for scientists and researchers engaged in imaging research or utilizing imaging techniques.
+
+We're excited to share one of the keynotes of this conference with the Heidelberg AI community!
+
+
+Explainable AI (XAI) addresses one of the most critical concerns in the adoption of AI technologies: transparency. XAI seeks to make the decision-making processes of AI systems clear and understandable to human users. This transparency is vital for building trust, particularly in sensitive areas such as healthcare, finance, and autonomous driving, where understanding AI’s decision process is crucial for acceptance and ethical considerations.
+
+This research is particularly crucial when applied to large vision-language models, which are increasingly used to handle complex tasks that involve understanding and generating content from both visual and textual data.
+
+
+Announcement
+Professor Akata will not be able to attend the conference in person.
+Therefore, her talk will be live-streamed at the event.
+
+Abstract
+
+
+In this talk, Professor Zeynep Akata delves into the transformative impacts of representation learning, foundation models, and explainable AI on machine learning technologies.
+She highlights how these approaches enhance the adaptability, transparency, and ethical alignment of AI systems across various applications.
+Professor Akata will address the synergy between these technologies and their crucial role in advancing AI, aiming to make these complex systems more accessible and understandable.
+
+(We will share more info on this talk shortly.)
+
+
+
+Biography
+
+
+
+Zeynep Akata is the Director of the Institute of Explainable Machine Learning and a Professor of Computer Science at the Technical University of Munich.
+Her research focuses on making AI-based systems more transparent and accountable, particularly through explainable, multi-modal, and low-shot learning in computer vision.
+She has held positions at the University of Tübingen and Max Planck Institutes, and her notable recognitions include the Lise Meitner Award for Excellent Women in Computer Science, the ERC Starting Grant and the German Pattern Recognition Award.
+For more details, you can visit her profile.
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Meetup:
+
+
+
+ Event Registration
+ .
+(no need to register with the Helmholtz Imaging Conference)
+
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Frauenbad Heidelberg (Event Location)
+
+
+
+ Registration
+
+ This event is part of the Helmholtz Imaging Conference. Participants of this Heidelberg AI talk don't have to be registered with the conference!
+ However, please do register on the
+
+
+ meetup event-site.
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2024-05-23-david_stutz.html b/_posts/2024-05-23-david_stutz.html
new file mode 100644
index 0000000000..aa36a2f214
--- /dev/null
+++ b/_posts/2024-05-23-david_stutz.html
@@ -0,0 +1,113 @@
+---
+layout: post
+title: "Conformal Prediction under Ambiguous Ground Truth"
+subtitle: "David Stutz, Research Scientist at Google DeepMind"
+custom_date: "May 23rd 2024 @ 4pm"
+date: 2024-05-23
+background: '/img/posts/2024-05-23-rabduck.jpg'
+future_date: False
+online_event: True
+layer_shift: True
+---
+Uncertainty estimation is crucial in many areas, from medical applications to self-driving cars and weather forecasting, to allow the widespread use of machine learning models.
+
+We are excited to have David Stutz, a research scientist at Google DeepMind , in our joint Heidelberg.ai / NCT Data Science Seminar series.
+In this online seminar, David Stutz will talk about conformal prediction, a method to give rigorous uncertainties to machine learning models, and how to extend it to cases where even the ground truth data is uncertain.
+
+
+We look forward to your participation, as this seminar will equip us with the knowledge to enhance the safety of our machine-learning models, making them even more reliable.
+
+
+
+Abstract
+
+
+In safety-critical classification tasks, conformal prediction allows one to perform rigorous uncertainty quantification by providing confidence sets that include the true class with a user-specified probability. This generally assumes the availability of a held-out calibration set with access to ground-truth labels. Unfortunately, in many domains such labels are difficult to obtain and are usually approximated by aggregating expert opinions. In fact, this holds true for almost all datasets, including well-known ones such as CIFAR and ImageNet. Applying conformal prediction using such labels underestimates uncertainty. Indeed, when expert opinions are not resolvable, there is inherent ambiguity present in the labels. That is, we do not have "crisp", definitive ground-truth labels, and this uncertainty should be taken into account during calibration. In this work, we develop a conformal prediction framework for such ambiguous ground truth settings which relies on an approximation of the underlying posterior distribution of labels given inputs. We demonstrate our methodology on synthetic and real datasets, including a case study of skin condition classification in dermatology.
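+As background, the standard recipe that the talk generalizes is split conformal prediction: compute a nonconformity score for each calibration example, take a finite-sample-corrected quantile of those scores, and include every class whose score falls below that threshold. The sketch below is a minimal illustration of this standard setting only (not the ambiguous-ground-truth extension, which replaces the hard calibration labels with a posterior over labels); the function name and toy data are illustrative choices, not the speaker's code.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for K-class classification.

    cal_probs:  (n, K) predicted probabilities on a held-out calibration set
    cal_labels: (n,)   ground-truth calibration labels
    test_probs: (m, K) predicted probabilities for new inputs
    Returns a boolean (m, K) mask: True where a class enters the prediction set.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level yields the coverage guarantee.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # A class is included whenever its nonconformity score is below the threshold.
    return (1.0 - test_probs) <= q_hat

# Toy demo: a well-calibrated 3-class "model" on synthetic data.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet([5, 1, 1], size=500)
cal_labels = np.array([rng.choice(3, p=p) for p in cal_probs])
sets = split_conformal_sets(cal_probs, cal_labels, cal_probs, alpha=0.1)
coverage = sets[np.arange(500), cal_labels].mean()  # at least 1 - alpha
```

+When labels come from aggregated, possibly conflicting expert opinions, the single `cal_labels` vector above is exactly what is no longer available, which is the gap the talk addresses.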
+
+
+Biography
+
+
+David Stutz is a researcher and engineer with more than 7 years of AI research experience and 12 years of software engineering experience across academia and industry.
+Currently, he is a senior research scientist at Google DeepMind focusing on uncertainty estimation, robustness and safety evaluation of generative AI, both in vision and language applications.
+
+He helped develop and ship the first large-scale image and audio watermarking system called SynthID.
+In 2022, he finished his PhD at the Max Planck Institute for Informatics where he worked on adversarial robustness, quantization and uncertainty estimation with deep neural networks, focused on computer vision applications.
+His PhD was recognized with the dissertation award of the German Association for Pattern Recognition (DAGM) in 2023 and an outstanding paper award at the CVPR 2021 CV-AML workshop; he also received a Qualcomm Innovation Fellowship in 2019 and was twice selected to participate in the Heidelberg Laureate Forum, in 2019 and 2023, with an Abbe Grant from the Carl Zeiss Foundation.
+Prior to his PhD, he finished his MSc and BSc at RWTH Aachen University, being supported by a Germany Scholarship and awarded the STEM Award IT and RWTH Aachen’s Springorum Denkmünze in 2018.
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Meetup:
+
+
+
+ Event Registration
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Zoom
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2024-12-18-shek_azizi.html b/_posts/2024-12-18-shek_azizi.html
new file mode 100644
index 0000000000..638f817047
--- /dev/null
+++ b/_posts/2024-12-18-shek_azizi.html
@@ -0,0 +1,114 @@
+---
+layout: post
+title: "Towards Generalist Biomedical AI"
+subtitle: "Shekoofeh Azizi, Research Lead at Google DeepMind"
+custom_date: "December 18th 2024 @ 11am"
+date: 2024-12-18
+background: '/img/posts/2024-12-18-ekg.jpg'
+future_date: False
+layer_shift: True
+---
+
+We would like to welcome you all to the next hybrid event in our joint Heidelberg.ai / NCT Data Science Seminar series.
+In her talk Shekoofeh Azizi, a research lead at Google DeepMind , will discuss building generalist biomedical foundation models.
+
+
+
+
+
+
+
+
+Abstract
+
+
+Foundation models have changed how we develop medical AI. These powerful models, trained on massive datasets using self-supervised learning, are adaptable to diverse medical tasks with minimal additional data and paved the way for the development of generalist medical AI systems. In this talk we will explore the capabilities of these models from medical image analysis, to polygenic risk scoring, and aiding in therapeutic development. Additionally, we will discuss the future of generalist and generative models in healthcare and science.
+
+
+Biography
+
+
+ Shekoofeh (Shek) Azizi is a staff research scientist and research lead at Google DeepMind, where she focuses on translating AI solutions into tangible clinical impact. She is particularly interested in designing foundation models and agents for biomedical applications and has led major efforts in this area. Shek is one of the research leads driving the ambitious development of Google's flagship medical AI models, including REMEDIS, Med-PaLM, Med-PaLM 2, Med-PaLM M, and Med-Gemini. Her work has been featured in various media outlets and recognized with multiple awards, including the Governor General's Academic Gold Medal for her contributions to improving diagnostic ultrasound.
+
+
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Zoom:
+
+
+
+ Zoom Registration
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ INF 280, Lecture Hall KOZ and Zoom
+
+
+
+ Registration
+
+
+
+ Zoom Registration
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2025-01-23-charlotte_bunne.html b/_posts/2025-01-23-charlotte_bunne.html
new file mode 100644
index 0000000000..78ecb04c4f
--- /dev/null
+++ b/_posts/2025-01-23-charlotte_bunne.html
@@ -0,0 +1,121 @@
+---
+layout: post
+title: "Virtual Cells and Digital Twins: AI in Personalized Oncology"
+subtitle: "Charlotte Bunne, EPFL, Lausanne"
+custom_date: "January 23rd 2025 @ 4pm"
+date: 2025-01-23
+background: '/img/posts/2025-01-23_background.jpeg'
+future_date: False
+online_event: False
+layer_shift: True
+---
+
+Virtual Cells and Digital Twins are transformative technologies at the intersection of artificial intelligence and personalized oncology, promising to revolutionize how we understand and treat cancer.
+
+
+We are excited to have Charlotte Bunne, assistant professor at EPFL , in our joint heidelberg.ai / NCT Data Science Seminar series.
+In this live-streamed, in-person event, Charlotte Bunne will delve into the exciting world of Virtual Cells and Digital Twins, exploring how AI is empowering the development of personalized cancer therapies by simulating cellular behavior and patient-specific outcomes.
+
+
+We look forward to your participation in this event, where you will gain a deeper understanding of how cutting-edge AI is paving the way for more precise and effective treatments in oncology.
+
+
+
+Abstract
+
+
+TBA
+
+
+Biography
+
+
+ Charlotte Bunne is an assistant professor at EPFL in the School of Computer and Communication Sciences (IC) and School of Life Sciences (SV).
+ Before, she was a PostDoc at Genentech and Stanford with Aviv Regev and Jure Leskovec and completed a PhD in Computer Science at ETH Zurich working with Andreas Krause and Marco Cuturi.
+ During her graduate studies, she was a visiting researcher at the Broad Institute of MIT and Harvard hosted by Anne Carpenter and Shantanu Singh and worked with Stefanie Jegelka at MIT.
+ Her research aims to advance personalized medicine by utilizing machine learning and large-scale biomedical data.
+ Charlotte has been a Fellow of the German National Academic Foundation and is a recipient of the ETH Medal.
+
+
+
+ Important Announcement: Limited Seating for In-Person Event
+
+ Due to space restrictions, a maximum of 30 people can attend the event in person at BioQuant, seminar room 043, Heidelberg.
+ To secure your spot, we recommend arriving early. If you are unable to join in person, the event will also be live-streamed via Zoom (see link below).
+ Thank you for your understanding!
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Meetup:
+
+
+
+ Event Registration
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ BioQuant SR043
+
+
+ Livestream
+ Zoom
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2025-05-13-asano.html b/_posts/2025-05-13-asano.html
new file mode 100644
index 0000000000..29464e0cb2
--- /dev/null
+++ b/_posts/2025-05-13-asano.html
@@ -0,0 +1,147 @@
+---
+layout: post
+title: "Post-Pretraining in Vision, and Language Foundation Models"
+subtitle: "Yuki M. Asano, University of Technology Nuremberg"
+custom_date: "May 13th 2025 @ 5pm"
+date: 2025-05-13
+background: '/img/posts/2025-05-13-tun-background.jpg'
+future_date: False
+online_event: False
+layer_shift: True
+meetup_link: https://www.meetup.com/heidelberg-artificial-intelligence-meetup/events/307336371
+---
+
+Foundation Models are reshaping the landscape of artificial intelligence, offering unprecedented capabilities across vision, language, and multi-modal tasks. From understanding spatial structure in images to aligning language models with the visual world, these innovations are opening up new frontiers in AI research.
+
+
+We are excited to welcome Yuki Asano, full Professor at the University of Technology Nuremberg and head of the Fundamental AI (FunAI) Lab, to our joint heidelberg.ai / NCT Data Science Seminar series.
+In this in-person event, Yuki Asano will present recent advances in building on top of pretrained Foundation Models to boost performance in dense prediction, efficient fine-tuning, and cross-modal understanding. The talk will cover novel methods such as NeCo for improving spatial perception in vision models, self-supervised time-tuning for video, and ultra-lightweight CLIP training. He will also explore surprising links between large language models and their visual grounding.
+
+
+We look forward to your participation in this exciting seminar, where you will gain valuable insights into how cutting-edge methods are pushing the boundaries of what Foundation Models can achieve.
+
+
+
+Update
+
+This event will be held in person at the DKFZ Communication Center (H1), Im Neuenheimer Feld 280, Heidelberg.
+
+
+
+
+
+Abstract
+
+
+This talk explores how pretrained Foundation Models can be further enhanced for vision, language, and multi-modal tasks. It begins by addressing a key limitation in models like DINOv2: their lack of spatial understanding in images. To address this, the NeCo method [1] is introduced—a lightweight post-pretraining technique based on patch-nearest neighbors that significantly improves dense prediction performance while requiring only 16 GPU hours. The talk then highlights how video data can be utilized to further enhance dense understanding in pretrained image models such as DINO [2].
+
+In the domain of language models, recent work on parameter-efficient finetuning (PEFT) [3] and instruction tuning [4] is presented, offering practical strategies for adapting large models effectively. The final part of the talk introduces a novel approach to training CLIP models using only 10 GPU hours by leveraging pretrained unimodal encoders. Intriguingly, the results reveal a strong correlation between the performance of language models and their visual alignment capabilities [5].
+
+
+
+Biography
+
+
+ Yuki Asano is the head of the Fundamental AI (FunAI) Lab and a full Professor at the University of Technology Nuremberg . Prior to this, he led the QUVA Lab at the University of Amsterdam , where he worked in close collaboration with Qualcomm AI Research . He completed his PhD at the Visual Geometry Group (VGG) at the University of Oxford , under the supervision of Andrea Vedaldi and Christian Rupprecht .
+
+
+
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event at our
+
+
+ meetup event-site
+ .
+
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ DKFZ Communication Center (H1), Im Neuenheimer Feld 280
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
+
+
+References
+
+ [1] Pariza, V., Salehi, M., Burghouts, G., Locatello, F., & Asano, Y. M. (2024). Near, far: Patch-ordering enhances vision foundation models’ scene understanding. In ICLR 2024.
+
+
+ [2] Salehi, M., Gavves, E., Snoek, C. G. M., & Asano, Y. M. (2023). Time Does Tell: Self-Supervised Time-Tuning of Dense Image Representations. In ICCV 2023. (+Ongoing work)
+
+
+ [3] Kopiczko, D., Blankevoort, T., & Asano, Y. M. (2024). VeRA: Vector-based Random Matrix Adaptation. In ICLR 2024.
+
+
+ [4] Kopiczko, D. J., Blankevoort, T., & Asano, Y. M. (2024). Bitune: Leveraging Bidirectional Attention to Improve Decoder-Only LLMs. arXiv preprint.
+
+
+ [5] Ruthardt, J., Burghouts, G. J., Belongie, S., & Asano, Y. M. (2024). Better Language Models Exhibit Higher Visual Alignment. arXiv preprint.
+
\ No newline at end of file
diff --git a/_posts/2025-07-16-eric_brachmann.html b/_posts/2025-07-16-eric_brachmann.html
new file mode 100644
index 0000000000..e3a4f0b9a0
--- /dev/null
+++ b/_posts/2025-07-16-eric_brachmann.html
@@ -0,0 +1,111 @@
+---
+layout: post
+title: "Pushing the Boundaries of Structure-from-Motion with Machine Learning"
+subtitle: "Eric Brachmann, Senior Staff Scientist at Niantic Spatial"
+custom_date: "July 16th 2025 @ 4pm"
+date: 2025-07-16
+background: '/img/posts/2025-07-16_structure.png'
+future_date: False
+online_event: True
+layer_shift: True
+meetup_link: https://www.meetup.com/heidelberg-artificial-intelligence-meetup/events/308646638
+---
+AI has taken the internet by storm with LLMs like ChatGPT. The interaction of AI agents with the real world by means of robotics or augmented reality is widely seen as the next frontier. To get there, however, an AI agent needs to be able to properly localize itself and the objects in its environment. These tasks are tackled by visual relocalisation and pose estimation, which allow a machine to visually perceive both its place in the world and its surroundings.
+
+
+We are excited to welcome Eric Brachmann , Senior Staff Scientist at Niantic Spatial and a well-established researcher in visual relocalisation and pose estimation, to our joint Heidelberg.ai / NCT Data Science Seminar series.
+In this online seminar, Eric Brachmann will talk about his current work on camera relocalization where he merges machine learning with more traditional computer vision approaches.
+His work has been integrated into Niantic's Visual Positioning System (VPS), which powers game features in Ingress and Pokémon Go.
+
+
+We look forward to your participation, as this seminar will give us insight into how AI can perceive the world around it and interact with it in a meaningful way, which is crucial for the future of robotics and augmented reality.
+
+
+
+Abstract
+
+
+In 3D computer vision, we are currently witnessing a remarkable renaissance of interest in structure-from-motion (SfM), i.e. estimating camera poses and 3D geometry from a collection of images. Of course, SfM was never gone. Rather, solutions based on feature matching and traditional multi-view geometry matured about 10 years ago to a state that turned them into reliable off-the-shelf components for various 3D vision tasks. Still, traditional SfM approaches are most reliable when certain conditions are met. For example, reconstructing from very few or a very large number of images can be challenging. The talk will investigate how learning-based formulations of SfM can address these challenges. We will focus on scene coordinate regression, an implicit scene representation that naturally avoids the explosion of complexity inherent to image-to-image matching when the number of images is large. The talk culminates in the presentation of ACEZero, a self-supervised scene coordinate regression pipeline that is able to reconstruct 10,000 images in a reasonable amount of time.
+
+
+
+Biography
+
+
+Eric Brachmann is a senior staff scientist at Niantic Spatial with extensive experience at the intersection of machine learning and computer vision, particularly in 3D vision. His work focuses on building and scaling the Visual Positioning System (VPS), with research interests spanning visual relocalization, pose estimation, robust optimization, end-to-end learning, and feature matching. He regularly publishes in top-tier computer vision conferences and is an active member of the community, serving as area chair and reviewer—receiving multiple outstanding reviewer recognitions. He has also co-organized several tutorials and workshops on visual relocalization and object pose estimation. Prior to his current role, he contributed to both academic and industry-led efforts in spatial computing and 3D vision research.
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Meetup:
+
+
+
+ Event Registration
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Zoom
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2025-08-14-seong_joon_oh.html b/_posts/2025-08-14-seong_joon_oh.html
new file mode 100644
index 0000000000..13b0481f94
--- /dev/null
+++ b/_posts/2025-08-14-seong_joon_oh.html
@@ -0,0 +1,112 @@
+---
+layout: post
+title: "Deploying General AI in the Private World"
+subtitle: "Seong Joon Oh, University of Tübingen"
+custom_date: "August 14th 2025 @ 5pm"
+date: 2025-08-14
+background: '/img/posts/2025-08-14-background.jpg'
+future_date: False
+online_event: True
+layer_shift: True
+meetup_link: https://www.meetup.com/heidelberg-artificial-intelligence-meetup/events/310078732
+---
+Despite breakthroughs in mathematics and coding benchmarks, general-purpose AI systems are not yet ready to deliver a step change in human productivity. Encoding nuanced human intent, managing feedback loops for alignment and debugging, and ensuring privacy, compliance, and trustworthiness are key challenges in boosting our everyday productivity with AI assistants.
+
+
+We are excited to welcome Seong Joon Oh , Professor at University of Tübingen , to our joint heidelberg.ai / NCT Data Science Seminar series.
+In this online seminar, he will explore the key barriers to real-world AI deployment and present recent research aimed at overcoming them. The talk will also highlight strategic directions for the field in the years to come.
+
+
+We look forward to your participation in this important discussion on the future of trustworthy, privacy-conscious AI that can truly assist us in our private and professional lives.
+
+
+
+Abstract
+
+
+Despite breakthroughs in mathematics and coding benchmarks, general-purpose AI systems are not yet ready to deliver a step change in human productivity. This talk outlines three key challenges impeding their deployment: (1) encoding nuanced human intent, (2) managing feedback loops for alignment and debugging, and (3) ensuring privacy, compliance, and trustworthiness. We review recent efforts in these areas and close with strategic directions for AI research in the coming years.
+
+
+
+
+Biography
+
+
+Seong Joon Oh is a professor at the University of Tübingen, where he leads the Scalable Trustworthy AI (STAI) group. His research focuses on building reliable machine learning models—particularly explainable, robust, and probabilistic ones—and on developing cost-effective ways to incorporate human supervision. He also advises Parameter Lab.
+Before joining Tübingen, he worked as a research scientist at NAVER AI Lab. He earned his PhD in computer vision and machine learning from the Max Planck Institute for Informatics in 2018, working with Bernt Schiele and Mario Fritz on the privacy and security implications of machine learning. He holds both a Master of Mathematics with Distinction (2014) and a BA in Mathematics as a Wrangler (2013) from the University of Cambridge.
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Meetup:
+
+
+
+ Event Registration
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Zoom
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_posts/2025-11-05-jonas_huebotter.html b/_posts/2025-11-05-jonas_huebotter.html
new file mode 100644
index 0000000000..65457bf8b6
--- /dev/null
+++ b/_posts/2025-11-05-jonas_huebotter.html
@@ -0,0 +1,120 @@
+---
+layout: post
+title: "Test-Time Training Agents to Solve Challenging Problems"
+subtitle: "Jonas Huebotter, ETH Zurich"
+custom_date: "November 5th 2025 @ 6 PM"
+date: 2025-11-05
+background: '/img/posts/arc-task-grids.jpg'
+future_date: True
+online_event: True
+layer_shift: True
+meetup_link: https://www.meetup.com/heidelberg-artificial-intelligence-meetup/events/311563770/
+---
+
+We’re thrilled to welcome Jonas Huebotter from
+ETH Zurich to our joint
+heidelberg.ai / NCT Data Science Seminar on November 5th at 6 PM .
+
+"When solving a problem of interest, do not solve a more general problem as an intermediate step. Try to get the answer that you really need but not a more general one." — Vladimir Vapnik
+
+In this online session, Jonas Huebotter will guide us through the world of test-time training — a setting in which the model is adapted for each new input during the prediction phase. This unlocks powerful new applications by letting the model adapt and search for the most relevant information to solve the task at hand for this specific input.
+This is one of the most promising strategies on the ARC-AGI challenge .
+
+
+
+We look forward to your participation in this discussion on how test-time training lets models specialize to each task and eventually solve problems that are out of reach for the initial model.
+
+
+
+Abstract
+
+
+The standard paradigm of machine learning separates training and testing. Training aims to learn a model by extracting general rules from data, and testing applies this model to new, unseen data. We study an alternative paradigm where the model is trained at test-time specifically for the given task. We investigate why such test-time training can effectively specialize a model to individual tasks. Further, we demonstrate that such test-time training enables models to continually improve and eventually solve challenging tasks, which are out of reach for the initial model.
+
+
+
+Biography
+
+Jonas Huebotter is a PhD student in the Learning and Adaptive Systems Group at
+ETH Zurich working with
+Andreas Krause .
+Prior to this, he obtained a Master’s degree in Theoretical Computer Science and Machine Learning from ETH Zurich and a Bachelor’s degree in Computer Science and Mathematics from the Technical University of Munich.
+He is a recipient of the ETH Medal .
+His research aims to leverage foundation models for solving hard tasks through specialization and reinforcement learning. Furthermore, his work encompasses probabilistic inference, optimization, and online learning.
+
+
+
+
+
+Event Info
+
+Please help us plan ahead by registering for the event via Meetup:
+
+
+
+ Event Registration
+ .
+
+
+
+
+
+
+ What?
+ {{ page.title}}
+
+
+
+
+ Who?
+ {{ page.subtitle}}
+
+
+ When?
+ {{ page.custom_date}}
+
+
+ Where?
+ Zoom
+
+
+
+ Registration
+
+
+
+ meetup event-site
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/_sass/styles.scss b/_sass/styles.scss
index e679d0ee85..7dce3ac68f 100644
--- a/_sass/styles.scss
+++ b/_sass/styles.scss
@@ -1,2 +1,10 @@
// Import Core Clean Blog SCSS
@import "../assets/vendor/startbootstrap-clean-blog/scss/styles.scss";
+
+// Example Change:
+// Changing how a font is defined!
+// Overwrite for .page-heading h1 with values in {} rem
+// .page-heading h1 {
+// font-weight: 100;
+// font-size: 0.5rem !important;
+// }
diff --git a/about.html b/about.html
index c661ef64e4..192676b0db 100644
--- a/about.html
+++ b/about.html
@@ -1,12 +1,186 @@
---
layout: page
+title: we are heidelberg.ai
+description:
+background: 'https://media.giphy.com/media/cT3wMhLGQWdKU/giphy.gif'
+---
+
+organizers
+
+
+
+
+board
+
+
+scientific coordinators
+
+
+program committee
+
+
+
+*initiators of heidelberg.ai
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+For ideas, contributions and event suggestions please contact us at heidelberg.ai@dkfz.de .
+
+
+
+
+
\ No newline at end of file
diff --git a/img/bg-about.jpg b/img/bg-about.jpg
deleted file mode 100644
index cd5302f96b..0000000000
Binary files a/img/bg-about.jpg and /dev/null differ
diff --git a/img/bg-contact.jpg b/img/bg-contact.jpg
deleted file mode 100644
index cf757fabf1..0000000000
Binary files a/img/bg-contact.jpg and /dev/null differ
diff --git a/img/bg-index.jpg b/img/bg-index.jpg
deleted file mode 100644
index 26cd395b5e..0000000000
Binary files a/img/bg-index.jpg and /dev/null differ
diff --git a/img/bg-post.jpg b/img/bg-post.jpg
deleted file mode 100644
index 4c16287f2b..0000000000
Binary files a/img/bg-post.jpg and /dev/null differ
diff --git a/img/hai/dkfz_logo.png b/img/hai/dkfz_logo.png
new file mode 100644
index 0000000000..9c931996ec
Binary files /dev/null and b/img/hai/dkfz_logo.png differ
diff --git a/img/hai/logo_left.svg b/img/hai/logo_left.svg
new file mode 100644
index 0000000000..8e39e4a6b5
--- /dev/null
+++ b/img/hai/logo_left.svg
@@ -0,0 +1,207 @@
+
+
+
+
+
+
+
+
+
+
+
+ image/svg+xml
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ heidelberg.ai
+
+
diff --git a/img/hai/logo_left_black_300.png b/img/hai/logo_left_black_300.png
new file mode 100644
index 0000000000..4b1310254e
Binary files /dev/null and b/img/hai/logo_left_black_300.png differ
diff --git a/img/hai/logo_left_black_500.png b/img/hai/logo_left_black_500.png
new file mode 100644
index 0000000000..6e5be6f341
Binary files /dev/null and b/img/hai/logo_left_black_500.png differ
diff --git a/img/hai/logo_left_offwhite.png b/img/hai/logo_left_offwhite.png
new file mode 100644
index 0000000000..eb49d65033
Binary files /dev/null and b/img/hai/logo_left_offwhite.png differ
diff --git a/img/hai/logo_left_offwhite.svg b/img/hai/logo_left_offwhite.svg
new file mode 100644
index 0000000000..b2b53b9554
--- /dev/null
+++ b/img/hai/logo_left_offwhite.svg
@@ -0,0 +1,208 @@
+
+
+
+
+
+
+
+
+
+
+
+ image/svg+xml
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ heidelberg.ai
+
+
diff --git a/img/hai/logo_only.png b/img/hai/logo_only.png
new file mode 100644
index 0000000000..257ce5da6d
Binary files /dev/null and b/img/hai/logo_only.png differ
diff --git a/img/hai/logo_only.svg b/img/hai/logo_only.svg
new file mode 100644
index 0000000000..73cefe558a
--- /dev/null
+++ b/img/hai/logo_only.svg
@@ -0,0 +1,572 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/img/organizations/DKFZ_Logo.png b/img/organizations/DKFZ_Logo.png
new file mode 100644
index 0000000000..97edc5646b
Binary files /dev/null and b/img/organizations/DKFZ_Logo.png differ
diff --git a/img/organizations/DKFZ_Logo_horizontal.png b/img/organizations/DKFZ_Logo_horizontal.png
new file mode 100644
index 0000000000..8f07ef88f4
Binary files /dev/null and b/img/organizations/DKFZ_Logo_horizontal.png differ
diff --git a/img/organizations/DKFZ_Logo_vertical.png b/img/organizations/DKFZ_Logo_vertical.png
new file mode 100644
index 0000000000..c8b89cf5c1
Binary files /dev/null and b/img/organizations/DKFZ_Logo_vertical.png differ
diff --git a/img/organizations/IMSY.png b/img/organizations/IMSY.png
new file mode 100644
index 0000000000..3f24ad8580
Binary files /dev/null and b/img/organizations/IMSY.png differ
diff --git a/img/organizations/cami_logo.png b/img/organizations/cami_logo.png
new file mode 100644
index 0000000000..59fb08c0d4
Binary files /dev/null and b/img/organizations/cami_logo.png differ
diff --git a/img/organizations/cvl_logo.png b/img/organizations/cvl_logo.png
new file mode 100644
index 0000000000..9f1ada6520
Binary files /dev/null and b/img/organizations/cvl_logo.png differ
diff --git a/img/organizations/hi-logo.png b/img/organizations/hi-logo.png
new file mode 100644
index 0000000000..4e8a2452ac
Binary files /dev/null and b/img/organizations/hi-logo.png differ
diff --git a/img/organizations/iml-logo.png b/img/organizations/iml-logo.png
new file mode 100644
index 0000000000..06dabaa528
Binary files /dev/null and b/img/organizations/iml-logo.png differ
diff --git a/img/organizations/iwr-logo_180.png b/img/organizations/iwr-logo_180.png
new file mode 100644
index 0000000000..1d541f8a05
Binary files /dev/null and b/img/organizations/iwr-logo_180.png differ
diff --git a/img/organizations/mic_logo.png b/img/organizations/mic_logo.png
new file mode 100644
index 0000000000..4c7d755fe4
Binary files /dev/null and b/img/organizations/mic_logo.png differ
diff --git a/img/organizations/vll-logo.png b/img/organizations/vll-logo.png
new file mode 100644
index 0000000000..990e8121c9
Binary files /dev/null and b/img/organizations/vll-logo.png differ
diff --git a/img/people/benjamin.png b/img/people/benjamin.png
new file mode 100644
index 0000000000..22b21e5904
Binary files /dev/null and b/img/people/benjamin.png differ
diff --git a/img/people/carsten.png b/img/people/carsten.png
new file mode 100644
index 0000000000..8153123010
Binary files /dev/null and b/img/people/carsten.png differ
diff --git a/img/people/carsten_lueth.png b/img/people/carsten_lueth.png
new file mode 100644
index 0000000000..23a6e90f87
Binary files /dev/null and b/img/people/carsten_lueth.png differ
diff --git a/img/people/daniela.png b/img/people/daniela.png
new file mode 100644
index 0000000000..6e0ed0b040
Binary files /dev/null and b/img/people/daniela.png differ
diff --git a/img/people/gregor.png b/img/people/gregor.png
new file mode 100644
index 0000000000..5081a3696b
Binary files /dev/null and b/img/people/gregor.png differ
diff --git a/img/people/jens.png b/img/people/jens.png
new file mode 100644
index 0000000000..3b46d8e54a
Binary files /dev/null and b/img/people/jens.png differ
diff --git a/img/people/jessica.png b/img/people/jessica.png
new file mode 100644
index 0000000000..44ec780b81
Binary files /dev/null and b/img/people/jessica.png differ
diff --git a/img/people/klaus.png b/img/people/klaus.png
new file mode 100644
index 0000000000..ba4e48e9cb
Binary files /dev/null and b/img/people/klaus.png differ
diff --git a/img/people/lena.png b/img/people/lena.png
new file mode 100644
index 0000000000..72a6c8cf50
Binary files /dev/null and b/img/people/lena.png differ
diff --git a/img/people/maximilian_rok.png b/img/people/maximilian_rok.png
new file mode 100644
index 0000000000..4283a027c1
Binary files /dev/null and b/img/people/maximilian_rok.png differ
diff --git a/img/people/paul.png b/img/people/paul.png
new file mode 100644
index 0000000000..f05936a386
Binary files /dev/null and b/img/people/paul.png differ
diff --git a/img/people/santhosh.png b/img/people/santhosh.png
new file mode 100644
index 0000000000..21d305dc94
Binary files /dev/null and b/img/people/santhosh.png differ
diff --git a/img/people/simon.png b/img/people/simon.png
new file mode 100644
index 0000000000..ba9a23ede1
Binary files /dev/null and b/img/people/simon.png differ
diff --git a/img/people/tassilo.png b/img/people/tassilo.png
new file mode 100644
index 0000000000..50ff4ee5e3
Binary files /dev/null and b/img/people/tassilo.png differ
diff --git a/img/posts/2022-05-05-alpha-fold-3.gif b/img/posts/2022-05-05-alpha-fold-3.gif
new file mode 100644
index 0000000000..9be60432bc
Binary files /dev/null and b/img/posts/2022-05-05-alpha-fold-3.gif differ
diff --git a/img/posts/2022-05-05-alpha-fold.gif b/img/posts/2022-05-05-alpha-fold.gif
new file mode 100644
index 0000000000..d3c86b3144
Binary files /dev/null and b/img/posts/2022-05-05-alpha-fold.gif differ
diff --git a/img/posts/2022-05-18-alan-karthikesalingam.png b/img/posts/2022-05-18-alan-karthikesalingam.png
new file mode 100644
index 0000000000..547959430a
Binary files /dev/null and b/img/posts/2022-05-18-alan-karthikesalingam.png differ
diff --git a/img/posts/2022-05-18-code-to-clinic_2.png b/img/posts/2022-05-18-code-to-clinic_2.png
new file mode 100644
index 0000000000..da93a17652
Binary files /dev/null and b/img/posts/2022-05-18-code-to-clinic_2.png differ
diff --git a/img/posts/2022-09-27-andrulis-background.jpg b/img/posts/2022-09-27-andrulis-background.jpg
new file mode 100644
index 0000000000..be4643cda6
Binary files /dev/null and b/img/posts/2022-09-27-andrulis-background.jpg differ
diff --git a/img/posts/2022-09-27-jonas-andrulis-2.png b/img/posts/2022-09-27-jonas-andrulis-2.png
new file mode 100644
index 0000000000..fb023933b9
Binary files /dev/null and b/img/posts/2022-09-27-jonas-andrulis-2.png differ
diff --git a/img/posts/2022-09-27-jonas-andrulis-background.png b/img/posts/2022-09-27-jonas-andrulis-background.png
new file mode 100644
index 0000000000..d72c12999e
Binary files /dev/null and b/img/posts/2022-09-27-jonas-andrulis-background.png differ
diff --git a/img/posts/2023-02-09-stable-diffusion_2.png b/img/posts/2023-02-09-stable-diffusion_2.png
new file mode 100644
index 0000000000..f9b8575216
Binary files /dev/null and b/img/posts/2023-02-09-stable-diffusion_2.png differ
diff --git a/img/posts/2023-06-28-gael.jpeg b/img/posts/2023-06-28-gael.jpeg
new file mode 100644
index 0000000000..6f8bcf7cb3
Binary files /dev/null and b/img/posts/2023-06-28-gael.jpeg differ
diff --git a/img/posts/2023-07-26-agi-alpha.jpeg b/img/posts/2023-07-26-agi-alpha.jpeg
new file mode 100644
index 0000000000..1e32a7c390
Binary files /dev/null and b/img/posts/2023-07-26-agi-alpha.jpeg differ
diff --git a/img/posts/2023-07-26-agi-summertalks.jpg b/img/posts/2023-07-26-agi-summertalks.jpg
new file mode 100644
index 0000000000..ea29e4d526
Binary files /dev/null and b/img/posts/2023-07-26-agi-summertalks.jpg differ
diff --git a/img/posts/2023-09-20-democritus.png b/img/posts/2023-09-20-democritus.png
new file mode 100644
index 0000000000..56ccadced7
Binary files /dev/null and b/img/posts/2023-09-20-democritus.png differ
diff --git a/img/posts/2023-09-20-niki_kilbertus.png b/img/posts/2023-09-20-niki_kilbertus.png
new file mode 100644
index 0000000000..c1d83dd62b
Binary files /dev/null and b/img/posts/2023-09-20-niki_kilbertus.png differ
diff --git a/img/posts/2023-10-04-qi-dou.jpg b/img/posts/2023-10-04-qi-dou.jpg
new file mode 100644
index 0000000000..a23e5582b0
Binary files /dev/null and b/img/posts/2023-10-04-qi-dou.jpg differ
diff --git a/img/posts/2023-10-04-robotic-surgery.png b/img/posts/2023-10-04-robotic-surgery.png
new file mode 100644
index 0000000000..0dbacf56ba
Binary files /dev/null and b/img/posts/2023-10-04-robotic-surgery.png differ
diff --git a/img/posts/2023-10-16-Krull.jpg b/img/posts/2023-10-16-Krull.jpg
new file mode 100644
index 0000000000..cedef96940
Binary files /dev/null and b/img/posts/2023-10-16-Krull.jpg differ
diff --git a/img/posts/2023-11-28-letitia-parcalabescu.png b/img/posts/2023-11-28-letitia-parcalabescu.png
new file mode 100644
index 0000000000..746e5d228e
Binary files /dev/null and b/img/posts/2023-11-28-letitia-parcalabescu.png differ
diff --git a/img/posts/2023-11-28-parcalabescu-background.jpg b/img/posts/2023-11-28-parcalabescu-background.jpg
new file mode 100644
index 0000000000..213e0cf402
Binary files /dev/null and b/img/posts/2023-11-28-parcalabescu-background.jpg differ
diff --git a/img/posts/2024-01-31_rockloev_background.png b/img/posts/2024-01-31_rockloev_background.png
new file mode 100644
index 0000000000..a55e7969f1
Binary files /dev/null and b/img/posts/2024-01-31_rockloev_background.png differ
diff --git a/img/posts/2024-01-31_rockloev_s.png b/img/posts/2024-01-31_rockloev_s.png
new file mode 100644
index 0000000000..94fa057397
Binary files /dev/null and b/img/posts/2024-01-31_rockloev_s.png differ
diff --git a/img/posts/2024-02-06_Pearse_Keane_Ophtamology.png b/img/posts/2024-02-06_Pearse_Keane_Ophtamology.png
new file mode 100644
index 0000000000..cd170671aa
Binary files /dev/null and b/img/posts/2024-02-06_Pearse_Keane_Ophtamology.png differ
diff --git a/img/posts/2024-02-06_pearse_keane_circle.png b/img/posts/2024-02-06_pearse_keane_circle.png
new file mode 100644
index 0000000000..7872fdda88
Binary files /dev/null and b/img/posts/2024-02-06_pearse_keane_circle.png differ
diff --git a/img/posts/2024-05-14-dallE_hd_ai_zeynep_akata.jpg b/img/posts/2024-05-14-dallE_hd_ai_zeynep_akata.jpg
new file mode 100644
index 0000000000..beb8fa6451
Binary files /dev/null and b/img/posts/2024-05-14-dallE_hd_ai_zeynep_akata.jpg differ
diff --git a/img/posts/2024-05-14-portrait_zeynep_akata.jpg b/img/posts/2024-05-14-portrait_zeynep_akata.jpg
new file mode 100644
index 0000000000..46a4d6af85
Binary files /dev/null and b/img/posts/2024-05-14-portrait_zeynep_akata.jpg differ
diff --git a/img/posts/2024-05-23-rabduck.jpg b/img/posts/2024-05-23-rabduck.jpg
new file mode 100644
index 0000000000..7cb77565f7
Binary files /dev/null and b/img/posts/2024-05-23-rabduck.jpg differ
diff --git a/img/posts/2024-05-23_david_stutz_circle.png b/img/posts/2024-05-23_david_stutz_circle.png
new file mode 100644
index 0000000000..fd7f2d3b78
Binary files /dev/null and b/img/posts/2024-05-23_david_stutz_circle.png differ
diff --git a/img/posts/2024-12-18-azizi.jpg b/img/posts/2024-12-18-azizi.jpg
new file mode 100644
index 0000000000..d55881c714
Binary files /dev/null and b/img/posts/2024-12-18-azizi.jpg differ
diff --git a/img/posts/2024-12-18-ekg.jpg b/img/posts/2024-12-18-ekg.jpg
new file mode 100644
index 0000000000..66c6756c1d
Binary files /dev/null and b/img/posts/2024-12-18-ekg.jpg differ
diff --git a/img/posts/2025-01-23_background.jpeg b/img/posts/2025-01-23_background.jpeg
new file mode 100644
index 0000000000..a7cfb76597
Binary files /dev/null and b/img/posts/2025-01-23_background.jpeg differ
diff --git a/img/posts/2025-01-23_cell.jpg b/img/posts/2025-01-23_cell.jpg
new file mode 100644
index 0000000000..254db1a8cc
Binary files /dev/null and b/img/posts/2025-01-23_cell.jpg differ
diff --git a/img/posts/2025-01-23_charlotte_bunne_circle.png b/img/posts/2025-01-23_charlotte_bunne_circle.png
new file mode 100644
index 0000000000..c67a29f2a8
Binary files /dev/null and b/img/posts/2025-01-23_charlotte_bunne_circle.png differ
diff --git a/img/posts/2025-05-13-asano_circle.png b/img/posts/2025-05-13-asano_circle.png
new file mode 100644
index 0000000000..52e2338644
Binary files /dev/null and b/img/posts/2025-05-13-asano_circle.png differ
diff --git a/img/posts/2025-05-13-tun-background.jpg b/img/posts/2025-05-13-tun-background.jpg
new file mode 100644
index 0000000000..9db0e77011
Binary files /dev/null and b/img/posts/2025-05-13-tun-background.jpg differ
diff --git a/img/posts/2025-07-16_brachmann.png b/img/posts/2025-07-16_brachmann.png
new file mode 100644
index 0000000000..e8aecee4b0
Binary files /dev/null and b/img/posts/2025-07-16_brachmann.png differ
diff --git a/img/posts/2025-07-16_structure.png b/img/posts/2025-07-16_structure.png
new file mode 100644
index 0000000000..7795c1ad4b
Binary files /dev/null and b/img/posts/2025-07-16_structure.png differ
diff --git a/img/posts/2025-08-14-background.jpg b/img/posts/2025-08-14-background.jpg
new file mode 100644
index 0000000000..8cb4d2bd0f
Binary files /dev/null and b/img/posts/2025-08-14-background.jpg differ
diff --git a/img/posts/2025-08-14-seong_john_oh.png b/img/posts/2025-08-14-seong_john_oh.png
new file mode 100644
index 0000000000..df35637a28
Binary files /dev/null and b/img/posts/2025-08-14-seong_john_oh.png differ
diff --git a/img/posts/2025-11-05-jonas_huebotter.png b/img/posts/2025-11-05-jonas_huebotter.png
new file mode 100644
index 0000000000..0ccd99782c
Binary files /dev/null and b/img/posts/2025-11-05-jonas_huebotter.png differ
diff --git a/img/posts/active_inference.jpg b/img/posts/active_inference.jpg
new file mode 100644
index 0000000000..0fa02e3afc
Binary files /dev/null and b/img/posts/active_inference.jpg differ
diff --git a/img/posts/arc-task-grids.jpg b/img/posts/arc-task-grids.jpg
new file mode 100644
index 0000000000..6cd89e0d7d
Binary files /dev/null and b/img/posts/arc-task-grids.jpg differ
diff --git a/img/posts/bronstein.png b/img/posts/bronstein.png
new file mode 100644
index 0000000000..13b8139cc6
Binary files /dev/null and b/img/posts/bronstein.png differ
diff --git a/img/posts/butter.png b/img/posts/butter.png
new file mode 100644
index 0000000000..c7a70a3223
Binary files /dev/null and b/img/posts/butter.png differ
diff --git a/img/posts/christian_baumgartner_background.png b/img/posts/christian_baumgartner_background.png
new file mode 100644
index 0000000000..760b0a8daf
Binary files /dev/null and b/img/posts/christian_baumgartner_background.png differ
diff --git a/img/posts/deep_nilm.jpg b/img/posts/deep_nilm.jpg
new file mode 100644
index 0000000000..6690a949e8
Binary files /dev/null and b/img/posts/deep_nilm.jpg differ
diff --git a/img/posts/dnc.png b/img/posts/dnc.png
new file mode 100644
index 0000000000..7c6cfbaec6
Binary files /dev/null and b/img/posts/dnc.png differ
diff --git a/img/posts/drug_discovery.jpg b/img/posts/drug_discovery.jpg
new file mode 100644
index 0000000000..e04673f36a
Binary files /dev/null and b/img/posts/drug_discovery.jpg differ
diff --git a/img/posts/feindt_banner.jpg b/img/posts/feindt_banner.jpg
new file mode 100644
index 0000000000..bfed43b92b
Binary files /dev/null and b/img/posts/feindt_banner.jpg differ
diff --git a/img/posts/glocker.jpg b/img/posts/glocker.jpg
new file mode 100644
index 0000000000..53f606453e
Binary files /dev/null and b/img/posts/glocker.jpg differ
diff --git a/img/posts/gqn.png b/img/posts/gqn.png
new file mode 100644
index 0000000000..d81d855eb4
Binary files /dev/null and b/img/posts/gqn.png differ
diff --git a/img/posts/grantcharov.jpg b/img/posts/grantcharov.jpg
new file mode 100644
index 0000000000..24936e0af7
Binary files /dev/null and b/img/posts/grantcharov.jpg differ
diff --git a/img/posts/hamprecht.png b/img/posts/hamprecht.png
new file mode 100644
index 0000000000..28654c3556
Binary files /dev/null and b/img/posts/hamprecht.png differ
diff --git a/img/posts/hidary_banner.jpg b/img/posts/hidary_banner.jpg
new file mode 100644
index 0000000000..9c89f821d9
Binary files /dev/null and b/img/posts/hidary_banner.jpg differ
diff --git a/img/posts/koehte_background.png b/img/posts/koehte_background.png
new file mode 100644
index 0000000000..67d5ba8b15
Binary files /dev/null and b/img/posts/koehte_background.png differ
diff --git a/img/posts/koethe_background.jpg b/img/posts/koethe_background.jpg
new file mode 100644
index 0000000000..0629c8c45a
Binary files /dev/null and b/img/posts/koethe_background.jpg differ
diff --git a/img/posts/leibig_banner.png b/img/posts/leibig_banner.png
new file mode 100644
index 0000000000..6ba37ddab2
Binary files /dev/null and b/img/posts/leibig_banner.png differ
diff --git a/img/posts/leibig_banner_2.png b/img/posts/leibig_banner_2.png
new file mode 100644
index 0000000000..ba13fb8ea1
Binary files /dev/null and b/img/posts/leibig_banner_2.png differ
diff --git a/img/posts/lena_maier_hein.jpg b/img/posts/lena_maier_hein.jpg
new file mode 100644
index 0000000000..e708660c16
Binary files /dev/null and b/img/posts/lena_maier_hein.jpg differ
diff --git a/img/posts/mann.jpg b/img/posts/mann.jpg
new file mode 100644
index 0000000000..5d4aae0990
Binary files /dev/null and b/img/posts/mann.jpg differ
diff --git a/img/posts/mathias_niepert_background.png b/img/posts/mathias_niepert_background.png
new file mode 100644
index 0000000000..b6a712449c
Binary files /dev/null and b/img/posts/mathias_niepert_background.png differ
diff --git a/img/posts/microscopy_banner_pexels-chokniti-khongchum-3938022.jpg b/img/posts/microscopy_banner_pexels-chokniti-khongchum-3938022.jpg
new file mode 100644
index 0000000000..5046d8592b
Binary files /dev/null and b/img/posts/microscopy_banner_pexels-chokniti-khongchum-3938022.jpg differ
diff --git a/img/posts/motor_skills.png b/img/posts/motor_skills.png
new file mode 100644
index 0000000000..63efbc08bd
Binary files /dev/null and b/img/posts/motor_skills.png differ
diff --git a/img/posts/neurips2021_banner.jpg b/img/posts/neurips2021_banner.jpg
new file mode 100644
index 0000000000..cc6497d17a
Binary files /dev/null and b/img/posts/neurips2021_banner.jpg differ
diff --git a/img/posts/neurips_logo.png b/img/posts/neurips_logo.png
new file mode 100644
index 0000000000..ace1869378
Binary files /dev/null and b/img/posts/neurips_logo.png differ
diff --git a/img/posts/ommer_banner.jpg b/img/posts/ommer_banner.jpg
new file mode 100644
index 0000000000..d51e2f6892
Binary files /dev/null and b/img/posts/ommer_banner.jpg differ
diff --git a/img/posts/parameter_space.png b/img/posts/parameter_space.png
new file mode 100644
index 0000000000..b0d24b16a5
Binary files /dev/null and b/img/posts/parameter_space.png differ
diff --git a/img/posts/part_segmentation_banner.jpg b/img/posts/part_segmentation_banner.jpg
new file mode 100644
index 0000000000..58112767b6
Binary files /dev/null and b/img/posts/part_segmentation_banner.jpg differ
diff --git a/img/posts/pong.jpg b/img/posts/pong.jpg
new file mode 100644
index 0000000000..68f0aa506f
Binary files /dev/null and b/img/posts/pong.jpg differ
diff --git a/img/posts/post-binary_notext.gif b/img/posts/post-binary_notext.gif
new file mode 100644
index 0000000000..a164937f14
Binary files /dev/null and b/img/posts/post-binary_notext.gif differ
diff --git a/img/posts/post_binary_logo.jpg b/img/posts/post_binary_logo.jpg
new file mode 100644
index 0000000000..a245373f22
Binary files /dev/null and b/img/posts/post_binary_logo.jpg differ
diff --git a/img/posts/robot-learning.jpg b/img/posts/robot-learning.jpg
new file mode 100644
index 0000000000..3e17245a36
Binary files /dev/null and b/img/posts/robot-learning.jpg differ
diff --git a/img/posts/ruder_background.png b/img/posts/ruder_background.png
new file mode 100644
index 0000000000..d2c29122f8
Binary files /dev/null and b/img/posts/ruder_background.png differ
diff --git a/img/posts/search_tree.png b/img/posts/search_tree.png
new file mode 100644
index 0000000000..b4c03a4178
Binary files /dev/null and b/img/posts/search_tree.png differ
diff --git a/img/posts/timo_denk.jpg b/img/posts/timo_denk.jpg
new file mode 100644
index 0000000000..652dae660c
Binary files /dev/null and b/img/posts/timo_denk.jpg differ
diff --git a/img/posts/tractogram.jpg b/img/posts/tractogram.jpg
new file mode 100644
index 0000000000..930b0efa1d
Binary files /dev/null and b/img/posts/tractogram.jpg differ
diff --git a/img/posts/vincent_background.gif b/img/posts/vincent_background.gif
new file mode 100644
index 0000000000..cc901bc757
Binary files /dev/null and b/img/posts/vincent_background.gif differ
diff --git a/img/posts/welling_banner.png b/img/posts/welling_banner.png
new file mode 100644
index 0000000000..3c90d3d090
Binary files /dev/null and b/img/posts/welling_banner.png differ
diff --git a/img/utils/livestream-high-res.png b/img/utils/livestream-high-res.png
new file mode 100644
index 0000000000..d5e1da5bad
Binary files /dev/null and b/img/utils/livestream-high-res.png differ
diff --git a/imprint.html b/imprint.html
new file mode 100644
index 0000000000..a340d17e78
--- /dev/null
+++ b/imprint.html
@@ -0,0 +1,64 @@
+---
+layout: page
+title: we are heidelberg.ai
+description:
+background: 'https://media.giphy.com/media/cT3wMhLGQWdKU/giphy.gif'
+---
+
+
+
+
+
+
+Imprint
+This section provides the legally required information on provider identification and other relevant legal details regarding this website.
+
+Provider
+
+The provider of this website is the Helmholtz-Gemeinschaft Deutscher Forschungszentren e.V.
+
+
+Address
+
+Association Headquarters and Office Bonn
+Ahrstraße 45, 53175 Bonn
+Telephone: +49 (0) 228 30818-0
+Fax: +49 (0) 228 30818-30
+Email: info (at) helmholtz.de
+Web: www.helmholtz-hida.de
+
+
+Register of Associations
+
+The Helmholtz-Gemeinschaft Deutscher Forschungszentren e.V. is entered in the Register of Associations at Bonn Local Court under the registration number VR 7942.
+
+
+Representative
+
+The association is legally represented by the Executive Board. The President is:
+
+Prof. Dr. med. Dr. h.c. mult. Otmar D. Wiestler
+Berlin Office
+Anna-Louisa-Karsch-Str. 2, 10178 Berlin
+Telephone: +49 (0) 30 206329-52
+Fax: +49 (0) 30 206329-59
+Email: president (at) helmholtz.de
+Web: www.helmholtz.de
+
+
+Editorial Staff
+
+Responsible for editorial content per Section 55 (2) RStV:
+
+Prof. Dr. Klaus Maier-Hein
+Division of Medical Image Computing, DKFZ
+Im Neuenheimer Feld 223, 69120 Heidelberg
+Email: k.maier-hein (at) dkfz-heidelberg.de
+Web: Medical Image Computing
+
+
+Editorial Office
+
+Santhosh Parampottupadam
+Email: santhosh.parampottupadam (at) dkfz-heidelberg.de
+
diff --git a/index.html b/index.html
index 9fcc79a236..21da271a28 100644
--- a/index.html
+++ b/index.html
@@ -1,4 +1,23 @@
---
layout: home
-background: '/img/bg-index.jpg'
+background: 'https://media.giphy.com/media/l3vRcrVqhBVSpJte0/giphy.gif'
---
+
+ Agenda
+
+ heidelberg.ai is a student-led initiative for enthusiasts, students and researchers in the field of
+ Artificial Intelligence, hosted by the
+ German Cancer Research Center (DKFZ).
+ We connect more than 3,100 members and provide opportunities for technical discussions, networking
+ and connecting with industry partners.
+ Our vision is to create a hub of AI expertise in the Heidelberg area.
+ People of all backgrounds and experience levels are welcome to participate.
+
+
+Join Us
+
 We manage our events through meetup.com. Registering on our platform lets you receive notifications of
+ future events. To help us plan ahead and to connect with other attendees, you can additionally sign up for
+ individual events. Interested? Proceed to free registration here: meetup page.
+
\ No newline at end of file
diff --git a/posts/index.html b/posts/index.html
index d4876b74ed..9babd6d461 100644
--- a/posts/index.html
+++ b/posts/index.html
@@ -1,7 +1,7 @@
---
layout: page
-title: Posts
-background: '/img/bg-post.jpg'
+title: Events
+background: "https://78.media.tumblr.com/23eeea1415bc929a2e849fcc4dd2168b/tumblr_nw87oeE6N41uhfpc5o1_500.gif"
---
{% for post in paginator.posts %}
@@ -15,13 +15,45 @@ {{ post.subtitle }}
{{ post.excerpt | strip_html | truncatewords: 15 }}
{% endif %}
- Posted by
- {% if post.author %}
- {{ post.author }}
- {% else %}
- {{ site.author }}
+
+
+
+
+
+
+
+
+
+
+
+
+
 {% if post.future_date %}
+
+ upcoming:
+ {% if post.custom_date %}
+ {{ post.custom_date }}
+ {% else %}
+ {{ post.date | date: '%B %d, %Y' }}
+ {% endif %}
+ {% if post.time %}
+ {{ post.time }}
+ {% endif %}
+
+ {% else %}
+ {% if post.custom_date %}
+ {{ post.custom_date }}
+ {% else %}
+ {{ post.date | date: '%B %d, %Y' }}
+ {% endif %}
{% endif %}
- on {{ post.date | date: '%B %d, %Y' }} · {% include read_time.html content=post.content %}
diff --git a/privacy.html b/privacy.html
new file mode 100644
index 0000000000..33fcaeb70d
--- /dev/null
+++ b/privacy.html
@@ -0,0 +1,81 @@
+---
+layout: page
+title: we are heidelberg.ai
+description:
+background: 'https://media.giphy.com/media/cT3wMhLGQWdKU/giphy.gif'
+---
+
+
+
+
+
+Privacy Policy
+Data Protection Declaration
+Last updated: 07/04/2025
+
+1. Data Protection at a Glance
+General Information
+
+The following information provides a simple overview of what happens to your personal data when you visit our website. Personal data is any data that can be used to identify you personally. For detailed information about data protection, please refer to the full policy below.
+
+
+Data Collection on This Website
+
+Who is responsible for data collection on this website?
+Data processing on this website is carried out by the website operator. Contact details can be found in the Imprint section. No data is transferred to external service providers or outside the EU.
+
+
+2. Hosting and Content Delivery Networks (CDN)
+
+This website is hosted on servers of the German Cancer Research Center (DKFZ). The servers automatically collect and store information in server log files, which your browser automatically transmits when you visit the website.
+
+
+3. General Information and Mandatory Information
+Data Protection
+
+We take the protection of your personal data very seriously. We treat your data confidentially and in accordance with statutory data protection regulations and this privacy policy.
+
+
+Use of Cookies
+
+Our website does not use cookies or similar tracking technologies.
+
+
+4. Data Collection on Our Website
+Server Log Files
+
+The website provider automatically collects and stores information in server log files, which your browser transmits to us. These include:
+
+
+ Browser type and version
+ Operating system used
+ Referrer URL
+ Hostname of the accessing computer
+ Time of the server request
+ IP address
+
+
+5. Contact Form and Email Contact
+
+If you send us inquiries via the contact form or email, your details from the inquiry, including your contact information, will be stored for the purpose of processing the inquiry and any follow-up.
+
+
+6. Your Rights
+You have the following rights regarding your personal data:
+
+ Right to information
+ Right to correction or deletion
+ Right to restriction of processing
+ Right to object to processing
+ Right to data portability
+
+
+7. Name and Address of the Data Protection Officer
+
+Data Protection Officer
+German Cancer Research Center (DKFZ) – Foundation under Public Law
+Im Neuenheimer Feld 280
+69120 Heidelberg
+Phone: +49 (0)6221 420
+Email: datenschutz@dkfz.de
+
diff --git a/slides/2017-07-25-differentiable-neural-computers.pdf b/slides/2017-07-25-differentiable-neural-computers.pdf
new file mode 100644
index 0000000000..032b5bf8a7
Binary files /dev/null and b/slides/2017-07-25-differentiable-neural-computers.pdf differ
diff --git a/slides/2018-05-26-parameter-space-noise.pdf b/slides/2018-05-26-parameter-space-noise.pdf
new file mode 100644
index 0000000000..78ebbed48f
Binary files /dev/null and b/slides/2018-05-26-parameter-space-noise.pdf differ
diff --git a/slides/2018-07-30-robot-learning.pdf b/slides/2018-07-30-robot-learning.pdf
new file mode 100644
index 0000000000..bab75b5983
Binary files /dev/null and b/slides/2018-07-30-robot-learning.pdf differ