Leaf Lens

Investigating Concept Learning in Leaf Classification with Deep Neural Networks
Welcome to our scientific investigation into how deep neural networks learn and use concepts to classify leaves into over 150 distinct families. This collaborative effort brings together researchers from several fields and uses Explainable AI (XAI) to examine the meaningful patterns found by the trained neural network.
Project Goals
Our primary objective is to leverage Explainable AI techniques to understand the concepts that matter most for neural networks when classifying leaves. By revealing these concepts, we aim to provide:
- Insights into the model’s decision-making process, identifying the key features used for classification.
- A deeper understanding of the relationships between biological taxonomy and computational representations.
- Visual and interactive tools for exploring how concepts and classes are structured within the learned representations.
Key Highlights
- Number of Classes: 150+ leaf families
- Dataset Size: Over 100,000 leaf samples
- Discovered Concepts: 2,000+ unique concepts identified with concept-extraction methods
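As one illustration of how such concepts can be extracted, a common family of methods factorizes a matrix of layer activations into non-negative parts (NMF-based concept discovery). The sketch below is a toy version with made-up shapes and random data, not the project's actual pipeline:

```python
import numpy as np

# Toy concept extraction via non-negative matrix factorization (NMF) of an
# activation matrix. All shapes and values below are illustrative stand-ins.
rng = np.random.default_rng(0)
A = rng.random((200, 64))           # 200 image patches x 64 activation channels

k = 5                               # number of concepts to extract
W = rng.random((200, k)) + 0.1      # patch-to-concept loadings
H = rng.random((k, 64)) + 0.1       # concept directions in activation space

eps = 1e-9
for _ in range(200):                # multiplicative updates (Lee & Seung)
    H *= (W.T @ A) / (W.T @ W @ H + eps)
    W *= (A @ H.T) / (W @ H @ H.T + eps)

error = np.linalg.norm(A - W @ H)   # reconstruction error of the factorization
```

Each row of `H` is then interpreted as one "concept" in activation space; the real pipeline operates on actual network activations rather than random data.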
What This Website Offers
- Interactive UMAP Visualizations:
  - 2,000+ Concepts: Explore how the network organizes learned concepts in a 2D UMAP projection. Each point represents a distinct concept, clustered by similarity. Hover over clusters for details.
  - 150+ Classes: See how the leaf families relate to one another in the feature space through an interactive UMAP plot. Gain insights into class-level similarities and separations.
Visualizations (interactive plots are embedded here):
- Concepts visualization
- Class visualization
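The 2D layouts above come from projecting high-dimensional concept and class representations down to two coordinates. A minimal, dependency-free sketch of such a projection, using PCA in plain NumPy as a stand-in (the site itself uses UMAP, and the 2,000 × 512 concept matrix below is a random placeholder):

```python
import numpy as np

# Project high-dimensional concept vectors to 2D for plotting.
# Random stand-in data: 2,000 concepts, each a 512-dim feature vector.
rng = np.random.default_rng(0)
concepts = rng.normal(size=(2000, 512))

centered = concepts - concepts.mean(axis=0)
# Top-2 principal directions via SVD of the centered matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T      # (2000, 2) coordinates to plot
```

UMAP (via the umap-learn package) replaces the linear SVD step with a neighborhood-graph embedding that better preserves local cluster structure, which is why nearby points in the site's plots correspond to similar concepts.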
- Class-Specific Pages: For each of the 150+ leaf families, a dedicated page includes:
  - Representative samples from the dataset.
  - Concept visualizations that highlight the features most critical for classifying leaves in this family.
  - Activation heatmaps showing how the neural network processes these leaves.
- Concept-Specific Pages: Each of the 2,000+ discovered concepts has its own page, detailing:
  - Feature visualizations representing the concept.
  - The top 10 leaf images that activate the concept most strongly.
  - Insights into the concept's role in classifying specific leaf families.
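The "top 10 activating images" lists can be produced by ranking per-image concept activations. A toy sketch, assuming a hypothetical activations matrix where `activations[i, j]` holds concept `j`'s activation on image `i` (sizes and values are random stand-ins):

```python
import numpy as np

# Rank the dataset images that most strongly activate a given concept.
rng = np.random.default_rng(0)
activations = rng.random((500, 20))   # 500 images x 20 concepts (toy sizes)

concept_id = 7                        # hypothetical concept of interest
scores = activations[:, concept_id]
top10 = np.argsort(scores)[::-1][:10] # indices of the 10 strongest images
```

On the site, the images behind these indices are then displayed on the concept's page, alongside its feature visualization.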
Navigating the Investigation
- Begin with the UMAP Visualizations to explore the relationships between concepts and classes.
- Dive deeper into Class Pages to learn about specific leaf families and the features the model uses to classify them.
- Explore the Concept Pages for an in-depth look at the learned concepts and their biological or computational significance.
Broader Implications
This research not only provides a detailed understanding of how deep learning models approach the task of leaf classification but also establishes a framework for applying Explainable AI to scientific domains. By uncovering the relationships between learned concepts and biological taxonomy, we hope to inspire future interdisciplinary investigations at the intersection of AI and science.
We invite you to explore the findings, interact with the visualizations, and engage with this collaborative exploration into concept learning.