
Truth About Human Learning: A Stress-Free Guide to Mastering AI/ML


Introduction

Hey everyone! Being in an AI/ML PhD program is a massive undertaking. If you’ve been feeling like the Python code isn’t “clicking” or the Math formulas look like a foreign language, you are definitely not alone. Most of us start by trying to “absorb” the material by reading it over and over, but that often leads to burnout rather than mastery.

The good news? It’s usually not a “you” problem—it’s a strategy problem. By shifting our focus from passive learning to active learning, we can actually spend less time studying while retaining much more.

Here’s a breakdown of how to make your study sessions more effective and a lot less stressful.


Why “Hard” is Actually Good: The Science of Learning

Before we dive into the “how,” let’s look at the “why.” We often feel discouraged when we struggle to remember something, but research shows that the struggle is the signal that you are actually learning.

Below are two common traps and the research behind them. More studies are listed at the bottom of this post.

The Re-reading Trap (Roediger and Karpicke, 2006)

Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.

In a famous 2006 study by Roediger and Karpicke, researchers compared two groups of students:

| Retention Interval | Restudy Group (Read 4x) | Test Group (Read 1x, Test 3x) | Winner |
| --- | --- | --- | --- |
| 5 Minutes Later | 81% | 70% | Restudy (Short-term “Fluency”) |
| 1 Week Later | 40% | 61% | Test Group (Long-term Mastery) |

The Result: While the restudy group felt more confident immediately after studying, their retention dropped to 40% a week later, while the test group held on to 61%. Taking a “quiz” (even if you fail it) forces your brain to build much stronger connections than simply re-reading.


The “Fluency” Trap (Kornell, 2009)

Kornell, N. (2009). Optimising learning using flashcards: Spacing is more effective than cramming. Applied Cognitive Psychology, 23(9), 1297–1317.

Research by Nate Kornell showed that students who cram often perform just as well as “spacers” on a test taken immediately after the session, which is exactly why cramming feels like it works.

The Quantitative Results: On the final test in Kornell’s flashcard experiments, spacing was more effective than cramming for roughly 90% of participants, yet a majority of participants believed cramming had worked better. Immediate fluency is a poor predictor of long-term retention.


Conclusion: The “Desirable Difficulty” Principle

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64).

Psychologist Robert Bjork coined the term “Desirable Difficulty.” He found that when your brain has to work hard to retrieve a memory (that “tip of the tongue” feeling), it’s actually a biological “save” button.

If it feels easy, you’re likely just recognizing the information. If it feels hard, you’re actually encoding it. So, when you struggle to recall a Python function or an AI/ML concept, don’t be discouraged—that’s your brain literally building a stronger neural path!


Comparing Learning Strategies: What Works?

To help us move away from the “busy work” and toward actual growth, here is how common strategies compare. Notice how the “High-Efficiency” side is all about doing rather than just viewing.

| Low-Efficiency Strategy (The “Comfort Zone”) | High-Efficiency Strategy (The “Growth Zone”) | Why the Switch Matters |
| --- | --- | --- |
| Re-reading: Going over your Linear Algebra notes multiple times. | Active Recall: Closing the book and trying to write the formulas from memory. | Recall builds “retrieval paths”; reading just builds “familiarity.” |
| Cramming: Spending 8 hours on Sunday night coding. | Spaced Repetition: Spending 30 mins every other day reviewing concepts. | Spacing prevents the “forgetting curve” from wiping your progress. |
| Blocked Practice: Doing 20 similar Linear Algebra problems in a row. | Interleaved Practice: Mixing a Linear Algebra problem with a Python implementation. | Mixing helps you learn when to use a specific technique in the real world. |
| Rote Memorization: Memorizing a block of code line-by-line. | The Feynman Technique: Explaining the code’s logic to a classmate simply. | If you can explain the “why,” the “how” becomes intuitive. |
| Passive Observation: Watching a tutorial without typing a single line. | Elaborative Interrogation: Asking “Why did the author use this function here?” | Asking “Why” anchors new info to what you already know. |

Putting it Together: Your 5-Step AI/ML Workflow

Instead of doing these in isolation, try this integrated flow. Let’s use Decision Trees as our example:

1. Elaborative Interrogation (The Foundation)

As you read about Decision Trees, ask yourself: “Why do we use Entropy or Gini Impurity to split a node?” (Answer: To find the feature that narrows down the possibilities most effectively, creating “purer” groups of data).
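To make that “why” concrete, here is a minimal sketch of the two impurity measures a split is trying to reduce. It assumes NumPy is available, and the function names are just mine for illustration:

```python
import numpy as np

def entropy(labels):
    """H(S) = -sum(p_i * log2(p_i)), computed over the classes present in `labels`."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def gini(labels):
    """Gini impurity = 1 - sum(p_i ** 2); a cheaper alternative to entropy."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - np.sum(p ** 2))

# A 50/50 node is maximally impure; a purer node scores lower on both measures.
print(entropy([0, 0, 1, 1]), gini([0, 0, 1, 1]))  # 1.0   0.5
print(entropy([0, 0, 0, 1]), gini([0, 0, 0, 1]))  # ~0.811 0.375
```

A good split is simply one that moves the child nodes toward those lower scores.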

2. The Feynman Technique (The Clarity Check)

Explain how a Decision Tree works to a “rubber duck” or a friend, in plain words. The moment you catch yourself saying “it just splits the data somehow,” you’ve found exactly the gap to go back and fill.

3. Interleaved Practice (The Integration)

Don’t just read the theory. Right after working through the splitting math on paper, switch modes: train a small tree in Python on a toy dataset, then go back to a pen-and-paper problem. Alternating between the math and the code (see the sketch below) is what teaches you when each tool applies.
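Here is a minimal sketch of the “code half” of that loop, assuming scikit-learn is installed. The tiny dataset is just the AND-gate truth table used again in the next step:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy dataset: the truth table of the AND gate.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]

# Use the same criterion you just studied on paper ("entropy" or "gini").
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X, y)

# Print the learned splits and compare them to the tree you sketched by hand.
print(export_text(clf, feature_names=["x1", "x2"]))
```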

4. Active Recall (The Stress Test)

After an hour, cover your notes. Try to write down the Entropy formula $H(S) = -\sum_i p_i \log_2 p_i$ or sketch a small tree for a simple logic gate (like AND or OR) from memory. Don’t worry if you get the log base wrong; the struggle to remember is what creates the “save” point in your brain.
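As a quick self-check for this step: the AND gate outputs 0, 0, 0, 1 across its four input combinations, so $p_0 = \tfrac{3}{4}$ and $p_1 = \tfrac{1}{4}$, and the entropy of that output column works out to:

$$
H(S) = -\left(\tfrac{3}{4}\log_2\tfrac{3}{4} + \tfrac{1}{4}\log_2\tfrac{1}{4}\right) \approx 0.811 \text{ bits}
$$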

5. Spaced Repetition (The Long-term Lock)

Create an Anki card for the specific difference between “Information Gain” and “Gini Impurity.” Review it tomorrow, then 3 days later, then a week later.
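Anki does this scheduling for you, but the idea is simple enough to sketch. Here is a toy version of the expanding schedule described above (illustrative only, not Anki’s actual scheduling algorithm):

```python
from datetime import date, timedelta

# Expanding review gaps from the text: tomorrow, 3 days later, then a week later.
gaps_in_days = [1, 3, 7]

due = date.today()
for review_number, gap in enumerate(gaps_in_days, start=1):
    due += timedelta(days=gap)
    print(f"Review #{review_number} due on {due}")
```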

Fortunately, I have created an Anki deck about AI/ML. DM me if you need a copy.


Further Research

Cramming vs. Spaced Repetition (Cepeda et al., 2006)

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354–380.

A massive meta-analysis by Cepeda and colleagues found that “spacing” out sessions resulted in a significant increase in test scores—sometimes by as much as 10% to 30%—compared to massed practice (cramming) for the same amount of total study time.


Rote vs. Feynman (Chi et al., 1989 & Fiorella, 2013)

Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13(2), 145–182.

Fiorella, L., & Mayer, R. E. (2013). The relative benefits of learning by teaching and teaching expectancy. Contemporary Educational Psychology, 38(4), 281–288.

The Feynman Technique is effectively Self-Explanation. A classic study by Chi et al. (1989) found that “high-explainers” (students who explained the why of each step to themselves) solved twice as many problems as those who relied on rote memorization.

The “Desirable Difficulty” Principle: Psychologist Robert Bjork found that when your brain has to work hard to retrieve a memory (that “tip of the tongue” feeling), it is effectively pressing a biological “save” button. The hardness is not a sign of failure; it’s the sound of neural connections getting stronger.


Final Thoughts

We’re all in this together! The goal isn’t to be a perfect coder overnight—it’s just to make our learning a little more intentional. Next time it feels “hard,” take a deep breath and remember: that’s just your brain getting stronger.

