<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
      <title>Rajnikant Dhar Dwivedi - Executive Education</title>
      <link>https://rajnikantdhardwivedi.in/</link>
      <description>Hey, I&#x27;m Rajnikant Dhar Dwivedi from India. I love programming, Cyber Security, taking photos and learning new things!</description>
      <generator>Zola</generator>
      <language>en</language>
      <atom:link href="https://rajnikantdhardwivedi.in/tags/executive-education/rss.xml" rel="self" type="application/rss+xml"/>
      <lastBuildDate>Wed, 13 May 2026 00:00:00 +0000</lastBuildDate>
      <item>
          <title>My MIT Mathematics &amp; Modeling for Modern AI Journey (Dec 2025 – May 2026)</title>
          <pubDate>Wed, 13 May 2026 00:00:00 +0000</pubDate>
          <author>Rajnikant Dhar Dwivedi</author>
          <link>https://rajnikantdhardwivedi.in/blog/the-mathematics-behind-modern-ai/</link>
          <guid>https://rajnikantdhardwivedi.in/blog/the-mathematics-behind-modern-ai/</guid>
<description xml:base="https://rajnikantdhardwivedi.in/blog/the-mathematics-behind-modern-ai/">&lt;p&gt;In this post, I reflect on completing MIT Professional Education’s 64-hour certificate course &lt;strong&gt;“Mathematics and Modeling for Modern AI”&lt;&#x2F;strong&gt; (Dec 16, 2025 – May 13, 2026). I’ll explain why a 64-hour program spanned nearly five months, how I balanced it with running my startup, and the time-management tactics I used. You’ll get concrete examples of what I learned, from linear algebra and calculus refreshers to probability and optimization; how I applied these ideas to my work; and how I overcame challenges along the way. Finally, I’ll share key takeaways and next steps.&lt;&#x2F;p&gt;
&lt;p&gt;This narrative is part personal story, part technical deep dive, aimed at anyone curious about learning AI’s math fundamentals while juggling real-world projects.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;https:&#x2F;&#x2F;rajnikantdhardwivedi.in&#x2F;blog&#x2F;the-mathematics-behind-modern-ai&#x2F;mit-certificate.png&quot; alt=&quot;MIT Mathematics &amp;amp; Modeling for Modern AI Certificate – Rajnikant Dhar Dwivedi&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;course-background-and-structure&quot;&gt;Course Background and Structure&lt;&#x2F;h2&gt;
&lt;p&gt;MIT’s course page frames the core idea clearly: modern AI is evolving rapidly, but the foundational concepts behind today’s systems are stable, elegant, and intuitive. Understanding the mathematical and modeling building blocks that power AI is essential for anyone working seriously in this space. In other words, this program was designed to give engineers and managers an &lt;strong&gt;under-the-hood grasp&lt;&#x2F;strong&gt; of AI foundations.&lt;&#x2F;p&gt;
&lt;p&gt;Instructor Justin Solomon (CSAIL) puts it plainly: the course helps us look beneath the surface of AI systems by examining the statistics, architectures, mathematical models, and data that make them tick.&lt;&#x2F;p&gt;
&lt;p&gt;The official outline covers five thematic modules:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Linear algebra&lt;&#x2F;li&gt;
&lt;li&gt;Calculus and optimization&lt;&#x2F;li&gt;
&lt;li&gt;Probability and generative modeling&lt;&#x2F;li&gt;
&lt;li&gt;Advanced modeling techniques&lt;&#x2F;li&gt;
&lt;li&gt;Evaluating AI models&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;Each module blends math concepts (like matrices, derivatives, probability distributions) with AI applications (like neural network layers, backpropagation, diffusion models). Hands-on activities such as visualizing neural network layers or training a simple diffusion model cement the ideas.&lt;&#x2F;p&gt;
&lt;p&gt;Although the certificate says &lt;em&gt;“64 Hours of Effort”&lt;&#x2F;em&gt;, the program wasn’t a bootcamp packed into five consecutive days. Instead, it was an &lt;strong&gt;online, self-paced&lt;&#x2F;strong&gt; Professional Education program spread over the winter and spring semesters. The course officially ran from Dec 16, 2025 to May 13, 2026 — about 21 weeks. During that time, the 64 hours were broken into video lectures, readings, problem sets, and a capstone project.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;why-it-spanned-months-despite-64-hours&quot;&gt;Why It Spanned Months Despite 64 Hours&lt;&#x2F;h2&gt;
&lt;p&gt;The course’s schedule was flexible: new lessons and assignments appeared weekly, not all at once. This meant content trickled out over months. More importantly, balancing this with my startup work naturally extended the timeline.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;em&gt;64 hours&lt;&#x2F;em&gt; sounds like a busy couple of weekends, but that number only counts instructor-led content. It doesn’t include all the self-study, assignments, or context-switching when your phone rings with a work issue. In reality, I ended up dedicating roughly &lt;strong&gt;140–150 hours&lt;&#x2F;strong&gt; total between December and May. Here’s how that breaks down:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Activity&lt;&#x2F;th&gt;&lt;th style=&quot;text-align: center&quot;&gt;Official Course Hours&lt;&#x2F;th&gt;&lt;th style=&quot;text-align: center&quot;&gt;Estimated Hours I Spent&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Video Lectures&#x2F;Modules&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;25 h&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;50 h&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Problem Sets &amp;amp; Exercises&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;15 h&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;30 h&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Final Project&#x2F;Capstone&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;10 h&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;20 h&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Self-study&#x2F;Review&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;10 h&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;15 h&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Context-switching &amp;amp; Extra&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;4 h&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;25 h&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;strong&gt;Total&lt;&#x2F;strong&gt;&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;&lt;strong&gt;64 h&lt;&#x2F;strong&gt;&lt;&#x2F;td&gt;&lt;td style=&quot;text-align: center&quot;&gt;&lt;strong&gt;~140 h&lt;&#x2F;strong&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;&lt;em&gt;Breakdown of official course hours vs. estimated actual time spent (figures based on personal tracking).&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
&lt;p&gt;In short, the academic workload and the reality of a busy engineering schedule stretched those 64 hours into months of sustained effort.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;balancing-the-course-with-startup-duties&quot;&gt;Balancing the Course with Startup Duties&lt;&#x2F;h2&gt;
&lt;p&gt;I’m also the co-founder of a tech startup, so juggling this coursework meant careful planning. Here’s how I typically structured my week:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Weekly Planning:&lt;&#x2F;strong&gt; Every Sunday evening, I set aside 30 minutes to plan the coming week’s study blocks and startup tasks. I listed assignments due, project milestones, and meetings, then blocked “study slots” into my calendar (often early mornings or late evenings) around existing work commitments.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Daily Routine:&lt;&#x2F;strong&gt; I tried to spend 1–2 focused hours on coursework each weekday evening, plus longer sessions on weekends. I treated these study blocks like urgent meetings: no interruptions, timer on, phone on silent. This included watching lecture videos, working problem sets, or reading ahead in the materials.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Context-Switch Strategies:&lt;&#x2F;strong&gt; Whenever possible, I combined startup and course topics. For example, if I was reading a course section on optimization, I would apply it directly to optimize a model in our product. This dual-use of time prevented learning from feeling like a separate task.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Review and Buffer:&lt;&#x2F;strong&gt; I always built a “buffer” slot into each week, either Monday morning or Friday afternoon, used for catching up on missed work or reviewing concepts I found tough.&lt;&#x2F;p&gt;
&lt;p&gt;This routine kept me on track: plan → prioritize → execute → review → repeat.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;core-topics-learned-with-examples&quot;&gt;Core Topics Learned (with Examples)&lt;&#x2F;h2&gt;
&lt;p&gt;This section covers &lt;em&gt;what&lt;&#x2F;em&gt; I learned and &lt;em&gt;how&lt;&#x2F;em&gt; I used it. The MIT program was heavy on fundamentals.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;linear-algebra-parametric-modeling&quot;&gt;Linear Algebra &amp;amp; Parametric Modeling&lt;&#x2F;h3&gt;
&lt;p&gt;We kicked off with vectors, matrices, and transformations. For example, I refreshed how data can be represented in high-dimensional vector spaces and how neural network layers are essentially matrix operations. I even visualized hidden-layer activations of a pre-trained image model during a hands-on module.&lt;&#x2F;p&gt;
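&lt;p&gt;As an illustrative sketch (my own toy example, not course material), here is the “a layer is a matrix operation” idea in NumPy: a dense layer is just an affine map of its input vector.&lt;&#x2F;p&gt;

```python
import numpy as np

def dense_layer(x, W, b):
    """A dense neural-network layer is an affine map: W @ x + b."""
    return W @ x + b

# Toy example: project a 4-dimensional input down to 2 dimensions.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4))   # weight matrix (output_dim x input_dim)
b = np.zeros(2)                   # bias vector
x = np.ones(4)                    # input vector

h = dense_layer(x, W, b)
print(h.shape)  # (2,)
```

&lt;p&gt;Stacking such maps with nonlinearities between them is, at its core, what a feed-forward network is.&lt;&#x2F;p&gt;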
&lt;p&gt;In my startup’s AI project, this helped when I was engineering feature embeddings: I now have a much more intuitive sense of why an embedding layer helps compress and represent input data.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;calculus-optimization&quot;&gt;Calculus &amp;amp; Optimization&lt;&#x2F;h3&gt;
&lt;p&gt;The course dove into gradients, backpropagation, and optimizers (SGD, Adam). I re-derived the chain rule and saw how loss functions behave. One night I was debugging slow learning in our model, and remembering the “saddle point” concept from class turned out to be crucial: I adjusted the learning-rate schedule, a decision grounded in calculus. Understanding &lt;em&gt;why&lt;&#x2F;em&gt; the Adam optimizer was converging faster than plain SGD saved me hours of trial and error in hyperparameter tuning.&lt;&#x2F;p&gt;
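&lt;p&gt;To make the learning-rate idea concrete, here is a minimal sketch of gradient descent with an exponentially decaying step size. The objective function and the constants are my own illustrative choices, not the course’s materials:&lt;&#x2F;p&gt;

```python
import numpy as np

def gradient_descent(grad, x0, lr0=0.5, decay=0.9, steps=50):
    """Plain gradient descent with a simple exponential learning-rate decay."""
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        lr = lr0 * decay ** t      # step size shrinks over time
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2; its gradient is 2 * (x - 3).
grad = lambda x: 2.0 * (x - 3.0)
x_min = gradient_descent(grad, x0=0.0)
print(x_min)  # converges to 3.0
```

&lt;p&gt;Schedules like this are one knob for escaping flat regions and saddle points without overshooting once the loss surface steepens.&lt;&#x2F;p&gt;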
&lt;h3 id=&quot;probability-generative-models&quot;&gt;Probability &amp;amp; Generative Models&lt;&#x2F;h3&gt;
&lt;p&gt;We covered probability distributions, likelihoods, and noise processes. A highlight was training a simple diffusion model (a video activity in the course). I learned why adding noise and denoising is mathematically sensible. This paid off in our product’s data augmentation pipeline: I introduced controlled Gaussian noise in our training data, inspired by diffusion models, to make our model more robust to data jitter.&lt;&#x2F;p&gt;
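&lt;p&gt;The controlled-noise augmentation I describe looks roughly like this sketch; the sigma value and batch shape here are arbitrary illustrations, not our production settings:&lt;&#x2F;p&gt;

```python
import numpy as np

def augment_with_noise(batch, sigma=0.05, seed=None):
    """Return a copy of the batch with controlled Gaussian noise added.

    A small sigma perturbs inputs without destroying the signal, which is
    the same add-noise idea that diffusion models take to the extreme.
    """
    rng = np.random.default_rng(seed)
    return batch + rng.normal(0.0, sigma, size=batch.shape)

clean = np.ones((8, 4))            # a toy batch of 8 samples, 4 features each
noisy = augment_with_noise(clean, sigma=0.05, seed=42)
print(noisy.shape)  # (8, 4)
```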
&lt;h3 id=&quot;modeling-for-ai-architectures-and-loss&quot;&gt;Modeling for AI (Architectures and Loss)&lt;&#x2F;h3&gt;
&lt;p&gt;Later modules taught how to “turn messy problems into math.” We discussed regularization, inductive bias, and architectures like CNNs and attention. For instance, I better understood why adding an L2 penalty helps prevent overfitting: it literally constrains the optimization landscape. In my startup’s model code, I added a new regularization term (a kernel regularizer in a neural net) directly because of that lesson.&lt;&#x2F;p&gt;
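&lt;p&gt;In miniature, an L2 penalty is just an extra term added to the loss; the lambda value and toy weights below are illustrative placeholders:&lt;&#x2F;p&gt;

```python
import numpy as np

def loss_with_l2(predictions, targets, weights, lam=0.01):
    """Mean-squared error plus an L2 penalty on the weights.

    The lam * sum(weights squared) term penalizes large weights,
    which constrains the optimization landscape and discourages overfitting.
    """
    mse = np.mean((predictions - targets) ** 2)
    l2 = lam * np.sum(weights ** 2)
    return mse + l2

# Two models that fit the data equally well; the one with larger
# weights pays a higher penalty.
preds, targets = np.array([1.0, 2.0]), np.array([1.0, 2.0])
small_w, big_w = np.array([0.1]), np.array([10.0])
print(loss_with_l2(preds, targets, big_w) > loss_with_l2(preds, targets, small_w))  # True
```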
&lt;h3 id=&quot;evaluating-ai-models&quot;&gt;Evaluating AI Models&lt;&#x2F;h3&gt;
&lt;p&gt;The final unit was about generalization and reliability. We looked at bias-variance tradeoffs and adversarial examples. One insight stuck with me: test a model on out-of-distribution samples. After learning this, I immediately created a test suite with some outlier data. We found a flaw in one of our classification models (it failed on slightly shifted inputs) and fixed it by augmenting the training set.&lt;&#x2F;p&gt;
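&lt;p&gt;A miniature version of that shifted-input test could look like this; the threshold classifier, the data, and the shift value are all invented for illustration:&lt;&#x2F;p&gt;

```python
import numpy as np

def accuracy_under_shift(predict, X, y, shift=0.5):
    """Compare accuracy on clean inputs vs. the same inputs slightly shifted.

    A large gap between the two numbers flags a model that fails on
    out-of-distribution samples.
    """
    clean_acc = np.mean(predict(X) == y)
    shifted_acc = np.mean(predict(X + shift) == y)
    return clean_acc, shifted_acc

# Toy classifier: thresholds the first feature at 1.0 -- brittle by design.
predict = lambda X: (X[:, 0] > 1.0).astype(int)
X = np.array([[0.8], [1.2], [0.6], [1.6]])
y = np.array([0, 1, 0, 1])

clean_acc, shifted_acc = accuracy_under_shift(predict, X, y, shift=0.5)
print(clean_acc, shifted_acc)  # 1.0 on clean data, only 0.5 after the shift
```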
&lt;p&gt;In each of these areas, MIT’s focus on &lt;em&gt;translating real-world problems into the abstract language of AI&lt;&#x2F;em&gt; came through. The case studies and activities meant I wasn’t just watching videos; I was doing mini-projects. This hands-on element consistently bridged theory and practice.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;applying-concepts-to-my-startup-work&quot;&gt;Applying Concepts to My Startup Work&lt;&#x2F;h2&gt;
&lt;p&gt;Since I run a small AI-focused startup, I was eager to tie every lesson back to our products. Here are a few snapshots:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Enhanced Feature Engineering:&lt;&#x2F;strong&gt; Learning about latent features and dimensionality helped me when refining our data pipelines. We realized that two of our features were nearly redundant (linearly dependent), so we dropped one, simplifying the model, saving computation, and improving performance.&lt;&#x2F;p&gt;
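&lt;p&gt;A quick way to spot near-duplicate (linearly dependent) columns is a pairwise correlation check; this is a generic sketch, with the 0.99 threshold and synthetic data as my own choices rather than what our pipeline uses:&lt;&#x2F;p&gt;

```python
import numpy as np

def find_redundant_pairs(X, threshold=0.99):
    """Flag pairs of feature columns whose absolute correlation exceeds threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = corr.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n) if corr[i, j] > threshold]

rng = np.random.default_rng(1)
a = rng.standard_normal(100)
b = rng.standard_normal(100)
X = np.column_stack([a, 2.0 * a + 0.1, b])   # column 1 is a rescaled copy of column 0

print(find_redundant_pairs(X))  # [(0, 1)]
```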
&lt;p&gt;&lt;strong&gt;Better Hyperparameter Tuning:&lt;&#x2F;strong&gt; The calculus&#x2F;optimization module taught me about gradient descent and learning curves. I applied that by plotting and analyzing our model’s learning curves after every training run. When I saw a plateau, I remembered the course discussion on overfitting vs. underfitting, which guided me to adjust our model complexity appropriately.&lt;&#x2F;p&gt;
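&lt;p&gt;The overfitting-vs-underfitting read of a learning curve can be boiled down to a rule of thumb like the following; the tolerance values are arbitrary placeholders, and real diagnosis deserves a look at the full curves:&lt;&#x2F;p&gt;

```python
def diagnose_from_curves(train_losses, val_losses, gap_tol=0.1, floor=0.5):
    """Rough read of a learning curve: a high final training loss suggests
    underfitting; a large train-to-validation gap suggests overfitting."""
    final_train = train_losses[-1]
    final_val = val_losses[-1]
    if final_train > floor:
        return "underfitting: consider a bigger model or longer training"
    if final_val - final_train > gap_tol:
        return "overfitting: consider regularization or more data"
    return "looks healthy"

# Training loss keeps dropping while validation loss plateaus: classic overfit.
print(diagnose_from_curves([0.9, 0.4, 0.1], [0.95, 0.6, 0.5]))
```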
&lt;p&gt;&lt;strong&gt;Smarter Model Choices:&lt;&#x2F;strong&gt; Studying various architectures and inductive biases made me more confident choosing model types. For a graph problem at work, I realized a Graph Neural Network (GNN) could exploit structure better than a simple MLP, an idea reinforced by the course’s emphasis on specialized architectures. After a quick prototype, the GNN outperformed our baseline.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Data Strategy and Risk Awareness:&lt;&#x2F;strong&gt; The section on uncertainty and adversarial examples pushed me to think beyond accuracy. I set up a monitoring step to catch data drift in our live system and incorporated a small “sanity-check” model to flag unusual inputs.&lt;&#x2F;p&gt;
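&lt;p&gt;One cheap form of drift monitoring is comparing live feature means against a reference window, measured in units of the reference standard deviation. This is a generic sketch with an illustrative 2.0-sigma threshold, not our actual monitoring code:&lt;&#x2F;p&gt;

```python
import numpy as np

def drift_score(reference, live):
    """Per-feature drift score: shift in means, in reference std-dev units."""
    ref_mean = reference.mean(axis=0)
    ref_std = reference.std(axis=0) + 1e-9   # avoid divide-by-zero
    return np.abs(live.mean(axis=0) - ref_mean) / ref_std

rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, size=(1000, 3))   # historical training window
live = reference[:200] + np.array([0.0, 0.0, 3.0]) # feature 2 has drifted

flags = drift_score(reference, live) > 2.0
print(flags)  # only the third feature is flagged
```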
&lt;p&gt;In short, every module gave me at least one &lt;em&gt;“aha!”&lt;&#x2F;em&gt; application in our day-to-day. The rigorous foundations meant I stopped treating libraries as black boxes. Instead, I could explain what’s happening under the hood and even adjust algorithms if needed. It felt like a force multiplier for the team’s technical decisions.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;challenges-and-how-i-solved-them&quot;&gt;Challenges and How I Solved Them&lt;&#x2F;h2&gt;
&lt;p&gt;No journey is without bumps.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Math Refresher, Slow Start:&lt;&#x2F;strong&gt; It had been a while since I’d done advanced calculus and linear algebra. The first week felt like drinking from a firehose. My solution: I blocked extra time for reviewing prerequisite math on Khan Academy and in textbooks. I also joined the course Slack channel (peer discussions with fellow learners) to get quick clarifications. This extra prep paid off by the second module, when I could focus on AI specifics rather than struggling with basic derivatives.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Balancing Crises:&lt;&#x2F;strong&gt; As a startup founder, urgent issues cropped up — servers crashed, client demos, and more. These occasionally swallowed entire weekends. To mitigate, I identified “trimming tasks” in the course: for weeks when work demands peaked, I focused on core videos and skipped non-critical readings, planning to catch up later. The flexible schedule helped: I never felt behind for long, because I had buffer weeks built in.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Self-Discipline:&lt;&#x2F;strong&gt; Studying at home on my own schedule required real self-discipline. Some evenings, after a full workday, I had zero study energy. I tackled this by treating study time like a standing meeting: I blocked it in Google Calendar and even added an icon ⚡️ to remind me. I also occasionally had a study buddy, one of my co-founders, sit with me during coding exercises to keep me accountable.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Technical Hurdles:&lt;&#x2F;strong&gt; Some course assignments required code (e.g. training a small neural network). I hit library version conflicts and spent hours debugging. My strategy was to dedicate a weekend session exclusively to coding (no startup work), and to document each environment setup step so the next time would be faster. Once the setup was stable, subsequent exercises went much smoother.&lt;&#x2F;p&gt;
&lt;p&gt;Each challenge taught me something: planning buffers, the power of accountability, and the benefit of lean learning. By May, I had a toolbox of strategies for learning while doing.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;key-takeaways&quot;&gt;Key Takeaways&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;strong&gt;Fundamentals Over Flash:&lt;&#x2F;strong&gt; Understanding the under-the-hood math and data is a strategic advantage. It’s tempting to chase flashy demos, but solid foundations build real competence. This course reaffirmed that conviction.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Continuous Learning Pays Off:&lt;&#x2F;strong&gt; Breaking the course into bits and applying each module immediately to work made the months fly by. By the end, I felt I not only understood each topic, but actually &lt;em&gt;used&lt;&#x2F;em&gt; it. That practical link — math → real code — is invaluable.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Planning is Everything:&lt;&#x2F;strong&gt; Without a plan, the course would have lingered unfinished. My weekly routine (plan, do, review) kept me accountable. A simple cycle can carry you through 100+ hours of study alongside a demanding job.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Math is Intuitive:&lt;&#x2F;strong&gt; A personal insight: concepts like gradients or eigenvectors aren’t mystical when you learn them hands-on. Once they “clicked,” I found myself spotting them in problems everywhere. That intuition is now a permanent part of my toolkit.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Collaboration Helps:&lt;&#x2F;strong&gt; Discussing tricky points with peers or mentors made a big difference. For example, a coworker helped me grasp a proof of the bias-variance theorem. I plan to mentor others in turn, since teaching cements knowledge.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;h2 id=&quot;next-steps&quot;&gt;Next Steps&lt;&#x2F;h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Apply and Share:&lt;&#x2F;strong&gt; I’ll continue integrating these concepts into our product roadmap, implementing one advanced modeling idea (like attention mechanisms) we only skimmed in the course. I’ll also present a summary of my learnings to my team so the knowledge spreads.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stay Sharp:&lt;&#x2F;strong&gt; I plan to pick up the next logical topic: deepening knowledge in generative models (e.g. taking an MIT Deep Learning course) or exploring numerical optimization libraries more. I’ll also revisit linear algebra regularly to keep that intuition fresh.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Public Write-up:&lt;&#x2F;strong&gt; I’m writing this blog as part of that process: articulating my journey helps crystallize it. I’ll tag it appropriately so others interested in AI education can find it.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Connect and Network:&lt;&#x2F;strong&gt; I’ll follow up on connections made during the course (Slack, LinkedIn). Discussing how others applied the same content in different industries will broaden my perspective.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Practical Experimentation:&lt;&#x2F;strong&gt; I’ll turn one course project, such as the simple diffusion model we built, into a published demo or open-source tool. That will be a fun way to cement learning and give back to the community.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;Each step ensures this certificate isn’t a &lt;em&gt;one-time checkbox&lt;&#x2F;em&gt; but a stepping stone.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;p&gt;&lt;em&gt;– &lt;strong&gt;Rajnikant Dhar Dwivedi&lt;&#x2F;strong&gt;&lt;&#x2F;em&gt;
&lt;em&gt;Co-founder &amp;amp; Lead Developer | MIT Professional Education Certificate, May 2026&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
</description>
      </item>
    </channel>
</rss>
