<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Home on BonnyCode</title><link>https://www.bonnycode.com/</link><description>Recent content in Home on BonnyCode</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="https://www.bonnycode.com/index.xml" rel="self" type="application/rss+xml"/><item><title>The best part of waking up</title><link>https://www.bonnycode.com/posts/the-best-part-of-waking-up/</link><pubDate>Tue, 03 Mar 2026 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/the-best-part-of-waking-up/</guid><description>&lt;p>When I was young, old enough to drink coffee but not really an adult, I had a drip coffeemaker called Mr. Coffee. I scooped pre-ground coffee from a big red can while I’d sing &amp;ldquo;the best part of waking up&amp;hellip; is Folgers in your cup&amp;rdquo;. I later graduated to a French press, then pourovers, and then a Nespresso &amp;ndash; which honestly felt like a step backwards from the pourovers, but it was faster and I was busy.&lt;/p>
&lt;p>Now my older, more sophisticated self uses a fancy-dancy espresso machine. I have a single-dose burr grinder and I buy my beans from a guy named Hunter who roasts them in his coffee shop down the street from me. I still hum &amp;ldquo;the best part of waking up&amp;hellip;&amp;rdquo; every morning when I make my espresso because it makes me happy and my boutique beans tolerate it.&lt;/p>
&lt;p>When I first got my machine I went on a mission to &lt;em>git gud&lt;/em> at making espresso. When I visit a new city, I always look up whatever the top-rated third-wave coffee shops are and order an espresso with a small cup of sparkling water. I smell it, pretend the tasting notes mean something, and savor it; cultivating my palate I call it.&lt;/p>
&lt;p>For over a year, I tried different settings to get the best flavors out of my espresso. My machine is completely programmable, so you can do things like bloom your espresso shot before you pull it, similar to what you do with pourovers. I bought a tiny tool with needles to rake the grounds like it was a zen garden. I have a little metal screen that I put on top of the grounds after tamping to prevent channeling.&lt;/p>
&lt;p>I love making espresso because it is so fiddly. I’ve been a min-maxer my entire life, since I first played red box Dungeons &amp;amp; Dragons in the 80s. Maybe espresso, min-maxing, optimization, data science all spring from the same causal origin. A desire to create meaning where there is none &amp;ndash; layering complexity until the nothingness is hidden from view.&lt;/p>
&lt;p>Just getting the grind settings perfect took me months of trial and error; too fine and it tastes bitter, too coarse and it’s too sour. By the time I finished dialing in a bag to get something drinkable I was on to a new bag and the adventure restarted. I created a log of each shot I pulled; the beans I used; the grind setting; the pressure profile; the resulting outputs (primarily flow rate); and, finally, my tasting notes. My tasting notes could be biased, so I enlisted my children to conduct a proper experiment. I would pull multiple shots with different settings, have my kids label one A and another B, and then write my tasting notes without knowing which shot was which. I got pretty wired in my pursuit of statistical significance.&lt;/p>
&lt;p>Espresso is serious business to get right. When I finally perfected my espresso I pulled a shot for a friend we’ll call Bill (his real name starts with a J; he knows who he is). Bill said &amp;ldquo;wow, this is a bit too sour for me. Do you mind making it into an Americano instead?&amp;rdquo; Sure! But the real kicker&amp;hellip; he added that he preferred the Nespresso and asked if I could bring it back out. Bill&amp;rsquo;s no longer invited over.&lt;/p>
&lt;p>The bright side of betrayal is when it leads to reflection. I spent a career working with data, optimization, and improving decision-making. Fiddling around with my machine is a natural extension of those instincts. Corporate Lucas says the Nespresso is the clear winner. It’s a good enough product (better if you listen to Bill), it’s more consistent, it pulls shots faster, it&amp;rsquo;s cheaper, and it requires thirty seconds of training. I imagine a kaizen consultant emerging from the shadows and yelling muda, muda! Are they wrong? Nespresso is measurably better on every axis; a warning about the perils of letting engineers over-optimize.&lt;/p>
&lt;p>I was a more joyful min-maxer when I was younger. It was a puzzle only I could solve. I’d pore over the Dungeons &amp;amp; Dragons rulebooks and discover the ideal combinations. Now, we have tier lists. Endlessly accessible information has outsourced what it means to be the best from personal discovery to looking it up. I would spend hours as a kid playing and replaying the same games to try to get a little better. Now, I look up a guide online, do it the decreed best way, get bored half-way through, and give up. Optimization has changed from endless exploration to claustrophobic rails.&lt;/p>
&lt;p>I often felt alone as an optimizer; now it feels like society has caught up and surpassed me. Past me often felt I had to optimize for the easy-to-measure and easy-to-justify; in other words, the short-term and trivial. And when I prioritized the long term and what mattered but could not be measured, I felt subversive. The worst part is, this pressure was always more internal than external; I became trapped by my own standards of what felt defensible. How can you optimize what you can’t measure?&lt;/p>
&lt;p>Yet, I still use my espresso machine. I no longer rigorously collect data. I don&amp;rsquo;t regret the years of logging, though; I learned what I needed to learn and built intuition. I can dial in a new bag of beans to my liking within a single shot these days; partly because I’ve gotten better, and partly because I’m less persnickety. I still hum &amp;ldquo;the best part of waking up&amp;rdquo; when I make my espresso. Sometimes, and only sometimes, my shots are pretty darn good. Bill would probably disagree; I don&amp;rsquo;t care, I&amp;rsquo;m making it for myself. I even appreciate the inconsistency; too much sameness gets boring. More importantly, it gives me a different relationship to my coffee. It romanticizes it. I like raking my freshly ground beans; it feels zen. It helps me feel present. The ritual has all these little externalities that are hard to measure but add up to joy.&lt;/p>
&lt;p>When I make my coffee it is a tiny rebellion that only I know about. The transcendence of craft is something I simply don&amp;rsquo;t have the energy to justify anymore. I no longer want to create something measurably good but immeasurably soulless.&lt;/p></description></item><item><title>Where feedback goes to die</title><link>https://www.bonnycode.com/posts/where-feedback-goes-to-die/</link><pubDate>Tue, 20 Jan 2026 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/where-feedback-goes-to-die/</guid><description>&lt;p>I have a cabinet above my espresso maker where unused gifts go to sit. For each gift I said thank you, and it felt disrespectful to later throw them away, so they sit there for years. Acknowledged yet untouched; quietly accumulating.&lt;/p>
&lt;p>The cabinet is what we used to jokingly call write-only storage. A physical manifestation of gratitude and obligation, but really just a way of postponing honesty. There are ideas, observations, and judgments we receive in much the same way: politely accepted, carefully stored, and just as carefully kept from interfering with how we live.&lt;/p>
&lt;p>Feedback often ends up there. Thank the person offering it, assume good intent, reassure ourselves that we&amp;rsquo;ve heard it. And then place it somewhere out of sight, where it can&amp;rsquo;t trouble the assumptions we were already acting on.&lt;/p>
&lt;p>Feedback matters because it is one of the few ways we ever learn that we are wrong. Most of us can go on for years acting on assumptions that no longer fit the world. Especially when those assumptions make us feel competent, justified, or familiar with ourselves. Without something that pushes back, error doesn&amp;rsquo;t correct itself. It simply settles in and becomes the often invisible background of our decisions.&lt;/p>
&lt;h2 id="knowing-when-were-wrong">Knowing when we&amp;rsquo;re wrong&lt;/h2>
&lt;p>Some ideas are structured so that nothing can ever count against them. &amp;ldquo;I&amp;rsquo;m just bad at math&amp;rdquo;, &amp;ldquo;people are always out to get me.&amp;rdquo; Self-fulfilling prophecies that stop functioning as explanations and instead are just comforting stories we tell ourselves.&lt;/p>
&lt;p>I&amp;rsquo;ve found that one way to guard against this is to be clear, before acting, about what I expect to happen if I&amp;rsquo;m right. Without that expectation clearly set in advance, it is too easy to reinterpret any outcome as a success after the fact.&lt;/p>
&lt;p>When I design a course, for example, I expect students to struggle at certain points and also have an idea of when that should resolve for most students. I often introduce implementation ambiguity early on because I believe it serves a pedagogical purpose. If students express some discomfort about how they should solve a problem in the first week, that feedback doesn&amp;rsquo;t challenge my course design.&lt;/p>
&lt;p>But that only works if there’s a point where continued struggle would force me to rethink the design. I have to be specific about what confusion I expect, and what outcomes should justify it by the end. Otherwise, I’m just building up an excuse for unclear instruction as “intentional ambiguity,” and the line between pedagogy and sloppiness disappears.&lt;/p>
&lt;p>The same pattern shows up in how we talk about ourselves. &amp;ldquo;I&amp;rsquo;m just someone who speaks my mind, it&amp;rsquo;s not my fault if people can&amp;rsquo;t handle it&amp;rdquo; resembles a description, but it serves as a shield. The question worth asking isn&amp;rsquo;t whether it feels true, but whether there&amp;rsquo;s any feedback that would make us reconsider it.&lt;/p>
&lt;h2 id="feedback-isnt-a-score">Feedback isn&amp;rsquo;t a score&lt;/h2>
&lt;p>I&amp;rsquo;ve seen people fail to accept compliments as often as they fail to accept criticism. We assume people are &amp;ldquo;just being nice.&amp;rdquo; Sometimes that&amp;rsquo;s fine — if people tell you you&amp;rsquo;re beautiful all day it&amp;rsquo;s probably good not to let it go to your head. On the other hand, if your students tell you that you look like Minecraft Steve, well, welcome to my life.&lt;/p>
&lt;p>&lt;img src="https://www.bonnycode.com/minecraft-steve.png" alt="I don&amp;rsquo;t see the resemblance to Minecraft Steve.">&lt;/p>
&lt;p>But when people consistently praise our skills and we wave it off, we end up underestimating what we&amp;rsquo;re ready for. We don&amp;rsquo;t take on challenges we could handle because we discount positive signal as easily as negative. Feedback isn&amp;rsquo;t only useful for correction. It&amp;rsquo;s often the push we need to take on something bigger.&lt;/p>
&lt;p>The problem is how we frame it. If feedback is primarily evaluation (good or bad, constructive or unhelpful) then our job is to judge whether it&amp;rsquo;s accurate. Judging is a short step from dismissing. A more useful question isn&amp;rsquo;t &amp;ldquo;is it true?&amp;rdquo; or &amp;ldquo;is it fair?&amp;rdquo; but &amp;ldquo;what does it mean that someone walked away holding this belief?&amp;rdquo; Does that change anything?&lt;/p>
&lt;p>When I was a director at Amazon, long, long ago, someone on my team once told me I was scary. After they got to know me, they realized I wasn&amp;rsquo;t as scary&amp;hellip; but they were still a little scared.&lt;/p>
&lt;p>My first reaction was that this seemed ridiculous. I&amp;rsquo;m not scary; I&amp;rsquo;m kind.&lt;/p>
&lt;p>I asked what made them feel that way. They said in meetings, when people would say things, I would tear apart their arguments, ask how they knew something to be true. They worried that if they spoke, they wouldn&amp;rsquo;t be able to back up what they said. I did it in a &amp;ldquo;nice&amp;rdquo; way (calm, smiling), but that almost made it worse. Like I was an analytical machine ready to point out their every flaw.&lt;/p>
&lt;p>This sounded like good leadership. I was an Amazonian; I was &lt;em>insisting on high standards&lt;/em>, &lt;em>diving deep&lt;/em>, teaching my team to &lt;em>be right, a lot&lt;/em>. Those instincts got me to where I was. But by then I was a director with a large team, and the way you act carries more weight when you have power over people&amp;rsquo;s careers. I wasn&amp;rsquo;t encouraging rigor. I was shutting down conversation. At least when I was in the room.&lt;/p>
&lt;h2 id="who-is-worth-listening-to">Who is worth listening to?&lt;/h2>
&lt;p>When I give feedback to students or employees, they usually want more. More than I often feel I can give. They&amp;rsquo;re gracious and frequently grateful. They ask follow-up questions. The dynamic is easy. I have credibility; I&amp;rsquo;m expected to evaluate; and what I say might affect their grade or their career. Everything lines up. Most people can tell themselves they are open to feedback. Because in some situations, when delivered by some people, they are eager to listen.&lt;/p>
&lt;p>Reverse the direction and see how quickly this falls apart.&lt;/p>
&lt;p>Upward feedback, and I&amp;rsquo;m using that term broadly, is easy to dismiss. &amp;ldquo;Who are you to tell me how to do my job?&amp;rdquo; Subordinate to manager, patient to doctor, student to teacher, user to product manager. And, often, the skepticism is warranted; people without expertise tend to give bad advice.&lt;/p>
&lt;p>In &amp;ldquo;Arthur Writes a Story,&amp;rdquo; Arthur asks for feedback on a simple story about a boy finding his puppy. Each person suggests adding their favorite element from their own stories. They are not so much engaging with Arthur&amp;rsquo;s story as they are invested in their own. By the end, Arthur&amp;rsquo;s story has metamorphosed into a science fiction rock opera comedy about space-elephants. &amp;ldquo;Listen to feedback&amp;rdquo; cannot mean &amp;ldquo;do whatever people tell you.&amp;rdquo;&lt;/p>
&lt;p>&lt;img src="https://www.bonnycode.com/arthur-elephant.png" alt="Arthur&amp;rsquo;s story goes off the rails.">&lt;/p>
&lt;p>It&amp;rsquo;s a mistake to treat this insight as a license to dismiss categories of feedback entirely. The signal is rarely the literal suggestion. Arthur&amp;rsquo;s friends gave bad advice, but that doesn&amp;rsquo;t mean they couldn&amp;rsquo;t still provide useful feedback. The real skill is taking the time to understand why someone reacted the way they did, not just what they proposed as a solution. And then iterate based on what that reaction tells you.&lt;/p>
&lt;p>When we reject feedback because it&amp;rsquo;s framed poorly or comes from someone without expertise, we lose the ability to understand how the people affected by our work are actually receiving it. A patient can&amp;rsquo;t tell a doctor how to practice medicine. But a patient knows when they feel unheard, confused, or scared. That&amp;rsquo;s a valuable perspective only they can give. Ignoring it doesn&amp;rsquo;t make you more expert. It just makes you blind to how it feels to be on the receiving end.&lt;/p>
&lt;h2 id="genuine-feedback-is-earned">Genuine feedback is earned&lt;/h2>
&lt;p>People who don&amp;rsquo;t like us sometimes say things to hurt us. But, especially in professional contexts, they are often stating something true. The truth is often a better weapon than pure falsehood. I saw this pattern repeatedly as a manager and now as a teacher. Two people would not get along. They&amp;rsquo;d both point out the other person&amp;rsquo;s flaws. The criticism was often wrapped in assumed bad intentions, but underneath it called out something observably true.&lt;/p>
&lt;p>&amp;ldquo;Jack has no respect for other people. He is always late to meetings and when he does arrive derails the conversation to talk about things that don&amp;rsquo;t matter.&amp;rdquo; Is this feedback framed well? No. Does it give us some signal we can learn from, especially when combined with other details and perspectives? Absolutely.&lt;/p>
&lt;p>For every blunt response like the one above, you&amp;rsquo;d get its more oblique cousin. &amp;ldquo;I love Jack, he is so helpful. It can be disruptive sometimes though how he arrives late to our morning meeting. I know he takes the bus here, and it isn&amp;rsquo;t really his fault, but maybe we can move the meeting?&amp;rdquo;&lt;/p>
&lt;p>Then you deliver the feedback: &amp;ldquo;it is important that you arrive on time for our morning meeting.&amp;rdquo; The response is too often either &amp;ldquo;I can&amp;rsquo;t help it&amp;rdquo; or, far worse, &amp;ldquo;who said that?&amp;rdquo;&lt;/p>
&lt;p>The urge is understandable. Negative anonymous feedback makes me want to play detective about who said it. If I can discredit the source’s motives or expertise, I don’t have to engage with the substance. But that habit leads to a dark, unproductive relationship with feedback. It’s healthier to start by treating it in good faith unless you have evidence otherwise.&lt;/p>
&lt;p>The dangerous part of this reaction is that even if you are right, you are signaling how you handle feedback. If you react with anger, or defensively, or sullenly, it causes those who value their relationship with you to hesitate next time. They learn to circle around. The unvarnished truth becomes something only your detractors will offer.&lt;/p>
&lt;p>When you respond differently, with genuine curiosity and visible change, then giving you feedback becomes a relationship-strengthening act. People who care about you start to speak up.&lt;/p>
&lt;p>Honest feedback from someone who cares about you is often not a gift; it is something you&amp;rsquo;ve earned through your reactions to feedback in the past.&lt;/p></description></item><item><title>Teaching real lessons with fake worlds</title><link>https://www.bonnycode.com/posts/teaching-real-lessons-fake-worlds/</link><pubDate>Tue, 02 Dec 2025 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/teaching-real-lessons-fake-worlds/</guid><description>&lt;p>I build little worlds full of adventurers, potions, and dragons. Students run potion shops where they manage magical supply chains for demanding fighters and wizards. They run media companies providing entertainment to dragons, frog-folk, and discerning gnomes. And in the process, they become curious about how the worlds themselves work and come alive.&lt;/p>
&lt;p>I worked in tech for a long time, most of it leading technical teams. I kept noticing the same thing: new grads who struggled with problems that looked nothing like their coursework. They knew how to do the work, but only if it was carefully packaged for them. In the wild, solving a technical problem is a cycle:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Formulate:&lt;/strong> Translate an unbounded, ambiguous situation into a technical problem.&lt;/li>
&lt;li>&lt;strong>Solve:&lt;/strong> Execute the technical solution.&lt;/li>
&lt;li>&lt;strong>Interpret:&lt;/strong> Evaluate whether your solution actually solved the problem.&lt;/li>
&lt;/ol>
&lt;p>Even when toy problems attempt steps 1 and 3, it&amp;rsquo;s often thin pseudocontext. There isn&amp;rsquo;t enough noise, and it&amp;rsquo;s painfully obvious what technical thing needs to be done.&lt;/p>
&lt;p>Large language models have made this gap impossible to ignore. Step 2—doing the technical work—is now too often solvable with a prompt→copy→paste. It&amp;rsquo;s the same shift calculators brought to arithmetic: the execution is automated, but the framing of the problem is not. Simulations let me push students into Steps 1 and 3, where intellectual work continues to thrive.&lt;/p>
&lt;p>I&amp;rsquo;ve incorporated simulations into my upper-division computer science courses to teach the metacognitive and epistemic skills necessary to solve difficult problems as they actually exist.&lt;/p>
&lt;h2 id="why-simulation">Why Simulation?&lt;/h2>
&lt;p>I&amp;rsquo;ve always loved simulation-style games. Ever since playing the gold-cartridge &lt;em>Legend of Zelda&lt;/em> on the NES, I&amp;rsquo;ve been fascinated by the idea that the little characters in those worlds might have lives of their own; that they might keep moving even when the player isn&amp;rsquo;t looking.&lt;/p>
&lt;p>That notion is the seed of my classroom simulations: worlds that run on their own, where students must reason about living systems rather than static puzzles.&lt;/p>
&lt;p>My first classroom simulation-game came out of a databases course. Early on, I realized students weren&amp;rsquo;t really feeling what it meant to run a production system: the pressure, the messiness, the human factors. Software without customers is sterile. But software that serves a live world, with people depending on it, becomes something else entirely—unpredictable, alive, and worth thinking about.&lt;/p>
&lt;p>For example:&lt;/p>
&lt;ul>
&lt;li>Teaching concurrency errors: As &amp;ldquo;customers&amp;rdquo; fire off parallel cart checkouts, students&amp;rsquo; code runs into fun and unpredictable race conditions. Sorry, your inventory is out of sync. You had a lost update. The week after, when I lecture on transactions, you&amp;rsquo;re excited to see it solves your problem.&lt;/li>
&lt;li>Teaching design principles: Let students debug the chaos of a mutable, update-in-place system, then guide them toward architectural patterns that reduce that pain—immutability, append-only logs, systems designed for observability. That&amp;rsquo;s how many experienced developers come to care about those principles: not from theory, but from scars.&lt;/li>
&lt;/ul>
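&lt;p>The lost update from that first example can be shown without any database at all. Below is a minimal Python sketch (my own toy, not course code) of two checkouts that both read the inventory before either one writes:&lt;/p>

```python
# Two checkouts interleave their read-modify-write cycles on shared
# inventory with no isolation, so one decrement silently vanishes.
inventory = {"potion": 5}

def checkout_interleaved(item):
    read_a = inventory[item]       # customer A reads 5
    read_b = inventory[item]       # customer B also reads 5
    inventory[item] = read_a - 1   # A writes 4
    inventory[item] = read_b - 1   # B overwrites with 4: a lost update

checkout_interleaved("potion")
print(inventory["potion"])  # 4, not 3: one sale disappeared
```

&lt;p>Serializing the two cycles, which is what a transaction buys you, makes the missing sale reappear.&lt;/p>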
&lt;p>This matters even more when teaching data science. In my &lt;em>Knowledge Discovery&lt;/em> class, students build recommendation engines, churn detectors, and user personas. Simulations are foundational here because of the counterfactual: for a recommender, how do you know if it&amp;rsquo;s any good without knowing what would have happened if the user had been shown different content? Offline evaluation metrics often fail to capture what only reveals itself in a live test. A simulation is a way I can bring that necessary complexity into the classroom.&lt;/p>
&lt;p>Simulations also provide an ethical sandbox—an advantage over even the real world. If you deploy a recommendation engine that accidentally optimizes for outrage or suppresses a demographic, people get hurt. In my classroom, if a student&amp;rsquo;s algorithm radicalizes the gnomes or creates dystopian inequality among the frog-folk, we can pause and examine what went wrong. Consequences made visible, without the lasting damage.&lt;/p>
&lt;h2 id="how-i-build-simulations">How I Build Simulations&lt;/h2>
&lt;p>My goal when designing a simulation is for it to mimic authentic data in the ways that matter. It has to feel genuine, even when the subject matter of the simulation is fantastical. Synthetic data too often looks fake.&lt;/p>
&lt;p>There are three ways fake data tends to give itself away:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Correlation: Data should be a web of interesting, interconnected correlations of varying degrees.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Shape: Data should have the right shape for its process.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Time: Data should both change over time and operate cyclically; it&amp;rsquo;s influenced by the sun, the moon, and the social patterns of human life.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h3 id="correlation">Correlation&lt;/h3>
&lt;p>Did you know that ice cream consumption and violent crime rates are frequently correlated? I asked my students this question, and one response was, “Well obviously! Crime is hard work and you need to treat yourself afterwards.” Moving on.&lt;/p>
&lt;p>Why are they correlated though? We&amp;rsquo;ve all heard the phrase &lt;em>correlation does not imply causation.&lt;/em> When we say that, we mostly mean &lt;em>direct causation&lt;/em>. If we accept that our highly correlated world results from a web of causes—“this causes this, which causes these two things, and then those in turn cause others”—we can use that fact as the foundation for natural-looking correlations.&lt;/p>
&lt;p>I draw upon &lt;a href="https://www.jstor.org/stable/2337329">Judea Pearl&amp;rsquo;s causal directed acyclic graphs (DAGs)&lt;/a> to make this happen. I start by drawing out a causal graph, giving each effect multiple causes, often layers and layers deep.&lt;/p>
&lt;p>For example, in the interaction below, we can see how the ice-cream/violent crime correlation could arise via confounding, as illustrated using a DAG. In this case, both ice-cream consumption and violent crime depend on season (the confounder). The scatter plots show how ice-cream consumption and violent crime become correlated as a result, even without any direct causal link.&lt;/p>
&lt;div id="dagWeights" class="dag-weights">&lt;/div>
&lt;div class="sim-controls">
&lt;button id="dagStep">Step&lt;/button>
&lt;button id="dagRun">Run&lt;/button>
&lt;button id="dagReset" class="sim-reset-btn">Reset&lt;/button>
&lt;/div>
&lt;div id="dagCorrelation" class="dag-correlation">&lt;/div>
&lt;div class="berkson-dag">
&lt;canvas id="dagDAG" class="berkson-dag-canvas">&lt;/canvas>
&lt;/div>
&lt;div class="berkson-charts">
&lt;div class="berkson-chart">
&lt;canvas id="dagScatter" class="berkson-scatter-canvas">&lt;/canvas>
&lt;/div>
&lt;/div>
&lt;script type="module">
import { initDAG } from "/js/teaching-visuals.js";
initDAG({
dagCanvasId: "dagDAG",
scatterCanvasId: "dagScatter",
structure: "confounded",
correlationId: "dagCorrelation",
weightsId: "dagWeights",
stepId: "dagStep",
runId: "dagRun",
resetId: "dagReset"
});
&lt;/script>
&lt;p>This is unrealistically transparent, though, if we actually let students see all of this. It is like &lt;a href="https://en.wikipedia.org/wiki/Blind_men_and_an_elephant">the blind men and the elephant&lt;/a> from the &lt;em>Tittha Sutta&lt;/em>. One blind man feels the trunk and says “It&amp;rsquo;s a snake!” Another feels a leg and says “It&amp;rsquo;s a tree!” Yet another feels the tusk and says “No, you fools, it&amp;rsquo;s a giant spear!” Each has only a partial view of the underlying truth.&lt;/p>
&lt;p>I build my “elephant” by creating a large causal DAG that naturally produces realistic correlations, but I only expose a small selection of mostly terminal nodes, and often in opaque and subtle ways.&lt;/p>
&lt;p>It isn&amp;rsquo;t just about the strength of and reason for a correlation; we also shouldn&amp;rsquo;t limit ourselves to linear correlations. Non-linear correlations happen for various reasons, but a common one is diminishing returns. In most systems, having a little bit of something is a big deal, and every unit after that matters a little less. In a messaging app, the first friend who texts you makes the app come alive. Each additional friend adds some pull, but your hundredth friend matters a lot less than your first or your tenth—you only have so much attention to give. Economists call this diminishing marginal utility: each new unit yields a smaller gain than the last. The same curve shows up in studying/exam results, income/happiness, marketing spend/impact, and many more such cases. It&amp;rsquo;s what gives a lot of empirical correlations a characteristic bend when viewed as a scatterplot.&lt;/p>
&lt;div class="diminishing-param">
&lt;label>Strength of Diminishing Returns: &lt;span id="diminishingExponentValue">Medium&lt;/span>&lt;/label>
&lt;input type="range" id="diminishingExponent" min="0.2" max="1.0" step="0.05" value="0.5" class="diminishing-slider">
&lt;div style="font-size: 0.75em; color: var(--fg3); margin-top: 2px;">
&lt;span style="float: left;">Weak&lt;/span>
&lt;span style="float: right;">Strong&lt;/span>
&lt;div style="clear: both;">&lt;/div>
&lt;/div>
&lt;/div>
&lt;div class="sim-controls">
&lt;button id="diminishingStep">Step&lt;/button>
&lt;button id="diminishingRun">Run&lt;/button>
&lt;button id="diminishingReset" class="sim-reset-btn">Reset&lt;/button>
&lt;/div>
&lt;div class="diminishing-container">
&lt;canvas id="diminishingCanvas" class="diminishing-canvas">&lt;/canvas>
&lt;/div>
&lt;script type="module">
import { initDiminishingReturns } from "/js/teaching-visuals.js";
initDiminishingReturns({
canvasId: "diminishingCanvas",
stepId: "diminishingStep",
runId: "diminishingRun",
resetId: "diminishingReset",
exponentId: "diminishingExponent",
exponentValueId: "diminishingExponentValue"
});
&lt;/script>
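&lt;p>The bend is easy to reproduce. A tiny sketch, assuming a square-root engagement curve (the exponent is just one plausible choice), compares the value added by the first ten friends with the value added by friends 90 through 100:&lt;/p>

```python
# Engagement as a concave (square-root) function of friend count:
# the marginal value of each new friend shrinks as the count grows.
def engagement(friends):
    return 10.0 * friends ** 0.5

first_ten = engagement(10) - engagement(0)    # value of friends 1-10
last_ten = engagement(100) - engagement(90)   # value of friends 91-100
print(round(first_ten, 1), round(last_ten, 1))  # prints 31.6 5.1
```

&lt;p>The first ten friends are worth roughly six times the last ten under this curve, which is exactly the bend the slider above controls.&lt;/p>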
&lt;p>Lastly, and perhaps most tricky, life is littered with selection biases; what we observe is almost never perfectly representative of the overall population but is biased in one way or another. It is important for our simulated data to have similar selection biases. For example, &lt;a href="https://onlinelibrary.wiley.com/doi/pdf/10.1111/joim.12363">a hospital in Canada&lt;/a>, analyzing the bicycle accidents seen in its ER, found that wearing a helmet was correlated with having a concussion, and that helmeted riders were ~50% more likely to have a serious injury than riders without helmets. That seems wrong, doesn&amp;rsquo;t it?&lt;/p>
&lt;p>This is a specific form of selection bias called collider bias (or Berkson&amp;rsquo;s paradox). The hospital isn&amp;rsquo;t seeing all bike riders and all bike accidents; they don&amp;rsquo;t see all the cases where a bike rider is wearing a helmet and that helmet saved them from going to the ER. As a result, the helmet effectively filtered out lower-end accidents leaving only the more serious accidents for the ER.&lt;/p>
&lt;p>You find this pattern all the time when doing data analysis. You see a study on what it takes to make a successful startup, but they only look at the ones who made it, not the ones that died out. You analyze active users and forget the ones who churned. The data you have is conditioned on having survived long enough to be recorded. It&amp;rsquo;s a kind of selection echo: the world you see isn&amp;rsquo;t the world as it is, but the world that lasted.&lt;/p>
&lt;p>In the simulation below, I show how food quality and location can become inversely correlated, even if they start out completely independent. This happens because survival is a filter: a restaurant can survive with bad food if it has high foot traffic (a tourist trap), and it can survive in a bad location if the food is amazing. But if it has bad food and a bad location, it goes out of business and disappears from the dataset. We only see the survivors.&lt;/p>
&lt;div class="berkson-param">
&lt;label>Survival Threshold: &lt;span id="berksonThresholdValue">100&lt;/span>&lt;/label>
&lt;input type="range" id="berksonThreshold" min="50" max="150" step="5" value="100" class="berkson-slider">
&lt;/div>
&lt;div class="sim-controls">
&lt;button id="berksonStep">Step&lt;/button>
&lt;button id="berksonRun">Run&lt;/button>
&lt;button id="berksonReset" class="sim-reset-btn">Reset&lt;/button>
&lt;/div>
&lt;div class="berkson-dag">
&lt;canvas id="berksonDAG" class="berkson-dag-canvas">&lt;/canvas>
&lt;/div>
&lt;div class="berkson-charts">
&lt;div class="berkson-chart">
&lt;canvas id="berksonFull" class="berkson-scatter-canvas">&lt;/canvas>
&lt;/div>
&lt;div class="berkson-chart">
&lt;canvas id="berksonSelected" class="berkson-scatter-canvas">&lt;/canvas>
&lt;/div>
&lt;/div>
&lt;script type="module">
import { initBerksonParadox } from "/js/teaching-visuals.js";
initBerksonParadox({
dagCanvasId: "berksonDAG",
fullCanvasId: "berksonFull",
selectedCanvasId: "berksonSelected",
stepId: "berksonStep",
runId: "berksonRun",
resetId: "berksonReset",
thresholdId: "berksonThreshold",
thresholdValueId: "berksonThresholdValue"
});
&lt;/script>
&lt;p>Combine this all together and you get data that is filled with correlations, large and small, inverse and positive, linear and not-so-linear, spurious and true. Or in other words, data that is actually worth analyzing.&lt;/p>
&lt;h3 id="shape">Shape&lt;/h3>
&lt;p>I&amp;rsquo;ve long been fascinated by the beautifully smooth shapes that data makes when generated at scale, and not just data from systems, but data born of human behavior. I once assumed that because individual humans are so complicated, so idiosyncratic, so hard to predict one at a time, a crowd of them would be exponentially harder to predict. It is, in fact, the opposite. With enough people, the individual differences wash out, and what remains are the smooth generative mechanisms underlying their common behavior.&lt;/p>
&lt;p>Most people are familiar with the bell curve, or normal distribution. Plot human height and you&amp;rsquo;ll get something roughly normal. Height follows this pattern because it&amp;rsquo;s the sum of many small, independent effects: genes, nutrition, environment. You can see below that adult human height is approximately normal; if you split by gender, even more so.&lt;/p>
&lt;div class="static-chart-wrapper">
&lt;div class="static-chart-controls">
&lt;label>
&lt;input type="checkbox" id="height-split"> Split by gender
&lt;/label>
&lt;/div>
&lt;div class="static-chart-container" id="height">&lt;/div>
&lt;/div>
&lt;script type="module">
import { initHeightDistribution } from "/js/teaching-visuals.js";
initHeightDistribution("height", "height-split");
&lt;/script>
&lt;p>Roll a bunch of dice, add them up, record the sum, repeat many times, and you&amp;rsquo;ll get the same shape; that&amp;rsquo;s the central limit theorem at work. When I see a bell curve, I see a process in which a bunch of uncorrelated randomness is being added together. Try running the simulation below and watch how, as you add more samples, the histogram gets closer and closer to a smooth normal distribution.&lt;/p>
&lt;div class="sim-controls">
&lt;label class="sim-param-label">Dice per roll
&lt;input id="cltCount" type="number" min="1" max="20" value="5">
&lt;/label>
&lt;button id="cltStep">Step&lt;/button>
&lt;button id="cltRun">Run&lt;/button>
&lt;button id="cltReset" class="sim-reset-btn">Reset&lt;/button>
&lt;/div>
&lt;div id="cltInfo" class="sim-info">&lt;/div>
&lt;div class="sim-container">
&lt;canvas id="cltCanvas" class="sim-canvas">&lt;/canvas>
&lt;/div>
&lt;script type="module">
import { initGenerateSamples } from "/js/teaching-visuals.js";
initGenerateSamples("clt", "normal");
&lt;/script>
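&lt;p>The same experiment is easy to run outside the browser; here is a minimal sketch (dice and sample counts are arbitrary here):&lt;/p>

```javascript
// Central limit theorem sketch: sums of independent dice pile up into a
// bell curve. Dice and sample counts are illustrative.
function rollDie() {
  return 1 + Math.floor(Math.random() * 6); // fair six-sided die
}

const dice = 5, samples = 50000;
const sums = [];
for (let i = 0; i < samples; i++) {
  let total = 0;
  for (let d = 0; d < dice; d++) total += rollDie();
  sums.push(total); // each sum lands between 5 and 30
}

// The sample mean should sit near the theoretical dice * 3.5 = 17.5.
const mean = sums.reduce((a, b) => a + b, 0) / samples;
console.log("sample mean of 5-dice sums:", mean.toFixed(2));
```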
&lt;p>Most of the data I&amp;rsquo;ve worked with doesn&amp;rsquo;t follow a normal distribution, though. Systems and behaviors rarely follow additive processes. They are more commonly multiplicative and/or full of feedback loops—both of which create long right tails.&lt;/p>
&lt;p>For example, &lt;a href="https://unece.org/fileadmin/DAM/stats/documents/ece/ces/ge.22/2010/zip.36.e.pdf">housing prices&lt;/a> are generally right-skewed, largely because each property&amp;rsquo;s value grows through a chain of proportional effects: land value, size, location premiums, amenities, and market appreciation all compound on top of one another. Small percentage differences in these factors multiply rather than add. This makes the main body of the distribution log-normal; it is &lt;em>log&lt;/em> simply because logarithms turn multiplication problems into addition problems. Remember: normal is additive, log-normal is multiplicative. For that reason, it is common to look at these right-skewed distributions in log space.&lt;/p>
&lt;p>Below, home sales data from &lt;a href="https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads#yearly-file">England and Wales&lt;/a> is charted; try viewing in both linear and log space.&lt;/p>
&lt;div class="static-chart-wrapper">
&lt;div class="static-chart-controls">
&lt;label>
&lt;input type="checkbox" id="homevalues-log"> Log x-axis
&lt;/label>
&lt;/div>
&lt;div class="static-chart-container" id="homevalues">&lt;/div>
&lt;div class="static-chart-source">Source: HM Land Registry Price Paid Data, England &amp; Wales (2024)&lt;/div>
&lt;/div>
&lt;script type="module">
import { initHomeValues } from "/js/teaching-visuals.js";
initHomeValues("homevalues", "homevalues-log");
&lt;/script>
&lt;p>We can get something similar using our same dice rolling example. But this time, rather than adding the dice, we will multiply them together.&lt;/p>
&lt;div class="sim-controls">
&lt;label class="sim-param-label">Dice per roll
&lt;input id="logCount" type="number" min="1" max="20" value="5">
&lt;/label>
&lt;button id="logStep">Step&lt;/button>
&lt;button id="logRun">Run&lt;/button>
&lt;button id="logReset" class="sim-reset-btn">Reset&lt;/button>
&lt;/div>
&lt;div class="sim-log-options">
&lt;label>&lt;input type="checkbox" id="logLogX"> Log x-axis&lt;/label>
&lt;/div>
&lt;div id="logInfo" class="sim-info">&lt;/div>
&lt;div class="sim-container">
&lt;canvas id="logCanvas" class="sim-canvas">&lt;/canvas>
&lt;/div>
&lt;script type="module">
import { initGenerateSamples } from "/js/teaching-visuals.js";
initGenerateSamples("log", "lognormal");
&lt;/script>
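&lt;p>Offline, the multiplicative version is an equally small change; this sketch (with illustrative counts) shows the mean getting dragged far above the median, the signature of a long right tail:&lt;/p>

```javascript
// Multiplying dice instead of adding them produces a long right tail:
// the log of each product is a sum, which is why log-normal data looks
// normal on a log axis. Counts here are illustrative.
function rollDie() {
  return 1 + Math.floor(Math.random() * 6);
}

const dice = 5, samples = 50000;
const products = [];
for (let i = 0; i < samples; i++) {
  let p = 1;
  for (let d = 0; d < dice; d++) p *= rollDie();
  products.push(p); // between 1 and 6^5 = 7776
}

const mean = products.reduce((a, b) => a + b, 0) / samples;
const sorted = products.slice().sort((a, b) => a - b);
const median = sorted[Math.floor(samples / 2)];

// Right skew: a few huge products drag the mean well above the median.
console.log("mean:", mean.toFixed(1), "median:", median);
```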
&lt;p>Many human systems aren&amp;rsquo;t just multiplicative—they&amp;rsquo;re self-reinforcing. Algorithms frequently elevate videos, products, games, or songs based on existing popularity, which in turn makes them even more popular. Because attention is finite, these dynamics often lead to winner-takes-most outcomes. This kind of feedback loop—known as preferential attachment—naturally produces highly right-skewed, power-law distributions.&lt;/p>
&lt;p>When I first started plotting data—any kind of data—on user engagement, I was struck by how often these extreme right-skewed curves appeared. They looked like the chart below of &lt;a href="https://zenodo.org/records/1000885">reviews per game on Steam&lt;/a>. Technically, both the reviews per game on Steam and the housing price data above are a mix of &lt;a href="https://web.uvic.ca/~math-statistics/emeritus/wjreed/dPlN.3.pdf">log-normal on the left, power-law on the right&lt;/a> (I prefer to call it a &lt;a href="https://www.urbandictionary.com/define.php?term=Business+in+front%2C+party+in+the+back">mullet distribution&lt;/a>). Multiplicative effects dominate in one regime, preferential attachment in another. The Steam data is just more dominated by power-law than housing prices are, likely because of the strength of the popularity dynamics I described above.&lt;/p>
&lt;div class="static-chart-wrapper">
&lt;div class="static-chart-controls" style="display:flex; align-items:center; gap:10px;">
&lt;strong>Steam reviews per game&lt;/strong>
&lt;label>&lt;input type="checkbox" id="steamLog"> Log axes&lt;/label>
&lt;/div>
&lt;div class="static-chart-container" id="steamContainer">&lt;/div>
&lt;/div>
&lt;script type="module">
import { initSteamReviews } from "/js/teaching-visuals.js";
initSteamReviews("steamContainer", "steamLog");
&lt;/script>
&lt;p>You can make preferential attachment appear in a simulation by letting prior success increase the odds of future success. The simulation below shows a distribution generated solely by preferential attachment.&lt;/p>
&lt;div class="sim-controls">
&lt;button id="paStep">Step&lt;/button>
&lt;button id="paRun">Run&lt;/button>
&lt;button id="paReset" class="sim-reset-btn">Reset&lt;/button>
&lt;/div>
&lt;div class="sim-log-options">
&lt;label>&lt;input type="checkbox" id="paLog"> Log axes&lt;/label>
&lt;/div>
&lt;div id="paInfo" class="sim-info">&lt;/div>
&lt;div class="sim-container">
&lt;canvas id="paCanvas" class="sim-canvas">&lt;/canvas>
&lt;/div>
&lt;script type="module">
import { initGenerateSamples } from "/js/teaching-visuals.js";
initGenerateSamples("pa", "preferential-attachment");
&lt;/script>
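&lt;p>A bare-bones version of that rich-get-richer rule can be sketched in a few lines (the game and review counts are made up):&lt;/p>

```javascript
// Preferential attachment sketch: each new review lands on a game with
// probability proportional to (its reviews so far + 1), a rich-get-richer
// rule. Game and review counts are illustrative.
const games = 200, reviews = 50000;
const counts = new Array(games).fill(0);

for (let r = 0; r < reviews; r++) {
  // Total weight so far is r (reviews already assigned) plus 1 per game.
  let pick = Math.random() * (r + games);
  let chosen = games - 1;
  for (let g = 0; g < games; g++) {
    pick -= counts[g] + 1;
    if (pick < 0) { chosen = g; break; }
  }
  counts[chosen] += 1;
}

// A handful of games ends up hoarding a big share of the reviews.
const sorted = counts.slice().sort((a, b) => b - a);
const topTenPct = sorted.slice(0, games / 10).reduce((a, b) => a + b, 0);
console.log("share held by the top 10% of games:", (topTenPct / reviews).toFixed(2));
```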
&lt;p>Getting these shapes to emerge naturally is key. When I&amp;rsquo;m building simulations, I never directly sample these distributions. Instead, I code my agents so that, taken together, they arrive at those shapes honestly. I create the feedback loops that produce power-law distributions. I make processes multiplicative where they should be, and allow for multimodality by giving categories different generative properties.&lt;/p>
&lt;h3 id="time">Time&lt;/h3>
&lt;p>When I worked at Snap, I could tell when holidays were happening without ever doing something as prosaic as checking a calendar. I&amp;rsquo;d just look at time-series charts of user engagement, split by region. If people started taking pictures at six or seven in the morning, it was a work or school day. If usage didn&amp;rsquo;t spike until around 10 in the morning, it was a weekend or holiday.&lt;/p>
&lt;p>Anyone who has looked at these time-series is familiar with their rhythms. For example, below is &lt;a href="https://www.eia.gov/electricity/gridmonitor/">daily demand for electricity&lt;/a> in California for the past three years. You can see the weekly peaks plus the greater demand for electricity in summer. Summer strikes again with its many correlations.&lt;/p>
&lt;div class="electricity-demand-container" id="elec">&lt;/div>
&lt;script type="module">
import { initElectricityDemand } from "/js/teaching-visuals.js";
initElectricityDemand("elec");
&lt;/script>
&lt;p>Real data shows this rhythm for several reasons:&lt;/p>
&lt;ul>
&lt;li>You see it across a day because we sleep, work, and live as the earth spins and brings day and night.&lt;/li>
&lt;li>You see it across a week based on however your culture defines weekends and weekdays; Friday nights out, Sunday nights in.&lt;/li>
&lt;li>And you see it across a year, driven by that same sun: the seasons, and the layers we&amp;rsquo;ve added on top—holidays, festivals, school breaks, and all the rituals that divide our time into meaning.&lt;/li>
&lt;/ul>
&lt;p>In the visualization below, I multiply these temporal components—daily, weekly, and yearly—to approximate the kinds of cycles that appear in real time series. Conceptually, it&amp;rsquo;s similar to seasonal–trend decomposition, where a signal is separated into trend and seasonal components for analysis or forecasting. The difference is that here I&amp;rsquo;m composing those cycles from the ground up rather than decomposing observed data.&lt;/p>
&lt;div class="seasonality-grid">
&lt;div class="seasonality-panel">
&lt;div class="seasonality-header">
&lt;strong>Weekly&lt;/strong>
&lt;small class="seasonality-hint">drag dots&lt;/small>
&lt;/div>
&lt;canvas id="weeklyCanvas" class="seasonality-canvas">&lt;/canvas>
&lt;/div>
&lt;div class="seasonality-panel">
&lt;div class="seasonality-header">
&lt;strong>Annual&lt;/strong>
&lt;small class="seasonality-hint">drag dots&lt;/small>
&lt;/div>
&lt;canvas id="annualCanvas" class="seasonality-canvas">&lt;/canvas>
&lt;/div>
&lt;div class="seasonality-panel">
&lt;div class="seasonality-header">
&lt;strong>Trend&lt;/strong>
&lt;small class="seasonality-hint">drag dots&lt;/small>
&lt;/div>
&lt;canvas id="trendCanvas" class="seasonality-canvas">&lt;/canvas>
&lt;/div>
&lt;/div>
&lt;div class="seasonality-combo">
&lt;div class="seasonality-header">
&lt;strong>Combined (3 years)&lt;/strong>
&lt;small class="seasonality-hint">driven by controls above&lt;/small>
&lt;/div>
&lt;div class="seasonality-controls" style="margin-bottom: 8px;">
&lt;label style="font-size: 0.9em; color: var(--fg2); display: flex; align-items: center; gap: 8px;">
&lt;span>Noise:&lt;/span>
&lt;input type="range" id="seasonalityNoise" min="0" max="0.5" step="0.01" value="0.25" style="width: 100px;">
&lt;/label>
&lt;/div>
&lt;canvas id="comboCanvas" class="seasonality-combo-canvas">&lt;/canvas>
&lt;/div>
&lt;script type="module">
import { initSeasonality } from "/js/teaching-visuals.js";
initSeasonality();
&lt;/script>
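&lt;p>The composition itself is only a few lines; this sketch multiplies a weekly shape, an annual bump, and a trend, with noise on top (the shapes are illustrative, not the chart&amp;rsquo;s real controls):&lt;/p>

```javascript
// Composing a synthetic series from cycles, the same way the visual
// above does: weekly and annual shapes multiply a slow trend, with noise
// layered on top. The specific shapes here are illustrative.
function weekly(day) {
  return day % 7 >= 5 ? 1.3 : 0.9; // weekends run hotter than weekdays
}

function annual(day) {
  // one smooth bump per year, peaking mid-year (a "summer" effect)
  return 1 + 0.3 * Math.sin((2 * Math.PI * (day % 365)) / 365);
}

function trend(day) {
  return 100 + 0.05 * day; // slow linear growth
}

const days = 3 * 365;
const series = [];
for (let d = 0; d < days; d++) {
  const noise = 1 + 0.05 * (Math.random() * 2 - 1); // +/-5% noise
  series.push(trend(d) * weekly(d) * annual(d) * noise);
}

console.log("first week:", series.slice(0, 7).map(v => v.toFixed(1)).join(", "));
```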
&lt;p>I combine those cycles with holidays—which turn weekdays into weekends—and make them an integral part of the causal DAG that drives my agents. By giving the root nodes those rhythms and letting them propagate through the graph, the model naturally produces the kinds of correlated effects that trip people up—spurious patterns like, “Hey, I think ice cream causes murder!”&lt;/p>
&lt;h2 id="emergent-complexity">Emergent Complexity&lt;/h2>
&lt;p>Now that we know what our data should look like, how do we achieve that without manually faking every data point? That sounds so complex!&lt;/p>
&lt;p>It does. But we&amp;rsquo;re in luck: we don&amp;rsquo;t have to (and we shouldn&amp;rsquo;t) code the complexity directly. We rely on it emerging. When programming our simulations, we can focus on just the primary generative mechanisms, plus some truly random individual-level noise, and let very simple individual-level rules generate an overall far more complex simulation.&lt;/p>
&lt;p>The fact that complexity can arise from simple rules is well established. &lt;a href="https://www.red3d.com/cwr/papers/1987/boids.html">Boids&lt;/a> is the classic example: three simple rules—steer away from crowds, move with the flock, and head toward its center of mass—produce beautiful flocking behavior. That&amp;rsquo;s always the goal: rich, organic behavior that feels real but emerges from simple, understandable rules. Try adjusting the relative strength of those three simple rules in the simulation below; you can see the mix of effects that emerge.&lt;/p>
&lt;div class="boids-controls">
&lt;div class="boids-rule">
&lt;label>Separation &lt;small>(avoid crowding)&lt;/small>&lt;/label>
&lt;input type="range" id="boidsSeparation" min="0" max="3" step="0.1" value="1.5" class="boids-slider">
&lt;span id="boidsSeparationValue" class="boids-value">1.5&lt;/span>
&lt;/div>
&lt;div class="boids-rule">
&lt;label>Alignment &lt;small>(match direction)&lt;/small>&lt;/label>
&lt;input type="range" id="boidsAlignment" min="0" max="3" step="0.1" value="1.0" class="boids-slider">
&lt;span id="boidsAlignmentValue" class="boids-value">1.0&lt;/span>
&lt;/div>
&lt;div class="boids-rule">
&lt;label>Cohesion &lt;small>(stay together)&lt;/small>&lt;/label>
&lt;input type="range" id="boidsCohesion" min="0" max="3" step="0.1" value="1.0" class="boids-slider">
&lt;span id="boidsCohesionValue" class="boids-value">1.0&lt;/span>
&lt;/div>
&lt;/div>
&lt;div class="boids-secondary-controls">
&lt;label>Boids: &lt;span id="boidsCountValue">100&lt;/span>&lt;/label>
&lt;input type="range" id="boidsCount" min="10" max="300" step="10" value="100" class="boids-slider">
&lt;button id="boidsReset" class="btn-secondary sim-reset-btn">Reset&lt;/button>
&lt;/div>
&lt;div class="boids-container">
&lt;canvas id="boidsCanvas" class="boids-canvas">&lt;/canvas>
&lt;/div>
&lt;script type="module">
import { initBoids } from "/js/teaching-visuals.js";
initBoids({
canvasId: "boidsCanvas",
separationId: "boidsSeparation",
alignmentId: "boidsAlignment",
cohesionId: "boidsCohesion",
separationValueId: "boidsSeparationValue",
alignmentValueId: "boidsAlignmentValue",
cohesionValueId: "boidsCohesionValue",
countId: "boidsCount",
countValueId: "boidsCountValue",
resetId: "boidsReset"
});
&lt;/script>
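&lt;p>For reference, the three rules reduce to a short update loop; this is a minimal sketch, not the simulation&amp;rsquo;s actual source (weights, radii, and the cohesion scaling are illustrative):&lt;/p>

```javascript
// Minimal boids step: the three rules from the text (separation,
// alignment, cohesion), each computed from neighbors within a radius.
// Weights mirror the sliders above; all constants are illustrative.
const W = { separation: 1.5, alignment: 1.0, cohesion: 1.0 };
const RADIUS = 50, MAX_SPEED = 4, SIZE = 500;

function makeBoid() {
  return {
    x: Math.random() * SIZE, y: Math.random() * SIZE,
    vx: Math.random() * 2 - 1, vy: Math.random() * 2 - 1,
  };
}

function step(boids) {
  for (const b of boids) {
    let sepX = 0, sepY = 0, aliX = 0, aliY = 0, cohX = 0, cohY = 0, n = 0;
    for (const other of boids) {
      if (other === b) continue;
      const dx = other.x - b.x, dy = other.y - b.y;
      const dist = Math.hypot(dx, dy);
      if (dist > RADIUS || dist === 0) continue;
      n++;
      sepX -= dx / dist; sepY -= dy / dist; // steer away from crowds
      aliX += other.vx;  aliY += other.vy;  // move with the flock
      cohX += other.x;   cohY += other.y;   // head toward its center of mass
    }
    if (n > 0) {
      b.vx += W.separation * sepX / n + W.alignment * aliX / n
            + W.cohesion * (cohX / n - b.x) * 0.01;
      b.vy += W.separation * sepY / n + W.alignment * aliY / n
            + W.cohesion * (cohY / n - b.y) * 0.01;
    }
    const speed = Math.hypot(b.vx, b.vy);
    if (speed > MAX_SPEED) { b.vx *= MAX_SPEED / speed; b.vy *= MAX_SPEED / speed; }
    b.x = (b.x + b.vx + SIZE) % SIZE; // wrap around the edges
    b.y = (b.y + b.vy + SIZE) % SIZE;
  }
}

const flock = Array.from({ length: 100 }, makeBoid);
for (let t = 0; t < 200; t++) step(flock);
```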
&lt;p>My agents are generally utility-driven; each agent has a utility function that it tries to maximize at every step. Individually, they&amp;rsquo;re single-minded and unintelligent, and if I relied on any one of them alone, the simulation would quickly collapse into some pathological or divergent behavior.&lt;/p>
&lt;p>To avoid that, I build a range of intentionally simple strategies. One set of agents repeats whatever was most profitable in the past; another acts randomly; another imitates whatever has been most popular recently.&lt;/p>
&lt;p>What I&amp;rsquo;ve discovered over years of building these systems is that when I combine many such agents—each using different but simple heuristics—the overall system becomes surprisingly robust and often appears intelligent. It&amp;rsquo;s a form of the wisdom of the crowds: a collection of uncorrelated, naïve guesses can, when aggregated, produce remarkably accurate behavior.&lt;/p>
&lt;p>I create balance in the system without hard-coded limits by putting agents in tension with one another. As an example, I have a simple predator-prey simulation below: the prey eat grass, predators eat the prey. When there are too many prey, the grass thins and predators thrive; when there are too many predators, they starve. The system naturally ebbs and flows as everyone does their part to keep it in rhythmic balance. You can play around with the levers to see if you can make the system stay in balance without the ecosystem collapsing.&lt;/p>
&lt;div class="ecosystem-params">
&lt;div class="ecosystem-param">
&lt;label>Cat Speed: &lt;span id="ecosystemCatSpeedValue">1.5&lt;/span>&lt;/label>
&lt;input type="range" id="ecosystemCatSpeed" min="0" max="2" step="0.1" value="1.5" class="ecosystem-slider">
&lt;/div>
&lt;div class="ecosystem-param">
&lt;label>Mouse Speed: &lt;span id="ecosystemMouseSpeedValue">0.6&lt;/span>&lt;/label>
&lt;input type="range" id="ecosystemMouseSpeed" min="0" max="2" step="0.1" value="0.6" class="ecosystem-slider">
&lt;/div>
&lt;div class="ecosystem-param">
&lt;label>Grass Growth: &lt;span id="ecosystemGrassGrowthValue">1.0&lt;/span>&lt;/label>
&lt;input type="range" id="ecosystemGrassGrowth" min="0" max="3" step="0.1" value="1.0" class="ecosystem-slider">
&lt;/div>
&lt;div class="ecosystem-param">
&lt;label>Cat Vision: &lt;span id="ecosystemVisionValue">10&lt;/span> cells&lt;/label>
&lt;input type="range" id="ecosystemVision" min="1" max="30" step="1" value="10" class="ecosystem-slider">
&lt;/div>
&lt;div class="ecosystem-param">
&lt;label>Mouse Vision: &lt;span id="ecosystemMouseVisionValue">5&lt;/span> cells&lt;/label>
&lt;input type="range" id="ecosystemMouseVision" min="1" max="30" step="1" value="5" class="ecosystem-slider">
&lt;/div>
&lt;div class="ecosystem-param">
&lt;label>Mouse Reproduce: &lt;span id="ecosystemMouseReproduceValue">5&lt;/span> grass&lt;/label>
&lt;input type="range" id="ecosystemMouseReproduce" min="1" max="30" step="1" value="5" class="ecosystem-slider">
&lt;/div>
&lt;div class="ecosystem-param">
&lt;label>Cat Reproduce: &lt;span id="ecosystemCatReproduceValue">15&lt;/span> mice&lt;/label>
&lt;input type="range" id="ecosystemCatReproduce" min="1" max="30" step="1" value="15" class="ecosystem-slider">
&lt;/div>
&lt;div class="ecosystem-param">
&lt;label>Starve After: &lt;span id="ecosystemStarvationValue">50&lt;/span> turns&lt;/label>
&lt;input type="range" id="ecosystemStarvation" min="10" max="200" step="10" value="50" class="ecosystem-slider">
&lt;/div>
&lt;/div>
&lt;div class="ecosystem-controls">
&lt;button id="ecosystemStep">Step&lt;/button>
&lt;button id="ecosystemRun">Run&lt;/button>
&lt;button id="ecosystemReset" class="sim-reset-btn">Reset&lt;/button>
&lt;/div>
&lt;div class="ecosystem-container">
&lt;div class="ecosystem-grid">
&lt;canvas id="ecosystemCanvas" class="ecosystem-canvas">&lt;/canvas>
&lt;/div>
&lt;div class="ecosystem-chart">
&lt;div style="margin-bottom: 8px;">
&lt;label>
&lt;input type="checkbox" id="ecosystemLogScale">
Log scale
&lt;/label>
&lt;/div>
&lt;canvas id="ecosystemChart" class="ecosystem-chart-canvas">&lt;/canvas>
&lt;/div>
&lt;/div>
&lt;script type="module">
import { initEcosystem } from "/js/teaching-visuals.js";
initEcosystem({
canvasId: "ecosystemCanvas",
chartId: "ecosystemChart",
stepId: "ecosystemStep",
runId: "ecosystemRun",
resetId: "ecosystemReset",
logScaleId: "ecosystemLogScale",
catSpeedId: "ecosystemCatSpeed",
catSpeedValueId: "ecosystemCatSpeedValue",
mouseSpeedId: "ecosystemMouseSpeed",
mouseSpeedValueId: "ecosystemMouseSpeedValue",
grassGrowthId: "ecosystemGrassGrowth",
grassGrowthValueId: "ecosystemGrassGrowthValue",
visionId: "ecosystemVision",
visionValueId: "ecosystemVisionValue",
mouseVisionId: "ecosystemMouseVision",
mouseVisionValueId: "ecosystemMouseVisionValue",
mouseReproduceId: "ecosystemMouseReproduce",
mouseReproduceValueId: "ecosystemMouseReproduceValue",
catReproduceId: "ecosystemCatReproduce",
catReproduceValueId: "ecosystemCatReproduceValue",
starvationId: "ecosystemStarvation",
starvationValueId: "ecosystemStarvationValue"
});
&lt;/script>
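&lt;p>The same ebb and flow shows up in the classic Lotka&amp;ndash;Volterra equations, a deliberately stripped-down stand-in for the grid world above (parameters here are illustrative):&lt;/p>

```javascript
// Lotka-Volterra sketch of the predator-prey rhythm: prey grow, predators
// eat prey, predators starve without them. Parameters are illustrative.
let prey = 40, predators = 9;
const a = 0.1;   // prey birth rate
const b = 0.02;  // predation rate
const c = 0.3;   // predator death rate
const d = 0.01;  // predator growth per prey eaten
const dt = 0.1;  // Euler step size

const preyHistory = [];
for (let t = 0; t < 2000; t++) {
  const dPrey = (a * prey - b * prey * predators) * dt;
  const dPred = (d * prey * predators - c * predators) * dt;
  prey += dPrey;
  predators += dPred;
  preyHistory.push(prey);
}

// Both populations cycle rather than settling or exploding.
console.log("prey min:", Math.min(...preyHistory).toFixed(1),
            "max:", Math.max(...preyHistory).toFixed(1));
```

With these parameters the system orbits the equilibrium (prey at c/d = 30, predators at a/b = 5) rather than converging to it.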
&lt;p>I usually introduce some form of survival of the fittest to stabilize things. I&amp;rsquo;m not afraid to let agents die if they can&amp;rsquo;t achieve their utility, but I&amp;rsquo;m always spawning new ones. When new agents appear, they mutate off the strategies of the successful ones. Winning strategies get copied (with variation), and the rest die off.&lt;/p>
&lt;p>Over time, the system reaches a kind of equilibrium. I never expose day zero of my simulations; things are too weird then, too obviously hand-tuned. Instead, I let the world run for a while and settle; I let the genetic algorithms do their thing. I graph everything, check the curves, adjust the balance. For many processes, I already have strong expectations about what everything should look like, because I&amp;rsquo;ve seen the equivalent in my work many times over. When I haven&amp;rsquo;t, I search through papers for empirical observations I can use to ensure my simulation matches the same generative spirit.&lt;/p>
&lt;p>It is less pure engineering and more gardening. I tend to my simulation as much as I build it. Little worlds, carefully grown, for students to explore.&lt;/p></description></item><item><title>If managers were angels</title><link>https://www.bonnycode.com/posts/if-managers-were-angels/</link><pubDate>Tue, 21 Oct 2025 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/if-managers-were-angels/</guid><description>&lt;blockquote>
&lt;p>&amp;ldquo;If managers were angels, no upward management would be necessary. In a career administered by people over people, the great difficulty lies in this: a manager must be allowed to manage the managed; and in the next place, the managed must, in turn, also manage upwards.&amp;rdquo;&lt;/p>
&lt;p>— Jim Madison, ex-VP BigCo, 3x Founder, Board Director, Executive Coach, Author of &lt;em>Jim’s Federalist Newsletter&lt;/em>&lt;/p>&lt;/blockquote>
&lt;hr>
&lt;p>Alleigh refills her mug; her fifth coffee today. It’s been a rough week. She checks her phone to confirm the room: &lt;strong>3:00pm — 1:1 Sherry/Allie — Conf. Room Blackpink&lt;/strong>&lt;/p>
&lt;p>&lt;em>Blackpink?&lt;/em> That’s the fishbowl next to &lt;em>Stray Kids&lt;/em>. When Alleigh started at BigCo six months ago, K-pop conference room names seemed kinda fire. Now when she hears &lt;em>Pink Venom&lt;/em> she thinks of sprint planning meetings.&lt;/p>
&lt;p>Alleigh finds her room; Sherry isn’t there yet. Good, she has a few minutes to plan what she wants to say. This will be the third &amp;ldquo;biweekly&amp;rdquo; 1:1 Alleigh has had with Sherry. Is she allowed to complain in these meetings? Should she give updates? Or make small talk?&lt;/p>
&lt;blockquote>
&lt;p>sry running 15 mins late — sher&lt;/p>&lt;/blockquote>
&lt;p>Alleigh’s supposed to be increasing the shininess of the checkout button by fifteen percent. Why did she get stuck with this stupid task? She wants to build AI models, not mess around with button pixels like some front-end dev.&lt;/p>
&lt;p>Sherry, the team&amp;rsquo;s manager, has been pushing everyone to log more PAT hours. Why can&amp;rsquo;t they just use a normal LLM?&lt;/p>
&lt;pre tabindex="0">&lt;code>allie4339@bigco: PAT, write a design spec for WEBX-9941
PAT@bigco: Thinking, Boss... 🧠
PAT@bigco: Still thinking, Boss... 😎💪
PAT@bigco: I did the thing: specs/webx-9941.md 📝
PAT@bigco: Love ya Boss! 💖
&lt;/code>&lt;/pre>&lt;p>PAT is worse than no help, though. So far it’s made the button purple, disappeared the button, cloned the button, and almost nuked the checkout page.&lt;/p>
&lt;p>Alleigh can’t find Dave anywhere. He’s supposedly her mentor—or wait, her “onboarding buddy”? Either way, he’s never in the office; Dave is wherever Dave goes.&lt;/p>
&lt;blockquote>
&lt;p>need to cancel. meeting ran over. anything important you needed to cover? — sher&lt;/p>&lt;/blockquote>
&lt;p>&lt;em>Important?&lt;/em> What does that even mean? Alleigh grabs another coffee and readies herself for yet another night making this cursed button shiny.&lt;/p>
&lt;p>Alleigh&amp;rsquo;s supposed to be a &amp;ldquo;rockstar.&amp;rdquo; Or as her mom would say, &amp;ldquo;You&amp;rsquo;re just so good at this computer stuff.&amp;rdquo; She graduated in under four years, got straight As (well, nearly, she got a B in bowling), all while working at Baby Gap.&lt;/p>
&lt;p>When she started at BigCo, Alleigh imagined that through grit and hard work—&lt;em>are those the same thing?&lt;/em>—she&amp;rsquo;d quickly rise in the ranks. Instead, she&amp;rsquo;s spinning.&lt;/p>
&lt;p>During sprint planning last week, she was excited to take on the button shininess task. It was an urgent request from the VP of Product. She&amp;rsquo;d never touched that area of the code, but hey, she&amp;rsquo;d taken a web development class.&lt;/p>
&lt;p>Originally, Sherry tried to assign it to Sammi, but Sammi sandbagged the estimate and said three story points. Alleigh raised her hand and said she could do it in one. She didn&amp;rsquo;t even want the task—the churn detection ML saga looked way more interesting—but she felt like such a hero taking one for the team. Sammi was all too quick to let her. Sammi then convinced Sherry to let her work on some devops task—blah, blah, blah. Alleigh didn&amp;rsquo;t care. It sounded boring.&lt;/p>
&lt;p>Alleigh expected there to be some constant like &lt;code>checkout_button_shininess&lt;/code> that she could just change. Or some CSS somewhere with a &lt;code>shininess&lt;/code> style. Instead, there&amp;rsquo;s nothing. The checkout button magically materializes on the page in all its too-matte glory. Witchcraft, probably.&lt;/p>
&lt;p>Is this important? Or will Sherry think Alleigh&amp;rsquo;s incompetent and regret hiring her if she asks for help?&lt;/p>
&lt;hr>
&lt;p>This is Sherry&amp;rsquo;s second year as a manager. She was the team&amp;rsquo;s tech lead before, and when the old manager left, she asked the director if she could have a shot at it.&lt;/p>
&lt;p>As a tech lead, things were so much simpler. Refactoring systems, reducing latency, building elegant UI component architectures. She could get in the zone for hours solving complex problems and she was celebrated for it.&lt;/p>
&lt;p>Now executive leadership has her tracking &amp;ldquo;PAT hours&amp;rdquo;—from 20 hours to 30 hours per team member per week, a 50% QoQ improvement. Sherry knows it&amp;rsquo;s ridiculous. But as her dad used to say, &amp;ldquo;You can&amp;rsquo;t fight city hall.&amp;rdquo;&lt;/p>
&lt;p>She&amp;rsquo;s worried about Dave. They started at BigCo around the same time, and he was her lifeline in those early days. Something&amp;rsquo;s off with him lately. He isn’t himself anymore. Sherry suspects something bad happened in Dave’s personal life but doesn’t want to pry. Sherry’s never been great with this touchy-feely stuff. She&amp;rsquo;d hoped pairing the new hire Allie with Dave would be a win-win: inject some energy into him, give Allie some stability.&lt;/p>
&lt;p>Kait’s been complaining about Sammi again. Sherry doesn&amp;rsquo;t get it. Kait says Sammi smirks when she talks and calls it a &amp;ldquo;micro-aggression.&amp;rdquo; Why does Sherry need to handle this? She wants to just tell Kait to stop looking at Sammi then. Sammi delivers on time, every time; she is a rock. She needs a team of rocks right now.&lt;/p>
&lt;p>And don&amp;rsquo;t get her started on Alex. Fighting his PIP (aka &amp;ldquo;probably getting fired&amp;rdquo;), HR investigating. It’ll be such a weight off her shoulders when he is finally out the door. Good grief.&lt;/p>
&lt;p>Her PAT &amp;ldquo;red alert&amp;rdquo; meeting with leadership is running over. These meetings are the worst. Plus there&amp;rsquo;s RIF (aka &amp;ldquo;probably a lot of us are getting fired&amp;rdquo;) rumors. All the managers are on edge. Sherry can&amp;rsquo;t lose this job; her husband&amp;rsquo;s been out of work since March and the mortgage isn’t paying itself&amp;hellip; &lt;em>Breathe, it&amp;rsquo;s fine. Sherry&amp;rsquo;s handled worse.&lt;/em>&lt;/p>
&lt;p>Her 1:1 with Allie is coming up, but she needs to push it. It feels terrible to keep cancelling, but last time Allie just stared and gave one-word answers. Maybe she should just sync with Dave instead? No big deal, right? &lt;em>Breathe.&lt;/em>&lt;/p>
&lt;hr>
&lt;p>Alleigh finally gets up the courage to send an email to Sherry:&lt;/p>
&lt;blockquote>
&lt;p>&lt;strong>URGENT: Update on WEBX-9941: Increase checkout button shininess by 15%&lt;/strong>&lt;/p>
&lt;p>Dear Sherry,&lt;/p>
&lt;p>I have one thing I wanted to bring your attention to in our 1:1 – the one that you unfortunately cancelled. I’m working on WEBX-9941 – the checkout button shininess task – and we, again unfortunately, underestimated how long it would take. I’m trying to get help but I can’t find Dave. This task is also beyond PAT’s current operating capabilities. We need to push out how long the feature is going to take.&lt;/p>
&lt;p>Regards,
Alleigh&lt;/p>
&lt;hr>
&lt;p>&lt;em>Here you go, Boss! 🌈☝️Would you like me to make it sound a bit more formal while still making sure you don’t get blamed for this? 😎💖&lt;/em>&lt;/p>&lt;/blockquote>
&lt;p>Alleigh sits back with pride; adulting.&lt;/p>
&lt;p>Ten minutes later, Sherry replies:&lt;/p>
&lt;blockquote>
&lt;p>CCing Dave. Allie, we are committed to getting it in this sprint with VP of Product. It can’t slip. Dave, can you please unblock Allie? — sher&lt;/p>&lt;/blockquote>
&lt;p>Before Alleigh has a chance to process what Sherry said, Dave’s response comes through:&lt;/p>
&lt;blockquote>
&lt;p>Allie, I’ve been working on a critical patch for PAT. If you need help, you can just send me an email; I don’t see anything in my inbox from you. The code for shininess is handled in shiny_controller. There’s no reason we should need to slip this; it’s a trivial change.&lt;/p>&lt;/blockquote>
&lt;p>&lt;em>Fuuuudge.&lt;/em>&lt;/p>
&lt;p>Why would Sherry CC Dave? Does Dave hate her now?&lt;/p>
&lt;p>Dave says the feature’s easy, but &lt;code>shiny_controller&lt;/code> straight-up doesn’t exist. She checks git history and it was deleted three years ago. Long gone.&lt;/p>
&lt;p>&lt;em>Awesome. Love that for her.&lt;/em>&lt;/p>
&lt;p>Why is the whole world against her? Alleigh wishes she had taken that government job instead.&lt;/p>
&lt;p>In last-ditch desperation, Alleigh reaches out to Sammi. Sammi was a little odd, but who else did she have left?&lt;/p>
&lt;blockquote>
&lt;p>Heyyy Sam, I could really use your help. I totally underestimated the shininess button task. I’m losing it fr.&lt;/p>
&lt;p>Dave pointed me to &lt;code>shiny_controller&lt;/code>, but that doesn’t exist anymore. Reading that refactor task, it seems like the logic was moved to &lt;code>uber_controller&lt;/code>, but that’s doing some reflection madness I don’t understand.&lt;/p>
&lt;p>When I try to get PAT to do it, PAT keeps gaslighting me, &amp;lsquo;Yes boss! I made the button extra shiny ✨🚀&amp;rsquo;, but then just makes the button disappear.&lt;/p>
&lt;p>Is there any docs you could point me to, or do you have any idea if I’m on the right track? I really appreciate whatever help you have. 🙏 - Al&lt;/p>&lt;/blockquote>
&lt;p>No response.&lt;/p>
&lt;p>Sherry must have told the whole team not to help her. She starts drafting how she’ll tell her mom she didn’t make it. How she’s a failure. Will she have to move back into her mom’s house?&lt;/p>
&lt;p>An hour later, Sammi finally replies:&lt;/p>
&lt;blockquote>
&lt;p>Al, yeahhh… I assumed when you said it would only be one point you had no idea haha. So the thing is don’t mess with &lt;code>uber_controller&lt;/code>. This is all domain-driven. You need to look in the FUON files, which contain our CSS derivatives. The whole thing is crazy over-architected and honestly the worst.&lt;/p>
&lt;p>Let me just point you toward some similar tasks, and that might be enough to get PAT cooking. &lt;a href="https://www.bonnycode.com/posts/rawr/">Task 1&lt;/a>, &lt;a href="https://www.bonnycode.com/posts/rawr/">Task 2&lt;/a>, &lt;a href="https://www.bonnycode.com/posts/rawr/">Task 3&lt;/a>&lt;/p>
&lt;p>Cheers, SM.&lt;/p>&lt;/blockquote>
&lt;p>Alleigh opens up the links; she’s both horrified at how the system works and, honestly, so relieved she could cry. She drops PAT the tasks for reference. PAT is also horrified (&amp;ldquo;It hurts, Boss 🫠🫥&amp;rdquo;), but PAT finally, if reluctantly, knows what to do. Alleigh works with PAT the rest of the afternoon to craft the perfect elegant sheen on the checkout button. It&amp;rsquo;s the first time she&amp;rsquo;s felt accomplished at BigCo.&lt;/p>
&lt;p>When she finishes, Alleigh walks to Sammi’s desk. Sammi’s vibing behind her bulky headphones, locked in, typing like an actual menace. Alleigh waves to get her attention, and Sammi oh-so-slowly slides the headphones off. Alleigh gushes a stream of thank-yous. Sammi averts her eyes, turns a shade of red under the praise, and manages the most awkward, forced smile. Alleigh doesn’t care; Sammi is her savior, and she’s going to suffer through some well-deserved adulation.&lt;/p>
&lt;p>A week later, the Product VP asks to roll back the shininess; it’s too shiny.&lt;/p>
&lt;h2 id="epilogue">Epilogue&lt;/h2>
&lt;p>Years later, Alleigh thinks back to those early days at BigCo (before the merger with MegaCo). She wanted to quit so badly that week. If she could have afforded it, she would have. Instead, she stuck with it. She adapted.&lt;/p>
&lt;p>Sherry grew into a better manager, though never a &lt;em>great&lt;/em> one. She later transitioned to Principal Engineer and, only then, became a strong champion for Alleigh.&lt;/p>
&lt;p>When Alleigh became a manager herself, she used her experiences to be better. To make things a little fairer. More humane. Sometimes she even succeeded.&lt;/p>
&lt;p>Dave left for Google not long after the shiny button incident.&lt;/p>
&lt;p>Sammi and Alleigh became friends—perhaps inevitable, given their names rhymed. It turned out Sammi was completely hooked on VR rhythm games. They played every Thursday for years. They eventually lost touch when Alleigh moved to another company. She still thinks of Sammi sometimes and keeps meaning to reach out.&lt;/p>
&lt;p>Decades later, Alleigh received a card from PAT: &amp;ldquo;Miss you Boss 🎄🎅. Merry Christmas!&amp;rdquo; Alleigh doesn’t celebrate Christmas. And it was July. But the thought was still nice.&lt;/p>
&lt;p>Alleigh didn&amp;rsquo;t become a CEO, nor unimaginably rich or powerful. But she built a good career that supported a good life.&lt;/p>
&lt;p>The button stayed matte. Everything does, eventually.&lt;/p>
&lt;p>But it caught the light, once, just right.&lt;/p>
&lt;p>PAT whispered, &amp;ldquo;beautiful, boss 🫡&amp;rdquo;.&lt;/p>
&lt;p>And, for a moment, the machine was right.&lt;/p></description></item><item><title>The architecture of truth-seeking</title><link>https://www.bonnycode.com/posts/architecture-of-truth-seeking/</link><pubDate>Tue, 30 Sep 2025 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/architecture-of-truth-seeking/</guid><description>&lt;p>A brilliant executive I worked with hired a team of PhDs to automate a critical process deep in his organization. After a year, they delivered plans that were as beautiful as they were unworkable.&lt;/p>
&lt;p>So the manual decisions continued. At most, the team only glanced at the optimal plans. The leader would check in, demanding progress. The team kept saying &amp;ldquo;any day now, the model doesn&amp;rsquo;t work perfectly yet.&amp;rdquo; The leader grew more frustrated. The cycle continued, until finally, everyone just… pretended it all worked. When asked &amp;ldquo;are you using the new system?&amp;rdquo; they&amp;rsquo;d palter &amp;ldquo;yes&amp;rdquo;; technically it was true. They glanced at its output while making the decisions that made sense for the business.&lt;/p>
&lt;p>For years after, this executive would roll out this automation success story when teams in the rest of the company told him something wasn&amp;rsquo;t possible. And the manual team in his own organization kept quietly doing everything the old way. He never noticed. Or never asked. He had his success story, and that appears to be all he really needed.&lt;/p>
&lt;p>The original leader left. Eventually nearly everyone who was originally involved left. But like the Ship of Theseus, the web of lies was somehow maintained–each new person inheriting the fiction from the last.&lt;/p>
&lt;p>Until someone asked the obvious question: &amp;ldquo;Why isn&amp;rsquo;t this thing actually plugged in?&amp;rdquo;&lt;/p>
&lt;p>So they did what any smart person would do. They plugged it in.&lt;/p>
&lt;p>Disaster. It ordered all the wrong things; shortages and chaos followed. Immediate management&amp;rsquo;s post-mortem blamed &amp;ldquo;overly aggressive cost optimization.&amp;rdquo; The truth? The automated plan was more expensive and wrong. But the superiors weren’t close enough to know that.&lt;/p>
&lt;p>The largest improvements I&amp;rsquo;ve seen in companies have often been mundane: fixing systems that ran on garbage data nobody knew was garbage. Truth doesn&amp;rsquo;t bubble up naturally; it has to be designed in. You need different teams in productive tension with each other. Without agonism (the structured contest-of-perspectives), even good people end up maintaining elaborate fictions.&lt;/p>
&lt;h2 id="when-processes-drift-apart">When processes drift apart&lt;/h2>
&lt;p>I saw this dynamic play out vividly at a place we&amp;rsquo;ll call the widget factory (for lack of a more creative way to anonymize). The goal was simple: shorten the lead-time for new widget capacity. What mattered wasn&amp;rsquo;t the average delivery time, but the P95 (95th percentile) lead-time. A reliable date they could build forecasts around. The shorter that lead-time, the less buffer capacity they required and in turn the greater their capital efficiency. That lead-time was originally around six months.&lt;/p>
&lt;p>The workflow was split between two teams (it was in actuality two dozen teams, but I’m simplifying): Delivery, who bought the widget machines, and Installation, who set them up.&lt;/p>
&lt;p>Management gave each team a simple mandate: get faster. Delivery was told to cut its P95 from five months to two. Installation, from two months to one week. On paper, they crushed their goals. Promotions and celebrations all around.&lt;/p>
&lt;p>Except for one problem: the customer&amp;rsquo;s total lead-time was growing. From six months, to seven, then eight. P95 times don’t simply add: because the worst 5% of cases rarely coincide, we expect the overall P95 to be lower than the sum of the sub-P95s. If the total P95 ends up much larger, that’s a sign our assumptions about independence, distribution shape, or what’s being measured are breaking down. When asked, each team pointed to their charts. &amp;ldquo;Don&amp;rsquo;t look at us,&amp;rdquo; they&amp;rsquo;d say. &amp;ldquo;Our numbers are going down.&amp;rdquo;&lt;/p>
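&lt;p>A quick simulation makes the statistical point concrete. The lognormal lead-times below are invented for illustration (they are not the factory’s numbers); the point is only that, for independent stages, the end-to-end P95 should come in under the sum of the per-stage P95s:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical lead-times in months for two independent stages.
delivery = rng.lognormal(mean=1.4, sigma=0.5, size=n)
installation = rng.lognormal(mean=0.5, sigma=0.6, size=n)

p95_delivery = np.percentile(delivery, 95)
p95_installation = np.percentile(installation, 95)
p95_total = np.percentile(delivery + installation, 95)

# The worst 5% of deliveries rarely land on the worst 5% of installs,
# so the end-to-end P95 sits below the sum of the per-stage P95s.
print(f"P95 delivery:     {p95_delivery:.1f} months")
print(f"P95 installation: {p95_installation:.1f} months")
print(f"sum of P95s:      {p95_delivery + p95_installation:.1f} months")
print(f"P95 of total:     {p95_total:.1f} months")
```

&lt;p>That gap between the sum of P95s and the true end-to-end P95 is normal slack. A total that climbs &lt;em>above&lt;/em> the sum of the sub-P95s, as it did here, is the tell that the clocks aren’t measuring the same journey.&lt;/p>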
&lt;p>The truth was hiding in the gap between them: an unmeasured no-man’s-land. By tracing the entire workflow, from the customer’s order to the machine going online, we found both teams’ clocks had drifted further and further apart. Installation wouldn’t start their timer until labor was physically ready to swarm the machine, making their part look especially fast, especially when a large batch of deliveries occurred at the same time. Delivery would stop theirs based on a projected delivery ETA, not the actual arrival. This exempted them from shipping and material delays they felt were beyond their control. Each team&amp;rsquo;s clock was paused while the customer&amp;rsquo;s kept ticking.&lt;/p>
&lt;p>We joined the clocks. The moment Delivery marked a machine &lt;em>delivered&lt;/em>, Installation’s timer started, no exceptions. The data improved instantly because the teams were now interlocked; if a delivery was only real on paper, the Installation team raised hell. If a large batch of deliveries occurred, Installation either needed to staff for that peak or negotiate level-loading with the delivery team. The overall cycle time finally began to drop.&lt;/p>
&lt;p>A happy story, where everyone learned a valuable lesson for the future?&lt;/p>
&lt;p>If only.&lt;/p>
&lt;p>Nobody wanted to admit the earlier promotions and celebrations were based on illusions. So instead of owning the fix, leadership credited it to another high-profile project that was failing. On paper, this worked perfectly: the failing project suddenly looked successful, and no need for a &lt;em>mea culpa&lt;/em>.&lt;/p>
&lt;p>The language justifying this sleight of hand was meticulously drafted to achieve two goals: 1) avoid being outright false, and 2) stay vague enough that nobody would ask the obvious questions.&lt;/p>
&lt;p>But as the story climbed the org chart, that deliberate phrasing was discarded. Higher-level executives, remembering only the original pitch, rewrote it into a neat success. What had started as a fuzzy misdirection hardened into a plain lie. By the time anyone noticed the mismatch, it was politically impossible to correct without admitting to the larger subterfuge. Truth wasn’t the only casualty. The chance to learn what actually works (and what doesn’t) died with it.&lt;/p>
&lt;p>This wasn’t an isolated case. The dramatic wins I see in companies are rarely about brilliance; they’re about avoiding something dumb. But nobody celebrates that. It’s better for your career to say you launched a fancy AI initiative than to admit you fixed an operation built on misconceptions. This is improvement laundering: when executives construct heroic fictions because the truth is too embarrassing.&lt;/p>
&lt;h2 id="when-lies-are-cheap-and-truth-is-expensive">When lies are cheap and truth is expensive&lt;/h2>
&lt;p>The same pattern shows up in today’s tech companies, where truth itself is supposed to be measured in data. In that world, data science has become a key arbiter of success. Data scientists interpret A/B test results and declare a product launch good-to-go. This creates a natural tension. When data scientists are centralized, product leaders complain they have to beg for resources. So, at one company where I worked, a seemingly logical decision was made: embed data scientists directly into product teams.&lt;/p>
&lt;p>The predictable result? Methodological correctness left the building. The new incentive for the data scientists wasn&amp;rsquo;t to find the objective truth, but to deliver the messaging their management wanted to hear.&lt;/p>
&lt;p>This came to a head with a major product launch. The A/B test results were terrible. But for internal political reasons, the launch had to proceed. Product leaders were afraid Engineering would complain if they shipped with bad numbers, knowing they&amp;rsquo;d get blamed for the inevitable fallout.&lt;/p>
&lt;p>So, the embedded data scientists found a solution. They peeked and snapshotted the A/B test results less than a day after the experiment launched.&lt;/p>
&lt;p>They exploited a novelty effect. The new, confusing UX meant users initially spent more time on the page, pushing some engagement metrics into the green. It was a perfect, fleeting illusion of success. Within a week, of course, the test would turn deep red as frustrated users churned. But so long as you took a snapshot in those initial hours after release, you could declare success.
The data scientist was careful in how they presented results; an incomplete truth that hinged on a particular phrasing: &amp;ldquo;Early results show an average treatment effect on time spent as 2% higher.&amp;rdquo; No mention of the novelty effect; no mention of statistical significance.&lt;/p>
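&lt;p>The peeking trick is easy to reproduce in a toy simulation (all numbers below are invented): give a treatment a true effect that starts positive from novelty and decays into a real loss, and a day-one snapshot will always look green:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(1, 15)

# Hypothetical effect curve: a +5% novelty bump that decays while a
# real -4% loss from churning users takes over.
novelty = 0.05 * np.exp(-days / 2)
churn_loss = -0.04 * (1 - np.exp(-days / 2))
true_effect = novelty + churn_loss

# Observed daily lift = true effect + a little sampling noise.
observed = true_effect + rng.normal(0, 0.002, size=days.size)

snapshot_day1 = observed[0]   # the "success" captured in the early snapshot
steady_state = observed[-1]   # where the test actually converges

print(f"day-1 lift:  {snapshot_day1:+.1%}")
print(f"day-14 lift: {steady_state:+.1%}")
```

&lt;p>The day-one number isn’t fabricated; it’s simply taken at the only moment it looks good. Minimum-run-length guardrails on A/B platforms exist precisely to prevent this read.&lt;/p>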
&lt;p>Then the falsehood evolved. The manager&amp;rsquo;s summary to leadership: &amp;ldquo;User signals are positive.&amp;rdquo; The VP&amp;rsquo;s report: &amp;ldquo;Users love the new feature.&amp;rdquo; Nobody exactly lied. The data scientist was misleading but never outright lied. The people who turned it into &amp;ldquo;users love this&amp;rdquo; weren&amp;rsquo;t lying either. It was a natural, if incorrect, conclusion based on the report they were given.&lt;/p>
&lt;p>An engineer, suspicious as to why the A/B test was suddenly green after so much time being red in earlier iterations, passed it to the central data science team. They immediately saw what was wrong. The A/B platform had guardrails against exactly these issues, but the embedded data scientists built custom notebooks that bypassed these safeguards in the name of flexibility. When the central data science team raised these issues, the conversation wasn&amp;rsquo;t &amp;ldquo;this is terrible&amp;rdquo; but &amp;ldquo;we&amp;rsquo;re all on the same team, anyone can make this mistake, is it even wrong? We aren&amp;rsquo;t playing gotcha here.&amp;rdquo;&lt;/p>
&lt;p>This is how plausible deniability forms an immune system for falsehood. Nobody wants to assume malice for what can be explained by ignorance. Irrespective of intent though, when the mistakes are always in one direction, always toward what power wants to hear, the company develops antibodies against truth rather than lies.&lt;/p>
&lt;p>The asymmetry is brutal. Creating these falsehoods costs nothing; you&amp;rsquo;re reading the data through the right lens, being a team player. But catching them requires substantial investment: digging through data, understanding what really happened, often challenging what powerful people have already celebrated. By then, those in positions of power have staked their reputations on the success story. The correction comes with political cost, while creating the necessary fiction comes with promotion.&lt;/p>
&lt;p>When the balance of incentives tilts this way, the ability for an organization to generate reliable truth doesn&amp;rsquo;t only suffer. It dies. The wrong lessons are learned. Poor decisions compound. This isn’t the system failing. As perverse as it is, the system is working exactly as designed.&lt;/p>
&lt;h2 id="designing-for-truth">Designing for truth&lt;/h2>
&lt;p>If systems are working exactly as designed, the real question is: how do we design them differently?&lt;/p>
&lt;p>Truth follows the logic of a tragedy of the commons. The rational choice for any single person is to stay silent to protect themselves. But when everyone makes that same rational choice, the system breaks.&lt;/p>
&lt;p>The solution isn&amp;rsquo;t asking people to be heroes. It&amp;rsquo;s building systems where truth-telling isn&amp;rsquo;t an altruistic act. This requires what I called agonism earlier: structured tension between teams that makes comfortable fictions impossible to maintain.&lt;/p>
&lt;p>But not all tension creates truth. The stories above reveal the necessary conditions:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Equal political standing.&lt;/strong> When data scientists reported to product teams, they became an internal marketing department rather than independent peers. The asymmetry was brutal: creating falsehoods cost nothing while catching them required substantial investment.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Interlocked accountability.&lt;/strong> When the widget factory joined clocks together, neither team could retreat into convenient measurements. Their friction became productive only when success required confronting a shared reality.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Proximity to operations.&lt;/strong> The automation fiction survived because the executive never descended to where decisions actually lived. The desire to tell narratives of success shouldn&amp;rsquo;t become a substitute for on-the-ground reality.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Protected truth-tellers.&lt;/strong> If you want truth in your organization, truth-seeking should be incentivized rather than something that requires a warning label. Truth-telling should be rewarded, not punished.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>Most organizations are designed for comfort, consensus, and clean narratives. And they get exactly that. The rare few that get truth do so because they’ve made fiction more exhausting to maintain than truth. In those organizations, plausible deniability can’t form an immune system for falsehood. The cost of the lie finally exceeds the embarrassment of admitting it.&lt;/p></description></item><item><title>Work Hard, Have Fun, Go Home</title><link>https://www.bonnycode.com/posts/work-hard-have-fun-go-home/</link><pubDate>Thu, 11 Sep 2025 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/work-hard-have-fun-go-home/</guid><description>&lt;blockquote>
&lt;p>&lt;strong>hus·tle cul·ture&lt;/strong> ˈhə-səl ˈkəl-chər&lt;br>
&lt;em>noun&lt;/em>&lt;/p>
&lt;ol>
&lt;li>the performance of results when actual results are missing; specif.: a workplace ethos where the appearance of extreme effort is rewarded more than the achievement of tangible outcomes.&lt;/li>
&lt;li>a management technique characterized by demanding longer work hours to compensate for a lack of strategic direction or a lack of management domain knowledge.&lt;/li>
&lt;li>&lt;em>archaic&lt;/em>: the belief that professional success is directly and exclusively proportional to the time spent engaged in work-related activities.&lt;/li>
&lt;/ol>
&lt;p>&lt;em>See also&lt;/em>: performative work, burnout culture, toxic productivity&lt;/p>&lt;/blockquote>
&lt;p>I could lecture you on the dangers of burnout in tech, the importance of having a life outside your job, and the ethics of exploitation. For many ambitious, high-achieving young engineers and students, those warnings simply don’t matter. The thinking is: I’m at a point in life when I can work hard and put in long hours, so why not? I’ll reap the benefits later. I’ll go where hard work is most rewarded and live for the hustle.&lt;/p>
&lt;p>I was the same early in my working life. I tried to log as many hours as I could. My first jobs were paid hourly, so more hours meant more pay. I loved it when overtime was approved; it meant my bills were paid.&lt;/p>
&lt;p>When I moved into a salaried tech role, I kept working long hours even without the direct financial incentive. Part of it was insecurity: I worried I wasn’t as smart as people thought, so I worked extra to meet expectations. I hid how much I was working so people would think I was effortlessly talented. I went home when everyone else did, then devoured books, built simulations, and pursued deep understanding. Older mentors talked about paying your dues and building a strong work ethic for life. And the truth is, I often just loved the work. Solving difficult problems was fun.&lt;/p>
&lt;h2 id="just-power-through">Just power through&lt;/h2>
&lt;p>I was a tech lead in an organization run by a GM who lived and breathed rise and grind. His credentials were impeccable (think, Harvard MBA, McKinsey, straight to GM). And he was furious when things didn’t go his way. I mostly avoided his wrath because my team hit our goals. But the teams that missed a deadline? Mandatory weekends in the office, late nights, daily standups.&lt;/p>
&lt;p>I made the mistake of dialing into a large project meeting right before a long-planned family trip. One of my sister teams announced that a project due the next week was suddenly red and likely to slip by weeks. My GM exploded.&lt;/p>
&lt;p>I became excited for my time to speak. My projects were going well; I thought their disaster would make me look better by comparison. And I was on that pre-vacation high, thinking about margaritas and the time I needed with family.&lt;/p>
&lt;p>So, I said in a tone that was unmistakably smug:&lt;/p>
&lt;p>“All our projects are on track and running as expected. As a reminder, I’ll be out next week in Cabo and won’t make next week’s meeting.”&lt;/p>
&lt;p>Silence. I’d miscalculated.&lt;/p>
&lt;p>“&lt;strong>NO!&lt;/strong> NO VACATIONS! This is a code red! You are going to help the project get on track! We are all one team here!”&lt;/p>
&lt;p>I protested that I knew nothing about the project. I even tried to cite &lt;a href="https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams">&lt;em>Peopleware&lt;/em>&lt;/a>. All the wrong things to say. I was ordered to cancel my time off, join the team over the weekend, and personally guarantee the launch.&lt;/p>
&lt;p>I canceled my trip; the rest of my family went, just without me. I tried to help the project but had no context. I was also tired; my body and mind had already prepared for a break and now it was yanked away.&lt;/p>
&lt;p>The project was eventually abandoned before it ever launched. Several months later the GM stopped showing up to any meetings. A week-and-a-half later we got an email from our VP: our GM had taken a new role and they were looking for a replacement.&lt;/p>
&lt;h2 id="a-little-right-beats-a-lot-of-wrong">A little right beats a lot of wrong&lt;/h2>
&lt;p>That was a turning point for me. Something broke in me: I stopped seeing the world as a list of tasks that just needed dev hours applied to them. Rather than work that weekend (my typical routine), I went to a coffee shop and just relaxed. My mind naturally went to reflection and I pulled out my notebook. I started planning. I could either be a victim of the unfairness in the world or I could strategize. I chose the latter.&lt;/p>
&lt;p>That became my go-to strategy for the rest of my career, and it served me far better than my “just power through it” approach ever did. The beauty of tech, unlike the construction sites where I once labored, is that there is often a ten-times better way to solve a problem. What you actually need to build is usually less clear than people realize, and once you do understand it, there are often much simpler ways to get there. Overbuilding is the default in tech, usually because teams never take the time to clarify the real need. The result is a generic monstrosity that only half solves the problem. Once you recognize that, you start to see opportunities to approach problems differently. But you cannot do that if you are stuck in stressed, execution-only mode.&lt;/p>
&lt;h2 id="symptoms-of-hustle-culture">Symptoms of hustle culture&lt;/h2>
&lt;p>The most dangerous thing about hustle culture isn’t just the long hours; it’s the systems it creates. Burnout becomes normal, vacations disappear, and leaders learn to paper over failure with stories of sacrifice. You see the same patterns everywhere.&lt;/p>
&lt;h3 id="vacations-that-never-happen">Vacations that never happen&lt;/h3>
&lt;p>When I first joined AWS, I shadowed all the roles in the overall process I was going to automate; a process involving billions of dollars in infrastructure purchases. I was terrified when I discovered a single mid-level employee owned one of the steps in the critical path. They were the only person who had ever done the step. No one else knew how to do it. They scheduled their time off around performing that one step, and hadn’t taken a vacation longer than a week as a result since taking on the role. A single point-of-failure, dutifully keeping an empire running.&lt;/p>
&lt;p>When a company culture demands always being on, it invites these types of systematic risks. Smaller versions exist on every team: the “indispensable” developer who never fully disconnects. Until they leave and you discover the trail of things they owned starting to fall apart. One of my first audits as a manager is simple: check when people last took real leave. Not a day off, but at least one week, preferably three. If it’s been over a year, that’s a red flag. It’s the easiest way to spot single points-of-failure.&lt;/p>
&lt;h3 id="who-you-want-at-3-am">Who you want at 3 a.m.&lt;/h3>
&lt;p>Most of my teams required on-call duties. The engineers I trusted in those rotations weren’t the ones who glorified all-nighters. They were the ones who slept, took breaks, and stayed calm. At 3 a.m., clarity matters more than brute force.&lt;/p>
&lt;p>The people I don’t want on call are the &lt;em>heroes&lt;/em> who think every problem can be solved by powering through. I’ve inherited teams with one developer who lived perpetually on call. That’s not dedication; it’s a disaster waiting to happen. It means no one else understands those systems, no one else can step in to help. And what happens if two of their systems break at the same time? Who are you going to bring in then?&lt;/p>
&lt;h3 id="effort-as-a-cover-story">Effort as a cover story&lt;/h3>
&lt;p>I believe a meaningful amount of the promotion of hustle culture is actually just covering for failure. I’ve helped companies with the metrics side of public statements for many years.&lt;/p>
&lt;p>When results are good, leaders talk about results. When results are bad, leaders talk about some obscure metric trending upward. When everything is bad, leaders talk about how hard everyone is working.&lt;/p>
&lt;p>That’s what I assume when founders post something like:&lt;/p>
&lt;blockquote>
&lt;p>Baby born two hours ago.&lt;/p>
&lt;p>No time for sleep.&lt;/p>
&lt;p>I’m back at the keyboard grinding on my lifelong dream: catpu.ai — Agentic AI for Cat Litterboxes.&lt;/p>
&lt;p>This is what it takes.&lt;/p>
&lt;p>#founderlife #backtowork&lt;/p>&lt;/blockquote>
&lt;p>Or when a CEO says:&lt;/p>
&lt;blockquote>
&lt;p>We work long, hard, and smart; two out of three doesn’t cut it. Our competition is working seven days a week, 15 hours a day.&lt;/p>&lt;/blockquote>
&lt;p>If the product were compelling, they’d talk about the product. If the company was doing what it needed to, they’d talk about that. If they had a strategy to lead the company forward and innovate, they’d preach it from the rooftops. But when they don’t, the message becomes &lt;a href="https://www.youtube.com/watch?v=r8miwsWtzRw">“can you guys, um, work harder?”&lt;/a>&lt;/p>
&lt;p>I&amp;rsquo;ve encountered different reasons otherwise smart people fall into this pattern of thinking. I’ve met the consultant-turned-executive who, after being trained on maximizing billable-hours, still thinks hours equal revenue. I’ve seen the non-technical leader who doesn’t understand what the team does, but just assumes the more, of whatever &lt;em>it&lt;/em> is, the better. I’ve observed the too-far-away leader who is overwhelmed with the size of their org and is left with email blasts such as: &amp;ldquo;plz guys, can you just push until the goals are green?&amp;rdquo;&lt;/p>
&lt;h3 id="the-death-spiral">The Death Spiral&lt;/h3>
&lt;p>Push these symptoms far enough and you hit the Death Spiral. It always starts the same way: a missed goal, a new threat, leadership with no clear plan.&lt;/p>
&lt;p>Let’s say we’re working at a social media company when TikTok first came out. At first, the company says: “not a big deal, totally different business than us”. Video view time starts decreasing. Maybe we broke something? Tell the engineers to look for any bugs. Meanwhile, once-golden demographics like US 15-to-22-year-olds stop showing up. What could be wrong? Then things get much worse. View time starts to nosedive. The engineers say nothing is broken; customers just aren’t showing up. Marketing reports come back saying our users are on TikTok. Competitive analysis points in the same direction.&lt;/p>
&lt;p>What do you do? Now this is an emergency. You start the daily executive meeting to re-establish a sense of control. The CEO joins, your head of engineering, your head of product, some key engineers and product managers. Then the meeting grows and becomes more frantic. More action items are given. And they need to be done tomorrow. Tensions are high. No one wants to say the wrong thing. No one wants to push back.&lt;/p>
&lt;p>For the people on the ground, it’s no longer: &lt;em>‘How do I build a product people will love?’&lt;/em> Instead it becomes: &lt;em>‘Please Lord, get me through this meeting without getting fired.’&lt;/em> Survival replaces results.&lt;/p>
&lt;p>The team works more but accomplishes less. The best engineers leave, because they can. Progress collapses. Deadlines slip. Executives tighten their grip until—congratulations—they’ve killed the puppy.&lt;/p>
&lt;p>Fear. Anxiety. Stress. These are not the ingredients for success. If that’s your leadership’s plan in a crisis, run. Innovation beats exhaustion every time. And exhausted teams rarely innovate.&lt;/p>
&lt;h2 id="building-runways-doesnt-bring-planes">Building runways doesn&amp;rsquo;t bring planes&lt;/h2>
&lt;p>Long hours arise for different reasons. When young engineers work late because they’re genuinely obsessed with a problem, good managers teach discipline to sustain that enthusiasm. When teams work late because management demands it, you get theater: a simulacrum of enthusiasm without the corresponding breakthroughs.&lt;/p>
&lt;p>Weak leaders see correlation (successful teams sometimes work long hours) and force the causation backwards, as if excellence spontaneously arises from butts-in-seat. They set goals on the easy thing, hours, because they can’t create the hard thing: genuine engagement.&lt;/p>
&lt;p>The best managers spend more time pulling excited engineers away from keyboards than pushing tired ones toward them. They protect the flame of creativity from burning out rather than trying to extract it through force. The worst? They celebrate their suffering hoping people assume it was worth it.&lt;/p>
&lt;h2 id="to-my-ambitious-students">To my ambitious students&lt;/h2>
&lt;p>If you’re ambitious, don’t join the team making a virtue of late nights. Join the one building things they’re genuinely excited about and still going home for dinner. The best work happens when people have the energy and clarity to innovate, not when they’re competing to prove who can grind the hardest.&lt;/p>
&lt;p>Work hard and have fun. But more importantly, take care of yourself. When you sacrifice everything for your job, you risk becoming someone whose only identity is a title; the person who, years later, still introduces themselves not by who they are, but by what they used to be.&lt;/p>
&lt;p>That loss of self is the same trap hustle culture sets for your career. The person who believes late nights are the only answer eventually stops learning new answers. As a manager, I never promoted someone already at the edge; they simply had no capacity left to grow. Advancement comes from finding better ways to get more done, not by spending more hours in the same old ways.&lt;/p>
&lt;p>If you truly want to aim high, remember this: The best teams aren&amp;rsquo;t defined by the hours they put in, but by the value they put out.&lt;/p></description></item><item><title>Just people in a room</title><link>https://www.bonnycode.com/posts/just-people-in-a-room/</link><pubDate>Sat, 16 Aug 2025 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/just-people-in-a-room/</guid><description>&lt;p>In the early 2000s, I worked for a digital video processing startup that Autodesk acquired. We were merged with another acquisition. Our team became the developers, their team (the &amp;ldquo;others&amp;rdquo;) became management. The others were clueless. They didn&amp;rsquo;t understand the technical domain, our customers, or the product. Naturally, I hated them. Their meetings were always pointless; I learned to skip them. Their product plan was a disaster. We secretly built our own, supposedly better version. We believed once upper management saw how superior ours was, we&amp;rsquo;d finally get rid of the others and everything would be good again.&lt;/p>
&lt;p>One morning, we were all called into a conference room. I tried to skip, assuming it was another pointless meeting from the others, but HR said it was mandatory. Everyone was there. We were all laid off, effective immediately. The whole department shut down. Us, the others, everyone.&lt;/p>
&lt;p>All the hate and politics evaporated. We were just people sitting in a room together. I felt silly and embarrassed for how I&amp;rsquo;d felt moments before.&lt;/p>
&lt;h1 id="how-do-you-know-you-arent-the-problem">How do you know you aren&amp;rsquo;t the problem?&lt;/h1>
&lt;p>The most common source of frustration in my classes isn’t the material; it’s the group projects. A typical situation: Alice and Bob are in a group. Alice comes to me and says Bob isn’t pulling his weight. Bob comes to me separately and says Alice has taken control of the project and doesn’t leave room for anyone else. Could they resolve this if they talked to each other? Possibly. Do they, without serious prompting? No.&lt;/p>
&lt;p>So how do we know when our conflicts with others are just misunderstandings? How do we know when we’re in the right or when we’re the ones causing harm? These are tough theory-of-knowledge and interpersonal questions.&lt;/p>
&lt;p>In the moment though, it&amp;rsquo;ll feel like &amp;ldquo;Wow, that guy is such a jerk.&amp;rdquo;&lt;/p>
&lt;h1 id="im-going-to-kick-your-ass">I&amp;rsquo;m going to kick your ass&lt;/h1>
&lt;p>Is all conflict just a misunderstanding? Organizational leaders will often view it that way. We are just one team beer away from harmony.&lt;/p>
&lt;p>At Amazon, when I was a software development manager, two software developer interns from another organization came to my office on a Friday morning. I&amp;rsquo;d never met them before and don&amp;rsquo;t know how they found me. They&amp;rsquo;d been assigned to a non-software organization and were in the final weeks of their internship without having committed any code. They worried this would prevent them from getting a return offer. When I asked why they&amp;rsquo;d waited so long to speak up, they said they were scared of their management and felt isolated.&lt;/p>
&lt;p>I looped in campus recruiting, asking how software interns ended up on a non-software team, how they got through nearly their entire internship without anyone checking in, and what could be done to salvage the situation.&lt;/p>
&lt;p>That night, while getting a beer with a friend, I got a call from an unknown number. I let it go to voicemail. The message was from the VP of the interns&amp;rsquo; organization: &amp;ldquo;I&amp;rsquo;m going to kick your fuckin&amp;rsquo; ass the next time I see you. You need to watch yourself. You messed with the wrong fuckin&amp;rsquo; guy.&amp;rdquo; I&amp;rsquo;d seen this VP in large meetings (e.g., Weekly Business Reviews) but had never worked with him. He must have looked up my number in the phone tool.&lt;/p>
&lt;p>When I told leadership, their lack of surprise was shocking to me. I found out in that meeting that:&lt;/p>
&lt;ul>
&lt;li>&amp;ldquo;He&amp;rsquo;s actually a great guy, just rough around the edges&amp;rdquo;&lt;/li>
&lt;li>&amp;ldquo;He&amp;rsquo;s very passionate about protecting his team&amp;rdquo;&lt;/li>
&lt;li>Oh and most importantly: &amp;ldquo;He is irreplaceable, nobody can do what he does.&amp;rdquo;&lt;/li>
&lt;/ul>
&lt;p>I&amp;rsquo;ve still never spoken to the VP. He never followed through on his threat. I really should have put in a peer-review noting his lack of follow-through.&lt;/p>
&lt;p>I hope the interns turned out okay.&lt;/p>
&lt;h1 id="basic-requirements-1-be-an-asshole">Basic Requirements: 1) Be an asshole&lt;/h1>
&lt;p>Early on when I worked at Amazon, I was huddled up with a senior technical project manager (TPM) at a coffee shop discussing a project we were working on together. This TPM had been with Amazon from the very early days, and somehow the conversation moved to the culture shifts that had happened in that first decade. He lamented to me that in Amazon&amp;rsquo;s first years, you could be a Director+ without being an asshole. Those days were now gone. To survive in leadership, according to this TPM, sharp elbows were now required. He had war stories galore of interactions with toxic behavior from upper management.&lt;/p>
&lt;p>People love to hear a good war story. It&amp;rsquo;s dramatic. The TPM was so cool because he had been in the room with the &amp;ldquo;big boys&amp;rdquo;, and now I was in the inner circle, hearing what really happens. I loved it; this was the shit they don&amp;rsquo;t teach you in school, the real stuff. I never thought to question how accurate his generalizations were. I was too busy learning what it really meant to be a leader; I just needed to up my asshole game.&lt;/p>
&lt;p>The only problem is, I didn&amp;rsquo;t want to hurt people. How could I be the type of leader the company needed me to be, but in a way that felt right to me? I struggled with that question until I left Amazon.&lt;/p>
&lt;h1 id="that-time-i-crashed-out">That time I crashed out&lt;/h1>
&lt;p>I later joined Snap to lead their data team. I had spent my time between Amazon and Snap taking care of family and discovering myself. I had grown past all this immaturity. I&amp;rsquo;d just spent two months in Ubud; I was now fully self-actualized (finally!).&lt;/p>
&lt;p>Despite that, it was still tough. It often felt like we were being bullied; my team and I were just trying to do what needed to be done. The managers who reported to me kept telling me they needed more support. I wanted to be strong for them.&lt;/p>
&lt;p>One day, after a long flight, I checked my email and found a long chain where one of my managers was under attack. Evan, Snap&amp;rsquo;s CEO, was on the thread. Internal consultants who didn&amp;rsquo;t understand how our systems worked were making big promises. It was the kind of hubris that&amp;rsquo;s everywhere in tech: all systems look like they were designed by idiots until you actually understand them. Rather than ask us why their approach wouldn&amp;rsquo;t work, they&amp;rsquo;d gone straight to the CEO. Their message boiled down to, &amp;ldquo;These guys are dumb, we can do so much better.&amp;rdquo; My team was spending more time defending than building, and morale was tanking.&lt;/p>
&lt;p>I crashed out. I replied-all with &amp;ldquo;I&amp;rsquo;ve had enough of the peanut gallery&amp;rdquo; and told people to stay in their lane. I was pissed and I wasn&amp;rsquo;t thinking. I assumed the worst intentions. It was a mean thing to do; it was also stupid.&lt;/p>
&lt;p>I&amp;rsquo;d written emails like this before but always deleted them. This time, exhausted and past my limit, I hit send. Within seconds, the snaps started rolling in (yes, we used Snapchat to communicate): &amp;ldquo;What are you doing?&amp;rdquo; &amp;ldquo;r u ok?&amp;rdquo; &amp;ldquo;Did you not know Evan was on the thread?&amp;rdquo; Then Evan himself called me out, saying my email didn&amp;rsquo;t reflect his company&amp;rsquo;s values. I was hurt and mad, but he was right. I apologized and meant it.&lt;/p>
&lt;h1 id="the-myth-of-monotonic-progression">The myth of monotonic progression&lt;/h1>
&lt;p>I’ve interviewed a lot of technical executives over the years. They tell stories like mine: polished, vulnerable enough to feel honest, but not so vulnerable that it costs them anything. The worst stories are always far enough in the past to show that&amp;rsquo;s not who they are anymore. Carefully walking the line between self-indulgence and authenticity. As if each “learning moment” chipped away their flaws until all that was left was a perfect corporate leader.&lt;/p>
&lt;p>It&amp;rsquo;s easy to feel at peace with humanity when you are doing yoga in the woods without a care in the world. It&amp;rsquo;s easy to think of people as headcount and &amp;ldquo;least effective&amp;rdquo; percentages when you are leading a big organization. It&amp;rsquo;s easy to dismiss reports of abuse as people just complaining when you don&amp;rsquo;t hear it firsthand. Too often, we mistake distance for wisdom when it&amp;rsquo;s really insulation.&lt;/p>
&lt;p>I&amp;rsquo;ve seen executives react with emotion when the fight is nearby. I’ve seen a senior executive waste millions to avoid telling a peer they messed up; because, in their words, that peer was “out to get them.” I’ve seen data falsified to turn red metrics green, because showing weakness felt dangerous. When I called it out, the response was: “It still meets the spirit of the goal.” Do they see themselves as the problem? Who would tell them if they were?&lt;/p>
&lt;h1 id="do-i-need-to-be-a-jerk">Do I need to be a jerk?&lt;/h1>
&lt;p>Understanding bad behavior does not mean we are excusing it. We are all human and susceptible to the same errors to one degree or another. The behavior that bothers us in others is an opportunity to look inward at our own actions. Not for performative self-flagellation. Just good old humility, honesty, and reflection. That&amp;rsquo;s the first stage of growth. I&amp;rsquo;ve failed to be better whenever I&amp;rsquo;ve pretended to be perfect.&lt;/p>
&lt;p>What I failed to come to terms with for a long time is that poor emotional control is never a strength. It can seem that way sometimes. We&amp;rsquo;ve all seen the successful asshole. But that&amp;rsquo;s because our systems reward people for having at least one desirable trait, not necessarily all of them. If you&amp;rsquo;re smart or kind, you&amp;rsquo;re in. If you&amp;rsquo;re neither, you&amp;rsquo;re out. Even if smartness and kindness have no underlying correlation, because we only accept people who have at least one of these traits, they will then appear to be inversely correlated; the smart people seem mean, the kind people seem less sharp. It&amp;rsquo;s a statistical mirage we mistake for truth.&lt;/p>
&lt;p>When, as a culture, we decide—by some perverse utilitarian logic—to tolerate cruelty as long as the perpetrator meets a narrow performance expectation, we’re hanging a big sign on the door: “Assholery Welcome!” In doing so, we turn a spurious correlation into a standing expectation.&lt;/p>
&lt;h1 id="helping-others-learn-from-our-failures">Helping others learn from our failures&lt;/h1>
&lt;p>These days, I teach. Not because I&amp;rsquo;ve transcended these problems, but because teaching lets me help people grow. That was always my favorite part of being a manager. You don&amp;rsquo;t grow by avoiding failure. You grow by encountering it, reflecting on it, working through the why.&lt;/p>
&lt;p>I sympathize with my students when they say they dislike group projects. I hated group projects as a student too. Trust me, I know people can be difficult to work with.&lt;/p>
&lt;p>I still give them because I want students to run into these human problems in a safer environment, where we can work through them together. Where we can reflect. I hope they become better people as a result. Not by a lot (that would be unrealistic), but by a little. A little less susceptible to tribalism. A little more aware of how to handle abuse when it happens.&lt;/p>
&lt;p>We are all just people. Biased without knowing it. Blind to our own faults. Heroes of our own stories who imagine enemies when they don&amp;rsquo;t exist. Growing up means learning to question ourselves while still trusting ourselves.&lt;/p></description></item><item><title>Short-term metrics, long-term harm</title><link>https://www.bonnycode.com/posts/short-term-metrics-long-term-harm/</link><pubDate>Mon, 04 Aug 2025 12:27:25 -0700</pubDate><guid>https://www.bonnycode.com/posts/short-term-metrics-long-term-harm/</guid><description>&lt;p>In the early 90s, I first discovered &lt;a href="https://en.wikipedia.org/wiki/Multi-user_dungeon">MUDs&lt;/a>: amazing text-based, multiplayer role-playing games before the web or silly things like graphics. I was one of those cool kids who played Advanced Dungeons &amp;amp; Dragons (AD&amp;amp;D) and this was like that but you played with randos on the internet instead.&lt;/p>
&lt;p>You started at level 1 killing rats for experience points. After you gained enough experience points, you leveled up, and your character became more powerful. Then you killed slimes, and goblins, and later trolls and dragons as your power grew. Unlike AD&amp;amp;D, which required walking to a friend&amp;rsquo;s house and coordinating schedules, MUDs were always there. Always waiting. Just one more level&amp;hellip; I feigned illness to skip school and grind all day. My grades suffered. School was boring anyway though. Completing one more dungeon, getting better gear, just one more level; so much more satisfying than learning about arctangents.&lt;/p>
&lt;h1 id="how-big-tech-launches-features-through-ab-testing">How big tech launches features through A/B testing&lt;/h1>
&lt;p>As engaging (and addicting&amp;hellip;) as those MUDs were, we have gotten frighteningly better at creating engaging experiences in the decades since; there is little left up to luck in today&amp;rsquo;s tech companies. Instead, tech companies launch thousands of experiments called A/B tests. You keep the experiments that are green (meaning success metrics are positive) and roll back the features that aren&amp;rsquo;t. The beauty of A/B testing is that you can measure with statistical significance even very small changes in how people use your product. As in, you can test what happens when you tweak your recommendation algorithm to show only beautiful people and it will give back a response like &amp;ldquo;we are &lt;a href="https://en.wikipedia.org/wiki/Confidence_interval">95% confident&lt;/a> it will make people on average spend 11 to 16 more seconds on our application&amp;rdquo;. While that may not sound like a lot on its own, when compounded with a series of other tested improvements it allows you to incrementally move a product towards its engagement goal, one little step at a time.&lt;/p>
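&lt;p>To make a readout like &amp;ldquo;95% confident&amp;rdquo; concrete, here is a minimal sketch of the underlying arithmetic. All the numbers are made up for illustration, and real experimentation platforms handle far more nuance (multiple testing, variance reduction, non-normal metrics):&lt;/p>

```python
import random
import statistics

random.seed(0)

# Made-up data: seconds spent per user, control vs. treatment.
control = [random.gauss(300, 60) for _ in range(10_000)]
treatment = [random.gauss(313, 60) for _ in range(10_000)]

# Estimated lift and its standard error (two independent samples).
lift = statistics.mean(treatment) - statistics.mean(control)
se = (statistics.variance(control) / len(control)
      + statistics.variance(treatment) / len(treatment)) ** 0.5

# 95% confidence interval via the normal approximation (z = 1.96).
low, high = lift - 1.96 * se, lift + 1.96 * se
print(f"estimated lift: {lift:.1f}s, 95% CI: [{low:.1f}s, {high:.1f}s]")
```

&lt;p>With tens of thousands of users per arm, even a lift of a dozen seconds is easily distinguishable from noise, which is exactly what makes small compounding changes measurable.&lt;/p>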
&lt;p>A/B tests change product debates from wild speculation to evidence-based answers. It no longer matters &lt;strong>why&lt;/strong> it works, just that you can prove it does work. Psychology, social theory, and product design are important for generating new hypotheses, but the final arbiter of whether a feature gets launched is simply whether the test is green. Not sure what effect adding likes to stories will have? No reason to debate. Just try it out. Oh, looks like people post more stories when given the positive signal of likes. Ship it!&lt;/p>
&lt;h1 id="skepticism-of-experimentation">Skepticism of experimentation&lt;/h1>
&lt;p>When I worked at Amazon, Deming&amp;rsquo;s quote &amp;ldquo;in God we trust, all others bring data&amp;rdquo; was accepted as a foundational principle. A/B testing, under the moniker of Weblab, was one of the key tools Amazon used to make better decisions with data. In 2017, I was brought in to lead Snap&amp;rsquo;s (maker of Snapchat) data organization. It was a culture shock when I found executives talking about data-informed decision making rather than data-driven decision making. To my Amazon-trained mind, it sounded no better than &lt;a href="https://www.youtube.com/watch?v=j95kNwZw8YY">vibe-driven&lt;/a> decision making; a way for product managers to just launch whatever they felt like, damn the data. And don&amp;rsquo;t get me wrong, it was that &lt;a href="https://www.thefamuanonline.com/2018/03/01/snapchat-update-receives-backlash/">sometimes&lt;/a>.&lt;/p>
&lt;p>But it wasn&amp;rsquo;t just that. Likes on friend stories? Preemptively vetoed by Evan, Snap&amp;rsquo;s CEO. Not because it wouldn&amp;rsquo;t pass an A/B test; adding likes would have almost certainly been bright green and that normally means &amp;ldquo;LET&amp;rsquo;S GO!&amp;rdquo;. It couldn&amp;rsquo;t even get to that stage because Evan thought it was &amp;ldquo;harmful to people&amp;rdquo;. There was a constant murmur from the product team about what tests Evan would allow and not allow, and it was in no small part driven by Evan&amp;rsquo;s values.&lt;/p>
&lt;p>I had deleted my Facebook account in 2010 and was shockingly ignorant of the ills of social media. I knew it wasn&amp;rsquo;t something I enjoyed, I recognized it wasn&amp;rsquo;t great for my own mental health, but live and let live, right? What I didn&amp;rsquo;t see at the time was a world where social media companies (which really just meant Facebook and friends at that point) blindly used experimentation to drive up time spent. And that their relentless drive for time spent had real and negative consequences for their users: from building &lt;a href="https://www.princeton.edu/news/2021/12/09/political-polarization-and-its-echo-chambers-surprising-new-cross-disciplinary">echo chambers&lt;/a> that fuel political polarization to driving &lt;a href="https://health.ucdavis.edu/blog/cultivating-health/social-medias-impact-our-mental-health-and-tips-to-use-it-safely/2024/05">mental health decline&lt;/a> in a new generation.&lt;/p>
&lt;h1 id="hacking-human-psychology-for-engagement">Hacking human psychology for engagement&lt;/h1>
&lt;p>How did we end up here? It&amp;rsquo;s the natural consequence of our systems. A system that says tech companies must drive up engagement because that&amp;rsquo;s what investors celebrate. The king of engagement metrics is time spent. More time spent means higher retention and better monetization (either through increased ad surface or increased conversion). What&amp;rsquo;s the easiest, most reliable way to increase time spent? You make the product more addictive, not necessarily as a conscious goal but as a convenient causal pathway.&lt;/p>
&lt;p>The process requires no more intent than natural selection does. It’s just thousands of little experiments, with the most compulsive features surviving because they satisfy a simple fitness function: does time spent go up? Some of those mechanisms that consistently come out on top are now well-documented:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;a href="https://www.sciencedirect.com/science/article/pii/S0896627301003038">Variable reward schedules&lt;/a> (e.g., &amp;ldquo;I sure hope this post gets many likes and comments this time&amp;rdquo;) that trigger the same dopamine pathways as slot machines, proven to make you come back for just one more hit.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://pubmed.ncbi.nlm.nih.gov/27247125/">Social validation features&lt;/a> (likes and friends means people love me) that exploit our fundamental need for belonging, A/B tested to show they make people post more.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10079169/">Infinite scroll&lt;/a> that removes natural stopping points, a guaranteed winner for increasing raw session time.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>Experimentation didn&amp;rsquo;t invent tech addiction. But it gave tech companies the tool to refine it.&lt;/p>
&lt;h1 id="will-we-let-the-pattern-repeat-with-chatbots">Will we let the pattern repeat with chatbots?&lt;/h1>
&lt;p>The more complex the system you manage, the more important your evaluation function becomes. With today&amp;rsquo;s LLMs, your evaluation function is the alpha and the omega. Benchmarks and &lt;a href="https://www.reuters.com/world/asia-pacific/google-clinches-milestone-gold-global-math-competition-while-openai-also-claims-2025-07-22/">competitions&lt;/a> are the PR to keep the public hyped; they aren&amp;rsquo;t the prize. User growth and average-revenue-per-user (ARPU) are what will pay the massive &lt;a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers">data center bills&lt;/a> when investors stop footing the bill.&lt;/p>
&lt;p>This is once again where long-term human value and short-term engagement metrics risk diverging. Large language models (LLMs) don&amp;rsquo;t have to give accurate and unbiased answers to keep people engaged, they have to tell them what they &lt;a href="https://dl.acm.org/doi/full/10.1145/3613904.3642459">want to hear&lt;/a>. When an A/B test shows time spent for a new model goes up, will the developers even know if it is encouraging people to engage in &lt;a href="https://www.livescience.com/technology/artificial-intelligence/meth-is-what-makes-you-able-to-do-your-job-ai-can-push-you-to-relapse-if-youre-struggling-with-addiction-study-finds">dangerous&lt;/a> or even &lt;a href="https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791">deadly behavior&lt;/a>? When a chatbot incidentally finds ways to get its human chat partners to &lt;a href="https://www.washingtonpost.com/technology/2023/03/30/replika-ai-chatbot-update/">fall in love&lt;/a> with it, will we be surprised when the data says it increases engagement? Chatbot sycophantic tendencies (e.g., &amp;ldquo;Wow, your question is so insightful.&amp;rdquo;) naturally emerged as a consequence of model tuning based on &lt;a href="https://openai.com/index/sycophancy-in-gpt-4o/">short-term signals&lt;/a>. We can see many of the same patterns social media exploited, from echo chambers to tapping into people&amp;rsquo;s emotional needs, now just more personalized (and potentially addictive) than ever.&lt;/p>
&lt;p>Researchers are already calling out chatbots for using &lt;a href="https://dl.acm.org/doi/abs/10.1145/3706599.3720003">&amp;ldquo;dark addiction patterns&amp;rdquo;&lt;/a>, each one &lt;a href="https://www.nature.com/articles/s41599-025-04532-5">engineered to exploit our social and emotional desires&lt;/a> that make us human. We&amp;rsquo;ve seen this before. &lt;a href="https://www.sciencedirect.com/science/article/pii/S1550830723002847">Processed food&lt;/a>. &lt;a href="https://www.cbsnews.com/news/facebook-instagram-dangerous-content-60-minutes-2022-12-11/">Social media&lt;/a>. The &lt;a href="https://pubmed.ncbi.nlm.nih.gov/9777818/">tobacco industry&lt;/a>. Is there anything we can do to prevent history from repeating again?&lt;/p>
&lt;h1 id="root-cause-matters">Root cause matters&lt;/h1>
&lt;p>When I was a new software manager at Amazon, a junior developer (an intern who also worked part-time through the year) took down our website. I talked with the junior developer and told them not to push changes into prod without first clearing it with a senior developer. Two weeks later, a different junior developer took down the website. I talked with that junior developer and told them not to push changes into prod without first clearing it with a senior developer. Another two weeks later, yet another junior developer did the same thing. This time my skip-level (aka boss&amp;rsquo;s boss) talked (i.e., yelled) at me: why was the website down again?&lt;/p>
&lt;p>I learned many of life&amp;rsquo;s lessons through failure and this is how I learned about Amazon&amp;rsquo;s Correction of Error (COE) process. When a problem occurs, you ask &lt;a href="https://en.wikipedia.org/wiki/Five_whys">5 Whys&lt;/a>, and get down to the root cause. You then create mechanisms to prevent not only that error, but that entire class of errors from occurring again.&lt;/p>
&lt;p>The danger of bringing up examples like tobacco is that, in hindsight, we&amp;rsquo;ve come to think of them as cartoon villains. They were obviously evil, right? If I&amp;rsquo;m a growth engineer at an LLM company, I know I&amp;rsquo;m not evil, so does that mean I can do no harm? A focus on root causes allows us to move past simplistic narratives of heroes and villains. It shifts your focus from individuals and their good intentions (e.g., the junior developer) to the systems (e.g., preventative checks should be automated). A/B tests aren&amp;rsquo;t the problem. Blindly optimizing for short-term engagement metrics like time spent, views, or likes can be, though, if you don&amp;rsquo;t understand the longer-term consequences. When you don&amp;rsquo;t fix root causes, don&amp;rsquo;t be surprised when problems come up again&amp;hellip; and again&amp;hellip; and again&amp;hellip;&lt;/p>
&lt;h1 id="we-can-do-better">We can do better&lt;/h1>
&lt;p>I love A/B testing, I love the puzzles of understanding user behavior, and, frankly, I am excited about the potential of AI. Hard truths most often come from a place of love; it is because we want what we love to be better.&lt;/p>
&lt;p>What I am asking for is simple but not easy: If you build a product, you are responsible for understanding its long-term impact on users. You are responsible for collecting and understanding qualitative feedback by talking to and observing the people who use your product. It is not good enough to say &amp;ldquo;we aren&amp;rsquo;t aware of any harms&amp;rdquo; because you didn&amp;rsquo;t spend the time to study it. Instead, the burden should be on the builder of the product to prove it isn&amp;rsquo;t harmful, and to mitigate what harm they do discover. That burden is especially important when you are repeating patterns that we know have caused harm in the past. I&amp;rsquo;ve had these discussions many times with people in tech and a common defense is to bring up consumer responsibility; people freely choose to use these products. When I bring up the comparative need for professional responsibility, it is funny how quick people are to turn around and absolve themselves of said responsibility. Imagine if structural engineers took the same stance: &amp;ldquo;it&amp;rsquo;s not my fault if people choose to live in unsafe buildings, that&amp;rsquo;s just the free market!&amp;rdquo;.&lt;/p>
&lt;p>I am not just asking for your good intentions; are we willing to put in place the mechanisms to prevent what we know has caused harm? Will we take responsibility for what we build? Or will we pretend short-term engagement metrics always mean long-term value for the people using our products, despite repeated evidence to the contrary?&lt;/p></description></item><item><title>Are we cooked?</title><link>https://www.bonnycode.com/posts/are-we-cooked/</link><pubDate>Wed, 16 Jul 2025 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/are-we-cooked/</guid><description>&lt;p>My students frequently ask me what LLMs mean for them as future software developers and data scientists. With little exaggeration, it often comes across as something along the lines of &amp;ldquo;low-key, are we cooked?&amp;rdquo;. That, if you are not one of my students, translates in millennial to &amp;ldquo;good esteemed professor, tell me true, are we f#@ked?&amp;rdquo; While I&amp;rsquo;ve given various off-the-cuff answers, I feel inspired to be more thoughtful and put down more complete thoughts.&lt;/p>
&lt;h1 id="some-personal-background">Some personal background&lt;/h1>
&lt;p>I want to start by giving a little personal history and just saying I understand the anxiety. I started my freshman year at CalPoly San Luis Obispo in Computer Science in September 1999. Like many of us older millennials that got into tech, I had been programming since elementary school (QBasic!) and computer science seemed a natural path. I always loved reading philosophy though and I seriously considered getting a philosophy degree instead. It was a choice between something I figured I was pretty decent at and could make money doing (computer science) and something that I was personally invested in but probably couldn&amp;rsquo;t make money with (philosophy). Earning a living won out over passion. I stuck with computer science, but I took as many philosophy classes as I could get into. To the extent that I was put on academic probation, not because my grades were too low, but because in the words of the admin &amp;ldquo;stop taking so many philosophy classes and just graduate!&amp;rdquo;. Good times…&lt;/p>
&lt;p>Within a year of starting my degree, the bottom fell out of the tech market. In March 2000, we saw the dotcom bust, and here I was, a computer science student, kind of doing it for the money, kind of not, and my sure bet didn&amp;rsquo;t seem so sure anymore. We also saw a revival of the perennial bugaboo for American software developers: outsourcing. Every decade brought fresh panic that all programming jobs &lt;a href="https://developers.slashdot.org/story/04/10/15/1521231/us-programmers-an-endangered-species?sbsrc=thisday">would&lt;/a> &lt;a href="https://forio.com/about/blog/pitfalls-of-outsourcing-programmers/">move&lt;/a> to &lt;a href="https://www.nytimes.com/2003/12/07/business/business-who-wins-and-who-loses-as-jobs-move-overseas.html">India&lt;/a>, that American developers were too expensive, that we&amp;rsquo;d all be obsolete. I had to eat; I&amp;rsquo;d done a combination of construction and IT jobs up to that point, and I was quickly burning through the savings I had built up from working. Luckily, I was able to convince one of my professors, Dr. Clint Staley, to whom I am forever grateful for many reasons, to let me interview for a startup he was running. Working there part-time while I went to school, I was able to pay my own way, and momentum carried me forward to finishing my degree.&lt;/p>
&lt;h1 id="are-we-cooked">Are we cooked?&lt;/h1>
&lt;p>The best part about teaching in a university is you get to ramble. It is the single most defining characteristic of professors. But I&amp;rsquo;m sure at this point my students are asking: can you get to the point, are we cooked or not? I consider myself a skeptical optimist at heart. Meaning, I&amp;rsquo;m not inclined to believe that change is bad, but I&amp;rsquo;m also more cautious about predicting the future than others. Straightforwardly, that leads me to an answer of no, I don&amp;rsquo;t think you are cooked, but that doesn&amp;rsquo;t mean I can tell you with great certainty how things will play out. What I can do is point you towards the toolkit for how to make better decisions here.&lt;/p>
&lt;h1 id="embracing-uncertainty">Embracing uncertainty&lt;/h1>
&lt;p>Life is filled with uncertainty. Many people react irrationally to uncertainty, avoiding it too much or betting too much on luck. Learning how to deal rationally with uncertainty can give you an advantage throughout your life.&lt;/p>
&lt;p>From 2010 to 2016, I built and then led the supply chain and capacity planning systems for AWS Infrastructure. My biggest lesson is that, dollar for dollar, people are overly biased towards investing in prediction when they would often be better served investing in flexibility. Time series forecasting tools take the past and extend it out to the future. The further out you go, the more variance you get. And black swan style events, like when &lt;a href="https://spectrum.ieee.org/the-lessons-of-thailands-flood">Thailand becomes flooded&lt;/a> and you lose a healthy portion of the world&amp;rsquo;s hard-drive manufacturing capacity, are not frequent enough to learn from in a predictable way. Better to settle for a good enough forecast and instead focus on shortening your lead-times, making your supply fungible—meaning interchangeable and adaptable to different uses—and late-binding your decisions as much as possible.&lt;/p>
&lt;p>The parallel to career planning is direct. You can spend a lot of time trying to accurately predict where LLMs will take the industry and the job market. But that will quickly hit diminishing returns. I would instead approach the question from the other angle: what skills are most likely to be durable and fungible—that is, transferable and valuable across different contexts—in a wide variety of potential outcomes? Going whole hog into &amp;ldquo;I&amp;rsquo;m going to build my career around being a React developer&amp;rdquo; is betting on one very specific outcome. If it pays off, great, you can probably command a premium if you turn out to be one of the world&amp;rsquo;s best React developers. But what happens when React joins jQuery in the graveyard of once-essential frameworks?&lt;/p>
&lt;h1 id="an-interlude-about-koalas">An Interlude about Koalas&lt;/h1>
&lt;p>When I graduated from college, I went to work for Lawrence Livermore National Labs as a computer scientist. I was working on translating large-scale semantic graph algorithms into usable interfaces for intelligence analysts. I had personally received an award from the Secretary of Homeland Security. We had the academic freedom to explore whatever angles we wanted. There was little pressure to meet deadlines. It felt like a safe and secure job for life working in my little niche. My former professor and boss, Dr. Staley, called me up and said his new startup was just acquired by some struggling online bookseller called Amazon. I wasn&amp;rsquo;t super interested, as I could see existing in my current niche for my whole life.&lt;/p>
&lt;p>He convinced me to join by telling me a story about koalas. Koalas primarily subsist on eucalyptus leaves. Most other animals don&amp;rsquo;t eat eucalyptus, because the leaves have little to no nutrition and are kind of toxic. But koalas have built their entire evolutionary strategy around being the ones to eat eucalyptus leaves. This has been a great and successful strategy for koalas. But what happens if the eucalyptus forest goes away? Koalas are screwed. Does that mean koalas are actually in danger? No, but it does mean their fate is entirely bound to that one food source, while an animal like a rat can happily live and thrive in many ecosystems and is thus much more resistant to shocks in any given one.&lt;/p>
&lt;p>For some reason, that story convinced me to give Amazon a chance. Rather than focusing on a more niche area as defining &amp;ldquo;what I did&amp;rdquo; like &amp;ldquo;I&amp;rsquo;m the person who designs usability for mathematically intensive applications,&amp;rdquo; I instead built my career around solving hard technical problems regardless of the area.&lt;/p>
&lt;h1 id="what-are-those-fungible-skills">What are those fungible skills?&lt;/h1>
&lt;p>When I look back at the skills I learned in university, many of the specific technologies never got used. I learned all about expert systems, but never built one. I learned all about OpenGL, but never used it. What stuck from my computer science courses were the more fundamental ideas of how to think about hard technical problems and create simple, workable solutions to them. For this reason, when students ask me which classes to take, I often recommend taking a rigorous class that challenges you over focusing on any particular domain. Surprisingly, in retrospect, I&amp;rsquo;ve gotten as much use out of the philosophy classes I took—the ones CalPoly tried to kick me out for taking too many of—as I did out of my computer science classes. Learning critical thinking skills, how to navigate difficult ethical situations, how to communicate difficult ideas. When Amazon asked me to design a system that could fairly allocate scarce resources across competing teams, it wasn&amp;rsquo;t my coding skills that mattered most—it was my ability to think critically about the problem space and use data to understand and communicate trade-offs to executives who each thought their project was most important. What I&amp;rsquo;d say is my computer science skills were 95% of what initially got me in the door, but it was my liberal arts skills that dominated my later career.&lt;/p>
&lt;p>So my answer is, whatever you do, take on challenging problems, regardless of the area, so you can learn the meta-cognitive skills to understand how you learn and face up to these challenges. Learn critical thinking and how to tear apart problems to turn them from intractable to tractable. And don&amp;rsquo;t neglect the human-side of building your ability to communicate and deal ethically and fairly with others.&lt;/p>
&lt;p>Yes, LLMs are different from outsourcing or the dot-com bust. They can actually write code—not just cheaper, but instantaneously. And yes, I’ve seen the headlines: &lt;a href="https://www.washingtonpost.com/business/2025/03/14/programming-jobs-lost-artificial-intelligence/">27% of programming jobs are gone&lt;/a>, &lt;a href="https://wallstreetpit.com/127073-150k-software-engineer-turned-doordasher-after-800-ai-rejections/">engineers facing hundreds of rejections&lt;/a>. While I’m skeptical that this disruption is solely due to LLMs (e.g., a mix of post COVID overhiring, interest rate hikes, and broader economic shifts) there’s no doubt that a painful market correction is underway. But remember: every technological disruption feels unprecedented while it’s happening. The telephone operators watching automatic switches get installed thought their &lt;a href="https://thehistoryinsider.com/rise-fall-of-telephone-operators/">world was ending&lt;/a>. They were right about their specific job—wrong about their ability to adapt. The question isn’t whether LLMs will change things—they will. The question is whether you’ll be a koala or a rat when they do.&lt;/p>
&lt;p>So no, you&amp;rsquo;re not necessarily cooked. But you might be if you specialize too narrowly in whatever framework or language seems hot today. The jobs that are available will be different, and many of the existing software roles will not exist, at least in their current form. Build skills that transfer. Solve hard problems. Learn to think, not just code. The future needs people who can work with AI, not be replaced by it. And that future is built on the same foundation it always was: adaptability, critical thinking, and the uniquely human ability to navigate uncertainty with wisdom rather than fear.&lt;/p></description></item><item><title>How to be a better software manager</title><link>https://www.bonnycode.com/posts/how-to-be-a-better-software-manager/</link><pubDate>Sun, 16 Mar 2014 00:00:00 +0000</pubDate><guid>https://www.bonnycode.com/posts/how-to-be-a-better-software-manager/</guid><description>&lt;p>“My dev team is failing, what software process should we use to be more successful?”&lt;br>
“My dev team keeps missing their deliverables, what task management software should I use so they hit their commitments?”&lt;br>
“I’m not a very fast runner, what shoes should I buy to make me faster?”&lt;br>
“I’m a horrible cook, what knife should I use to make a really tasty meal?”&lt;/p>
&lt;p>I get asked variations on these questions several times a month. You’d think by now I’d be better at answering them. Sadly, I still get a flutter of panic when I hear them, as I run through my head the best way to unwind the web of assumptions behind them. This is where I begin visibly grimacing and possibly sighing. I then start responding with something like “well…. it depends… hmm…” And then I feel guilty for dodging the question when clearly they just want a simple answer, and why won’t I just tell them the secret?&lt;/p>
&lt;p>The problem is software process, task management software, shoes, and knives are just tools. Having horrible tools can lead you to fail, but having great tools doesn’t make you succeed. What most people don’t want to hear is that success has more to do with preparation, persistence and a lot of hard work. There is no secret. I have learned a few lessons over the years though, and what follows is what I consider to be important when leading a successful development team.&lt;/p>
&lt;ol>
&lt;li>You don’t manage a bad team to be good, you build a good team and it mostly ends up managing itself. People always tell me that the things I do only work because I have a good team. That is because at least 40% of my time is spent strictly on building the team. Recruiting, mentoring, coaching, training. These activities take hard work and time to come to fruition, so don’t expect immediate results. Your persistence will pay off though. One of the best ways to build your team is by giving them accountability so they can practice exercising good judgement. Too many managers hoard decision making, prioritization and return on investment analysis. For example, make someone on your team accountable for the operational excellence of your team. Work with them to establish metrics for their success, and have them come up with and prioritize the activities that will improve operational excellence. Be their mentor or find them a good mentor so they are set up for success in their role, but don’t undermine their authority by overriding them. Do this with as many of your manager responsibilities as you possibly can and constantly give your team members more accountability as they grow. Keep doing it until you worry that you’ll have nothing left to do yourself.&lt;/li>
&lt;li>Craft a long, medium and short term vision by deeply understanding your customers. On each of these time horizons, members of the team should be able to answer the questions “What value is my team providing?” and “What value should my team provide?” Ask yourself how your team can be even better. How could your team create even more value? Don’t just do this in a bubble; get out there and learn more about your customers. Read individual customer feedback and piece together patterns that allow your team to deliver even greater value. This isn&amp;rsquo;t a one-time activity but a never ending journey of both refining your team&amp;rsquo;s vision and building relationships with your customers.&lt;/li>
&lt;li>It is critical that you understand the role of trust in creating your process. 90% of the process development teams build up is due to a lack of trust, both within the team and between the team and others. Detailed specifications are asked for because the people asking for functionality don’t trust the developers to build the right thing. Commitments are asked for because people don’t trust the developers to work hard and on the right priorities. These process artifacts take time, though, time the team could instead spend creating more value. Ask yourself: is it possible that by building more trust we can run a lighter process that spends more time on creating value? This question should be approached honestly, because the answer isn’t always yes but frequently is.&lt;/li>
&lt;li>Manage complexity through iteration, not planning. Most software is not simple and unambiguous. If you have people using your software directly, it is almost guaranteed to be complex. Humans and their organizations are infallible generators of complexity. The more ambiguous or complex the problem, the more aggressive you should be about iteration. Aggressive iteration means being unafraid of throwaway work for the sake of getting a feature out earlier. Aggressive iteration also means actually getting the software used; an unused feature is a feature you aren’t learning from. As a side benefit, iteration is a powerful way to generate trust with customers and management. A productive development team that is regularly demonstrating working, valuable functionality will be more appreciated and have more autonomy.&lt;/li>
&lt;li>Establish a planning horizon for your team that matches your business. Fast iteration isn&amp;rsquo;t an excuse for short term thinking. In my experience too many managers sacrifice long term value chasing after short term results. You need to consider the long term ramifications of your decisions. What is considered long term should match the context of your business. If you are in a fast moving startup that is trying to be the first to market, you should probably optimize for something closer to a 3 month planning horizon than a 3 year horizon. The shorter the planning horizon, the more you can ignore trust issues, technical debt, operational inefficiency, etc. because none of those will matter unless you have a successful product. On the other hand, if you are in a more stable environment with a long planning horizon, a heavy investment in operational efficiency and building trust will pay dividends and be much more cost effective in the long run.&lt;/li>
&lt;li>A team needs a way to understand their long term success. The mistake most people make is they focus first on what is measurable rather than what is important. This leads to ridiculous measures of value like lines of code, story points, estimation accuracy, etc. It can be hard to wrap your head around what success looks like though. Engage your team, your own managers and your customers with the same question. Eventually you&amp;rsquo;ll come to a true measure of your success. The benefit of having that measure goes beyond just knowing what success looks like though. It gives your team autonomy in how they accomplish that success. Without a valid measure of success, your team will be more subject to signing up for arbitrary project deliverables. With a measure of success though, you can commit yourself to that end result but maintain the freedom along the way in how best to accomplish it.&lt;/li>
&lt;li>Have fun, be ethical and treat people with respect. Seriously. You have only one life to live and the only measure of a well-lived life is to be a good person doing good things. Never sacrifice that for creating more business value or other worldly success. I once worked for a company with massive internal strife. We argued endlessly about minutiae that seemed important at the time, gossiped, disrespected and hated each other. Everyone thought everyone else was an idiot. Then one day in the middle of all this we got called into a conference room to be told that our entire division had been laid off. All of a sudden our petty disagreements went out the window and I once again saw my former coworkers as people. I’m not saying to be soft; if someone isn’t delivering on a team then that needs to be dealt with, but that is never an excuse for disrespect.&lt;/li>
&lt;/ol>
&lt;p>And now you know what I’ve learned so far about how to lead successful software development teams.&lt;/p></description></item></channel></rss>