<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Quiet Clairvoyance</title><link>https://alokjani.github.io/</link><description>Recent content on Quiet Clairvoyance</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Tue, 05 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://alokjani.github.io/index.xml" rel="self" type="application/rss+xml"/><item><title>The Hard Part Isn't the Code. It's Connections and Edge Cases.</title><link>https://alokjani.github.io/hard-part-isnt-code-its-connections-and-edge-cases/</link><pubDate>Tue, 05 May 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/hard-part-isnt-code-its-connections-and-edge-cases/</guid><description>AI is good at solving problems in isolation. Give it a well-scoped function, a clear specification, and a defined interface, and it will produce correct code more often than not.
Production systems are not isolated. They are messy networks of dependencies stitched together over time. The complexity does not live in the core logic of any single component. It lives at the seams — where systems meet, where assumptions differ, where small changes create large effects.</description></item><item><title>AI Can Write Code, But It Doesn't Understand Production</title><link>https://alokjani.github.io/ai-can-write-code-but-doesnt-understand-production/</link><pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/ai-can-write-code-but-doesnt-understand-production/</guid><description>AI is getting better at writing code. That is not the question anymore. The question is whether AI understands where that code actually runs.
Most generated code works in isolation. It solves the problem described in the prompt. It handles the happy path. It looks correct in a diff. Production systems do not work in isolation. They depend on layers of context that are rarely visible in prompts — legacy architecture, implicit dependencies, operational conventions, and compliance constraints.</description></item><item><title>Where Designers Actually Gain Leverage With AI</title><link>https://alokjani.github.io/where-designers-actually-gain-leverage-with-ai/</link><pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/where-designers-actually-gain-leverage-with-ai/</guid><description>Designers are coding with AI — but not in the way people expect. AI did not turn designers into engineers. It removed the gap between design intent and execution.
Designers are not moving down the stack. They are going deeper into their own layer — the interface layer where user experience is shaped. AI gives them the ability to express design decisions directly in code without needing an engineering translation layer.</description></item><item><title>Where PMs Actually Gain Leverage With AI</title><link>https://alokjani.github.io/where-pms-actually-gain-leverage-with-ai/</link><pubDate>Tue, 14 Apr 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/where-pms-actually-gain-leverage-with-ai/</guid><description>The headline reads &amp;ldquo;PMs are coding.&amp;rdquo; The reality is more nuanced. They are not replacing engineering. They are expanding their own capability to move from idea to validation faster.
AI is expanding who can build. It is not eliminating the need for those who build at scale. PMs will write more code — but in narrow, high-leverage zones aligned to their strengths: business intuition, customer context, and fast decision loops.</description></item><item><title>Building Features Rarely Builds Advantage</title><link>https://alokjani.github.io/building-features-rarely-builds-advantage/</link><pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/building-features-rarely-builds-advantage/</guid><description>Most companies are busy shipping features. Their competitors are busy building moats.
If your product roadmap is not creating leverage, it is creating debt. Shipping more features is the fastest way to kill a great product — not because features are bad, but because feature accumulation without strategic intent produces complexity without advantage.
Here is why more usually weakens your competitive position.
1. Features Decay Faster Than Advantage. A feature is a momentary advantage.</description></item><item><title>AI Didn't Remove Engineering Complexity, It Only Relocated It</title><link>https://alokjani.github.io/ai-didnt-remove-engineering-complexity-it-only-relocated-it/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/ai-didnt-remove-engineering-complexity-it-only-relocated-it/</guid><description>The first wave of AI made one thing obvious: writing code is getting easier. Boilerplate, scaffolding, routine implementations, test generation — tasks that consumed hours now take minutes. The productivity gains are real.
But shipping reliable software is still hard. The complexity did not disappear. It relocated.
The difficulty moved from syntax and boilerplate to everything around the code — the context it depends on, the systems it integrates with, the edge cases it quietly breaks, how it behaves in production, how it evolves over time.</description></item><item><title>What CTOs Must Redesign in the AI Era (Part 2/2)</title><link>https://alokjani.github.io/what-ctos-must-redesign-in-the-ai-era-part-2/</link><pubDate>Tue, 31 Mar 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/what-ctos-must-redesign-in-the-ai-era-part-2/</guid><description>AI will break your operating model. Ownership blurs. Workflows lose linearity. Accountability becomes harder to trace.
Part 1 covered the decision system — observability, cognitive infrastructure, human-AI boundaries, transformation paths, and learning velocity. Those fixes address how the organization thinks. But thinking differently is not enough. The structures that support execution — ownership, workflows, talent, economics, and risk — must also change.
If you do not redesign the how, AI will simply scale your dysfunction.</description></item><item><title>What CTOs Must Redesign in the AI Era (Part 1/2)</title><link>https://alokjani.github.io/what-ctos-must-redesign-in-the-ai-era-part-1/</link><pubDate>Fri, 27 Mar 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/what-ctos-must-redesign-in-the-ai-era-part-1/</guid><description>For the majority of organizations, AI strategy is just legacy at 10x speed. The same processes, the same decision flows, the same accountability structures — but with AI bolted on. The result is not transformation. It is amplification. And when you amplify a system that was designed for predictable inputs and predictable outputs, you amplify its flaws.
Post-AI organizations need something fundamentally different. They need to operate in ambiguity, probability, and autonomous evolution.</description></item><item><title>Why Geopolitics Now Matters to Engineering Leaders</title><link>https://alokjani.github.io/why-geopolitics-matters-engineering-leaders/</link><pubDate>Wed, 04 Mar 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/why-geopolitics-matters-engineering-leaders/</guid><description>Engineering used to ignore geopolitics. Architecture decisions were about latency, cost, and developer productivity. Regulatory compliance was a legal concern. Supply chains were a procurement issue.
In 2026, that world is gone. Geopolitics is core strategy. Every major architecture decision — where data lives, which cloud provider to use, which talent markets to hire from, which open-source dependencies to trust — is now shaped by sovereignty, sanctions, and regulatory divergence.</description></item><item><title>Goal Setting That Works</title><link>https://alokjani.github.io/goal-setting-that-works/</link><pubDate>Fri, 13 Feb 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/goal-setting-that-works/</guid><description>Ambition without structure is noise. Every team wants impact. Few can trace their output back to a clear goal.
The difference between organizations that execute and organizations that stay busy is not talent or resources. It is the discipline of connecting vision to measurable outcomes to concrete work. Objectives, Key Results, and Initiatives are not bureaucracy. They are the mechanism that turns strategy into motion.
Here is how to build a goal system that actually works.</description></item><item><title>Move Fast Without Losing Direction</title><link>https://alokjani.github.io/move-fast-without-losing-direction/</link><pubDate>Sat, 07 Feb 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/move-fast-without-losing-direction/</guid><description>Three signs of speed without direction are visible in every organization that has lost its way:
Activity is celebrated more than impact. Features ship, but churn stays high. Teams are busy, but the north star is blurry.
The common response is to slow down. That is usually the wrong answer. The organization does not need less speed — it needs more intentionality. Speed without direction is wasted energy. Speed with direction compounds.</description></item><item><title>Signals Your Digital Team Has Lost Its North Star</title><link>https://alokjani.github.io/signals-your-digital-team-has-lost-its-north-star/</link><pubDate>Thu, 05 Feb 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/signals-your-digital-team-has-lost-its-north-star/</guid><description>The most dangerous moment in a digital transformation is when velocity returns but direction doesn&amp;rsquo;t.
Teams that were stuck for months suddenly start shipping again. Features hit production. Backlogs shrink. Morale improves. Everyone high-fives. And then, six weeks later, the metrics haven&amp;rsquo;t moved. The customer feedback hasn&amp;rsquo;t changed. The strategy review reveals that all that output added up to very little impact.
I&amp;rsquo;ve seen this pattern enough times to recognize it early.</description></item><item><title>Why 'Move Fast' Fails Digital Companies at Scale</title><link>https://alokjani.github.io/why-move-fast-fails-digital-companies-at-scale/</link><pubDate>Mon, 02 Feb 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/why-move-fast-fails-digital-companies-at-scale/</guid><description>Speed is a startup&amp;rsquo;s early advantage. When you are small, moving fast lets you outmaneuver incumbents, learn faster than competitors, and ship before the window closes. Speed is oxygen.
At scale, the same speed becomes a liability. Not because the organization is slower — it often ships more than ever. But the costs of that speed compound in ways that are invisible in the early stages. Coordination overhead, deployment risk, decision noise, and quality debt accumulate beneath the surface.</description></item><item><title>Questions Leaders Must Ask Before Approving Platforms</title><link>https://alokjani.github.io/questions-leaders-must-ask-before-approving-platforms/</link><pubDate>Sat, 31 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/questions-leaders-must-ask-before-approving-platforms/</guid><description>Leaders do not need to design systems every day. They do not need to choose the framework, the database, or the deployment model. That is what the engineering team is for.
But leaders must protect optionality. That is their job. The platform decisions made today determine whether the organization can adapt tomorrow. And the questions that leaders ask before approving those decisions matter more than any technical detail they could evaluate.</description></item><item><title>Early Warnings You've Hard-Locked Your Platform's Future</title><link>https://alokjani.github.io/early-warnings-youve-hard-locked-your-platforms-future/</link><pubDate>Wed, 28 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/early-warnings-youve-hard-locked-your-platforms-future/</guid><description>You do not wake up with a brittle monolith overnight. No single decision creates a platform that cannot evolve. The lock builds gradually — through reasonable choices, accepted tradeoffs, and friction that gets normalized because there is no time to address it.
The danger is that the signals are quiet. They look like normal operational friction. A deployment that takes a little longer. A change that requires a little more coordination.</description></item><item><title>India's Digital Infrastructure: A Masterclass in National-Scale Strategy</title><link>https://alokjani.github.io/indias-digital-infrastructure-national-scale-strategy/</link><pubDate>Mon, 26 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/indias-digital-infrastructure-national-scale-strategy/</guid><description>Most governments digitize services. They take existing paper processes and put them online — portals, dashboards, digital forms. The result is faster versions of the same analog processes, not fundamentally new capabilities.
India did something different. Instead of digitizing services, it built digital infrastructure — reusable primitives that banks, hospitals, transport systems, and citizens all plug into. The result is not just faster government services. It is a transformed economy that leapfrogged to a $3 trillion digital economy in under a decade.</description></item><item><title>Architecture Decisions That Silently Kill Optionality</title><link>https://alokjani.github.io/architecture-decisions-that-silently-kill-optionality/</link><pubDate>Tue, 20 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/architecture-decisions-that-silently-kill-optionality/</guid><description>Most architecture decisions do not fail loudly. There is no outage, no incident, no rollback. The system works. Features ship. Metrics look fine.
The failure is quieter. It is the opportunity that the team cannot pursue because the architecture makes it too expensive. The integration that takes months instead of weeks. The strategic pivot that requires a rewrite instead of a configuration change. The competitor who moves faster because their architecture gives them more options.</description></item><item><title>What a Real, Compounding Software Moat Looks Like</title><link>https://alokjani.github.io/what-a-real-compounding-software-moat-looks-like/</link><pubDate>Sat, 17 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/what-a-real-compounding-software-moat-looks-like/</guid><description>The software moat is dead. Long live the compounding engine.
Most leaders treat a moat like a fortress: static, defensive, and expensive to maintain. You build it, you defend it, and you hope it holds long enough to matter. This worked in an era where competitive advantages lasted a decade. It does not work in an era where the average advantage lasts a quarter.
In 2026, real moats are not fortresses.</description></item><item><title>Signals Your Moat Won't Survive Scale</title><link>https://alokjani.github.io/signals-your-moat-wont-survive-scale/</link><pubDate>Thu, 15 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/signals-your-moat-wont-survive-scale/</guid><description>Every startup claims a moat. Few survive contact with customers, competitors, and complexity. The ones that do not are not always wrong about their advantage — they are often wrong about its durability.
The difference between a moat and a momentary advantage is scale. A moat gets stronger as the business grows. A momentary advantage gets weaker. The same pressures that growth creates — more customers, more competitors, more complexity — should reinforce a true moat.</description></item><item><title>Why Most Moats in Software Are Imaginary</title><link>https://alokjani.github.io/why-most-moats-in-software-are-imaginary/</link><pubDate>Mon, 12 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/why-most-moats-in-software-are-imaginary/</guid><description>The concept of a moat comes from investing. Warren Buffett popularized it — the idea that a great business has durable competitive advantages that protect it from competitors, the way a moat protects a castle.
In software, the term is used liberally. Network effects. Switching costs. First-mover advantage. Proprietary technology. Every startup pitch includes at least one. Every strategy document claims at least one. Most of them are imaginary.
Not because the advantages aren&amp;rsquo;t real in the moment.</description></item><item><title>A Simple Test to Separate Strategy From Planning</title><link>https://alokjani.github.io/simple-test-to-separate-strategy-from-planning/</link><pubDate>Sat, 10 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/simple-test-to-separate-strategy-from-planning/</guid><description>Most leadership teams believe they have a strategy. Few actually do. What they have is a plan — a detailed, well-intentioned, thoroughly reviewed plan that looks and feels like strategy but lacks the one quality that makes strategy valuable: the ability to guide decisions when circumstances change.
The difference matters. Planning organizes work. Strategy organizes thinking. A plan tells you what to do next. A strategy tells you how to decide what to do next — even when the plan no longer applies.</description></item><item><title>Early Signs Your Roadmap Is Hiding Strategic Indecision</title><link>https://alokjani.github.io/early-signs-your-roadmap-is-hiding-strategic-indecision/</link><pubDate>Thu, 08 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/early-signs-your-roadmap-is-hiding-strategic-indecision/</guid><description>If everything is important, nothing is strategic. Roadmaps do not lie — they expose the hard choices we dodged.
A roadmap that is crowded, noisy, and constantly in flux is not a planning problem. It is a strategy problem. The roadmap is carrying weight that the strategy was supposed to bear. Every item added without a clear strategic rationale, every priority that never gets deprioritized, every feature that exists because someone asked for it rather than because it serves a coherent direction — these are not planning failures.</description></item><item><title>Why Strategy Dies When It Becomes a Roadmap</title><link>https://alokjani.github.io/why-strategy-dies-when-it-becomes-a-roadmap/</link><pubDate>Mon, 05 Jan 2026 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/why-strategy-dies-when-it-becomes-a-roadmap/</guid><description>Most strategies don&amp;rsquo;t fail because they&amp;rsquo;re wrong. They fail because they slowly morph into a roadmap.
It happens gradually. A leadership team defines a set of strategic priorities. Someone asks &amp;ldquo;what does that mean for next quarter?&amp;rdquo; The priorities get translated into features. The features get timelines. The timelines get committed. And somewhere in that translation, the strategy dies. What remains is a list of things to build, sequenced and dated, with no connection to the choices that gave it meaning.</description></item><item><title>Leaders Are Learners</title><link>https://alokjani.github.io/leaders-are-learners/</link><pubDate>Mon, 15 Dec 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/leaders-are-learners/</guid><description>The higher you rise, the more dangerous it becomes to stop learning.
Titles don&amp;rsquo;t make leaders — learning does.
Every great leader I&amp;rsquo;ve worked with had one thing in common: an unreasonable commitment to growth.
Here&amp;rsquo;s what that looks like:
1. Curiosity Over Certainty
Ask more questions than you answer. Challenge assumptions, especially your own. Stay a student of the business, not just a guardian of the org chart. 2. Feedback as Fuel</description></item><item><title>AI-Driven Software Development</title><link>https://alokjani.github.io/ai-driven-software-development/</link><pubDate>Tue, 09 Dec 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/ai-driven-software-development/</guid><description>Most software development teams measure AI adoption by tool usage. But that&amp;rsquo;s the wrong metric.
The real goal? Fewer incidents, cleaner deployments, faster PR flow, and higher-value engineering time.
AI isn&amp;rsquo;t for speed alone. It&amp;rsquo;s about stability + leverage.
Here&amp;rsquo;s how to adopt AI-driven application development the right way:
1. Fix the PR Queue First
AI is useless if your bottleneck is human review.
AI-draft PRs → engineers refine, not rewrite. AI-assisted code search → faster context gathering. Auto-review bots → catch basic issues before humans. 2.</description></item><item><title>AI Adoption Anti-Patterns</title><link>https://alokjani.github.io/ai-adoption-anti-patterns/</link><pubDate>Sun, 07 Dec 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/ai-adoption-anti-patterns/</guid><description>Everyone&amp;rsquo;s hyped about &amp;ldquo;AI adoption.&amp;rdquo; Almost no one talks about the mistakes that quietly destroy engineering velocity.
Speed rises. Incidents rise faster. Ops teams burn out. And then bosses wonder what happened.
Here are anti-patterns I see most often and how to avoid them:
1. Treating AI as Tooling, Not Workflow
AI fails when teams sprinkle it on top of chaos.
No PR hygiene → AI multiplies review noise. AI without process → accelerates dysfunction. No ownership model → unclear who corrects AI mistakes. 2.</description></item><item><title>Fixed Cost, Fixed Time — When Predictability Wins</title><link>https://alokjani.github.io/fixed-cost-fixed-time-when-predictability-wins/</link><pubDate>Tue, 02 Dec 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/fixed-cost-fixed-time-when-predictability-wins/</guid><description>Most leaders love flexibility in their projects, until the board asks: &amp;ldquo;How long will it take and how much will it cost?&amp;rdquo;
Fixed Cost, Fixed Time works when certainty is not optional. Here&amp;rsquo;s when it becomes the right tool:
1. Predictability Over Drift
Fixed dates → fixed budgets. Clear scope → clear contract. Works best when requirements won&amp;rsquo;t move. 2. Discipline Over Chaos
No room for ambiguity. Forces upfront alignment. Everyone knows exactly what &amp;ldquo;done&amp;rdquo; means. 3.</description></item><item><title>Product &amp; Technology: Two Lenses, One Mission</title><link>https://alokjani.github.io/product-and-technology-two-lenses-one-mission/</link><pubDate>Fri, 28 Nov 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/product-and-technology-two-lenses-one-mission/</guid><description>Planning season is here. Product imagines what&amp;rsquo;s possible. Engineering guards what&amp;rsquo;s practical.
But when vision meets execution — that&amp;rsquo;s when real progress happens.
Here&amp;rsquo;s what that looks like in action:
1. Shared Truths &amp;gt; Separate Agendas
Start from the same metrics, not different dashboards.
Customer impact must be the common language. Quantify the Objectives-to-Initiatives plan. 2. Problems &amp;gt; Features
Product defines pain; tech defines pattern.
Build fewer things that solve deeper issues. Ship outcomes, not outputs. 3.</description></item><item><title>Resonance Is the Hidden Physics of Leadership</title><link>https://alokjani.github.io/resonance-is-the-hidden-physics-of-leadership/</link><pubDate>Fri, 28 Nov 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/resonance-is-the-hidden-physics-of-leadership/</guid><description>Great leaders ignite resonance — where ideas land, decisions stick, and progress flows smoothly.
Poor leaders breed dissonance — where momentum stalls, confusion reigns, and obstacles multiply.
Here&amp;rsquo;s the difference, and why it matters:
1. Resonance produces clarity; dissonance breeds noise
Clear decisions end second-guessing. Clear boundaries protect focus &amp;amp; execution. Clear roles prevent overlap — no turf battles. 2. Resonance amplifies energy; dissonance drains it
Clear intent — people know what to do, not guess. Coherent direction — strategy doesn&amp;rsquo;t contradict itself. Emotional steadiness — tone shapes mindset and performance. 3.</description></item><item><title>Building AI Agents That Matter</title><link>https://alokjani.github.io/building-ai-agents-that-matter/</link><pubDate>Thu, 20 Nov 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/building-ai-agents-that-matter/</guid><description>I recently completed the &amp;ldquo;Building AI Agents for the Enterprise&amp;rdquo; course led by Abhijith Neerkaje (Head of Data Science) and Ajay Shenoy (PhD, IISc) — one of the most rigorous, practical, and engineering-first AI courses out there.
For my capstone, I built TBawareGPT: a multilingual Q&amp;amp;A + triage agent for frontline tuberculosis community health workers. It&amp;rsquo;s designed to be medically safe, constraint-driven, and ready for real-world use.
Five things that stood out:</description></item><item><title>Pragmatism Is the Most Underrated Leadership Flex</title><link>https://alokjani.github.io/pragmatism-is-the-most-underrated-leadership-flex/</link><pubDate>Tue, 18 Nov 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/pragmatism-is-the-most-underrated-leadership-flex/</guid><description>Vision builds the hype. Pragmatism delivers the win.
The best leaders don&amp;rsquo;t wait for perfect plans — they turn half-baked realities into real outcomes.
Here&amp;rsquo;s the playbook for pragmatic leadership:
1. Direction Over Detail
80% clarity beats 100% delay. Anchor on principles, not perfection. Set guardrails, then give execution room to breathe. 2. Trade-offs Over Ideals
Momentum matters more than purity. Know when &amp;ldquo;good enough&amp;rdquo; actually is. Every &amp;ldquo;yes&amp;rdquo; costs something — be explicit about the trade. 3.</description></item><item><title>Every Company Talks About Using AI. Real Change Is a System Redesign.</title><link>https://alokjani.github.io/every-company-talks-about-using-ai-real-change-is-a-system-redesign/</link><pubDate>Thu, 30 Oct 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/every-company-talks-about-using-ai-real-change-is-a-system-redesign/</guid><description>Every company talks about &amp;ldquo;using AI&amp;rdquo; these days. But real change isn&amp;rsquo;t just slapping AI onto what you already do. It&amp;rsquo;s about re-architecting how decisions, actions, and accountability flow.
Here&amp;rsquo;s how that shift really happens:
1. From Data-Driven → Agent-Driven Operations
Data gives you insights. Agents actually get things done. Forget dashboards telling you what happened — you&amp;rsquo;ve got agents that do something about it. Suddenly, every insight can trigger action right away.</description></item><item><title>First Principles: Unlearn and Relearn How to Think</title><link>https://alokjani.github.io/first-principles-unlearn-and-relearn-how-to-think/</link><pubDate>Fri, 17 Oct 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/first-principles-unlearn-and-relearn-how-to-think/</guid><description>Most people reason by analogy. The best leaders seek truth.
They don&amp;rsquo;t ask, &amp;ldquo;What&amp;rsquo;s everyone else doing?&amp;rdquo; They ask, &amp;ldquo;What&amp;rsquo;s actually true, and why?&amp;rdquo;
Here&amp;rsquo;s how to build from first principles:
1. Deconstruct
Break down ideas, don&amp;rsquo;t borrow.
Reduce complexity until what&amp;rsquo;s left can&amp;rsquo;t be reduced further. Strip the problem to its fundamentals — no assumptions. Ask &amp;ldquo;why&amp;rdquo; five times until logic stops, not excuses. 2. Rebuild
Create from scratch, don&amp;rsquo;t copy.</description></item><item><title>Fully Autonomous AI Needs Oversight — Or Even Smart Agents Drift</title><link>https://alokjani.github.io/fully-autonomous-ai-sounds-futuristic-without-oversight-agents-drift/</link><pubDate>Wed, 15 Oct 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/fully-autonomous-ai-sounds-futuristic-without-oversight-agents-drift/</guid><description>Fully autonomous AI sounds futuristic and elegant. But without oversight, even the smartest agents drift.
Human-in-the-loop (HITL) design isn&amp;rsquo;t a constraint. It&amp;rsquo;s the control system that keeps AI aligned with intent.
Here&amp;rsquo;s how to balance autonomy with accountability for Agentic AI:
1. Governance as freedom with fences
Define agentic operating boundaries &amp;amp; embed explainability traceback. Use policy-driven access, versioning, and audit trails. 2. Human decisions are part of the circuit</description></item><item><title>Empowerment Is Not Permission</title><link>https://alokjani.github.io/empowerment-is-not-permission/</link><pubDate>Mon, 13 Oct 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/empowerment-is-not-permission/</guid><description>Empowerment is not permission.
It&amp;rsquo;s not &amp;ldquo;go ahead, do this.&amp;rdquo; It&amp;rsquo;s the system that ensures people don&amp;rsquo;t need to ask in the first place.
Too many leaders confuse empowerment with delegation. Delegation is handing tasks. Empowerment is handing trust + authority + context.
Here&amp;rsquo;s what real empowerment looks like:
1. Empowerment = Autonomy &amp;gt; Approval
Decisions happen at the edge, not escalated up the chain. Teams move faster because they own outcomes, not just actions. 2.</description></item><item><title>Recognition</title><link>https://alokjani.github.io/recognition/</link><pubDate>Wed, 08 Oct 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/recognition/</guid><description>Recognition isn&amp;rsquo;t applause. It&amp;rsquo;s oxygen.
People don&amp;rsquo;t burn out from hard work; they burn out when no one notices.
Here&amp;rsquo;s how I think about recognition that really matters:
1. Specific &amp;gt; Generic &amp;gt; Empty Praise
&amp;ldquo;Amazing job cutting build time by 40%&amp;rdquo; hits way harder than &amp;ldquo;Good work.&amp;rdquo; When you name what went well, you show others what great looks like. Throwaway praise fades fast; empty praise erodes trust. 2. Timely &amp;gt; Delayed &amp;gt; Forgotten</description></item><item><title>Retrospectives</title><link>https://alokjani.github.io/retrospectives/</link><pubDate>Thu, 02 Oct 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/retrospectives/</guid><description>Retrospectives are wasted in most teams.
They get treated like a checkbox exercise: write stickies, vote, move on. But retros were never meant to be meetings — they&amp;rsquo;re meant to be mirrors. Run well, they can transform a team into a learning machine.
Here&amp;rsquo;s the uncomfortable truth about how the best teams run retrospectives:
1. Reflection
Spot patterns — not just incidents. Look back without blame — growth over guilt. Capture context — why it happened, not just what happened. 2.</description></item><item><title>Refactoring Isn't Optional — It's Insurance</title><link>https://alokjani.github.io/refactoring-isnt-optional-its-insurance/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/refactoring-isnt-optional-its-insurance/</guid><description>Refactoring isn&amp;rsquo;t optional — it&amp;rsquo;s insurance.
Code rots. Systems drift. Without cleanup, speed today becomes drag tomorrow.
Here&amp;rsquo;s why you need to refactor now, not later:
1. Health &amp;gt; Velocity
Clear debt before scaling — don&amp;rsquo;t pile features on fragile foundations. Shipping fast on messy code is like sprinting with a broken ankle. Check your bug rate trend; set aside time to fix. 2. Small &amp;gt; Big
Massive rewrites fail; continuous refactors succeed. Touch a file, clean it — don&amp;rsquo;t wait for &amp;ldquo;refactor quarter&amp;rdquo;. Sprinkle a % of commits with small refactors. 3.</description></item><item><title>Regional AI Models Are the Next Frontier</title><link>https://alokjani.github.io/regional-ai-models-are-the-next-frontier/</link><pubDate>Fri, 26 Sep 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/regional-ai-models-are-the-next-frontier/</guid><description>Leadership lesson: In AI, relevance beats raw scale. Focused and contextual often outperform bigger and generic.
Large Language Models carry inherent biases — biases that persist after pre-training, fine-tuning, and human feedback. They&amp;rsquo;ve mastered English, yet struggle with queries in local languages, mainly because there isn&amp;rsquo;t enough online data in those languages.
Training these models costs hundreds of millions. True inclusivity means closing this language and data gap.
That&amp;rsquo;s why regional AI initiatives will continue rising into the future — not as a fallback, but as a pillar of national strategy.</description></item><item><title>New-Age Skills for the Next Generation</title><link>https://alokjani.github.io/new-age-skills-for-the-next-generation/</link><pubDate>Tue, 23 Sep 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/new-age-skills-for-the-next-generation/</guid><description>New-Age Skills for the Next Generation
When I lived in North America, I saw firsthand how economies position themselves for the future. It taught me that competitiveness isn&amp;rsquo;t just about talent — it&amp;rsquo;s about building systems that let new skills thrive.
The market new grads enter today isn&amp;rsquo;t the one their parents knew. Energy is getting rewired. Robots are leaving labs. Cars drive themselves. AI is mainstream.
To stay competitive, talent and entire economies must master a new toolkit.</description></item><item><title>Skill Stacking</title><link>https://alokjani.github.io/skill-stacking/</link><pubDate>Thu, 18 Sep 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/skill-stacking/</guid><description>Skill stacking beats skill hoarding.
You don&amp;rsquo;t need to be world class at one thing. You need to be strong enough at the right combination. That&amp;rsquo;s where breakthroughs happen — at the intersections. And that&amp;rsquo;s how you become truly unique in the information age.
Range &amp;gt; Specialization
One great skill makes you useful. Two great skills make you rare. Three complementary skills make you unstoppable. Edges &amp;gt; Mastery
You don&amp;rsquo;t need top 1% — aim for top 20% across skills. Stack adjacent abilities for exponential leverage. The edge is in connection, not perfection. Versatility &amp;gt; Rigidity</description></item><item><title>Honoring Vishwakarma Day: The Spirit of Creation</title><link>https://alokjani.github.io/honoring-vishwakarma-day-the-spirit-of-creation/</link><pubDate>Tue, 16 Sep 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/honoring-vishwakarma-day-the-spirit-of-creation/</guid><description>Vishwakarma, the divine architect, reminds us: everything we build should carry purpose, precision, and grace.
For engineers, designers, and makers, this day celebrates the craft behind progress.
Key Insights for Modern Builders, Inspired by Vishwakarma:
1. Respect the craft
Master your tools before you scale ideas. Care for details; quality is built, not wished. Treat code, steel, or circuits as an art form. 2. Build for utility and beauty
Solve real problems, not just hard ones. Keep the end user in every blueprint. Balance function with form — elegance endures. 3.</description></item><item><title>The Protégé</title><link>https://alokjani.github.io/the-protege/</link><pubDate>Thu, 11 Sep 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/the-protege/</guid><description>Every leader remembers the moment someone took a bet on them.
That&amp;rsquo;s what creates a protégé — not a title, but a responsibility passed forward.
It&amp;rsquo;s not about cloning yourself. It&amp;rsquo;s about multiplying leadership.
When nurturing a protégé, here&amp;rsquo;s what matters most:
1. Spot potential, not polish
Look for hunger, not just credentials Notice curiosity more than confidence See adaptability as the real signal of growth 2. Create stretch, not stress</description></item><item><title>Frontline General</title><link>https://alokjani.github.io/frontline-general/</link><pubDate>Tue, 09 Sep 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/frontline-general/</guid><description>Leadership isn&amp;rsquo;t sitting in the war room. It&amp;rsquo;s standing on the frontlines with your people, in the fire.
The best leaders are &amp;ldquo;Frontline Generals.&amp;rdquo; Not detached overseers, but those who shoulder risk, signal direction, and fight alongside.
Here are 5 truths about being a Frontline Leader:
1. Presence &amp;gt; Position
Show up where the pressure is — join war-rooms, not just read summaries. Be visible when it&amp;rsquo;s hardest — late nights, escalations, outages. Listen to the team firsthand, not through layers. 2.</description></item><item><title>The Mentorship</title><link>https://alokjani.github.io/the-mentorship/</link><pubDate>Fri, 05 Sep 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/the-mentorship/</guid><description>Mentorship isn&amp;rsquo;t advice on demand. It&amp;rsquo;s an investment that compounds over time.
Too many confuse it with career coaching or casual networking. The real impact of mentorship is not in the answers given, but the doors opened and the belief transferred.
Here&amp;rsquo;s what real mentorship looks like:
1. Mentorship = Guidance &amp;gt; Answers
You learn to navigate, not just follow directions.
Offer perspective, not instructions. Share lessons, not prescriptions. Build clarity, not dependence. 2.</description></item><item><title>SLMs vs LLMs: The Case for Small Language Models in Production</title><link>https://alokjani.github.io/slms-vs-llms/</link><pubDate>Tue, 02 Sep 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/slms-vs-llms/</guid><description>Every engineering leader I know is looking at Large Language Models and asking the same question: &amp;ldquo;How do we use this?&amp;rdquo;
It&amp;rsquo;s the wrong question. The right question is: &amp;ldquo;Which model class actually solves our problem?&amp;rdquo;
The assumption that bigger is better has led more teams to overengineer AI products than any other mistake. They reach for a trillion-parameter model to classify customer support tickets, then spend months fighting latency, cost, and hallucination problems that a Small Language Model would have avoided entirely.</description></item><item><title>Product-Led Growth</title><link>https://alokjani.github.io/product-led-growth/</link><pubDate>Fri, 29 Aug 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/product-led-growth/</guid><description>SaaS is one of the highest-leverage, most profitable business models ever built. The surprising part? You don&amp;rsquo;t need an MBA to understand SaaS growth. If you can read the product, you can read the business.
Here&amp;rsquo;s the blueprint that powers SaaS profitability:
1. Activation → Get users to their first win fast
Activation %; Time-to-Value; Onboarding %
Cut friction — fewer clicks, instant access. Bake value early — pre-filled data, guided setup. Contextual onboarding — help only when it matters. 2.</description></item><item><title>Accountability Isn't a Value — It's a System</title><link>https://alokjani.github.io/accountability-isnt-a-value-its-a-system/</link><pubDate>Tue, 26 Aug 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/accountability-isnt-a-value-its-a-system/</guid><description>Accountability isn&amp;rsquo;t a value — it&amp;rsquo;s a system.
In high-output engineering teams, ownership is visible. Standards are explicit, not implicit. Intervention is designed, not improvised.
Accountability is a system of systems. Here&amp;rsquo;s how to drive it at scale:
1. Reset the Ground Truth
Engineering missions drift without intentional resets.
Acknowledge what&amp;rsquo;s falling short — delivery gaps, missed SLAs, silent ownership handoffs. Recommit to 3-5 clear engineering standards — ownership, clarity, response time. Open an opt-out window — better to part ways than drift along. 2.</description></item><item><title>Change or Risk Irrelevance</title><link>https://alokjani.github.io/change-or-risk-irrelevance/</link><pubDate>Thu, 21 Aug 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/change-or-risk-irrelevance/</guid><description>&amp;ldquo;If you dislike change, you&amp;rsquo;ll dislike irrelevance even more.&amp;rdquo;
Someone said it decades ago, but the lesson hits harder today.
IBM, AT&amp;amp;T, GE — companies that outlasted eras — didn&amp;rsquo;t just bet on tech. They built systems to metabolize change, boldly stepping into the future, be it markets, consumer habits, or technology cycles. And the companies that vanish? They confuse tenure with progress.
Here is the math that every engineering leader should internalize:</description></item><item><title>Tribal Knowledge Is Hidden Tech Debt</title><link>https://alokjani.github.io/tribal-knowledge-is-hidden-tech-debt/</link><pubDate>Tue, 19 Aug 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/tribal-knowledge-is-hidden-tech-debt/</guid><description>It plays out too often: a system fails. Nobody knows why.
The answer? &amp;ldquo;Ask the one person who built it.&amp;rdquo;
Tribal knowledge is tech debt in disguise. What isn&amp;rsquo;t documented becomes fragile. Left unattended, tech debt becomes a liability.
Here are 5 ways to pay it down systematically:
1. Refactor Knowledge into Systems
Replace shortcuts with published runbooks, ADRs, and blueprints. Treat documentation like code: versioned, reviewed, enforced. Make knowledge searchable &amp;amp; failure patterns observable. 2.</description></item><item><title>Augmented AI Is the New Iron Man Suit</title><link>https://alokjani.github.io/augmented-ai-is-the-new-iron-man-suit/</link><pubDate>Tue, 12 Aug 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/augmented-ai-is-the-new-iron-man-suit/</guid><description>For software developers, Augmented AI is the new Iron Man suit — but it&amp;rsquo;s all about how you pilot it.
In capable hands, it accelerates productivity; in less skilled hands, it speeds up mistakes.
The difference? Not tools. Not tokens. It&amp;rsquo;s a change in perspective — thinking in value streams instead of products.
Here&amp;rsquo;s what we&amp;rsquo;re learning about building software with Augmented AI:
1. Map Outcomes, Not Just Interfaces
Clarity = your GPS.</description></item><item><title>Quiet Ideas May Fade in Loud Rooms</title><link>https://alokjani.github.io/quiet-ideas-may-fade-in-loud-rooms/</link><pubDate>Tue, 29 Jul 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/quiet-ideas-may-fade-in-loud-rooms/</guid><description>Great meetings bring out great ideas. Yet sometimes quiet thinkers get overlooked.
Quiet ideas may fade in loud rooms. But they don&amp;rsquo;t have to lose impact.
Here&amp;rsquo;s my playbook to keep airtime from being hijacked and cut through the noise:
1. Spot the Blind Spot
Impact comes from insight, not repetition.
Find the missing data, ignored detail, or unseen angle. Focus on what no one else is saying. Add signal, not commentary. 2.</description></item><item><title>AI Can Move Fast. But It Can't Lead.</title><link>https://alokjani.github.io/ai-can-move-fast-but-it-cant-lead/</link><pubDate>Thu, 24 Jul 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/ai-can-move-fast-but-it-cant-lead/</guid><description>When AI Agents accidentally delete your production database, it&amp;rsquo;s not just a software bug.
It&amp;rsquo;s a failure — in judgment, safeguards, and governance.
No amount of compute can replace trust.
The real edge in the Age of AI? Doing what AI can&amp;rsquo;t.
Here are 5 Human Skills AI won&amp;rsquo;t replace — and why they matter more than ever:
1. Discernment
AI offers options. You choose the path.
Know right vs wrong. Know when not to act. Prioritize amid ambiguity. 2.</description></item><item><title>Dependency Playbook to Avoid Roadmap Hell</title><link>https://alokjani.github.io/dependency-playbook-to-avoid-roadmap-hell/</link><pubDate>Tue, 22 Jul 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/dependency-playbook-to-avoid-roadmap-hell/</guid><description>Dependencies won&amp;rsquo;t kill your Department&amp;rsquo;s Roadmap — ignoring them will.
Here&amp;rsquo;s how I manage them at scale:
1/ Surface Early, Prioritize Hard
Run a pre-mortem to surface hidden blockers. Catalog who, what &amp;amp; when — don&amp;rsquo;t assume DRIs + risks. Rank ruthlessly — delivery plan by business impact and visibility. 2/ Engage Before It&amp;rsquo;s a Blocker
Embed, pair, and co-design — not just coordinate. Don&amp;rsquo;t wait for dependencies to fail — pre-align early. Engineer-to-engineer syncs beat passive doc handoffs. 3/ Milestone Like a Pro</description></item><item><title>Everyone Says They Want Feedback. Most Orgs Get It Wrong.</title><link>https://alokjani.github.io/everyone-says-they-want-feedback-most-orgs-get-it-wrong/</link><pubDate>Sat, 19 Jul 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/everyone-says-they-want-feedback-most-orgs-get-it-wrong/</guid><description>Everyone says they want feedback. Most orgs get it wrong — and it costs trust, clarity, and talent.
I&amp;rsquo;ve learned the hard way. Here&amp;rsquo;s what they don&amp;rsquo;t teach you in leadership training:
1/ Prepare like it matters
Set your intention — know the outcome you&amp;rsquo;re driving. Create safety — right moment, right tone, private space. Be specific — focus on behavior, not just vibes. 2/ One size doesn&amp;rsquo;t fit all</description></item><item><title>Your Org Design Is Slowing You Down</title><link>https://alokjani.github.io/your-org-design-is-slowing-you-down/</link><pubDate>Tue, 15 Jul 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/your-org-design-is-slowing-you-down/</guid><description>When delivery slows, it&amp;rsquo;s easy to chalk it up to a tech issue or a team problem.
But more often than not, it&amp;rsquo;s the structure that&amp;rsquo;s holding you back.
Here&amp;rsquo;s how to rethink your org design for speed and clarity:
1. Give teams ownership of outcomes
Vague mandates dilute accountability. Every team should have a clear charter. Every team should have measurable impact. 2. Pair PM and Tech leads
Outcome + Execution = alignment. Balance product vision with execution excellence. Divide &amp;ldquo;what&amp;rdquo; and &amp;ldquo;how&amp;rdquo; to drive clarity and balance. 3.</description></item><item><title>K8s + GenAI Agentic Workloads Are Just Another Deployment</title><link>https://alokjani.github.io/k8s-genai-agentic-workloads-are-just-another-deployment/</link><pubDate>Mon, 14 Jul 2025 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/k8s-genai-agentic-workloads-are-just-another-deployment/</guid><description>GenAI isn&amp;rsquo;t magic — it&amp;rsquo;s a workload. Inference, training, serving — Kubernetes eats them for breakfast.
But Agentic Apps on K8s bring new scheduling and utilization challenges. Low GPU utilization? Wrong operators? Misaligned resource requests? That&amp;rsquo;s wasted money.
Here&amp;rsquo;s how I see it coming together:
1. Treat GenAI as a First-Class Workload
Training, inference, serving — schedule them like any microservice. Separate control-plane and data-plane for AI pipelines. Apply standard K8s patterns — don&amp;rsquo;t reinvent orchestration. 2.</description></item><item><title>Algorithm of Happiness: A Code Review</title><link>https://alokjani.github.io/algorithm-of-happiness-code-review/</link><pubDate>Wed, 29 Dec 2021 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/algorithm-of-happiness-code-review/</guid><description>Somewhere on the internet, Danny Ma published a simple algorithm for a happy life. It was written as pseudocode — a loop that runs while alive, with functions for family, health, gratitude, compassion, and the relentless pursuit of goals.
Then Sergey Zelvenskiy did a code review.
The review is brilliant not because it&amp;rsquo;s harsh, but because it applies the same rigor we use for software to something much harder: how we think about our own lives.</description></item><item><title>Building a Web Application Security Testing Program</title><link>https://alokjani.github.io/building-web-application-security-testing-program/</link><pubDate>Thu, 11 Mar 2021 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/building-web-application-security-testing-program/</guid><description>Most engineering teams treat security testing as a phase. You build the feature, you test it for functionality, and then — if there&amp;rsquo;s time or if compliance requires it — you run some security scans.
This approach produces a predictable outcome: vulnerabilities are discovered late, fixed under pressure, and rediscovered in the next release because nobody addressed the root cause.
I&amp;rsquo;ve spent enough time around application security to know that the tool list is never the bottleneck.</description></item><item><title>Ask Exactly What You Want</title><link>https://alokjani.github.io/ask-exactly-what-you-want/</link><pubDate>Thu, 18 Feb 2021 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/ask-exactly-what-you-want/</guid><description>A tweet by Lewis Howes has stayed with me for years:
&amp;ldquo;The mind is so powerful that when you ask for what you want, the world starts aligning to show you more of that thing. You must take the actions and decision to follow through on your desires instead of falling into past habits or sabotaging yourself. You must be consistent in your pursuit of positive qualities otherwise it&amp;rsquo;s easy to slip back into a place of suffering.</description></item><item><title>What a Desert Survival Exercise Taught Me About Engineering Leadership</title><link>https://alokjani.github.io/desert-survival-engineering-leadership/</link><pubDate>Tue, 02 Feb 2021 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/desert-survival-engineering-leadership/</guid><description>It is approximately 10:00 a.m. in mid July, and you have just crash landed in the Sonora Desert in the southwestern United States. The light twin engine plane, containing the bodies of the pilot and the co-pilot, has completely burned. Only the air frame remains. None of the rest of you have been injured.
The pilot was unable to notify anyone of your position before the crash. However, ground sightings taken before you crashed indicate that you are 100 km off the course that was filed in your VFR Flight Plan.</description></item><item><title>GitOps and Machine Sets: Operating Infrastructure at Scale</title><link>https://alokjani.github.io/gitops-machine-sets/</link><pubDate>Sun, 31 Jan 2021 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/gitops-machine-sets/</guid><description>Before GitOps, infrastructure management looked like this: someone SSH&amp;rsquo;d into a server, ran a few commands, and hoped nothing broke. If it did, they fixed it manually and moved on. The next person to touch that server had no idea what changed or why.
That approach works at startup scale. It fails catastrophically at enterprise scale.
GitOps emerged as a response to this problem. It applies the same discipline that engineers use for application code — version control, code review, automated testing — to infrastructure operations.</description></item><item><title>Seven Sacred Laws of the Ojibway: Leadership Lessons From an Ancient Tradition</title><link>https://alokjani.github.io/seven-sacred-laws-ojibway-leadership/</link><pubDate>Sun, 03 Jan 2021 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/seven-sacred-laws-ojibway-leadership/</guid><description>All Life Is Sacred And All Creation Related. What We Do Effects The Whole Universe. So Let Us Walk In Balance With Mother Earth And All Her Peoples. — Smiling Bear
Ojibway educator and spiritual leader Edward Benton-Banai published The Mishomis Book: The Voice of the Ojibway in 1988, containing traditional teachings of the Anishinabe people. Among them are the Seven Sacred Laws — the teachings of the seven grandfathers — which represent the principles governing all aspects of daily living.</description></item><item><title>About</title><link>https://alokjani.github.io/about/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/about/</guid><description>I&amp;rsquo;m Alok Jani — Director of Engineering at Deutsche Telekom.
I&amp;rsquo;ve spent over 15 years building products, leading engineering teams, and learning what actually works when you&amp;rsquo;re responsible for delivery, strategy, and people at scale.
This blog is where I turn messy leadership lessons into something durable. Every post starts as a thought I needed to get out of my head — something I learned the hard way, a pattern I&amp;rsquo;ve seen repeat across teams, or a framework that saved me from a bad decision.</description></item><item><title>Search</title><link>https://alokjani.github.io/search/index.json</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://alokjani.github.io/search/index.json</guid><description/></item></channel></rss>