This page is the long-form version of my experience. I wanted it to read more like a story than a bullet list: what changed, what I owned, what worked, and what I learned.

Staff Software Engineer — Milwaukee Tool (METi Team)

Dec 2025 - Present | Milwaukee, WI (Remote)

Leading product content architecture and platform modernization across Milwaukee Tool's web ecosystem, with a focus on scalable product detail page (PIP) infrastructure, experimentation, and operational resilience.

Moving into the Staff role meant taking on a broader scope of ownership — not just shipping features but shaping the platform direction that other teams build on top of. A lot of the early energy went into the PIP program, which was a significant architectural rethink. The old approach was fragile and hard to scale. We replaced it with a standardized templated framework backed by automated commercial data pipelines and a cleaner 1:1 product-to-data model in Sitecore. For the first time, the team could actually measure the return on product content investments rather than just guessing.

Alongside that I was responsible for delivering the full category and sub-category page experience for the Interstellar product line across Q2 and Q3. That meant translating Figma specs into production components — hero banners, filterable product listings, FAQ modules, sub-category pages — and establishing a content architecture that could scale with the brand's cadence rather than lag behind it.

Security was a recurring thread throughout this period. We integrated Snyk into CI/CD pipelines with automated scanning and pipeline blocking so vulnerabilities couldn't silently ship into production. We upgraded the Azure WAF ruleset to DRS 2.1 and worked through CVE remediation ahead of a planned external penetration test. None of this is glamorous work but it compounds. A platform that is quietly secure is one the team can trust and move fast on.

On the infrastructure side I led the migration of MET.Search from AKS to Azure Container Apps. The key motivation was breaking the dependency on .NET Framework, which had locked the service to Windows-only hosting. Moving to ACA opened up Linux-based deployment, reduced operational overhead, and eliminated a whole class of maintenance burden the team had been carrying without necessarily naming it.

We also stood up an A/B testing platform during this period. The first experiment was straightforward — a 50/50 split on call-to-action copy — but the infrastructure behind it was the real deliverable. Once that foundation was in place, the team had a repeatable way to test product hypotheses against real behavior instead of debating assumptions in planning meetings.
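The assignment logic behind a split like that can be made deterministic by hashing a stable user id, so no per-user assignment storage is needed and the same visitor always sees the same variant. A minimal sketch — the function names, the FNV-1a hash choice, and the experiment salt are illustrative, not the platform's actual implementation:

```typescript
// Deterministically map a user into [0, 100) and compare against the
// variant's traffic allocation. Salting with the experiment name keeps
// buckets from correlating across different experiments.
function hashToPercent(input: string): number {
  // FNV-1a: a simple, stable non-cryptographic hash.
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return (h >>> 0) % 100;
}

function assignVariant(userId: string, experiment: string): "control" | "treatment" {
  // 50/50 split: the lower half of the hash space gets the treatment.
  return hashToPercent(`${experiment}:${userId}`) < 50 ? "treatment" : "control";
}
```

Because assignment is a pure function of the id, the experiment layer stays stateless — useful when the real deliverable is the infrastructure rather than any single test.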

PIPELINE was another area I pushed hard on. It had historically been a seasonal launch destination that went quiet between events. I drove the work to make it a year-round channel: trunk-based development, integration testing with snapshot support for MET.Search, SEO-aligned content infrastructure. The commercial impact extends across the full product release cycle now rather than spiking at a single event date.

Software Architect — Milwaukee Tool (Dotcom Team)

Dec 2023 - Nov 2025 | Milwaukee, WI (Remote)

Led the headless re-engineering of MT.com and drove major platform modernization, security, and data initiatives across the full Dotcom stack.

The biggest single thing I did in this role was lead the re-engineering of milwaukeetool.com from the ground up. The existing platform was a Sitecore XP monolith that had accumulated years of dead code, fragile deployments, and architectural debt that made every release feel risky. We replaced it with a Next.js headless frontend backed by Sitecore XM, styled with Tailwind CSS, powered by Coveo search, and wired up to a full analytics stack including Google Tag Manager, RudderStack, and HotJar. We delivered multilingual support across four locales, built a Storybook-based component library, and launched in a state that was genuinely more maintainable than what came before. That is not always the outcome of a re-platform.

One of the more consequential pieces of work that doesn't always make it into the summary was the authentication platform rebuild. The platform had Auth0 deeply woven into the Sitecore layer, which made identity an obstacle to almost every architectural decision. I extended the YARP reverse proxy to support Azure Active Directory as a dual identity provider alongside the existing Auth0 and Sitecore identities, enforced authorization globally on all incoming requests, and introduced Microsoft Entra claim-based internal user detection so the system could make routing decisions based on who was actually logged in. I also migrated session data protection keys to a shared Azure Key Vault, which eliminated the cookie decryption failures that had periodically knocked users out of authenticated sessions across services. The cookie payload itself went from roughly 10 KB to 377 bytes after the chunked cookie manager changes — not a headline metric, but the kind of thing that quietly improves session reliability at scale.
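The chunking idea itself is simple to sketch: a payload larger than the per-cookie budget gets split across numbered cookies and reassembled on read. A minimal illustration — the cookie names and the 4 KB budget are assumptions for the sketch, not the real service's scheme:

```typescript
// Split an oversized value into numbered cookies ("auth", "auth.1",
// "auth.2", ...) and reassemble them on read. Browsers cap individual
// cookies at roughly 4 KB, so the chunk size stays under that.
const CHUNK_SIZE = 4000; // illustrative per-cookie budget

function writeChunked(name: string, value: string): Map<string, string> {
  const jar = new Map<string, string>();
  if (value.length <= CHUNK_SIZE) {
    jar.set(name, value);
    return jar;
  }
  let i = 0;
  for (let pos = 0; pos < value.length; pos += CHUNK_SIZE, i++) {
    jar.set(i === 0 ? name : `${name}.${i}`, value.slice(pos, pos + CHUNK_SIZE));
  }
  return jar;
}

function readChunked(name: string, jar: Map<string, string>): string {
  let out = jar.get(name) ?? "";
  // Append numbered continuation cookies until one is missing.
  for (let i = 1; jar.has(`${name}.${i}`); i++) {
    out += jar.get(`${name}.${i}`)!;
  }
  return out;
}
```

The size reduction in the text came from shrinking what goes into the cookie in the first place; chunking is the safety net for whatever remains.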

Alongside the re-engineering, I designed and delivered a standalone containerized image offload service and established a shared infrastructure library used across all backend services. I used Pulumi initially and later standardized on Terraform for provisioning Azure Container Apps, regional Service Bus instances, Redis and MongoDB private DNS zones, Key Vault, Container Registry, and Front Door resources across dev, staging, and production. The shared library itself grew significantly over the tenure — from a Serilog logging layer to a .NET 8 library delivering a resilient HTTP client with retry logic, startup configuration validation, selective per-route logging controls, and a deployment traceability endpoint so you could see exactly which build was running in any environment at any time.

On the search and frontend modernization side, I restructured the frontend codebase out of a Turborepo monorepo into a single Next.js application, introduced TanStack React Query for typed, structured server-side data fetching, and wired in Statsig for environment-aware feature flagging that didn't require a redeploy to toggle behavior. Parallel to that, I shipped a versioned product listings API for the search service — adding pagination, score-based ordering, and category filters — integrated a Meilisearch sidecar for improved local development parity, and introduced a snapshot cache so the development environment wasn't dependent on live search index data. Getting to trunk-based release pipelines with integration test snapshots was a meaningful operational improvement for a team that had been living with branch-based release friction for years.
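The shape of a versioned listings query — category filter, score-based ordering, pagination — can be sketched roughly like this. The types and field names are illustrative, not the actual API contract:

```typescript
// A listings query: filter by category, order by relevance score
// descending, then slice out the requested page.
interface ProductListing {
  sku: string;
  category: string;
  score: number;
}

interface Page<T> {
  items: T[];
  page: number;
  pageSize: number;
  total: number; // total matches before pagination, for page controls
}

function listProducts(
  all: ProductListing[],
  opts: { category?: string; page: number; pageSize: number },
): Page<ProductListing> {
  const filtered = opts.category
    ? all.filter((p) => p.category === opts.category)
    : all;
  const ordered = [...filtered].sort((a, b) => b.score - a.score);
  const start = (opts.page - 1) * opts.pageSize;
  return {
    items: ordered.slice(start, start + opts.pageSize),
    page: opts.page,
    pageSize: opts.pageSize,
    total: filtered.length,
  };
}
```

Versioning the endpoint meant this contract could evolve without breaking existing frontend consumers mid-migration.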

Product data was a persistent structural problem. We had a model where products, families, and pages were all tangled together in Sitecore in ways that made PIM integration unreliable and search accuracy hard to improve. I led a two-phase product data modernization that established a clean 1:1 product-to-page model and properly separated content from presentation. I also wired content change event handlers into the Sitecore CMS so that product updates published by editors would propagate in real time to the search service, the frontend, and any other downstream consumer that needed fresh data without polling. That work directly unblocked the new product detail page (PIP) program and reduced the friction of every downstream integration that touched product data.
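The fan-out pattern can be sketched as a simple publisher: one publish event invokes a handler per downstream consumer, instead of each consumer polling for changes. In practice this ran through Sitecore event handlers and service infrastructure; the sketch below is a deliberately simplified shape with illustrative names:

```typescript
// When an editor publishes a product, every registered handler is
// notified — e.g. one handler re-indexes search, another invalidates
// the frontend cache. Consumers subscribe once and never poll.
type ProductChange = { productId: string; publishedAt: Date };
type ChangeHandler = (change: ProductChange) => void;

class ProductChangePublisher {
  private handlers: ChangeHandler[] = [];

  subscribe(handler: ChangeHandler): void {
    this.handlers.push(handler);
  }

  publish(change: ProductChange): void {
    for (const handler of this.handlers) handler(change);
  }
}
```

The value of the real version was the same as this toy one: freshness becomes push-based, so adding a new downstream consumer is a subscription rather than another polling loop.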

On the PIP side, I spearheaded the redesign for Milwaukee Tool's power tool pages. We built a templatized headless architecture with an A/B testing layer baked in from the start — responsive image galleries sourced from PIM, a sticky CTA component with regional pricing via Price Spider, side-by-side specification comparison, enhanced promotional content blocks, and FAQ accordions. The intent was to give the business a measurable baseline to iterate from rather than just shipping a new design and hoping it performed.

Security was a thread running through the whole tenure. I ran Snyk across multiple repositories, systematically worked down the vulnerability backlog, enforced CSRF protections, applied vendor security bulletins via configuration patch transforms, and addressed findings from a third-party Guidepoint penetration test. I also handled the CDP migration — moving tag management from Tealium to Google Tag Manager and implementing RudderStack with SHA-256 email hashing and OneTrust for CCPA compliance. These were not glamorous projects but they were the kind of work that reduces real organizational risk.

The last two years also involved a sustained production hotfix track that I owned alongside everything else. Post-launch cleanup, search infrastructure improvements, authentication layer tuning, AKS resource management, and a steady stream of critical fixes across the frontend, CMS, and search service whenever something needed immediate attention. I think that kind of sustained work often gets undervalued, but it is what keeps a platform from slowly degrading back to where it started.

Senior Software Engineer — Milwaukee Tool (Dotcom Team)

Aug 2022 - Nov 2023 | Milwaukee, WI (Remote)

Drove full-stack platform delivery, security hardening, and technical modernization across Milwaukee Tool's digital storefront while laying the groundwork for the headless architecture that followed.

This was the period where I started thinking less about individual features and more about what the platform needed to be healthy over time. There was real accumulation happening — technical debt, fragile pipelines, security gaps — and the team needed someone willing to work on the unglamorous stuff alongside the product work. I tried to be that person while still shipping things customers actually used.

On the product side I delivered end-to-end across several high-visibility initiatives. The system merchandiser gave shoppers a curated, system-centric way to browse M12 and M18 cordless ecosystems with configurable Sitecore components, chip-based filtering, and analytics instrumentation baked in. The comparison tool gave users a side-by-side specification view across product alternatives, supported multilingual audiences, and was wired to behavioral analytics so we could actually measure whether it influenced purchase intent. The Pipeline work transformed what had been a once-a-year launch destination into a continuously active product channel with filterable listings and proper tracking.

The identity work was quieter but had long-term consequences. I built it out as a standalone service called MET.Authentication — a YARP-based reverse proxy that pulled Auth0 completely out of the Sitecore layer. Beyond the proxy itself, I implemented CSRF protection via cookie-to-header token validation, moved session storage to a distributed Redis cache so sessions survived pod restarts, and persisted data protection keys in Azure Key Vault to eliminate the cookie decryption failures that had been knocking users out of authenticated sessions across services. I also built a custom multi-domain cookie manager that brought the auth cookie payload down from roughly 10 KB to 377 bytes — a small number that had real consequences for session reliability across multiple regional domains. That service became the foundation for the Accounts initiative that followed and the model for how identity infrastructure worked going forward.
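The cookie-to-header check is the classic double-submit pattern: the client reads a token from a cookie and echoes it back in a request header, which a cross-site attacker cannot do because they can force the cookie to be *sent* but cannot *read* it. A minimal sketch with illustrative names:

```typescript
// Double-submit CSRF validation: accept a state-changing request only
// when the token in the cookie matches the token the client copied
// into a custom header. Cookie/header names here are illustrative.
interface RequestLike {
  cookies: Record<string, string>;
  headers: Record<string, string>;
}

function isCsrfValid(req: RequestLike): boolean {
  const cookieToken = req.cookies["XSRF-TOKEN"];
  const headerToken = req.headers["x-xsrf-token"];
  // A forged cross-site request carries the cookie automatically but
  // cannot include a matching custom header, so it fails this check.
  return Boolean(cookieToken) && cookieToken === headerToken;
}
```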

In parallel I established MET.Kernel as a shared library consumed across multiple services. It started as a way to standardize logging — getting structured, consistent output out of every service so we could actually debug distributed issues — but grew into something more useful: a resilient HTTP client with retry policies, custom serialization converters for the types we passed across service boundaries, a naming convention resolver that kept .NET and JavaScript data contracts in sync, and a server-side Razor templating engine for string rendering. Having a versioned, shared foundation meant that improvements rippled across the platform without requiring each team to independently re-solve the same problems.
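The retry behavior in a shared client like that can be sketched as a small wrapper. The attempt count and backoff delays below are illustrative defaults, not the library's actual policy:

```typescript
// Retry a transient-failing async call with exponential backoff,
// up to a fixed attempt budget; rethrow the last error on exhaustion.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Centralizing this in a shared library is the point: every service gets the same retry semantics without each team re-implementing (and subtly diverging on) them.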

I also stood up MET.Search as a proper microservice. The search concerns had been living inside the Sitecore monolith, which made them hard to evolve and impossible to scale independently. I used FastEndpoints to give it a clean, minimal API surface, introduced product-type-specific data contracts and mapping logic so the API response matched exactly what the frontend needed, built Coveo autocomplete and query-suggestion proxy endpoints, and restructured the repository layer to be testable and maintainable. Separating search out gave the team real leverage — changes to how we indexed or served product data no longer required touching a system that ran the entire website.

Security was a structured program across this stretch, not a series of one-off fixes. We ran three phases: discovery and auditing to understand our actual exposure, remediation to close the highest-risk gaps, and a third pass driven by external penetration test findings. That meant HSTS enforcement, Nginx upgrades, CSRF protections across headless clients, Auth0 session timeout controls, rate limiting on resource-intensive endpoints, and clickjacking mitigations. The difference between doing this work reactively and proactively is significant. We chose proactive.

I also put a lot of energy into SEO and platform debt. I refactored the Product API to introduce in-memory caching, implemented hreflang and canonical URL handling, split sitemaps across US, Canada, and Mexico, and extended sitemap generation to cover product detail pages. That work directly improved crawl efficiency across a multi-region web presence. On the debt side, I helped drive over 160 work items across microservices migration, Node.js and Kubernetes upgrades, and the introduction of Robot Framework test automation — none of which are exciting individually, but collectively they made the platform something you could build on with confidence.
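Hreflang handling for a multi-region site boils down to each localized page emitting one alternate link per locale variant, so crawlers can map the US, Canada, and Mexico versions to a single logical page. A rough sketch — the locale map and domains are placeholders, not the real site's:

```typescript
// One alternate <link> per locale for a given path. Every locale's page
// emits the same full set, including a self-referencing entry.
const LOCALES: Record<string, string> = {
  "en-US": "https://example.com",
  "en-CA": "https://example.com/en-ca",
  "fr-CA": "https://example.com/fr-ca",
  "es-MX": "https://example.com/es-mx",
};

function hreflangLinks(path: string): string[] {
  return Object.entries(LOCALES).map(
    ([locale, origin]) =>
      `<link rel="alternate" hreflang="${locale}" href="${origin}${path}" />`,
  );
}
```

Splitting the sitemaps per region then gives each storefront a crawl budget of its own instead of one giant mixed index.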

Toward the end of this role I was doing research and proof-of-concept work for headless CMS adoption. That work informed the architecture decisions I made as soon as I moved into the Software Architect role.

Software Engineer II — Milwaukee Tool (Dotcom Team)

Apr 2022 - Jul 2022 | Milwaukee, WI (Remote)

Delivered the Compare Tool and Merchandiser from scratch, overhauled the build pipeline and products service, owned AKS deployment patterns, and completed a platform-wide video infrastructure migration ahead of a hard vendor deadline.

This role was short but it covered a lot of ground. Four months, and the work ranged from shipping a customer-facing product comparison experience to refactoring the products domain to overhauling the build pipeline to migrating every video player on the platform. Looking back it was probably more work than was reasonable to squeeze into a single quarter, but it moved.

The most visible thing I shipped was the Compare Tool — the end-to-end product comparison experience that let customers put Milwaukee Tool products side by side. I built the backend from scratch: Sitecore API endpoints, computed fields for specs, SKUs, features, and marketing categories, the templates and rendering definitions. Then the frontend in Svelte — a bottom sheet drawer, product chip components, mini comparison cards, and badge elements. The computed fields were designed specifically to support the filtering and ranking logic the frontend needed, so the whole thing felt coherent rather than bolted together. Getting from zero to production on that in a compressed window required staying focused and not wasting time.

Alongside the Compare Tool I was iterating on the Merchandiser system. That was more of a continuous improvement track — Flickity carousel integration, chip-based category filtering, caching the product results to avoid hammering the backend on every filter change, and a custom configurable dropdown field type in Sitecore that gave content authors the ability to configure system-specific behavior without developer involvement. The goal was to reduce the feedback loop between what marketing wanted to merchandise and what actually showed up on the page. Small abstraction, real operational value.

On the infrastructure and developer experience side, I refactored the products domain. The original product service was a class that had grown to handle everything and was effectively untestable. The replacement split responsibilities cleanly across a dedicated service layer, a data repository, and a listing service with proper facet models and multilingual support. Separately, I overhauled the Gulp build pipeline — migrating from Gulp 3 to Gulp 4 on Node.js 16 with a modular task architecture, environment-specific configs for local, CI, and three deployment targets, and Terser minification with regex pre-processing transforms that meaningfully improved production asset sizes. Neither of these were features anyone demoed. They were the kind of work that makes everything else faster and easier to reason about.

The other significant project was the video infrastructure migration. The platform had Brightcove integrations scattered across a large number of rendering surfaces, each implemented slightly differently. Before touching anything, I ran a full codebase audit to make sure nothing was missed — that kind of upfront coverage check matters on a migration like this because a single overlooked component means a broken experience after the legacy platform is gone with nothing left to fall back on. Once I had a complete picture, I worked through each surface systematically — banner views, promo views, modal components, 360-degree product views, testimonial layouts, dynamic column layouts, autoplay slides, carousel slides, video description cards, hero components, and the product video API layer — replacing each integration with a single shared Vimeo Video Player component. The migration landed ahead of the Brightcove end-of-life deadline with no disruption to end users.

I was also responsible for AKS deployment patterns and authored Helm charts for reproducible stage and production behavior, planned and executed a Sitecore infrastructure edition migration that reduced cloud VM capacity by eliminating unused platform overhead, and extracted a large-scale interactive product builder into a standalone React workload with its own independent release cycle. By the time I rolled into the Senior Engineer role in August, the platform was in meaningfully better shape than I found it.

Software Engineer — Milwaukee Tool (Dotcom Team)

Aug 2021 - Mar 2022 | Milwaukee, WI (Remote)

First engineering role on the Dotcom team — learning fast, contributing broadly, and building a foundation in platform work, Sitecore, and agile delivery.

I joined the Dotcom team in August 2021 coming from an analytics and data background. The stack was new to me — Sitecore, .NET MVC, Azure infrastructure — but the team was strong and the work was real from day one. I treated that gap as an advantage more than a liability and picked up tickets that had the most unknowns so I could close ground faster.

The work was more substantive than the "first role" framing implies, looking back. I drove a three-phase Vimeo video platform integration — refactoring the global video player architecture to support provider-agnostic rendering across hero banners, carousels, modals, and promo components — so editors could embed Vimeo content without duplicating component logic. This was the first pass at consolidation before the full migration I completed in the SE II role. I also rewrote a promotion winner selection service from scratch and achieved a roughly 15x throughput improvement by replacing the original's redundant enumeration with a purpose-built hash set structure, alongside stored procedure execution, improved async patterns in Azure Blob Storage, and structured logging throughout the workflow. Neither of those was a starter ticket.
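The core of that kind of speedup is replacing repeated linear scans with constant-time set membership: checking each candidate against a growing array is O(n) per check (O(n²) overall), while a hash set lookup is O(1). A simplified sketch of the pattern — the data shapes are illustrative, not the actual service's:

```typescript
// Select entrants who haven't already won, without rescanning the
// winner list for every candidate. One O(n) set build up front, then
// O(1) membership checks in the loop.
function pickNewWinners(entrants: string[], previousWinners: string[]): string[] {
  const seen = new Set(previousWinners);
  const winners: string[] = [];
  for (const entrant of entrants) {
    if (!seen.has(entrant)) {
      winners.push(entrant);
      seen.add(entrant); // also dedupes repeat entries in the input
    }
  }
  return winners;
}
```

On a small list the difference is invisible; on a large entrant pool it is the difference between seconds and minutes, which is where a ~15x throughput number comes from.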

On the platform infrastructure side, I migrated sitemap generation into the .NET codebase, extended it to serve a careers subdomain independently, added HTTP response compression, automatic post-generation publishing, and built the Azure Function agent that automated the whole pipeline. I also expanded the Sitecore ORM (Glass.Mapper) model coverage across eight feature layers — HR, Identity, Navigation, Documents, Catalogs, Contests, Redemptions, and Components — replacing raw Sitecore API calls with strongly-typed C# models that unblocked type-safe content rendering across the platform. Both of these were unglamorous but had real downstream reach.

I cleared a meaningful amount of technical debt during this stretch too. That meant removing deprecated third-party libraries — abcPDF, NVelocity, the legacy Azure Storage SDK, DynamicPlaceholders — replacing the platform's legacy static Sitecore context access with properly injected, testable equivalents, and wiring dependency injection into the Accounts, Components, Navigation, and Redemptions feature modules. I also maintained the Azure Pipelines CI/CD configurations — including the product builder's React compiler workflow and Nginx ingress configuration parity across INT, QA, and PROD environments — and refactored the Selenium UI test suite to support multi-environment execution in Azure DevOps.

What I carried out of this role was less about any specific ticket and more about learning how to operate in a complex codebase with real users, real pressure, and real consequences when something breaks. The technical velocity came. I was promoted to Software Engineer II in about six months.

Sales Analyst — Builders World Wholesale Distribution

Jan 2019 - Jul 2021 | Pewaukee, WI (Hybrid)

Supported a finance-led migration to Acumatica and built reporting automation that cut daily Excel work and made marketplace pricing safer and easier to manage.

This role gave me my foundation in data operations, pricing analysis, and business systems. I supported a large Acumatica migration and helped connect product and pricing data to marketplace workflows across Amazon, Walmart, eBay, and internal channels.

Most of my impact came from automation. I built daily reporting and export tooling that cut spreadsheet-heavy manual work, added pricing guardrails, and made downstream processes easier for finance and distribution teams to run. It was my first clear example that practical technical work can create immediate business value.
