Let me describe a project I have encountered more than once. You join a Dynamics 365 engagement mid-project, or you are asked to troubleshoot a production issue, and after a few hours of digging you realize the business logic is scattered across five different places.
There is a business rule locking a field on the form. There is a web resource also trying to change that same field. There is a classic workflow running in the background setting a value on record save. Somewhere there is also a Power Automate flow doing something related. And there is a plugin as well, but nobody is sure if it is still active.
This is tech sprawl, and it is one of the most common and most costly problems in enterprise D365 CRM projects. Debugging takes a full day instead of an hour. New developers on the project spend weeks just mapping out what runs when and why. Every change carries risk because you cannot be sure what you might break.
This post is about how to avoid that. I want to walk through what a good D365 CRM architecture actually looks like, why mixing low-code and pro-code tools without a clear strategy causes real problems, and why a code-first, ALM-driven approach with plugins and web resources as the primary building blocks is the right choice for enterprise projects.
The Core Principle: One Technology Per Concern
Before going into specific tools, the principle that simplifies everything is this: use one technology per concern.
- Server-side logic: Plugins
- UI logic on forms: Web resources (TypeScript over JavaScript)
- External integrations: Plugins, Azure Functions
- Human-in-the-loop workflows: Power Automate
- Approvals, notifications to external systems: Power Automate
- Azure integrations: Plugins, Service Bus
That is it. When you add a second tool to the same concern, you create ambiguity. When a field behaves unexpectedly in production, you want exactly one place to look. If both a business rule and a web resource can affect the same field, you now have two places to look. Add a workflow and you have three.
The number of tools you use is not a measure of flexibility. It is a measure of complexity. In enterprise software, complexity is a liability.
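The mapping is strict enough to write down as a literal lookup table. Here is a throwaway sketch (the concern names and `toolFor` are illustrative, not part of any SDK — the one-to-one shape is the point):

```typescript
// One technology per concern, as a literal table. If a concern ever needs a
// second entry, that is the signal to stop and rethink — not to add a tool.
const toolForConcern = {
  "server-side logic": "plugin",
  "form UI behavior": "TypeScript web resource",
  "human approval or notification": "Power Automate",
  "external or Azure integration": "plugin + Azure Function / Service Bus",
} as const;

type Concern = keyof typeof toolForConcern;

// Exactly one answer per question — no ambiguity, no second place to look.
function toolFor(concern: Concern): string {
  return toolForConcern[concern];
}
```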
What Happens When You Mix Business Rules, Web Resources, and Workflows
Business rules, web resources, and classic workflows are all legitimate tools. The problem is using them together without clear ownership boundaries, on the same fields, triggered by the same events.
Here is a real scenario. A field called new_approvalstatus is managed by:
- A business rule that locks the field when it is set to “Approved”
- A web resource that unlocks and clears the field in a specific edge case
- A classic workflow that sets it to “Pending” after a related record changes
Now a user reports that the approval status is sometimes blank after saving. Which tool caused it? You have to check three different places with three different debugging approaches. Business rules are tested in the form editor with no logging. Web resources are debugged in browser dev tools with source maps if you are lucky. Classic workflows are investigated in the system job history with minimal detail.
Each technology has a different execution context, different timing, and a different failure mode. Business rules run on form load and on save. Web resources run on form events. Classic workflows run asynchronously in the background. They do not know about each other. They do not coordinate. They just all run, and when they conflict, you get subtle data integrity problems that are very hard to reproduce.
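To make the failure mode concrete, here is an illustrative sketch — not Dataverse API, just two of the three writers from the scenario above racing over one field (the business rule only locks the UI, so it is omitted):

```typescript
// Two independent owners of one field, each unaware of the other.
// The order they happen to run in decides the final value — which is
// exactly the nondeterminism described above.
type Field = { value: string | null };
type Handler = (field: Field) => void;

const webResource: Handler = (field) => { field.value = null; };   // clears in an edge case
const workflow: Handler = (field) => { field.value = "Pending"; }; // fires asynchronously

function run(handlers: Handler[]): string | null {
  const field: Field = { value: "Approved" };
  handlers.forEach((h) => h(field));
  return field.value;
}

// run([webResource, workflow]) ends at "Pending", but
// run([workflow, webResource]) ends at null — the blank field from the bug report.
```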
The fix is not to write better documentation about which tool does what. The fix is to pick one tool for each concern and consistently use only that one.
Why Low-Code Is Risky Without Clear Ownership
Low-code tools are designed to let non-developers build business logic quickly. That is genuinely valuable in the right context. The risk is not the tools themselves. The risk is using them in an environment where governance, change management, and long-term maintainability matter.
Here is what low-code tools cost you in an enterprise D365 project:
Business rules are configured inside the form editor in the Power Apps maker portal. There is no file on disk, no commit in a Git repository, no pull request, no code review. When you export a solution, business rules are included as XML inside the solution zip file, but that representation is not human-readable. You cannot look at a diff in Azure DevOps and understand what a business rule change actually does. You cannot unit test a business rule. When one breaks in production, you find out from a support ticket.
Classic workflows have the same problem. They live in the platform, they are exported as XML, they have a nondeterministic timing model, and they have almost no debugging surface. If a classic workflow fails silently on a record, you have system jobs to check. If the system job queue is large or the job was deleted, you may have nothing to go on.
Power Automate flows are somewhat better because they have a run history and they support Application Insights integration. But the flow definition is stored as JSON in the platform. Reading a complex flow definition in a code review is difficult. Reviewing the logic in a pull request in Azure DevOps is not something most reviewers can do effectively. Flow ownership sits in the platform, and changes made directly in the flow editor bypass any ALM process you have in place.
None of this means low-code is wrong in general. It means that for enterprise D365 projects where formal change management, source control traceability, and enterprise compliance requirements exist, low-code tools without strict guardrails create risk.
Web Resources as the Single UI Technology
Once you commit to using TypeScript web resources as the only UI-layer technology in your project, several things become much simpler.
Your form logic lives in .ts files in a Git repository. When a developer makes a change, they submit a pull request. Reviewers see exactly what changed, in a format they can read and reason about. The CI pipeline compiles, lints, and optionally runs unit tests before anything reaches the environment. When something breaks in production, the developer opens browser dev tools, sets a breakpoint in the source map, and debugs it the same way they would debug any TypeScript application.
There is no question of “is this a business rule or a web resource?” because business rules do not exist in this architecture. There is no question of “which event handler runs first?” because there is only one set of event handlers.
Here is a simple example of the kind of form logic that belongs in a web resource, not a business rule:
```typescript
// formLogic.ts
// Locks the status reason field when the case is resolved.
// Called on form load and on status change.
export function onStatusChange(context: Xrm.Events.EventContext): void {
  const formContext = context.getFormContext();
  const status = formContext.getAttribute("statuscode")?.getValue();
  const isResolved = status === 5; // 5 = Resolved option value
  formContext.getControl("statuscode")?.setDisabled(isResolved);
}
```
This is a trivial example, but the point is that the logic is in a file. It has a name, a comment, a clear condition, and it lives in version control. The equivalent business rule lives in a dialog box in the maker portal, with no file, no comment, and no history.
Plugins as the Single Server-Side Technology
Plugins are the right tool for server-side business logic in Dynamics 365 projects for three reasons: they execute inside the platform transaction, they are fully testable, and they are version-controlled as code.
Transactional execution means that if a plugin throws an exception, the entire operation rolls back. The record is not saved in a partial state. This is the correct behavior for business rules that have data integrity implications. A Power Automate flow that runs after a record save cannot roll back the record. If the flow fails, the record is already saved, and you now have an inconsistency to resolve.
Testability means you can write unit tests against your plugin logic using frameworks like FakeXrmEasy. Before a change reaches the development environment, you have automated confirmation that the logic behaves correctly. Nothing equivalent exists for business rules or classic workflows.
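The same argument applies to the UI layer: once form logic lives in TypeScript functions, it can be unit tested with simple stand-ins for the form context. A minimal sketch — `FakeAttribute`, `FakeControl`, and `applyStatusLock` are illustrative stubs written for this example, not part of the Xrm SDK:

```typescript
// Minimal stand-ins for the pieces of the form context the handler touches,
// so the locking rule can run and be tested outside the browser.
interface FakeAttribute { getValue(): number | null; }
interface FakeControl { disabled: boolean; setDisabled(value: boolean): void; }
interface FakeFormContext {
  getAttribute(name: string): FakeAttribute | undefined;
  getControl(name: string): FakeControl | undefined;
}

// The same locking rule as the web resource example, factored to take the
// form context as a parameter so the logic is testable in isolation.
function applyStatusLock(formContext: FakeFormContext): void {
  const status = formContext.getAttribute("statuscode")?.getValue();
  const isResolved = status === 5; // 5 = Resolved option value
  formContext.getControl("statuscode")?.setDisabled(isResolved);
}

// A tiny hand-rolled test double — no mocking framework required.
function makeContext(status: number): FakeFormContext {
  const control: FakeControl = {
    disabled: false,
    setDisabled(value: boolean) { control.disabled = value; },
  };
  return {
    getAttribute: () => ({ getValue: () => status }),
    getControl: () => control,
  };
}
```

With a couple of assertions against `makeContext(5)` and `makeContext(1)`, the rule is verified before it ever reaches a form.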
Version control means every line of plugin code has a commit, an author, a message, and a history. Every change goes through a pull request. Your team lead reviews the logic before it ships. You can bisect a regression in minutes because you have a Git history to work with.
Here is the pattern for a well-structured plugin that uses tracing and throws a meaningful exception when business rules are violated:
```csharp
using Microsoft.Xrm.Sdk;
using System;

public class OpportunityPreValidatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        tracingService.Trace("OpportunityPreValidatePlugin: started");

        if (!context.InputParameters.Contains("Target") ||
            !(context.InputParameters["Target"] is Entity target))
            return;

        // Validate that estimated revenue is positive before create or update
        if (target.Contains("estimatedvalue"))
        {
            var estimatedValue = target.GetAttributeValue<Money>("estimatedvalue")?.Value ?? 0;
            tracingService.Trace($"Estimated value: {estimatedValue}");

            if (estimatedValue <= 0)
            {
                throw new InvalidPluginExecutionException("Estimated revenue must be greater than zero.");
            }
        }

        tracingService.Trace("OpportunityPreValidatePlugin: completed");
    }
}
```
The ITracingService entries appear in the plugin trace log when the plugin fails, and they can be streamed to Application Insights if you have that configured. When this plugin fails in production, you know exactly where it failed, what value triggered it, and what happened step by step. Compare that to trying to understand why a classic workflow failed.
If You Have Plugins, Do You Really Need Power Automate?
This is the question that comes up most often in architecture conversations on D365 projects, and the honest answer is: less often than you might think.
If your server-side logic already lives in plugins, Power Automate adds a second execution model, a second debugging approach, and a second source of truth for behavior. Here is a direct comparison across the dimensions that matter most:
Performance and reliability: Plugins run synchronously inside the Dataverse transaction. They are as fast as your code makes them. Power Automate flows that trigger on record changes run asynchronously outside the transaction, and their retry behavior can leave records in an inconsistent state when a flow fails partway through modifying data after the triggering operation has already committed.
Source control and change management: Plugins are C# files in a Git repository. Every change is a commit with an author and a date. Power Automate flow definitions are JSON stored in the platform. You can export them into a solution, but the JSON is not readable in a code review. Changes made directly in the flow designer bypass your pull request process entirely.
Tracing and observability: Plugins have ITracingService for in-context diagnostic messages that appear when the plugin fails, plus full Application Insights integration if you configure it. Power Automate has a flow run history UI and supports Application Insights as well, but viewing the logs requires navigating the admin center, and the run history has a retention limit.
Readability and maintainability: C# plugin code reads clearly. Any developer on your team can read it, understand the logic, and modify it confidently. Flow canvas logic is visual, which helps during initial authoring, but becomes hard to audit at scale. A flow with thirty actions is not easy to review in a pull request.
When Power Automate is the right tool: That said, there are genuine use cases where Power Automate is the better choice. Human-in-the-loop approval processes that integrate with Teams or Outlook are where Power Automate thrives. Notifications that need to reach people through email, Teams adaptive cards, or Outlook actionable messages fit naturally into Power Automate because those connectors exist for exactly that purpose. Scenarios built by citizen developers who do not have access to a plugin deployment pipeline also benefit from Power Automate. For external system integrations and Azure integrations, prefer calling out to an Azure Function or publishing to a Service Bus topic from a plugin. That keeps the integration logic in code, versioned, and testable, rather than buried in a flow canvas that nobody can review in a pull request.
The guidance is not “never use Power Automate”. It is “when you already have a plugin infrastructure, do not reach for Power Automate for logic that a plugin handles better”.
ALM-First: If It Is Not in Git, It Does Not Exist
Enterprise compliance requirements, regulatory audits, and even basic incident response all depend on one thing: being able to answer the question “what changed, when, and who approved it?”
For any artifact stored only in the Dataverse platform with no representation in version control, you cannot answer that question reliably. Business rules, classic workflows, and Power Automate flows edited directly in the designer all fall into this category by default.
Plugins and web resources do not have this problem. They live in files. Files live in Git. Git has history, blame, and diffs. Azure DevOps has pull requests, reviewer approvals, and build logs. You can look at any production behavior and trace it back to a commit, a pull request, a reviewer, and a deployment date.
A practical ALM setup for a D365 project looks like this:
```yaml
# azure-pipelines.yml — simplified example
trigger:
  branches:
    include:
      - main

stages:
  - stage: Build
    jobs:
      - job: BuildPlugins
        steps:
          - task: DotNetCoreCLI@2
            inputs:
              command: build
              projects: 'src/Plugins/**/*.csproj'
          - task: DotNetCoreCLI@2
            inputs:
              command: test
              projects: 'src/Plugins.Tests/**/*.csproj'
      - job: BuildWebResources
        steps:
          - script: npm ci
            workingDirectory: src/WebResources
          - script: npm run lint
            workingDirectory: src/WebResources
          - script: npm run build
            workingDirectory: src/WebResources
  - stage: DeployToDev
    dependsOn: Build
    jobs:
      - job: Deploy
        steps:
          - task: PowerPlatformImportSolution@2
            inputs:
              authenticationType: PowerPlatformSPN
              PowerPlatformSPN: 'Dev-ServiceConnection'
              SolutionInputFile: '$(Build.ArtifactStagingDirectory)/solution.zip'
```
Every plugin build includes a test run. Every web resource build includes a lint check. The import to the development environment only happens if both pass. This is the baseline. Nothing ships to any environment without passing through this pipeline, and every run is logged.
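For reference, a repository layout consistent with the paths in the pipeline above — the folder names are illustrative, so adjust them to your own conventions:

```
src/
  Plugins/            # C# plugin projects, built and packed per solution area
  Plugins.Tests/      # FakeXrmEasy unit tests, run on every build
  WebResources/       # TypeScript sources, package.json, tsconfig, lint config
azure-pipelines.yml   # the pipeline definition itself, versioned like everything else
```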
A Simple Mental Model
If you are starting a new D365 project or refactoring an existing one, here is the mental model I use:
- Does it change data, validate business rules, or execute logic on record save? Use a plugin.
- Does it change form behavior, field visibility, or UI state? Use a web resource.
- Does it need a human to approve something, or send a notification through Teams or Outlook? Use Power Automate.
- Does it need to call an external system or integrate with Azure? Use a plugin calling an Azure Function or publishing to Service Bus.
- Does a business rule or classic workflow seem like the quick option? Stop and ask whether a plugin or web resource would be cleaner.
One technology per concern. Everything in Git. Deployed through a pipeline. That is the architecture that holds up under pressure, scales with your team, and keeps incidents short.
Wrapping Up
Tech sprawl in D365 CRM projects is not usually the result of bad decisions. It happens gradually, one business rule at a time, one flow added to solve an urgent problem, one workflow left over from a previous phase. Over time, the system becomes something nobody fully understands.
The way to avoid that is to make intentional architectural decisions early, enforce them consistently, and favor tools that fit naturally into a code-first, ALM-driven delivery model. Plugins and TypeScript web resources check all of those boxes. Low-code tools are valuable, but they need clear boundaries.
If you are on a project right now that looks like the scenario I described at the start of this post, the path forward is not to add more documentation. It is to stop adding new logic in low-code tools, migrate existing logic into plugins and web resources over time, and make everything deployable through a pipeline.
I hope this gives you a clear framework to take back to your next architecture discussion. If you have questions or want to share how your team handles this, drop a comment below.
Thanks for reading!