Transcript:

0:01 Welcome to this video on version control
0:03 for business rules using North52 and
0:06 Azure DevOps. I'm going to show you how
0:08 to implement a complete application life
0:11 cycle management solution that gives you
0:13 full traceability, automated
0:15 deployments, and complete auditability
0:18 for your Dynamics 365 Power Platform
0:20 business rules. Whether you're managing
0:23 credit card eligibility rules, approval
0:26 workflows, or any other business logic
0:28 in Dataverse, this approach will
0:30 transform how you track and deploy
0:32 changes.
0:34 Let's start with the fundamental
0:36 question. Why do we need version control
0:38 for business rules?
0:40 The reality is that business rules
0:43 change constantly. Market conditions
0:45 shift, regulations update, and business
0:48 requirements evolve. Your eligibility
0:50 criteria today might be completely
0:53 different six months from now. The risk
0:56 is what happens when you track these
0:57 changes manually. Spreadsheets get out
1:00 of date, emails get lost, and worst of
1:02 all, you end up with what I call the
1:04 black-box deployments: changes going to
1:07 production and nobody really knows who
1:09 changed what or why. The goal is
1:12 complete traceability. We want to be
1:14 able to trace every change from the
1:16 original business requirement all the
1:18 way through to production deployment.
1:21 Who requested it? What was changed? When
1:23 was it deployed? And why? That's what
1:26 we're going to build today. Here is the
1:29 workflow we're implementing. It's a
1:31 four-step process that's completely
1:33 automated once you set it up. Step one
1:35 is the request. Everything starts with a
1:38 work item in Azure DevOps. This could be
1:40 a user story, a bug fix, or a change
1:42 request. The key is that every change
1:45 has a trackable origin. Step two is the
1:48 build. The developer or business analyst
1:51 makes the changes in North52 formula
1:53 editor in your development environment.
1:56 This is where the actual business logic
1:58 gets updated.
2:01 Step three, commit. Here's where the
2:03 automation kicks in. An Azure DevOps
2:06 pipeline automatically exports the
2:08 solution from dev, unpacks it, and
2:11 commits it to your git repository. The
2:13 commit message includes a link back
2:15 to the original work item.
2:18 Step four, deploy.
2:20 Another pipeline takes that committed
2:22 code and deploys it as a managed
2:24 solution to your test environment and
2:26 optionally to production. The key
2:29 benefits here are zero touch. There's no
2:31 manual solution exports or imports and
2:34 integration. Every code change is linked
2:37 back to the work item that requested it.
2:40 If you contrast the old way with the
2:42 modern way: the old manual approach
2:45 relied on spreadsheets for tracking
2:46 changes, manual solution exports and
2:49 imports, moving zip files around, and
2:51 hoping someone remembers to document
2:53 what they did. There are accountability
2:56 gaps everywhere. The automated approach,
2:59 the modern way, gives you an automated
3:02 audit trail. Every change is recorded
3:04 automatically. One-click pipelines
3:06 handle all the exports and imports and
3:09 you have proof of every change. Who made
3:11 it, when, and what the actual code
3:14 difference was. This isn't just about convenience.
3:17 In regulated industries, this kind of
3:20 audit trail can be the difference
3:21 between passing or failing a compliance
3:23 audit.
3:25 Let's look at how the solution flows
3:27 through your environments. On the left,
3:30 we have the development environment.
3:32 This is where your developers and
3:34 business analysts work. They implement
3:36 changes using the North52 formula editor.
3:40 Once the changes are validated and
3:41 tested locally, they're ready for
3:43 export. In the middle, we have Azure
3:46 DevOps. When you run the export
3:49 pipeline, it connects to your dev
3:50 environment, exports the solution, and
3:53 commits it to your Git repo. This
3:55 creates a permanent audit trail. You can
3:58 see exactly what changed in each commit,
4:00 compare versions, and even roll back if
4:02 needed. On the right, we have test and
4:06 production. The import pipeline takes
4:08 the managed solution from the repository
4:10 and deploys it first to test for
4:12 validation and then to production when
4:14 you're ready. The key here is that the
4:17 same managed solution that was tested is
4:19 exactly what gets deployed to
4:21 production. No manual steps, no
4:24 opportunity for human error.
4:28 Now, let me show you a concrete example
4:30 of this workflow in action. We're
4:32 updating the platinum card eligibility
4:34 rule. Let's follow the single business
4:37 rule change from request all the way to
4:39 production deployment.
4:41 Let's see this in action. Under our
4:44 work items, we have work item 16, Platinum
4:48 Card Eligibility: change the minimum
4:50 total assets required for customers with
4:53 a credit rating between 760 and 770. We
4:56 are to increase the total assets of the
4:58 applicant from 200,000 to 300,000.
5:04 A functional consultant opens the North52
5:06 decision suite and updates the logic
5:08 directly. No complex coding, just a
5:12 simple configuration change. Behind the
5:14 scenes, this rule is saved as a web
5:16 resource inside the business rules
5:18 credit card solution. This is the
5:20 critical artifact that gets stored and
5:22 tracked in our DevOps repository.
5:25 We don't email zip files. We simply
5:28 trigger the pipeline
5:30 and tag it with work item 16. This tag
5:34 is the key. It links the business
5:36 request to the technical solution.
5:39 For our commit message, we give a
5:41 meaningful title. In this case, we're
5:44 going to use updated platinum card
5:47 eligibility min total assets 300,000.
5:55 The automation takes over. It exports
5:57 the solution, unpacks the files and
6:00 commits them to source control. It is
6:02 creating a permanent source of truth for
6:04 your project.
6:07 This is the result. Complete audit
6:10 trail.
6:12 We can see exactly what changed. The
6:15 system highlights that the value moved
6:16 from 200,000 to 300,000.
6:19 It is proof that the system matches the
6:21 requirement.
6:23 Here in the commit details, we can see
6:27 not just the change made to the formula
6:29 file, we can also see the managed and
6:32 unmanaged solutions that were exported.
6:35 And if we click on details,
6:37 we can see who exported it. We can see
6:41 the work item linked below number 16.
6:43 And when we click on it, we can see that
6:46 the commit has been added to the
6:48 development section. This ensures
6:50 traceability for each and every change.
6:54 Deploying to test or production is just
6:56 as easy. We use the exact same managed
6:59 solution we just verified, ensuring
7:01 consistency across all environments with
7:03 zero manual steps.
7:09 Now let's look at how to set this up.
7:11 First, the repository structure. Your
7:14 Azure DevOps repository should have this
7:16 folder structure. The pipelines YAML
7:20 folder contains your YAML pipeline
7:22 definitions. You'll have two files:
7:25 dev-export-sync.yml for exporting from dev
7:29 and deploy-solution.yml for deploying to
7:31 test and production. The solutions-
7:34 archive folder is where the pipeline
7:36 stores versioned zip files of your
7:38 solutions. There's a folder for each
7:40 solution.
7:42 with two subfolders, managed and
7:43 unmanaged. Each export creates a new zip
7:46 file with a version number in the file
7:48 name, like business rules credit card
7:51 managed 1.0.5.0.zip.
7:54 This gives you an archive of every
7:56 version you've ever deployed.
7:59 The solutions-unpacked folder, again
8:01 broken down by solution, contains the
8:03 unpacked source code.
8:06 This is what enables the diff view we
8:07 saw earlier. When the pipeline exports
8:10 your solution, it unpacks the XML files
8:12 into this folder so Git can track
8:15 individual file changes.
8:19 See here, under history, the
8:21 commits.
8:22 The structure gives you the best of both
8:24 worlds: archived zip files for easy
8:26 deployment and unpacked source for
8:29 detailed version tracking.
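For reference, the folder layout described here might look something like this (solution and file names are illustrative, matching the naming used in the video):

```
├── pipelines-yaml/
│   ├── dev-export-sync.yml        # export from dev, commit to Git
│   └── deploy-solution.yml        # deploy to test / production
├── solutions-archive/
│   └── BusinessRulesCreditCard/
│       ├── managed/
│       │   └── BusinessRulesCreditCard_managed_1.0.5.0.zip
│       └── unmanaged/
│           └── BusinessRulesCreditCard_unmanaged_1.0.5.0.zip
└── solutions-unpacked/
    └── BusinessRulesCreditCard/   # unpacked solution XML, tracked by Git
```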
8:32 So let's walk through the export
8:33 pipeline step by step. First, publish all
8:36 customizations: before exporting, the
8:38 pipeline publishes all the
8:40 customizations in your dev environment.
8:42 This ensures you're capturing the latest
8:44 changes and not any stale data.
8:47 We set the solution version. The
8:49 pipeline automatically sets the solution
8:51 version using semantic versioning. The
8:53 format being major.minor.patch.0.
8:56 The major and minor numbers are
8:58 controlled by pipeline variables that
9:00 you set in the UI. So right now we've
9:02 got major set to one and minor set to
9:04 zero. The patch number auto increments
9:07 every time the pipeline runs. And it
9:09 resets back to zero whenever you bump
9:11 the major or minor. So your versions
9:13 will look like 1.0.0.0,
9:17 1.0.1.0, 1.0.2.0,
9:20 etc. This gives you clean, meaningful
9:22 version numbers rather than arbitrary
9:24 build ids.
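In Azure DevOps YAML, this versioning scheme can be sketched with the built-in counter expression; the variable names here are illustrative assumptions, not taken from the video:

```yaml
variables:
  Major: 1    # set as a pipeline variable in the UI
  Minor: 0    # set as a pipeline variable in the UI
  # counter() keeps a separate auto-incrementing value per unique key,
  # so the patch restarts at 0 whenever Major or Minor is bumped
  Patch: $[counter(format('{0}.{1}', variables['Major'], variables['Minor']), 0)]
  SolutionVersion: $(Major).$(Minor).$(Patch).0   # e.g. 1.0.5.0
```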
9:26 When we export the solutions, the
9:28 pipeline exports both unmanaged and
9:30 managed versions. The unmanaged version
9:33 is for source control. It gets unpacked
9:35 and can be diffed. The managed version
9:38 is what gets deployed to
9:40 downstream environments.
9:42 We unpack and archive everything. The
9:45 unmanaged solution gets unpacked into
9:47 the solutions unpacked folder. This
9:49 breaks it apart into the individual XML
9:51 files so that Git can track changes at the
9:53 component level. Both the managed and
9:56 unmanaged zips are also copied into the
9:58 solutions-archive folder with versioned
10:00 file names. So you see something like
10:02 business rules credit card managed
10:04 1.0.3.0.zip
10:06 in the managed subfolder. Every version
10:08 is kept which means that you can always
10:10 roll back to a previous one.
10:13 Finally, we commit everything to git.
10:15 Everything gets committed to the repo.
10:17 the unpacked files, the archive zips,
10:19 all of it. The commit message includes
10:21 your custom message, a link to the work
10:23 item, and the version number, for
10:26 example, updated platinum credit card
10:28 threshold, #16, v1.0.3.0.
10:33 The deploy pipeline later reads directly
10:35 from this repo when it's time to promote
10:38 to test or production.
10:40 When you do run the pipeline, you'll see
10:42 two input fields: a commit message and
10:44 a work item ID. The solution name itself is
10:47 locked down as a pipeline variable
10:49 called target solution. So there's no
10:51 risk of somebody accidentally exporting
10:53 the wrong solution. The pipeline uses a
10:56 service connection called Dataverse Dev
10:58 for authentication. This is a service
11:00 principal that has permission to export
11:02 solutions from your dev environment.
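Putting those steps together, a minimal sketch of what the export pipeline might look like, using the Microsoft Power Platform Build Tools tasks. The exact inputs, folder paths, and the DataverseDev connection name are assumptions for illustration, not a verified definition from the video:

```yaml
parameters:
  - name: commitMessage
    type: string
  - name: workItemId
    type: string

variables:
  TargetSolution: BusinessRulesCreditCard  # locked down, not a run-time input
  SolutionVersion: 1.0.5.0                 # in practice built from Major/Minor + counter

steps:
  - checkout: self
    persistCredentials: true               # allow the git push at the end
  - task: PowerPlatformToolInstaller@2
  - task: PowerPlatformPublishCustomizations@2
    inputs:
      authenticationType: PowerPlatformSPN
      PowerPlatformSPN: DataverseDev
  - task: PowerPlatformSetSolutionVersion@2
    inputs:
      authenticationType: PowerPlatformSPN
      PowerPlatformSPN: DataverseDev
      SolutionName: $(TargetSolution)
      SolutionVersionNumber: $(SolutionVersion)
  - task: PowerPlatformExportSolution@2    # repeated with Managed: false for the unmanaged zip
    inputs:
      authenticationType: PowerPlatformSPN
      PowerPlatformSPN: DataverseDev
      SolutionName: $(TargetSolution)
      Managed: true
      SolutionOutputFile: solutions-archive/$(TargetSolution)/managed/$(TargetSolution)_managed_$(SolutionVersion).zip
  - task: PowerPlatformUnpackSolution@2
    inputs:
      SolutionInputFile: solutions-archive/$(TargetSolution)/unmanaged/$(TargetSolution)_unmanaged_$(SolutionVersion).zip
      SolutionTargetFolder: solutions-unpacked/$(TargetSolution)
  - script: |
      git add -A
      git commit -m "${{ parameters.commitMessage }} #${{ parameters.workItemId }} v$(SolutionVersion)"
      git push origin HEAD:$(Build.SourceBranchName)
    displayName: Commit export to Git
```

Referencing the work item ID in the commit message (the `#16` style tag) is what lets Azure DevOps link the commit into the work item's development section.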
11:05 Now let's look at the deploy pipeline
11:07 which handles deployment. The pipeline
11:10 has two stages, deploy to test and
11:12 deploy to production. But unlike a
11:14 typical pipeline where stages run
11:16 automatically, both stages here are
11:18 controlled by checkboxes. When you run
11:21 the pipeline, you tick which
11:22 environments you want to deploy to,
11:24 test, production, or both. You can also
11:27 specify the solution version, which
11:29 defaults to latest, but can be set to a
11:32 specific version number for rollbacks.
11:34 Stage one is deploy to test. It only runs
11:37 if you tick the deploy to test
11:39 checkbox. It checks out the Git repo, runs
11:42 the PowerShell script to find the
11:43 correct managed version from the
11:45 solutions archive folder. Installs the
11:47 Power Platform Build Tools, imports the
11:50 managed solution to your test environment
11:52 and publishes customizations.
11:54 This pipeline uses the Dataverse Test
11:56 service connection. Stage two is deploy
11:59 to production. It only runs if you tick
12:01 the deploy to production checkbox. If
12:03 both check boxes are ticked, production
12:06 waits for test to succeed first. If test
12:08 fails, production is skipped and your
12:10 production environment is protected. If
12:13 only production is ticked, it runs
12:14 automatically. It follows the same steps
12:17 but uses the Dataverse Prod service
12:19 connection.
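The checkbox behaviour described here can be sketched with boolean runtime parameters and compile-time expressions; stage, environment, and connection names are illustrative assumptions:

```yaml
parameters:
  - name: deployToTest
    displayName: Deploy to test
    type: boolean
    default: true
  - name: deployToProd
    displayName: Deploy to production
    type: boolean
    default: false
  - name: solutionVersion
    type: string
    default: latest   # or a specific version such as 1.0.2.0 for a rollback

stages:
  - stage: DeployToTest
    condition: ${{ eq(parameters.deployToTest, true) }}
    jobs:
      - deployment: ImportToTest
        environment: Test          # gives deployment history in Azure DevOps
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - task: PowerPlatformToolInstaller@2
                - task: PowerPlatformImportSolution@2
                  inputs:
                    authenticationType: PowerPlatformSPN
                    PowerPlatformSPN: DataverseTest
                    SolutionInputFile: $(ResolvedSolutionZip)  # set by a resolve step, omitted here

  - stage: DeployToProd
    # if both boxes are ticked, production depends on test succeeding;
    # if only production is ticked, it runs on its own
    ${{ if eq(parameters.deployToTest, true) }}:
      dependsOn: DeployToTest
    ${{ else }}:
      dependsOn: []
    condition: and(succeeded(), ${{ eq(parameters.deployToProd, true) }})
    jobs:
      - deployment: ImportToProd
        environment: Production
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - task: PowerPlatformToolInstaller@2
                - task: PowerPlatformImportSolution@2
                  inputs:
                    authenticationType: PowerPlatformSPN
                    PowerPlatformSPN: DataverseProd
                    SolutionInputFile: $(ResolvedSolutionZip)
```

The `and(succeeded(), ...)` condition is what protects production: if the test stage fails, `succeeded()` is false and the production stage is skipped.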
12:20 The version resolution step is worth
12:22 noting. If you leave the version as
12:25 latest, the pipeline scans the solutions-
12:27 archive managed folder, sorts the
12:29 versioned zip files, and grabs the most
12:31 recent one. If you type in a specific
12:34 version, say 1.0.2.0,
12:36 it looks for an exact match. If that
12:39 version doesn't exist, it lists all the
12:41 available versions in the log output. So
12:43 you can see what's there and pick the
12:45 correct one.
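That resolution logic might look something like this PowerShell step inside the deploy pipeline (paths, parameter, and variable names are assumptions for illustration):

```yaml
- pwsh: |
    $folder = "solutions-archive/BusinessRulesCreditCard/managed"
    $zips = Get-ChildItem $folder -Filter *.zip
    $requested = "${{ parameters.solutionVersion }}"
    if ($requested -eq "latest") {
      # sort by the version number embedded in the file name, newest last
      $zip = $zips |
        Sort-Object { [version]($_.BaseName -replace '^.*_managed_', '') } |
        Select-Object -Last 1
    } else {
      $zip = $zips | Where-Object { $_.BaseName -like "*_$requested" }
      if (-not $zip) {
        Write-Host "Version $requested not found. Available versions:"
        $zips | ForEach-Object { Write-Host "  $($_.Name)" }
        exit 1
      }
    }
    # expose the resolved path to later tasks in the job
    Write-Host "##vso[task.setvariable variable=ResolvedSolutionZip]$($zip.FullName)"
  displayName: Resolve solution version
```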
12:48 And notice the pipeline reads directly
12:49 from the git repo. There is no pipeline
12:52 resource connection or artifact download
12:53 from the export pipeline. It simply
12:55 checks out the repo, pulls the managed
12:58 zip from the solutions-archive folder.
13:00 This keeps both pipelines decoupled
13:02 and means you can deploy any version
13:04 that's been committed to the repo, not
13:06 just the most recent export.
13:09 Azure DevOps environments give you
13:11 deployment tracking and history. Here
13:13 you can see the test and production
13:15 environments. Each one shows the current
13:17 deployment status, the build number
13:19 that's deployed, and when it was last
13:21 updated. The benefits of using
13:23 environments are complete deployment
13:25 history. You can see every deployment
13:28 that's ever been made to each
13:29 environment. Approval gates. You can
13:32 require approval before production
13:34 deployments.
13:35 Easy roll back. If something goes wrong,
13:38 you can redeploy a previous build with a
13:40 few clicks. And accountability. You know
13:43 exactly who deployed what and when. This
13:46 is all built into Azure DevOps. You just
13:48 need to configure it.
13:51 The final piece of the puzzle is service
13:53 connections. These are how your
13:55 pipelines authenticate to your Dataverse
13:56 environments. You need three
13:59 connections: Dataverse Dev, which is
14:01 used by the export pipeline to connect to
14:03 your development environment and export
14:05 solutions; then Dataverse Test and
14:07 Dataverse Prod, which are both used by
14:10 the deploy pipeline to deploy solutions
14:12 to your test and production
14:13 environments.
14:15 All three use service principal
14:17 authentication, also known as SPN. This
14:20 is more secure than using a user account
14:23 because it's not tied to a specific
14:24 person and it doesn't expire when
14:26 someone leaves the organization. To set
14:29 these up, you need to create an app
14:31 registration in Azure AD, now called
14:34 Entra ID. This gives you an application
14:36 ID and client secret. You need to add an
14:39 app user in the power platform admin
14:41 center. This application user would need
14:43 the system administrator security role
14:45 in each environment.
14:47 And you would then need to create the
14:48 service connection in Azure DevOps. Go
14:51 to your project settings, then service
14:53 connections as you can see here, and
14:55 then create a new Power Platform
14:56 connection using the application ID and
14:58 client secret.
15:01 The easy way to set up service
15:02 connections is by using the Power
15:04 Platform CLI and running pac admin
15:08 create-service-principal targeting your
15:10 environment. This single command
15:12 performs a triple play: it registers the
15:15 app in Entra ID, creates the user in
15:18 Dataverse, and assigns the system admin
15:20 role automatically. You'll get your
15:22 client ID and secret output directly in
15:25 the terminal instantly. Copy these
15:27 credentials and you're ready to
15:29 configure your Azure DevOps service
15:30 connections in record time. Here's a
15:33 side-by-side view of how the two
15:34 pipelines work together. The export
15:37 pipeline on the left handles everything
15:38 on the development side. It publishes
15:41 customizations in dev, stamps the
15:43 solution with a semantic version, exports both
15:46 managed and unmanaged zips, unpacks the
15:48 unmanaged solution into individual files
15:50 for source control, archives the versioned
15:53 zip files into the solutions-archive
15:55 folder, and commits everything to Git
15:57 with your work item links. The deploy
15:59 pipeline on the right handles
16:01 deployment. It checks out the repo,
16:03 finds the correct managed zip from the
16:05 solutions archive folder, either the
16:08 latest or a specific one you've typed
16:10 in, and imports it into test,
16:12 production, or both. The important thing
16:14 to notice is how these two pipelines are
16:17 connected. And the answer is they're
16:19 not. There's no pipeline resource link
16:21 or artifact dependency between them. The
16:24 export pipeline writes to the repo and
16:26 the deploy pipeline reads from the repo.
16:28 That's it. They're completely decoupled,
16:31 which means you can export without
16:33 deploying, deploy an older version
16:35 without reexporting, or run them
16:36 independently at different times. The
16:38 Git repo is a single source of truth
16:41 that ties everything together.
16:43 Let's recap what we've achieved with
16:45 this setup. First, linked requirements.
16:49 Every technical change is linked back to
16:51 a business requirement. Work items in
16:53 Azure DevOps create a clear chain from
16:56 request to implementation.
16:58 Second, a source of truth. Your Git
17:00 repository becomes a single source of
17:02 truth for all business rules. It's
17:04 automated, version controlled, and
17:06 always up to date. Third, consistent
17:09 deployment. The same pipeline deploys to
17:12 test and production. No manual steps, no
17:15 variation, no human error. The result,
17:19 reduced risk and increased speed. You
17:22 get full traceability for compliance,
17:24 zero manual errors because everything is
17:26 automated, and faster deployments
17:28 because you're not waiting for someone
17:30 to manually export and import solutions.
17:33 This is enterprise-grade ALM for your
17:36 Dynamics 365 Power Platform business
17:39 rules.
17:41 If you'd like to implement this in your
17:43 organization, here are your steps. Visit
17:46 North52.com to download the decision suite
17:49 and request a trial. Check out the
17:51 knowledge base at support.North52.com
17:54 for documentation, free training, and
17:56 certification programs. And if you have
17:58 questions, you can reach out to
18:00 sales@North52.com.
18:02 The team would be happy to walk you
18:04 through the setup for your specific
18:05 environment. Don't forget to subscribe
18:08 to our channel for more videos on
18:09 Dynamics 365 Power Platform automation
18:12 and best practices. Thanks for watching.