Revised in October 2025
In this post, I’ll provide an up-to-date overview of the various integration options available in Dynamics 365 Finance and Operations. We’ll explore each integration method in detail—covering how they work, their strengths and weaknesses, and how they compare to one another. I’ll also share practical insights and lessons learned from my own hands-on experience implementing these solutions.
Topics
OData
OData is an open standard protocol for serving and consuming interoperable data using common query operations over RESTful APIs. D365 F&O exposes all its public data entities as OData endpoints, which can be accessed using the following URI format:
https://<d365fourl>/data/<dataentity>
OData provides quick, codeless data integration with its built-in query options and CRUD operations. You can use its open, standard query string language to query your data and perform data manipulations using standard OData CRUD commands, all through simple, open-standard REST calls. There is even support for bulk operations like bulk insert and bulk update. With the OData actions feature, you can create and run custom methods within OData entities, manipulating data or running custom program logic.
You can definitely use OData for your own integration projects by building a REST client with any REST-enabled tool or programming language, but there are also many OData-ready software products available today, and these can be connected directly to D365 F&O OData endpoints. Also, many standard F&O integration tools, like the Logic Apps connector, dual-write, the MCP server, and the Data Factory connector, use OData under the hood to perform the integration with F&O.
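As an illustration, here is a minimal sketch of such a REST client in Python. It assumes an Azure AD app registration with access to the environment; the tenant, client, and secret values are placeholders, and the entity and field names (CustomersV3, CustomerGroups) are examples from the standard demo data that you should adapt to your own scenario:

```python
import requests

TENANT = "<aad-tenant-id>"
RESOURCE = "https://<d365fourl>"            # F&O base URL
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

# 1. Acquire an AAD token with the client credentials flow
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": RESOURCE,
    },
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Read with standard OData query options ($filter, $top, ...)
resp = requests.get(
    f"{RESOURCE}/data/CustomersV3",
    params={"$filter": "dataAreaId eq 'usmf'", "$top": "10"},
    headers=headers,
)
for record in resp.json()["value"]:
    print(record["CustomerAccount"])

# 3. Create a record with a plain POST (the same endpoint handles full CRUD)
requests.post(
    f"{RESOURCE}/data/CustomerGroups",
    json={"dataAreaId": "usmf", "CustomerGroupId": "90", "Description": "Test group"},
    headers=headers,
)
```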
Although it looks like the optimal way of data integration with D365 F&O, there are some drawbacks involved. OData query and operation execution times are quite slow, and data reading may take ages if you try to retrieve a large entity. OData is mainly designed for simple CRUD operations and simpler queries. If you need to execute complex queries, with complex joins and lookups for example, you may start to hit its limits.
There is also the throttling feature of D365 F&O, which throttles calls to OData endpoints by assigning them priorities, to avoid system lockdowns that might be caused by frequent OData calls. You can read more about it here: https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/data-entities/priority-based-throttling
Remember, you can also use OData endpoints in Azure API Management, just like custom services and other F&O endpoints.
ADVANTAGES
- Open, standard data integration protocol with many data inquiry, CRUD, and bulk operation commands out of the box
- Support for extending functionality with custom data actions
- OData-ready software can be directly integrated using existing endpoints
DISADVANTAGES
- Slow
- Not suitable for complex business logic and queries
- Throttling can be an issue in busy integration scenarios
More info: https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/data-entities/odata
Custom Services
This is by far the most flexible and customizable way to integrate with D365 F&O amongst standard integration tools. Custom services are custom REST API endpoints that are created with standard D365 F&O X++ code. They can be used for both data-based and operation-based integrations. Flexibility is great since everything is created with the plain X++ programming language, the limit being just your imagination.
AIF services used in Dynamics AX are now upgraded to custom services in F&O, and they are automatically published into SOAP and JSON REST endpoints when deployed. However, management and monitoring tools used for old AX AIF services have now disappeared, leaving just naked, non-configurable service endpoints.
When you deploy a custom service, the following endpoints are created, and you can call them remotely using standard AAD OAuth authorization and SOAP/REST HTTP calls:
SOAP:
https://<d365fourl>/soap/services/<servicename>?wsdl
JSON:
https://<d365fourl>/api/services/<servicegroupname>/<servicename>/<operation_name>
Data integration in custom services is done via data contracts, which are converted to XML or JSON depending on the endpoint. Performance is quite good compared to other data integration options, since everything is handled with low-level .NET commands under the hood. In theory, you can transfer large amounts of data this way; however, I do not recommend doing so. With large data transfers, you may hit some unforeseen limitations and timeout problems along the way, as well as throttling. But it is always possible to program a paging or chunking mechanism into your service class to send larger data in small chunks and skip these limitations.
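To give an idea of what a call looks like from the consumer side, here is a hedged Python sketch. The service group, service, operation, and contract field names are placeholders for whatever you expose from your own X++ service class, and the AAD token is acquired as in the OData example above:

```python
import requests

base_url = "https://<d365fourl>"
headers = {"Authorization": f"Bearer {token}"}   # AAD bearer token, as in the OData example

# The JSON property names must match the parameter names of the X++ service operation;
# here the operation is assumed to take a single data contract parameter named _request.
payload = {"_request": {"CustomerAccount": "US-001", "IncludeTransactions": True}}

resp = requests.post(
    f"{base_url}/api/services/MyServiceGroup/MyCustomerService/getCustomerBalance",
    json=payload,
    headers=headers,
)
resp.raise_for_status()
print(resp.json())   # the response data contract, serialized to JSON
```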
If you need an example, I have used a custom service in my previous Android integration blog post, and the X++ code of this service is available on my GitHub page:
The throttling mechanism we mentioned in the OData topic also exists for the custom service endpoints. F&O throttles the calls if the server reaches a limit.
If you need to manage and monitor your custom service, or configure custom authorization methods instead of AAD, it is possible to connect it to Azure API Management and use its monitoring and authorization capabilities, as described in my other blog post:
Empowering D365 FO service endpoints with the Azure API Management service
Up to this point, everything is great about using custom services in our integration scenarios; however, custom services can only be called synchronously by third-party consumers, which is a significant drawback by today’s REST API standards.
ADVANTAGES
- Highly flexible and customizable way of integration
- Best performance compared to other integration methods
- Support for both operation and data-based integrations
- Both SOAP and JSON endpoints are available upon deployment
- Possibility to use them in Azure API Management
DISADVANTAGES
- Lack of management and monitoring tools in the UI (possible with Azure API Management)
- Development requires an experienced X++ developer
- Not suitable out of the box for large data transfers
- Only synchronous operation is possible
More info: https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/data-entities/custom-services
DMF (Data management framework)
DMF allows you to import and export big data in D365 F&O using various source/target data files and services. This is the recommended way of data integration in D365 F&O if you need to transfer large amounts of data from other systems using legacy file formats and services. It supports bulk or incremental data import/export options with change tracking and data staging features.

Import and export projects are created using D365 F&O standard “data entities”, which are normalized views of D365 F&O data tables and can be added programmatically if necessary. There is a dedicated Data management workspace (above), in which you can set up data import and export projects, import actual data, or see available data entities and check integration mappings in graphical schemas. However useful it is, I cannot tell you that the DMF UI is a user-friendly one. Some parts of the “advanced” and “standard” views are confusing to users, even to developers. The field mapping and Unicode setup parts are also confusing and do not always work as expected.
There is support for every old file format that was used by Dynamics AX before, and some new formats and SQL Server DB connection options are also available. But there is still no support for modern data transfer file formats like JSON, Parquet, Delta, ARFF, Avro, etc. Also, for packages, you can only use ZIP compression; there is no support for GZIP, which is widely used for data transfer today. To transfer the data inside a file, CSV files are the recommended option. As you can see in the analysis below from luminousmen.com, CSV is quite an outdated file format and falls far behind modern data transfer file formats in every aspect. Bulk data integration with modern formats is only possible today via the Synapse and Fabric links, which I will mention later.
https://luminousmen.com/post/big-data-file-formats
DMF has REST APIs and actions to import and export data from outside D365 F&O. You can find the Azure Logic Apps actions under the “DataManagementDefinitionGroups” section of the F&O “Execute action” Logic Apps block:

You can find example code (and logic apps) for DMF data import and export operations on the Dynamics-AX-Integration GitHub page. However, using these API commands and actions is not as straightforward as you might expect. For example, to import a CSV file, you need to create a ZIP package containing the file you want to import plus fixed Manifest and Header XML files, place it in a temporary container DMF provides for you, start the import using another action, poll in a loop using the get-status action to check whether the import completed successfully, and then try to get the actual errors by checking various DMF actions.
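To make that flow more concrete, here is a hedged Python sketch using the data management package REST API actions (GetAzureWriteUrl, ImportFromPackage, GetExecutionSummaryStatus). The data project and file names are placeholders, the exact request/response shapes should be verified against the Dynamics-AX-Integration samples, and error retrieval is left out:

```python
import json, time, requests

base_url = "https://<d365fourl>"
headers = {"Authorization": f"Bearer {token}"}   # AAD bearer token, as in the OData example
actions = f"{base_url}/data/DataManagementDefinitionGroups/Microsoft.Dynamics.DataEntities"

# 1. Ask DMF for a temporary writable blob URL
write_url = requests.post(f"{actions}.GetAzureWriteUrl",
                          json={"uniqueFileName": "CustomersImport"}, headers=headers).json()
blob_url = json.loads(write_url["value"])["BlobUrl"]

# 2. Upload the ZIP package (CSV + Manifest and PackageHeader XML files) to that blob
with open("CustomersImport.zip", "rb") as f:
    requests.put(blob_url, data=f, headers={"x-ms-blob-type": "BlockBlob"})

# 3. Start the import into an existing data project
execution_id = requests.post(f"{actions}.ImportFromPackage", json={
    "packageUrl": blob_url,
    "definitionGroupId": "CustomersImportProject",
    "executionId": "",
    "execute": True,
    "overwrite": True,
    "legalEntityId": "USMF",
}, headers=headers).json()["value"]

# 4. Poll until DMF reports a final status
while True:
    status = requests.post(f"{actions}.GetExecutionSummaryStatus",
                           json={"executionId": execution_id}, headers=headers).json()["value"]
    if status not in ("NotRun", "Executing"):
        break
    time.sleep(10)
print("Import finished with status:", status)
```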
Currently, there is no more straightforward method for import and export via the DMF API calls, other than using the recurring integrations option instead. It would be very nice to have a single straightforward option that receives the data file as input and returns consolidated error and log messages as output.
Another thing to mention is running import jobs in parallel. If you make two (synchronous) API calls in parallel, importing into the same data entity, you get errors and problems. It is, however, possible to do this using the async DMF API methods instead, or async processing in logic apps. Also, the table you need to export should not have any SQL-level locks on it, or you run into some nasty problems, so exporting in a time frame when users are not using the system is recommended.
ADVANTAGES
- Suitable for big data integrations
- Incremental data import/export possibility
- Staging tables for post data validation
- Legacy file formats support
DISADVANTAGES
- No support for modern big data formats
- UI is confusing and not user-friendly
- DMF APIs and actions for integration from outside D365FO are not straightforward or practical
- Table lock problems; parallel executions for the same data entity will fail, although async import is possible
- Slow
Recurring integrations
Recurring integrations is a D365FO data integration platform based on DMF and data entities, providing automated data exchange between D365 F&O and third-party data providers. Once set up, the recurring integration platform’s REST endpoints can be used by third-party integrators to import/export data from and to D365 F&O using simple REST calls.
The recurring integrations feature can be enabled with a single button click in a DMF project, and later managed and monitored from the F&O Data management workspace:

It creates a recurring integration with an activity ID. The recurring integration REST endpoints are then used to push and pull data for the recurring integration, in an ordered or scheduled way. Below are the standard recurring integration REST endpoints and their usage:
https://<base URL>/api/connector/enqueue/<activity ID>?entity=<entity name>
https://<base URL>/api/connector/dequeue/<activity ID>
https://<base URL>/api/connector/ack/<activity ID>
You can directly import a data file as well as a zipped DMF data package without issues, and a consistency check for empty or corrupt files also exists. After the operation, DMF API calls and methods can then be used to check the status of the import or get the execution errors.
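As an illustration, here is a minimal Python sketch of pushing a CSV file to the enqueue endpoint shown above. The activity ID comes from the recurring data job you created in the Data management workspace, the entity name is a placeholder, and the AAD token is acquired as in the OData example:

```python
import requests

base_url = "https://<d365fourl>"
activity_id = "<recurring-data-job-activity-id>"
headers = {"Authorization": f"Bearer {token}", "Content-Type": "text/csv"}

with open("customers.csv", "rb") as f:
    resp = requests.post(
        f"{base_url}/api/connector/enqueue/{activity_id}",
        params={"entity": "Customers"},   # entity name only needed for plain data files
        data=f,
        headers=headers,
    )
resp.raise_for_status()
print("Enqueued, response:", resp.text)   # carries a message ID that can be used to track processing
```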
Recurring integrations provide the simplest way to integrate using DMF data imports and exports. However, the REST endpoints used for recurring integrations are fixed, and you cannot extend them with custom functionality in any way.
ADVANTAGES
- Out-of-the-box, complete, stable integration option providing all the basic features needed
- Fully automated, scheduled way of integration
- Support for direct data file imports and file consistency checks
DISADVANTAGES
- Endpoints and their functionality are fixed and not customizable
Business events
The D365 F&O business events feature allows you to send notifications of F&O business events to Azure event handlers and trigger-based workflow providers. It comes with many out-of-the-box F&O business events, plus it provides the ability to create new ones with X++ programming. Business events can be activated and used from the business event catalog form in D365 F&O:

You can send notifications from these events to Azure endpoints like Service Bus, Event Hub, Event Grid, and Logic Apps (or Power Automate), also to custom HTTPS endpoints using standard HTTP methods. It is also possible to subscribe to a business event directly from Logic Apps or Power Automate using the “When a business event occurs” trigger, as shown below:

It is also possible to attach small amounts of data to the notifications using the message payload; however, I would advise you to be really careful with that, since attaching large amounts of data to your payload will not only create problems but also defeat the lightweight operation expected from an eventing platform.
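To illustrate the custom HTTPS endpoint option, here is a minimal Python sketch of a receiver that accepts the event notifications as JSON POSTs. It makes no assumptions about the payload fields and simply logs whatever arrives; in real life the endpoint must be publicly reachable over HTTPS (TLS termination, for example via a reverse proxy, is left out of this sketch):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class BusinessEventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print("Received business event:", json.dumps(event, indent=2))
        self.send_response(200)          # acknowledge the notification
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BusinessEventHandler).serve_forever()
```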
If you need more information, I have a complete blog post about business events, describing also how to create a new event using X++: https://devblog.sertanyaman.com/2019/05/23/how-to-integrate-d365fo-with-microsoft-flow-using-the-new-business-events/
ADVANTAGES
- Real-time integration with event-based Azure services and custom HTTPS services
- Trigger methods available for Logic Apps and Power Automate
- Possibility to create custom events
DISADVANTAGES
- Not suitable for large data transfers
- Creating custom events requires skilled X++ developers
More info: https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/business-events/home-page
Data events
Data events resemble business events and are triggered by CRUD operations performed on a data entity. They can also be activated and configured from the business event catalog form:

Unlike business events, data events are closely linked to Power Platform and require the D365 F&O and Power Platform link to be active in the environment where they are used. Triggered events can then be published to endpoints set up in the business event catalog, or consumed by Power Automate triggers for the created virtual entity:

There are limits in F&O on how many data events can be triggered per hour, so they are not quite suitable for entities that receive bulk updates, like products.
ADVANTAGES
- Real-time CRUD operation updates on activated data entities
DISADVANTAGES
- Events are published through Power Platform virtual entities; thus, Power Platform linking is mandatory
- Limits and throttling on the number of events can degrade performance
More info: https://learn.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/business-events/data-events
Dual-write
Dual-write provides a near-real-time, tightly coupled data transfer, with both synchronous and asynchronous modes, between D365 F&O and Dataverse tables. Dataverse sits at the heart of Microsoft Power Platform and is used by many Microsoft Dynamics 365 applications (like CE, Sales, Field Service, Marketing, and Project Service Automation). Therefore, dual-write makes it possible to integrate D365 F&O with the rest of the Dynamics 365 family.
Any data written to a dual-write-mapped D365 F&O data entity is written directly into its mapped Dataverse table in near real time. Data validations and business logic are executed simultaneously on both sides and reflected to the user, showing messages and errors from both platforms. Recently, an asynchronous mode was also added, eliminating the waiting time for users caused by real-time data synchronization. The real-time synchronization process has “Running” and “Paused” modes and can cache write operations while offline or paused, to be executed afterward.
At the time of writing, dual-write setup can be done via LCS, the Power Platform admin center, and the Data management workspace section shown below. The details of the setup can be found here: https://learn.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/data-entities/dual-write/connection-setup

Once set up and enabled, a health check is done for the linking, and you can do an initial sync for the entities to synchronize data between both platforms. Many ready-to-use dual-write mappings and integration packages are available and can be enabled via solutions from the Power Platform admin center.
Dual-write is currently mature enough to be used without issues, and Microsoft is investing a lot to make it better. However, there are some considerations before enabling it for an environment. When installed and enabled, it adds new features and limitations to the existing F&O and Dataverse tables and forms, such as new mandatory fields, business rules, and workflow changes. It also introduces data validations from the integrated platform into the existing one; for example, CE users will start seeing F&O validations when they add a new customer. Another thing to mention is the data schema differences between Dataverse and D365 F&O. Dataverse uses the Common Data Model, which has simple, generic schemas for its master data tables; D365 F&O, on the other hand, can have quite complex data structures, like the trade agreement and pricing modules or the directory and party structures used for storing contact information and addresses. So it is quite challenging to keep two different data structures in harmony with each other.
Dual-write has a consolidated error logging mechanism, which makes it possible to catch errors on both sides. However, in practice, it is quite difficult to track an error, mostly requiring you to enable debug mode (severe performance impact) and debug on both sides.
ADVANTAGES
- Tightly-coupled, near real-time integration with Dataverse
- Free service by Microsoft
- Queries and filters for data integration
- Ability to customize field-entity mappings
- Coupled data validations and error handling
- Caches synchronization when paused or offline
- Async running mode
DISADVANTAGES
- Room for improvement in “near-real-time” performance
- Mappings of some data models are not really in harmony
- Debugging and error handling are very tricky
- No number sequence sharing (yet)
- Not suitable for high-volume data integration
Virtual entities
Virtual entities expose data from D365 F&O entities to Dataverse as virtual links, which allows you to use real-time data from D365 F&O in Power Platform modules, similar to native Dataverse entities. Configuration is done automatically on the Power Platform side when you link it with F&O. Once configured, you can select the entities you want to enable as virtual entities from the list available in Dataverse and use them directly in Power Apps or Power Automate:



Virtual entities work as a link, a placeholder for the actual data sitting on the F&O side. By default, all CRUD operations on virtual entities are supported. Any data operation you perform on a virtual entity is performed directly on the F&O data, without duplicating the data in Dataverse. However, creating and updating F&O data directly from virtual entities is not very handy. Validations, extended data types, and field lookups from F&O are not available in virtual entities; all the complex data types and relations in F&O are presented in the Dataverse virtual entity in a flat, free-text format. So data modified in a virtual entity is only validated after it is written to the F&O table. Therefore, virtual entities are more useful for browsing D365 F&O data in a read-only way, or for visualizations.
Not all field types from the F&O entity are supported by virtual entities; calculated fields, for example, are not. There are also usage limitations compared to native Dataverse entities: you cannot join a virtual entity to another Dataverse entity or use offline caching.
ADVANTAGES
- Real-time D365 F&O data access possibility for Power Platform
- Free tool by Microsoft
- CRUD operations supported
DISADVANTAGES
- Not the best tool for direct F&O data modifications
- F&O complex data types and field relations are not presented in Dataverse
- Calculated field types are not supported
- Not all Dataverse entity features are available for virtual entities
Consuming external web services
In addition to the standard integration features of D365 F&O, there is also a possibility to integrate with external APIs or Azure services using custom X++ code or libraries. I recommend you build a separate .NET library for your external API integrations to avoid hitting some limitations you might experience with F&O CLR-Interop (like async features and collections). In D365 F&O, you can easily add .NET libraries to your VS solution and reference them to your X++ projects in the same solution.
Creating a .NET library for D365 F&O is tricky and requires good .NET skills and experience. You need to target the same .NET Framework version as the F&O projects of the time (at the time of writing, .NET Framework 4.7.2). Then, you need to use the newest, most stable .NET HTTP client supported by that framework (unfortunately, none of the new .NET Core ones). X++ supports neither the async keyword nor .NET async tasks, so you need to convert async calls to sync using the Task.GetAwaiter().GetResult() method. This can potentially introduce deadlock risks, and in my own experience, it does fall into deadlocks. To remedy this to some extent, you need to decorate the async methods in your library with ConfigureAwait(false) and design your module to avoid deadlocks as much as possible.
For these reasons, I do not have a blog post to showcase how it is done, and I recommend that you use this option sparingly and carefully, only for desperate scenarios.
The web service you are trying to access might also operate in async mode. Async web services are services where the caller does not have to wait until the operation is complete; instead, the service responds immediately with a 202 Accepted message and returns an inquiry link. Afterward, you need to poll from your caller code to check whether the operation has completed.
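The HTTP conversation then looks roughly like the sketch below, shown in Python only to keep it short. The URLs and the Location header are illustrative; each service documents its own polling contract:

```python
import time, requests

resp = requests.post("https://api.example.com/v1/long-running-job", json={"input": "..."})
if resp.status_code == 202:
    poll_url = resp.headers["Location"]        # inquiry link returned by the service
    while True:
        status = requests.get(poll_url)
        if status.status_code == 200:          # operation finished, body holds the result
            print(status.json())
            break
        time.sleep(5)                          # respect Retry-After if the service provides it
```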
Once your library is complete and referenced from your X++ project, you can use standard X++ calls to retrieve data from the external service, exchanging the data through the C# model classes. The model classes of the referenced library can be accessed directly from X++ with a “using” statement.
ADVANTAGES
- High performance, native integration with external services using low-level .NET commands
- Flexibility
DISADVANTAGES
- Requires experienced X++ and .NET developers
- Deadlock risk because of the lack of async tasks in X++
Entity store (legacy)
Entity store publishing was deprecated in March 2025 and replaced by the Synapse Link feature

Entity store integration pushes normalized D365 F&O entity data to an Azure SQL database (AXDW) to be used read-only by Microsoft BI tools. Entity store data can be updated manually or using batch jobs. If necessary, there is also an option to develop custom aggregate measures using D365 F&O development tools. As you can guess, this integration is read-only and, in fact, currently only available to Microsoft Power BI. The reason is that, in D365FO production environments, these AXDW databases are not accessible to end users, and you cannot get authorization for any other third-party app if you need to use the same database for your own purposes. BYOD, on the other hand, which we will mention in the next topic, makes this possible by exposing entity store data to a user database instead.
ADVANTAGES
- A faster way of providing analytical data to Azure reporting tools
- Batch update possibility
DISADVANTAGES
- Read-only
- Power BI only
Bring your own database (BYOD)
Bring your own database publishing is no longer recommended as of March 2025; the Synapse Link feature is recommended as a replacement
BYOD allows you to export entity data to a user-created SQL Server database instead of the Entity store AXDW, which is currently not allowed to be accessed by end users in production environments. Using BYOD, you can export D365FO data entities to your database manually or with batch update or change tracking options using the standard DMF export projects page. To start using BYOD, you need to configure your SQL Server database in the “Data management” workspace, “Configure Entity export to database” tile:

Then you can use the Publish button to create SQL tables for selected data entities in the target database. After these steps are completed, you can export the data to your SQL database using DMF export jobs and selecting your BYOD SQL Database from the list as a target.
ADVANTAGES
- Data can be exported to and accessed from user-created SQL Server databases
- Exporting table data using query filtering is possible
- Incremental updates with change tracking are supported
- Batch update possibility
DISADVANTAGES
- One-way integration
Export to data lake (legacy)
Export to data lake was deprecated in March 2025 and replaced by the Synapse Link feature
Azure data lake publishing provides a more practical, cheaper, and faster storage option for your big analytical and integration databases. It can publish data from the entity store, data entities, and standard F&O tables to Data Lake Storage Gen2 storage, at “near real-time” speed, using the D365 F&O “Data feed service”. Data transfer uses a CSV format, which can be inefficient compared to modern data transfer formats.
Publishing your data into data lake storage brings many benefits. It costs much less to store data in a data lake compared to SQL Server data space in Azure. Plus, a data lake allows you to combine your F&O data with data from other applications, and also from Dataverse, inside the same data lake space, and run consolidated reports with them. The data lake supports the Common Data Model folder structure, which can be understood by many cloud applications and services. Using Azure data transformation tools like Azure Data Factory is also possible.
D365 F&O has an “Export to Data Lake” form, and you can choose to push the entity store to Azure data lake from the Entity store window:

ADVANTAGES
- Reduced business data storage costs for BI and analytics
- Near-real-time data synchronization with automatic data feeds
- Improved integration with many Azure cloud apps and services using Common Data Model
- Possibility of using Azure BI services and data tools
DISADVANTAGES
- One-way, analytics-only integration
- Data transfer uses an inefficient CSV format
Azure Data Factory F&O connector
Microsoft recommends using the Link to Fabric and Fabric Data Factory instead
Azure Data Factory is an integration service in Azure that performs bulk data transformations and integrations using low-code, serverless ETL (extract, transform, load) data pipelines. It extracts data via data connectors, including one for D365 F&O called Dynamics AX connector (yes, the old name). As you can guess, this connector also uses OData under the hood, and the copying is done in bulk via ADF background services. However, the ADF sink option, which chooses the data storage destination, is not available for the Dynamics AX connector.


We also have a D365 F&O connector for Logic Apps, which can also do data transforms and integrations, so what is the difference? Compared to logic-app-based data transforms, which process records one by one, ADF can perform data transformations much faster, in bulk, using dedicated data transform and integration tools. However, operation costs are higher compared to logic-app-based integrations.

ADVANTAGES
- High-performance bulk data transformations
- Serverless, low-code pipeline design
- Native data ingestion from F&O
- Possibility of using many out-of-the-box data processing tools and connectors
DISADVANTAGES
- High operation costs
- Using OData to extract data, which can be slow
- The sink option is not available for F&O, so one-way-only data integration
More info: https://learn.microsoft.com/en-us/azure/data-factory/connector-dynamics-ax?tabs=data-factory
Azure Synapse Link
Azure Synapse Link for Dataverse is the new tool from Microsoft for exporting F&O data to low-cost Azure Data Lake Storage Gen2 storage and/or an Azure Synapse Analytics workspace. Linking is done via the Power Platform Synapse Link tab, selecting the F&O entities and tables to be linked for export.

The operation of Synapse Link data export is similar to the legacy export to data lake option and covers all of its benefits. The added benefits are that data transfers and storage can now be done in efficient Parquet or Delta formats, there is a continuous replication option, and there is better collaboration with other Microsoft apps in analytical projects inside the Synapse Analytics workspace.



Once you load the F&O data, you can run analytical projects inside Synapse Analytics together with the data extracted from other sources. Results from these projects can then be used, for example, by BI tools like Power BI or integrated into new data sources using pipelines.
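Since the exported files are plain Parquet/Delta in the lake, they can also be read directly with any Parquet-capable tool. Here is a hedged sketch, assuming the Synapse Link profile writes Parquet files to an ADLS Gen2 container you can access and that the pandas, pyarrow, and adlfs packages are installed; the account, container, and folder names are placeholders:

```python
import pandas as pd

# Read one exported table's Parquet folder straight from the data lake
df = pd.read_parquet(
    "abfss://<container>@<storageaccount>.dfs.core.windows.net/<table-folder>/",
    storage_options={"account_name": "<storageaccount>", "sas_token": "<sas-token>"},
)
print(df.head())
```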
Currently, not all F&O tables are available on the Synapse Link page, but Microsoft is adding more regularly. If you do not see your own custom tables listed, try enabling the “Row version change tracking” option on the table. Data is transferred to the data lake side in the modern and fast Parquet and Delta formats.
ADVANTAGES
- High-performance, efficient storage option for analytics projects using Delta Lake
- Continuous data synchronization with efficient Parquet and Delta formats
- Common Data Model support
- Possibility of using all Synapse Analytics tools
DISADVANTAGES
- One-way-only integration for analytical projects
Link to Microsoft Fabric
preview version
Like Synapse Link for Dataverse, the new kid on the block, Microsoft Fabric, enhances data analytics, business intelligence, and integration capabilities by linking F&O data. Unlike the Synapse Analytics option, the Link to Microsoft Fabric option does not require data to be copied into separate storage; instead, it creates data source shortcuts and caches the data internally in a high-performance, efficient way when needed. Once your data is linked, Microsoft Fabric offers the latest Microsoft technology for data analytics, business intelligence, and data integration.
At the time of writing, linking is done via the same Synapse Link tab in Power Platform, but selecting the Microsoft OneLake option. By default, it links all Dataverse tables, including the F&O tables linked to Dataverse. Instead of creating extra data lake storage for processing, it uses Dataverse directly if extra storage is needed. You can also connect existing Azure Synapse Links to Microsoft Fabric if you would like to use the new Fabric features, like integrated data pipelines or Power BI DirectLake mode.
Microsoft Fabric is charged on a pay-as-you-go or reservation basis per capacity unit, and the lowest-cost option is F2. However, the F2 option cannot achieve much, and the recommended capacities for a small to medium-scale ERP analytics or integration scenario are F16 or F32, which are considerably more expensive.
At the time of writing, it is still in preview. Please check the Microsoft information link below to get the latest, up-to-date information.
ADVANTAGES
- Future-proof platform for BI, integration, and data analytics
- Real-time data access and processing
- No extra data storage is needed
- Possibility of using F&O data with innovative Fabric tools, including Fabric Data Factory pipelines and Power BI DirectLake
DISADVANTAGES
- It can be overkill to use for small to medium integration scenarios
Finance and operations MCP server
MCP servers are standardized plug-ins for AI platforms that enable AI agents to access data and perform operations on external services. The F&O MCP server gives Copilot and other MCP-compatible AI agents access to certain F&O business logic as tools. At the time of writing, the number of tools is limited to 13 (listed here), and the tools and their functionality are not extensible. Accessing F&O data directly through this MCP server is also not possible.
Adding the F&O MCP server to Copilot is straightforward; just select “Dynamics 365 ERP MCP” from the list of tools for the agent. Currently, no instructions are available about adding it to other AI platforms.
If you need an F&O MCP server with more tools and possibilities, you either need to wait until this one gets improved or use the third-party F&O MCP server options available on the internet. Another option is to create your own MCP servers using Logic Apps, which I have described in my blog post here.
ADVANTAGES
- Usage of F&O business logic in Copilot
DISADVANTAGES
- No F&O data access (yet)
- Limited number of tools (for now)
- Not extensible tools and functionality
More info: https://learn.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/copilot/copilot-mcp

Nice post. Do you plan to support/update it?
if yes
Can you also add a link to throttling page for OData https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/data-entities/priority-based-throttling ?
and probably add one more point in your list
Custom x++ periodic batch jobs (where you can read/write from the external services, like Azure SQL, webservices, etc..). I think there are several examples in the standard system like Periodic import of exchange rates..
Sure!
Indeed a very helpful post
Awesome overview, thanks!
And yes, dual-write and data lake are still maturing technologies, so you will have a full-time job updating details 🙂
Suggest to add Virtual Entities, even if they are simply oData endpoints wrapped up nicely
… and, for the historical record, Axapta could also run on Office Access db :-O
Thank you, yes it is quite difficult these days to update blog posts; the Business Events post, for example, has more updates than the post itself, and today it needs an update again because MS removed Logic App IDs as links.
Virtual entities stay on the CE side, so I did not include them here.
I remember running Navision on Access DB for demos but personally never saw Axapta on Access DB 🙂 Interesting fact, thanks.
Thank you for the post. One question, can recurring integration update data? I think it can only insert new records. Thanks.
Hi, RI is also based on DMF platform and data entities so if you can update records from a DMF project for that entity, you can also do it via recurring integrations. You can also directly create a recurring integration from a DMF project by clicking the menu item on the toolbar.
Very useful post. I have a client who is experiencing difficulties with the BYOD option; data from Power BI reports went out of sync for a number of days. Would you recommend using data lakes instead now?
If applicable for the customer scenario, that would be the best
Great post! Maybe you can help me to clarify how to approach the following.
We have this integration scenario: a third-party developed mobile app and cloud-based middleware that collect sales orders and process payments. They store this data in the middleware and need to send it to D365FO. At the same time, from D365FO we need to send them Parameters, Master data and some transactional data aggregated like sales stats, customer balance, collection balance, etc.
Currently we are using a file-based integration based on the Recurring Integrations Scheduler (RIS). But both the customer and the mobile app provider have some issues with this setup and want to get rid of it.
Taking into account that we might have different sizes of daily inbound/outbound loads, where master data might represent a few hundred records, sales orders a few thousand, and aggregated data several hundred thousand, which approach would you recommend using?
-File-based integration based on periodic import/export of files from/to Azure storage and calling external Azure services from X++ to download/upload these files. Not using RIS nor DMF. For small and large loads.
-Azure Integration Platform (Service Bus queues) and read/write messages from X++. For small and large loads.
-Hybrid: OData/Custom services for small loads and either file-based or Azure Service Bus queues for large ones.
-The current one, based on RIS.
Any clue would be much appreciated.
Thanks!
Thanks. I understand the problem you have. File-based integrations with DMF unfortunately work like this and are not quite performance-friendly options, but they can handle large amounts of data and do bulk data transfers with data staging/error tracking abilities. Other options like OData have their own shortcomings, like the lack of staging functionality, error handling, and data consistency checks.
It would be a good idea for you to check custom FO integration solutions from various ISVs, which are designed to remedy the shortcomings of DMF and OData. Otherwise, if possible, a completely customized solution using custom services on the FO side and business events/Azure functions and services on the cloud side will perform much better and be more flexible, but of course, you need a lot of development work to achieve this.
Very nice post. I would add some information and examples regarding DW and common issues that might happen when you decide which integration to choose. I also consider ADF, Azure Service Bus, and Synapse to be complex solutions for building integrations.
Thanks. I would prefer keeping it to the basics like that; otherwise, I would need to update the post at least every week.
Hi Sertan,
Do OData exports occur as a batch? What I mean is, if I am executing an OData request to return 100 records, does FinOps process each record one at a time, or does it execute as a batch?
That being said, if, let’s say, 1 record takes 10 seconds, will processing an export of 100 records via OData take 100*10 seconds?
Hi, OData executes as a REST service with request and response logic. And it uses FO data entities to send normalized data.
I think from your description you mean ‘batch’ as in sending bulk data from SQL. Unfortunately, FO does not do that and processes OData internally, collecting the data line by line. This is how virtual fields and data entity “postLoad” code work for each line; those are not executed inside SQL Server but in FO.
Great post indeed. However, I am always really disappointed by the lack of FO on-premises docs, as well as the limitations that Microsoft intends to set on on-premises deployments.
I would have appreciated it if you clearly mentioned whether each option is available for on-premises.
Hi Sertan,
Thanks for the detailed explanation of the approaches to integration. I have a use case where we have to send customer/product/order data from a third party to D365 F&O. We also have to send order status updates back to the third party from D365 F&O. We will have to do this both in real time and in batches. For real-time data transfer, are “custom services” the only option? Can you please suggest which approaches we can use for this? Also, can we use Power Automate to do this data exchange? Can we have connections to the third party and D365 F&O in Power Automate?