Data transformations are an essential part of integrations, and there are many tools available to perform them with pre-determined rules and schemas. However, we may run into integration scenarios where using deterministic schemas is not feasible or is too difficult to maintain. Some APIs change their schema version nearly every day, like open-source web shop APIs. Or the customer may want an integration that involves collecting data from a wide array of unknown data schemas or documents, as demonstrated in the D365 F&O integration sample below. In such cases, AI can help us (to some extent). So, let’s dive in and see it in action.
F&O vendor prices collector
The requirement: the customer needs to collect item purchase price data from vendors to update item standard costs. The collected data from this integration needs to go into the D365 F&O pending item prices form, and these prices should be activated manually for the item. Every vendor uses a different data format, data schema, and field names to send us the price updates, but their SKUs always match the F&O item IDs. So there are no fixed schemas. Vendors send only one item price at a time. Challenging, isn’t it?
The standard logic app I created to solve the problem, which you can download from my GitHub link, is shown in the screenshot below.

The F&O data entity we will use is InventItemPendingPricesV2. For the AI part, I will use Azure OpenAI’s “Get chat completions using Prompt Template” action with the GPT-4.1 mini model. The request receives the data to be transformed from the vendors, and in the variables I provide a sample payload that serves as the template for the InventItemPendingPricesV2 data entity:

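In case you are building this yourself, the template variable is simply one record in the shape of the data entity. A minimal sketch could look like the JSON below; the field names here are illustrative guesses, so take the actual field list (site, costing version, quantity and so on) from your own InventItemPendingPricesV2 entity:

{
"ItemNumber": "P0004",
"PriceType": "Cost",
"Price": 0.0
}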
Then, in the heart of our integration, the AI assistant transforms the data and returns it in the format of the InventItemPendingPricesV2 data entity, which is then parsed for schema validation and written into the F&O table via the Create record action. The same scenario could be solved with an AI agent loop from beginning to end, but this option saves a fair deal of operational cost. The AI prompt is as follows:
system:
You are a helpful assistant for converting data from one format to another and mapping fields from one document to another by the closest field match. The user will provide you with two documents: a source document and a conversion template document. You need to convert the data in the source document to the template data format, matching and mapping the data to the closest fields in the template, and return a raw document containing the same fields in the template.
user:
Convert data from this source document:
"{{source}}"
to the same format as the template document:
"{{template}}"
Do not change the PriceType field from the template; return the result
Since the AI action is a chat completion action, we need to state the system and user roles clearly, so the model can complete the ongoing conversation. The source and template data are passed to the AI assistant using action parameters (do not embed them directly inside the prompt here!). The syntax for passing parameters into the prompt is similar to the Liquid template language, so you can also pass arrays by using for loops and so on; a small sketch of that syntax follows below, and after that let’s see it in action.
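As an illustration, if a vendor ever sent several documents at once, the user part of the prompt template could loop over them roughly like this (the documents parameter name is hypothetical, and the exact loop syntax may differ slightly by connector version):

user:
Convert each of these source documents:
{% for doc in documents %}
"{{ doc }}"
{% endfor %}
to the same format as the template document:
"{{template}}"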
Testing
To test it, I will go and hunt for item price documents in different formats on the internet and send them for the Contoso item “P0004”. I have fixed the PriceType field to “Cost”, since the AI assistant should not fill in price types that do not exist in F&O. First, let’s send a random JSON item price document and see the result:
{
"item_id": "SKU-001",
"name": "Wireless Mouse",
"category": "Electronics",
"price": 25.99,
"currency": "USD",
"stock": 120,
"last_updated": "2025-11-03T10:15:00Z"
}

It worked well. Now, let’s send a payload containing XML data from an old ERP system and see if it works:
<Product>
<Product_ID>P0004</Product_ID>
<SKU>P0004</SKU>
<Name>On Cloud Nine Pillow</Name>
<Product_URL>https://www.domain.com/product/heh-9133</Product_URL>
<Price>24.99</Price>
<Unit>ea</Unit>
<Thumbnail_URL>https://www.domain.com/images/heh-9133_600x600.png</Thumbnail_URL>
<Search_Keywords>lorem, ipsum, dolor, ...</Search_Keywords>
<Description>Sociosqu facilisis duis ...</Description>
<Category>Home>Home Decor>Pillows|Back In Stock</Category>
<Category_ID>298|511</Category_ID>
<Brand>FabDecor</Brand>
<Child_SKU></Child_SKU>
<Child_Price></Child_Price>
<Color>White</Color>
<Color_Family>White</Color_Family>
<Color_Swatches></Color_Swatches>
<Size></Size>
<Shoe_Size></Shoe_Size>
<Pants_Size></Pants_Size>
<Occassion></Occassion>
<Season></Season>
<Badges></Badges>
<Rating_Avg>4.2</Rating_Avg>
<Rating_Count>8</Rating_Count>
<Inventory_Count>21</Inventory_Count>
<Date_Created>2018-03-03 17:41:13</Date_Created>
</Product>


This also worked well! When you go into the logic app run details, you can see how the OpenAI action interpreted your prompt, how it picked up the system and user roles, and how many tokens it used for the operation. Conversion operations use around 550 tokens per run on average.
Now, let’s test it with a CSV data payload coming from a very old vendor:


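The vendor file is not reproduced in text here, but a CSV of roughly this shape, with both a base price column and a discounted final price column, reproduces the scenario (the column names and values below are illustrative, not the vendor’s actual data):

sku,description,base_price,discount_pct,final_price,currency
P0004,Standard item,99.00,10,89.10,USD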
This also worked well; however, notice that it has taken the base price instead of the final_price field with all discounts applied, which is what I would have picked myself if I had done it manually. As expected, an AI assistant does not work miracles. You may need to adjust the prompt continuously, use an evaluator-optimizer pattern, or just accept that it works imperfectly. This is, more or less, how things work with AI today. In this example, since these item prices need to be activated manually anyway, we can live with the imperfections introduced by the AI assistant. It does a great job of converting almost any format to F&O records. But if this were a business-critical integration scenario, we might need to fall back to our good old rigid transformation schemas and tell vendors to send their prices in our fixed schema.
On the F&O side, we can see all the data we have sent so far in the pending prices form:

Now, let’s turn up the heat and see how far our AI assistant can go. A vendor from France refused to send us any formatted data and instead sent the item price in a reply email, written in French. Can the AI assistant in our integration import even this email as a data payload? Let’s see:
Objet : Détails et tarif de l’article SSD 1 To NVMe 3.0
Bonjour Monsieur Sertan,
Comme convenu lors de notre dernier échange, je vous transmets ci-dessous les informations détaillées concernant l’article demandé.
Il s’agit du Disque SSD 1 To NVMe 3.0 (référence SKU : P0004). Ce modèle offre une capacité de stockage de 1 téraoctet, avec une interface NVMe PCIe 3.0 x4 et des vitesses de lecture/écriture de 3500/3000 Mo par seconde. Il est particulièrement adapté aux ordinateurs de bureau et portables nécessitant des performances élevées.
Le prix unitaire est fixé à 89,90 € HT, valable du 1er novembre 2025 au 31 décembre 2025. Ce tarif inclut une garantie constructeur de trois ans.
N’hésitez pas à me contacter si vous souhaitez recevoir un devis formel ou procéder à la commande.
Bien cordialement,
Julien Martin
Responsable commercial
TechStor Solutions
📞 06 24 58 39 10
📧 julien.martin@techstor-solutions.fr

And the result is yet again a success:

Final thoughts
As seen in the tests, AI handles simple to rather complex dynamic data transformations quite nicely. However, it can make mistakes and does not process data as reliably as the human eye does. It is a wonderful tool for solving these kinds of challenging integration scenarios, but not a miracle worker free of imperfections. An AI prompt call here with the Azure OpenAI GPT-4.1 mini model costs around 550 tokens. The pricing is currently around 2 dollars per 1 million tokens, so we can make roughly 1,000,000 / 550 ≈ 1,818 calls for 2 dollars, which is quite acceptable. The same scenario can be solved by using an AI agent loop from beginning to end, but then the costs are much higher.

To conclude, AI can definitely be quite useful in data transformations if its limitations are considered carefully.
