Handling Large CSV Data in Power Automate: Efficient Processing with Chunking and Excel Scripts 

Processing large CSV files in Power Automate can be a challenge, especially when dealing with files containing thousands of rows. In this blog, we’ll walk through an efficient technique for handling large CSV files by reading data from OneDrive, chunking it dynamically, converting each chunk to JSON with an Excel script, and inserting the data into an Azure SQL Database using a stored procedure.

By following this approach, you can overcome Power Automate’s memory and timeout limitations while maintaining a smooth data pipeline. 

Challenges of Handling Large CSV Files in Power Automate 

  1. Memory Limitations: Power Automate has a limit on how much data can be processed in a single step. 
  2. Execution Timeout: Processing a large dataset in a single operation may exceed Power Automate’s time constraints. 
  3. Inefficient Processing: Without chunking, processing the CSV file as a whole can slow down the flow. 

Proposed Solution 

We use a chunking mechanism that processes the CSV file in manageable portions. The approach consists of the following steps: 

  1. Read the CSV file from OneDrive. 
  2. Extract the headers and store them in a variable. 
  3. Use two variables (Index and Buffer) to control chunking. 
  4. Process the CSV in chunks using the take(skip(…)) expression. 
  5. Concatenate the headers with each chunk and pass the result to an Excel script that converts CSV to JSON. 
  6. Pass the JSON output to an Azure SQL stored procedure for insertion. 

Step-by-Step Implementation in Power Automate 

1. Read CSV File from OneDrive 

  • Use the “Get file content” action to retrieve the CSV file from OneDrive. 
  • Store the file content in a variable. 

2. Extract Headers 

Since headers are required for structured processing, we store them separately. 

  • Use the following expression to extract the headers: 
  • first(split(outputs('Get_file_contents')?['body'], decodeUriComponent('%0D%0A'))) 
  • split() breaks the file into an array of lines. 
  • decodeUriComponent('%0D%0A') represents the CRLF line break (carriage return + line feed). 
  • first() extracts the first row (the headers). 
  • Store the extracted headers in a variable (Headers). 
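For example, assuming a hypothetical file whose first line is SalesID,CustomerName,Amount (matching the SQL table used later in this post), the expression returns exactly that first line, so the Headers variable will contain: 

SalesID,CustomerName,Amount 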

3. Define Chunking Variables 

We define two variables: 

  1. Index: Tracks the starting row for each chunk. 
  2. Buffer: Defines the number of rows per chunk. 
  • Initialize Index as 1 (to skip the header row). 
  • Set Buffer to a reasonable number, e.g., 500 rows. 
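In the flow, these are typically created with two Initialize variable actions. A minimal sketch, assuming integer variables with the values suggested above: 

Name: Index    Type: Integer    Value: 1 
Name: Buffer   Type: Integer    Value: 500 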

4. Process the CSV in Chunks 

To handle large files, we extract a chunk of rows dynamically using the take(skip(…)) function. 

Expression: 

take(skip(split(outputs('Get_file_contents')?['body'], decodeUriComponent('%0D%0A')), variables('Index')), variables('Buffer')) 

Explanation: 

  • split(outputs('Get_file_contents')?['body'], decodeUriComponent('%0D%0A')): Splits the CSV file into an array of lines. 
  • skip(…, variables('Index')): Skips rows that have already been processed. 
  • take(…, variables('Buffer')): Extracts the next Buffer rows. 
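As a concrete illustration (using the hypothetical values Index = 1 and Buffer = 500), successive passes through the loop pick up the following slices of the file: 

Pass 1: Index = 1    → skips the header line, returns file lines 2–501 
Pass 2: Index = 501  → returns file lines 502–1001 
Pass 3: Index = 1001 → returns file lines 1002–1501 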

5. Concatenate Headers with the Current Chunk 

Each chunk should be processed as a complete CSV, so we concatenate the headers with the chunk.

Expression: 

concat(variables('Headers'), decodeUriComponent('%0D%0A'), join(take(skip(split(outputs('Get_file_contents')?['body'], decodeUriComponent('%0D%0A')), variables('Index')), variables('Buffer')), decodeUriComponent('%0D%0A'))) 

Breakdown: 

  • variables('Headers'): Places the headers at the start of each chunk. 
  • decodeUriComponent('%0D%0A'): Ensures proper line separation. 
  • join(…): Joins the chunked rows into a single text block. 
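Using the hypothetical sample data from Step 2, a two-row chunk would produce a text block like this (headers first, then the chunk rows, separated by CRLF): 

SalesID,CustomerName,Amount 
1,Contoso,100 
2,Fabrikam,250 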

6. Convert CSV to JSON Using an Excel Script 

Now that we have a well-formed CSV chunk, we pass it to an Excel script to convert it to JSON. 

Excel Script for CSV to JSON Conversion 

function main(workbook: ExcelScript.Workbook, csvData: string): string {
  let sheet = workbook.getWorksheet("Sheet1");
  sheet.getUsedRange()?.clear(); // Clear any previous data

  // Split the CSV text into rows and columns (simple split; assumes no quoted commas)
  let rows = csvData.split("\n").filter(line => line.trim() !== "").map(row => row.split(","));

  // Write the parsed values into the sheet starting at A1.
  // String.fromCharCode(65 + n) builds the last column letter, so this works for up to 26 columns,
  // and every row must have the same number of columns as the header row.
  sheet.getRange(`A1:${String.fromCharCode(65 + rows[0].length - 1)}${rows.length}`)
    .setValues(rows);

  // Build the JSON array: the first row holds the headers,
  // every following row becomes one object keyed by those headers.
  let headers = rows[0];
  let jsonArray: { [key: string]: string }[] = [];
  for (let i = 1; i < rows.length; i++) {
    let rowObject: { [key: string]: string } = {};
    for (let j = 0; j < headers.length; j++) {
      rowObject[headers[j]] = rows[i][j];
    }
    jsonArray.push(rowObject);
  }

  return JSON.stringify(jsonArray);
}

We use this Office Script inside a "Run script" action in Power Automate. 

  • The script writes CSV data into an Excel sheet. 
  • It converts the sheet into a JSON array. 
  • The JSON data is returned to Power Automate. 
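For the hypothetical two-row chunk from Step 5, the script’s return value would look roughly like this (all values come back as strings because the script splits plain text; the OPENJSON WITH clause in the next step converts them to the target column types): 

[
  { "SalesID": "1", "CustomerName": "Contoso", "Amount": "100" },
  { "SalesID": "2", "CustomerName": "Fabrikam", "Amount": "250" }
]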

7. Insert JSON into Azure SQL via Stored Procedure 

The JSON output from the Excel script is passed into an Azure SQL stored procedure for bulk insertion. 

SQL Table Schema 

CREATE TABLE SalesData (
    SalesID INT,
    CustomerName NVARCHAR(255),
    Amount FLOAT
);

Stored Procedure for JSON Insertion 

CREATE PROCEDURE InsertSalesData (@JsonData NVARCHAR(MAX))
AS
BEGIN
    INSERT INTO SalesData (SalesID, CustomerName, Amount)
    SELECT SalesID, CustomerName, Amount
    FROM OPENJSON(@JsonData)
    WITH (
        SalesID INT,
        CustomerName NVARCHAR(255),
        Amount FLOAT
    );
END;
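To sanity-check the procedure outside Power Automate, it can be called directly with a small hypothetical payload: 

EXEC InsertSalesData @JsonData = N'[{"SalesID":1,"CustomerName":"Contoso","Amount":100},{"SalesID":2,"CustomerName":"Fabrikam","Amount":250}]';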

Power Automate Action: Execute SQL Stored Procedure 

  • Use the “Execute a SQL Stored Procedure” action. 
  • Pass the JSON data as an input parameter (@JsonData). 
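Assuming the Excel action is named "Run script", its return value is available on that action’s body and can be supplied to @JsonData with an expression like the following (the exact token depends on the action name in your flow): 

body('Run_script')?['result'] 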

8. Loop Until All Data is Processed 

  • After processing each chunk, advance the Index variable by the Buffer amount. A Set variable action cannot reference the variable it is updating, so the simplest option is an Increment variable action whose value is: 
  • variables('Buffer') 
  • Continue looping until the entire file has been processed (a sketch of the exit condition follows below). 
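A minimal sketch of the loop’s exit condition, assuming a Do until loop and the same action and variable names used above (stop once Index has moved past the number of lines in the file): 

greaterOrEquals(variables('Index'), length(split(outputs('Get_file_contents')?['body'], decodeUriComponent('%0D%0A')))) 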

Benefits of This Approach 

  • Handles Large CSV Files: The chunking technique ensures smooth execution without hitting memory limits. 
  • Optimized Processing: Instead of processing the entire file at once, the data is handled in smaller, manageable chunks. 
  • Scalability: Works for both small and large datasets without modification. 
  • Flexible Data Handling: The approach can be extended to different data formats and target databases. 

Conclusion 

By using chunking, Excel scripts, and SQL stored procedures, Power Automate can efficiently process large CSV files without running into memory or execution time limits. This method ensures seamless data flow from OneDrive to Azure SQL, making it a scalable and robust solution. 

Afroz Labbai  

Data Engineering  

Addend Analytics  

Addend Analytics is a leading Power BI consulting services provider and Microsoft Power BI partner based in Mumbai, India. In addition to Power BI implementations, we specialize in end-to-end solutions like Business Central with Power BI to unlock actionable insights. Our expertise also extends to Microsoft Fabric consulting, with competitive Microsoft Fabric pricing to meet your business needs. 

We have successfully delivered Power BI for the manufacturing industry, including real-time Power BI manufacturing dashboards, and have completed over 100 projects across industries such as financial services, banking, insurance, retail, sales, real estate, logistics, and healthcare. Whether you’re exploring Business Central implementation cost or seeking advanced data analytics, Addend Analytics is here to help. Get a free consultation now by emailing us at kamal.sharma@addendanalytics.com. 
