There are no built-in actions in Power Automate to parse a CSV file. There are external connectors which can do this for you, but this blog post will cover how to parse a CSV in Power Automate without using any external connectors. The aim is to end up with a JSON array that we can use in other actions. I’ve exported this flow and you can download it here.
Here is a video explanation of the process:
For the purposes of this blog post, the sample CSV will have the following contents:
Session Data by SSID,,,,,
SSID,Session Count (%),Client Count (%),Duration (%),Total Usage (%),Usage (In/Out)
test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB
-,36 (21.69%),13 (24.07%),0d:2h:55m (5.09%),0.00B (0.0%),0.00B/0.00B
test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB
We are going to transform it into this:
[ { "SSID": "test1", "sessionCount": "90 (54.22%)", "clientCount": "26 (48.15%)", "duration": "1d:11h:35m (62.06%)", "totalUsage": "939.09MB (50.69%)", "usage": "814.94MB/124.15MB" }, { "SSID": "-", "sessionCount": "36 (21.69%)", "clientCount": "13 (24.07%)", "duration": "0d:2h:55m (5.09%)", "totalUsage": "0.00B (0.0%)", "usage": "0.00B/0.00B" }, { "SSID": "test2", "sessionCount": "21 (12.65%)", "clientCount": "13 (24.07%)", "duration": "0d:8h:35m (14.97%)", "totalUsage": "538.12MB (29.05%)", "usage": "500.54MB/37.58MB" } ]
The first thing to note is that the first two lines of this CSV need to be excluded, because they do not contain any data. So let’s get started!
Step 1 – Get the CSV Data and Split it into lines
The first thing is to get the CSV data and split it into lines:

This compose action uses the split function to convert the original CSV into an array of lines. It will now look like this:
[ "Session Data by SSID,,,,,", "SSID,Session Count (%),Client Count (%),Duration (%),Total Usage (%),Usage (In/Out)", "test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB", "-,36 (21.69%),13 (24.07%),0d:2h:55m (5.09%),0.00B (0.0%),0.00B/0.00B", "test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB", ]
The expression I used was:
split(outputs('Get_file_content_using_path')?['body'], decodeUriComponent('%0A'))
If your file is stored in SharePoint you will use the action “Get file content”, so the expression will be:
split(outputs('Get_file_content')?['body'], decodeUriComponent('%0A'))
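For reference, here is roughly what this compose action looks like in the flow’s code view (Peek code); the runAfter and other metadata properties are omitted from this sketch. The action name, splitNewLine, is the one referenced by later expressions:

"splitNewLine": {
    "type": "Compose",
    "inputs": "@split(outputs('Get_file_content')?['body'], decodeUriComponent('%0A'))"
}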
This flow uses only compose actions. There is a very good reason for this which I will come to later. There are no variables whatsoever.
Important note regarding line endings
I used the decodeUriComponent function to split the CSV.
decodeUriComponent('%0A')
This represents a line feed character (LF), often displayed as \n. This is the Unix standard.
CSV files generated on Windows may use this format, but often use a carriage return and line feed (CR+LF), represented as \r\n.
The split expression above will still work with CR+LF, but you will be left with \r characters in your data. The correct expression to split on CR+LF is:
decodeUriComponent('%0D%0A')
If you load your CSV file into Notepad you can easily see which format your file is in: the bottom right-hand corner will show either “Unix (LF)” or “Windows (CR LF)”.
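If you would rather not worry about which format you have, one option is a single expression that strips any carriage returns first and then splits on the line feed. This is a sketch, assuming the same “Get file content” action as above:

split(replace(outputs('Get_file_content')?['body'], decodeUriComponent('%0D'), ''), decodeUriComponent('%0A'))

This handles both Unix and Windows files, at the small cost of an extra replace over the file contents.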

Step 2 – Process each line of the CSV and create a JSON record for each line
Now that we have our CSV formatted as an array, we can loop through each line. Take a look at this loop:

In the “Select an output from previous steps” field I used an expression which requires a bit of explanation. The expression is:
skip(outputs('splitNewLine'),2)
The skip function returns the same array with a given number of items removed from the beginning of the collection. My sample CSV had two lines at the beginning which I did not want to include, so by using the skip function they are not sent into the apply to each loop.
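With the sample data, the skip expression hands the loop just the three data lines:

[
  "test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB",
  "-,36 (21.69%),13 (24.07%),0d:2h:55m (5.09%),0.00B (0.0%),0.00B/0.00B",
  "test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB"
]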
This loop will process each line individually, so every iteration will be working with a single line.
The first compose action splits the incoming line on a comma. Here is the expression:
split(item(), ',')
This produces yet another array within the loop, containing each of the column values. Here is a sample from the example above:
[ "test1", "90 (54.22%)", "26 (48.15%)", "1d:11h:35m (62.06%)", "939.09MB (50.69%)", "814.94MB/124.15MB" ]
The next compose action, called JSON, is more interesting. First, manually build a JSON string which represents an empty record, like this:
{ "SSID": "", "sessionCount": "", "clientCount": "", "duration": "", "totalUsage": "", "usage": "" }
Then, within the quotes for each column header you can use an expression to access each element of the splitByComma action:
outputs('splitByComma')?[0]
This expression represents the first element of the array produced by the previous compose step. Arrays in Power Automate are numbered from zero. My complete compose action has this code:
{ "SSID": "@{outputs('splitByComma')?[0]}", "sessionCount": "@{outputs('splitByComma')?[1]}", "clientCount": "@{outputs('splitByComma')?[2]}", "duration": "@{outputs('splitByComma')?[3]}", "totalUsage": "@{outputs('splitByComma')?[4]}", "usage": "@{outputs('splitByComma')?[5]}" }
This is everything you need to include in the loop. You might be wondering why this is of any use, as the JSON data has not been added to a variable, so you have no method of accessing it! A handy secret of apply to each loops is that referencing an action inside the loop from outside it returns an array containing that action’s output from every iteration, so our final compose step has the simple expression:
outputs('JSON')
This compiles all of the results from the compose action within the loop into a single array of JSON objects.
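In code view the final compose (named completeJSON in my flow) is as simple as it sounds; a sketch, with metadata omitted:

"completeJSON": {
    "type": "Compose",
    "inputs": "@outputs('JSON')"
}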
Performance
The flow above will work fine, but you may find it takes a long time to execute with a large CSV file. Because no variables have been used, only compose actions, the concurrency setting of the loop can be adjusted so that it works on many records simultaneously. This is the very good reason I mentioned earlier: variables can produce unpredictable results when a loop runs in parallel, but compose actions are safe because each iteration gets its own output. To enable it, open the settings of the apply to each action and turn on concurrency control.

With the three sample rows, the flow took just 1 second to run without concurrency enabled.
I increased the number of rows to 500, and without concurrency it took 1 minute and 46 seconds to execute.
Enabling concurrency with 30 degrees of parallelism reduced the execution time to 7 seconds! Not a bad improvement.

The Complete Flow
Here is an image of the entire flow. To make your life easier I have exported the flow; you can download it here, import it into your environment and modify it to your requirements:

This is a very simple flow, which you can recreate easily. You could add a Parse JSON step after the final compose and then the contents of your CSV will be available as dynamic content in the remainder of your flow.
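If you do add a Parse JSON step, a schema along these lines should work. This is a sketch based on the sample data in this post; adjust the property names to match your own columns:

{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "SSID": { "type": "string" },
            "sessionCount": { "type": "string" },
            "clientCount": { "type": "string" },
            "duration": { "type": "string" },
            "totalUsage": { "type": "string" },
            "usage": { "type": "string" }
        }
    }
}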
I hope this helps you to parse your CSV files. Reach out if you need any help!
Jericho Rosales says
Hi there, I’m very interested in your flow above. This is exactly what I need. Next question: how can I upload the content to a SharePoint list?
Paulie says
Hi Jericho, you could add a parse JSON step and then an apply-each loop. Within the apply each loop use a create item action and add your items to SharePoint. Let me know if you get stuck and I will help you out.
Jericho Rosales says
Let’s say I have the fields below in my SharePoint list named tblTestData, and the data is based on your sample data above. How can I save the values of these fields into my SharePoint list tblTestData? Let’s say I have the same field names in the list as below. Please help. I have the same scenario where I need to upload CSV files into a SharePoint list. Thanks in advance.
"SSID": "",
"sessionCount": "",
"clientCount": "",
"duration": "",
"totalUsage": "",
"usage": ""
Jericho Rosales says
Hi Paulie, I’m a super newbie at this; I’d appreciate it if you could assist me. I don’t know what to create in the new step after the completeJSON step. How can I add these values to the SharePoint list?
"SSID": "",
"sessionCount": "",
"clientCount": "",
"duration": "",
"totalUsage": "",
"usage": ""
Paulie says
Because you are a newcomer, it would be quite difficult to explain, but fortunately it is very easy to do. I’ve just made another very quick video which shows the additional steps you need to take in order to add the results to a SharePoint list. I didn’t edit the video, so it is a bit rough and ready.
https://youtu.be/nPlnCDaud5M
Jericho Rosales says
Thank you so much, this is very helpful. I was able to do it with the help of your video; thank you for putting it together so quickly. Wow, you saved my day. Now what I need to figure out is how to get the CSV data from a SharePoint document library, which is my trigger, and then get the data of that CSV file and connect your flow from that to save it in my SharePoint list. My goal: when a new email arrives in a shared mailbox, save it to my SharePoint document folder (which I have already resolved), then get the data of that CSV file (connecting your flow tutorial) and save it to a SharePoint list. I don’t know if I can do it but I will try. I’ve been sitting in my chair for 7 hours now; finally I have a lead to follow. Thanks again.
Ted Chapin says
When I try to split the file contents
split(outputs('Get_file_content')['body'], decodeUriComponent('%0A'))
I get
"InvalidTemplate. Unable to process template language expressions in action 'Compose' inputs at line '1' and column '6460': 'The template language function 'split' expects its first parameter to be of type string. The provided value is of type 'Object'. Please see https://aka.ms/logicexpressions#split for usage details.'."
Paulie says
Take a look at your run history – see what the raw outputs of your get file action were.
Ted Chapin says
"body": {
    "$content-type": "application/octet-stream",
    "$content": "Sm9iOkNLQiBPRkZJQ0UgVEVTVCxWZXJzaW9uOjIxLjAwLFVuaXRzOlVTU3VydmV5RmVldA0KMSwwLjAwMCwwLjAwMCwxMDAuMDAwLENQLE9USEVSIChTRUUgTk9URVMpLE5PIENBUCxGTFVTSCwsDQoyLDM1Ljk2MSwwLjQ1Miw5OS45NDIsQ1AsT1RIRVIgKFNFRSBOT1RFUyksTk8gQ0FQLEZMVVNILCwNCjMsMjkuNjU5LDkuMzA2LDk5Ljk0NSxDUCxPVEhFUiAoU0VFIE5PVEVTKSxOTyBDQVAsRkxVU0gsLA0KMTAwMDAsNS4yODQsLTUuNDk3LDk5LjkyNixNb24NCjEwMDAxLDExLjQ3NywtNC4xODgsOTkuOTgyLE1vbg0KMTAwMDIsMTUuMTgxLDkuOTI4LDk5Ljk2MCxNb24NCjEwMDAzLDkuMDIyLDguOTUzLDEwMC4wMTYsTW9uDQoxMDAwNCw1LjgyNiwxMS4wOTAsMTAwLjAwNCxNb24NCg=="
}
It’s an ASCII CSV file.
Paulie says
It doesn’t seem like it recognises it as a text file. What’s the file extension? Renaming it might work. If not, you could base64 decode it with the base64decode function.
Ted Chapin says
The file extension was .csv. It contains ASCII data, not binary. I changed the extension to .txt and now the Get file content output has text. Was not expecting that. Thanks for the fast replies.
Paulie says
Well done! Good luck
Ted Chapin says
I got the compose to work with the .csv file extension using
split(base64ToString(outputs('Get_file_content')['body']['$content']), decodeUriComponent('%0A'))
I used base64ToString not base64Decode and had to throw in a [$content] after the [body]
but it should not care what the extension is if the content is ASCII.
Paulie says
Well done, it shouldn’t care, but the reason I mentioned it is because I’ve seen it be fussy about these things before. Glad you got it working.
Sven says
When I turn on the concurrency control the output of the CSV is not correct – any idea why?
Jesse says
My final Compose can’t find the JSON Compose output. Everything was working great up to this point.
Jesse says
Watched the video…figured it out! Thanks!!!
Daniela Holzner says
Hi Paulie,
Thanks for your blog and videos. Any chance you can show how to do this, when the CSV is received via email?
Many systems generate reports by emailing a CSV. When I use the ‘received email’ as a trigger and ‘get attachments’, the remaining actions need to be within an ‘apply to each’ action (since there could be multiple attachments). BUT when doing this, the CompleteJSON compose action now errors, due to too many nested loops…
“The action ‘JSON’ is nested in a foreach scope of multiple levels. Referencing repetition actions from outside the scope is supported only when there are no multiple levels of nesting.”
I tried working with an ‘append to array’ variable instead, but couldn’t get it to work. It would be great if you could show how to do this! 🙂