Tachytelic.net

Power Automate: How to parse a CSV File to create a JSON array

February 19, 2021 by Paulie 46 Comments

There are no built-in actions in Power Automate to parse a CSV file. There are external connectors which can do this for you, but this blog post will cover how to parse a CSV in Power Automate without the use of any external connectors. The aim is to end up with a JSON array that we can use in other actions. I’ve exported this flow and you can download it here.

Here is a video explanation of the process:

For the purposes of this blog post, the sample CSV will have the following contents:

Session Data by SSID,,,,,
SSID,Session Count (%),Client Count (%),Duration (%),Total Usage (%),Usage (In/Out)
test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB
-,36 (21.69%),13 (24.07%),0d:2h:55m (5.09%),0.00B (0.0%),0.00B/0.00B
test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB

We are going to transform it into this:

[
  {
    "SSID": "test1",
    "sessionCount": "90 (54.22%)",
    "clientCount": "26 (48.15%)",
    "duration": "1d:11h:35m (62.06%)",
    "totalUsage": "939.09MB (50.69%)",
    "usage": "814.94MB/124.15MB"
  },
  {
    "SSID": "-",
    "sessionCount": "36 (21.69%)",
    "clientCount": "13 (24.07%)",
    "duration": "0d:2h:55m (5.09%)",
    "totalUsage": "0.00B (0.0%)",
    "usage": "0.00B/0.00B"
  },
  {
    "SSID": "test2",
    "sessionCount": "21 (12.65%)",
    "clientCount": "13 (24.07%)",
    "duration": "0d:8h:35m (14.97%)",
    "totalUsage": "538.12MB (29.05%)",
    "usage": "500.54MB/37.58MB"
  }
]

The first thing to note is that the first two lines of this CSV need to be excluded, because they do not contain any data. So let’s get started!

Step 1 – Get the CSV Data and Split it into lines

The first thing is to get the CSV data and split it into lines:

Image showing Power Automate actions retrieving a CSV file and splitting it into an array of lines.

This compose action uses the split function to convert the original CSV into an array of lines, which will look like this:

[
  "Session Data by SSID,,,,,",
  "SSID,Session Count (%),Client Count (%),Duration (%),Total Usage (%),Usage (In/Out)",
  "test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB",
  "-,36 (21.69%),13 (24.07%),0d:2h:55m (5.09%),0.00B (0.0%),0.00B/0.00B",
  "test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB"
]

The expression I used was:

split(outputs('Get_file_content_using_path')?['body'], decodeUriComponent('%0A'))

If your file is stored in SharePoint you will use the action “Get file content”, so the expression will be:

split(outputs('Get_file_content')?['body'], decodeUriComponent('%0A'))

This flow uses only compose actions; there is a very good reason for this, which I will come to later. There are no variables whatsoever.

Important note regarding line endings

I used the decodeUriComponent function to split the CSV.

decodeUriComponent('%0A')

This represents a line feed character (LF), often displayed as \n. This is the Unix standard.

CSV files generated on Windows may use this format, but often use a carriage return and line feed (CR+LF) instead. This is represented as \r\n.

The split expression above will still work with CR+LF, but you will be left with \r characters in your data. The correct expression to split on a CR+LF is:

decodeUriComponent('%0D%0A')

If you load your CSV file into Notepad you can easily see which format your file is in: the bottom right-hand corner will show either “Unix (LF)” or “Windows (CR LF)”.
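The effect of choosing the wrong delimiter is easy to demonstrate outside Power Automate. Here is a short Python sketch (purely illustrative, not part of the flow) showing what happens when a Windows (CR+LF) file is split on LF alone:

```python
# The same CSV content with Unix (LF) and Windows (CR+LF) line endings.
unix_csv = "SSID,Count\ntest1,90\ntest2,21"
windows_csv = "SSID,Count\r\ntest1,90\r\ntest2,21"

# Splitting on LF alone is fine for the Unix file...
unix_lines = unix_csv.split("\n")

# ...but leaves a stray \r at the end of every line of the Windows file.
dirty_lines = windows_csv.split("\n")

# Splitting on CR+LF (the equivalent of decodeUriComponent('%0D%0A'))
# gives clean lines for the Windows file.
clean_lines = windows_csv.split("\r\n")

print(unix_lines)   # ['SSID,Count', 'test1,90', 'test2,21']
print(dirty_lines)  # ['SSID,Count\r', 'test1,90\r', 'test2,21']
print(clean_lines)  # ['SSID,Count', 'test1,90', 'test2,21']
```

Those stray \r characters are exactly what you will see in your data if you split a Windows file with decodeUriComponent('%0A').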

Image of notepad with a CSV file loaded to determine the format of the line endings.

Step 2 – Process each line of the CSV and create a JSON record for each line

Now that we have our CSV formatted as an array, we can loop through each line. Take a look at this loop:

Image of Power Automate actions splitting the line of a CSV file and creating a JSON object from it.

In the “Select an output from previous steps” field I used an expression which requires a bit of explanation. The expression is:

skip(outputs('splitNewLine'),2)

The skip function returns an array with items removed from the beginning of the collection. My sample CSV had two lines at the beginning which I did not want to include, so by using the skip function they are not sent into the apply to each loop.
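In list terms, skip simply slices items off the front of the array. As a Python analogy (illustrative only, not part of the flow):

```python
lines = [
    "Session Data by SSID,,,,,",                 # title row - not wanted
    "SSID,Session Count (%),Client Count (%)",   # header row - not wanted
    "test1,90 (54.22%),26 (48.15%)",
    "test2,21 (12.65%),13 (24.07%)",
]

# skip(outputs('splitNewLine'), 2) behaves like lines[2:]
data_lines = lines[2:]
print(data_lines)  # only the two data rows remain
```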

This loop will process each line individually, so every iteration will be working with a single line.

The first compose action splits the incoming line on a comma, here is the expression:

split(item(), ',')

This will produce yet another array, within the loop, containing each of the column values. Here is a sample from the example above:

[
  "test1",
  "90 (54.22%)",
  "26 (48.15%)",
  "1d:11h:35m (62.06%)",
  "939.09MB (50.69%)",
  "814.94MB/124.15MB"
]

The next compose action, called JSON, is more interesting. First, manually build a JSON string which represents an empty record, like this:

{
  "SSID": "",
  "sessionCount": "",
  "clientCount": "",
  "duration": "",
  "totalUsage": "",
  "usage": ""
}

Then, within the quotes for each column header you can use an expression to access each element of the splitByComma action:

outputs('splitByComma')?[0]

This expression represents the first element of the array produced by the previous compose step. Arrays in Power Automate are numbered from zero. My complete compose action has this code:

{
  "SSID": "@{outputs('splitByComma')?[0]}",
  "sessionCount": "@{outputs('splitByComma')?[1]}",
  "clientCount": "@{outputs('splitByComma')?[2]}",
  "duration": "@{outputs('splitByComma')?[3]}",
  "totalUsage": "@{outputs('splitByComma')?[4]}",
  "usage": "@{outputs('splitByComma')?[5]}"
}
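The combination of the splitByComma and JSON compose actions amounts to pairing a fixed list of keys with the values from one line. A Python analogy (illustrative only; the column names are the ones chosen above):

```python
columns = ["SSID", "sessionCount", "clientCount",
           "duration", "totalUsage", "usage"]
line = ("test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),"
        "939.09MB (50.69%),814.94MB/124.15MB")

values = line.split(",")             # the splitByComma compose action
record = dict(zip(columns, values))  # the JSON compose action

print(record["SSID"])          # test1
print(record["sessionCount"])  # 90 (54.22%)
```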

This is everything you need to include in the loop. You might be wondering how this is of any use, as the JSON data has not been added to a variable, so you seemingly have no way of accessing it. A little-known feature of apply to each loops is that they output an array of their outputs, so our final compose step has the simple expression:

outputs('JSON')

This will compile all of the results from the compose action within the loop, into a nice array containing JSON objects.
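Putting the whole flow together, the transformation the compose actions perform is equivalent to this short Python sketch (illustrative only; like the flow itself, it assumes a simple CSV with no quoted or embedded commas):

```python
import json

csv_text = (
    "Session Data by SSID,,,,,\n"
    "SSID,Session Count (%),Client Count (%),Duration (%),Total Usage (%),Usage (In/Out)\n"
    "test1,90 (54.22%),26 (48.15%),1d:11h:35m (62.06%),939.09MB (50.69%),814.94MB/124.15MB\n"
    "test2,21 (12.65%),13 (24.07%),0d:8h:35m (14.97%),538.12MB (29.05%),500.54MB/37.58MB\n"
)
columns = ["SSID", "sessionCount", "clientCount",
           "duration", "totalUsage", "usage"]

lines = csv_text.split("\n")              # splitNewLine
records = [
    dict(zip(columns, line.split(",")))   # splitByComma + the JSON compose
    for line in lines[2:]                 # skip(outputs('splitNewLine'), 2)
    if line                               # ignore a trailing blank line
]
print(json.dumps(records, indent=2))      # the completeJSON compose
```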

Performance

The flow above will work fine, but you may find it takes a long time to execute with a large CSV file. Because only compose actions have been used, and no variables, the concurrency setting of the loop can be adjusted so that it works on many records simultaneously. (Variables cannot be safely updated from parallel iterations, which is why avoiding them matters here.)

Animated gif of an apply each loop in Power Automate showing how to change concurrency settings for the loop.

With the three sample rows, the flow took just 1 second to run without concurrency enabled.

I increased the number of rows to 500, and without concurrency it took 1 minute and 46 seconds to execute.

Enabling concurrency with 30 degrees of parallelism reduced the execution time to 7 seconds! Not a bad improvement.

Image showing performance difference for a Power Automate flow used to Parse a CSV with and without concurrency enabled.

The Complete Flow

Here is an image of the entire flow. To make your life easier I have exported the flow; you can download it here, import it into your environment and modify it to your requirements:

Image of an entire Power Automate Flow used to Parse a CSV file and create an array of JSON objects.

This is a very simple flow, which you can recreate easily. You could add a Parse JSON step after the final compose and then the contents of your CSV will be available as dynamic content in the remainder of your flow.

I hope this helps you to parse your CSV files. Reach out if you need any help!

Filed Under: Power Platform Tagged With: Power Automate

Comments

  1. Jericho Rosales says

    February 26, 2021 at 8:34 pm

    hi there, im very interested in your flow above. this is exactly what i need. next question is how can i upload the content to a sharepoint list?

  2. Paulie says

    February 26, 2021 at 8:42 pm

    Hi Jericho, you could add a parse JSON step and then an apply-each loop. Within the apply each loop use a create item action and add your items to SharePoint. Let me know if you get stuck and I will help you out.

  3. Jericho Rosales says

    February 26, 2021 at 8:50 pm

    Let say i have the below field in my SharePoint list named : tblTestData. the data is based on your sample data above. . how can i save the values of these field into my sharepoint list tblTestData. let say i have same field name in the list as below. Please help. i have same scenario that i need to upload CSV files into SharePoint list. Thanks in Advance

    “SSID”: “”,
    “sessionCount”: “”,
    “clientCount”: “”,
    “duration”: “”,
    “totalUsage”: “”,
    “usage”: “”

  4. Jericho Rosales says

    February 26, 2021 at 8:54 pm

    Hi Paulie, im super newbie on this. i appreciate if you can assist me. Do not know what to create in New Step after the completeJSON step. how can i add these values in the sharepoint list.

    “SSID”: “”,
    “sessionCount”: “”,
    “clientCount”: “”,
    “duration”: “”,
    “totalUsage”: “”,
    “usage”: “”

  5. Paulie says

    February 26, 2021 at 9:30 pm

    Because you are a newcomer, it would be quite difficult to explain, but fortunately it is very easy to do. I’ve just made another very quick video which shows the additional steps you need to take in order to add the results to a SharePoint list. I didn’t edit the video, so it is a bit rough and ready.
    https://youtu.be/nPlnCDaud5M

  6. Jericho Rosales says

    February 26, 2021 at 9:55 pm

    Thank you Thank you So much. this is very helpful. I was able to do it in the help of you video. thank you for putting it together so quickly. Wow. You saved my day. Now what i need figure out is how to get the CSV Data from SharePoint document library that is my trigger and get the data of that CSV file and connect your code from that to be saved in my sharepoint list. my goal is when a new email arrived in shared box save it to my sharepoint list document folder.. (which i already resolve) then get the data of that CSV file (connect your flow tutorial) to save it to sharepoint list. I dont know if i can do it but i will try.. i’ve been sitting in my chair for 7 hrs now. finally i have lead to follow. thanks again.

  7. Ted Chapin says

    April 8, 2021 at 8:55 pm

    When I try to split the file contents
split(outputs('Get_file_content')['body'], decodeUriComponent('%0A'))
    I get
    “InvalidTemplate. Unable to process template language expressions in action ‘Compose’ inputs at line ‘1’ and column ‘6460’: ‘The template language function ‘split’ expects its first parameter to be of type string. The provided value is of type ‘Object’. Please see https://aka.ms/logicexpressions#split for usage details.’.”

  8. Paulie says

    April 8, 2021 at 8:56 pm

    Take a look at your run history – see what the raw outputs of your get file action were.

  9. Paulie says

    April 8, 2021 at 9:00 pm

    It doesn’t seem like it recognises it as a text file. What’s the file extension? Renaming it might work. If not, you could base64 decode it with the base64decode function.

  10. Ted Chapin says

    April 8, 2021 at 9:05 pm

    The file extension was .csv. It contains ascii data not binary. I changed the extension to .txt and now the Get file content output has text. Was not expecting that. Thanks for the fast replies.

  11. Paulie says

    April 8, 2021 at 9:05 pm

    Well done! Good luck

  12. Ted Chapin says

    April 8, 2021 at 10:09 pm

    I got the compose to work with the .csv file extension using
split(base64ToString(outputs('Get_file_content')['body']['$content']), decodeUriComponent('%0A'))

    I used base64ToString not base64Decode and had to throw in a [$content] after the [body]

    but it should not care what the extension is if the content is ascii.

  13. Paulie says

    April 8, 2021 at 10:11 pm

    Well done, it shouldn’t care, but the reason I mentioned it is because I’ve seen it be fussy about these things before. Glad you got it working.

  14. Sven says

    April 11, 2021 at 5:26 pm

    When i turn on the concurrency control the output of the csv is not correct – any idea why?

  15. Jesse says

    April 14, 2021 at 6:00 pm

    My final Compose can’t find the JSON Compose output. Everything was working great up to this point.

  16. Jesse says

    April 14, 2021 at 6:26 pm

    Watched the video…figured it out! Thanks!!!

  17. Daniela Holzner says

    April 18, 2021 at 11:17 pm

    Hi Paulie,
    Thanks for your blog and videos. Any chance you can show how to do this, when the CSV is received via email?

    Many systems generate reports by emailing a CSV. When I use the ‘received email’ as a trigger and ‘get attachments’, the remaining actions need to be within an ‘apply to each’ action (since there could be multiple attachments). BUT when doing this, the CompleteJSON compose action now errors, due to too many nested loops…
    “The action ‘JSON’ is nested in a foreach scope of multiple levels. Referencing repetition actions from outside the scope is supported only when there are no multiple levels of nesting.”

    I tried working with an ‘append to array’ variable instead, but couldn’t get it to work. It would be great if you could show how to do this! 🙂

  18. VS says

    April 26, 2021 at 7:31 pm

    YOU ARE A GOD! Thank you for all the insight. Simple. Effective. Efficient flow.

    You rock Paulie!

  19. Virakone says

    April 28, 2021 at 2:00 pm

    Hi Paulie,

    I have a value in my csv that has a comma inside it , ex. “Big Company Inc., Ltd.” – how would you recommend handling this?

  20. Johan Magerman says

    April 29, 2021 at 1:05 pm

    Hi Paulie,
    Great video!
    But I have the same issue as Daniela, would like to start my flow by receiving an email and loading the attached csv.
    And the the for each loop of your flow is nested in the foreach that loops trough the attachments.

  21. Paulie says

    April 29, 2021 at 6:43 pm

    I can help you do what you want to do. Perhaps we could have a teams session, record it and share it for everyone else to see.

  22. Johan Magerman says

    May 3, 2021 at 11:48 am

    Hi Paulie,
    Still struggling to get the flow working like I want it to.
    I’ve managed to work around my previous problem by using the method in your updated video to parse a csv.
    But now I’m stuck after that, cant seem to loop trough it. I get an error now saying that the flow got a object where it expected an array.
    You can contact me if you’d like to have a look for sure!

    What I want to do is receive an email with a csv, parse the CSV and send a separate mail for each line of the csv containing the info from that line.

    Already thanks for your video’s the’ve already helped me a lot!

  23. Paulie says

    May 3, 2021 at 11:50 am

    Hi Johan, get in touch via the contact form and I will see if I can help you out:

  24. Scot Bickell says

    July 10, 2021 at 1:03 am

    Hi Paulie,

    Thanks so much for sharing this and other tips for Power Automate, I have found several of your posts to be very useful.

    I have a csv file that has a comma within the field name and data and it is causing this issue:

    Field Names:

    [
    “\”Last”,
    ” First Name\””,
    “\”Employee EIN\””,
    “\”Date\””,
    “\”Hours\””,
    “\”Is Time Off\””,
    “\” Locations Full Path\””,
    “\”Time Off Name\””,
    “\”Default Department Full Path\””
    ]

    Data:

    {
    “\”Last”: “\”Barton”,
    ” First Name\””: ” Edmond\””,
    “\”Employee EIN\””: “\”123-45-6789\””,
    “\”Date\””: “\”11/16/2021\””,
    “\”Hours\””: “\”8:00\””,
    “\”Is Time Off\””: “\”Y\””,
    “\” Locations Full Path\””: “\”Midwest/Missouri/Kansas City\””,
    “\”Time Off Name\””: “\”Sick/Personal\””
    },
    {
    “\”Last”: “\”Churchill”,
    ” First Name\””: ” Winston\””,
    “\”Employee EIN\””: “\”123-45-6789\””,
    “\”Date\””: “\”07/21/2021\””,
    “\”Hours\””: “\”8:00\””,
    “\”Is Time Off\””: “\”Y\””,
    “\” Locations Full Path\””: “\”Northeast/Michigan/Lansing\””,
    “\”Time Off Name\””: “\”Vacation\””
    },
    {
    “\”Last”: “\”Stafford”,
    ” First Name\””: ” Edward\””,
    “\”Employee EIN\””: “\”123-45-6789\””,
    “\”Date\””: “\”08/27/2021\””,
    “\”Hours\””: “\”8:00\””,
    “\”Is Time Off\””: “\”Y\””,
    “\” Locations Full Path\””: “\”Midwest/Missouri/Kansas City\””,
    “\”Time Off Name\””: “\”Vacation\””
    }

    Is there a method that will allow me to “escape” the comma that is part of the data, but still use the flow for data that might not have embedded commas?

  25. Scot says

    July 10, 2021 at 2:32 am

    I figured out a workaround by using the replace() function before using splitByLines:

replace(outputs('Get_file_content')?['body'],', ',' ')

  26. Paulie says

    July 10, 2021 at 9:45 am

    Well done Scot! There is always a way!

  27. Ralf Alpert says

    August 12, 2021 at 1:42 am

    Great blog, thank you, Just want to also mention that I got
    I got the compose to work with the .csv file extension using
split(base64ToString(outputs('Get_file_content')['body']['$content']), decodeUriComponent('%0A'))

    Same as Ted.

  28. Asif Khawaja says

    August 30, 2021 at 11:50 am

    Hi Paul, great article and thank you for sharing the wisdom and the flow export. When I run this flow to create items in SharePoint list, with 1000 items it works fine but when I increased the items to 10,000, the Apply_to_each loop/create item timed out. I used the same sample data as yours. Any ideas how to deal with this timeout issue? Thanks.

  29. MarkSompton says

    September 8, 2021 at 7:00 am

    Thanks, this is a really clear guide but I’m having problems.

    I had the “treated as an object and not string” issue as well but after applying the base64toString function I end up with “\u0000” before every character in the output and when I tried using a replace function to get rid of them it didn’t do anything.

    I can see that my csv is formatted in UTF-16 and not UTF-8 which is probably the root of the problem but I can’t figure out a workaround, any ideas?

    Also the csv seems to be delimited by a tab which is fine but I can’t figure out what character is being used to denote new lines because there are new lines in some of the data already. I can open the csv in Excel without any problems so there is clearly some distinction that Excel can read but I can’t find.

  30. Geraldo José Giansante says

    September 28, 2021 at 7:33 pm

    Hi Paul
    Thanks for your post, this is a clear guide, a great explanation on your video:
    I was able to reproduce the flow Getting File from OneDrive for Business.
    I´m a beginner at Power Automate and I was not able to derivate your solution from OneDrive to SharePoint File. I´ve use Get File Content from SharePoint using Path but the step splitNewLine, coded as follow did not work:
split(outputs('Get_file_content_using_path')?['body']?['body'],decodeUriComponent('%0A'))
    The full message was:
    Unable to process template language expressions in action ‘splitNewLine’ inputs at line ‘1’ and column ‘10795’: ‘The template function ‘split’ expects its first parameter to be of type string. The provided value is of type ‘Null’. Please see https://aka.ms/logicexpressions#split for usage details.’.
    I´ve seen aka.ms but was not able to solve.
    I hope you could help me!
    Best regards
    Geraldo (from Brazil – São Paulo)

  31. Karl Nixon says

    September 30, 2021 at 12:00 pm

    Hi Paulie, thanks for this blog – very helpful. A couple of questions;

    Is there a way to exclude records where the 1st (or nth) character includes a certain values(s)
    Is there a way to exclude certain columns (my data has 30 columns but only 3 are required)

    Best regards

    Karl (Sydney, Australia)

  32. Jason Hinkle says

    October 18, 2021 at 4:34 pm

    Hey Paul,

    Thanks again for the video. I know so much more than I did before watching it. I am still having an issue with the last step pulling the data out of the loop. You had mentioned in Youtube comments that it was because I had multiple loops and you are correct. I used “When a new email Arrives v.3” as a trigger, hoping that I would be able to pull out the attached CSV. that is coming from a scheduled export. I was able to do this but when I pull the attachment. It puts it in an apply each automatically (My guess would be to apply to all attachments in the email). Originally there was 4 attachments 3 of which were not csv files so I used a condition action to say if the attachment name contains csv run. If not stop. I was able to edit my export to only have the csv that I need but it still requires me to put in an apply to all when pulling attachments. Any suggestions would be greatly appreciated, but even pointing out the problem has helped greatly so I appreciate you so much for even putting this all together.

  33. Paulie says

    October 18, 2021 at 6:03 pm

    Hi Jason,

    What you need to do is use a filter action on the attachments array to filter it for attachments that are CSVs. Then you can check the length of the array to check that the length of it is one. Then you can use the first expression to get the remaining attachment. Then you’ll have no apply to each loops. If you get stuck let me know and I will help you out.

  34. Judit says

    November 3, 2021 at 1:55 pm

    Hi Paulie,

    I’m very beginner in Power Flows. My final Compose can’t find the JSON Compose output.

  35. Gary says

    November 15, 2021 at 1:26 pm

    I appreciate your instructions – I’m attempting to parse my contents to load each row of my CSV into a SharePoint site, but running into an error: Invalid type. Expected Object but got Array”.

    The contents are only available as dynamic if my schema type is “object”. If I change to array, only my JSON body is available. Do I need to add another step to parse the body of the JSON or am I doing something incorrectly?

  36. Andrew Miller says

    November 30, 2021 at 8:04 pm

    This is great, thanks so much. Does anyone have any advice for how to do the same thing to a file with tab-separated values?

  37. Paulie says

    November 30, 2021 at 11:49 pm

    Hi Andrew,

    Instead of:
    split(item(), ',')
    You could use:
    split(item(), decodeUriComponent('%09'))
    That should give you a tab split.

  38. Andrew MacLean says

    December 10, 2021 at 4:08 pm

    Hi Paulie,

    Thanks for this fantastic write up. I was able to do a really useful automation for our organization following this guide which sends Teams messages to users listed in the CSV file when their passwords are about to expire.

    Quick question for you around the skip function. My CSV file updates daily and the number of lines changes every day. I’m skipping the first line (headers) same as you did above. Say for example the file has 25 lines, the flow runs fine on lines 2-25 but then runs on line “26” which is blank and logs it as a failed run overall, even though it successfully ran on lines 2-25.

    Is there any way or logic to use the skip function to skip the last line? Or “skip if blank” kind of logic I could use? or even just “stop running when you reach the end of the data”?

  39. Paulie says

    December 10, 2021 at 6:35 pm

    Hi Andrew,

    There are two ways that I can think of to do this. You can adjust the skip expression and combine it with the take expression. Here is an example which skips the first two lines and omits the final line:
    take(skip(outputs('splitNewLine'),2), sub(length(skip(outputs('splitNewLine'),2)),1))
    A more reliable method would be to put the output of the splitNewLine action and pass it through a Filter Array action. You would use the output of the splitNewLine action as the input for the filter. Then on the left side of the filter you would use the expression:
    length(item()) and choose is greater than and specify 0 as the value. This will remove any blank line. Then you would need to adjust the apply to each to use the output of the filter array action as it’s input.

    Both methods should work though.

  40. Navi says

    January 6, 2022 at 9:43 am

    Hey Scot,

    You mentioned that, to escape comma inside double quotes you used replace function. But, when I tried to use it it is showing invalid expression error. Could you please explain about which character you replaced in the string with what?

    Thanks in advance.

    Navi (Düsseldorf, Germany)

  41. Laurent says

    January 8, 2022 at 5:26 pm

    Thank’s Paulie, it worked out well (with the two tips provided in the comments about windows delimiter and the tip for suppressing the last empty line)!
    But now I’m having a hard time trying to put the result of the json array into an excel table. I use “apply at each”, but it seems the expected input cannot be directly the object resulting from the json parsing. Any idea?

  42. Aaron says

    January 18, 2022 at 4:32 pm

    This is fabulous. How do I take this into excel in sharepoint? Would this work for files greater than 5MB ?

  43. Dan says

    January 26, 2022 at 11:38 am

    Hi Paulie, great content as always! I read this a while ago but have only just had a use case but for some reason when I view the output after the first split (splitting by CRLF) my first row of data is still on the row with the column headings. All other rows appear as their own row as expected. I took a look at the file in Notepad++ and turned on all symbols and noticed that the end of the first row with the column headings has NUL CR LF where as the other rows just have CR LF any idea how I can get that first row seperated from the headings?

  44. Dan says

    January 26, 2022 at 12:59 pm

    Ignore my comment, even though it didn’t look right in the output of the split, it worked as expected when I continued. Awesome stuff!!

  45. Jean-Marie Berger says

    February 8, 2022 at 4:29 am

    Hi Paul, thank you for the trick, it works very well here ! Just on issue with my project as I use csv files that have special characters embedded (accents, …). Those are changed right away by the UTF8 replace character (#EFBFBD) as soon as I get the file body. Of course, it is transported to my outputm … Any power automate trick to address this ?
    Thank you !

  46. Aaron says

    February 9, 2022 at 12:55 pm

    Hi Paul, I tried using pipe delimiter to spit csv as my csv has columns with commas and this is throwing things off. Any help please ?

    Thanks.
