
I have a Vue/Quasar application that lets the end user convert an Excel spreadsheet into an array and load it into a data grid. The spreadsheet has a list of people's names, email addresses, departments and other profile data. The spreadsheet could be 1,000 names deep or more. FYI, I'm using Axios in the Vue application. I think my JavaScript options are:

  1. Upload the entire file and have ClosedXML.Excel read it on a .NET web server, which processes each record one by one into a SQL Server 2019 database (a rough sketch of this follows the list).

  2. Loop over each record, serialize it to JSON, and send it individually to a .NET web server that uses Newtonsoft.Json to read each record and then process it.

  3. Serialize the entire file to JSON and send it to a .NET web server that uses Newtonsoft.Json to read the records and then process them one at a time.

  4. Serialize the entire file to JSON and send it to a .NET web server that passes the whole payload to the SQL Server database and lets the database parse the JSON and insert the data.
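
To make option 1 concrete, this is roughly what I picture on the .NET side. It's only a sketch: the ASP.NET Core controller shape, the PersonRow DTO and the SavePersonAsync call are placeholders for whatever the real project would use.

```csharp
// Rough sketch of option 1: the file is uploaded as-is and ClosedXML reads it
// on the server. PersonRow and SavePersonAsync are placeholders.
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using ClosedXML.Excel;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class PersonRow
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string Department { get; set; }
}

[ApiController]
[Route("api/[controller]")]
public class ImportController : ControllerBase
{
    [HttpPost("upload")]
    public async Task<IActionResult> Upload(IFormFile file)
    {
        if (file == null || file.Length == 0)
            return BadRequest("No file uploaded.");

        var rows = new List<PersonRow>();

        // Read the spreadsheet server-side with ClosedXML.
        using (var stream = file.OpenReadStream())
        using (var workbook = new XLWorkbook(stream))
        {
            var sheet = workbook.Worksheet(1);

            // Skip the header row, then map each used row to a DTO.
            foreach (var row in sheet.RowsUsed().Skip(1))
            {
                rows.Add(new PersonRow
                {
                    Name = row.Cell(1).GetString(),
                    Email = row.Cell(2).GetString(),
                    Department = row.Cell(3).GetString()
                });
            }
        }

        // Process each record one by one into the database.
        foreach (var person in rows)
            await SavePersonAsync(person);

        return Ok(new { imported = rows.Count });
    }

    // Placeholder for whatever data access writes to the SQL Server 2019 database.
    private Task SavePersonAsync(PersonRow person) => Task.CompletedTask;
}
```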

I would love to know what others have tried in the past and what their experience was. Any one of these approaches can do it, but I was hoping to hear from others.

  • The only way to find the most efficient way is to try each of your options and measure them against each other using suitable test data. – Ben Cottrell Oct 24 '21 at 06:18
  • Ben - Honestly, I feel that nullifies the experience of other developers who have been down this path in the past. If I can't ask what experience others have had, then why have this site? Having this question downgraded because I have not tried every possible option is absurd. Maybe I'm wrong, and if I am, then good luck to all and I'll never ask another question here. – user1314159 Oct 25 '21 at 11:56

1 Answer


Efficiency-wise, doing the fewest round trips over the network nearly always wins. So (2) is clearly the least efficient, and it is quite possible that (4) is the most efficient by this metric.
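
For instance, (4) can be a single round trip if the server hands the whole JSON payload to SQL Server and lets OPENJSON shred it into rows. A minimal sketch, assuming a dbo.People table with Name/Email/Department columns and matching JSON property names (all of those names are placeholders):

```csharp
// Rough sketch of option 4: one round trip, SQL Server parses the JSON itself
// with OPENJSON (available since SQL Server 2016). Table and column names are
// placeholders.
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class PeopleImporter
{
    public static async Task ImportJsonAsync(string connectionString, string peopleJson)
    {
        const string sql = @"
            INSERT INTO dbo.People (Name, Email, Department)
            SELECT Name, Email, Department
            FROM OPENJSON(@json)
            WITH (
                Name       nvarchar(200) '$.name',
                Email      nvarchar(200) '$.email',
                Department nvarchar(200) '$.department'
            );";

        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand(sql, connection);

        // The entire serialized spreadsheet goes over as a single parameter.
        command.Parameters.AddWithValue("@json", peopleJson);

        await connection.OpenAsync();
        await command.ExecuteNonQueryAsync();
    }
}
```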

But this is unlikely to matter at a mere 1,000 rows. What's more concerning is what happens at 100k or 10m rows: even if the system is not intended to cope with that, such input may be constructed maliciously, or you may find the system grows to need to cope with it.

Then you have to ask what should happen if there's an error at the very last row. What happens to all the previous rows?

I can't see much value in converting to intermediate JSON unless that allows you to surface errors to the user earlier, on the client side. I suppose it also allows you to do the "streaming" mode (2), which, while less efficient, might also cope better with very large files, since you don't need to keep the whole thing in memory.
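
If you do go with one big JSON payload, Newtonsoft.Json can also read it as a stream, so the server handles one record at a time without holding the whole array in memory. A rough sketch; PersonRow and SavePersonAsync are again just placeholders:

```csharp
// Rough sketch of reading a large JSON array as a stream with Newtonsoft.Json,
// so only one record is materialized at a time. PersonRow and SavePersonAsync
// are placeholders.
using System.IO;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class PersonRow
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string Department { get; set; }
}

public static class StreamingImporter
{
    public static async Task ImportAsync(Stream requestBody)
    {
        var serializer = new JsonSerializer();

        using var streamReader = new StreamReader(requestBody);
        using var jsonReader = new JsonTextReader(streamReader);

        // Advance token by token; deserialize one object whenever one starts.
        while (await jsonReader.ReadAsync())
        {
            if (jsonReader.TokenType == JsonToken.StartObject)
            {
                var person = serializer.Deserialize<PersonRow>(jsonReader);
                await SavePersonAsync(person);
            }
        }
    }

    // Placeholder for the per-record database write.
    private static Task SavePersonAsync(PersonRow person) => Task.CompletedTask;
}
```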

pjc50