6

We have an enormous number of nested SQL stored procedures, and a number of different ways of debugging them (varying from developer to developer).

So far, our methods include:

  1. An optional @debug parameter that causes the procedure to print messages as it runs (passing the variable down to called procedures; see the sketch after this list).

  2. Checking @@servername against a table of test server names, and printing as above

  3. Writing everything the procedure does to a log table (in production and test)
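
For illustration, a stripped-down sketch of method 1 (the procedure and parameter names here are invented for the example):

    CREATE PROCEDURE dbo.OuterProc
        @SomeId int,
        @debug  bit = 0    -- optional; defaults to off in production
    AS
    BEGIN
        IF @debug = 1
            PRINT 'OuterProc: starting, @SomeId = ' + CAST(@SomeId AS varchar(20));

        -- pass the flag down so nested procedures print their own messages
        EXEC dbo.InnerProc @SomeId = @SomeId, @debug = @debug;

        IF @debug = 1
            PRINT 'OuterProc: done';
    END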

Which of these is preferable (and why), or is there a better method we've overlooked?

Stu Pegg

3 Answers

4

You should also consider SQL Server Profiler. In SQL Server Management Studio, select Tools -> SQL Server Profiler. When the profiler window opens, select File -> New Trace... (I don't know specifics about other RDBMSs, but they've got to have tools with similar functionality.)
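
If you'd rather not point the GUI at a busy server, Profiler can also script out the trace it defines as a server-side trace. A rough sketch of what that looks like follows; the numeric event and column IDs are from memory (43 = SP:Completed, 1 = TextData, 12 = SPID, 13 = Duration) and worth double-checking for your version:

    DECLARE @TraceID int, @on bit, @maxfilesize bigint;
    SET @on = 1;
    SET @maxfilesize = 50;

    -- create the trace, writing to a file on the server
    EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\sp_debug', @maxfilesize;

    -- capture SP:Completed with statement text, SPID and duration
    EXEC sp_trace_setevent @TraceID, 43, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 43, 12, @on;
    EXEC sp_trace_setevent @TraceID, 43, 13, @on;

    EXEC sp_trace_setstatus @TraceID, 1;    -- 1 = start, 0 = stop, 2 = close/delete

    SELECT @TraceID AS TraceID;             -- keep this so you can stop the trace later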

As the long-term solution, you should of course move the business logic out of stored procedures.

azheglov
  • I disagree; stored procedures are the best place for this kind of complex database logic. They are easier to performance tune, easier for the DBA to know what is affected by proposed changes to the data model, and far more secure (if you aren't using dynamic SQL) because you can prevent direct access to tables and views. In a financial application, not using stored procedures leaves your application very open to internal fraud. – HLGEM Oct 14 '10 at 14:55
  • The profiler is a good call. However, on your second point: I'm an advocate for doing data-heavy work in the database, for efficiency reasons if nothing else. – Stu Pegg Oct 14 '10 at 16:58
  • "if nothing else" is an if. It follows from the fact that we have this question that there is "something else." – azheglov Oct 14 '10 at 18:04
  • Before this becomes a semantic debate, I believe this ground has already been covered: http://programmers.stackexchange.com/q/6246/3526 – Stu Pegg Oct 15 '10 at 07:02
  • Exactly! Jaco has a pretty good answer there – azheglov Oct 15 '10 at 13:22
2

I prefer #1 personally. With an @debug flag you can specify whether production code gets run or not, and I find print messages are one of the most helpful things to use when trying to figure out what is going on.
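
One SQL Server wrinkle worth knowing with print-based debugging: PRINT output is buffered and often doesn't appear until the batch finishes, so for long-running procedures a zero-severity RAISERROR with NOWAIT is a common substitute (a minimal sketch):

    IF @debug = 1
        -- severity 0 behaves like PRINT, but WITH NOWAIT sends the message
        -- to the client immediately instead of waiting for the batch to end
        RAISERROR ('Step 2 of 5 complete', 0, 1) WITH NOWAIT;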

Rachel
1

I use an @debug or @test flag as an input parameter on all complex stored procs.

What I do with it then depends on the proc. Normally, I issue a rollback if in test mode. I may also show the results of various inserts before the rollback to see if what I expected was what I got.
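
A rough skeleton of that pattern (the table and column names are invented for the example):

    CREATE PROCEDURE dbo.DoComplexUpdate
        @test bit = 0
    AS
    BEGIN
        BEGIN TRANSACTION;

        UPDATE dbo.SomeTable          -- made-up table for the example
        SET    SomeColumn = SomeColumn + 1
        WHERE  SomeFlag = 1;

        IF @test = 1
        BEGIN
            -- look at what the update actually did...
            SELECT * FROM dbo.SomeTable WHERE SomeFlag = 1;
            -- ...then throw it away
            ROLLBACK TRANSACTION;
        END
        ELSE
            COMMIT TRANSACTION;
    END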

If I have dynamic SQL, that's when I use @debug: in that case I print the SQL instead of executing it, so I can check the statement that was produced.
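
Something along these lines (a sketch; it assumes @TableName, @BatchId and @debug are parameters of the proc):

    DECLARE @sql nvarchar(max);
    SET @sql = N'UPDATE ' + QUOTENAME(@TableName)
             + N' SET Processed = 1 WHERE BatchId = @BatchId;';

    IF @debug = 1
        PRINT @sql;    -- show the generated statement instead of running it
    ELSE
        EXEC sp_executesql @sql, N'@BatchId int', @BatchId = @BatchId;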

Of course I can have an @test (used only when I am affecting data and want to be sure I can roll it back) and an @debug (to print out dynamic SQL) in the same proc, although I usually don't run in both modes at once.

I have also used an @print flag to print the steps as I go when I am debugging (it helps a lot to see where the thing failed!). Alternatively, you can use a table variable to hold the steps processed or any error codes, and then select from the table variable after the rollback when in test mode to see what actually happened. You can even insert the values from the table variable into a permanent table if you need to see the data over time (we do this for complex imports so we can show the client how often their crappy data broke the import and caused us extra work to fix the issue).

It's important to use a table variable and not a temp table for this, because the table variable keeps its rows after the rollback, while a temp table's contents roll back.

(Note that the table variable/temp table behavior is SQL Server specific; I don't know whether this will work with other databases.)
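
Since that scoping difference is the crux, here is a minimal sketch demonstrating it (dbo.SomeTable is a made-up table):

    DECLARE @log TABLE (Step int, Msg varchar(200));

    BEGIN TRANSACTION;

    INSERT INTO @log VALUES (1, 'about to run the update');

    UPDATE dbo.SomeTable SET SomeColumn = 0;   -- the work under test

    INSERT INTO @log VALUES (2, 'update done, rolling back');

    ROLLBACK TRANSACTION;

    -- the UPDATE is undone, but the table variable still holds both rows
    SELECT * FROM @log;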

A profiler can also provide the actual SQL that was sent to the database, and it should be used especially if you are not using stored procs.

HLGEM