Showing posts with label transaction. Show all posts

Wednesday, March 28, 2012

Multi-Step Transaction, fails but reports success

I have for example the following SQL:
BEGIN TRAN
Delete from users where userid = 102
Update mainSettings set userCnt = UserCnt -1
COMMIT TRAN
If, for example, the "update" statement fails but the "delete" statement
succeeds, the overall transaction reports no error, and thus my
application thinks everything was a success.
Any ideas?
VB 6.0
MDAC ADO 2.7
Jason

Answered in microsoft.public.sqlserver.programming
Please do not multi-post.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
You have to check @@ERROR after every statement that inserts, deletes, or
updates the db. Also, you have to check the return value of the sp.
create procedure proc1
    @userid int
as
set nocount on
declare @error int
begin tran
delete from users where userid = @userid
set @error = @@error
if @error != 0
begin
    rollback transaction
    raiserror('Error deleting from table users.', 16, 1)
    return 1
end
update mainSettings set userCnt = userCnt - 1
set @error = @@error
if @error != 0
begin
    rollback transaction
    raiserror('Error updating table [mainSettings].', 16, 1)
    return 1
end
commit tran
return 0
go
declare @rv int
declare @error int
declare @tc int
set @tc = @@trancount
exec @rv = proc1 102
set @error = coalesce(nullif(@rv, 0), @@error)
if @error != 0
begin
    print 'there was an error.'
end
if @@trancount != @tc
    rollback transaction
go
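A lighter-weight alternative worth mentioning (a sketch, not the original poster's code): SET XACT_ABORT ON tells SQL Server to terminate the batch and roll back the open transaction on most run-time errors, so a partially failed transaction like the one above cannot commit.

```sql
-- With XACT_ABORT ON, most run-time statement errors abort the batch
-- and roll back the open transaction automatically, so the DELETE
-- cannot commit on its own if the UPDATE fails.
SET XACT_ABORT ON
BEGIN TRAN
DELETE FROM users WHERE userid = 102
UPDATE mainSettings SET userCnt = userCnt - 1
COMMIT TRAN
```

Note that XACT_ABORT does not cover every error class (constraint checking and compile errors behave differently), so per-statement @@ERROR checks remain the more explicit approach.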
Implementing Error Handling with Stored Procedures
http://www.sommarskog.se/error-handling-II.html
Error Handling in SQL Server - a Background
http://www.sommarskog.se/error-handling-I.html
AMB
Ok, forget the BEGIN/COMMIT. Even without that, why wouldn't SQL
Server return an error?
Jason

Read the articles. There you will find the answer.
AMB
Same reason: the last statement in the batch was successful.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
Additionally, the reason you must check for the error is that the SQL
programmers cannot make the decision for you as to whether or not the error
is sufficient to roll back the transaction. That is for you to decide, so
you must catch the errors and roll back the transaction when necessary.
(There are types of errors which will automatically roll back the transaction
without your intervention. These are generally more severe errors which
cause a batch abort, or when you have been chosen as a deadlock victim.)
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org

Monday, March 19, 2012

Multiple Transaction Logs

Hey Folks,
I was looking for some documents/articles/links to get a better
understanding of Multiple Transaction Logs for SQL Server 2000/7. For
example, if I create two transaction logs for a database, how are they used?
Round-Robin method until each one is full? Or use one at a time until the
second log file is needed?
I also wanted to learn about backing up and restore multiple transaction
logs. How to delete the second log file? Is one log active and the other
in-active, etc.
Hope to get some good reading material.
Thanks,
DeeJay
DeeJay,
Have you looked in Books Online? I think that there's a pretty good section about these things. Not
that complicated, though:

> For
> example, if I create two transaction logs for a database, how are they used?
> Round-Robin method until each one is full? Or use one at a time until the
> second log file is needed?
One at a time. Think about the log as a snake rattling through the file(s). Each time you back up the log,
you cut the tail of the snake. If you have several files, and you are at the end of the current
file (and it is full), the head of the snake moves to the beginning of the next file.

> I also wanted to learn about backing up and restore multiple transaction
> logs.
You don't backup the log file. You perform a transaction log backup for the database. To SQL Server,
the transaction log is one logical construct (which happens to be spread out over several physical
files). At restore, though, you need to have as many files as when you performed the backup.

> How to delete the second log file?
DBCC SHRINKFILE with the EMPTYFILE option. Make sure it is empty. Then ALTER DATABASE to remove it.
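As a concrete sketch (the database and file names here are hypothetical), the two steps might look like:

```sql
-- Step 1: move any remaining log records out of the file and empty it.
DBCC SHRINKFILE (test1_log2, EMPTYFILE)
-- Step 2: drop the now-empty file from the database.
ALTER DATABASE test1 REMOVE FILE test1_log2
```

If the file still holds active log records, the REMOVE FILE step will fail; backing up the log and repeating the SHRINKFILE usually clears it.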

> Is one log active and the other
> in-active, etc.
Not the files per se.
One thing you want to look into is the concept of VLFs (Virtual Log Files). The log file is divided
into a number of VLFs, and a VLF can be active (has log records on it) or inactive (empty, not used).
Use DBCC LOGINFO for this. I mention a little bit about this in
http://www.karaszi.com/SQLServer/info_dont_shrink.asp.
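A quick way to inspect the VLFs (one output row per VLF; in SQL Server 2000 a Status of 2 marks an active VLF, 0 an inactive one):

```sql
-- Undocumented, but widely used: lists one row per VLF in the
-- current database's log, including its size, file id, and status.
DBCC LOGINFO
```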
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
Tibor,
Thanks for the reply.
I was already familiar with these concepts (the snake analogy is a good
one). I am also familiar with the VLFs.
I was not sure about deleting the EMPTY log files. Thanks for that.
Thanks for the undocumented DBCC LOGINFO command!
I am getting cross-trained in DB2 and DB2 has a slightly different way of
handling its transaction logs (hence the terminology active/inactive). The
DB2 DBA had asked about it and I wanted to make sure I was able to explain it
to him properly.
Thanks,
DeeJay

Multiple Transaction Logs

I have just come from an Oracle background and am trying to equate SQL server methodologies to Oracle. I have a couple of questions based on the same principle which is the usage of multiple transaction logs. This is what I have been able to find out from the docs and other posts:

If you have multiple transaction logs they will all be used in a sequential manner before being wrapped around. That is, they will all be filled before they are reused.

My first question is will they be used sequentially or concurrently. SQL Server seems to stripe everything so I am inclined to believe the latter. This question has been asked before but from other posts/docs there seems to be a difference of opinion. I am of course equating transaction logs to redo/archived logs so I want to know categorically (links would be greatly appreciated) as it will have a huge impact on any disaster and recovery plans that I implement.

The second question is a spin-off and may or may not be relevant depending on the answer to the first. If the transaction logs are written to concurrently, that is, entries are striped, why bother with multiple logs? I don't see any benefits. In fact, in a recovery scenario, recovering transactions from multiple striped logs would appear to adversely affect MTTR.

Feel free to point out the errors of my ways as I am all ears and eager to learn.

Thanks in advance.

My advice is to get the poop straight from the horse (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_3t2d.asp).

I'd also recommend two books, the first is the SQL 2000 Resource Kit (http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=lL5zATKAxC&isbn=0735612668&itm=2) which has a great chapter on Oracle to MS-SQL, along with a lot of other VERY useful information. The second is Inside Microsoft SQL Server 2000 (http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?userid=lL5zATKAxC&isbn=0735609985&itm=1).

Peruse a bit, then come back... I'm sure that you'll still have questions, you'll just be able to form them a bit better!

-PatP

OK, found the answer, I think. The logs are used sequentially. In case anybody else wants the answer: when there are multiple log files, the first one fills up, then the logical log moves on to the second physical log file, etc. When the last physical log is filled up, it then wraps around to the beginning of the first physical log again. This is the best description that I have found, and it implies, but does not categorically confirm, that if you have declared two log files in the create database statement, i.e.

log on
(name='test1_log1',
filename='E:\SQLServer\test1\Logs\test1_log1.LDF',
size=5mb,
maxsize=10mb),
(name='test1_log2',
filename='E:\SQLServer\test1\Logs\test1_log2.LDF',
size=5mb,
maxsize=10mb)

test1_log1 would be used, then test1_log2 before wrapping around back to test1_log1.

I had actually researched this issue quite considerably before I posted this question, including reading 3 SQL Server books. The purpose and use of the log is described, but not this issue. If you look at the following post you can see that there is a difference of opinion, where some think they are used concurrently and some think that they are used sequentially:

http://www.mcse.ms/showthread.php?threadid=1184072&perpage=10&pagenumber=1

I tried to imply this before when I posted the thread. My main problem was that folks couldn't agree between themselves how they were used so I was hoping for some clarification.

Thanks for responding anyway.

No problemo!

I've never seen an instance where more than one log file was ever in use at one time. Theoretically there is a tiny window when two adjacent (sequential) log files might be active for the duration of one DML statement, but I've never seen that happen in the real world.

I've found lots of discussions of log files, and how people think those files are used. I've never seen anything to contradict any of the sources that I cited in my previous posting. I don't believe that there is any way to have more than one log file active for a given database at one time, beyond the tiny window theoretically allowed by the log sequencing mechanism.

-PatP

That was the reason why I was looking into it. All of the configurations we have here have one log, but the docs say you can specify more than one. In Oracle more than one redo log group is mandatory, and the reason is quite straightforward. In SQL Server the transaction log can expand, so one of the main reasons for having more than one is eliminated. I was wondering why you would want more than one, other than the lack of disk space to allow it to expand, or manual duplexing (which is obviously not the case, as I have discovered). I was thinking that it may provide more flexibility in a recovery scenario, or maybe performance benefits when it comes to truncating and backing up the transaction log(s). I have seen some posts where this action can sometimes adversely affect user performance.

Maybe I am digging a bit too deep too early in the process, and I'm sure everything will become clear with the passage of time, but any pointers on this subject would still be appreciated.

There are probably a gazillion different reasons for allowing multiple log files, but the one that I've hit most frequently is when someone sets up a database (and of course its log file too) on a drive that is "bigger than they'll ever need". Through experience, I've discovered that only infinity is really infinity... For one reason or another, you eventually use "more than you'll ever need" in almost every case! When the database consumes all of the available disk, you can create a new log file on a different disk drive in order to get enough breathing room to fix the underlying problem.

I'm sure that there are lots of other reasons, but that one alone is enough to justify the functionality to me.
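In that emergency, the rescue step might look like this (the database name, file name, path, and size are all hypothetical):

```sql
-- Add a second log file on a drive that still has free space,
-- buying room to back up the log and fix the real problem.
ALTER DATABASE test1
ADD LOG FILE
  (NAME = test1_log2,
   FILENAME = 'F:\SQLServer\test1\Logs\test1_log2.LDF',
   SIZE = 100MB)
```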

There is a lot of information within BOL (SQL Server Books Online), but the real meaty stuff is in the books that I cited in my previous posting. Kalen has forgotten more about indexing than I'll ever know, and the Resource Kit ought to be required reading for someone with your technical background... It gets into the "care and feeding" that an Oracle DBA is accustomed to, and explains which things are important to SQL Server and which are irrelevant.

If you are in a major metro area, it doesn't hurt to look for a user group. Even if you aren't in a major metro, check out PASS (http://www.sqlpass.org) for more insight.

-PatP

That's the conclusion I was coming to. I appreciate the input. I'm coming up to speed quite nicely on the fundamentals of SQL Server, but I'm still receptive to any direction that might facilitate this process. Thanks.

Sure,

SQL Server is simply easier than Oracle...

Notice I didn't say better...

The multiple Redo Logs allow for some pretty substantial amount of transactions...

In all cases I've ever seen in SQL Server, there's only 1...

I dump tranny logs every 10 minutes in some environments... so in that case there's more than 1... I guess that's where the confusion comes in...

Redo Logs are like partitioned logs... which SQL Server doesn't have... I don't think (and you know, that happens waaaay too much)

From Books Online:

Transaction Logs
A database in Microsoft SQL Server 2000 has at least one data file and one transaction log file. Data and transaction log information is never mixed on the same file, and individual files are used by only one database.

SQL Server uses the transaction log of each database to recover transactions. The transaction log is a serial record of all modifications that have occurred in the database as well as the transaction that performed each modification. The transaction log records the start of each transaction. It records the changes to the data and enough information to undo the modifications (if necessary later) made during each transaction. For some large operations, such as CREATE INDEX, the transaction log instead records the fact that the operation took place. The log grows continuously as logged operations occur in the database.

The transaction log records the allocation and deallocation of pages and the commit or rollback of each transaction. This allows SQL Server either to apply (roll forward) or back out (roll back) each transaction in the following ways:

A transaction is rolled forward when you apply a transaction log. SQL Server copies the after image of every modification to the database or reruns statements such as CREATE INDEX. These actions are applied in the same sequence in which they originally occurred. At the end of this process, the database is in the same state it was in at the time the transaction log was backed up.

A transaction is rolled back when you back out an incomplete transaction. SQL Server copies the before images of all modifications to the database since the BEGIN TRANSACTION. If it encounters transaction log records indicating that a CREATE INDEX was performed, it performs operations that logically reverse the statement. These before images and CREATE INDEX reversals are applied in the reverse of their original sequence.
At a checkpoint, SQL Server ensures that all transaction log records and database pages modified are written to disk. During the recovery process of each database that occurs when SQL Server is restarted, a transaction must be rolled forward only when it is not known whether all the data modifications in the transaction were actually written from the SQL Server buffer cache to disk. Because a checkpoint forces all modified pages to disk, it represents the point at which the startup recovery must start rolling forward transactions. Because all pages modified before the checkpoint are guaranteed to be on disk, there is no need to roll forward anything done before the checkpoint.

Transaction log backups enable you to recover the database to a specific point in time (for example, prior to entering unwanted data), or to the point of failure. Transaction log backups should be a consideration in your media recovery strategy. For more information,

Data devices are written to in concurrent fashion (if more than 1 is defined), while log devices are... well, that was already mentioned. There is a very good reason to have multiple small-sized trx log files defined for servers with 5+ user databases and 2+ disk arrays dedicated to log devices.

Hey Brett, what's up with citing a BOL article that has nothing to do with the question?


multiple transaction ?

I have a question regarding multiple transactions. I
would like to create a main stored procedure that calls
several other stored procedures. I would like to use
transactions when I write all of my SPs but am unsure
how it would work. I am running into problems when a
rollback occurs when I have multiple transactions. Is
there a standard way of setting up a commit/rollback
procedure in all SPs? I always use the following but it
does not seem to work correctly.
BEGIN TRANSACTION
IF @ErrorCount = 0
COMMIT TRANSACTION
ELSE
ROLLBACK TRANSACTION
Do I need something in here to look at the transaction
count?
Any help would be appreciated.
Thanks
B.A.

I have to assume that @ErrorCount is something of your doing, so I will also
assume it is getting set correctly. When you issue a ROLLBACK, regardless
of where it's issued, all the open transactions will be rolled back. So you
want to test whether @@TRANCOUNT is greater than 0 before issuing the
rollback, otherwise you will get an error.
Andy
"B.A. Baracus" <JCoxEUP@.hotmail.com> wrote in message
news:014c01c34f12$4d4b6c20$a501280a@.phx.gbl...
> I have a question regarding multiple transactions. I
> would like to create a main stored procedure that calls
> several other stored procedures. I would like to use
> transactions when I write all of my SP's but am unsure
> how it would work. I am running into problems when a
> rollback occurs when I have multiple transactions. Is
> there a standard way of setting up a commit/rollback
> procedure in all SP's? I always use the following but it
> does not seem to work correctly.
> BEGIN TRANSACTION
> IF @ErrorCount = 0
> COMMIT TRANSACTION
> ELSE
> ROLLBACK TRANSACTION
> Do i need something in here to look at the transaction
> count?
> Any help would be appreciated.
> Thanks
> B.A.
>
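Andy's point about ROLLBACK undoing every statement in the open transaction, and about checking @@TRANCOUNT before rolling back, can be mimicked outside SQL Server. A minimal Python/SQLite sketch (table names borrowed from the first thread; the failure is forced by a deliberately misspelt table name, and `conn.in_transaction` stands in for the `@@TRANCOUNT > 0` test):

```python
import sqlite3

# Two tables mirroring the schema from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userid INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE mainSettings (userCnt INTEGER)")
conn.execute("INSERT INTO users VALUES (102)")
conn.execute("INSERT INTO mainSettings VALUES (1)")
conn.commit()

try:
    conn.execute("DELETE FROM users WHERE userid = 102")          # succeeds
    conn.execute("UPDATE mainSetting SET userCnt = userCnt - 1")  # fails: no such table
    conn.commit()
except sqlite3.Error:
    # Only roll back if a transaction is actually open -- the analogue
    # of testing @@TRANCOUNT > 0 before ROLLBACK in T-SQL.
    if conn.in_transaction:
        conn.rollback()

users_left = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(users_left)  # 1 -- the successful DELETE was rolled back with everything else
```

The rollback undoes the DELETE even though that statement succeeded, which is exactly the "all open transactions will be rolled back" behaviour described above.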

Friday, March 9, 2012

Multiple Table Queries

I am having problems creating a multiple table query for a stored procedure.

Here is what I have..

db1 = transaction database with multiple fields keyed on transid, and membid

db2 = transaction legal details keyed on transid & legalid

db3 = member details keyed on membid & officeid

db4 = office details keyed on officeid & membid

What I am after is a query which will result in ALL details being returned for a particular transaction

ie. transaction detail, legal detail, member detail & office detail...

please help|||something like...

select db1.*, db2.*, db3.*, db4.*
from db1
join db2 on db1.transid = db2.transid
join db3 on db1.membid = db3.membid
join db4 on db3.officeid = db4.officeid|||Thanks, worked like a charm.

Much Obliged.. have more queries to do for this project, I am sure I will have more posts for you.
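The four-table join suggested above can be tried end-to-end in a small Python/SQLite sketch (the non-key columns here are invented for illustration; the thread only names the key fields):

```python
import sqlite3

# Hypothetical columns beyond the keys named in the thread.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE db1 (transid INTEGER, membid INTEGER, amount REAL);
CREATE TABLE db2 (transid INTEGER, legalid INTEGER, legal_detail TEXT);
CREATE TABLE db3 (membid INTEGER, officeid INTEGER, member_name TEXT);
CREATE TABLE db4 (officeid INTEGER, membid INTEGER, office_name TEXT);
INSERT INTO db1 VALUES (1, 10, 99.0);
INSERT INTO db2 VALUES (1, 100, 'deed');
INSERT INTO db3 VALUES (10, 7, 'Alice');
INSERT INTO db4 VALUES (7, 10, 'Head Office');
""")

# One row per transaction carrying transaction, legal, member and office detail.
rows = conn.execute("""
    SELECT db1.*, db2.*, db3.*, db4.*
    FROM db1
    JOIN db2 ON db1.transid = db2.transid
    JOIN db3 ON db1.membid = db3.membid
    JOIN db4 ON db3.officeid = db4.officeid
    WHERE db1.transid = 1
""").fetchall()
print(rows)
```

Note that if any child table has several rows per transid, the result multiplies accordingly (see the "Query Go To Sleep" thread below for where that bites).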

Multiple Table Joins Makes Query Go To Sleep

Hello All & Thanks in advance for your help!

Background:

1) tblT_Documents is the primary parent transaction table that has 10
fields and about 250,000 rows
2) There are 9 child tables, each having 3 fields: their own
PK; the FK back to the parent table; and the unique data for that
table. There is a one to many relation between the parent and each of
the 9 child rows. Each child table has between 100,000 and 300,000
rows.
3) There are indexes on every field of the child tables (though I
don't believe that they are helping in this situation)
4) The client needs to be presented a view that has 5 of the main
fields from the parent table, along with any and all corresponding
data from the child tables.
5) The client will select this view by doing some pattern-matching
search on one of the child records' detail (e.g. field-name LIKE
%search-item% - so much for the indexes...)

Problem:

When I do the simple join of just the parent with one of the children,
the search works *fairly* well and returns the five parent fields and
the corresponding matching child field.

However, as soon as I add any one of the other child records to simply
display its unique data along with the previously obtained results,
the resulting query hangs.

Is the overall structure of the tables not conducive to this kind of
query? Is this a situation where de-normalization will be required to
obtain the desired results? Or, more hopefully, am I just an idiot
and there is some simpler solution to this problem?!

Thanks again for your assistance!

- Ed|||Let's have the full table schema (including indexes and a few sample records)
of all your tables and a copy of the query(s) you are running.
http://www.aspfaq.com/etiquette.asp?id=5006

We'll take a look and see what's going on.

Mr Tea

<Ed_No_Spam_Please_Weber@.Weber_Spam_Not_Enterprises.Org> wrote in message
news:5jte111qne3gqh0k6o5etlif27jm1rseh1@.4ax.com...
> Hello All & Thanks in advance for your help!
> Background:
> 1) tblT_Documents is the primary parent transaction table that has 10
> fields and about 250,000 rows
> 2) There are 9 child tables with each having 3 fields each, their own
> PK; the FK back to the parent table; and the unique data for that
> table. There is a one to many relation between the parent and each of
> the 9 child rows. Each child table has between 100,000 and 300,000
> rows.
> 3) There are indexes on every field of the child tables (though I
> don't believe that they are helping in this situation)
> 4) The client needs to be presented a view that has 5 of the main
> fields from the parent table, along with any and all corresponding
> data from the child tables.
> 5) The client will select this view by doing some pattern-matching
> search on one of the child records' detail (e.g. field-name LIKE
> %search-item% - so much for the indexes...)
> Problem:
> When I do the simple join of just the parent with one of the children,
> the search works *fairly* well and returns the five parent fields and
> the corresponding matching child field.
> However, as soon as I add any one of the other child records to simply
> display its unique data along with the previously obtained results,
> the resulting query hangs.
> Is the overall structure of the tables not conducive to this kind of
> query? Is this a situation where de-normalization will be required to
> obtain the desired results? Or, more hopefully, am I just an idiot
> and there is some simpler solution to this problem?!
> Thanks again for your assistance!
> - Ed|||(Ed_No_Spam_Please_Weber@.Weber_Spam_Not_Enterprises.Org) writes:
> When I do the simple join of just the parent with one of the children,
> the search works *fairly* well and returns the five parent fields and
> the corresponding matching child field.
> However, as soon as I add any one of the other child records to simply
> display it's unique data along with the previously obtained results,
> the resulting query hangs.
> Is the overall structure of the tables not conducive to this kind of
> query? Is this a situation where de-normalization will be required to
> obtain the desired results? Or, more hopefully, am I just an idiot
> and there is some simpler solution to this problem?!

I can only echo what Lee said. Please include CREATE TABLE and CREATE INDEX
statements for the involved tables, and the query you are having problem
with.

My guess from your description is that you simply have an error in
you query...

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Hello All!

Per request, the following is the generated script for 4 tables; the
query that works 'OK' and the query that doesn't work.

<----- Table Defs Begin -------
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_Amounts_FK00]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblT_Amounts] DROP CONSTRAINT tblT_Amounts_FK00
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_AssocBP_FK00]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblT_AssocBP] DROP CONSTRAINT tblT_AssocBP_FK00
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_BlockBld_FK00]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblT_BlockBld] DROP CONSTRAINT tblT_BlockBld_FK00
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_Grantees_FK00]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblT_Grantees] DROP CONSTRAINT tblT_Grantees_FK00
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_Grantors_FK00]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblT_Grantors] DROP CONSTRAINT tblT_Grantors_FK00
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_LotUnit_FK00]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblT_LotUnit] DROP CONSTRAINT tblT_LotUnit_FK00
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_Subdivision_FK00]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblT_Subdivision] DROP CONSTRAINT
tblT_Subdivision_FK00
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_TaxIDs_FK00]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[tblT_TaxIDs] DROP CONSTRAINT tblT_TaxIDs_FK00
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_Documents]') and OBJECTPROPERTY(id,
N'IsUserTable') = 1)
drop table [dbo].[tblT_Documents]
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_Grantees]') and OBJECTPROPERTY(id,
N'IsUserTable') = 1)
drop table [dbo].[tblT_Grantees]
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_Grantors]') and OBJECTPROPERTY(id,
N'IsUserTable') = 1)
drop table [dbo].[tblT_Grantors]
GO

if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tblT_Subdivision]') and OBJECTPROPERTY(id,
N'IsUserTable') = 1)
drop table [dbo].[tblT_Subdivision]
GO

CREATE TABLE [dbo].[tblT_Documents] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[FileNo] [nvarchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS
NULL ,
[DocType] [nvarchar] (5) COLLATE SQL_Latin1_General_CP1_CI_AS
NOT NULL ,
[YorDNo] [nvarchar] (8) COLLATE SQL_Latin1_General_CP1_CI_AS
NULL ,
[DocDt] [datetime] NULL ,
[Book] [nvarchar] (5) COLLATE SQL_Latin1_General_CP1_CI_AS
NULL ,
[Page] [int] NULL ,
[ARB] [nvarchar] (7) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
,
[Comments] [ntext] COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[AudCreateWho] [nvarchar] (10) COLLATE
SQL_Latin1_General_CP1_CI_AS NULL ,
[AudCreateWhen] [datetime] NULL ,
[AudUpdateWho] [nvarchar] (10) COLLATE
SQL_Latin1_General_CP1_CI_AS NULL ,
[AudUpdateWhen] [datetime] NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

CREATE TABLE [dbo].[tblT_Grantees] (
[GranteeID] [int] IDENTITY (1, 1) NOT NULL ,
[MainID] [int] NULL ,
[GRANTEE] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS
NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[tblT_Grantors] (
[GrantorID] [int] IDENTITY (1, 1) NOT NULL ,
[MainID] [int] NULL ,
[GRANTOR] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS
NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[tblT_Subdivision] (
[SubdivID] [int] IDENTITY (1, 1) NOT NULL ,
[MainID] [int] NULL ,
[Subdivision] [nvarchar] (50) COLLATE
SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[tblT_Documents] WITH NOCHECK ADD
CONSTRAINT [PK_tblT_Documents] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[tblT_Grantees] WITH NOCHECK ADD
CONSTRAINT [PK_tblT_Grantees] PRIMARY KEY CLUSTERED
(
[GranteeID]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[tblT_Grantors] WITH NOCHECK ADD
CONSTRAINT [PK_tblT_Grantors] PRIMARY KEY CLUSTERED
(
[GrantorID]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[tblT_Documents] ADD
CONSTRAINT [DF__tblT_Docu__DocDt__1367E606] DEFAULT
(convert(datetime,convert(varchar,getdate(),1),1)) FOR [DocDt],
CONSTRAINT [DF__tblT_Docum__Page__145C0A3F] DEFAULT (0) FOR
[Page],
CONSTRAINT [DF__tblT_Docu__AudCr__15502E78] DEFAULT
(getdate()) FOR [AudCreateWhen],
CONSTRAINT [DF__tblT_Docu__AudUp__164452B1] DEFAULT
(getdate()) FOR [AudUpdateWhen],
CONSTRAINT [CK tblT_Documents DocDt] CHECK ([DocDt] >=
'1/1/1837' and [DocDt] <=
convert(datetime,convert(varchar,getdate(),1),1))
GO

CREATE INDEX [IX_FileNo] ON [dbo].[tblT_Documents]([FileNo]) ON
[PRIMARY]
GO

CREATE INDEX [IX_DocType] ON [dbo].[tblT_Documents]([DocType]) ON
[PRIMARY]
GO

CREATE INDEX [IX_DocDt] ON [dbo].[tblT_Documents]([DocDt]) ON
[PRIMARY]
GO

CREATE INDEX [IX_Book] ON [dbo].[tblT_Documents]([Book]) ON
[PRIMARY]
GO

CREATE INDEX [IX_Page] ON [dbo].[tblT_Documents]([Page]) ON
[PRIMARY]
GO

CREATE UNIQUE INDEX [ID] ON [dbo].[tblT_Documents]([ID]) ON
[PRIMARY]
GO

ALTER TABLE [dbo].[tblT_Grantees] ADD
CONSTRAINT [DF__tblT_Gran__MainI__0EA330E9] DEFAULT (0) FOR
[MainID]
GO

CREATE INDEX [IX_MainID] ON [dbo].[tblT_Grantees]([MainID]) ON
[PRIMARY]
GO

CREATE INDEX [IX_Grantee] ON [dbo].[tblT_Grantees]([GRANTEE]) ON
[PRIMARY]
GO

ALTER TABLE [dbo].[tblT_Grantors] ADD
CONSTRAINT [DF__tblT_Gran__MainI__09DE7BCC] DEFAULT (0) FOR
[MainID]
GO

CREATE INDEX [IX_MainID] ON [dbo].[tblT_Grantors]([MainID]) ON
[PRIMARY]
GO

CREATE INDEX [IX_Grantor] ON [dbo].[tblT_Grantors]([GRANTOR]) ON
[PRIMARY]
GO

ALTER TABLE [dbo].[tblT_Subdivision] ADD
CONSTRAINT [DF__tblT_Subd__MainI__7B905C75] DEFAULT (0) FOR
[MainID],
CONSTRAINT [aaaaatblT_Subdivision_PK] PRIMARY KEY
NONCLUSTERED
(
[SubdivID]
) ON [PRIMARY]
GO

CREATE INDEX [{C3638ECB-1FC5-4012-BE3A-855E252E07B2}] ON
[dbo].[tblT_Subdivision]([MainID]) ON [PRIMARY]
GO

CREATE INDEX [GrantorID] ON [dbo].[tblT_Subdivision]([SubdivID]) ON
[PRIMARY]
GO

CREATE INDEX [MainID] ON [dbo].[tblT_Subdivision]([MainID]) ON
[PRIMARY]
GO

CREATE INDEX [Subdivision] ON
[dbo].[tblT_Subdivision]([Subdivision]) ON [PRIMARY]
GO

ALTER TABLE [dbo].[tblT_Documents] ADD
CONSTRAINT [tblT_Documents_FK00] FOREIGN KEY
(
[DocType]
) REFERENCES [dbo].[tblM_DocTypes] (
[DocType]
)
GO

ALTER TABLE [dbo].[tblT_Grantees] ADD
CONSTRAINT [tblT_Grantees_FK00] FOREIGN KEY
(
[MainID]
) REFERENCES [dbo].[tblT_Documents] (
[ID]
) ON DELETE CASCADE ON UPDATE CASCADE
GO

ALTER TABLE [dbo].[tblT_Grantors] ADD
CONSTRAINT [tblT_Grantors_FK00] FOREIGN KEY
(
[MainID]
) REFERENCES [dbo].[tblT_Documents] (
[ID]
) ON DELETE CASCADE ON UPDATE CASCADE
GO

ALTER TABLE [dbo].[tblT_Subdivision] ADD
CONSTRAINT [tblT_Subdivision_FK00] FOREIGN KEY
(
[MainID]
) REFERENCES [dbo].[tblT_Documents] (
[ID]
) ON DELETE CASCADE ON UPDATE CASCADE
GO

<----- Table Defs End -------
<----- Good Query Begins -------
SELECT DISTINCT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR
FROM tblT_Documents d LEFT OUTER JOIN
tblT_Grantors gro ON d.ID = gro.MainID
WHERE (gro.GRANTOR LIKE N'%pracht%')

<----- Good Query Ends -------
<----- Failing Query Begins -------
SELECT DISTINCT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR,
gre.GRANTEE, s.Subdivision
FROM tblT_Documents d LEFT OUTER JOIN
tblT_Subdivision s ON d.ID = s.MainID LEFT OUTER
JOIN
tblT_Grantees gre ON d.ID = gre.MainID LEFT
OUTER JOIN
tblT_Grantors gro ON d.ID = gro.MainID
WHERE (gro.GRANTOR LIKE N'%pracht%')

<----- Failing Query Ends -------|||(Ed_No_Spam_Please_Weber@.Weber_Spam_Not_Enterprises.Org) writes:
> Per request, the following is the generate script for 4 tables; the
> query that works 'OK' and the query that doesn't work.

Thanks for the scripts!

There are a couple of things I like to address. Permit me to start
with the data model as such.

> CREATE TABLE [dbo].[tblT_Grantors] (
> [GrantorID] [int] IDENTITY (1, 1) NOT NULL ,
> [MainID] [int] NULL ,
> [GRANTOR] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS
> NULL
> ) ON [PRIMARY]
> GO

This table looks funny to me. Why would you insert a row with NULL values
in MainID and/or GRANTOR? Is it possible to insert a GRANTOR without
connecting him to a document? Does it make sense to connect the same
GRANTOR twice to the same document? What does a NULL GRANTOR mean?

The definition I would expect is:

CREATE TABLE [dbo].[tblT_Grantors] (
[MainID] [int] NOT NULL ,
[GRANTOR] [nvarchar] (50) NOT NULL
) ON [PRIMARY]

And then (MainID, GRANTOR) would make up the primary key. But, of course,
since I don't know the business domain, I may be wrong here.

> ALTER TABLE [dbo].[tblT_Grantors] WITH NOCHECK ADD
> CONSTRAINT [PK_tblT_Grantors] PRIMARY KEY CLUSTERED
> (
> [GrantorID]
> ) ON [PRIMARY]

Assuming now that you really need to keep GrantorID, the query in question
is likely to benefit if the primary key was non-clustered, and the index on
MainID was made clustered.

This comment applies to the Grantee and Subdivisions tables as well.

As for the indexing of all columns - this is likely to be of limited
use. For some queries composite indexes may be better. (Although that is
not the case here.)

> SELECT DISTINCT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR
> FROM tblT_Documents d LEFT OUTER JOIN
> tblT_Grantors gro ON d.ID = gro.MainID
> WHERE (gro.GRANTOR LIKE N'%pracht%')

This query appears to have an error in it. Or at least it is written in
an unusual way. If you mean what the query produces, the normal way
would be to say:

SELECT DISTINCT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR
FROM tblT_Documents d
JOIN tblT_Grantors gro ON d.ID = gro.MainID
WHERE (gro.GRANTOR LIKE N'%pracht%')

That is, there is no need for the LEFT JOIN. Since you have a condition
on Grantors in the WHERE clause, you are filtering away all rows with
documents without grantors. Or documents that has other grantors than
%pracht%. If you want all documents, but only include grantors like
%pracht%, the query would be:

SELECT DISTINCT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR
FROM tblT_Documents d
LEFT JOIN tblT_Grantors gro ON d.ID = gro.MainID
AND (gro.GRANTOR LIKE N'%pracht%')

This far I have not discussed the DISTINCT. It appears to be unnecessary,
but since your data model permits the same grantor being added more than
once to the same document, it may not be.
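The difference Erland describes between filtering in the WHERE clause (which discards documents without a matching grantor) and filtering in the ON clause (which keeps every document) can be reproduced with a tiny Python/SQLite sketch (cut-down versions of the thread's tables; SQLite's LIKE is case-insensitive for ASCII, matching the SQL Server collation here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblT_Documents (ID INTEGER PRIMARY KEY);
CREATE TABLE tblT_Grantors (MainID INTEGER, GRANTOR TEXT);
INSERT INTO tblT_Documents VALUES (1), (2);
INSERT INTO tblT_Grantors VALUES (1, 'Pracht');  -- document 2 has no grantor
""")

# Predicate in WHERE: the LEFT JOIN degenerates to an inner join.
filter_in_where = conn.execute("""
    SELECT d.ID, g.GRANTOR
    FROM tblT_Documents d
    LEFT JOIN tblT_Grantors g ON d.ID = g.MainID
    WHERE g.GRANTOR LIKE '%pracht%'
""").fetchall()

# Predicate in ON: every document survives, unmatched ones with NULL grantor.
filter_in_on = conn.execute("""
    SELECT d.ID, g.GRANTOR
    FROM tblT_Documents d
    LEFT JOIN tblT_Grantors g ON d.ID = g.MainID
                             AND g.GRANTOR LIKE '%pracht%'
""").fetchall()

print(len(filter_in_where), len(filter_in_on))  # 1 2
```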

> <----- Failing Query Begins -------
> SELECT DISTINCT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR,
> gre.GRANTEE, s.Subdivision
> FROM tblT_Documents d
> LEFT OUTER JOIN tblT_Subdivision s ON d.ID = s.MainID
> LEFT OUTER JOIN tblT_Grantees gre ON d.ID = gre.MainID
> LEFT OUTER JOIN tblT_Grantors gro ON d.ID = gro.MainID
> WHERE (gro.GRANTOR LIKE N'%pracht%')

OK, so what you have here is not only a performance problem. The query is
simply plain wrong. Assume that for a document there are three grantees,
three subdivisions, and three grantors all matching %pracht%. How many
rows will you get for this document in the result set? Answer: 27. You
have a cartesian product of the subtables, and this is probably why the
query takes so long to compute.
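The 27-row blow-up is easy to verify with a Python/SQLite sketch (abbreviated table names; one document with three rows in each child table, joined the same way as the failing query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE docs (ID INTEGER PRIMARY KEY);
CREATE TABLE grantors (MainID INTEGER, GRANTOR TEXT);
CREATE TABLE grantees (MainID INTEGER, GRANTEE TEXT);
CREATE TABLE subdivs  (MainID INTEGER, Subdivision TEXT);
INSERT INTO docs VALUES (1);
""")
for i in range(3):
    conn.execute("INSERT INTO grantors VALUES (1, ?)", ("pracht %d" % i,))
    conn.execute("INSERT INTO grantees VALUES (1, ?)", ("grantee %d" % i,))
    conn.execute("INSERT INTO subdivs  VALUES (1, ?)", ("subdiv %d" % i,))

# Same shape as the failing query: three independent one-to-many joins.
rows = conn.execute("""
    SELECT d.ID, gro.GRANTOR, gre.GRANTEE, s.Subdivision
    FROM docs d
    LEFT JOIN subdivs  s   ON d.ID = s.MainID
    LEFT JOIN grantees gre ON d.ID = gre.MainID
    LEFT JOIN grantors gro ON d.ID = gro.MainID
    WHERE gro.GRANTOR LIKE '%pracht%'
""").fetchall()
print(len(rows))  # 27 = 3 grantors x 3 grantees x 3 subdivisions
```

With 250,000 documents and 100,000-300,000 rows per child table, that multiplication is what makes the real query appear to hang.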

Unfortunately, I cannot say what the correct query is, since I don't know
the business domain. It may be that you cannot really return all in
one query, but will have to query for subdivisions and grantors
separately. To address this, I would need more information of what you
are trying to achieve.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Erland has highlighted most of these points but allow me to recap on the
performance issues.

1) shifting the clustered index on the 3 sub tables to MainID will help this
and other queries on these tables.
2) don't use LEFT JOIN if a JOIN will do, as it limits the options available
to the query optimiser; you can remove it from tblT_Grantors with no
functionality change in the query.
3) don't use DISTINCT unless you need it; the workload it generates is
non-trivial.

SELECT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR,
gre.GRANTEE, s.Subdivision
FROM tblT_Documents d
JOIN tblT_Grantors gro ON d.ID = gro.MainID
LEFT OUTER JOIN tblT_Subdivision s ON d.ID = s.MainID
LEFT OUTER JOIN tblT_Grantees gre ON d.ID = gre.MainID
WHERE (gro.GRANTOR LIKE N'%pracht%')

If this gives you no joy, check the execution plan and see if you are
getting the cartesian product issue (input rows scaling up by a factor after
each join operator).

Mr Tea|||Erland and Mr. Tea!

Thank you both for your wonderful insights.

Here are some clarifying items:

1) Yes, original data exists whereby the exact same Grantor appears
as a separate record for the exact same Document ID - this is original
source data (about 75,000 rows) that was never cleaned up --
Removing duplicates will correct this problem but it hasn't happened
yet. Therefore, the 'distinct' clause.

Second... The objective of the original query was to eventually put
the whole thing in a stored proc and then pass in the entire 'Where'
clause as a parameter. Therefore, the access form would collect
whichever child fields they wanted to search on, and the 'Where'
clause would be built dynamically in Access, and then passed as a
parameter to the SP. The LEFT JOIN seems necessary because the
criteria won't always be 'GRANTOR LIKE...' Rather, it may end up
being 'Subdivision LIKE ... AND GRANTEE LIKE... etc. etc. By keeping
the query focused on the Document parent table with all of the child
tables as Left Joins from the parent, then a generic where clause can
be passed in without problems. The JOIN alone would drop records if
the Where clause doesn't match the table being merely joined and not
left joined. (I think I'm saying this right...)

On Mon, 21 Feb 2005 08:02:43 GMT, "Lee Tudor" <mr_tea@.ntlworld.com>
wrote:

>Erland has highlighted most of these points but allow me to recap on the
>performance issues.
>1) shifting the clustered index on the 3 sub tables to MainID will help this
>and other queries on these tables.
>2) dont use left join if a join will do as it limits the options available
>to the query optimiser, you can remove it from tblT_Grantors with no
>functionality change in the query.
>3) dont use distinct unless you need it, the workload is generates is
>non-trivial.
>SELECT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR,
>gre.GRANTEE, s.Subdivision
>FROM tblT_Documents d
> JOIN tblT_Grantors gro ON d.ID = gro.MainID
> LEFT OUTER JOIN tblT_Subdivision s ON d.ID = s.MainID
> LEFT OUTER JOIN tblT_Grantees gre ON d.ID = gre.MainID
>WHERE (gro.GRANTOR LIKE N'%pracht%')
>If this gives you no joy, check the execution plan and see if you are
>getting the cartesian product issue (input rows scaling up by a factor after
>each join operator).
>Mr Tea|||(Ed_No_Spam_Please_Weber@.Weber_Spam_Not_Enterprises.Org) writes:
> Second... The objective of the original query was to eventually put
> the whole thing in a stored proc and then pass in the entire 'Where'
> clause as a parameter. Therefore, the access form would collect
> whichever child fields they wanted to search on, and the 'Where'
> clause would be built dynamically in Access, and then passed as a
> parameter to the SP. The LEFT JOIN seems necessary because the
> criteria won't always be 'GRANTOR LIKE...' Rather, it may end up
> being 'Subdivision LIKE ... AND GRANTEE LIKE... etc. etc. By keeping
> the query focused on the Document parent table with all of the child
> tables as Left Joins from the parent, then a generic where clause can
> be passed in without problems. The JOIN alone would drop records if
> the Where clause doesn't match the table being merely joined and not
> left joined. (I think I'm saying this right...)

But you still have the problem of the cartesian product. Before you go
any further you need to specify how you should handle multiple child
records for a document.

If you only were to present data from the document, it would be an
easy thing:

SELECT ...
FROM documents d
WHERE EXISTS (SELECT *
FROM grantors g
WHERE g.MainID = d.id
AND g.grantor LIKE '%pracht%')
AND EXISTS (...)

If you only want to present one grantor, grantee, subdivision etc per
document it's still an easy thing. Take the above into a temp table
and then:

UPDATE #temp
SET grantor = (SELECT MAX(grantor)
FROM grantors g
WHERE g.MainID = d.id)
FROM #temp d

If you want to present all grantors etc, you do indeed have a
presentational problem that I cannot solve for you.

As for passing WHERE clauses from Access, please keep in mind that
the SQL syntax is very different in Access and SQL Server. In any
case, if your idea about writing stored procedures is that you pass
the WHERE clause as a parameter, stop writing stored procedures. You
are just increasing the complexity without gaining anything at all.

Since you appear to be into dynamic search conditions, you may be
interested in this article on my web site:
http://www.sommarskog.se/dyn-search.html. That is, if you really
want to do it in T-SQL and use stored procedures. If you are
more comfortable building the SQL statement in VB code in Access, do
that. But in that case, build the entire SQL statement there.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||If you are building up your where clause dynamically, then build up your
join clause dynamically.

CREATE PROCEDURE Search @whereclause varchar(500), @joingrantor bit,
@joingrantee bit, @joinsubdivision bit AS

These bits would indicate whether you need a 'JOIN' or a 'LEFT JOIN',
depending on whether the table appears in the where clause. One other step
would be to attempt to make the search clause sargable: don't automatically
tack on %% if you don't need to (this mainly applies to the prefix), as a
leading wildcard also limits the options available to the optimiser. You
could decide to prefix the % in your code only when it is necessary.
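The difference shows up in whether the optimiser can use a nonclustered index on the searched column; only the first form allows an index seek:

```sql
-- Sargable: an index seek on tblT_Grantors.GRANTOR is possible
WHERE gro.GRANTOR LIKE N'pracht%'

-- Not sargable: the leading % forces a scan of every grantor value
WHERE gro.GRANTOR LIKE N'%pracht%'
```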

If you could get the code to generate a block similar to this containing the
above suggestions:

SELECT DISTINCT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR,
gre.GRANTEE, s.Subdivision
FROM tblT_Documents d
JOIN tblT_Grantors gro ON d.ID = gro.MainID
LEFT OUTER JOIN tblT_Subdivision s ON d.ID = s.MainID
LEFT OUTER JOIN tblT_Grantees gre ON d.ID = gre.MainID
WHERE (gro.GRANTOR LIKE N'pracht%')

with clustered indexes on: tblT_Grantors.MainID, tblT_Subdivision.MainID,
tblT_Grantees.MainID
and nonclustered indexes on: tblT_Grantors.Grantor,
tblT_Subdivision.Subdivision, tblT_Grantees.Grantee

You should notice a performance increase by several orders of magnitude.
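As DDL, the suggested indexing would look something like this (the index names are made up here; the key columns follow the join and search columns used above):

```sql
-- Clustered on the join key of each child table
CREATE CLUSTERED INDEX cix_Grantors_MainID ON tblT_Grantors (MainID)
CREATE CLUSTERED INDEX cix_Subdivision_MainID ON tblT_Subdivision (MainID)
CREATE CLUSTERED INDEX cix_Grantees_MainID ON tblT_Grantees (MainID)

-- Nonclustered on each searched column
CREATE NONCLUSTERED INDEX ix_Grantors_Grantor ON tblT_Grantors (GRANTOR)
CREATE NONCLUSTERED INDEX ix_Subdivision_Subdivision ON tblT_Subdivision (Subdivision)
CREATE NONCLUSTERED INDEX ix_Grantees_Grantee ON tblT_Grantees (GRANTEE)
```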

Mr Tea

<Ed_No_Spam_Please_Weber@Weber_Spam_Not_Enterprises.Org> wrote in message
news:iuik11djrhqhcle7knq16dfgtntl7bpj76@4ax.com...
> Erland and Mr. Tea!
> Thank you both for your wonderful insights.
> Here are some clarifying items:
> 1) Yes, original data exists whereby the exact same Grantor appears
> as a separate record for the exact same Document ID - this is original
> source data (about 75,000 rows) that was never cleaned up --
> Removing duplicates will correct this problem but it hasn't happened
> yet. Therefore, the 'distinct' clause.
> Second... The objective of the original query was to eventually put
> the whole thing in a stored proc and then pass in the entire 'Where'
> clause as a parameter. Therefore, the access form would collect
> whichever child fields they wanted to search on, and the 'Where'
> clause would be built dynamically in Access, and then passed as a
> parameter to the SP. The LEFT JOIN seems necessary because the
> criteria won't always be 'GRANTOR LIKE...' Rather, it may end up
> being 'Subdivision LIKE ... AND GRANTEE LIKE... etc. etc. By keeping
> the query focused on the Document parent table with all of the child
> tables as Left Joins from the parent, then a generic where clause can
> be passed in without problems. The JOIN alone would drop records if
> the Where clause doesn't match the table being merely joined and not
> left joined. (I think I'm saying this right...)
>
> On Mon, 21 Feb 2005 08:02:43 GMT, "Lee Tudor" <mr_tea@ntlworld.com>
> wrote:
>>Erland has highlighted most of these points but allow me to recap on the
>>performance issues.
>>
>>1) shifting the clustered index on the 3 sub tables to MainID will help
>>this
>>and other queries on these tables.
>>2) don't use left join if a join will do, as it limits the options available
>>to the query optimiser, you can remove it from tblT_Grantors with no
>>functionality change in the query.
>>3) don't use distinct unless you need it; the workload it generates is
>>non-trivial.
>>
>>SELECT d.ID, d.DocType, d.DocDt, d.Book, d.Page, gro.GRANTOR,
>>gre.GRANTEE, s.Subdivision
>>FROM tblT_Documents d
>> JOIN tblT_Grantors gro ON d.ID = gro.MainID
>> LEFT OUTER JOIN tblT_Subdivision s ON d.ID = s.MainID
>> LEFT OUTER JOIN tblT_Grantees gre ON d.ID = gre.MainID
>>WHERE (gro.GRANTOR LIKE N'%pracht%')
>>
>>If this gives you no joy, check the execution plan and see if you are
>>getting the cartesian product issue (input rows scaling up by a factor
>>after
>>each join operator).
>>
>>Mr Tea
>