Thursday, March 29, 2012
Help with CREATE TRIGGER syntax
CREATE TRIGGER tr_CMR_Client_Status_Confirm
ON [CMR Client Numbers]
FOR INSERT, UPDATE
AS
IF UPDATE [Current Status]
CASE WHEN [Current Status] = 'X' THEN [Status Flag] = [CMR Client Number ID] ELSE [Status Flag] = NULL END

1. IF UPDATE should have parentheses around the column name - note: if any row's [Current Status] is updated, this will evaluate to true.
2. Not a syntax error, but for clarity you should put a BEGIN..END block after the IF.
3. [Biggest problem] There's no DML statement to do anything, just a CASE expression...
4. ...which is itself incorrectly written.
Look up UPDATE, CREATE TRIGGER and CASE in BOL
It is quite unclear from the code given which table the [Status Flag]
column belongs to; otherwise I could give an example of possibly correct
trigger code.
Please provide DDL of the table.
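In the meantime, here is a rough sketch of what a corrected trigger could look like, assuming [Status Flag] and [CMR Client Number ID] are both columns of [CMR Client Numbers] itself - an assumption that may well be wrong until the DDL is posted:

CREATE TRIGGER tr_CMR_Client_Status_Confirm
ON [CMR Client Numbers]
FOR INSERT, UPDATE
AS
BEGIN
    IF UPDATE([Current Status])
    BEGIN
        -- Set the flag from the id when the new status is 'X'; clear it otherwise
        UPDATE c
        SET [Status Flag] = CASE WHEN i.[Current Status] = 'X'
                                 THEN i.[CMR Client Number ID]
                                 ELSE NULL END
        FROM [CMR Client Numbers] AS c
        JOIN inserted AS i
          ON i.[CMR Client Number ID] = c.[CMR Client Number ID]
    END
END

Note the UPDATE joins to the inserted pseudo-table, so the trigger handles multi-row INSERTs and UPDATEs, not just single rows.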
mike wrote:
>Any help would be appreciated. What's wrong with the following syntax?
>
>CREATE TRIGGER tr_CMR_Client_Status_Confirm
>ON [CMR Client Numbers]
>FOR INSERT, UPDATE
> AS
> IF UPDATE [Current Status]
> CASE WHEN [Current Status] = 'X' THEN [Status Flag] = [CMR
>Client Number ID] ELSE [Status Flag] = NULL END
>
>
>
|||What are you trying to do? It looks like you want to duplicate a value in
another column of your table, which isn't a good thing to do.
Please post DDL and sample data.
David Portas
SQL Server MVP
Tuesday, March 27, 2012
Help with blocking
I was hoping I could get some help from you all about a blocking
problem that is making me tear my hair out. I've got a client process
that is executing two DELETE statements hourly during our ten hour
production day. The first DELETE statement cleans out "expired"
entries from Table A and the second cleans out "expired" entries from
Table B.
Under normal circumstances, each of these queries executes in a few
seconds. About once a week the second query which deletes from Table B
runs long (2-3 minutes) though it does finish by itself. This creates
tons of blocking as both TABLEA and B are under heavy SELECT, INSERT,
and UPDATE activity. Here are the queries (sorta)...
Query1 - DELETE FROM TABLEA WHERE [FieldModified] < (GetDate() - '00:05:00')
Query2 - DELETE FROM TABLEB WHERE [FieldGUID] NOT IN (SELECT [FieldGUID] FROM TABLEA)
(note: FieldGUID is the PK for TABLEA)
I've checked the execution plan for the second query and I know that
it is going to perform a table scan. I also know the row counts for
Tables A and B and my I/O subsystem and can say for certain that
the second query should not run for more than a few seconds.
I've checked sysprocesses while this second query is running and
consistently find that it is not blocked, is not waiting, the [status]
column shows sleeping, open_tran column equals 2, cpu and physical_io
columns show low values and do change slowly over time. I have also
checked my normal metrics that I capture throughout the day and can't
find a problem. My server's CPU usage is low, disk queues are low to
nonexistent, low to no paging going on, no other processes running,
etc.
I can't for the life of me figure out what causes this second query to
consistently (but infrequently) run long. I'm running SQL 2000 SP3a
Enterprise Edition on a 2-node Active/Passive cluster. Each cluster
node has 8GB of memory and 4x2.8GHz CPUs.
Any and all help is greatly appreciated. Thanks.
Steve
With 'implicit_transactions' on by default, your #1 query is enclosed within
a transaction. Thus, it holds the locks until it completes the delete.
Your #2 query's SELECT statement is blocked at this point. This is probably
desired because you probably want to be sure of the data in TableA.
So, to improve this, I would suggest putting an index on your TableA's
FieldModified column. Also, consider using NOT EXISTS instead.
Now, if #2 is nothing more than a maintenance step which will run again, you
could force a READPAST (i.e. skip locked rows).
e.g.
DELETE FROM TABLEB
WHERE [FieldGUID] NOT IN (SELECT [FieldGUID]
                          FROM TABLEA WITH (READPAST))
-oj
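For reference, the index and the NOT EXISTS rewrite suggested above might look like this - a sketch reusing the table and column names from the post:

-- Support Query1's range predicate with an index
CREATE INDEX IX_TABLEA_FieldModified ON TABLEA ([FieldModified])

-- NOT EXISTS version of Query2
DELETE FROM TABLEB
WHERE NOT EXISTS (SELECT *
                  FROM TABLEA
                  WHERE TABLEA.[FieldGUID] = TABLEB.[FieldGUID])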
<steveg@.payerpath.com> wrote in message
news:1107799902.434459.80610@.z14g2000cwz.googlegroups.com...
> (snip)
|||Hi,
Did you check lastwaittype and waitresource in sysprocesses? They might give
you a hint.
We had some problems when max degree of parallelism was set to 0 (all
available processors) on some of the update statements for some queries. The
queries ran with no problems when we used OPTION (MAXDOP 1) or when we set
max degree of parallelism to some value (processor count - 1). This is just
an assumption; our server has 8 CPUs.
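As a sketch, forcing a serial plan on just the problem statement would look like this (the DELETE is the one from the post; OPTION (MAXDOP 1) is standard SQL 2000 syntax):

-- Disable parallelism for this statement only
DELETE FROM TABLEB
WHERE [FieldGUID] NOT IN (SELECT [FieldGUID] FROM TABLEA)
OPTION (MAXDOP 1)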
Also check your log file growth settings. If your transaction log file is
large and auto-growth is enabled at something like 10%, you may need to
change the log auto-growth and auto-shrink settings as well.
"steveg@.payerpath.com" wrote:
> (snip)
|||oj,
The client that sends these two statements sends them in the same order
every time and won't send the second statement until the first has
completed. At a high level the code looks like the following so I
don't think the possibility you outlined is what is actually occurring.
try
    prepare first delete
    execute 1st delete
    prepare 2nd delete
    execute 2nd delete
catch
    error handling logic
Steve
oj wrote:
> (snip)
|||Sysprocesses shows waittype=0x0000, waittime=0, and lastwaittype is
PAGELATCH_SH or PAGELATCH_EX. What I get from that information is that
the DELETE statement that causes the trouble isn't waiting on anything
but that the last time it did wait was for PAGELATCH_SH or
PAGELATCH_EX.
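For reference, this is the sort of check being described; a sketch against SQL 2000's master..sysprocesses (the spid value 53 is just a placeholder):

-- Watch the problem session's wait and transaction columns while it runs
SELECT spid, ecid, status, waittype, lastwaittype, waitresource,
       open_tran, cpu, physical_io
FROM master..sysprocesses
WHERE spid = 53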
I too have had problems with max degree of parallelism, which is why I
have it configured at the server level to be half the number of logical
CPUs in the server. I can also see in sysprocesses when the problem
DELETE is running that it is not running in parallel as there is only
one row for the problem SPID.
I have checked the file growth settings for both my database and
transaction log. Both TableA and TableB are in the same database whose
size is very stable throughout the day. This database is also set up
in Simple recovery mode so the transaction log stays mostly empty.
Steve
|||I see.
Just reread your post. You might be experiencing this:
http://support.microsoft.com/kb/329906
-oj
<steveg@.payerpath.com> wrote in message
news:1107803288.881378.258020@.l41g2000cwc.googlegroups.com...
> (snip)
|||Also, there have been some *nasty* bugs with parallelism (some of which
are fixed in SQL 2000 SP4).
http://support.microsoft.com/search/default.aspx?query=sql+maxdop
I believe Umut is right on target here.
http://support.microsoft.com/kb/837983
Some info on upcoming fixes:
http://support.microsoft.com/kb/888799
-oj
<steveg@.payerpath.com> wrote in message
news:1107803288.881378.258020@.l41g2000cwc.googlegroups.com...
> (snip)
|||I don't think the DELETE is running as a parallel process because while
it's running I only see one row for it in sysprocesses. For parallel
queries I would expect to see multiple entries in sysprocesses for the
same SPID with different values in the ecid column for each process.
Steve
|||Hi Steve,
PAGELATCH_SH or PAGELATCH_EX explanation:
Contention can be caused by issues other than IO or memory performance, for
example, heavy concurrent inserts into the same index range can cause this
type of contention. If a lot of inserts need to be placed on the same page
they are serialized using the latch. A lot of inserts into the same range
can also cause page splits in the index which will hold onto the latch while
allocating a new page (this can take a while). Any read accesses to the same
range as the inserts would also conflict on the latches. (from Tom Davidson
article)
I recommend Tom Davidson's article - Opening Performance Tuning Toolbox -
from SQL Mag. I use it all the time and it helps me a lot.
You did not give any details about how many rows the tables have and how
many rows are being deleted. If this is a batch process, consider doing it
in non-business hours. If it's part of your application and a lot of
records are affected by the delete, then I think you need to change your
code.
If only a few records are affected by the delete, try using a cursor and
deleting them one by one, but I don't recommend this if it's a batch
process and a lot of records are affected.
"steveg@.payerpath.com" wrote:
> (snip)
|||I don't think parallel queries cause multiple rows in sysprocesses. I only
see multiple entries in sysprocesses related to the same SPID when
lightweight pooling is set to 1 (NT fibers).
FYI: Parallel Process wait type is CXPACKET
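As a sketch, spotting that would look like this, again against SQL 2000's master..sysprocesses:

-- Sessions whose last wait was the parallel-exchange wait type
SELECT spid, ecid, lastwaittype
FROM master..sysprocesses
WHERE lastwaittype = 'CXPACKET'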
"steveg@.payerpath.com" wrote:
> (snip)
|||Umut,
I'm pretty sure that parallel queries do spawn multiple rows in
sysprocesses. I see them from time to time on large queries.
Sysprocesses would get, for example, 5 rows for the same SPID. The
ecid column in sysprocesses would be 0-4 for these rows and indicates
the execution context that each thread is running under. We're not
running on NT fibers so I always *assumed* that these were from
parallel executions.
Anyway, I don't mean to turn my back on your suggestion. I don't think
my problem is related to parallel execution but I also don't know what
it is related to which is why I'm here soliciting suggestions from this
forum. I'll get the developers to add a query hint to disable
parallelism. I don't think it will fix the problem but it's worth
trying. Thanks for your help.
Steve
Umut Nazlica wrote:
> (snip)
|||Umut,
Thanks for the quote from Tom Davidson's article.
As far as row counts for these tables goes...
TableA holds one row for every user logged into our website. Nominally
100, and it probably goes up to about 500.
TableB holds information related to what the user is doing on the
website. This table is highly variable but is certainly larger than
TableA. It probably runs between 20,000 - 200,000 rows.
This process that executes the two DELETE statements is essentially
"expiring" user sessions which have been inactive for some period of
time. We want it to be running during production hours. As I
mentioned in my original post, these two queries normally (98%) run for
about 5 seconds and cause no problems. However, about 2% of the time
the second query appears to "hang" and finally completes after 2-3
minutes of processing.
While this second DELETE statement is "hung", sysprocesses shows that
it is sleeping. What would make SQL sleep this SPID on a simple
DELETE? I understand that it can happen when SQL is waiting for the
application to provide some feedback but this is a simple DELETE; there
shouldn't be any "back-and-forth" interaction between the client and
server that I'm aware of. Can anyone explain why a simple DELETE
statement would go to sleep?
Steve
|||Hi Steve,
Did you by chance have any measurements of the number of records that
TABLEB has when the delete statement runs? What I mean is, from your
message I understand the number of records in TableB varies, so sometimes
you are deleting 1,000 records and sometimes 199,000 records, etc.
Did you try the delete like the following (did you test NOT EXISTS)?
DELETE FROM TABLEB
WHERE NOT EXISTS (SELECT NULL
                  FROM TABLEA
                  WHERE TABLEB.col1 = TABLEA.col1)
Regards..
"steveg@.payerpath.com" wrote:
> (snip)
>|||On 8 Feb 2005 05:32:33 -0800, steveg@.payerpath.com wrote:
(snip)
>While this second DELETE statement is "hung", sysprocesses shows that
>it is sleeping. What would make SQL sleep this SPID on a simple
>DELETE? I understand that it can happen when SQL is waiting for the
>application to provide some feedback but this is a simple DELETE; there
>shouldn't be any "back-and-forth" interaction between the client and
>server that I'm aware of. Can anyone explain why a simple DELETE
>statement would go to sleep?
Hi Steve,
I realize it's mentioned before in this thread, but this really sounds as
if either tempdb or the log file (probably the latter) is autogrowing.
What is the average number of rows affected by this second delete?
If I were you, I'd check that autoshrink is not accidentally enabled for
this database, manually grow the log file to twice its current size and
check if this helps.
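To make both checks concrete, here is a minimal sketch on SQL Server
2000 (the database and logical file names are placeholders, not taken
from the thread):

EXEC sp_dboption 'MyUserDB', 'autoshrink'          -- should report OFF
ALTER DATABASE MyUserDB
MODIFY FILE (NAME = MyUserDB_log, SIZE = 2000MB)   -- pre-grow the log

The logical log file name can be found by running sp_helpfile inside
the database.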
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||Yes, that might be possible. As I mentioned before, in addition you can
capture a trace which includes log auto-growth events and the statements,
to be sure.
I don't know if it's possible with your application, but as you mentioned
before TableA has a fairly small number of records while TableB varies (it
could be 5,000 or 200,000). It might be better to insert into a new table
the records where tablea.col1 = tableb.col1, drop TableB, and rename the
new table as TableB. Let's assume you have 100 records in TableA and
100,000 records in TableB: why delete all 99,900 records instead of adding
100 records to a new table, dropping the old one and renaming the new one?
(Well, again, this looks like a batch process to me.)
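A minimal sketch of that copy-and-swap approach (table and column names
are illustrative, following the thread's examples):

SELECT b.*
INTO TABLEB_new                                        -- copy the keepers
FROM TABLEB b
WHERE EXISTS (SELECT NULL FROM TABLEA a WHERE a.col1 = b.col1)

DROP TABLE TABLEB                                      -- drop the old table
EXEC sp_rename 'TABLEB_new', 'TABLEB'                  -- swap in the new one

Note that any indexes, constraints and permissions on the original table
would have to be recreated after the swap, and the swap itself isn't safe
while other sessions are reading or writing TABLEB.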
Regards..
"Hugo Kornelis" wrote:
> On 8 Feb 2005 05:32:33 -0800, steveg@.payerpath.com wrote:
> (snip)
> >While this second DELETE statement is "hung", sysprocesses shows that
> >it is sleeping. What would make SQL sleep this SPID on a simple
> >DELETE? I understand that it can happen when SQL is waiting for the
> >application to provide some feedback but this is a simple DELETE; there
> >shouldn't be any "back-and-forth" interaction between the client and
> >server that I'm aware of. Can anyone explain why a simple DELETE
> >statement would go to sleep?
> Hi Steve,
> I realize it's mentioned before in this thread, but this really sounds as
> if either tempdb or the log file (probably the latter) is autogrowing.
> What is the average number of rows affected by this second delete?
> If I were you, I'd check that autoshrink is not accidentally enabled for
> this database, manually grow the log file to twice its current size and
> check if this helps.
> Best, Hugo
> --
> (Remove _NO_ and _SPAM_ to get my e-mail address)
>|||Hugo,
I just checked my database settings. My user database is not set up to
auto-shrink; the only options it has set are auto create and auto
update statistics.
I'm fairly certain that neither tempdb nor my user db are
growing/shrinking during this process. I'm basing that statement on
the date/time stamp on the data and log files. It is my understanding
that the data/log file date/time will be updated when the file grows,
shrinks, or when the file is initially opened by SQL after a server
restart. All of the files (db & log) for tempdb and my user database
have date/time stamps which correspond to the last time our server was
rebooted which happened last week. However, we experienced our hung
DELETE problem as recently as yesterday.
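As a side note, rather than inferring growth from file timestamps, log
usage can be watched directly; a quick sketch:

DBCC SQLPERF(LOGSPACE)  -- reports log size and percent used per database

Sampling that before and during the slow DELETE would show whether the
log is anywhere near full when the statement stalls.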
The second DELETE will affect a LOT of rows, probably in the
neighborhood of 100,000. I suspect that the second DELETE is taking an
Exclusive lock at the table level based on its execution plan, large
number of rows affected, and other processes (INSERT/UPDATE) being
blocked while attempting to put an Intent-Exclusive lock on TableB.
I'm OK with that as long as the DELETE runs for 10s as opposed to 3m.
I had a Profiler session running yesterday watching long running
queries. Once the "hung" DELETE did finally complete I could see that
the level of CPU and I/O that it used during its 150s runtime was no
more than it normally uses during a 5-7s runtime. It's doing no more
work, it's not waiting, it's not blocked, sysprocesses shows that it is
sleeping with open transactions.
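For anyone following along, this is the sort of check being described
(SPID 123 is a placeholder for the DELETE's session):

SELECT spid, status, blocked, open_tran, waittype, lastwaittype,
       cpu, physical_io
FROM master..sysprocesses
WHERE spid = 123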
Steve
Hugo Kornelis wrote:
> On 8 Feb 2005 05:32:33 -0800, steveg@.payerpath.com wrote:
> (snip)
> >While this second DELETE statement is "hung", sysprocesses shows that
> >it is sleeping. What would make SQL sleep this SPID on a simple
> >DELETE? I understand that it can happen when SQL is waiting for the
> >application to provide some feedback but this is a simple DELETE; there
> >shouldn't be any "back-and-forth" interaction between the client and
> >server that I'm aware of. Can anyone explain why a simple DELETE
> >statement would go to sleep?
> Hi Steve,
> I realize it's mentioned before in this thread, but this really sounds as
> if either tempdb or the log file (probably the latter) is autogrowing.
> What is the average number of rows affected by this second delete?
> If I were you, I'd check that autoshrink is not accidentally enabled for
> this database, manually grow the log file to twice its current size and
> check if this helps.
> Best, Hugo
> --
> (Remove _NO_ and _SPAM_ to get my e-mail address)|||I'm going to be taking over control of these two DELETE statements
temporarily. The developers will be commenting them out of our
application and I'll be moving them into a SQL job which will run
hourly. This will give me a bit more flexibility in trying some of the
suggestions that I've garnered here. We've got the space so I'll also
expand my log files and tempdb and see if that helps any.
Any other thoughts are still welcome, particularly regarding what
outside factors would cause this process to go to sleep instead of
continuing to work. Thanks.|||On 8 Feb 2005 09:54:09 -0800, steveg@.payerpath.com wrote:
>I'm fairly certain that neither tempdb nor my user db are
>growing/shrinking during this process.
(snip)
Hi Steve,
With that ruled out, and pretty much everything else I can think of
already checked as well, I'm afraid I can't help you further with this.
I guess the only options you now have are to wait and see if Umut or
some other poster comes up with something; accept it and live with it;
create some creative workaround; or open a case with Microsoft PSS
(Product Support Services).
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||"The second DELETE will affect a LOT of rows probably in the
neighborhood of 100,000. I suspect that the second DELETE is taking an
Exclusive lock at the table level based on its execution plan, large
number of rows affected, and other processes (INSERT/UPDATE) being
blocked while attempting to put an Intent-Exclusive lock on TableB.
I'm OK with that as long as the DELETE runs for 10s as opposed to 3m. "
Steve,
With reference to the above, do you think you might want to break down your
delete into batches (using SET ROWCOUNT)? This will give you some more
flexibility.
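A minimal sketch of that batching idea on SQL Server 2000 (table and
column names follow the earlier posts; the 5,000-row batch size is an
arbitrary example):

SET ROWCOUNT 5000                     -- cap each DELETE at 5,000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM TABLEB
    WHERE [FieldGUID] NOT IN (SELECT [FieldGUID] FROM TABLEA)
    IF @@ROWCOUNT = 0 BREAK           -- stop when nothing is left to delete
END
SET ROWCOUNT 0                        -- restore the default (no limit)

With the default autocommit behavior each batch commits separately, so
locks are taken and released in small chunks instead of being held for
the whole delete.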
Monday, March 26, 2012
Help with backups
Hi, I have a SQL server running on a remote network. On my local machine I
have the SQL client tools which I use to connect to the SQL server. How
can I perform a backup locally? When I try, it looks at the remote drives
rather than my local drives when I try to back up to my local folder.
Thanks
David
The BACKUP statement runs on the server, so a plain path is interpreted
against the server's drives. You might try pointing it at a UNC path to a
share on your machine instead (here 'backups' must be a Windows share,
e.g. of c:\mssql\backups, that the SQL Server service account can write to):
backup database DB1 to disk = '\\mymachine\backups\DB1_date.bck'
You could run that via Query Analyzer or put it into a script or job.
hth.
"David" <dross@.si.rr.com> wrote in message
news:um2RfUZdEHA.2520@.TK2MSFTNGP12.phx.gbl...
> Hi, I have a SQL server running on a remote network. On my local machine I
> have the SQL client tools which I use to connect to the SQL server. How
> can I perform a backup locally? When I try, it looks at the remote drives
> rather than my local drives when I try to back up to my local folder.
> Thanks
> David
>
Friday, March 23, 2012
Help with aggregate query - Can't get it right!
Hi,
We have a client who sends their invoices in a flat file format, which we import into a SQL Server table.
Nothing is normalized – everything is repeated in every record. The fields are:
customerNumber
Invoice_number
PO_number
Qty
Description
Line_number
Line_total
Freight
Tax
Invoice_date
So if an order has 10 line items, the header information (invoice number, PO number, invoice date) is repeated on each of the lines
I am writing a query to show the following
Order number, Invoice total, Date
select invoice_no, sum(line_total + freight + tax) as invoiceTotal, customerNumber, Invoice_date from invoices group by invoice_no, Invoice_date, customerNumber
This works great - for each invoice I get the invoice number, InvoiceTotal, and Date
Then I was asked to add the PO Number – this is where I can't get it right.
When I added "PO_number" to the query, I got two lines for each invoice:
select invoice_no, sum(line_total + freight + tax) as invoiceTotal, customerNumber, Invoice_date, PO_number from invoices group by invoice_no, Invoice_date, customerNumber, PO_number
Please help - I need to end up with: invoice_no, invoiceTotal, customerNumber, Invoice_date and PO_number (sequence does not matter)
Thanks
Hello my friend,
I see the problem. You want to get a field value without it causing other data to repeat. The solution is to use a function to get the one value and put this in the query like so: -
CREATE FUNCTION fn_GetPONumberByInvoice
(
@InvoiceNumber AS VARCHAR(50) -- if it is an INT or whatever then change this line
)
RETURNS VARCHAR(80)
AS
BEGIN
DECLARE @PONumber AS VARCHAR(80)
SET @PONumber = (SELECT TOP 1 PO_Number FROM Invoices WHERE Invoice_Number = @InvoiceNumber)
RETURN @PONumber
END
Then in your query use the following: -
SELECT invoice_no,
sum(line_total + freight + tax) as invoiceTotal, customerNumber,
Invoice_date,
dbo.fn_GetPONumberByInvoice(invoice_no) AS 'PO_Number'
FROM invoices
GROUP BY invoice_no, Invoice_date, customerNumber
Kind regards
Scotty
|||
Scotty, thanks for your reply.
I am not well versed in SQL, so I have a question:
How can I use the query you wrote to create a view? I did not mention that earlier (sorry), but the objective is to create a view with header information for each invoice (invoice number, date, total amount and PO number)
Hope you can help.
Again, thanks.
|||Hello again my friend,
You should be able to do it like this: -
CREATE VIEW vw_Invoices
AS
SELECT invoice_no,
sum(line_total + freight + tax) as invoiceTotal, customerNumber,
Invoice_date,
dbo.fn_GetPONumberByInvoice(invoice_no) AS 'PO_Number'
FROM invoices
GROUP BY invoice_no, Invoice_date, customerNumber
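For what it's worth, if each invoice is assumed to carry a single PO
number on every line, an aggregate would avoid the scalar function call
entirely; a sketch of that alternative:

SELECT invoice_no,
sum(line_total + freight + tax) as invoiceTotal,
customerNumber,
Invoice_date,
MAX(PO_number) AS PO_Number -- all lines share one PO, so MAX just picks it
FROM invoices
GROUP BY invoice_no, Invoice_date, customerNumber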
Kind regards
Scotty
|||
Scotty, THANK YOU.
Wednesday, March 21, 2012
Help with a Query
In a stored procedure that performs a search on a Client table, one
condition in my WHERE clause is:
WHERE
(ISNULL(ClientPhone, '%%') LIKE '%' + COALESCE(@ClientPhone,
ClientPhone, '') + '%')
This allows me to search on the ClientPhone column depending on whether
or not the @ClientPhone parameter was NULL.
This works great, except that I recently noticed that if I leave the
@ClientPhone parameter as NULL, the search results are excluding rows
which have the ClientPhone formatted as [1] 310-555-1212
So, any row in which the phone number has the [ and ] characters is not
being returned.
How should my condition in the WHERE clause change to handle this?
Thank you for your help|||On 4 Oct 2005 11:09:23 -0700, george.durzi@.gmail.com wrote:
>In a stored procedure that performs a search on a Client table, one
>condition in my WHERE clause is:
>WHERE
> (ISNULL(ClientPhone, '%%') LIKE '%' + COALESCE(@.ClientPhone,
>ClientPhone, '') + '%')
>This allows me to search on the ClientPhone column depending on whether
>or not the @.ClientPhone parameter was NULL or not.
>This works great, except that I recently noticed that if I leave the
>@.ClientPhone parameter as NULL, the search results are exluding rows
>which have the ClientPhone formatted as [1] 310-555-1212
>So, any row in which the phone number has the [ and ] characters is not
>being returned.
>How should my condition in the WHERE clause change to handle this?
>Thank you for your help
Hi George,
The reason is that the characters [ and ] in a LIKE expression have a
special meaning (see description in Books Online for details).
Here's a way to get around this:
WHERE
REPLACE(REPLACE(COALESCE(ClientPhone, '%%'), '[', '('), ']', ')')
LIKE
REPLACE(REPLACE('%' + COALESCE(@ClientPhone, ClientPhone, '') + '%',
'[', '('), ']', ')')
Ugly? Yes.
If I were you, I'd change the existing square brackets in your data to
round brackets (as is the custom for phone numbers), and add a CHECK
constraint on the column to ensure no new square brackets are entered.
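Another sketch that sidesteps the wildcard problem, assuming the only
reason for matching the column against itself is to make the filter
optional: test the parameter for NULL directly instead of building a
pattern from the column:

WHERE (@ClientPhone IS NULL
OR ClientPhone LIKE '%' + @ClientPhone + '%')

With this form the column value is never used as a LIKE pattern, so
brackets in the data can't be misread as wildcards. (One behavioral
difference: rows with a NULL ClientPhone are returned only when no
filter is supplied.)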
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||Thanks for your reply. As you recommended, I'll clean up the data to
get rid of the [ and ] in the ClientPhone column. I'll also add the
CHECK constraint to prevent those from getting into that column.
Thanks again|||Please provide DDL and sample data, since there must be a simpler way to do
what you need. In the meantime try replacing "[" and "]" with "(" and ")"
inside the where clause.
Read more about wildcard characters here:
http://msdn.microsoft.com/library/d...115x.asp
A far better solution involves a few extra columns inside the table (or a
view) where individual parts of the phone number are stored, since it is
quite obvious that one of your business requirements might benefit greatly
from this.
ML
Monday, March 19, 2012
Help with a complicated query
I have 4 tables, we'll name them HomeAddress, WorkAddress, Home2Address,
Work2Address. They all link to one table, Client. As you can probably
tell, this is not the best way to set up the database, so I am moving the
data from the 4 tables into one Address table. What I need to do, though,
is that there is a field in each of the 4 tables labeled "preferred", meaning
it's the preferred address to use. Ideally, there would be only one
Preferred column marked True for each client, although that may not be the
case in the existing data. I need to do a query that would include the
preferred column from all 4 tables, that would
result in the 4 Preferred fields from each table for every client in the
client table. Can I do that in one query and if so, how?
On Thu, 9 Dec 2004 12:35:51 -0700, "Rock" <rockisland@.yahoo.com>
wrote:
>I have 4 tables, we'll name them HomeAddress, WorkAddress, Home2Address,
>Work2Address. They all link to one table, Client. As you can probably
>tell, this is not the best way to set up the database, so I am moving the
>data from the 4 tables into one Address table. What I need to do, though,
>is there is a field in each of the 4 tables labeled "preferred", meaning
>it's the preferred address to use. Ideally, there would be only one
>Preferred column marked True for each client, although that may not be the
>case in the existing data. I need to do a query that would include the
>preferred column from all 4 tables, that would
>result in the 4 Preferred fields from each table for every client in the
>client table. Can I do that in one query and if so, how?
I think you want something like
update NewAddress set preferred='Home' where clientID in
(select clientID from HomeAddress where preferred = 1) -- assuming a bit column ('True' is not valid T-SQL)
Run the four versions of this sequentially, and live with the results.
With a little work you could merge the four into one, but what the
heck.
J.
|||Well, maybe that will work, if the data is already correct in the first
place. But if I set a client to Home as the preferred, and then another
update changes that, I wouldn't know it and could end up with incorrect
data. So I need to get the results of the current data first, and was
hoping to be able to display the ClientID along with the preferred column
from each table, so that at a glance I can tell not only which ones are the
preferred, but will also be able to tell those that have no preferred method
or those that have more than one. Then I can notify my customer and let him
choose how to fix the data so that each client has one and only one
preferred address. Does that make sense?
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:qtahr0l1viu7g8f9ng15fl0a2nik4n6jao@.4ax.com...
> On Thu, 9 Dec 2004 12:35:51 -0700, "Rock" <rockisland@.yahoo.com>
> wrote:
> I think you want something like
> update NewAddress set preferred='Home' where clientID in
> (select clientID from HomeAddress where preferred=True)
> Run the four versions of this sequentially, and live with the results.
> With a little work you could merge the four into one, but what the
> heck.
> J.
>
|||SELECT
cl.ClientID,
a.preferred As home_preferred,
b.preferred as work_preferred,
c.preferred as home2_preferred,
d.preferred as work2_preferred
FROM
client cl
LEFT JOIN homeaddress a
on cl.clientid = a.clientid
LEFT JOIN workaddress b
on cl.clientid = b.clientid
LEFT JOIN home2address c
on cl.clientid = c.clientid
LEFT JOIN work2address d
on cl.clientid = d.clientid
"Rock" <rockisland@.yahoo.com> wrote in message
news:%234dhrli3EHA.3000@.TK2MSFTNGP15.phx.gbl...
> Well, maybe that will work, if the data is already correct in the first
> place. But if I set a client to Home as the preferred, and then another
> update changes that, I wouldn't know it and could end up with incorrect
> data. So I need to get the results of the current data first, and was
> hoping to be able to display the ClientID along with the preferred column
> from each table, so that at a glance I can tell not only which ones are the
> preferred, but will also be able to tell those that have no preferred method
> or those that have more than one. Then I can notify my customer and let him
> choose how to fix the data so that each client has one and only one
> preferred address. Does that make sense?
>
> "JXStern" <JXSternChangeX2R@.gte.net> wrote in message
> news:qtahr0l1viu7g8f9ng15fl0a2nik4n6jao@.4ax.com...
>
create table #mypref
(clientId char(8),
pref char(8))
insert into #mypref
select clientID, 'home'
from homeaddress
where preferred = 1 -- assuming a bit column; 'True' in the original post
union
select clientID, 'work'
from workaddress
where preferred = 1
union
select clientID, 'home2'
from home2address -- the post had 'homeaddress2'; matching the names above
where preferred = 1
union
select clientID, 'work2'
from work2address
where preferred = 1
-- those with multiples
select * from #mypref
where clientId in
(select clientId from #mypref
group by clientId
having count(*)>1)
-- those with none
select cl.clientId from client cl
where cl.clientId not in
(select clientID from #mypref)
/* except there are always bugs with NOT IN */
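For what it's worth, the usual NOT IN bug is a NULL in the subquery's
column, which makes the predicate match no rows at all; a NOT EXISTS
version over the same tables sidesteps that:

select cl.clientId from client cl
where not exists
(select * from #mypref p where p.clientId = cl.clientId)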
J.
On Thu, 9 Dec 2004 13:25:27 -0800, "Jeff Williams"
<jwilliams@.nospam.com> wrote:
>SELECT
> cl.ClientID,
> a.preferred As home_preferred,
> b.preferred as work_preferred,
> c.preferred as home2_preferred,
> d.preferred as work2_preferred
>FROM
> client cl
> LEFT JOIN homeaddress a
> on cl.clientid = a.clientid
> LEFT JOIN workaddress b
> on cl.clientid = b.clientid
> LEFT JOIN home2address c
> on cl.clientid = c.clientid
> LEFT JOIN work2address d
> on cl.clientid = d.clientid
>"Rock" <rockisland@.yahoo.com> wrote in message
>news:%234dhrli3EHA.3000@.TK2MSFTNGP15.phx.gbl...
>