Thursday, March 29, 2012
Help with Cursor to insert 100 rows at a time
Can one of you help me with using a cursor that would insert only 100 rows at a time from source table 1 to target table 2. I am not able to loop beyond the first 100 rows.
Here is what I have till now:
CREATE procedure Insert100RowsAtaTime
AS
SET NOCOUNT ON
declare @Col1 int
declare @Col2 char(9)
DECLARE @RETURNVALUE int
DECLARE @ERRORMESSAGETXT varchar(510)
DECLARE @ERRORNUM int
DECLARE @LOCALROWCOUNT int
declare Insert_Cur cursor local fast_forward
FOR
SELECT top 100 Col1,Col2 from Table1
WHERE Col1 not in ( SELECT Col1 /* Col1 is PK. This statement is used to prevent the same rows from being inserted in Table 2*/
from Table2)
set @RETURNVALUE = 0
set @ERRORNUM = 0
BEGIN
open Insert_Cur
fetch NEXT from Insert_Cur into @Col1, @Col2
while (@@FETCH_STATUS = 0)
insert into Table2 (Col1,Col2) select @Col1,@Col2
SELECT @ERRORNUM = @@ERROR, @LOCALROWCOUNT = @@ROWCOUNT
IF @ERRORNUM = 0
BEGIN
IF @LOCALROWCOUNT >= 1
BEGIN
SELECT @RETURNVALUE = 0
END
ELSE
BEGIN
SELECT @RETURNVALUE = 1
RAISERROR ('INSERT FAILS',16, 1)
END
END
ELSE
BEGIN
SELECT @ERRORMESSAGETXT = description FROM [master].[dbo].[sysmessages]
WHERE error = @@ERROR
RAISERROR (@ERRORMESSAGETXT, 16, 1)
SELECT @RETURNVALUE = 1
END
fetch NEXT from Insert_Cur into @Col1, @Col2
end
close Insert_Cur
deallocate Insert_Cur
RETURN @RETURNVALUE
END
First of all, I don't understand what you really want to do, so I can't give you a usable or correct response. I can tell you almost certainly that a cursor is not the correct answer.
You have a PK. A cursor isn't needed and it will probably hurt you in terms of both complexity and performance.
Can you describe what you really want in terms of the real world? In business or end-user terms, not in geek speak.
There are definitely ways to do what you want. They are probably simple and fast. I don't know enough to help you yet, but if you describe what you are trying to do a bit better then I'd bet that someone here can help.
-PatP|||It seems to me you're not inserting all rows 100 rows at a time, you're inserting 100 rows one at a time. After row no. 100, the cursor is finished and your procedure is done.
This should be more like what you describe (albeit not the most efficient way, but at least it eliminates the cursor):
WHILE EXISTS (
SELECT 1
FROM Table1
WHERE Col1 NOT IN (SELECT Col1 FROM Table2)
)
BEGIN
INSERT table2 (Col1, Col2)
SELECT top 100 Col1,Col2
FROM Table1
WHERE Col1 NOT IN (SELECT Col1 FROM Table2)
-- Maybe do some error checking here
END
A question about your code:
SELECT @ERRORMESSAGETXT = description FROM [master].[dbo].[sysmessages]
WHERE error = @@ERROR
RAISERROR (@.ERRORMESSAGETXT, 16, 1)
What's this supposed to do?
1) The message is already raised the moment the error occurs
2) What about the placeholders in the messages?
3) @@ERROR at that moment is always 0|||Table 1 has over 500 million rows. The task is to select data from Table 1 (based on business rules) and insert it into Table 2. The concern was that selecting all the data may take a long time to execute and, in case of any issue with the query, a long time to roll back. Hence 100 rows at a time using a cursor.|||As mentioned, forget the cursor! They are for row-by-row processing.
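For what it's worth, the batching the original poster describes can be done without a cursor by committing one TOP batch per loop iteration. The sketch below is only an illustration built on the Table1/Table2/Col1/Col2 names from this thread; the batch size, the NOT EXISTS predicate standing in for the real business rules, and the per-batch transaction are assumptions, and TOP (@variable) needs SQL Server 2005 or later.
SET NOCOUNT ON
DECLARE @BatchSize int, @Rows int
SELECT @BatchSize = 100, @Rows = 1
WHILE @Rows > 0
BEGIN
    BEGIN TRAN
    INSERT INTO Table2 (Col1, Col2)
    SELECT TOP (@BatchSize) s.Col1, s.Col2
    FROM Table1 s
    WHERE NOT EXISTS (SELECT 1 FROM Table2 t WHERE t.Col1 = s.Col1) -- Col1 is the PK
    SET @Rows = @@ROWCOUNT -- capture immediately; any later statement would reset it
    COMMIT TRAN
END
Because each batch commits on its own, a failure part-way through only rolls back the current batch instead of everything inserted so far, which addresses the long-rollback concern above.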
Tuesday, March 27, 2012
Help with cascading SELECT
I have the following scenario that I need to address with which I would
appreciate some help if anyone has the time:
(Simply) I have an asp.net application:
I have 2 tables:
Table 1 is called Categories and is used to hold hierarchical data (Tiers) -
these Tiers can be altered by the users. There may be 10 Categories in Tier
1, Tier 2 will hold 1 or more sub-categories for those 10, Tier 3 will have
sub-categories of Tier 2 etc. etc. through to Tier 5
Table 2 is called Jobs and holds information pertaining to a specific task
that is categorised by Table 1, so this would have single entries that you
should be able to back-track through to Tier 1 of Table 1
This structure is to simply organise data in a highly retrievable structure.
So when a user clicks on a Category in Tier 1 I want to retrieve all Jobs
that are under that category regardless of the sub-category structure
underneath, a click on a category in Tier 2 will filter these, a click on a
sub category in Tier 3 will make the list smaller etc. etc. (I'm sure you get
the picture)
When a category or a job is defined and entered by the user a unique numeric
reference is automatically assigned to that record - it also records the
unique reference of the category above in a field called "Parent_Ref". I
should therefore be able to take the parent record from a Job (which is the
end of the line) and track this back to a sub-category in Tier 3, 4, or 5,
this category record will also have a parent which goes back to Tier 2 and
this will have a record that tracks back to Tier 1
When the user initially clicks on a Tier 1 Category I need to do a search
for all Table 1 records that have the unique reference of that Tier 1
category in their "Parent_Ref" field.... for all returned records I need to
then do a search through Table 2 for all Jobs that are associated directly
with that category. This will give me 0 to x returns from Jobs. I then need
to requery and find all records from Tier 3 in Table 1 that have one of the
unique references from the Tier 2 records that were just returned in their
"Parent_Ref" field then retrieve all Jobs that relate to these retrieved
records and loop through all the Categories in Table 1 until no more returns
are possible
I hope this is clear !?! I'm sure this is a standard scenario, I have
researched the Select options in transact SQL but can not see the light. I
appreciate that this will probably need to be a stored procedure that will be
triggered by my asp.net application where I simply pass the unique ref in to
SQL and let it do the search, but I do not know how to cascade the queries in
the above fashion
Thanks
Stuart
Stuart
Can you post DDL+ sample data + expected result?
Like
CREATE TABLE jobs
(
...
...
)
"Stuart" <Stuart@.discussions.microsoft.com> wrote in message
news:133CA7C6-2A90-46C9-94A3-07C51F277741@.microsoft.com...
> Hello
> I have the following scenario that I need to address with which I would
> appreciate some help if anyone has the time:
> (Simply) I have an asp.net application:
> I have 2 tables:
> Table 1 is called Categories and is used to hold hierarchical data
> (Tiers) -
> these Tiers can be altered by the users. There may be 10 Categories in
> Tier
> 1, Tier 2 will hold 1 or more sub-categories for those 10, Tier 3 will
> have
> sub-categories of Tier 2 etc. etc. through to Tier 5
> Table 2 is called Jobs and holds information pertaining to a specific task
> that is categorised by Table 1, so this would have single entries that you
> should be able to back-track through to Tier 1 of Table 1
> This structure is to simply organise data in a highly retrievable
> structure.
> So when a user clicks on a Category in Tier 1 I want to retrieve all Jobs
> that are under that category regardless of the sub-category structure
> underneath, a click on a category in Tier 2 will filter these, a click on
> a
> sub category in Tier 3 will make the list smaller etc. etc. (I'm sure you
> get
> the picture)
> When a category or a job is defined and entered by the user a unique
> numeric
> reference is automatically assigned to that record - it also records the
> unique reference of the category above in a field called "Parent_Ref". I
> should therefore be able to take the parent record from a Job (which is
> the
> end of the line) and track this back to a sub-category in Tier 3, 4, or 5,
> this category record will also have a parent which goes back to Tier 2 and
> this will have a recod that tracks back to Tier 1
> When the user initially clicks on a Tier 1 Category I need to do a search
> for all Table 1 records that have the unique reference of that Tier 1
> category in their "Parent_Ref" field.... for all returned records I need
> to
> then do a search through Table 2 for all Jobs that are associated directly
> with that category. This will give me 0 to x returns from Jobs. I then
> need
> to requery and find all records from Tier 3 in Table 1 that have one of
> the
> unique references from the Tier 2 records that were just returned in their
> "Parent_Ref" field then retrieve all Jobs that relate to these retrieved
> records and loop through all the Categories in Table 1 until no more
> returns
> are possible
> I hope this is clear !?! I'm sure this is a standard scenario, I have
> researched the Select options in transact SQL but can not see the light. I
> appreciate that this will probably need to be a stored procedure that will
> be
> triggered by my asp.net application where I simply pass the unique ref in
> to
> SQL and let it do the search, but I do not know how to cascade the queries
> in
> the above fashion
> Thanks
> Stuart|||Look at this example:
http://milambda.blogspot.com/2005/0...or-monkeys.html
ML
http://milambda.blogspot.com/|||
"Uri Dimant" wrote:
> Stuart
> Can you post DDL+ sample data + expected result?
> Like
> CREATE TABLE jobs
> (
> ...
> ....
> )
>
>
> "Stuart" <Stuart@.discussions.microsoft.com> wrote in message
> news:133CA7C6-2A90-46C9-94A3-07C51F277741@.microsoft.com...
>
>|||Stuart
'?
"Stuart" <Stuart@.discussions.microsoft.com> wrote in message
news:147912F5-39C6-4868-B368-ED57D26CC465@.microsoft.com...
>
> "Uri Dimant" wrote:
>|||Thanks very much - seems to be just the thing ! - but in your experience
would this function in an acceptable way with a max of 1000 entries in the
table ?
"ML" wrote:
> Look at this example:
> http://milambda.blogspot.com/2005/0...or-monkeys.html
>
> ML
> --
> http://milambda.blogspot.com/|||Sorry Uri - I spent some time composing a complete response to your request
only to have this interface bomb out when I posted it ! I appreciate your
time, but the answer from ML actually answers my question...
"Uri Dimant" wrote:
> Stuart
> '?
>
>
> "Stuart" <Stuart@.discussions.microsoft.com> wrote in message
> news:147912F5-39C6-4868-B368-ED57D26CC465@.microsoft.com...
>
>|||The performance of the function depends on two factors:
1) the number of rows in the table; and
2) the average depth of hierarchies.
The best way to know would be to test it on your system. It should perform
adequately with several thousands of rows with an average depth of about ten
levels. When I say "adequately" that does not mean "perfectly". In SQL 2005
the function should be redesigned using a recursive common table expression
(rCTE), which might improve performance, since it's built into the SQL Server
engine and requires less CPU time to execute.
ML
http://milambda.blogspot.com/
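For readers hitting this thread later, here is a rough sketch of the recursive CTE approach mentioned above, written against the Categories/Jobs design the poster describes; the column names (ID, Parent_Ref) follow the description in the question and are assumptions, not tested DDL.
DECLARE @StartCategory int
SET @StartCategory = 1; -- the category the user clicked, at whatever tier

WITH CategoryTree (ID) AS
(
    -- anchor: the clicked category itself
    SELECT ID
    FROM Categories
    WHERE ID = @StartCategory
    UNION ALL
    -- recursive step: walk down one tier at a time via Parent_Ref
    SELECT c.ID
    FROM Categories c
    INNER JOIN CategoryTree t ON c.Parent_Ref = t.ID
)
SELECT j.*
FROM Jobs j
INNER JOIN CategoryTree t ON j.Parent_Ref = t.ID;
Passing a Tier 1 reference returns every job under that branch regardless of how deep the sub-categories go; passing a Tier 3 reference returns the smaller filtered list, which is the cascading behaviour the question asks for.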
Help with calculating duration time
I have a table (Tickets) with the following columns:
Ticket_No (int)
Machine_No (int)
Description (char)
Start_Time (datetime)
End_Time (datetime)
I want to be able to calculate the total duration time (in hours) that EACH MACHINE had a ticket open...but here is the tricky part. The total duration time that a machine had a ticket open has to encompass any tickets that may fall in the same time period. For example:
If Machine A has a ticket open at 8:30 and the ticket is closed at 10:00. Meanwhile, Machine A had another separate ticket open at 9:30 which was closed at 10:30. In this case, the total duration time for this machine would be from 8:30 to 10:30 for a total of 2 hrs duration time.
Can anyone help me get started in tackling this problem or provide any examples?
Blind code, but give it a try:
create table #Blocks
(Machine_No int,
Start_Time datetime,
End_Time datetime)

insert into #Blocks
(Machine_No,
Start_Time,
End_Time)
select Tickets.Machine_No,
Tickets.Start_Time,
max(Tickets.End_Time)
from Tickets
inner join --FirstTickets
(select Machine_No,
min(Start_Time) Start_Time
from Tickets
group by Machine_No) FirstTickets
on Tickets.Machine_No = FirstTickets.Machine_No
and Tickets.Start_Time = FirstTickets.Start_Time
group by Tickets.Machine_No, Tickets.Start_Time

while @@RowCount > 0
begin
while @@RowCount > 0 --I don't think this will be reset by previous check...
update b
set End_Time = (select max(Tickets.End_Time)
from Tickets
where Tickets.Machine_No = b.Machine_No
and Tickets.Start_Time between b.Start_Time and b.End_Time
and Tickets.End_Time > b.End_Time)
from #Blocks b
where exists (select 1
from Tickets
where Tickets.Machine_No = b.Machine_No
and Tickets.Start_Time between b.Start_Time and b.End_Time
and Tickets.End_Time > b.End_Time)

insert into #Blocks
(Machine_No,
Start_Time,
End_Time)
select Tickets.Machine_No,
Tickets.Start_Time,
max(Tickets.End_Time)
from Tickets
inner join --NextTickets
(select Machine_No,
min(Start_Time) Start_Time
from Tickets
where Start_Time >
(select max(End_Time)
from #Blocks
where #Blocks.Machine_No = Tickets.Machine_No)
group by Machine_No) NextTickets
on Tickets.Machine_No = NextTickets.Machine_No
and Tickets.Start_Time = NextTickets.Start_Time
group by Tickets.Machine_No, Tickets.Start_Time
end

select Machine_No,
sum(datediff(s, Start_Time, End_Time)/3600.00) as Hours
from #Blocks
group by Machine_No
If you can deal with discrete time segments (say, 1 minute increments) and place an upper limit on the date range, you may be able to do this with a simpler query using a table of integers.|||I really think I can use this (with minor modifications). How can I make this work so that it only compares the records within the same day and not all the records? Thank you.|||This formula will truncate datetime values to whole dates:
select dateadd(d, datediff(d, 0, [Yourdate]), 0) as WholeDate|||How can I modify the following solution so that it only captures the duration time for the range of 7 am - 11 pm spanning multiple days? Right now, it works only for a 24 hr period. Here is my code:
SELECT dbo.tbl_installed_ATMs.Term_ID, FirstTickets.Object_Key, FirstTickets.Start_Time, MAX(dbo.tbl_install_tickets.END_TIME) AS End_Time,
CONVERT(varchar(12), dbo.tbl_install_tickets.START_TIME, 110) AS Start_Date, CONVERT(varchar(12), dbo.tbl_install_tickets.END_TIME, 110)
AS End_Date, dbo.tbl_status_code_key.DESCRIPTION
INTO dbo.Blocks
FROM dbo.tbl_install_tickets INNER JOIN
(SELECT Object_Key, MIN(Start_Time) Start_Time
FROM tbl_install_tickets
GROUP BY Object_Key) FirstTickets ON dbo.tbl_install_tickets.OBJECT_KEY = FirstTickets.Object_Key AND
dbo.tbl_install_tickets.START_TIME = FirstTickets.Start_Time INNER JOIN
dbo.tbl_status_code_key ON dbo.tbl_install_tickets.STATUS_CODE_KEY = dbo.tbl_status_code_key.LINK INNER JOIN
dbo.tbl_installed_ATMs ON FirstTickets.Object_Key = dbo.tbl_installed_ATMs.Object_Key
GROUP BY FirstTickets.Object_Key, FirstTickets.Start_Time, CONVERT(varchar(12), dbo.tbl_install_tickets.START_TIME, 110), CONVERT(varchar(12),
dbo.tbl_install_tickets.END_TIME, 110), dbo.tbl_status_code_key.DESCRIPTION, dbo.tbl_installed_ATMs.Term_ID
while @@RowCount > 0
begin
while @@RowCount > 0 --I don't think this will be reset by previous check...
update Blocks
set End_Time = (Select max(Blocks.End_Time)
from Blocks
inner join tbl_install_tickets
On Blocks.Object_Key = tbl_install_Tickets.Object_Key
and tbl_install_Tickets.Start_Time between Blocks.Start_Time and Blocks.End_Time
and tbl_install_Tickets.End_Time > Blocks.End_Time
)
from Blocks
end
SELECT Term_ID, CONVERT(varchar(12), Start_Time, 110) AS Date, DESCRIPTION AS Ticket_type, SUM(DATEDIFF(s, Start_Time, End_Time) / 3600.00)
AS Duration
FROM dbo.Blocks
GROUP BY Term_ID, CONVERT(varchar(12), Start_Time, 110), DESCRIPTION
GO
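The thread stops here without a final answer to the 7 am - 11 pm question, so the following is only a sketch of one way to handle it: split each block across the calendar days it touches using an auxiliary numbers table, clamp every slice to the 07:00-23:00 window, and sum. The dbo.Numbers(n) table (n = 0, 1, 2, ... covering the relevant date range) is an assumption and is not part of the poster's schema.
SELECT b.Term_ID,
       SUM(DATEDIFF(s,
           CASE WHEN b.Start_Time > DATEADD(hh, 7, d.DayStart)
                THEN b.Start_Time ELSE DATEADD(hh, 7, d.DayStart) END,
           CASE WHEN b.End_Time < DATEADD(hh, 23, d.DayStart)
                THEN b.End_Time ELSE DATEADD(hh, 23, d.DayStart) END
           ) / 3600.00) AS Duration
FROM dbo.Blocks b
INNER JOIN (SELECT DATEADD(d, n, 0) AS DayStart FROM dbo.Numbers) d -- midnight of each calendar day
    ON b.Start_Time < DATEADD(hh, 23, d.DayStart) -- block overlaps that day's 07:00-23:00 window
   AND b.End_Time > DATEADD(hh, 7, d.DayStart)
GROUP BY b.Term_ID
The grouping is simplified to Term_ID only; the extra Date/Description grouping from the query above can be added back the same way it is used there.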
Help with Calculated member
I'm trying to create a calculated member in SQL Server 2005 that will return a distinct count of clients year to date for all dates within my time dimension. I have figured out that if I create a measure that returns a distinct count then I can use the Aggregate function and the YTD function to get the distinct count YTD for any date. However, the Aggregate function requires a slicer dimension (I think) and every time I put the MDX into the expression box, it does not like the syntax.
I have tried this:
With member [Authorization Service Date].[Calendar].[BlaBla] as
Aggregate(
YTD(
[Authorization Service Date].[Calendar].Current)
)
select [Authorization Service Date].[Calendar].[BlaBla] on columns
from dss
where [Measures].[Authorized Member Distinct Count]
does not like syntax
I have tried this:
Aggregate(
YTD(
[Authorization Service Date].[Calendar].Current)
)
select [Authorization Service Date].[Calendar].[BlaBla] on columns
from dss
where [Measures].[Authorized Member Distinct Count]
does not like syntax
and I have tried this:
Aggregate(
YTD(
[Authorization Service Date].[Calendar].Current)
)
returns error: Error 1 MdxScript(DSS) (67, 4) The function can be called only on a named set. 0 0
I don't know what this means
Any help would be deeply appreciated
I think you are looking to use the CURRENTMEMBER function instead of CURRENT.
I think this will simply take you to your next challenge. If you run this query, you'll likely get NULL for your result.
I think something like this will better serve your needs:
Code Snippet
With member [Measures].[x] as
COUNT(
EXISTS(
[Customer].[Customer].[Customer].Members,
YTD([Delivery Date].[Calendar].CurrentMember),
'Internet Sales'
)
)
select
[Measures].[x] on columns
from [Adventure Works]
where [Delivery Date].[Date].[June 1, 2002]
;
|||Bryan,
I appreciate you taking the time to assist me with this. Unfortunately, it did not work. As translated, your MDX on my system looks like:
with member [Measures].[x] as
COUNT(
EXISTS(
[CCE Members].[MemberId].Members,
YTD([Authorization Service Date].[Calendar].CurrentMember),
'Authorization Service Days Count'
)
)
SELECT [Measures].[x] ON 0
FROM [dss]
WHERE [Authorization Service Date].[Full Date].&[6303]
I get xError
Just for your info however, I know that all of the underlying attributes and metrics join up properly because when I execute:
SELECT [Measures].[Authorization Service Days Count] ON 0,
[CCE Members].[MemberId].Members ON 1
FROM [dss]
WHERE [Authorization Service Date].[Full Date].&[6303]
it works.
Again, your assistance is trully appreciated.
Thanks,
Michael
|||Two things I noticed in the EXISTS function. Try changing the set to include just the leaf level members:
[CCE Members].[MemberId].Members --> [CCE Members].[MemberId].[MemberId].Members
Also, is 'Authorization Service Days Count' the name of the measure or the measure group? You need to identify the measure group.
B.
Help with building query
This will probably be trivial and basic for most, but I'm having a hard time trying to figure out the best way to do a SELECT statement. First, let me explain what I have:
Two tables:
Table 1:
Orders
Some of the fields:
ID
PropID
WorkOrderNum
OrderDesc
DateCompleted
Table 2:
OrderDetail
ID
OrderID
TenantName
As you probably have realized, the OrderID in my 'OrderDetail' table corresponds to the ID field in my 'Orders' table. The 'Orders' table contains the order header information, while the OrderDetail contains line items for that order - 1 line item per record.
Here is my SQL statement to retrieve an order when searching by the 'Order Description' (Orders.OrderDesc):
SELECT PropertyLocations.PropertyLocation, Orders.ID, Orders.PropID, Orders.WorkOrderNum, Orders.OrderDesc, Orders.DateCompleted FROM PropertyLocations, ORDERS WHERE PropertyLocations.ID = Orders.PropID AND OrderDesc LIKE '%lds%'
Ok, so now for the 'big' question/problem: I also need to be able to search the 'Tenant Name' field from the 'OrderDetail' table. So what is the best/most efficient way of doing that? The other stipulation about that is that there can be (and usually is) several records/line items (in the OrderDetail table, of course) that contains the same (or similar) data, but I don't want duplicates. And when I say duplicates, all I care about is retrieving a few fields (as you can see from my SQL statement) from the 'Orders' table. Another way to describe what I want is that I want all unique orders that have a 'TenantName' in the 'OrderDetail' table that matches the search criteria. My brain just isn't wanting to figure this out right now, so I was hoping someone could help me out.
thanks.
According to your explanation,
1. PropertyLocations has 1:many relation to ORDERS,
2. Orders has 1:many relation to OrderDetail.
My first try would be like below for TenantName:
SELECT Distinct
PropertyLocations.PropertyLocation,
OrderDetail.TenantName,
Orders.ID, Orders.PropID, Orders.WorkOrderNum,
Orders.OrderDesc, Orders.DateCompleted
FROM PropertyLocations INNER JOIN Orders
ON PropertyLocations.ID = Orders.PropID
INNER JOIN OrderDetail
ON Orders.ID = OrderDetail.OrderID
WHERE Orders.OrderDesc LIKE '%lds%'
AND OrderDetail.TenantName LIKE 'Steve%'|||That's almost it. It's pulling up all my orders that have the specified 'tenantname' (that is great!)...but it's showing duplicate orders.
For instance:
I have a record in my 'Orders' table of ID=2886. And in my 'OrderDetail' table, there are 3 records that have the characters 'dr.' in it that all have the value of 2886 in the 'OrderID' field which means that they all belong to the record in the 'Orders' table with ID of 2886 that I mentioned at the beginning of this paragraph. So it is showing me my data from the 'Orders' table 3 times (duplicates).
I also modified your statement slightly:
SELECT Distinct PropertyLocations.PropertyLocation, OrderDetail.TenantName, Orders.ID, Orders.PropID, Orders.WorkOrderNum, Orders.OrderDesc, Orders.DateCompleted FROM PropertyLocations INNER JOIN Orders ON PropertyLocations.ID = Orders.PropID INNER JOIN OrderDetail ON Orders.ID = OrderDetail.OrderID WHERE OrderDetail.TenantName LIKE '%Dr.%' ORDER BY WorkOrderNum DESC
I'm only doing 1 'LIKE' at a time...but that's probably my fault...I wasn't real clear about it.
So how do I make it show me just a single instance of a record from the 'Orders' table even though that there may be several records in the 'OrderDetail' table that matches the search criteria?
P.S. Thank you for reminding me about trying to be SQL99 compliant. ;)
thanks.|||The reason that you have duplicates is Orders.DateCompleted, even though you have DISTINCT in the SELECT statement.
Therefore my next question would be
1. Do you need Orders.DateCompleted to display?
2. If YES, then duplicates are unavoidable.
3. If NO, then remove Orders.DateCompleted from the SELECT statement. - Problem solved.
In case of YES,
1. Do the dates of Orders.DateCompleted fall into the same date?
If YES, then use CAST(Orders.DateCompleted as varchar(11)) to pick up only the date portion and ignore the time portion of Orders.DateCompleted. - Problem solved.
If NO, then still duplicates are unavoidable.|||Well, I tried removing the Orders.DateCompleted field even though I really wanted it, but it still gave me the same results.
I included 2 screenshots now, one w/DateCompleted so everyone can see exactly what I'm getting, and my SQL statement, and then 1 after I removed the Orders.DateCompleted field.
Just to be sure that everyone knows what I mean when I say duplicates, I mean that the same exact Work Order (which is a row...or an order...or record) is showing up multiple times even though only 1 actually exists in the database. You can see this in the screenshots - there are 3 rows of the WO#6997 which is far from correct - there is only 1 record in the database with that Work Order Number. But there are 3 unique records in the OrderDetail table that contain the characters 'dr.' in it that belong to that WO#6997 work order. (And just to clarify, the WorkOrderNum is not my 'unique id' field in the 'Orders' table, nor is it the field that is 'linked' with the OrderDetail table)
If the above is what everyone already realized/knew, then cool, but if it wasn't, then maybe it will help clear things up for someone to help a little easier.
thanks.|||Let me clarify
1. You have one record in the Order table with WO#6997.
2. You have three records in the OrderDetail table with WO#6997 and TenantName like '%dr%'.
3. WO# in the Order table is not unique.
4. WO# in the OrderDetail is not unique.
OK.
First, my previous email regarding the Orders.DateCompleted was incorrect. Because it belongs to Orders table, it wouldn't generate duplicates. So please ignore my suggestion to remove the column Orders.DateCompleted.
Then please show me the TenantNames. Are they the same?
If they are different, obviously you will have 3 records.
Inside the SELECT statement, you have the following columns to display:
PropertyLocations.PropertyLocation,
OrderDetail.TenantName,
Orders.ID,
Orders.PropID,
Orders.WorkOrderNum,
Orders.OrderDesc,
Orders.DateCompleted
Among them, only TenantName comes from OrderDetail table.
Therefore if TenantName is the same it shouldn't display three records.
I am puzzled and very curious on the result for the TenantName.|||1. Yes
2. Kind of...The OrderDetail table doesn't actually have a WO# field in it. It has an OrderID field that corresponds to the ID field of the Orders table (which contains a WO# field)...but yes, there are 3 records that show up from the OrderDetail field with a TenantName like '%dr.%' that belong to the record that contains WO#6997 in the Orders table
3. Actually, WO# is unique (although there might be some records in the table ('Orders' table as I mentioned), that don't have a value for this field...but the records would be so old, it will never come up...so is it still considered 'unique'?), but isn't my identity field. I have an ID field for that.
4. As I mentioned, the OrderDetail table does not have a WO# field.
Ok, no problem.
Below are the 3 records that came up that matched the criteria:
The following data is from the TenantName field, and yes, as you can see, are unique. They all share the same OrderID of 2886...which is the ID of the Orders table. So I guess this is where the problem is? See, I don't really care that 3 records from the OrderDetail table matches my search criteria...all I need to know is that at least one record matches, and then I want to show the Order/Work Order from the Orders table that it corresponds to...1 time.
At Napa 9, 870 Napa Valley Corp. Dr.:
Work to be done at: [nl]870 Napa Valley Corp.Dr./Napa 9[nl]2511-2515 Napa Valley Corp. Dr./The Vines[nl]~+
At The Vines, 2511-2515 Napa Valley Corp. Dr.:
Yes, those are the columns I would like to display, and yes, TenantName is the only one that comes from the OrderDetail table.
As you see, the 3 records aren't the same that match the search criteria.|||Now I can see whole picture.
You have one record in the Order table with WO#6997.
You have three records in the OrderDetail table with WO#6997 and TenantName like '%dr%'.
ID is primary key in Order table.
OrderID in the OrderDetail is foreign key to Order table.
Inside the SELECT statement, you have the following columns to display:
PropertyLocations.PropertyLocation,
OrderDetail.TenantName,
Orders.ID,
Orders.PropID,
Orders.WorkOrderNum,
Orders.OrderDesc,
Orders.DateCompleted
But OrderDetail.TenantName are not the same. Therefore you have to force it to be the same by using the criteria.
So
Select Distinct
PropertyLocations.PropertyLocation,
'Like ''%Dr.%''' as [TenantName Criteria],
Orders.ID,
Orders.PropID,
Orders.WorkOrderNum,
Orders.OrderDesc,
Orders.DateCompleted
FROM PropertyLocations INNER JOIN Orders ON PropertyLocations.ID = Orders.PropID INNER JOIN OrderDetail ON Orders.ID = OrderDetail.OrderID WHERE OrderDetail.TenantName LIKE '%Dr.%' ORDER BY WorkOrderNum DESC
I hope this should satisfy your requirements.|||Cool, I'm sorry it took so long to get there.
1. Yes
2. Yes, in a way...see #2 from my previous post
3. Yes
4. Yes
Ok, this is where you're going to want to kill me. I just realized after re-reading your post...I don't actually need to display OrderDetail.TenantName. You can even see that I'm not showing that field/column in my screenshots...silly me.
So I tried one of your previous SELECT statements, and removed it (the OrderDetail.TenantName part) from the beginning so as to not display it, and it worked! I'm so sorry for all of the confusion and the extra steps.
But out of not wanting all your hard work to not be in vain, I tried your last statement, and it worked too! So now I know what to do if I have a similar situation. Although, I do have a question...I've never seen part of your statement before:
'Like ''%Dr.%''' as [TenantName Criteria],
What is that all about? Obviously it works, but it's new to me. I also don't really understand why it would've worked without showing the OrderDetail.TenantName column.
Thanks for sticking with me and all of your help, TerryP.|||I am glad that I could help you.
Your question:
'Like ''%Dr.%''' as [TenantName Criteria],
This is a constant column. Because the TenantName is not actually the name of the tenant but a memo or description, I tried to make a common string for TenantName which comes from the criteria in the WHERE clause. When it is displayed, all three records have 'Like ''%Dr.%''', so the multiple records with the same constant string collapse into a single record once the DISTINCT keyword is used in the SELECT clause.
I also don't really understand why it would've worked without showing the OrderDetail.TenantName column
You do not have to display all or any of the columns of tables used in FROM clause. But it would have returned three rows if DISTINCT had not been used. So the trick was in DISTINCT keyword.
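For anyone reading along: another way to return each matching order exactly once, without leaning on DISTINCT at all, is to test the line items with EXISTS. This is only a sketch using the table and column names described in the thread.
SELECT PropertyLocations.PropertyLocation,
       Orders.ID, Orders.PropID, Orders.WorkOrderNum,
       Orders.OrderDesc, Orders.DateCompleted
FROM PropertyLocations
INNER JOIN Orders
    ON PropertyLocations.ID = Orders.PropID
WHERE EXISTS (SELECT 1
              FROM OrderDetail
              WHERE OrderDetail.OrderID = Orders.ID
                AND OrderDetail.TenantName LIKE '%Dr.%') -- at least one matching line item
ORDER BY Orders.WorkOrderNum DESC
Since no OrderDetail column appears in the SELECT list, each order shows up once no matter how many of its line items match.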
Monday, March 26, 2012
help with Benchmark
Hi guys!! does any one know a page at microsoft.com or another site that
has the time sql 2005 takes to insert a row ?
i saw at microsfot.com some benchmarks but based on transactions and
comparing Sql with Oracle and IBM, that's not what i'm looking for, i'm
looking for times in mili seconds or micro seconds Sql takes to do it
Thanks!!
No, that information isn't available from Microsoft (and it would be a
license violation for anyone else to publish it without Microsoft's
permission). One major reason is that you have to say under what
conditions. Is it in a user-defined transaction or not? How many indexes
of what type? Do you need to split a b-tree node? Are there concurrent
users? Are there any conflicts? blah blah blah. In other words, it's not
a useful number in this day and age. Now if there were a standardized
benchmark that gave this information it might be interesting, in the context
of that benchmark. Old benchmarks like Wisconsin used to do some of this
(though it was more query than insert/update focused). But most such
benchmarks are no longer published.
You might find some data in comparisons the MySQL folks publish as they (a)
try to make themselves look good by focusing on single-statement comparisons
rather than complex application comparisons and (b) they don't seem to mind
violating Microsoft (and others') license agreements. At least that was the
situation when I looked a couple of years ago.
Hal Berenson, President
PredictableIT, LLC
http://www.predictableit.com
"Rick" <elmargaro@.hotmail.com> wrote in message
news:OMlF5gCVGHA.5808@.TK2MSFTNGP12.phx.gbl...
> Hi guys!! does any one know a page at microsoft.com or another site that
> has the time sql 2005 takes to insert a row ?
> i saw at microsfot.com some benchmarks but based on transactions and
> comparing Sql with Oracle and IBM, that's not what i'm looking for, i'm
> looking for times in mili seconds or micro seconds Sql takes to do it
> Thanks!!
>
>
|||oki doki, thanks Hal, i'll look for some comparisons.
Regards.
"Hal Berenson" <hberenson@.scalabilityexperts.com> escribi en el mensaje
news:eSysFQJVGHA.4864@.TK2MSFTNGP12.phx.gbl...
> No, that information isn't available from Microsoft (and it would be a
> license violation for anyone else to publish it without Microsoft's
> permission). One major reason is that you have to say under what
> conditions. Is it in a user-defined transaction or not? How many indexes
> of what type? Do you need to split a b-tree node? Are there concurrent
> users? Are there any conflicts? blah blah blah. In other words, it's
> not a useful number in this day and age. Now if there were a standardized
> benchmark that gave this information it might be interesting, in the
> context of that benchmark. Old benchmarks like Wisconsin used to do some
> of this (though it was more query than insert/update focused). But most
> such benchmarks are no longer published.
> You might find some data in comparisons the MySQL folks publish as they
> (a) try to make themselves look good by focusing on single-statement
> comparisons rather than complex application comparisons and (b) they don't
> seem to mind violating Microsoft (and others') license agreements. At
> least that was the situation when I looked a couple of years ago.
> --
> Hal Berenson, President
> PredictableIT, LLC
> http://www.predictableit.com
>
> "Rick" <elmargaro@.hotmail.com> wrote in message
> news:OMlF5gCVGHA.5808@.TK2MSFTNGP12.phx.gbl...
>
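If all that is wanted is a rough number on one specific box rather than a published figure, it is easy enough to measure directly; the sketch below is just an illustration (table name and loop size are made up) and the result is subject to every caveat Hal lists above.
SET NOCOUNT ON
CREATE TABLE #InsertTiming (id int IDENTITY(1,1) PRIMARY KEY, payload char(100) NOT NULL)
DECLARE @i int, @start datetime
SELECT @i = 0, @start = GETDATE()
WHILE @i < 10000
BEGIN
    INSERT INTO #InsertTiming (payload) VALUES (REPLICATE('x', 100))
    SET @i = @i + 1
END
SELECT DATEDIFF(ms, @start, GETDATE()) / 10000.0 AS AvgMsPerInsert -- average milliseconds per single-row insert
DROP TABLE #InsertTiming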
Help with backup strategy
I was just asked by my management to implement a new backup strategy. They
want me to be able to go back at any point in time to recover data. Example
auditing purposes. What do you guys recommend? My database is 8gb and
growing. Currently I was making a complete backup daily but only for 6 days
MTWTFS. Now I must also be able to recover at the very most recent point in
time as well.
Thanks
Transaction log backups:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_bkprst_565v.asp
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_7_backpc_5a61.asp
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_7_backpc_6pv6.asp
--
David Portas
SQL Server MVP
--
"Chris" <Chris@.discussions.microsoft.com> wrote in message
news:63FAD062-5331-4F19-906E-B2B3E4F8223F@.microsoft.com...
> Hi,
> I was just asked by my management to implement a new backup strategy. They
> want me to be able to go back at any point in time to recover data.
> Example
> auditing purposes. What do you guys recommend? My database is 8gb and
> growing. Currently I was making a complete backup daily but only for 6
> days
> MTWTFS. Now I must also be able to recover at the very most recent point
> in
> time as well.
> Thanks|||Chris wrote:
> Hi,
> I was just asked by my management to implement a new backup strategy. They
> want me to be able to go back at any point in time to recover data. Example
> auditing purposes. What do you guys recommend? My database is 8gb and
> growing. Currently I was making a complete backup daily but only for 6 days
> MTWTFS. Now I must also be able to recover at the very most recent point in
> time as well.
> Thanks
Hi Chris
You need to back up both your database and your logfiles. In this way, you
can always restore a database backup and then apply the logfiles up till
the time you want data restored.
You could e.g. do a full database backup at 22.00 and then logfile
backups at 10.00, 14.00 and 18.00. If you then need to restore data to
how it was at 12.00, you restore your full backup and then apply the log
from 10.00 and the log from 14.00 with the STOPAT 12.00 option.
The times is of course just examples, and can be anything you want. If
you just need this for auditing reasons, you could maybe go with only 1
logfile backup.
Try reading up on Backup and Restore in Books On Line - e.g. "Backing Up
and Restoring Databases", That might give you an idea on how to set up
the best plan that suits you.
Regards
Steen|||Chris
http://vyaskn.tripod.com/sql_server_administration_best_practices.htm#Step1
--administaiting best practices
"Chris" <Chris@.discussions.microsoft.com> wrote in message
news:63FAD062-5331-4F19-906E-B2B3E4F8223F@.microsoft.com...
> Hi,
> I was just asked by my management to implement a new backup strategy. They
> want me to be able to go back at any point in time to recover data.
> Example
> auditing purposes. What do you guys recommend? My database is 8gb and
> growing. Currently I was making a complete backup daily but only for 6
> days
> MTWTFS. Now I must also be able to recover at the very most recent point
> in
> time as well.
> Thanks|||Hi,
But would this allow me to go back say 6 months of a full year?
Thanks
"Chris" wrote:
> Hi,
> I was just asked by my management to implement a new backup strategy. They
> want me to be able to go back at any point in time to recover data. Example
> auditing purposes. What do you guys recommend? My database is 8gb and
> growing. Currently I was making a complete backup daily but only for 6 days
> MTWTFS. Now I must also be able to recover at the very most recent point in
> time as well.
> Thanks|||Chris
Yes it would. Let's say you do backups of the database every night and every
hour a log file. Well, due to a hardware failure or something like that you
lost the last full backup. Don't worry, restore a full database backup
from two weeks ago and then apply every log file backup (the last one with
the recovery option). See, the data is in the database.
"Chris" <Chris@.discussions.microsoft.com> wrote in message
news:AA41051A-A58E-4359-B3B3-5671B151F5DC@.microsoft.com...
> Hi,
> But would this allow me to go back say 6 months of a full year?
> Thanks
> "Chris" wrote:
>> Hi,
>> I was just asked by my management to implement a new backup strategy.
>> They
>> want me to be able to go back at any point in time to recover data.
>> Example
>> auditing purposes. What do you guys recommend? My database is 8gb and
>> growing. Currently I was making a complete backup daily but only for 6
>> days
>> MTWTFS. Now I must also be able to recover at the very most recent point
>> in
>> time as well.
>> Thanks
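To make the full-plus-log pattern in the answers above concrete, here is a minimal sketch; the database name, file paths, times and the REPLACE option are made up for illustration, and the database must be using the FULL (or BULK_LOGGED) recovery model for log backups to work.
-- taken on the schedule suggested above
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb_full.bak' WITH INIT
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log_1000.trn' WITH INIT
BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log_1400.trn' WITH INIT
-- later: restore to how things looked at 12.00
RESTORE DATABASE MyDb FROM DISK = 'D:\Backup\MyDb_full.bak' WITH NORECOVERY, REPLACE
RESTORE LOG MyDb FROM DISK = 'D:\Backup\MyDb_log_1000.trn' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = 'D:\Backup\MyDb_log_1400.trn' WITH STOPAT = '2006-04-16 12:00', RECOVERY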
Help with Arithmetic Overflow error
I am getting an error in a stored procedure. The SP was developed some time ago and recently started
throwing this error:
Arithmetic overflow error converting numeric to data type numeric.
This error doesn't seem to make much sense since it happens when
inserting data into a physical table from a temp table in the stored
procedure. I've been pulling my hair out trying to figure out a way to
fix it. I'm pasting the code below along with my print statement and
comment showing where the error occurs. Any help is GREATLY
appreciated. Thanks!
--ALTER PROCEDURE [dbo].SP_OCCalculationMVTriggerTest
declare @dtAsOfdate DATETIME
set @dtAsOfDate = '2006-04-16';
DECLARE @RC INTEGER
--
-- 1) Eligible Investments:
--
-- Input: @SPVId - SPV we are running process for
-- @Yes - value of enum CCPEnum::eYesNoYes (get by lookup).
-- Output: Recordset (temp table) of Collaterals that are eligible for MV Test (#MVTriggerInvestments).
DECLARE @Yes INTEGER
EXEC @RC = [dbo].CPLookupVal 'YesNo', 'Yes', @Yes OUTPUT
IF (@RC<>0) BEGIN
RAISERROR ('SP_OCCalculationMVTriggerTest: Failed to find Yes enum', 16, 1) WITH SETERROR
END
drop table #MVTriggerInvestments
BEGIN
SELECT dbal.SPVId,
dbal.CusipId,
dbal.GroupId,
@dtAsOfDate AS AsOfDate,
dbal.NormalOCRate,
dbal.SteppedUpOCRate,
dbal.AllocMarketValue AS MarketValue,
dbal.NbrDays,
dbal.PriceChangeRatio
INTO #MVTriggerInvestments
FROM DailyCollateralBalance dbal
JOIN CollateralGroupIncludeInOC gin
ON dbal.SPVId = 2
AND gin.SPVId = 2
AND dbal.AsOfDate = '2006-04-16'
AND @dtAsOfDate BETWEEN gin.EffectiveFrom AND gin.EffectiveTo
AND dbal.GroupId = gin.GroupId
AND gin.IncludeInOC = @Yes
END
select * from #MVTriggerInvestments
print 'end #1'
--select * from #MVTriggerInvestments --looks ok
---
-- 2) Calculate Weighted Average Price change ratio Market Value (by Group):
-- PCRMV - Price Change Ratio Market Value
---
-- Input : Recordset of collaterals (having New/Old prices, MarketValue defined)
-- Output: Recordset Aggregated by Group (#GroupOCRate)
drop table #MVTriggerGroup
BEGIN
SELECT A.SPVId,
A.AsOfDate,
A.GroupId,
A.NormalOCRate,
A.SteppedUpOCRate,
A.MarketValue,
[dbo].fn_divide_or_number (B.PriceChangeRatioMarketValue,
B.MarketValueForPeriod, 0.00) as PriceChangeRatio,
CAST (0 AS NUMERIC(12,9))
AS OCRate,
CAST ('' AS VARCHAR(6))
AS OCRateType,
CAST (0 AS NUMERIC(18,2))
AS DiscMarketValue,
CAST (0 AS NUMERIC(18,2))
AS InterestAccrued
INTO #MVTriggerGroup
FROM
(
SELECT SPVId,
AsOfDate,
GroupId,
NormalOCRate,
SteppedUpOCRate,
SUM(MarketValue) AS MarketValue
FROM #MVTriggerInvestments
GROUP BY SPVId, AsOfDate, GroupId, NormalOCRate,
SteppedUpOCRate
) A --works up to here
JOIN
(SELECT SPVId,
SUM(AllocMarketValue) AS MarketValueForPeriod,
SUM(AllocMarketValue * PriceChangeRatio) as
PriceChangeRatioMarketValue,
GroupId
FROM T_DailyCollateralBalance
WHERE SPVId = 2
AND AsOfDate between '2006-03-17' and '2006-04-15'
AND IsBusinessDay = 1
GROUP BY SPVId, GroupId
) B
ON A.SPVId = B.SPVId
AND A.GroupId = B.GroupId
END
print 'end #2'
---
-- Calculate OCRate to apply for each group.
---
BEGIN
UPDATE #MVTriggerGroup
SET OCRate = (CASE WHEN ((PriceChangeRatio < 0) AND
ABS(PriceChangeRatio) > (0.55 * NormalOCRate)) THEN SteppedUpOCRate
ELSE NormalOCRate
END),
OCRateType = (CASE WHEN ((PriceChangeRatio < 0) AND
ABS(PriceChangeRatio) > (0.55 * NormalOCRate)) THEN 'stepup'
ELSE 'normal'
END)
END
print 'end #3'
--
-- Calculate discounted Market Value
--
UPDATE #MVTriggerGroup
SET DiscMarketValue = MarketValue / (1.0 + OCRate * 0.01)
print 'end #4'
--
-- Insert data from temp tables
--
-- 1)
select * from #MVTriggerInvestments
print 'begin tran'
BEGIN TRAN
DELETE T_MVTriggerInvestments
WHERE SPVId = 2 AND AsOfDate = '2006-04-16'
print 'DELETE T_MVTriggerInvestments'
--error is here!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
INSERT T_MVTriggerInvestments
(
SPVId ,
CusipId ,
GroupId ,
AsOfDate ,
NormalOCRate ,
SteppedUpOCRate ,
MarketValue ,
NbrDays ,
OldPrice ,
NewPrice ,
PriceChangeRatio
)
SELECT SPVId ,
CusipId ,
GroupId ,
AsOfDate ,
NormalOCRate ,
SteppedUpOCRate ,
MarketValue ,
NbrDays ,
0.00 ,
0.00 ,
PriceChangeRatio
FROM #MVTriggerInvestments
print 'end mvtriggerinv select'
COMMIT TRAN
-- end error !!!!!!!!!!
-- 2)
print 'begin tran 2'
BEGIN TRAN
DELETE T_OCTestGroup
WHERE SPVId = 2 AND AsOfDate = '2006-04-16'
INSERT T_OCTestGroup
(
SPVId ,
AsOfDate ,
GroupId ,
NormalOCRate ,
SteppedUpOCRate ,
MarketValue ,
PriceChangeRatio,
OCRate ,
OCRateType ,
DiscMarketValue ,
InterestAccrued ,
SectionA ,
CPFace ,
IntExpense ,
Fees ,
SectionB ,
Receivables ,
IntReceivables ,
CashBalance ,
Investments ,
SectionC ,
ExcessCollateral,
MaxCPAllowed
)
SELECT
SPVId ,
AsOfDate ,
GroupId ,
NormalOCRate ,
SteppedUpOCRate ,
MarketValue ,
PriceChangeRatio,
OCRate ,
OCRateType ,
DiscMarketValue ,
InterestAccrued ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0 ,
0
FROM #MVTriggerGroup
print 'end tran 2'
COMMIT TRAN
This is what the data looks like (one row per line; columns are SPVId, CusipId, GroupId, AsOfDate, NormalOCRate, SteppedUpOCRate, MarketValue, NbrDays, PriceChangeRatio):
2  3128HDZR3  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  956  .15469
2  3128JLT66  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  22764923.64  693  .40260
2  3128JLVQ9  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  33325930.88  693  .12272
2  3128JMY50  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  .00  480  -.37161
2  3128NCB86  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  .00  13  3083.50000
2  3128NCNV2  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  .00  19  .00000
2  31295KSK1  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  956  -.10075
2  31295NBU1  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  956  .97381
2  31295NCW6  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  956  .23484
2  31295NDA3  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  3445254.54  956  .51738
2  31295NMK1  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  956  .18800
2  31295NNJ3  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  4723304.57  956  -.06034
2  31295NUE6  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  10974246.83  956  -.33549
2  31295NUG1  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  956  .09947
2  31295NUR7  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  3840958.96  956  -.41991
2  31295NZ76  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  4679164.24  956  -.24881
2  31390YZH4  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  3333555.16  956  -.11334
2  31402C6D9  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  46830311.23  689  .74464
2  31402DB25  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  689  .27197
2  31402DBZ2  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  689  .78266
2  31404EXL5  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  7785530.05  721  -.22302
2  31404LPV6  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  7814559.22  755  .72999
2  31404MBN7  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  721  .03317
2  31405JJ21  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  .00  480  -.26118
2  31409ARA9  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  .00  13  3083.50000
2  31409BB71  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  .00  13  3083.50000
2  31409BGD3  ARM 5-1  2006-04-16 00:00:00.000  5.140000000  7.710000000  .00  13  3083.50000
2  31409DSH7  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  .00  13  3083.50000
2  31409DSM6  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  .00  7  3083.50000
2  31409DVN0  ARM 5-1  2006-04-16 00:00:00.000  5.140000000  7.710000000  .00  13  3083.50000
2  31409UYN9  ARM 5-1  2006-04-16 00:00:00.000  5.140000000  7.710000000  .00  13  3083.50000
2  36225CWV4  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  1575644.70  956  -.14477
2  36225DAY0  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  8995207.86  693  -.17400
2  31337NND6  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  956  .00290
2  31349SEA7  ARM 3-1  2006-04-16 00:00:00.000  4.220000000  6.330000000  26777158.30  875  -.32595
2  31349SHH9  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  19822156.78  840  -.22582
2  31387A3M5  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  956  .50317
2  31391BKL0  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  4267353.02  956  -.98672
2  31391CJA4  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  7446628.34  956  -1.00060
2  31391NM65  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  2783766.24  956  -.33597
2  31404E2E5  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  23112826.15  689  -.16876
2  31404EEJ1  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  12078241.02  784  .10121
2  31404JRP2  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  16699637.14  755  .28834
2  31404JVR3  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  10781379.91  755  .43313
2  31404JWZ4  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  11040810.52  662  .46652
2  31404JXW0  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  3883162.72  721  .27610
2  31404JXZ3  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .06  662  .00905
2  31404KRS3  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  693  -.20539
2  36225C4Y9  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  784  -.04323
2  36225C6Y7  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  .00  721  -.13550
2  36225C7B6  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  19935755.89  721  -.15476
2  36225DAV6  ARM 1-1 FI  2006-04-16 00:00:00.000  2.930000000  4.395000000  11457579.74  693  -.15476
Wish I could upload a spreadsheet...
Wednesday, March 21, 2012
Help with a query
I'm trying to export columns to text files so that they can be accessed via a website to show statistics.
(My SQL database is used for something else and I do not want the website directly connecting to it.)
So first I would have the table ordered by a specific column and then export the top 50 results for example.
I had it working to export to Excel but I lost the query :(
Do I use something like EXPORT COLUMNS or INSERT INTO text file sorta thing
Thanks
Hi Paul
irrespective of the purpose of the db I would have the website access it directly. Why are you unhappy with this?
In any event BCP is an efficient way to get data out of the db and into text files.|||I don't want to lean on the SQL performance and I'm sure constant connections like this would.
I have never used BCP before so I guess I will go and have a look into that
thanks|||Still no luck, BCP is very confusing :S|||is it? it's just a console app. here's how you can export data for an entire table:
bcp MyDatabase.dbo.MyTable out myfile.txt -c -T -SMYSERVER
and here's how to export the result of a query:
bcp "select foo from MyDatabase.dbo.MyTable where bar=12" out myfile.txt -c -T -SMYSERVER
Monday, March 12, 2012
Help With a Calculated Member Calculation
Hello,
I have a fact file that includes a measure called Credit Hours. I also have a dimension called DimTerm that includes time data for two terms: 05Fall and 06Fall. My goal is to get the change between the 05 and 06 terms. So I created a calculated member with the logic below but my percentage change is always 0.00%. Maybe someone knows a way of doing this through MDX which would allow me to connect to Term values?
([Measures].[Credit Hours] - [Measures].[Credit Hours]) / [Measures].[Credit Hours]
Thanks!
Take a look at the ParallelPeriod function. Your calculated member will look something like
([Credit Hours] - ([Credit Hours], ParallelPeriod(...))) / [Credit Hours]
Help with 2 datetime fields-1 stores date, the other time
We have a lame app that uses 2 datetime(8) fields, 1 stores the date, the
other the time.
example query:
select aud_dt, aud_tm
from orders
results:
aud_dt aud_tm
2006-06-08 00:00:00.000 1900-01-01 12:32:26.287
I'm trying to create a query that gives me records from the current date in the past hour.
Here's a script that gives me todays date but I cannot figure out the time:
select aud_dt, aud_tm, datediff(d,aud_dt,getdate()), datediff(mi, aud_tm,
getdate())
from orders
where (datediff(d,aud_dt,getdate()) = 0)
results:
aud_dt aud_tm
datediff(0=today) timediff (since 1900-01-01)
2006-06-08 00:00:00.000 1900-01-01 12:32:26.287 0
55978689
I added this next part to the above query but it does not work since the
date/time is from 1900-01-01
and (datediff(mi, aud_tm, getdate()) <= 60)
Thanks for any help.
rdraider wrote:
> Hi,
> We have a lame app that uses 2 datetime(8) fields, 1 stores the date, the
> other the time.
> example query:
> select aud_dt, aud_tm
> from orders
> results:
> aud_dt aud_tm
> 2006-06-08 00:00:00.000 1900-01-01 12:32:26.287
> I'm trying to create a query that give me records from the current date in
> the past hour.
> Here's a script that gives me todays date but I cannot figure out the time:
> select aud_dt, aud_tm, datediff(d,aud_dt,getdate()), datediff(mi, aud_tm,
> getdate())
> from orders
> where (datediff(d,aud_dt,getdate()) = 0)
> results:
> aud_dt aud_tm
> datediff(0=today) timediff (since 1900-01-01)
> 2006-06-08 00:00:00.000 1900-01-01 12:32:26.287 0
> 55978689
>
> I added this next part to the above query but it does not work since the
> date/time is from 1900-01-01
> and (datediff(mi, aud_tm, getdate()) <= 60)
>
> Thanks for any help.
The correct way would be to fix the database and use one datetime
column. I'll assume you already know this and that it isn't possible
for some reason.
So, if you want to combine those two into one datetime field (which you
could then use in a query however you like) you can use something like
this:
cast((cast(aud_dt as float) + cast(aud_tm as float)) as datetime)
although you might lose some precision in the milliseconds. If that's
unacceptable, you can instead do this:
convert(datetime, convert(varchar(10), aud_dt, 1) + ' ' +
convert(varchar(12), aud_tm, 14))
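For example (untested sketch, using the column and table names from the post), plugging the first expression into the original query should return today's rows from the past hour:
select aud_dt, aud_tm
from orders
where datediff(d, aud_dt, getdate()) = 0
  and datediff(mi, cast(cast(aud_dt as float) + cast(aud_tm as float) as datetime), getdate()) <= 60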
Thank you.
"ZeldorBlat" <zeldorblat@.gmail.com> wrote in message
news:1149832253.251499.139790@.h76g2000cwa.googlegroups.com...
> rdraider wrote:
>> Hi,
>> We have a lame app that uses 2 datetime(8) fields, 1 stores the date, the
>> other the time.
>> example query:
>>
>> select aud_dt, aud_tm
>> from orders
>>
>> results:
>> aud_dt aud_tm
>> 2006-06-08 00:00:00.000 1900-01-01 12:32:26.287
>>
>> I'm trying to create a query that give me records from the current date
>> in
>> the past hour.
>> Here's a script that gives me todays date but I cannot figure out the
>> time:
>>
>> select aud_dt, aud_tm, datediff(d,aud_dt,getdate()), datediff(mi, aud_tm,
>> getdate())
>> from orders
>> where (datediff(d,aud_dt,getdate()) = 0)
>>
>> results:
>> aud_dt aud_tm
>> datediff(0=today) timediff (since 1900-01-01)
>> 2006-06-08 00:00:00.000 1900-01-01 12:32:26.287 0
>> 55978689
>>
>>
>> I added this next part to the above query but it does not work since the
>> date/time is from 1900-01-01
>> and (datediff(mi, aud_tm, getdate()) <= 60)
>>
>>
>> Thanks for any help.
> The correct way would be to fix the database and use one datetime
> column. I'll assume you already know this and that it isn't possible
> for some reason.
> So, if you want to combine those two into one datetime field (which you
> could then use in a query however you like) you can use something like
> this:
> cast((cast(aud_dt as float) + cast(aud_tm as float)) as datetime)
> although you might lose some precision in the miliseconds. If that's
> unacceptable, you can instead do this:
> convert(datetime, convert(varchar(10), aud_dt, 1) + ' ' +
> convert(varchar(10), aud_tm, 14))
Wednesday, March 7, 2012
Help Updating Data Across Servers
We have two DataBases Residing on two different Servers. We would like to
set up real time updates between these two servers.
We have an Inventory System on server A and a Tracking System on Server B.
Now a Transaction in Server B should Update a Certain Table or Tables on
Server A, and vice versa.
I would like people to help me choose the Best solution for this to work
successfully.
I was leaning towards Linked Servers and making use of triggers. Is
this a Good Option?
But what happens if the Trigger Fails? How do we get the data Across to the
Servers?
Thanks for your help in advance.
Thanks,
Jothi
Hi,
If the database is residing in 2 servers and if you need to update data to
and fro then the only solution is using Linked servers and Triggers :)
But an update Trigger using a linked server will slow down the process.
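As a rough illustration only (the linked server, database, table, and column names here are made up), a trigger on Server B that pushes inserted rows across a linked server to Server A could look like this:
CREATE TRIGGER trg_PushToInventory ON dbo.TrackingTransactions
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    -- ServerA is the linked server name; the remote table is referenced by its four-part name
    INSERT INTO ServerA.InventoryDB.dbo.InventoryUpdates (ItemId, Qty, TranDate)
    SELECT ItemId, Qty, TranDate
    FROM inserted
END
Keep in mind that if the linked server is unreachable, the trigger fails and the whole originating transaction on Server B is rolled back.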
Thanks
Hari
SQL Server MVP
"Jothi" <Jothi@.discussions.microsoft.com> wrote in message
news:1B9DD1BE-0DFB-4480-BD24-97C4C10A97C2@.microsoft.com...
> Hi,
> We have two DataBases Residing on two different Servers. We would like to
> set up real time uodates between these two servers.
> We have an Inventory System on server A and a Tracking System on Server B.
> Now an Transaction in Server B should Update a Certain Table or Tables on
> Server A. and vice versa.
> I would like people to help me choose the Best soulution for this to work
> successfully.
> I was leaning towards Linked Servers anfd The making use of triggers. Is
> this a Good Option.
> But waht happens if the Trigger Fails how do we get the data Across to the
> Servers.
> Thanks for your help in advance.
> Thanks,
> Jothi
>
|||Hari,
Yes the data is across two servers.
How do I handle situations where a trigger fails
due to non-availability of a server?
Thanks,
Jothi
"Hari Prasad" wrote:
> Hi,
> If the database is residing in 2 servers and if you need to update data to
> and fro then the only solution is using Linked servers and Triggers :)
> But update Trigger using linked server will slow down the process.
> Thanks
> Hari
> SQL Server MVP
> "Jothi" <Jothi@.discussions.microsoft.com> wrote in message
> news:1B9DD1BE-0DFB-4480-BD24-97C4C10A97C2@.microsoft.com...
>
>
Help troubleshooting DTC start
Our Small Business Server 2003 (SBS) can not start the DTC (distributed
transaction coordinator) service. Each time it tries, an error appears in
the System log that says, "Error - Distributed Transaction Coordinator
failed to start.
(event id 7000) "The service did not respond to the start or control request
in a timely fashion".
SBS has a single instance of SQL. We have tried the following:
1. Stopping the SQL Server, Server Agent and attempting to restart DTC -
same error
2. Stopping SQL server, Server Agent and then typing "MSDTC -resetlog" in a
command window. The command appears to have completed successfully, however
the same error appears when we attempt to start DTC.
Does anyone have any suggestions on how to trouble-shoot this particular
issue. It happens consistently on this server.
Thanks for your help.
Bob
Hi Bob,
Go to the windows\system32\MSDtc, rename MSDTC.LOG and create a new file
MSDTC.LOG.
Open a command prompt, type msdtc -resetlog.
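For example, from a command prompt on the server (assuming the default Windows directory), those steps translate to roughly:
cd /d %windir%\system32\MSDtc
ren MSDTC.LOG MSDTC.LOG.old
type nul > MSDTC.LOG
msdtc -resetlog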
Hugs,
Rodrigo Fernandes
"Bob L. - SBS" wrote:
> Hi -
> Our Small Business Server 2003 (SBS) can not start the DTC (distributed
> transaction coordinator) service. Each time it tries, an error appears is
> the System log that says, " Error - Distributed Transaction Coordinator
> failed to start.
> (event id 7000) "The service did not respond to the start or control request
> in a timely fashion".
> SBS has a single instance of SQL. We have tried the following:
> 1. Stopping the SQL Server, Server Agent and attempting to restart DTC -
> same error
> 2. Stopping SQL server, Server Agent and then typing "MSDTC -resetlog" in a
> command window. The command appears to have completed successfully, however
> the same error appears when we attempt to start DTC.
> Does anyone have any suggestions on how to trouble-shoot this particular
> issue. It happens consistently on this server.
> Thanks for your help.
> Bob
>
|||Thanks for the quick response, however...
Went to dir MSDtc and found MSDTC.LOG file (4096 KB).
Renamed current log file.
Created new txt file with same name MSDTC.LOG
Opened CMD window, entered msdtc -resetlog (ran without error)
Stopped SQL Server Agent service
Stopped SQL Server service
Attempted to start DTC service - same error
Looked back at new log file - it is now 4096 KB in size
?
Bob
"Rodrigo Fernandes" wrote:
> Hi Bob,
> Go to the windows\system32\MSDtc, rename MSDTC.LOG and create a new file
> MSDTC.LOG.
> Open a command prompt, type msdtc -resetlog.
> Hugs,
> Rodrigo Fernandes
>
> "Bob L. - SBS" wrote:
|||Hi Bob,
What account do you use to start the DTC Service?
Try to start DTC with the Local Administrator account.
I'm searching for other solutions to help you.
Hugs,
Rodrigo Fernandes
MCSE/MCDBA
"Bob L. - SBS" wrote:
> Hi -
> Our Small Business Server 2003 (SBS) can not start the DTC (distributed
> transaction coordinator) service. Each time it tries, an error appears is
> the System log that says, " Error - Distributed Transaction Coordinator
> failed to start.
> (event id 7000) "The service did not respond to the start or control request
> in a timely fashion".
> SBS has a single instance of SQL. We have tried the following:
> 1. Stopping the SQL Server, Server Agent and attempting to restart DTC -
> same error
> 2. Stopping SQL server, Server Agent and then typing "MSDTC -resetlog" in a
> command window. The command appears to have completed successfully, however
> the same error appears when we attempt to start DTC.
> Does anyone have any suggestions on how to trouble-shoot this particular
> issue. It happens consistently on this server.
> Thanks for your help.
> Bob
>
|||Rodrigo - Thank you. Changing accounts from the Network Service (NT
Authority) account to the Local Administrator account let DTC start without a
problem.
You are "the man of the hour".
Bob
PS - I'm attempting to isolate the permissions problem with Regmon
(Sysinternals.com), but it is "Started" and that is what really matters.
"Rodrigo Fernandes" wrote:
> Hi Bob,
> What the account you use to start DTC Service ?
> Try to start DTC with Local Administrator account.
> I'm searching anothers solutions for help you.
> Hugs,
> Rodrigo Fernandes
> MCSE/MCDBA
> "Bob L. - SBS" wrote:
|||Actually, changing accounts "just look like it worked." In SBS 2003,
Microsoft requires that DTC be run with the Network Service account (for
security reasons). So, back to the fun...
Here is what I learned from this exercise:
1. The SLOW boot was caused by malfunctioning COM+
- Component Services was VERY SLOW to open
- When finally open, "My Computer" would not display on right side
- Nor when right-clicking would any MSDTC function be visible
2. The malfunction in COM+ was that MS DTC was not starting
- Error 1053 - when attempting to start DTC
- Changing accounts from Network Service to Local Account let DTC start
- However, in SBS 2003, DTC must be run with Network Service account
(MS security "fix")
- Running MSDTC - uninstall and MSDTC -install did not help problem
- MSDTC -resetlog (with accompanying file changes) did not help problem
- Running filemon (sysinternals.com) on DTC start with Network Service
account showed two files that Network Service did not have permissions to
access.
- Changing (restoring) permissions on these two files cured the
problem. DTC started under Network Service account and COM+ snapped back
into operation (after a reboot).
We are talking to a third-party vendor about changing file permissions...
Thanks again.
Bob L. - SBS
"Bob L. - SBS" wrote:
> Rodrigo - Thank you. Changing accounts from the Network Service (NT
> Authority) account to the Local Administrator account let DTC start without a
> problem.
> You are "the man of the hour".
> Bob
> PS - I'm attempting to isolate the permissions problem with Regmon
> (Sysinternals.com), but it is "Started" and that is what really matters.
> "Rodrigo Fernandes" wrote: