Friday, March 23, 2012
Multiplication by row without a cursor? Myth or Madness?
I haven't run across the need before...but in a nutshell...we have a table with many rows of data, and one column in each row needs to be multiplied by the same column in all of the table's other rows.
For example...
MyKey int, MyFloat float(53)
I want to multiply MyFloat by all the other MyFloat values in the table.
Similar to SUM(MyFloat) but something like PRODUCT(MyFloat).
Is there an aggregate kept in a basement closet somewhere, or some other way to perform this operation without using a cursor to do it?
An example of my table:
1 3.2
2 4.1
3 7.1
if I could do a PRODUCT(MyFloat) I would want the result to be (3.2 X 4.1 X 7.1) or 93.152
Do I have to do this with a cursor?
Thanks!
Paul|||Got it...
DECLARE @MyInt int
SET @MyInt = 1
SELECT @MyInt = @MyInt * MyFloat FROM MyTable WHERE MyKey = <whatever>
PRINT @MyInt
Yeah, I know...it's an overflow waiting to happen...but still...
Thanks for reading...and sorry to talk to myself yet again...I guess my Mom was right when she told me that I'm just a waste of bandwidth... :(|||What, did nobody pay attention during High School math classes?
select POWER(10, sum(log10(MyFloat))) from MyTable|||I have a harder time believing he has a friend...
:D
Just joking of course...|||Whee!!! The forum is working again!!! (I HEARD that, Brett...and a DOG and 3 tortoises still count as friends!)
Thanks for the suggestion, Blindman...I guess all that time and money was wasted in my math classes! :) Ya know, somehow, I just KNEW I shoulda paid more attention in my LOG class :D
However...(hey, you knew this was coming) that doesn't work, of course, for the negative numbers I've got in my "product" table...
I know a select can be written to check the sign of each number included in the calculation, but before I go and burn an excessive number of brain cells (or search keystrokes) anyone have a handy-dandy code block available for that part of the calculation?
It will, I am reasonably certain, require merely counting the number of negative numbers in the calculation and an ABS, like:
select POWER(10, sum(log10(ABS(MyFloat)))) from MyTable
Then somehow applying the sign ( ans * -1 for odd number of negative MyFloat entries in the calculation) to the result, but I am presently at a loss for how to apply the sign aspect of the problem.
Off to the races...
Thanks again!
Paul|||I imagine it would be as simple as:
select POWER(10, sum(log10(ABS(MyFloat))))*sign(MyFloat) from MyTable
My only problem with it is that if you have a 0 anywhere in the column, you could end up with problems, too. I think for simplicity's sake, you should go with your first query. Of course, SQL 2005 is bound to disable Blindman's query that it was based on, since it has proven itself to be way too useful.|||OK, from what I have been able to ascertain (yep, just bought a new dictionary, thankyouverymuch! ;) ) the proposed addition of SIGN(MyFloat) won't work because that isn't (at least as written) an aggregate, so the compiler complains about it being invalid in the select list 'cause there is no GROUP BY nor is it in an aggregate.
Neither can it be added into the POWER(xxx) expression - because, again, POWER() apparently maintains a strong and lasting hatred of negative numbers - preferring to relegate them to nothingness rather than deal with them in a play-nice-in-the-sandbox way. But I digress...
What I needed was a way to apply the sign AFTER the PRODUCT-like operation.
What I found was several web hits that had logic for PRODUCT-type implementations in SQL that were NEAR what I wanted...and so...without further delay, here is my final product:
SELECT (EXP (SUM (LOG (CASE WHEN MyFloat = 0.00
THEN NULL
ELSE ABS(MyFloat) END))))
* (CASE WHEN (SUM (CASE WHEN SIGN(MyFloat) = -1
THEN 1
ELSE 0 END)) % 2 = 1
THEN -1.00
ELSE 1.00 END) AS Prod
FROM MyTable
The first CASE handles the situation of having NULL or Zero values in the table (which I simply want to ignore).
The second CASE applies the sign to the result, by keeping track of the NUMBER of negative values, and then multiplying the EXP(SUM(LOG())) overall result by -1 if the COUNT of negative values is odd, or by 1 if the COUNT is even.
I wish I could take credit for this, but alas, it's just a workable bastardization of the work of other folks ;)
I still am bothered by the complexity...I mean, if the calculation changes a year from now, I KNOW they are gonna send that Jr. Programmer to ask me about what the HEY I was thinking, or else just have him/her make changes without knowing what it does to start with (though I'm SURE that never happens ;) ).
The whole thing is further complicated because I (err...my "friend", I mean) have to perform some other calculations on the value returned from the above select...and if I was to use my original method, I'd have to make two passes through the table. That may be preferable to the relative complexity of this select...but I'll try it and see if my peers shoot me down in the code review ;)
Thanks guys!!! Great soundboarding!
Oh, and it REALLY started out to be the question of a co-worker working on the same project as I...it became mine when it took me too long to get back to her with a "I would do this..." email... :(|||Accounts for negative values and nulls:
------------------
set nocount on
declare @PRODUCT table (Product varchar(30), Value float)
insert into @Product (Product, Value) Values ('Coffee', -4.4)
insert into @Product (Product, Value) Values ('Coffee', -5.5)
insert into @Product (Product, Value) Values ('Coffee', 6.6)
insert into @Product (Product, Value) Values ('Coffee', 7.7)
insert into @Product (Product, Value) Values ('Tea', -44)
insert into @Product (Product, Value) Values ('Tea', 55)
insert into @Product (Product, Value) Values ('Tea', 66)
insert into @Product (Product, Value) Values ('Tea', 77)
insert into @Product (Product, Value) Values ('Water', -44)
insert into @Product (Product, Value) Values ('Water', 55)
insert into @Product (Product, Value) Values ('Water', 66)
insert into @Product (Product, Value) Values ('Water', 77)
insert into @Product (Product, Value) Values ('Water', 0)
select Product, EXP(sum(log(abs(nullif(Value, 0))))) *
(1+2*(cast(sum(sign(Value)-1)/2 as int) % 2)) * min(abs(sign(Value)))
from @PRODUCT
group by Product
------------------
This is truly mental mssql-bation...|||Hi, this is Paul's boss...thanks for exploding his head...
Now..WHO is going to clean up this mess? ;)|||Well, if you'll check out my title I AM the Electronic Janitor.
Now where did I put my Electro-mop...?
Multiplication Aggregate
USE Northwind
GO
CREATE FUNCTION udf_MULT (@x float)
RETURNS float
AS
BEGIN
DECLARE @y float
SELECT @y = @x * 2
RETURN @y
END
GO
SELECT Freight, dbo.udf_MULT(Freight) FROM Orders
GO|||Brett, i don't think your udf_MULT is what Teddy wants
sample rows:
name value
foo 2
bar 4
qux 6
fap 8
select sum(value) from ... gives 20
select mult(value) from ..., assuming there were such an aggregate function as "mult()", would return 384
maybe a variation of that routine written up in that sqlteam article using coalesce to produce a comma-delimited list?
rudy|||You mean like:
USE Northwind
DECLARE @x money
SELECT @x = ISNULL(@x, 1) + Freight FROM Orders
SELECT @x
I added them because of overflow...but you could use * instead...|||mmm, i love overflow
perhaps that's why "mult()" was never invented -- too easy to blow up the query real good
And for the +, the ISNULL value should be 0, not 1... 1 is for multiplication
hmmmmmmmmm...my cup runneth over with a flaming homer...or was that a flaming moe...
Hey...seeing it in type...
Noooow I get it...
Great episode...
Got to pick me up some cough medicine on the way home...|||Thanks for the reply guys!!
I've been a bit busy with the holiday so I haven't had much time to make my rounds here. So far I've learned that this essentially is not possible with anything other than ints without a udf. Currently I'm searching for a udf that will do this for me, as I am not terribly familiar with coding them myself. Perhaps it's time to up the learning curve eh?|||a MULT function for varchar?
Hmmmm, now you've got me TOTALLY lost...
What exactly are you trying to do?|||Originally posted by Brett Kaiser
a MULT function for varchar?
Hmmmm, now you've got me TOTALLY lost...
What exactly are you trying to do?
Not for varchar, float. The closest I've found so far is sort of a bizarre workaround that doesn't seem to work properly over large recordsets:
EXP(SUM(LOG(value)))|||Originally posted by Brett Kaiser
What exactly are you trying to do?
Do you mean like what r937 said?
You can't (to my knowledge) create a user defined scalar function...
But you can employ the other method I posted
DECLARE @x bigint
SELECT @x = @x + floatColumn FROM Table
SELECT @x
Is that what you're trying to do? Multiply all the numbers in 1 column?|||Originally posted by Brett Kaiser
Do you mean like what r937 said?
You can't (to my knowledge) create a user defined scalar function...
But you can employ the other method I posted
DECLARE @x bigint
SELECT @x = @x + floatColumn FROM Table
SELECT @x
Is that what you're trying to do? Multiply all the numbers in 1 column?
That appears to be returning null. Does the SELECT @x = iterate through the entire recordset? I'm a bit new with variable usage in the queries themselves. Generally I would do this sort of thing at the application level with my front-end.|||Give this a read...
http://sqlteam.com/item.asp?ItemID=2368|||That appears to be returning nullhence the need for the COALESCE
and yes, you did mention this earlier, brett, but it's worth repeating -- 0 for addition, 1 for multiplication
:D|||Originally posted by r937
hence the need for the COALESCE
and yes, you did mention this earlier, brett, but it's worth repeating -- 0 for addition, 1 for multiplication
:D
Ok, I think I'm getting on the right track, but I am still not quite getting what I need. Currently I am using:
DECLARE @x FLOAT
SELECT @x = COALESCE(@x, 1) * myField
FROM myTable
SELECT @x
That is now returning 0.0. Oddly enough, if I use + instead of *, I return the same as SUM() + 1. So it does appear to function exactly as expected when applied to addition (the +1 is the result of using 1 as opposed to 0 for the null return); I'm still a bit stumped as to why it returns zero for multiplication. What am I overlooking?|||overlooking? probably precision and/or scale (i can never remember which is which)
try
SELECT @x = COALESCE(@x, 1.000000000) * myField
overlooking? probably precision and/or scale (i can never remember which is which)
try
SELECT @x = COALESCE(@x, 1.000000000) * myField
The issue was an overflow error. I ran the same syntax over a much smaller dataset and produced the desired result.
Thanks a bunch guys!!
This is at least enough to get me over the "how the hell..." hump that everyone hits once in a while.
Much obliged.|||And just for another twist...what if 1 of your rows has a 0
Come on...
puuuuleeeeeeeeeeeeze
Tell us this is more than an academic exercise...because I can't think of a practical application at all...|||Originally posted by Brett Kaiser
And just for another twist...what if 1 of your rows has a 0
Come on...
puuuuleeeeeeeeeeeeze
Tell us this is more than an academic exercise...because I can't think of a practical application at all...
The practical application is to a series of modifiers to an insurance quote. If one of them is zero, then the quote is free. I doubt that will happen.
:)|||So you have a very specific predicate...the quote...how many modifiers per quote on average?
...and if it can happen it will happen...just a matter of time...
CREATE TABLE myTable99 (Col1 int NOT NULL CONSTRAINT myTable99_chk1 CHECK (Col1 <> 0))|||Originally posted by Brett Kaiser
So you have a very specific predicate...the quote...how many modifiers per quote on average?
...and if it can happen it will happen...just a matter of time...
CREATE TABLE myTable99 (Col1 int NOT NULL CONSTRAINT myTable99_chk1 CHECK (Col1 <> 0))
approx 7-8 modifiers per quote. The mods tend to be in the .75 - 1.5 range. And it will never be 0. Not unless the chairman of the board owes someone a serious favor. In which case, that's really not my problem to begin with.
The issue is the varying number of modifiers for each quote. Some will only have one, some will have 8. That's why this mult function comes in handy. This is fairly easy to do client-side, but for some reason was murderous to do in sql until this morning.|||So you would check to see if the result of this multiplier function is zero, and then act accordingly?
There are other methods that might work. For instance:
select min(Abs(YourColumn))
...would return zero if any of the values is zero.
blindman|||...and you can include this directly in a calculation:
Cost * cast(cast(min(Abs(YourColumn)) as bit) as int)
...would result in a cost of $0 if any of the multipliers is zero.
blindman|||Originally posted by blindman
...and you can include this directly in a calculation:
Cost * cast(cast(min(Abs(YourColumn)) as bit) as int)
...would result in a cost of $0 if any of the multipliers is zero.
blindman
That's really not an issue. This is well and solved.
Anywho, the current method would return 0 as-is.|||Originally posted by Teddy
Not for varchar, float. The closest I've found so far is sort of a bizarre workaround that doesn't seem to work properly over large recordsets:
EXP(SUM(LOG(value)))
I do like the workaround solution. Could you tell me what you mean by "doesn't seem to work properly over large recordsets"? Is it the overflow problem? I tested it on a table of 4000 records, and the result is as expected.|||Originally posted by shianmiin
I do like the workaround solution. Could you tell me what do you mean by "doesn't seem to work properly over large recordsets"? Is it the overflow problem?
edit: read that wrong.
I'm not sure what the issue is. It returns nothing at all when I use a large set. Very odd.|||Originally posted by Teddy
edit: read that wrong.
I'm not sure what the issue is. It returns nothing at all when I use a large set. Very odd.
for actual cases, i.e.
approx 7-8 modifiers per quote. The mods tend to be in the .75 - 1.5 range. And it will never be 0. Not unless the chairman of the board owes someone a serious favor.
I think this would be a best solution to me. :)
select
case
when min(isnull(f1, 0))=0 then 0
else exp(sum(log(isnull(case when f1=0 then 1 else f1 end,1))))
end
from t1
The query posted handles zero and null value correctly.
And I don't think overflow would be a problem.
(There shouldn't be negative values in the table; if there are, you need to add some code to handle them properly)|||The issue is:
USE Northwind
GO
DECLARE @x BIGINT
SELECT @x = 1
-- Won't blow up
SELECT @x = @x * ISNULL(EmployeeId, 1)
FROM Orders
WHERE EmployeeId = 1
SELECT @@ERROR AS ErrorCode
-- Will blow up
SELECT @x
SELECT @x = 1
SELECT @x = @x * ISNULL(EmployeeId, 1)
FROM Orders
WHERE EmployeeId = 2
SELECT @@ERROR AS ErrorCode
SELECT @x
--The error should read:
-- Server: Msg 8115, Level 16, State 2, Line 11
-- Arithmetic overflow error converting expression to data type bigint.
In any event, error checking is a must...for all code...|||Originally posted by Brett Kaiser
The issue is:
USE Northwind
GO
DECLARE @x BIGINT
SELECT @x = 1
-- Won't blow up
SELECT @x = @x * ISNULL(EmployeeId, 1)
FROM Orders
WHERE EmployeeId = 1
SELECT @@ERROR AS ErrorCode
-- Will blow up
SELECT @x
SELECT @x = 1
SELECT @x = @x * ISNULL(EmployeeId, 1)
FROM Orders
WHERE EmployeeId = 2
SELECT @@ERROR AS ErrorCode
SELECT @x
--The error should read:
-- Server: Msg 8115, Level 16, State 2, Line 11
-- Arithmetic overflow error converting expression to data type bigint.
In any event, error checking is a must...for all code...
Since the query returns float, the range would be -1.79E+308 through 1.79E+308; that's what I meant when I said that overflow shouldn't be a problem in an actual situation. To prevent overflow errors, the query may check the value before applying EXP(), which would be something like
case
when sum(log(...)) > 307 then ... -- overflow
when sum(log(...)) < -307 then ... -- underflow
else exp(sum(log(...)))
end|||Ahhh proactive error handling...very nice...
just watch...the [banging head]overhead[/banging head] for some statements...|||Originally posted by Brett Kaiser
Ahhh proactive error handling...very nice...
just watch...the [banging head]overhead[/banging head] for some statements...
I agree that the overhead should be considered. Normally procedural statements have more control over what needs to be done, while a combination of functions tends to make code look neat. Which way to go depends on the actual situation. :)
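For later readers, here is the whole thread pulled together into one guarded PRODUCT-style query. This is a sketch, not tested production code: the MyTable/MyFloat names are the examples from the top of the thread, the 307 bound comes from the overflow check sketched above, and the zero and negative handling follows the CASE logic worked out earlier (with NULL used as a placeholder result on overflow):

```sql
SELECT CASE
           -- any zero in the column makes the product zero
           WHEN MIN(ABS(SIGN(MyFloat))) = 0 THEN 0
           -- the summed logs say the product would overflow float's ~1.79E+308 range
           WHEN SUM(LOG10(ABS(NULLIF(MyFloat, 0)))) > 307 THEN NULL
           ELSE POWER(10, SUM(LOG10(ABS(NULLIF(MyFloat, 0)))))
                -- flip the sign when the count of negative values is odd
                * CASE WHEN SUM(CASE WHEN MyFloat < 0 THEN 1 ELSE 0 END) % 2 = 1
                       THEN -1.0 ELSE 1.0 END
       END AS Prod
FROM MyTable
```

The NULLIF inside LOG10 matters even in the branches that don't fire: aggregate arguments are evaluated over every row regardless of which CASE branch wins, so a bare LOG10(0) would still raise a domain error.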
Wednesday, March 21, 2012
Multiple where clause help please
What is wrong with this query? There are no messages other than (0 row(s)
affected).
The criteria 5 and 16 does exist.
UPDATE articles
SET DEPART_REF = '19'
where ARTICLE_PARENT_ID = '5' or ARTICLE_PARENT_ID = '16'
Thanks in advance from a sql newbie
Jake
Nevermind, I am asking for the wrong fields in the where clause!
"GitarJake" <spamaintme@oz.net> wrote in message
news:SXEXd.58461$SF.35123@.lakeread08...
> Hello,
> What is wrong with this query? There are no messages other than (0 row(s)
> affected).
> The criteria 5 and 16 does exist.
> UPDATE articles
> SET DEPART_REF = '19'
> where ARTICLE_PARENT_ID = '5' or ARTICLE_PARENT_ID = '16'
> Thanks in advance from a sql newbie
> Jake
>
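An editorial aside: once the right column is in the WHERE clause, the two OR'd conditions can also be collapsed with IN, which reads a little better as the list of IDs grows (same articles/DEPART_REF columns as the post above):

```sql
UPDATE articles
SET DEPART_REF = '19'
WHERE ARTICLE_PARENT_ID IN ('5', '16')
```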
Multiple Values for single row
Now every user will search for and add other users in the database... so every user will have a contact list... I have no idea how to implement this...
so far i created a table 'UserAccount' with column names as
UserName as varchar(50)
Password as varchar(50)
EmailID as varchar(100)
DateOfJoining as datetime
UserID as int --> this is unique per user... I enabled automatic increment, and this is the primary key.
so now every user must have a list of other userid's.. as contact list..
Any help or ideas will be great, since I have no clue how to put multiple values in each row... I didn't even know how to search for this problem's solution... I am sorry if this is posted somewhere else..
THANK YOU !
if it helps... I am using SQL Server Express Edition, and I am accessing it using asp.net/C#
Hi,
Create a contact list table which includes columns,
UserId,
ContactId
Eralper
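A minimal sketch of Eralper's suggestion, assuming the UserAccount table described above (the constraint names are illustrative): one row per (user, contact) pair, so each user can have any number of contacts.

```sql
CREATE TABLE ContactList (
    UserID    int NOT NULL,  -- the owner of the contact list
    ContactID int NOT NULL,  -- a user who appears on that list
    CONSTRAINT PK_ContactList PRIMARY KEY (UserID, ContactID),
    CONSTRAINT FK_ContactList_User    FOREIGN KEY (UserID)    REFERENCES UserAccount (UserID),
    CONSTRAINT FK_ContactList_Contact FOREIGN KEY (ContactID) REFERENCES UserAccount (UserID)
)
```

The composite primary key keeps the same contact from being added to the same list twice, and the two foreign keys tie both columns back to UserAccount.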
Monday, March 19, 2012
Multiple Unique Keys
Hello everyone. I am attempting to construct a table where a single row can be
individually identified by a unique value that is composed of more than one
column. How can I define multiple unique keys for a table? Thanks to everyone
for being there to help.
Simply include all the appropriate columns in the PK definition:
CREATE TABLE [dbo].[Demo] (
[a] [int] NOT NULL ,
[b] [int] NOT NULL ,
[c] [int] NOT NULL ,
CONSTRAINT [PK_Demo] PRIMARY KEY CLUSTERED
(
[a],
[b],
[c]
)
)
Roy Harvey
Beacon Falls, CT
On Wed, 29 Nov 2006 07:46:01 -0800, coenzyme
<coenzyme@.discussions.microsoft.com> wrote:
>Hello everyone. I attempting to construct a table where a single row can be
>individually identified by a unique value that is composed of more than one
>column. How can define multiple unique keys for a table? Thanks to everyone
>for being there to help.
|||Thanks for responding Roy.
"Roy Harvey" wrote:
> Simply include all the appropriate columns in the PK definition:
> CREATE TABLE [dbo].[Demo] (
> [a] [int] NOT NULL ,
> [b] [int] NOT NULL ,
> [c] [int] NOT NULL ,
> CONSTRAINT [PK_Demo] PRIMARY KEY CLUSTERED
> (
> [a],
> [b],
> [c]
> )
> )
> Roy Harvey
> Beacon Falls, CT
> On Wed, 29 Nov 2006 07:46:01 -0800, coenzyme
> <coenzyme@.discussions.microsoft.com> wrote:
>
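Worth noting for anyone finding this thread later: Roy's answer builds one composite key across several columns. If the goal had instead been several independent unique keys on the same table, that is done with separate UNIQUE constraints alongside the primary key; a sketch in the same style as the Demo table above (names are illustrative):

```sql
CREATE TABLE [dbo].[Demo2] (
    [a] [int] NOT NULL,
    [b] [int] NOT NULL,
    [c] [int] NOT NULL,
    CONSTRAINT [PK_Demo2] PRIMARY KEY CLUSTERED ([a]),
    CONSTRAINT [UQ_Demo2_b] UNIQUE ([b]),  -- [b] must be unique on its own
    CONSTRAINT [UQ_Demo2_c] UNIQUE ([c])   -- and so must [c]
)
```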
Multiple ToggleItems
I have a report that utilizes five separate tables. All of the tables
are filtered by the same datetimestamp, which appears in the detail
row. Right now, I have the ToggleItem for all of the tables set to
the same text box at the top of the report. I can also set the
ToggleItem for each of the tables to a cell contained in the same
table. Is there any way to specify multiple Toggleitem values, i.e.
the text box at the top of the report AND the cell in the table
itself?
No, this isn't possible.
The closest I think you can get is to have a report parameter which
determines the initial visibility state of the tables.
<Hidden>=Parameters!InitiallyHidden.Value</Hidden>
Then instead of using the top level textbox be a toggle item, you make it a
drillthrough back to the same report with the initallyhidden parameter value
set to
=Not(Parameters!InitiallyHidden.Value)
--
This post is provided 'AS IS' with no warranties, and confers no rights. All
rights reserved. Some assembly required. Batteries not included. Your
mileage may vary. Objects in mirror may be closer than they appear. No user
serviceable parts inside. Opening cover voids warranty. Keep out of reach of
children under 3.
"utterman" <chris.utter@.gmail.com> wrote in message
news:806a4b3c.0407281257.46ecf520@.posting.google.com...
> I have a report that utilizes five separate tables. All of the tables
> are filtered by the same datetimestamp, which appears in the detail
> row. Right now, I have the ToggleItem for all of the tables set to
> the same text box at the top of the report. I can also set the
> ToggleItem for each of the tables to a cell contained in the same
> table. Is there any way to specify multiple Toggleitem values, i.e.
> the text box at the top of the report AND the cell in the table
> itself?
Multiple text updates to a single row
I have a table A that has related records in table B. I need to run an update to concatenate certain values in table B into a single value in table A.
Since an UPDATE can't update the same row twice, is there any way I can do this other than use a Cursor?
No need to use a cursor; a simple update should be enough, but you need to watch out for the data types. You should be able to use something like this.
Code Snippet
declare @a table (x int, y varchar(200))
declare @b table (x int, i int, j char(2), k bit)
insert @a select 1, NULL
insert @a select 2, NULL
insert @a select 3, NULL
insert @a select 4, NULL
insert @a select 5, NULL
insert @b select 1, 55, 'AB', 0
insert @b select 2, 66, 'CD', 1
insert @b select 3, 77, 'EF', 1
insert @b select 4, 88, 'GH', 0
insert @b select 5, 99, 'IJ', 1
update a
set a.y = cast(b.i as varchar(10)) + b.j + cast(b.k as varchar(1))
from @a a join @b b
on a.x = b.x
select * from @a
|||
Thank you for the response. Unfortunately, this is not my situation. I have multiple related records in table B that relate back to single records in table A. I'll update your code example to reflect the problem I have.
Code Snippet
declare @a table (x int, y varchar(200))
declare @b table (x int, j char(2))
insert @a select 1, NULL
insert @a select 2, NULL
insert @b select 1, 'AB'
insert @b select 1, 'CD'
insert @b select 1, 'EF'
insert @b select 2, 'ZY'
insert @b select 2, 'RX'
--The following select will fail to return
--concatenated values in a.y because the
--UPDATE can't update the same row value
--twice in the same UPDATE statement.
update a
set a.y = ISNULL(a.y, '') + b.j
from @a a join @b b
on a.x = b.x
--column y only holds the first value for
--group 1 and group 2
select * from @a
|||
I am sure there are better ways than this code, but you can do something like this.
Code Snippet
declare @a table (x int, y varchar(200))
declare @b table (x int, j char(2))
declare @temp table (x int)
declare @x int
declare @y varchar(200)
insert @a select 1, NULL
insert @a select 2, NULL
insert @b select 1, 'AB'
insert @b select 1, 'CD'
insert @b select 1, 'EF'
insert @b select 2, 'ZY'
insert @b select 2, 'RX'
insert @temp
select distinct x from @a
while exists (select 1 from @temp)
begin
select top 1 @x = x from @temp
select @y = ISNULL(@y, '') + j from @b b where b.x = @x
select @y as 'the value'
update a
set y = ISNULL(y, '') + @y
from @a a
where a.x = @x
delete from @temp where x = @x
set @y = ''
end
select * from @a
|||
Here is an old post that should help. Basically, you create a udf to concatenate the values.
http://groups.google.com/group/microsoft.public.sqlserver.programming/browse_thread/thread/81da308504ac0e90
|||Is this just for display? Or are you going to be storing the values like this? Best policy would be to use the UI to display the rows as you want. For display, you can use the techniques on the following page:
http://databases.aspfaq.com/general/how-do-i-concatenate-strings-from-a-column-into-a-single-row.html
But if you are doing this to store the values like this, it is a bad idea. Having each value in a different row is the best policy always. Far easier in SQL to build up a value than it is tear it apart.
set nocount on
declare @a table (x int, y varchar(200))
declare @b table (x int, j char(2))
insert @a select 1, NULL
insert @a select 2, NULL
insert @b select 1, 'AB'
insert @b select 1, 'CD'
insert @b select 1, 'EF'
insert @b select 2, 'ZY'
insert @b select 2, 'RX'
update a
set y = LEFT(o.list, LEN(o.list)-1)
FROM @a a
CROSS APPLY
(
    SELECT
        CONVERT(VARCHAR(12), j) + ',' AS [text()]
    FROM
        @b s
    WHERE
        s.x = a.x
    ORDER BY
        x
    FOR XML PATH('')
) o (list)
select *
from @a
Returns:
x y
-- -
1 AB,CD,EF
2 ZY,RX
|||First, thanks to Sankar and oj for your assistance.
Louis, your solution is what I was looking for. I do need to store the value, but it is for a very specific reason. I am finishing a table and stored procedure solution to delivering an open ended number of report subscription records to an SSIS package to output between 1 and 76 reports to PDF with a potentially different set of input parameters and values per client, per report.
Using the code above, I will be able to populate a single table structure with all report subscription records regardless of input parameters. I will concatenate the parameters as you illustrated above as "&parm1=val1&parm2=val2" while the next record may have "&parm5=value5" only. It allows me to run all filtered reports based on scheduling values (i.e. daily, weekly, monthly) while accounting for the differences in reports and client requirements.
I have not spent enough time exploring the XML functionality in SQL Server, and it bit me this time.
Thanks again for the help.
Hugh
|||oj,
On second glance I used your UDF route. Made it easy to integrate into an existing sql insert without adding any more code to my stored procedure.
Thanks.
Multiple text updates to a single row
I have a table A that has related records in table B. I need to run an update to concatonate certian values in table B into a single value in table A.
Since an UPDATE can't update the same row twice, is there any way I can do this other than use a Cursor?
No need to use cursor. simple update should be enough, but you need to watch out for the data types though. You should be able to use somthing like this.
Code Snippet
declare @.a table (x int, y varchar(200))
declare @.b table (x int, i int, j char(2), k bit)
insert @.a select 1, NULL
insert @.a select 2, NULL
insert @.a select 3, NULL
insert @.a select 4, NULL
insert @.a select 5, NULL
insert @.b select 1, 55, 'AB', 0
insert @.b select 2, 66, 'CD', 1
insert @.b select 3, 77, 'EF', 1
insert @.b select 4, 88, 'GH', 0
insert @.b select 5, 99, 'IJ', 1
update a
set a.y = cast (b.i as varchar(10)) + b.j + cast (b.k as varchar(1))
from @.a a join @.b b
on a.x = b.x
select * from @.a
|||
Thank you for the response. Unfortunately, this is not my situation. I have multiple related records in table B that relate back to single records in table A. I'll update your code example to reflect the problem I have.
Code Snippet
declare @.a table (x int, y varchar(200))
declare @.b table (x int, j char(2))
insert @.a select 1, NULL
insert @.a select 2, NULL
insert @.b select 1, 'AB'
insert @.b select 1, 'CD'
insert @.b select 1, 'EF'
insert @.b select 2, 'ZY'
insert @.b select 2, 'RX'
--The Following select will fail to return
--concatonated values in a.y because the
--UPDATE can't update the same row value
--twice in the same UPDATE statement.
update a
set a.y = ISNULL(a.y,'') + b.j
from @.a a join @.b b
on a.x = b.x
--column y only holds the first value for
--group 1 and group 2
select * from @.a
|||
I am sure, there will be better ways than this code, you can do something like this.
Code Snippet
declare @.a table (x int, y varchar(200))
declare @.b table (x int, j char(2))
declare @.temp table (x int)
declare @.x int
declare @.y varchar(200)
insert @.a select 1, NULL
insert @.a select 2, NULL
insert @.b select 1, 'AB'
insert @.b select 1, 'CD'
insert @.b select 1, 'EF'
insert @.b select 2, 'ZY'
insert @.b select 2, 'RX'
insert @.temp
select distinct x from @.a
while exists (select 1 from @.temp)
begin
select top 1 @.x = x from @.temp
select @.y = ISNULL(@.y, '') + j from @.b b where b.x = @.x
select @.y as 'the value'
update a
set y = ISNULL(y, '') + @.y
from @.a a
where a.x = @.x
delete from @.temp where x = @.x
set @.y = ''
end
select * from @.a
|||
Here is an old post that should help. Basically, you create a udf to concatenate the values.
http://groups.google.com/group/microsoft.public.sqlserver.programming/browse_thread/thread/81da308504ac0e90
|||Is this just for display? Or are you going to be storing the values like this? Best policy would be to use the UI to display the rows as you want. For display, you can use the techniques on the following page:
http://databases.aspfaq.com/general/how-do-i-concatenate-strings-from-a-column-into-a-single-row.html
But if you are doing this to store the values like this, it is a bad idea. Having each value in a different row is the best policy, always. It is far easier in SQL to build up a value than it is to tear it apart.
set nocount on
declare @a table (x int, y varchar(200))
declare @b table (x int, j char(2))
insert @a select 1, NULL
insert @a select 2, NULL
insert @b select 1, 'AB'
insert @b select 1, 'CD'
insert @b select 1, 'EF'
insert @b select 2, 'ZY'
insert @b select 2, 'RX'
update a
set y = LEFT(o.list, LEN(o.list)-1)
FROM @a a
CROSS APPLY
(
    SELECT
        CONVERT(VARCHAR(12), j) + ',' AS [text()]
    FROM
        @b s
    WHERE
        s.x = a.x
    ORDER BY
        x
    FOR XML PATH('')
) o (list)
select *
from @a
Returns:
x  y
-- --------
1  AB,CD,EF
2  ZY,RX
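For readers on later versions: SQL Server 2017 and up can do the same concatenation without the FOR XML PATH trick via STRING_AGG. A sketch against the same table variables (note this was not available on the 2005-era servers discussed in this thread):

```sql
declare @a table (x int, y varchar(200))
declare @b table (x int, j char(2))
insert @a select 1, NULL
insert @a select 2, NULL
insert @b select 1, 'AB'
insert @b select 1, 'CD'
insert @b select 1, 'EF'
insert @b select 2, 'ZY'
insert @b select 2, 'RX'

-- STRING_AGG aggregates the strings per group;
-- WITHIN GROUP controls the order of the concatenated pieces.
update a
set a.y = o.list
from @a a
join (
    select x, STRING_AGG(j, ',') WITHIN GROUP (ORDER BY j) as list
    from @b
    group by x
) o on o.x = a.x

select * from @a
-- x  y
-- 1  AB,CD,EF
-- 2  ZY,RX
```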
|||First, thanks to Sankar and oj for your assistance.
Louis, your solution is what I was looking for. I do need to store the value, but it is for a very specific reason. I am finishing a table and stored procedure solution to delivering an open ended number of report subscription records to an SSIS package to output between 1 and 76 reports to PDF with a potentially different set of input parameters and values per client, per report.
Using the code above, I will be able to populate a single table structure with all report subscription records regardless of input parameters. I will concatenate the parameters as you illustrated above as "&parm1=val1&parm2=val2" while the next record may have "&parm5=value5" only. It allows me to run all filtered reports based on scheduling values (i.e. daily, weekly, monthly) while accounting for the differences in reports and client requirements.
I have not spent enough time exploring the XML functionality in SQL Server, and it bit me this time.
Thanks again for the help.
Hugh
|||oj,
On second glance I used your UDF route. Made it easy to integrate into an existing sql insert without adding any more code to my stored procedure.
Thanks.
Monday, March 12, 2012
Multiple tables or one large table
What are the performance considerations for deciding whether to have multiple
table sets or a single large table set with a key in each row.
Details:
SQL Server 2005
We have an application that uses 20 or so tables but will likely grow into
the small hundreds. We will potentially have hundreds of customers. Each
customer might add 200 to 10,000 rows per day to the database. We can:
A. Have a single set of tables where a customer key value defines each
row.
B. Have a set of identical tables for each customer
We have considered the "management" aspect and would use DDL triggers to
ensure that the tables and indexes stay identical. We're concerned with the
performance difference.
What effects will each option have on caching? Indexing? Overall speed of
data retrieval?"Sierra" <Sierra@.discussions.microsoft.com> wrote in message
news:EBB395BF-475F-4125-8E12-B4FD7D873DFE@.microsoft.com...
> What are the performance considerations for deciding whether to have
> multiple
> table sets or a single large table set with a key in each row.
> Details:
> SQL Server 2005
> We have an application that uses 20 or so tables but will likely grow into
> the small hundreds. We will potentially have hundreds of customers. Each
> customer might add 200 to 10,000 rows per day to the database. We can:
> A. Have a single set of tables where the a customer key value defines each
> row.
> B. Have a set of identical tables for each customer
> We have considered the "management" aspect and would use DDL trigers to
> ensure that the tables and indexes stay identical. We're concerned with
> the
> performance difference.
>
Assuming that
-Customer is the leading column in every primary key
-All queries have a Customer parameter
> What effects will each option have on caching? Indexing? Overall speed of
> data retrieval?
Minimal probably. And the overall size of the database doesn't sound too
big, so I wouldn't make any drastic decisions on the basis of performance.
I would definitely design the logical schema to support multiple customers
per database. Once you have that you can consolidate all customers into a
single database, break them into a few or even one per customer.
David|||I would not have a separate table, or tables, for each customer. Consider
maybe archiving old data at a certain point to limit the number of records.
Definitely normalize your tables, but don't go to extremes, because that can
backfire from a performance standpoint too.
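To make option A concrete, here is a minimal sketch of the single-set-of-tables design with the customer key leading every primary key, as David suggests (all table and column names here are made up for illustration):

```sql
CREATE TABLE dbo.Customer (
    CustomerID int NOT NULL PRIMARY KEY,
    Name       varchar(100) NOT NULL
);

-- Every shared table carries CustomerID as the leading PK column,
-- so per-customer queries seek directly into that customer's range.
CREATE TABLE dbo.CustomerOrder (
    CustomerID int NOT NULL REFERENCES dbo.Customer (CustomerID),
    OrderID    int NOT NULL,
    OrderDate  datetime NOT NULL,
    CONSTRAINT PK_CustomerOrder PRIMARY KEY (CustomerID, OrderID)
);

-- Queries then always filter on the customer key:
-- SELECT ... FROM dbo.CustomerOrder WHERE CustomerID = @CustomerID;
```

With this shape in place, splitting customers across databases later is a data-movement exercise rather than a schema change.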
Friday, March 9, 2012
multiple subscriptions in to same table
We want to publish a row filtered table where each subscriber can have
one or more of the row partitions. We'd prefer not to have separate
databases or tables for each partition at the subscriber. We've looked
at various scenarios: transactional repl with a DTS filter, merge
replication with dynamic join filters. Can't figure out clearly if
this would work.
Is it possible to have multiple subscriptions in to the same target
table on the subscriber, assuming the same table structure?
Seems like this could work with merge replication provided the initial
snapshot is loaded without deleting existing table structure or data
since the updates are based on rowguid which would keep things
separate.
I am a little confused about what you are trying to accomplish. You can
publish a table multiple times in the same publication or in different
publications.
If you publish a table multiple times in the same publication the
destination object name must be different. Does this answer your question?
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
|||On Feb 20, 7:00 pm, vbhackatt...@.hotmail.com wrote:
> We want to publish a row filtered table where each subscriber can have
> one or more of the row partitions.
> Seems like this could work with merge replication
Yes this can be done with merge replication. You need to enable the
"multiple subscribers can have data" option -- I forget what it is
called exactly -- when you create the publication.
And also you need to have filtering logic that would result in
overlapped data such as "where SalesPersonId=X or RegionId=Y"; each
SalesPerson gets their individual data and everybody that asks for
RegionId Y gets data for RegionId=Y.
|||I will try to restate my scenario to clarify: I would like for a
single subscriber (a site) to get 2 or more horizontal partitions of a
publication (2 or more offices' data) in to the same local subscriber
table, and be able to update them back to the publisher. I say "in to
the same local table" so the local windows application reads/writes a
single table and knows nothing about the replication configuration. I
do not want multiple tables or multiple databases one for each
horizontal partition. Reading the docs I figure replication could
possibly handle this:
- snapshots are partitioned so they can be loaded more than one in the
same table, use option not to drop existing schema
- updates with a single partition will work because insert/update/
delete only touches rows within the partition subscription
Might have to config manually since replication wizard might create
the agent jobs with overlapping names
One problem I see is if an update causes a row to disqualify for one
partition and requalify for the other partition, this would translate
in to a delete from one and insert in the other so the data gets
dropped at the subscriber and then re-downloaded. Not a big deal with
just a single table but eventually I'm extending this to multiple
tables some with lots of data in each partition.
Does that make my scenario clearer?
On Feb 21, 8:58 am, "Hilary Cotter" <hilary.cot...@.gmail.com> wrote:
> I am a little confused about what you are trying to accomplish. You can
> publish a table multiple times in the same publication or in different
> publications.
> If you publish a table multiple times in the same publication the
> destination object name must be different. Does this answer your question?
> --
> Hilary Cotter
> Looking for a SQL Server replication book?http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTShttp://www.indexserverfaq.com
> <vbhackatt...@.hotmail.com> wrote in message
> news:1172026819.916313.105530@.k78g2000cwa.googlegr oups.com...
>
|||Yes it does. The best way to do this would be to make the single subscriber
a publisher and the other current publishers subscribers.
In this case the single publisher could start off as empty and you could use
sp_addtabletocontents on the subscribers to push their data to the
publisher.
With filtering, DML on subscriber A would move to the publisher and not be
replicated to subscriber B. The partition realignments you speak of
(deleting a row on subscriber b and have it inserted on subscriber a) are
handled if you use join filters which extend the join to all child rows of a
modified parent row.
Yours is a complex scenario - have I understood it correctly?
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
Monday, February 20, 2012
multiple rows to one row and multiple columns
Hi,
I want to convert multiple rows to one row and multiple columns. I saw some examples with PIVOT but I could not get them to work.
Here's what I want to do:
This is how the table is:
EmpID Designation
I want it to display as:
EmpID Designation1 Designation2 Designation3
678 CFA
679 CFA
680 CFA
685 CFP CIMA IMCA
could anyone provide some help on this?
Thanks
Below is one way to do it:
Code Snippet
select pt.EmpID, pt.[1] as Designation1, pt.[2] as Designation2
    , pt.[3] as Designation3, pt.[4] as Designation4
from (
    select t.EmpID, t.Designation
        , ROW_NUMBER() OVER(PARTITION BY t.EmpID ORDER BY t.Designation) as rn
    from your_table as t
) as t1
pivot (max(Designation) for rn in ([1], [2], [3], [4])
    /* add max number of designations per emp */
) as pt
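For completeness, here is a self-contained version of that snippet with sample data matching the example above (the source table name in the original is assumed to be yours):

```sql
declare @emp table (EmpID int, Designation varchar(10))
insert @emp select 678, 'CFA'
insert @emp select 679, 'CFA'
insert @emp select 680, 'CFA'
insert @emp select 685, 'CFP'
insert @emp select 685, 'CIMA'
insert @emp select 685, 'IMCA'

-- Number each employee's designations, then pivot the row numbers
-- into fixed column positions.
select pt.EmpID, pt.[1] as Designation1, pt.[2] as Designation2, pt.[3] as Designation3
from (
    select t.EmpID, t.Designation,
           ROW_NUMBER() OVER (PARTITION BY t.EmpID ORDER BY t.Designation) as rn
    from @emp as t
) as t1
pivot (max(Designation) for rn in ([1], [2], [3])) as pt
-- EmpID  Designation1  Designation2  Designation3
-- 678    CFA           NULL          NULL
-- ...
-- 685    CFP           CIMA          IMCA
```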
Multiple rows of data into 1 row results?
I have a Data Table (ID, Name, Date, etc), Facility Table (ID, Name) and FacilityKey Table (Data ID and Facility ID). The Data table has a 1 to many relationship with the FacilityKey table.
I want to return the Data ID and each Facility Name associated with that Data ID to display in a DataGrid (preferably each Facility Name in the same cell on the grid too), but am stumped on how to get the query to work (or where to look in the SQL documentation for something like this).
Thanks!|||What you are trying to do is typically called a pivot table or cross tab query.
Take a look at this post:view post 350666. Someone has posted a suggestion to use a table control to accomplish this. See if anything there helps you. If not, post again :-)
Terri|||I think I may have to do it outside of SQL. I don't want to do any sort of aggregation of the data - just display it in 1 row instead of multiple and I don't see anywhere how to do the SQL cross tab without using an aggregate function of some sort.|||Well, you could write a UDF that would take the DataID as a parameter and would return a comma-delimited list of all of the facilities for that DataID. Like this:
CREATE FUNCTION getFacilities
(@DataID int)
RETURNS varchar(8000)
AS
BEGIN
    DECLARE @facilities VARCHAR(8000)
    SET @facilities = ''

    SELECT
        @facilities = @facilities + ', ' + FacilityTable.FacilityName
    FROM
        DataTable
        INNER JOIN
        FacilityKeyTable ON DataTable.DataID = FacilityKeyTable.KeyDataID
        INNER JOIN
        FacilityTable ON FacilityKeyTable.KeyFacilityID = FacilityTable.FacilityID
    WHERE
        DataTable.DataID = @DataID

    IF @facilities <> ''
    BEGIN
        SET @facilities = SUBSTRING(@facilities, 3, LEN(@facilities) - 2)
    END

    RETURN (@facilities)
END
You would use that function like this:
SELECT DataID, dbo.getFacilities(DataID) FROM DataTable
Terri|||Thank you! That is exactly what I was looking for!|||If efficiency is a priority for you, then this function is a poor solution, since under the hood you are running a separate correlated query for each row returned. A cross-tab query will be more efficient.
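Following up on the efficiency point: on SQL Server 2005+ the same comma-delimited list can be built set-based in one statement with FOR XML PATH, avoiding a scalar function call per row. A sketch using the table and column names from the question:

```sql
-- STUFF(..., 1, 2, '') trims the leading ', ' from each list.
SELECT d.DataID,
       STUFF((SELECT ', ' + f.FacilityName
              FROM FacilityKeyTable k
              JOIN FacilityTable f ON f.FacilityID = k.KeyFacilityID
              WHERE k.KeyDataID = d.DataID
              ORDER BY f.FacilityName
              FOR XML PATH('')), 1, 2, '') AS Facilities
FROM DataTable d;
-- Caveat: FOR XML entitizes characters like & and < in the names.
```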
Multiple rows into one row
I have two tables: one contains Plan information and another that
contains product information about the plan
ex:
Plan table
PlanID Plan_name
1 a
2 b
Product Table
ProductID PlanID Comments
1 1 com1
2 1 com2
3 1 com3
What I am looking to do if possible would be the following
Plan Product1 Comments1 Product2 Comments2
1 1 com1 2 com2
I am wondering down what path I should explore cause I am new to this.
I am using sql 2005
|||
Take a look at PIVOT in Books Online. Also Itzik Ben-Gan has articles
on PIVOT in the last two issues of SQL Server Magazine. www.sqlmag.com
--
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/...US,SQL.90).aspx
--|||Thanks I will explore that option...
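For the Plan/Product example above, one cross-tab sketch uses conditional aggregation (a close cousin of the PIVOT operator David points to, and one that handles the paired ProductID/Comments columns more easily); extend the CASE list for more products per plan:

```sql
declare @product table (ProductID int, PlanID int, Comments varchar(20))
insert @product select 1, 1, 'com1'
insert @product select 2, 1, 'com2'
insert @product select 3, 1, 'com3'

-- Number products within each plan, then spread positions 1..N
-- across fixed columns with MAX(CASE ...).
select p.PlanID,
       max(case when p.rn = 1 then p.ProductID end) as Product1,
       max(case when p.rn = 1 then p.Comments  end) as Comments1,
       max(case when p.rn = 2 then p.ProductID end) as Product2,
       max(case when p.rn = 2 then p.Comments  end) as Comments2
from (
    select PlanID, ProductID, Comments,
           ROW_NUMBER() OVER (PARTITION BY PlanID ORDER BY ProductID) as rn
    from @product
) p
group by p.PlanID
-- PlanID  Product1  Comments1  Product2  Comments2
-- 1       1         com1       2         com2
```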
Multiple rows into one query result row...
TABLE 1: A simple list of people...
ID USER
---
1 Mike
2 John
TABLE 2: Name/Value pairs of attributes linked to table 1 by ID...
ID NAME VALUE
------
1 Hair Brown
1 Eyes Blue
2 Weight 200
So you can see from this that Mike has brown hair and blue eyes, and that John weighs 200 lbs.
I want a query that selects a person and all their attributes (whatever they may be), and returns the results on one row like this (when run with a WHERE clause that selects user Mike).
USER HAIR EYES
------
Mike Brown Blue
And returns this when run with a WHERE clause that selects user John...
USER WEIGHT
-----
John 200
Any ideas? Thanks in advance!|||Wow! Talk about stuff that will give you nightmares!
-PatP|||This question has been asked and answered hundreds of times in all forums. Do some research!
Hint: Search for PIVOT TABLE, CROSSTAB QUERY, etc... :mad:|||I think this one is a twist on the usual cross tab... It looks like they want the query schema to change from row to row, which goes way beyond a cross-tab in my mind. It makes my head hurt just thinking about it. A cross-tab seems simple to code and use in comparison to this request.
-PatP|||Thanks, Pat. I'm being bludgeoned on a few boards over this. People keep saying cross-tab, which is not what I'm after here...
It does indeed make the head hurt...|||I would classify this as a candidate for a recursive query. Search through this forum for some good ideas.|||candidate for dynamic sql
step 1: select distinct name from table2 where id=n
if you don't know n, do a join and use WHERE table1.name = 'fred'
step 2: construct multiple LEFT OUTER JOIN query, from table1 to table2 as many times as there are attributes that fred has (from step1), aliasing each table2.value to the corresponding column name
step 3: execute the dynamic query
easy peasy|||I think that Rudy's suggestion is a good one, if you can allow all of the rows in a given result set to have the same schema. This is probably the closest answer possible to what you want using standard SQL tools. Supporting irregularly shaped result sets is possible using some tools, but not using standard recordset-oriented tools.
As Fibber used to say: "T'ain't pretty, McGee!"
-PatP
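The dynamic-SQL route Rudy outlines might be sketched like this, using the TABLE 1 / TABLE 2 names from the question (a rough illustration built on conditional aggregation rather than repeated self-joins; not production code):

```sql
DECLARE @sql nvarchar(4000), @user varchar(50);
SET @user = 'Mike';

-- Steps 1 & 2: find the attribute names this person has,
-- and build one output column per name.
SET @sql = N'SELECT t1.[USER]';
SELECT @sql = @sql
    + N', MAX(CASE WHEN t2.NAME = ''' + a.NAME
    + N''' THEN t2.VALUE END) AS [' + a.NAME + N']'
FROM (SELECT DISTINCT t2.NAME
      FROM table2 t2
      JOIN table1 t1 ON t1.ID = t2.ID
      WHERE t1.[USER] = @user) a;

SET @sql = @sql
    + N' FROM table1 t1 JOIN table2 t2 ON t1.ID = t2.ID'
    + N' WHERE t1.[USER] = @user GROUP BY t1.[USER]';

-- Step 3: execute, passing the user name as a parameter
EXEC sp_executesql @sql, N'@user varchar(50)', @user = @user;
```

Since the column list is computed per person, running it for Mike yields HAIR/EYES columns and running it for John yields a WEIGHT column, which is exactly the row-to-row schema change a static query cannot express.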
Multiple row insertions
Is this possible?
Quote:
Originally Posted by sajithamol
A normal insert statement allows us to insert one row at a time. But I would like to know if it is possible to insert many rows in a single SQL statement.I am NOT asking about the BCP or the Bulk insert option, because my data is not in any kind of file. My requirement is to insert 6 rows into a table at one shot.
Is this possible?
In what format is your data? You could probably write a procedure to insert more rows.
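On SQL Server 2005 the usual single-statement idiom for a handful of literal rows is INSERT ... SELECT with UNION ALL; SQL Server 2008 later added row constructors. A sketch with a made-up table:

```sql
CREATE TABLE dbo.Sample (ID int, Name varchar(20));

-- One statement, six rows (works on SQL Server 2005)
INSERT INTO dbo.Sample (ID, Name)
SELECT 1, 'a' UNION ALL
SELECT 2, 'b' UNION ALL
SELECT 3, 'c' UNION ALL
SELECT 4, 'd' UNION ALL
SELECT 5, 'e' UNION ALL
SELECT 6, 'f';

-- SQL Server 2008+ row constructor equivalent:
-- INSERT INTO dbo.Sample (ID, Name) VALUES (1,'a'), (2,'b'), (3,'c');
```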
Multiple row updates
I am working on a web app and am at a point where I have multiple rows in my GUI that need to be sent to and saved in SQL Server when the user presses Save. We want to pass the rows to a working table and the do a begin tran, move rows from working table to permanent tables with set processing, commit tran. The debate we are having is how to get the data to the work table. We can do individual inserts to the work table 1 round trip for each row (could be 100's of rows) or concatenate all rows into 1 long (up to 8K at a time) string and make one call sending the long string and then parse it into the work table in SQL Server. Trying to consider network usage and overhead by sending many short items vs 1 long item and cpu overhead for many inserts vs string manipulation and parsing. Suggestions?
Thank you
Jeff|||How about dumping to a text file, then using a stored procedure to import, then moving the files out of the way when complete?
It may even remove the need for a work table, because you will already have a log, and using DTS or BCP to import is pretty fast.
just a thought
GW|||Yeah I was thinking that's what I would do...
But what happens when, let's say you have 10 users trying to do this at the same time...
Your work table seems problematic
Just dump the file...and use a sproc to bcp it in...
Multiple Row Update/Insert
Hi,
I have a table that is primarily a linking table that contains values for categories
indid int Indicator ID
indtype int Indicator Type can be either 0 or 1
catflagnum int Category Flag Number the number of the Category
catflagvalue int Cat Flag Value The Value for that category
this table can then be updated from a web form.
The question I have is: can I do this in one statement, or do I have to do it one at a time?
i.e
The Data set could look something like
indid = 3 'This value will be the same for all rows
indtype = 0 'This will be the Same for all rows
Catflagnum = 1 'This value will change
Catflagvalue = 1 'This value will change
indid = 3 'This value will be the same for all rows
indtype = 0 'This will be the Same for all rows
Catflagnum = 2 'This value will change
Catflagvalue = 3 'This value will change
indid = 3 'This value will be the same for all rows
indtype = 0 'This will be the Same for all rows
Catflagnum = 3 'This value will change
Catflagvalue = 1 'This value will change
indid = 3 'This value will be the same for all rows
indtype = 0 'This will be the Same for all rows
Catflagnum = 4 'This value will change
Catflagvalue = 5 'This value will change
A further complication is that an entry may not be in that table for the category, so we would possibly have to check whether the record exists or do an insert.
I am stumped on this one?
IF EXISTS ( SELECT * FROM <table>)
BEGIN
-- Record already exists so you need to UPDATE
END
ELSE
BEGIN
-- You need to do an INSERT
END
Is there any particular reason you are trying to stick to one SQL statement? You can still get all this done in one trip to the database if that's what you are worried about.|||
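Once the incoming rows are staged in a work table, the exists-check can also be done set-based for the whole batch instead of row by row. A sketch with assumed table names (the column names come from the question):

```sql
BEGIN TRAN;

-- Update the category flags that already exist
UPDATE t
SET t.catflagvalue = w.catflagvalue
FROM dbo.CategoryFlags t
JOIN dbo.WorkTable w
  ON w.indid = t.indid
 AND w.indtype = t.indtype
 AND w.catflagnum = t.catflagnum;

-- Insert the ones that don't exist yet
INSERT dbo.CategoryFlags (indid, indtype, catflagnum, catflagvalue)
SELECT w.indid, w.indtype, w.catflagnum, w.catflagvalue
FROM dbo.WorkTable w
WHERE NOT EXISTS (SELECT 1 FROM dbo.CategoryFlags t
                  WHERE t.indid = w.indid
                    AND t.indtype = w.indtype
                    AND t.catflagnum = w.catflagnum);

COMMIT TRAN;
```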
I have that sort of T-SQL sorted out, that's not a problem.
Basically, what the requirement states for me to do is this:
There are a load of drop-down list boxes on a page that are basically choices the user can make, etc.
Then I need to update the record when a user clicks a submit button, so I need to iterate through all the controls and submit their values.
I was wondering if I could do it in one trip, i.e. pass all the parameters to a stored proc, and do it in one batch.
At the moment I am doing it in a loop that loops through the controls and submits their data one by one. I was wondering if I could do it in one trip as opposed to 12,
if that makes sense?
Yes. It actually depends on your database structure, but for the part you posted above I would think you could pass in all the parameters to a stored proc, write out all your logic there, and do the insert/update, all in one trip.|||
Try the link below for sp_executesql; it will enable you to run more than one statement. Hope this helps.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sp_ea-ez_2h7w.asp
Thanks for both of those solutions,
I did learn from the sp_executesql link; that will be a solution for another part of the same project, thanks for that.
As for the other solution of passing all my parameters to a stored proc, I don't like this idea, as it would mean I am passing 24 parameters to my stored proc, and the coding implications of doing that are not cool. There must be a more elegant solution than that.
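One 2005-era alternative to 24 scalar parameters is to pass all the rows as a single xml parameter and shred it server-side with nodes()/value(). A sketch (the procedure name and element names are made up):

```sql
CREATE PROCEDURE dbo.SaveCategoryFlags
    @indid int,
    @flags xml
AS
BEGIN
    -- @flags is expected to look like:
    -- <flags><f num="1" val="1"/><f num="2" val="3"/></flags>
    SELECT @indid AS indid,
           f.value('@num', 'int') AS catflagnum,
           f.value('@val', 'int') AS catflagvalue
    FROM @flags.nodes('/flags/f') AS x(f);
    -- ...then UPDATE/INSERT the target table from this result set
END
```

The web page builds one XML string from all the drop-down lists, so the whole save is a single round trip regardless of how many controls there are.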