Showing posts with label key. Show all posts

Friday, March 23, 2012

Multipurpose Foreign Key

I've got a field [user.unitID] that can be a foreign key on the table
[Base] *OR* the table [Command] depending on the value of [User.role]
(see "Structure," below).

Question 1: Is this poor DB design?
Question 2: If this is okay DB design (or, if I can't change this DB),
how do I perform a join?

I started writing a sproc (see "Beginnings of sproc," below), which
will definitely work, once I get it set up, but when I do these sorts
of things, I later come to find that there was a straight SQL way to
do it. So, is there a straight SQL way to do this, building joins with
CASEs, or something like that?

Thanks,
Jamie

## Structure (simplified) ##

[USER]
userID
unitID
role -- values can be 'B' or 'C' referring to [base] or [command] tbl

[BASE]
ID
NAME

[COMMAND]
ID
NAME

## Beginnings of a sproc ##

GO
USE myDB
GO
IF EXISTS (SELECT name FROM sysobjects
           WHERE name = 'getSessionInfo' AND type = 'P')
    DROP PROCEDURE getSessionInfo
GO
CREATE PROCEDURE getSessionInfo
    @userID varchar(50),
    @password varchar(50)
AS

DECLARE @myUnitType varchar(2);

SELECT @myUnitType = unitType
FROM [user]
WHERE userID = @userID
  AND [password] = @password
... blah blah blah

|||

Jamie Jackson (wasteNOSPAMbasket@bigfoot.com) writes:
> I've got a field [user.unitID] that can be a foreign key on the table
> [Base] *OR* the table [Command] depending on the value of [User.role]
> (see "Structure," below).
> Question 1: Is this poor DB design?

It is certainly not the plain standard design. And the method has the
apparent disadvantage that you cannot use a FOREIGN KEY constraint to
enforce the integrity; you need to rely on triggers instead.

But since I know very little of your business problem, I am hesitant
to label the design as outright bad, or even poor. What I can say, is
that had I had the problem, I would definitely have looked into a solution
that would have permitted me to use DRI, but that would definitely have
been a case of fitting the solution to the tool.

> Question2: If this is okay DB design (or, if I can't change this DB),
> how do I perform a join?

Depends a little on the output, but say you want user and name of
base or command:

SELECT u.name, u.role, rolename = coalesce(b.name, c.name)
FROM users u
LEFT JOIN base b ON u.role = 'B'
AND u.unitid = b.unitid
LEFT JOIN command c ON u.role = 'C'
AND u.unitid = c.unitid

An alternative is to introduce a basecommand table, to gather common
information, but I don't know enough about your business problem to
say whether this is a good idea or not.
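That alternative might be sketched roughly like this (all table and column names here are assumptions for illustration, not the poster's actual schema): a supertype table that base and command both key into, so user.unitid can carry an ordinary declarative foreign key:

```sql
-- Hypothetical supertype table gathering what base and command share.
CREATE TABLE basecommand (
    unitid   int         NOT NULL PRIMARY KEY,
    unittype char(1)     NOT NULL CHECK (unittype IN ('B', 'C')),
    name     varchar(50) NOT NULL
);

-- Subtype tables share the supertype's key.
CREATE TABLE base (
    unitid int NOT NULL PRIMARY KEY
               REFERENCES basecommand (unitid)
);

CREATE TABLE command (
    unitid int NOT NULL PRIMARY KEY
               REFERENCES basecommand (unitid)
);

-- users.unitid can now get plain declarative integrity:
-- ALTER TABLE users ADD FOREIGN KEY (unitid) REFERENCES basecommand (unitid);
```

One known weakness of this sketch is that it does not by itself stop a basecommand row typed 'B' from appearing in command; enforcing that tightly takes extra constraints or triggers.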

--
Erland Sommarskog, SQL Server MVP, sommar@algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||

Jamie Jackson wrote:
> I've got a field [user.unitID] that can be a foreign key on the table
> [Base] *OR* the table [Command] depending on the value of [User.role]
> (see "Structure," below).
> Question 1: Is this poor DB design?
<snip>
Yes. A table represents something. In your case a User, a Base or a
Command. In your design you specify whether a User can have a relation
with a Base or not. If it can have a relation, then this means there is
a foreign key column (or a relation table) for this specific relation.
For Command you make the same decision.

So in your case, User should have one column with a foreign key
constraint to Base, and another column with a foreign key constraint to
Command. You make the relations optional by allowing NULL values in the
foreign key columns.
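Under the simplified structure from the original post, that design might be sketched like this (column names are illustrative, and the CHECK constraint is an optional extra that forces exactly one of the two relations to be set):

```sql
-- Two optional relations, each with its own declarative FK constraint.
CREATE TABLE [user] (
    userID    int NOT NULL PRIMARY KEY,
    baseID    int NULL REFERENCES base (ID),
    commandID int NULL REFERENCES command (ID),
    -- Optional: require exactly one of the two keys to be populated.
    CHECK ( (baseID IS NULL AND commandID IS NOT NULL)
         OR (baseID IS NOT NULL AND commandID IS NULL) )
);
```

The role column then becomes redundant, since which foreign key is non-NULL already says whether the user belongs to a base or a command.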

If this is an Object Oriented design (for example, when Command extends
from Base), then you are in trouble, because OO and RDBMS don't map very
well.

Hope this helps,
Gert-Jan

|||

Thanks, Erland, that SQL was a nice tutorial on filtered joins and the
coalesce function, neither of which I've ever used.

Thanks,
Jamie

On Wed, 16 Jul 2003 21:54:51 +0000 (UTC), Erland Sommarskog
<sommar@algonet.se> wrote:

<snip>

Multiple Selections in a Parameter field

I have a report that I would like to allow the user to use the Ctrl key to select multiple cost centers in a report parameter field. Does anyone know how this would be done?
From http://www.developmentnow.com/g/115_2004_11_0_4_0/sql-server-reporting-services.ht
Posted via DevelopmentNow.com Group
http://www.developmentnow.com

|||

If you are using 2005, why do you need the Ctrl key for selection? Instead, you
can use the multi-select checkboxes to pick multiple values.
Amarnath
"Duvon Harper" wrote:
> I have a report that I would like to allow the user to use the Ctrl key to select mutiple cost centers in a report parameter field. Does anyone know how this would be done?
> From http://www.developmentnow.com/g/115_2004_11_0_4_0/sql-server-reporting-services.htm
> Posted via DevelopmentNow.com Groups
> http://www.developmentnow.com
>

Monday, March 19, 2012

multiple updates to table

Hello all,

I have a parallel process updating a single table from various sources using update table statements using a key column.

I'm afraid the process will fail when an update will occur to a record with the same key simultaneously.

Does anyone have a suggestion how to accomplish this? Is there a way timing the updates in queue?

Thanks.

There is no concurrent update in SQL Server; DML statements are queued and executed one by one (if they are not in a transaction). There are certainly scenarios which would lead to concurrency conflicts, but those have to be handled by your frontend application.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||

I assume you have multiple clients pumping data into a single table, right?

The next question is what you mean by key? Like an identity key? If so, that won't happen. SQL Server single threads identity key generation so that no two rows will get the same key.
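For the update statements themselves, a minimal sketch (table, column, and parameter names are hypothetical) of why simultaneous updates to the same key do not corrupt each other:

```sql
-- Hypothetical statement. If two sessions run this UPDATE for the
-- same KeyCol value at the same time, row-level locks serialize them:
-- the second session blocks until the first one's transaction commits,
-- so neither update is lost -- it is simply queued, as described above.
UPDATE dbo.Staging
SET    Amount = Amount + @delta
WHERE  KeyCol = @key;
```

Where the danger does arise is read-then-write patterns (SELECT a value, compute in the client, UPDATE it back); those need a transaction or an optimistic concurrency check in the application.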

Multiple updates and Identity fields

I have a table used by multiple applications. One column is an Identity field and is also used as a primary key. What is/are the best practices to get the identity value returned after an INSERT made by my code? I'm worried that if someone does an INSERT into the same table a "zillionth" of a second later than I did, I could get their identity value.

TIA,

Barkingdog

Are you using SQL2k5? Then you can use the new OUTPUT clause. Otherwise, you should have a look at SCOPE_IDENTITY(), which should fit your needs. But the new OUTPUT clause should be more straightforward in your case.
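A minimal SCOPE_IDENTITY() sketch, with hypothetical table and column names:

```sql
-- Hypothetical table with an identity primary key.
INSERT dbo.Orders (CustomerName) VALUES (N'Contoso');

-- SCOPE_IDENTITY() returns the identity value generated by the last
-- INSERT in the current scope, so a concurrent INSERT from another
-- session cannot leak into the result (unlike @@IDENTITY, which can be
-- changed by triggers, or IDENT_CURRENT, which sees other sessions).
SELECT SCOPE_IDENTITY() AS NewOrderID;
```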

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de

|||

Jens,

I looked up OUTPUT in sql 2k5 BOL: Here's an example that I found:

>>>
USE AdventureWorks;
GO
DECLARE @MyTableVar table( ScrapReasonID smallint,
    Name varchar(50),
    ModifiedDate datetime);
INSERT Production.ScrapReason
    OUTPUT INSERTED.ScrapReasonID, INSERTED.Name, INSERTED.ModifiedDate
        INTO @MyTableVar
VALUES (N'Operator error', GETDATE());

--1. Display the result set of the table variable.
SELECT ScrapReasonID, Name, ModifiedDate FROM @MyTableVar;

--2. Display the result set of the table.
SELECT ScrapReasonID, Name, ModifiedDate
FROM Production.ScrapReason;
GO
>>>

So with OUTPUT, the ScrapReasonID from @MyTableVar will hold the identity of the row just inserted. Is this correct?


What is the difference in output between queries 1 and 2? Shouldn't they be the same?

TIA,

Barkingdog

Monday, March 12, 2012

Multiple tables or one large table

What are the performance considerations for deciding whether to have multiple
table sets or a single large table set with a key in each row.
Details:
SQL Server 2005
We have an application that uses 20 or so tables but will likely grow into
the small hundreds. We will potentially have hundreds of customers. Each
customer might add 200 to 10,000 rows per day to the database. We can:
A. Have a single set of tables where a customer key value defines each
row.
B. Have a set of identical tables for each customer
We have considered the "management" aspect and would use DDL triggers to
ensure that the tables and indexes stay identical. We're concerned with the
performance difference.
What effects will each option have on caching? Indexing? Overall speed of
data retrieval?
"Sierra" <Sierra@discussions.microsoft.com> wrote in message
news:EBB395BF-475F-4125-8E12-B4FD7D873DFE@microsoft.com...
> What are the performance considerations for deciding whether to have
> multiple
> table sets or a single large table set with a key in each row.
> Details:
> SQL Server 2005
> We have an application that uses 20 or so tables but will likely grow into
> the small hundreds. We will potentially have hundreds of customers. Each
> customer might add 200 to 10,000 rows per day to the database. We can:
> A. Have a single set of tables where the a customer key value defines each
> row.
> B. Have a set of identical tables for each customer
> We have considered the "management" aspect and would use DDL trigers to
> ensure that the tables and indexes stay identical. We're concerned with
> the
> performance difference.
>
Assuming that
-Customer is the leading column in every primary key
-All queries have a Customer parameter

> What effects will each option have on caching? Indexing? Overall speed of
> data retrieval?
Minimal probably. And the overall size of the database doesn't sound too
big, so I wouldn't make any drastic decisions on the basis of performance.
I would definitely design the logical schema to support multiple customers
per database. Once you have that you can consolidate all customers into a
single database, break them into a few or even one per customer.
David
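A minimal sketch of option A under David's assumptions, with the customer key leading the primary key (table and column names here are illustrative):

```sql
-- One shared table set; every row carries the customer key, and the
-- customer key leads the primary key so per-customer queries seek
-- directly into that customer's range of the clustered index.
CREATE TABLE dbo.Orders (
    CustomerID int      NOT NULL,
    OrderID    int      NOT NULL,
    OrderDate  datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (CustomerID, OrderID)
);

-- Every query then takes a customer parameter, e.g.:
-- SELECT OrderID, OrderDate
-- FROM dbo.Orders
-- WHERE CustomerID = @CustomerID;
```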
|||

I would not have a separate table, or tables, for each customer. Consider
maybe archiving old data at a certain point to limit the number of records.
Definitely normalize your tables, but don't go to extremes, because that can
backfire from a performance standpoint too.
"Sierra" <Sierra@discussions.microsoft.com> wrote in message
news:EBB395BF-475F-4125-8E12-B4FD7D873DFE@.microsoft.com...
> What are the performance considerations for deciding whether to have
multiple
> table sets or a single large table set with a key in each row.
> Details:
> SQL Server 2005
> We have an application that uses 20 or so tables but will likely grow into
> the small hundreds. We will potentially have hundreds of customers. Each
> customer might add 200 to 10,000 rows per day to the database. We can:
> A. Have a single set of tables where the a customer key value defines each
> row.
> B. Have a set of identical tables for each customer
> We have considered the "management" aspect and would use DDL trigers to
> ensure that the tables and indexes stay identical. We're concerned with
the
> performance difference.
> What effects will each option have on caching? Indexing? Overall speed of
> data retrieval?


Friday, March 9, 2012

multiple table entry in a stored procedure

Is there a way, when writing a stored procedure, to enter the information into multiple tables? Say, put the primary key into another table to link the relationship? Or should I just pull it out and then put it in?

Yes it is possible.

After the first insert, call SCOPE_IDENTITY().

This will retrieve the ID number from the newly inserted row. This can be stored in a variable and entered into a second insert :)
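Put together, that pattern might look like this sketch (the tables, columns, and procedure name are assumptions for illustration):

```sql
-- Hypothetical tables: Customer has an identity key; Address references it.
CREATE PROCEDURE dbo.AddCustomerWithAddress
    @Name   varchar(50),
    @Street varchar(100)
AS
BEGIN
    DECLARE @CustomerID int;

    -- First insert generates the identity value.
    INSERT dbo.Customer (Name) VALUES (@Name);

    -- SCOPE_IDENTITY() is limited to this scope, so another session's
    -- concurrent insert cannot sneak into the result.
    SET @CustomerID = SCOPE_IDENTITY();

    -- Second insert uses the captured key to link the relationship.
    INSERT dbo.Address (CustomerID, Street)
    VALUES (@CustomerID, @Street);
END
```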

Monday, February 20, 2012

Multiple Search

Does anyone know how to search for more than one key word
at a time in MS Access 2000?
Example: Cars or Trucks (in a Memo field)

|||

Please post this to an MS Access newsgroup.
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
What hardware is your SQL Server running on?
http://vyaskn.tripod.com/poll.htm
"Mike" <mike@countrymedic.com> wrote in message
news:037e01c33f3a$6e3695f0$a101280a@phx.gbl...
Does anyone know how to search for more than one key word
at a time in MS Access 2000?
Example: Cars or Trucks (in a Memo field)