Showing posts with label separate. Show all posts

Wednesday, March 28, 2012

Multi-step SSIS Job Hangs

I have two SSIS packages that I want to run in one SQL Agent job as two individual steps. The two packages run fine when they are in separate jobs. However, when I run the job containing both SSIS packages (under the same proxy), the first SSIS package starts, but hangs in the middle.

I then tried setting the DelayValidation flag to True as suggested for a similar issue in another thread from this forum. After changing the DelayValidation flag to True for all containers and tasks on the second SSIS package, the first SSIS package ran completely through successfully, but the job continued executing for hours and the second SSIS package never started. I finally killed the job.

Any ideas as to what the problem is here? I have logged to the event viewer and can see that the first package completes successfully. They run successfully in separate jobs, but I cannot get them to run together within the same job without hanging.

Any help is appreciated,

Paulette

What task is the package hanging on?

Do the packages share any external resources? Perhaps there is resource locking occurring.

-Jamie

|||

Thanks Jamie. Well, I know Step 1 (the first SSIS package) finished based on an event viewer message 'Package "ExtractPkg" finished Successfully.', but that was the last event to be logged. No event was logged stating that the second package started. It seems that the job is hanging while trying to start the second package, so I cannot see a task that either package is hanging on.

As far as shared resources... The packages do not share a "Data Source" in the solution, however they do have individual connections that point to the same SQL server and database.

Also, they both share audit and error handler "child" packages that they call from their event handlers. DelayValidation for the Audit and Error handler packages is currently set to false. Could the sharing of these audit and error handler packages be causing a problem? We planned on removing the audit and error handler packages from future versions since we've discovered the automatic logging provided in SSIS is sufficient for our needs.

- Paulette

|||

My first avenue of attack would be to disable/remove the event handlers to see if the problem goes away. That sounds like it could be the problem.

By the way, executing a package from each event handler is a huge overhead, as I have talked about here: http://blogs.conchango.com/jamiethomson/archive/2005/03/01/1100.aspx

-Jamie

|||

Thank you Jamie! When I removed the event handlers the job ran perfectly!

- Paulette

Friday, March 23, 2012

Multiple cursors help

Hello,
I am trying to create multiple cursors and update 2 separate tables. But the
data tables seem to be locking after I run them. Is my logic off? My code is
below -
set xact_abort on
declare @user_id int, @topic_name varchar(20), @key_seq_num int
declare GetUserID cursor for
select [user_id] from users where [user_id] not in (select [user_id] from
userdata)
open GetUserID
fetch next from GetUserID into @user_id
while @@fetch_status = 0
begin
begin tran
--new cursor
declare GetUserData cursor for
select topic_name from userdata where [user_id] = 327
open GetUserData
fetch next from GetUserData into @topic_name
while @@fetch_status = 0
begin
begin tran
--new cursor
declare GetNextID cursor for
select key_seq_num from nextkey where key_name = 'userdata_user_data_id'
open GetNextID
fetch next from GetNextID into @key_seq_num
while @@fetch_status = 0
begin
begin tran
insert into UserData(user_data_id, [user_id], topic_name, update_date,
update_user, create_date, create_user)
select @key_seq_num, @user_id, @topic_name, getdate(), 'Default',
getdate(), 'Default'
update nextkey set key_seq_num = @key_seq_num + 1
from nextkey where key_name = 'userdata_user_data_id'
commit tran
print @key_seq_num
fetch next from GetNextID into @key_seq_num
end
close GetNextID
deallocate GetNextID
fetch next from GetUserData into @topic_name
end
close GetUserData
deallocate GetUserData
fetch next from GetUserID into @user_id
end
close GetUserID
deallocate GetUserID

|||

Anil,
Can you explain what you are trying to accomplish?
AMB
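For reference, the three nested cursors in the original post can usually be collapsed into a single set-based statement. This is only a sketch: it reuses the table and column names from the post, assumes SQL Server 2005 or later for ROW_NUMBER(), and takes an UPDLOCK on the key row so concurrent callers cannot hand out the same key range.

```sql
declare @base int

begin tran

-- Lock and read the current key value once, instead of once per row.
select @base = key_seq_num
from nextkey with (updlock, holdlock)
where key_name = 'userdata_user_data_id'

-- One row per (user missing from userdata x topic of user 327),
-- numbered consecutively starting at @base.
insert into UserData(user_data_id, [user_id], topic_name, update_date,
    update_user, create_date, create_user)
select @base + row_number() over (order by u.[user_id], d.topic_name) - 1,
    u.[user_id], d.topic_name, getdate(), 'Default', getdate(), 'Default'
from users u
cross join (select topic_name from userdata where [user_id] = 327) d
where u.[user_id] not in (select [user_id] from userdata)

-- @@rowcount still holds the insert's row count at this point.
update nextkey
set key_seq_num = @base + @@rowcount
where key_name = 'userdata_user_data_id'

commit tran
```

Because the whole thing is one short transaction, the locks on nextkey are held for milliseconds rather than for the lifetime of three nested loops.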
"Anil" wrote:

> Hello,
> I am trying to create multiple cursors and update 2 separate tables. But t
he
> data tables seem to be locking after I run them. Is my logic off? My code
is
> below -
> set xact_abort on
> declare @.user_id int, @.topic_name varchar(20), @.key_seq_num int
> declare GetUserID cursor for
> select [user_id] from users where [user_id] not in (select [user_id] from
> userdata)
> open GetUserID
> fetch next from GetUserID into @.user_id
> while @.@.fetch_status = 0
> begin
> begin tran
> --new cursor
> declare GetUserData cursor for
> select topic_name from userdata where [user_id] = 327
> open GetUserData
> fetch next from GetUserData into @.topic_name
> while @.@.fetch_status = 0
> begin
> begin tran
> --new cursor
> declare GetNextID cursor for
> select key_seq_num from nextkey where key_name = 'userdata_user_data_id'
> open GetNextID
> fetch next from GetNextID into @.key_seq_num
> while @.@.fetch_status = 0
> begin
> begin tran
> insert into UserData(user_data_id, [user_id], topic_name, update_date,
> update_user, create_date, create_user)
> select @.key_seq_num, @.user_id, @.topic_name, getdate(), 'Default',
> getdate(), 'Default'
> update nextkey set key_seq_num = @.key_seq_num + 1
> from nextkey where key_name = 'userdata_user_data_id'
> commit tran
> print @.key_seq_num
> fetch next from GetNextID into @.key_seq_num
> end
> close GetNextID
> deallocate GetNextID
> fetch next from GetUserData into @.topic_name
> end
> close GetUserData
> deallocate GetUserData
> fetch next from GetUserID into @.user_id
> end
> close GetUserID
> deallocate GetUserID|||I am trying to take all the user id's from cursor 1, some value from cursor
2
and 3 and update them into a table. Cursor 3 has an id that needs to be
incremented each time as it is not an auto number. The way I have laid out m
y
cursors, is that the correct way?
Thanks.
Anil
"Alejandro Mesa" wrote:
> Anil,
> Can you explain what are you trying to accomplish?
>
> AMB
>
> "Anil" wrote:
>

|||

Anil,
Can you explain it without referencing the cursors?
AMB
"Anil" wrote:
> I am trying to take all the user id's from cursor 1, some value from curso
r 2
> and 3 and update them into a table. Cursor 3 has an id that needs to be
> incremented each time as it is not an auto number. The way I have laid out
my
> cursors, is that the correct way?
> Thanks.
> Anil
> "Alejandro Mesa" wrote:
>

|||

Please provide DDL and sample data.
http://www.aspfaq.com/etiquette.asp?id=5006
AMB
"Alejandro Mesa" wrote:
> Anil,
> Can you explain it without referencing the cursors?
>
> AMB
>
> "Anil" wrote:
>

|||

Is the syntax correct for multiple cursors? Is it possible to do what I am
trying to do with cursors?
"Alejandro Mesa" wrote:
> Anil,
> Can you explain what are you trying to accomplish?
>
> AMB
>
> "Anil" wrote:
>

|||

Is the syntax correct? Is it possible to do what I am trying to do?
"Alejandro Mesa" wrote:
> Anil,
> Can you explain what are you trying to accomplish?
>
> AMB
>
> "Anil" wrote:
>

|||

Hi Anil,
It looks like the tables you are accessing are getting locked, since you do
all your work within transactions.
You can do 2 things to see if the issue is happening because of a lock:
1. Remove all transactions (begin/commit tran) and see if your code actually
works.
2. If your code works after step 1, then use the WITH (NOLOCK) hint in your
select statements.
Regards,
Joe.
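The locking follows from how SQL Server nests transactions: COMMIT TRAN at an inner level only decrements @@TRANCOUNT, so the outer BEGIN TRANs opened in the posted loops keep their locks until the outermost level commits or rolls back. A minimal sketch, reusing the nextkey table from the post:

```sql
-- Demonstration: committing an inner (nested) transaction only decrements
-- @@TRANCOUNT; nothing is durable, and locks are kept, until the outermost
-- transaction commits.
begin tran              -- @@trancount = 1 (like the GetUserID loop)
begin tran              -- @@trancount = 2 (like the inner loops)

update nextkey
set key_seq_num = key_seq_num + 1
where key_name = 'userdata_user_data_id'

commit tran             -- @@trancount back to 1: the update is NOT yet
                        -- committed and its locks are still held

select @@trancount      -- returns 1; other sessions reading nextkey block

rollback tran           -- ends ALL levels and undoes the update
```

In the posted code the outer BEGIN TRANs are never matched by a COMMIT at all, so the locks persist after the script finishes.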
"Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in message
news:47D3D1AF-4E14-4C0B-80B9-66ADCCF91254@.microsoft.com...
> Please provide DDL and sample data.
> http://www.aspfaq.com/etiquette.asp?id=5006
>
> AMB
> "Alejandro Mesa" wrote:
>

|||

Hey Joe,
Thanks for the help. Worked like a charm!!
Anil
"Joe" wrote:

> Hi Anil,
> Looks like the tables which you are accessing is getting locked since you
> do all your stuff within a transaction.
> You can do 2 things to see if the issue is happening because of a lock
> 1. Remove all transaction (begin/commit tran) and see if your code
> actually works.
> 2. If your code works after step 1 then use WITH (NOLOCK) clause in your
> select statements.
>
> Regards,
> Joe.
>
> "Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in messag
e
> news:47D3D1AF-4E14-4C0B-80B9-66ADCCF91254@.microsoft.com...
>

Multiple-Instance Cluster

Situation: Two nodes, two nodes are active, two virtual
servers, two separate applications. One active and one
wait on each. Active on each fails over to wait on each
other server so that I should only lose one application at
a time.
Question: Two SQL instances on each node/virtual server?
Each SQL instance has its own IP address? The active and
wait have their own IP addresses?
Ok, so you want to run a 2 node cluster, with both nodes being active at the
same time. Two databases for two applications.
You will only need 1 instance of SQL on a node, unless the application
requires something different.
Each Node will have an IP addr, the cluster will have an addr, and each
instance (1 on each server) will have an IP addr.
With all that said, I never recommend an Active/Active cluster. SQL optimizes
memory. When you have a failure, both instances of SQL and their IP addresses
will then be on one box. This will change the way SQL can use and optimize
memory. This process takes time, and during that time the system is much
slower. Performance on most Active/Active SQL clusters suffers during a
failed state.
If at all possible, run both apps on the same node, at the same time. Making
sure of course, that each system has ample resources.
Hope this helps.
Cheers,
Rod
"Lucian" <anonymous@.discussions.microsoft.com> wrote in message
news:1d2b101c42329$d3d501c0$a401280a@.phx.gbl...
> Situation: Two nodes, two nodes are active, two virtual
> servers, two separate applications. One active and one
> wait on each. Active on each fails over to wait on each
> other server so that I should only lose one application at
> a time.
> Question: Two SQL instances on each node/virtual server?
> Each SQL instance has its own IP address? The active and
> wait have their own IP addresses?
|||

With SQL Server 2000 you can install up to 16 instances (1 default and 15
named, or 16 named). The number of nodes that can be a possible owner for
each instance depends on your edition/version of Windows. If you have a
typical 2-node cluster, each node can own 0,1, or many of the installed
instances.
Whether it makes sense to install multiple instances (something that used
to be called active/active when you could only have two instances) depends
totally on your environment - your hardware, your application, your
business needs, etc.
If you want to know more about clustering, check out BOL or the MSPress
book on High Availability.
Cindy Gross, MCDBA, MCSE
http://cindygross.tripod.com
This posting is provided "AS IS" with no warranties, and confers no rights.

Wednesday, March 21, 2012

multiple web site to access the same SQL Server database

Each SQL Server database should correspond to one IP address. I assume web
server and database server are in separate machines. Here's 2 cases:
case 1: A single web site have data access to the SQL Server database.
case 2: There are multiple web sites need to have data access to the same
SQL Server database.
Obviously, case 2 will create more network traffic than case 1. Then how
should we go about that? Use more than one SQL Server (i.e. different IP
addresses)? But what IP address should we use in the connection string in
web pages?
Please advise!
Thanks!

Hi,
If you want to distribute the work load among multiple SQL servers, I
believe you would have to use some form of clustering technology such as
Microsoft Cluster Service. Then you can use the IP address of the virtual
SQL Server. There are limitations to this solution so be sure to investigate
and make sure that it fits your requirement.
Hope this helps
Chris Taylor
"Matt" <mattloude@.hotmail.com> wrote in message
news:e92o1DZoDHA.1632@.TK2MSFTNGP10.phx.gbl...
> Each SQL Server database should correspond to one IP address. I assume web
> server and database server are in separate machines. Here's 2 cases:
> case 1: A single web site have data access to the SQL Server database.
> case 2: There are multiple web sites need to have data access to the same
> SQL Server database.
> Obviously, case 2 will create more network traffic than case 1. Then how
> should do we about that? Use more than one SQL Server (i.e. different IP
> address)? But what IP address should use in connection string in web
pages'
> Please advise!
> Thanks!
>
>

Monday, March 19, 2012

Multiple ToggleItems

I have a report that utilizes five separate tables. All of the tables
are filtered by the same datetimestamp, which appears in the detail
row. Right now, I have the ToggleItem for all of the tables set to
the same text box at the top of the report. I can also set the
ToggleItem for each of the tables to a cell contained in the same
table. Is there any way to specify multiple Toggleitem values, i.e.
the text box at the top of the report AND the cell in the table
itself?

No, this isn't possible.
The closest I think you can get is to have a report parameter which
determines the initial visibility state of the tables.
<Hidden>=Parameters!InitiallyHidden.Value</Hidden>
Then, instead of having the top level textbox be a toggle item, you make it a
drillthrough back to the same report with the InitiallyHidden parameter value
set to
=Not(Parameters!InitiallyHidden.Value)
--
This post is provided 'AS IS' with no warranties, and confers no rights. All
rights reserved. Some assembly required. Batteries not included. Your
mileage may vary. Objects in mirror may be closer than they appear. No user
serviceable parts inside. Opening cover voids warranty. Keep out of reach of
children under 3.
"utterman" <chris.utter@.gmail.com> wrote in message
news:806a4b3c.0407281257.46ecf520@.posting.google.com...
> I have a report that utilizes five separate tables. All of the tables
> are filtered by the same datetimestamp, which appears in the detail
> row. Right now, I have the ToggleItem for all of the tables set to
> the same text box at the top of the report. I can also set the
> ToggleItem for each of the tables to a cell contained in the same
> table. Is there any way to specify multiple Toggleitem values, i.e.
> the text box at the top of the report AND the cell in the table
> itself?

Monday, March 12, 2012

multiple tempdb filegroup

If I want to try the multiple tempdb file trick (have number of files equal
to number of processors), is it better if they are each a separate filegroup,
or is it better (or required!) if they are all members of a single filegroup?
Also, while the kb article recommends using fixed-size and equal-sized
files, it seems to me that is should not really matter, if the algorithm is
to try to use the largest file first, it will quickly become not the largest
and we will get pretty much the same round-robin as the much more limited
configurations would see.
Anyone like to comment on the performance improvements, if any, likely?
This is mostly sequential batch processing, but with fair millions of
records, and a lot more use of group-by, mostly on current 2 * dualcore
systems, and doesn't actually have a lot of tempdb contention that I've
noticed, though we'd like to see better
performance in any case.
Thanks.
Josh
If you do use multiple filegroups, you would be using a very unusual and
uncommon tempdb configuration. My suggestion is, forget about multiple
filegroups. What matters is the number of data files.
I have not done any performance tests myself on the impact of the number of
the tempdb data files, but I've done tests on the performance impact of the
number of data files in a user database, and there is significant performance
impact if there is significant allocation activity.
http://sqlblog.com/blogs/linchi_shea/archive/2007/01/29/how-many-data-files-should-i-create-for-a-user-database.aspx
Since tempdb by its very nature is allocation intensive, I would not
question the impact of the recommended configuration on the tempdb performance
(and therefore the SQL Server performance).
Linchi
"JRStern" wrote:

> If I want to try the multiple tempdb file trick (have number of files equal
> to number of processors), is it better if they are each a separate filegroup,
> or is it better (or required!) if they are all members of a single filegroup?
> Also, while the kb article recommends using fixed-size and equal-sized
> files, it seems to me that is should not really matter, if the algorithm is
> to try to use the largest file first, it will quickly become not the largest
> and we will get pretty much the same round-robin as the much more limited
> configurations would see.
>
> Anyone like to comment on the performance improvements, if any, likely?
> This is mostly sequential batch processing, but with fair millions of
> records, and a lot more use of group-by, mostly on current 2 * dualcore
> systems, and doesn't actually have a lot of tempdb contention that I've
> noticed, though we'd like to see better
> performance in any case.
> Thanks.
> Josh
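For a 2 x dual-core box, the usual setup is four equally sized data files, all in the one filegroup tempdb has (tempdb only supports the PRIMARY filegroup, so the filegroup question answers itself). A sketch, with hypothetical file names, paths and sizes:

```sql
-- Fix the size of the existing tempdb data file and add three more equal
-- files; filegrowth = 0 disables autogrow so the files stay equal-sized.
alter database tempdb
modify file (name = tempdev, size = 1024MB, filegrowth = 0)

alter database tempdb
add file (name = tempdev2, filename = 'T:\tempdb2.ndf',
          size = 1024MB, filegrowth = 0)

alter database tempdb
add file (name = tempdev3, filename = 'T:\tempdb3.ndf',
          size = 1024MB, filegrowth = 0)

alter database tempdb
add file (name = tempdev4, filename = 'T:\tempdb4.ndf',
          size = 1024MB, filegrowth = 0)
```

Equal fixed sizes matter because SQL Server's proportional-fill algorithm spreads allocations evenly only when the files have the same amount of free space.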

Friday, March 9, 2012

Multiple sqlservr.exe processes load

Just installed SBS 2003, and I notice that there are 3
separate processes 'sqlservr.exe' loaded. Is this
normal? In addition to using SQL Server, I'm using Backup
Exec, which uses the same instance of SQL Server 2000.
Reason I ask this is that I'm having trouble making
Enterprise manager connections from workstations, so I'm
looking for the cause of the problems.

It sounds like you have multiple instances of SQL Server running. Open
Control Panel - Administrative Tools - Services and see how many services
you have running that start with MSSQL. This will tell you how many
instances of SQL Server are running.
Rand
This posting is provided "as is" with no warranties and confers no rights.
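To confirm which instance a given connection has actually reached, a quick check (a sketch; SERVERPROPERTY is available from SQL Server 2000 on):

```sql
-- Run from each Enterprise Manager / Query Analyzer connection to see
-- which installed instance answered.
select serverproperty('MachineName')    as machine,
       serverproperty('InstanceName')   as instance,  -- NULL = default instance
       serverproperty('ProductVersion') as version
```

On SBS 2003 the extra sqlservr.exe processes are typically the named instances that Backup Exec and the SBS monitoring tools install, each showing up as its own MSSQL$name service.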